We believe "earables" are the next significant milestone in wearable computing. With sensing, processing, and communication converging into these devices, we envision a host of new possibilities within the next five years. The leap from today's earphones to earables could mimic the transformation from basic phones to smartphones. Just as today's smartphones are hardly calling devices, tomorrow's earables will hardly be wireless speakers or microphones. See our vision slides.


Algorithms & Learning Problems

  • Under-determined blind source separation (UBSS)
  • Learning multi-modal translators for speech de-noising
  • Near-far mapping of head-related transfer functions (HRTFs)
  • Blind channel interpolation for personal sound zones (PSZ)
  • Derivative-free optimization for human feedback functions
  • Sensor fusion for motion tracking
  • Geometric subspaces for direction-of-arrival (DoA) estimation
  • Interference alignment via angular aliasing
  • Self-supervised region-wise source extraction

Systems & Applications

  • Immersive acoustic augmented reality (AAR)
  • Indoor localization and navigation
  • Head, face, and mouth activity sensing
  • Vibratory communication
  • Inaudible acoustics
  • Health and vital sign sensing from glasses
  • Reading in the acoustic domain
  • Continuous multi-modal security
  • Audio-visual systems

Platforms & Experiments


Invited Seminars & Talks

Invited seminar at 2020 NUS Computer Science Week, Singapore
Keynote at IASA, 2022
Workshop talk on Earable Computing, ACM HotMobile, 2021
Conference talk on AoA Factorization, ICASSP 2021
Conference talk on Ear-AR, ACM MobiCom 2020
Conference talk on EarSense, ACM MobiCom 2020
Conference talk on UNIQ, ACM SIGCOMM 2020
Conference talk on VoLoc, ACM MobiCom 2020
Conference talk on MUTE, ACM SIGCOMM 2018
Invited seminar in the Charles Babbage Seminar series (Cambridge Univ.)
Keynote at EarComp workshop 2019
Keynote at MobiUK workshop (Oxford Univ.)


Publications

ICRA, 2023

RoSS: Rotation-induced Aliasing for Audio Source Separation

RoSS exploits the intrinsic angular/delay aliasing phenomenon in microphone arrays to solve under-determined source separation and AoA estimation problems via rotational motion.
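As a toy illustration (not RoSS's actual algorithm), the sketch below shows the kind of aliasing that rotation can break: with a mic pair spaced wider than half a wavelength, two different AoAs produce identical wrapped inter-mic phases, but rotating the array by 30° makes them distinguishable. All parameters are illustrative assumptions.

```python
import numpy as np

# With mic spacing d > lambda/2, the inter-mic phase wraps, so two
# different AoAs can be indistinguishable from one orientation.
# Rotating the array breaks the tie. Parameters are illustrative.
c, f, d = 343.0, 4000.0, 0.10     # sound speed (m/s), tone (Hz), spacing (m)
lam = c / f                       # wavelength ~0.086 m < 2d -> aliasing

def wrapped_phase(theta_deg, rot_deg=0.0):
    """Wrapped inter-mic phase for a far-field source at theta_deg,
    with the 2-mic array rotated by rot_deg."""
    psi = 2*np.pi*(d/lam)*np.sin(np.deg2rad(theta_deg - rot_deg))
    return (psi + np.pi) % (2*np.pi) - np.pi

theta1 = 20.0
# Aliased twin: its sine differs from sin(theta1) by exactly lambda/d,
# i.e., by one full 2*pi of inter-mic phase.
theta2 = float(np.rad2deg(np.arcsin(np.sin(np.deg2rad(theta1)) - lam/d)))

same = abs(wrapped_phase(theta1) - wrapped_phase(theta2))            # ~0
apart = abs(wrapped_phase(theta1, 30.0) - wrapped_phase(theta2, 30.0))
print(theta2, same, apart)
```

Here the ambiguous pair (20° and roughly -31°) collapses to the same measurement at one orientation; any rotation that changes their relative sines separates them.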
ICML, 2022

Learning to Separate Voices by Spatial Regions

This paper explores a self-supervised technique for binaural source separation that relaxes the constraint on the maximum number of sources.
ACM HotMobile, 2021

Earable Computing: A New Area to Think About

This position paper argues that earphones hold the potential to disrupt mobile, wearable computing.
arXiv, 2021

Estimating Multiple Angles of Arrival in a Steering Vector Space

This paper estimates the AoA of multiple uncorrelated and correlated signals (echoes) by analyzing them in a steering-vector subspace.
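For context, here is a minimal numpy sketch of classical subspace AoA estimation (MUSIC) for the uncorrelated case; the paper's contribution lies in handling correlated echoes, which plain MUSIC does not. Array geometry and signal parameters below are illustrative assumptions.

```python
import numpy as np

# Classical narrowband MUSIC on an 8-mic uniform linear array (ULA):
# project candidate steering vectors onto the noise subspace and look
# for nulls. All parameters here are illustrative.
M, d, K = 8, 0.5, 2              # mics, spacing in wavelengths, sources
true_aoas = [-20.0, 40.0]        # ground-truth angles (degrees)

def steer(theta_deg):
    """ULA steering vector for a far-field source at theta_deg."""
    return np.exp(-2j*np.pi*d*np.arange(M)*np.sin(np.deg2rad(theta_deg)))

rng = np.random.default_rng(0)
T = 400                                           # snapshots
A = np.stack([steer(t) for t in true_aoas], axis=1)
S = rng.standard_normal((K, T)) + 1j*rng.standard_normal((K, T))
N = 0.1*(rng.standard_normal((M, T)) + 1j*rng.standard_normal((M, T)))
X = A @ S + N                                     # received snapshots

R = X @ X.conj().T / T                            # sample covariance
w, V = np.linalg.eigh(R)                          # eigenvalues ascending
En = V[:, :M-K]                                   # noise subspace

grid = np.arange(-90.0, 90.5, 0.5)
spec = np.array([1.0/np.real(np.linalg.norm(En.conj().T @ steer(g))**2)
                 for g in grid])

# The K largest local maxima of the MUSIC spectrum are the AoA estimates.
peaks = [i for i in range(1, grid.size-1)
         if spec[i] > spec[i-1] and spec[i] > spec[i+1]]
top = sorted(sorted(peaks, key=lambda i: spec[i])[-K:])
est = [float(grid[i]) for i in top]
print(est)
```

When the sources become correlated (e.g., a signal and its echo), the sample covariance loses rank and this spectrum degrades, which motivates the paper's steering-vector-space treatment.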

ACM SIGCOMM, 2020

Personalizing Head Related Transfer Functions for Earables

UNIQ enables better spatial acoustics on earables by estimating personalized HRTF using off-the-shelf mobile devices.

ICASSP, 2021

Angle-of-Arrival (AoA) Factorization in Multipath Channels

This paper aims to estimate K angles of arrival (AoA) using an array of M > K microphones, for unknown and correlated source signals.

ACM MobiCom, 2020

Ear-AR: Indoor Acoustic Augmented Reality on Earphones

Ear-AR enables indoor localization and acoustic AR via sensor fusion between ear IMU, phone IMU, and acoustics.

ACM MobiCom, 2020

EarSense: Earphones as a Teeth Activity Sensor

EarSense uses today's earphone speaker and microphone to sense and localize teeth gestures, with applications in health monitoring and HCI.

ACM MobiCom, 2020

Voice Localization Using Nearby Wall Reflections

VoLoc shows the feasibility of inferring user location from voice commands, useful for voice assistants like Amazon Alexa and Google Home.
ACM EarComp (Workshop with UbiComp'19), 2019

STEAR: Robust Step Counting from Earables

STEAR discusses why an earphone IMU serves as a much better sensor than a phone or watch IMU for motion-tracking applications.
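As a toy baseline (not STEAR's algorithm), step counting from an IMU often reduces to threshold-crossing detection on the acceleration magnitude. The sketch below counts upward crossings on a synthetic 2 Hz "walking" trace; all constants are illustrative assumptions.

```python
import math

# Synthetic accelerometer magnitude: gravity plus a 2 Hz "walking"
# oscillation (2 steps/second) sampled at 50 Hz for 10 seconds.
fs, dur, step_rate = 50, 10.0, 2.0
n = int(fs * dur)
accel = [9.8 + 2.0 * math.sin(2 * math.pi * step_rate * i / fs)
         for i in range(n)]

def count_steps(a, fs, thresh=1.0, refractory=0.25):
    """Count upward threshold crossings of the gravity-removed signal,
    ignoring crossings closer than `refractory` seconds apart."""
    g = sum(a) / len(a)                 # crude gravity estimate
    min_gap = int(refractory * fs)
    steps, last = 0, -min_gap
    for i in range(1, len(a)):
        if a[i-1] - g <= thresh < a[i] - g and i - last >= min_gap:
            steps, last = steps + 1, i
    return steps

steps = count_steps(accel, fs)
print(steps)    # -> 20 (2 steps/s for 10 s)
```

On real data the hard part is what STEAR addresses: phone and watch IMUs pick up hand and pocket motion, while the head-mounted earphone IMU sees a cleaner periodic bounce.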

ACM SIGCOMM, 2018

MUTE: Bringing IoT to Noise Cancellation

MUTE exploits the velocity gap between RF and sound to improve active noise cancellation.
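The velocity-gap idea can be sketched in a few lines: since RF travels roughly a million times faster than sound, an IoT relay near the noise source can forward the waveform to the earphone before the sound itself arrives, giving the anti-noise generator lookahead equal to the acoustic propagation delay. The numbers below are an idealized assumption, not from the paper.

```python
import numpy as np

# Idealized sketch: an RF relay near the noise source delivers the
# waveform to the earphone ahead of the sound, giving the canceller
# D samples of lookahead. Numbers are illustrative.
fs, c, dist = 16000, 343.0, 1.0       # sample rate, sound speed, 1 m away
D = round(dist / c * fs)              # acoustic delay (~47 samples)
L = 8                                 # reaction latency of a sound-only ANC

rng = np.random.default_rng(0)
x = rng.standard_normal(fs)           # 1 s of broadband noise at the source
at_ear = np.concatenate([np.zeros(D), x])

# A sound-only ANC first hears the noise, then reacts L samples too late.
late = np.concatenate([np.zeros(D + L), x])[:at_ear.size]
rms_sound = float(np.sqrt(np.mean((at_ear - late) ** 2)))

# An RF-assisted ANC already holds x, so its anti-noise aligns exactly.
aligned = np.concatenate([np.zeros(D), x])
rms_rf = float(np.sqrt(np.mean((at_ear - aligned) ** 2)))
print(rms_sound, rms_rf)              # lookahead makes cancellation exact
```

Real ANC must also model the acoustic channel and speaker response; this sketch isolates only the timing advantage that the RF-sound velocity gap provides.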

Data, Demo, & Code

IMUV: Self-supervised IMU-based speech decoding


Demo: Separating Voices by Spatial Regions, ICML 2022


Health Diarization: IMU activity classification

Some of our work has been covered by the press:


Earable computing: a new research area in the making

The future of AR is earbuds, not eyeglasses

Project EarSense: Finally, apps to sink your teeth into

Team Members

Yu-Lin (Wally) Wei
PhD Student, UIUC
Zhijian Yang
PhD Student, UIUC
Hyungjoo Seo
PhD Student, UIUC
Rajalaxmi Rajagopalan
PhD Student, UIUC
Sattwik Basu
PhD Student, UIUC
Debottam Dutta
PhD Student, UIUC
Zhongweiyang (Alan) Xu
MS Student, UIUC
Avinash Subramaniam
MS Student, UIUC
Sahil Bhandary Karnoor
MS Student, UIUC
Akash Mittal
MS Student, UIUC
Chaitanya Amballa
MS Student, UIUC
Eric Dong
Undergraduate Student, UIUC
Jaewook Lee
Undergraduate Student, UIUC
Phoebe Chen
Undergraduate Student, UIUC
Bashima Islam
Postdoc, UIUC
Romit Roy Choudhury
Professor, ECE & CS, UIUC


Past & Present
Ziyue (Liz) Li
Alumni 2020, UIUC
Waymo @ Google
Sheng Shen
Alumni 2019, UIUC
Facebook Reality Labs
Jay Prakash
Visiting Scholar, SUTD
Haitham Hassanieh
Asst. Professor, ECE & CS, UIUC
Rakesh Kumar
Professor, ECE, UIUC
We are looking for PhD students with backgrounds in sensing, (acoustic) signal processing, communications, embedded systems, and machine learning.
