To receive email announcements and live stream information for upcoming seminars, please subscribe to the SystemX Seminar/EE310 Mailing list here.
Technologies for human-machine interfaces play a key role in human augmentation, prosthetics, robot learning, and virtual reality. In particular, devices that track our hands enable a variety of interactive and virtual tasks, such as object recognition, manipulation, and even communication. However, a large gap remains between these devices and human capabilities in terms of precision, fast learning, and low power consumption.
In this talk, I will present a newly developed, fast-learning electronic-skin device that enables user-independent, data-efficient recognition of different hand tasks. This work is the first practical approach that is both lean enough in form and adaptable enough to work for essentially any user with limited data. The device consists of a directly printable electrical nanomesh coupled with an unsupervised meta-learning framework. The resulting system rapidly adapts to different users and tasks, including command recognition, keyboard typing, and object recognition in virtual space.
Dr. Kyun Kyu (Richard) Kim is currently a postdoctoral fellow in Zhenan Bao's research group at Stanford University. He received his Ph.D. in Mechanical Engineering from Seoul National University in 2021. He has developed a series of soft, human-skin-like electronic devices enhanced by AI algorithms that combine hardware and algorithmic efficiency. These devices comprise soft skin sensors that conformally adhere to the user's skin, replacing conventional devices that are bulky and complex. When combined with AI algorithms, they enable a single sensory component to generate highly informative signals that would otherwise require numerous sensory units. In recognition of this work, he was recently named to the Korea list of the 2022 MIT Innovators Under 35.