Next-generation Cameras and Displays Incorporating Optics and Machine Intelligence

Topic: 
Next-generation Cameras and Displays Incorporating Optics and Machine Intelligence
Thursday, February 10, 2022 - 5:30pm to 6:30pm
Venue: 
Hewlett 103
Speaker: 
Yifan (Evan) Peng - Postdoc
Abstract / Description: 

From cameras to displays, visual computing systems are becoming ubiquitous in our daily lives. However, their underlying design principles have stagnated after decades of evolution. Existing imaging devices require dedicated hardware that is not only complex and bulky but also yields suboptimal results in many visual computing scenarios. This shortcoming stems from the lack of joint design between hardware and software and, importantly, impedes the delivery of vivid 3D visual experiences on displays. By bridging advances in computer science and optics with machine intelligence strategies, my work engineers physically compact yet functionally powerful imaging solutions for cameras and displays, with applications in photography, wearable computing, IoT products, autonomous driving, medical imaging, and VR/AR/MR.

In this talk, I will describe two classes of computational imaging modalities. First, in Deep Optics, we jointly optimize lightweight diffractive optics and differentiable image-processing algorithms to enable high-fidelity imaging in domain-specific cameras. Second, in Neural Holography, we apply the same combination of machine intelligence and physics to long-standing problems in computer-generated holography. Specifically, I will describe several holographic display architectures that leverage camera-in-the-loop optimization and neural network model representations to deliver full-color, high-quality holographic images. Driven by advances in machine intelligence, these hardware-software jointly optimized imaging solutions can unlock the full potential of traditional cameras and displays and enable next-generation visual computing systems.
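To make the joint-design idea concrete, the sketch below illustrates end-to-end optimization in the spirit of Deep Optics. It is a minimal, hypothetical example rather than the speaker's implementation: a learnable diffractive phase mask and a small reconstruction network are trained together through a differentiable image-formation model, so gradients of the reconstruction loss update the optics and the algorithm simultaneously. All names, grid sizes, and the simple Fraunhofer PSF model are illustrative assumptions.

```python
# Minimal sketch of end-to-end "deep optics"-style optimization (illustrative only):
# a learnable diffractive phase mask and a reconstruction CNN are optimized jointly
# through a differentiable image-formation model.
import torch
import torch.nn as nn

N = 64  # simulation grid size for pupil and image patches (assumed)

# Learnable phase delay of a diffractive optical element (radians).
phase = nn.Parameter(torch.zeros(N, N))

# Small reconstruction network standing in for the image-processing half.
recon = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

def psf_from_phase(phase):
    """Point spread function under a far-field (Fraunhofer) approximation:
    intensity of the Fourier transform of the complex pupil function."""
    pupil = torch.exp(torch.complex(torch.zeros_like(phase), phase))
    field = torch.fft.fftshift(torch.fft.fft2(pupil))
    psf = field.abs() ** 2
    return psf / psf.sum()  # normalize total energy

def image_formation(scene, psf):
    """Simulated sensor image: scene convolved with the PSF (via FFT), plus noise."""
    otf = torch.fft.fft2(torch.fft.ifftshift(psf))
    blurred = torch.fft.ifft2(torch.fft.fft2(scene) * otf).real
    return blurred + 0.01 * torch.randn_like(blurred)

opt = torch.optim.Adam([phase, *recon.parameters()], lr=1e-3)

for step in range(100):
    scene = torch.rand(8, 1, N, N)           # random training patches (placeholder data)
    sensor = image_formation(scene, psf_from_phase(phase))
    estimate = recon(sensor)
    loss = nn.functional.mse_loss(estimate, scene)  # end-to-end objective
    opt.zero_grad()
    loss.backward()   # gradients flow into both the optical phase mask and the CNN
    opt.step()
```

The same principle underlies camera-in-the-loop holography: replace the simulated image-formation step with measurements from (or a learned neural model of) the physical display, and optimize the hologram or model parameters against the captured result.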

Bio: 

Yifan (Evan) Peng is a Postdoctoral Research Fellow in the Computational Imaging Lab at Stanford University. His research interests span the interdisciplinary fields of optics/photonics, computer graphics, computer vision, and AI. Much of his recent work concerns developing visual computing modalities that combine optics and machine intelligence for both cameras and displays. His recent work on Deep Optics and Neural Holography has attracted considerable attention from both academia and industry. He completed his Ph.D. in Computer Science at the University of British Columbia, and his M.Sc. and B.Eng. in Optical Science and Engineering at Zhejiang University. During his Ph.D., he was also a visiting research student at the Stanford Computational Imaging Lab and at the Visual Computing Center, King Abdullah University of Science and Technology.

Website: http://stanford.edu/~evanpeng/