Energy-Efficient Circuits and Systems for Computational Imaging and Vision on Mobile Devices


Topic: 
Energy-Efficient Circuits and Systems for Computational Imaging and Vision on Mobile Devices
Thursday, March 8, 2018 - 4:30pm to 5:30pm
Venue: 
Y2E2 Room 111
Speaker: 
Priyanka Raina - Visiting Research Scientist - Nvidia & Assistant Professor - Electrical Engineering - Stanford University (Starting September 2018)
Abstract / Description: 

85% of images today are taken by cell phones. These images are not merely projections of light from the scene onto the camera sensor; they are the result of extensive computation. This computation involves a number of computational imaging algorithms, such as high dynamic range (HDR) imaging, panorama stitching, image deblurring and low-light imaging, that compensate for camera limitations, and a number of deep learning based vision algorithms, such as face recognition, object recognition and scene understanding, that perform inference on these images for a variety of emerging applications. However, because of their high computational complexity, mobile CPU- or GPU-based implementations of these algorithms do not achieve real-time performance. Moreover, offloading these algorithms to the cloud is not a viable solution: wirelessly transmitting large amounts of image data results in long latency and high energy consumption, making this approach unsuitable for mobile devices.

My approach to solving this problem has been to design energy-efficient hardware accelerators targeted at these applications. In this talk, I will present my work on the architecture design and implementation of three complete computational imaging systems for energy-constrained mobile environments: (1) an energy-scalable accelerator for blind image deblurring, (2) a reconfigurable bilateral filtering processor for computational photography applications such as HDR imaging, low-light imaging and glare reduction, and (3) a low-power processor for real-time motion magnification in videos. Each of these accelerator-based systems achieves 2 to 3 orders of magnitude improvement in runtime and 3 to 4 orders of magnitude improvement in energy compared to existing implementations on CPU and GPU platforms. I will present the energy minimization techniques that I employed in my designs to obtain these improvements. In addition, I will discuss how these systems achieve energy scalability by trading off accuracy against execution time. This is essential in real-life applications where one might still want to run a complex algorithm in a low-battery scenario but be willing to sacrifice some visual quality.
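For readers unfamiliar with the algorithm behind the second accelerator, the bilateral filter is easy to state in software even though an efficient hardware realization is not. The sketch below is a minimal, unoptimized reference in Python/NumPy, written purely for illustration; it does not reflect the processor's actual datapath, and the parameter names (`radius`, `sigma_s`, `sigma_r`) are generic choices, not terms from the talk.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter on a grayscale image with values in [0, 1].

    Each output pixel is a weighted average of its neighborhood, where the
    weight combines spatial closeness (sigma_s) and intensity similarity
    (sigma_r), so edges are preserved while flat regions are smoothed.
    """
    h, w = img.shape
    out = np.zeros_like(img)
    # Precompute the spatial Gaussian weights for the (2r+1) x (2r+1) window.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    padded = np.pad(img, radius, mode='edge')
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weight: penalize neighbors whose intensity differs
            # from the center pixel.
            rng = np.exp(-((window - img[y, x])**2) / (2 * sigma_r**2))
            weights = spatial * rng
            out[y, x] = np.sum(weights * window) / np.sum(weights)
    return out

if __name__ == "__main__":
    # Smooth a noisy step image while keeping its edge sharp.
    step = np.concatenate([np.zeros((8, 8)), np.ones((8, 8))], axis=1)
    noisy = np.clip(step + 0.05 * np.random.randn(*step.shape), 0, 1)
    print(bilateral_filter(noisy).round(2))
```

Shrinking `radius` (or coarsening the weight computation) reduces the number of multiply-accumulates per pixel at some cost in output quality, which is the same kind of accuracy-versus-energy knob the abstract describes, only realized in silicon rather than in software.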

I will conclude my talk by giving my vision for how such accelerator-based systems will enable energy-efficient integration of computational imaging and deep learning based vision algorithms into mobile and wearable devices for emerging applications such as autonomous driving, micro-robotics, assistive technology, medical imaging and augmented and virtual reality.

Bio: 

Priyanka Raina will be starting as an Assistant Professor in Electrical Engineering at Stanford University in September 2018. She is currently a Visiting Research Scientist in the Architecture Research Group at NVIDIA Corporation. She received her Ph.D. degree in 2018 and S.M. degree in 2013 in Electrical Engineering and Computer Science from MIT, and her B.Tech. degree in Electrical Engineering from the Indian Institute of Technology (IIT) Delhi in 2011. Priyanka's current research interests are in the area of designing energy-efficient and high-performance circuits and systems for enabling complex computational photography, computer vision and machine learning based applications on mobile and wearable devices. Her research results include the demonstration of the first hardware-accelerated systems for blind image deblurring (awarded the Best Student Paper Award at ESSCIRC 2016 and the 2016 ISSCC Student Research Preview Award), high-dynamic-range and low-light imaging (presented at ISSCC 2013, JSSC 2013) and real-time motion magnification in videos.