Keynote and invited speakers

Keynote speakers

August 1st, 11:30: Shree Nayar

Shree Nayar

Title: Advances in Visual Communication

Abstract: We have entered a new age of digital communication. Today, users on social and other platforms are communicating with each other more frequently using photos and videos than text and audio. In this context, we are interested in developing technologies that dramatically lower the physical effort and the cognitive load required during visual communication. I will present some of the approaches developed by my team at Snap Research to make visual communication fast, easy and engaging. Our work draws on several fields including anthropology, imaging, vision, communications, AR/VR, robotics, and HCI.

Biography: Shree K. Nayar is the T. C. Chang Professor of Computer Science at Columbia University. He has also served as Director of NYC Research at Snap Inc. At Columbia, he heads the Computer Vision Laboratory (CAVE), which develops computational imaging and computer vision systems. His work is motivated by applications in the fields of digital imaging, computer vision, computer graphics, robotics, virtual reality, augmented reality, and human-computer interfaces.

Nayar received his PhD degree in Electrical and Computer Engineering from the Robotics Institute at Carnegie Mellon University. For his research and teaching he has received several honors including the David Marr Prize (1990 and 1995), the David and Lucile Packard Fellowship (1992), the National Young Investigator Award (1993), the NTT Distinguished Scientific Achievement Award (1994), the Keck Foundation Award for Excellence in Teaching (1995), the Columbia Great Teacher Award (2006), the Carnegie Mellon Alumni Achievement Award (2009), Sony Appreciation Honor (2014), the Columbia Engineering Distinguished Faculty Teaching Award (2015), the IEEE PAMI Distinguished Researcher Award (2019), and the Funai Achievement Award (2021). For his contributions to computer vision and computational imaging, he was elected to the National Academy of Engineering in 2008, the American Academy of Arts and Sciences in 2011, and the National Academy of Inventors in 2014.

August 2nd, 11:30: Changhuei Yang

Changhuei Yang

Title: Computation in microscopy: How computers are changing the way we build and use microscopes

Abstract: The level of computational power we can currently access has significantly changed the way we think about, process, and interact with microscopy information. In this talk, I will discuss some of our recent computational microscopy and deep learning work that showcases these shifts in the context of pathology and life science research. I will talk about Fourier Ptychographic Microscopy (FPM) – the first demonstrated computational approach for numerically zeroing out physical aberrations from microscopy images. As a novel way to collect and process microscopy data, FPM can also bring significant workflow advantages to pathology. I will also talk about the use of Deep Learning in image analysis, and point out some of the impactful ways Deep Learning can improve how we deal with image data in pathology and life science research. Looking into the near future, the surprising findings of these current endeavors strongly indicate that redesigning the microscope to better suit these computational needs will be instrumental for the next level of AI-based image analysis.

Biography: Changhuei Yang is the Thomas G. Myers Professor of Electrical Engineering, Bioengineering and Medical Engineering at Caltech. He works in the area of biophotonics and computational imaging. His research team has developed numerous novel biomedical imaging technologies over the past 2 decades – including technologies for focusing light deeply into animals using time-reversal optical methods, lensless microscopy, ePetri, Fourier Ptychography, and non-invasive brain activity monitoring methods. He has worked with major companies, including BioRad, Amgen and Micron-Aptina, to develop solutions for their technological challenges.
He has received the NSF Career Award, the Coulter Foundation Early Career Phase I and II Awards, and the NIH Director’s New Innovator Award. In 2008 he was named one of Discover Magazine’s ‘20 Best Brains Under 40’. He is a Coulter Fellow, an AIMBE Fellow and an OSA Fellow. He was elected as a Fellow in the National Academy of Inventors in 2020.

August 3rd, 11:30: Joyce Farrell

Joyce Farrell

Title: Physics-based end-to-end image systems simulations

Abstract: The ability to experiment and innovate in the design of imaging systems is important for many applications, including consumer photography, medical imaging, and autonomous driving. This talk describes a physics-based end-to-end image systems simulation programming environment that combines quantitative computer graphics with models of optics and image sensors. We assess the accuracy of simulations by comparing real and synthetic camera image data. Our image systems simulation software is open-source and freely available through GitHub.

Biography: Joyce Farrell is a senior research associate and lecturer in the Stanford School of Engineering and the executive director of the Stanford Center for Image Systems Engineering (SCIEN). She received her BS from the University of California at San Diego and her PhD from Stanford University. She was a postdoctoral fellow at NASA Ames Research Center, New York University, and Xerox PARC, before joining the research staff at Hewlett Packard in 1985. In 2000 Joyce joined Shutterfly, a startup company specializing in online digital photofinishing, and in 2001 she formed ImagEval Consulting, LLC, a company specializing in the development of software and design tools for image systems simulation. In 2003, Joyce returned to Stanford University to develop the SCIEN Industry Affiliates Program.

Invited speakers

August 1st, 10:30: Yi Xue

Yi Xue

Title: 3D fluorescence and phase microscopy with scattering samples

Abstract: Optical imaging is often hindered by light scattering. Scattered photons contribute to the background noise and degrade the signal-to-noise ratio (SNR) of fluorescence images. To tackle this challenge, I developed several strategies for both multiphoton and one-photon microscopy to image through scattering media. Multiphoton microscopy has been widely used for deep-tissue imaging because of its long excitation wavelength and inherent optical sectioning ability, but its imaging speed is relatively slow because of scanning. Multiphoton microscopy with parallelized excitation and detection improves the imaging speed, but scattered fluorescent photons degrade the SNR of the images. To achieve both high speed and high SNR, I developed a two-photon imaging technique that combines structured illumination with a digital spatial-frequency filter to discard scattered photons and keep only ballistic photons. On the other hand, scattered photons carry information about the heterogeneity of the scattering medium, which can be quantified by its refractive index. Instead of discarding scattered photons, I developed a one-photon technique that decodes the refractive index of the medium from scattered fluorescence images. This technique models a scattering medium as a series of thin layers and describes the light path through the medium. By measuring fluorescence images and solving the inverse problem, it reconstructs the 3D refractive index of the scattering medium and digitally corrects for scattering in fluorescence images.
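
The spatial-frequency filtering idea in the abstract can be illustrated with a toy sketch (this is an illustration of Fourier-domain filtering in general, not the speaker's actual pipeline): scattered light tends to form a diffuse, high-frequency background, so masking frequencies in the Fourier domain can separate it from the smooth in-focus signal.

```python
import numpy as np

def spatial_frequency_filter(image, cutoff):
    """Keep only spatial frequencies below `cutoff` (in cycles/pixel).

    A toy stand-in for a digital spatial-frequency filter: transform to
    the Fourier domain, zero out frequencies outside a disk, and
    transform back.
    """
    F = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny))
    fx = np.fft.fftshift(np.fft.fftfreq(nx))
    FY, FX = np.meshgrid(fy, fx, indexing="ij")
    mask = np.sqrt(FX**2 + FY**2) <= cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# Toy example: a smooth "ballistic" signal plus broadband noise standing
# in for scattered background.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:64, 0:64]
signal = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 200.0)
noisy = signal + 0.3 * rng.standard_normal((64, 64))
recovered = spatial_frequency_filter(noisy, cutoff=0.1)
```

In this toy case the filter suppresses most of the broadband background while passing the smooth signal nearly untouched; the real technique additionally uses structured illumination to shift signal content into a known frequency band before filtering.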

Biography: Dr. Yi Xue is an assistant professor at the University of California, Davis. Before joining UC Davis, she was a postdoctoral fellow in Prof. Laura Waller’s lab at UC Berkeley. She received her PhD and MS degrees in Mechanical Engineering from the Massachusetts Institute of Technology in 2019 and 2015, respectively, and her BEng degree in Optical Engineering from Zhejiang University, China, in 2013. Her current research interests include computational optics, multiphoton microscopy, brain imaging, and optogenetics.

August 1st, 14:00: Aydogan Ozcan

Aydogan Ozcan

Title: Diffractive Optical Networks and Computational Imaging Without a Computer

Abstract: We will discuss diffractive optical networks designed by deep learning to all-optically implement various complex functions as the input light diffracts through spatially-engineered surfaces. These diffractive processors complete their computational task at the speed of light propagation through thin, passive optical layers and have various applications, e.g., all-optical image analysis, feature detection, object classification, computational imaging and seeing through diffusers. They also enable task-specific camera designs and new optical components for, e.g., spatial, spectral and temporal beam shaping and spatially-controlled wavelength division multiplexing. These deep learning-designed diffractive networks broadly impact (1) all-optical statistical inference engines, (2) computational cameras and microscopes, and (3) inverse design of optical systems that are task-specific.
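
The mechanism behind such diffractive processors can be sketched numerically with the angular-spectrum method: a field passes through thin phase layers, propagating in free space between them. The sketch below uses random (untrained) phase layers and illustrative parameters; it only shows the forward physics, not the deep-learning design of the surfaces.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z (angular-spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy "diffractive network": two phase-only layers with free-space
# propagation between them (layer patterns here are random, not trained).
rng = np.random.default_rng(0)
n, wavelength, dx, z = 64, 0.5e-6, 1e-6, 100e-6  # illustrative values
yy, xx = np.mgrid[0:n, 0:n] - n // 2
field = np.exp(-(xx**2 + yy**2) / (2 * 8.0**2)).astype(complex)  # input beam
out = field
for _ in range(2):
    layer = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))  # phase-only layer
    out = angular_spectrum_propagate(out * layer, wavelength, dx, z)
```

Because each layer is phase-only and propagation is unitary at these sampling parameters, the field's total energy is conserved; training would instead optimize the layer phases so the output intensity pattern encodes the desired inference result.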

Biography: Dr. Aydogan Ozcan is the Chancellor’s Professor and the Volgenau Chair for Engineering Innovation at UCLA and an HHMI Professor with the Howard Hughes Medical Institute. He leads the Bio- and Nano-Photonics Laboratory at the UCLA School of Engineering and is also the Associate Director of the California NanoSystems Institute. Dr. Ozcan is an elected Fellow of the National Academy of Inventors (NAI), holds >55 issued/granted patents, and is the author of one book and the co-author of >800 peer-reviewed publications in major scientific journals and conferences. Dr. Ozcan is the founder and a member of the Board of Directors of Lucendi Inc., Hana Diagnostics, Pictor Labs, and Holomic/Cellmic LLC, which was named a Technology Pioneer by the World Economic Forum in 2015. Dr. Ozcan is also a Fellow of the American Association for the Advancement of Science (AAAS), the International Photonics Society (SPIE), the Optical Society of America (OSA), the American Institute for Medical and Biological Engineering (AIMBE), the Institute of Electrical and Electronics Engineers (IEEE), the Royal Society of Chemistry (RSC), the American Physical Society (APS), and the Guggenheim Foundation. He has received major awards including the Presidential Early Career Award for Scientists and Engineers, the International Commission for Optics (ICO) Prize, the Joseph Fraunhofer Award & Robert M. Burley Prize (Optica), the Biophotonics Technology Innovator Award (SPIE), the Rahmi M. Koc Science Medal, the International Photonics Society Early Career Achievement Award (SPIE), the Army Young Investigator Award, the NSF CAREER Award, the NIH Director’s New Innovator Award, the Navy Young Investigator Award, the IEEE Photonics Society Young Investigator Award and Distinguished Lecturer Award, the National Geographic Emerging Explorer Award, the National Academy of Engineering Grainger Foundation Frontiers of Engineering Award, and MIT’s TR35 Award for his seminal contributions to computational imaging, sensing, and diagnostics. Dr. Ozcan is also listed as a Highly Cited Researcher by Web of Science (Clarivate).

August 2nd, 09:00: Wenzhen Yuan

Wenzhen Yuan

Title: Connecting Optics and Mechanics: How do Vision-based Sensors Help Robots Understand Touch?

Abstract: In this talk, I will introduce the development of a high-resolution robotic tactile sensor GelSight, and how it can help robots understand and interact with the physical world. GelSight is a vision-based tactile sensor that measures the geometry of the contact surface with a spatial resolution of around 25 micrometers, and it also measures the shear forces and torques at the contact surface. With the help of high-resolution information, a robot could easily detect the precise shape and texture of the object surfaces and therefore recognize them. It can also help robots get more information from contact, such as understanding different physical properties of the objects and assisting in manipulation tasks. I will also introduce some open challenges in sensor design and our effort to address them, including using physically-based rendering to model the sensors and using micro-manufacturing technologies to make new sensors.

Biography: Wenzhen Yuan is an assistant professor in the Robotics Institute at Carnegie Mellon University and the director of the CMU RoboTouch Lab. She is a pioneer in high-resolution tactile sensing for robots, and she also works in multi-modal robot perception, soft robots, robot manipulation, and haptics. Yuan received her Master of Science and PhD degrees from MIT and Bachelor of Engineering from Tsinghua University.

August 2nd, 10:30: Jianwei (John) Miao

Jianwei (John) Miao

Title: Computational Microscopy: Coherent Diffractive Imaging with Photons and Electrons

Abstract: Since the invention of compound microscopes in the 17th century, lens-based microscopy – optical, phase-contrast, fluorescence, confocal, super-resolution, and electron microscopes – has played an important role in the evolution of modern science and technology. In 1999, a novel form of microscopy, known as coherent diffractive imaging (CDI), or lensless or computational microscopy, was developed that transformed our conventional view of microscopy: the physical lens of a microscope is replaced by a computational algorithm. The well-known phase problem is solved by oversampling combined with iterative algorithms. CDI methods such as plane-wave CDI, ptychography (i.e., scanning CDI), and Bragg CDI have since been implemented for a wide range of applications in the physical and biological sciences using synchrotron radiation, X-ray free-electron lasers, high harmonic generation, and optical and electron microscopy. In this talk, I will present some recent methodology developments in this rapidly growing field and highlight several important cross-disciplinary applications.
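
The oversampling-plus-iteration idea can be sketched with a classic error-reduction loop (in the style of Gerchberg–Saxton/Fienup): alternate between enforcing the measured Fourier magnitudes and the object-domain support constraint. This is a toy illustration of the general approach, not the specific algorithms discussed in the talk.

```python
import numpy as np

def error_reduction(magnitudes, support, n_iter=200, seed=0):
    """Toy error-reduction phase retrieval.

    magnitudes: measured Fourier magnitudes, oversampled so that the
        support constraint carries real information.
    support: boolean mask; the object is assumed zero outside it and
        non-negative inside it.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, magnitudes.shape)
    g = np.fft.ifft2(magnitudes * np.exp(1j * phase))
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        # Fourier-domain constraint: keep phases, impose measured magnitudes.
        G = magnitudes * np.exp(1j * np.angle(G))
        g = np.fft.ifft2(G)
        # Object-domain constraints: support and non-negativity.
        g = np.where(support & (g.real > 0), g.real, 0.0)
    return g.real

# Toy example: a small non-negative object padded with zeros (oversampling).
obj = np.zeros((32, 32))
obj[12:18, 13:19] = np.random.default_rng(1).uniform(0.5, 1.0, (6, 6))
support = np.zeros((32, 32), dtype=bool)
support[10:20, 11:21] = True
mags = np.abs(np.fft.fft2(obj))
rec = error_reduction(mags, support)
```

Plain error reduction can stagnate on realistic data; practical CDI reconstructions use refinements such as hybrid input-output and shrink-wrapped supports, but the alternating-projection core is the same.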

Biography: Jianwei (John) Miao is a Professor of Physics & Astronomy and a member of the California NanoSystems Institute at UCLA. He is an internationally renowned pioneer in the development of novel imaging methods with X-rays and electrons. In 1999, he performed the seminal experiment extending X-ray crystallography to allow structural determination of non-crystalline specimens, which is now known as CDI. In 2012, he applied CDI algorithms to pioneer atomic electron tomography (AET) for 3D structure determination of materials without assuming crystallinity. He has performed several groundbreaking AET experiments to determine the 3D structure of crystal defects at the single-atom level. In 2019, he developed 4D AET to observe crystal nucleation at atomic resolution, showing that a theory beyond classical nucleation theory is needed to describe nucleation at the atomic scale. More recently, he advanced AET to solve a long-standing grand challenge in the physical sciences – determining the 3D atomic structure of amorphous solids for the first time.
Miao is the Deputy Director of the STROBE NSF Science and Technology Center and an Associate Editor for Science Advances and Crystallography Reviews. His honors and awards include the Werner Meyer-Ilse Memorial Award (1999), Alfred P. Sloan Research Fellowship (2006-2008), Outstanding Teacher of the Year Award in Physics & Astronomy at UCLA (2006-2007), Kavli Frontiers Fellowship (2010), Theodore von Kármán Fellowship from the RWTH Aachen University (2013), Microscopy Today Innovation Award (2013), University of Strasbourg Institute for Advanced Study Fellowship (2015-2017), Fellow of the American Physical Society (2016), NSF Creativity Award (2018), and Innovation in Materials Characterization Award from the Materials Research Society (2021).

August 2nd, 14:00: Marie Ygouf

Marie Ygouf

Title: Space Starlight Suppression Technology Demonstration: The Nancy Grace Roman Space Telescope Coronagraph

Abstract: After the James Webb Space Telescope (JWST), NASA’s next flagship astrophysics mission is the ambitious Nancy Grace Roman Space Telescope (formerly the Wide Field Infrared Survey Telescope), currently on track for a 2027 launch. The Roman Coronagraph Instrument will be the first high-performance stellar coronagraph using active wavefront control for deep starlight suppression in space, providing unprecedented levels of contrast, spatial resolution, and sensitivity for astronomical observations in the optical. During its technology demonstration phase, the Roman Coronagraph will resolve the signal of an exoplanet via photometry and spectroscopy and will directly image and measure the polarization of disks. Future flagship mission concepts aim to characterize Earth analogues with visible-light flux ratios of ~10⁻¹⁰, and the Roman Coronagraph is a critical intermediate step toward that goal, with a predicted capability of ~10⁻⁹. Here, we introduce the ideas of adaptive optics and coronagraph design, and present the coronagraph’s capabilities as well as some anticipated results from its technology demonstration.

Biography: My research focuses on high-contrast imaging with a view to directly detecting and characterizing exoplanets. I am particularly interested in improving the performance of instruments for exoplanet science by taking advantage of data analysis and of the detailed characterization of instrumental limitations and calibration capabilities.
I am part of the Roman Coronagraph Project Science team. I am also preparing high-contrast imaging observations of circumstellar environments with the NIRCam GTO team and was awarded JWST time through several GO programs.

August 3rd, 09:00: Lihong Wang

Lihong Wang

Title: Photoacoustic Tomography of Molecular Absorption from Organelles to Patients

Abstract: Photoacoustic tomography (PAT) has been developed for in vivo functional, metabolic, molecular, and histologic imaging by physically combining optical and ultrasonic waves. Broad applications include early-cancer detection and brain imaging. High-resolution pure optical imaging is limited to superficial imaging within the optical diffusion limit (~1 mm in the skin) in scattering tissue. By synergistically combining light and sound, PAT, in the form of either photoacoustic computed tomography or photoacoustic microscopy, breaks through this limit and provides deep penetration at high ultrasonic resolution and high optical contrast. PAT is the only modality capable of in vivo imaging across the length scales of organelles, cells, tissues, and organs (or small-animal organisms) with consistent molecular contrast. The US FDA approved PAT for breast cancer diagnosis in 2021. Since 2010, the annual conference on PAT has been the largest at SPIE’s 20,000-attendee Photonics West. In addition, compressed ultrafast photography, the world’s fastest real-time camera, will be touched upon.

Biography: Lihong Wang edited the first book on photoacoustic tomography. His book entitled “Biomedical Optics: Principles and Imaging,” one of the first textbooks in the field, won the 2010 Joseph W. Goodman Book Writing Award. He has published 560 peer-reviewed journal articles and delivered 570 keynote/plenary/invited talks. His Google Scholar h-index and citations have reached 149 and 94,000, respectively. His laboratory was the first to report functional photoacoustic tomography, 3D photoacoustic microscopy, photoacoustic endoscopy, photoacoustic reporter gene imaging, the universal photoacoustic reconstruction algorithm, and CUP (world’s fastest camera). He chairs the annual conference on Photons plus Ultrasound, the largest conference at Photonics West. He was the Editor-in-Chief of the Journal of Biomedical Optics. He received the NIH Director’s Pioneer, NIH Director’s Transformative Research, and NIH/NCI Outstanding Investigator awards. He also received the OSA C.E.K. Mees Medal, IEEE Technical Achievement Award, IEEE Biomedical Engineering Award, SPIE Britton Chance Biomedical Optics Award, IPPA Senior Prize, and OSA Michael S. Feld Biophotonics Award. He is a Fellow of the AAAS, AIMBE, Electromagnetics Academy, IAMBE, IEEE, NAI, OSA, and SPIE as well as a Foreign Fellow of COS. An honorary doctorate was conferred on him by Lund University, Sweden. He was inducted into the National Academy of Engineering.

August 3rd, 10:30: Rajesh Menon

Rajesh Menon

Title: Non-anthropocentric Imaging with and without optics

Abstract: Imaging that is not constrained by human perception could be advantageous for enhanced privacy, for low-power persistent applications, and for improved inferencing that exploits properties of light unavailable to humans (e.g., spectrum, polarization). By co-optimizing the imager with subsequent image processing, we showcase three examples: (1) snapshot hyper-spectral imaging and inferencing; (2) snapshot deep-brain fluorescence microscopy; and (3) optics-free imaging and inferencing. New modalities for signal recording, optics enhanced by nanomanufacturing, and advanced computational capabilities promise exciting new opportunities.

Biography: Rajesh Menon combines his expertise in nanofabrication, computation and optical engineering to impact several fields including inverse-designed photonics, flat lenses and unconventional imaging. Rajesh is a Fellow of the OSA, and Senior Member of the IEEE and the SPIE. Among his other honors are a NASA Early-Stage-Innovations Award, NSF CAREER Award and the International Commission for Optics (ICO) Prize. Rajesh currently directs the Laboratory for Optical Nanotechnologies at the University of Utah. He received S.M. and Ph.D. degrees from MIT.

August 3rd, 14:00: Kevin Hand

Kevin Hand

Title: Alien Oceans on Earth and Beyond

Abstract: Where is the best place to find life beyond Earth? It may be that the small, ice-covered moons of Jupiter and Saturn harbor some of the most habitable real estate in our Solar System. Life loves liquid water, and these moons have lots of it. These alien oceans of the outer solar system have likely persisted for much of the history of the solar system, and as a result they are highly compelling targets in our search for life beyond Earth. Within these oceans may reside a second origin of life, and the answer to whether we live in a biological universe or one in which life on Earth represents a biological singularity. Dr. Hand will explain the science behind why we think these oceans exist and what we know about the conditions on these worlds. He will focus on Jupiter’s moon Europa, which is a top priority for future missions. Dr. Hand will also detail how the exploration of Earth’s ocean is helping to guide our understanding of the potential habitability of worlds like Europa.

Biography: Dr. Kevin P. Hand is a planetary scientist and astrobiologist at NASA’s Jet Propulsion Laboratory, where he directs the Ocean Worlds Lab (http://oceanworldslab.jpl.nasa.gov). His research focuses on the origin, evolution and distribution of life in the solar system with an emphasis on moons of the outer solar system that likely harbor liquid water oceans. He is the pre-Project Scientist for NASA’s Europa Lander mission concept and was co-chair of the 2016 Europa Lander Science Definition Team. From 2011-2016 Hand served as Deputy Chief Scientist for Solar System Exploration at JPL. His fieldwork has brought him to Antarctica, the Arctic, the depths of Earth’s ocean, the glaciers of Kilimanjaro and Mt. Kenya, and the desert of Namibia. His book ‘Alien Oceans: The Search for Life in the Depths of Space’, was recently published by Princeton University Press.

August 3rd, 16:00: Sara Beery

Sara Beery

Title: Computational Imaging Challenges in Ecological Monitoring

Abstract: We require systems to monitor species in real time and in greater detail to quickly understand which conservation and sustainability efforts are most effective and take corrective action. Current ecological monitoring systems generate data far faster than researchers can analyze it, making scaling up impossible without automated data processing. However, ecological data collected in the field presents a number of challenges that current methods, like deep learning, are not designed to tackle. These include strong spatiotemporal correlations, imperfect data quality, fine-grained categories, and long-tailed distributions. Beyond this, many sensors currently used in environmental monitoring, such as motion-triggered cameras used for wildlife monitoring or underwater static sonar for monitoring fish populations, are suboptimal. These sensors are often biased in terms of which taxa they capture effectively, tend to capture vast volumes of “empty” data that are expensive to store, move, and process, and have low signal-to-noise ratios. I’ll discuss several open challenges in environmental monitoring that have the potential to be solved with novel computational imaging approaches that co-develop sensor technology with data processing methodology.

Biography: Sara Beery is an incoming Assistant Professor at MIT, where her lab will focus on the development of computer vision methods that enable efficient, accessible, and equitable global-scale environmental monitoring. She recently received her PhD in Computing and Mathematical Sciences at Caltech, advised by Pietro Perona. She was honored to be awarded the Amori Doctoral Prize in CMS, an NSF Graduate Research Fellowship, a PIMCO Data Science Fellowship and an Amazon AI4Science Fellowship. She seeks to break down knowledge barriers between fields: she founded the successful AI for Conservation slack community (with over 950 members), and she is the founding director of the Caltech Summer School on Computer Vision Methods for Ecology. She works closely with Microsoft AI for Earth, Google Research, and Wildlife Insights where she helps turn her research into usable tools for the ecological community. Sara’s experiences as a professional ballerina, a nontraditional student, and a queer woman have taught her the value of unique and diverse perspectives, both inside and outside of the research community. She is passionate about increasing diversity and inclusion in STEM through mentorship, teaching, and outreach.

August 3rd, 17:00: David Van Valen

David Van Valen

Title: Everything as code

Abstract: Biological systems are difficult to study because they consist of tens of thousands of parts, vary in space and time, and their fundamental unit—the cell—displays remarkable variation in its behavior. These challenges have spurred the development of genomics and imaging technologies over the past 30 years that have revolutionized our ability to capture information about biological systems in the form of images. Excitingly, these advances are poised to place the microscope back at the center of the modern biologist’s toolkit. Because we can now access temporal, spatial, and “parts list” variation via imaging, images have the potential to be a standard data type for biology.
For this vision to become reality, biology needs a new data infrastructure. Imaging methods are of little use if it is too difficult to convert the resulting data into quantitative, interpretable information. New deep learning methods are proving to be essential to reliable interpretation of imaging data. These methods differ from conventional algorithms in that they learn how to perform tasks from labeled data; they have demonstrated immense promise, but they are challenging to use in practice. The expansive training data required to power them are sorely lacking, as are easy-to-use software tools for creating and deploying new models. Solving these challenges through open software is a key goal of the Van Valen lab. In this talk, I describe DeepCell, a collection of software tools that meet the data, model, and deployment challenges associated with deep learning. These include tools for distributed labeling of biological imaging data, a collection of modern deep learning architectures tailored for biological image analysis tasks, and cloud-native software for making deep learning methods accessible to the broader life science community. I discuss how we have used DeepCell to label large-scale imaging datasets to power deep learning methods that achieve human level performance and enable new experimental designs for imaging-based experiments.

Biography: David Van Valen, PhD, is a faculty member in the Division of Biology and Bioengineering at Caltech. His research group’s long-term interest is to develop a quantitative understanding of how living systems process, store, and transfer information, and to unravel how this information processing is perturbed in human disease states. To that end, his group leverages and pioneers the latest advances in imaging, genomics, and machine learning to produce quantitative measurements with single-cell resolution as well as predictive models of living systems. Prior to joining the faculty, he studied mathematics (BS 2003) and physics (BS 2003) at the Massachusetts Institute of Technology, applied physics (PhD 2011) at Caltech, and medicine at the David Geffen School of Medicine at UCLA (MD 2013).