Fei-Fei Li

April 26, 2023

Center for Data Science for Enterprise & Society
Data Science Distinguished Lecture Series

Co-sponsored with Cornell’s Center for Social Sciences and the Bowers College of Computing and Information Science.

CCSS Annual Lecture
Fei-Fei Li, Stanford University

What we see and what we value: AI with a human perspective

5:30 P.M., Alice Statler Auditorium

Open to the campus and the Ithaca community, this popular event generally draws an overflow audience and is followed by a reception.

Dr. Fei-Fei Li is the inaugural Sequoia Professor in the Computer Science Department at Stanford University and Co-Director of Stanford’s Human-Centered AI Institute. She served as Director of Stanford’s AI Lab from 2013 to 2018. From January 2017 to September 2018, she was a Vice President at Google and served as Chief Scientist of AI/ML at Google Cloud. She earned her B.A. in physics with High Honors from Princeton in 1999 and her Ph.D. in electrical engineering from the California Institute of Technology (Caltech) in 2005.




One of the most ancient sensory functions, vision emerged in prehistoric animals more than 540 million years ago. Since then, animals, empowered first by the ability to perceive the world and then to move around and change it, have developed increasingly sophisticated intelligence, culminating in human intelligence. Throughout this process, visual intelligence has been a cornerstone of animal intelligence. Enabling machines to see is hence a critical step toward building intelligent machines.

In this talk, I will explore a series of projects with my students and collaborators, all aiming to develop intelligent visual machines using machine learning and deep learning methods. I begin by explaining how neuroscience and cognitive science inspired the development of algorithms that enabled computers to see what humans see. I then discuss intriguing limitations of human visual attention and how we can develop computer algorithms and applications to help, in effect allowing computers to see what humans don’t see. Yet this leads to important social and ethical considerations about what we do not want to see, or do not want to be seen, inspiring work on privacy computing in computer vision, as well as on addressing data bias in vision algorithms.

Finally, I address the tremendous potential and opportunity to develop smart cameras and robots that help people see or do what they want machines’ help seeing or doing, shifting the narrative from AI’s potential to replace people to AI’s opportunity to help people. I will present our work on ambient intelligence in healthcare and on household robots as examples of AI’s potential to augment human capabilities. Last but not least, the cumulative experience of developing AI from a human-centered perspective has led to the establishment of Stanford’s Institute for Human-Centered AI (HAI). I will showcase a small sample of interdisciplinary projects supported by HAI.

For more on Dr. Li, visit her website.