AI4All Day 1: An Exciting Start, Introduction to AI, Stanford HAI Demos

Nidhi Parthasarathy
Aug 20, 2022


Nidhi Parthasarathy, Monday, June 27th, 2022

Introduction to AI

Day one started with an “Introduction to AI” lecture by Alaa Youssef (a postdoctoral fellow at Stanford’s Center for AI in Medicine and Imaging). She gave an overview of AI, ML, and deep learning terminology and their applications in our daily lives. It was fascinating to hear about the history of AI, from the introduction of the Turing test (“can a human being interact with a computer without knowing it is a computer?”) all the way to ImageNet and its impact on the AI research field. She introduced the various areas within AI — computer vision, natural language processing, robotics, and computational biology — and the impact AI has been having in each of them. I learnt that AI is now better at diagnosing skin cancer than even doctors, and also about the effective use of AI for forest fire prediction! Alaa also talked about racial biases in AI algorithms and privacy concerns, and concluded with the many open research opportunities in the area.

Our first day at AI4All (as posted on Stanford HAI’s Instagram)

Medical Analytics Cohort

For the next session, we broke into cohorts based on our interests. Mine was “Medical AI.” We had an icebreaker and an overview discussion around the most exciting applications of AI and the biggest risks of using it. In my cohort, there were seven other students from different parts of the world — Canada, India, and, within the US, New York, New Jersey, and California — and it was fun to get to know them and bond over AI!

A Talk by Fei-Fei Li

The afternoon session featured a really exciting talk from Fei-Fei Li (Professor at Stanford). I had listened to her TED talk earlier, and she is one of the inspirations for my AI journey, so I was really looking forward to her talk. She started with an overview of the history of vision, going all the way back to the evolution of species hundreds of millions of years ago, and how vision was a cornerstone of intelligence, responsible for our survival. She discussed how she got into her research around “making machines see well.” Her phrase “to see is to understand” was very inspiring. She discussed her experiments around how humans can comprehend a lot of visual information in a relatively short amount of time (even one-eighth of a second!), and the challenges around capturing all the detail in a scene.

Her discussion of visual illusions was mind-blowing! “Vision begins in the eye, but truly takes place in the brain!” Dr. Li explained how computers convert pictures into numbers and how mathematical models can operate on those numbers to recognize what they represent (a cat, for example). She talked about her inspiration from how kids learn to identify images, and how that led to ImageNet — a dataset of over 15 million training images — which in turn enabled various convolutional models to recognize and classify images.
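To make the “pictures are numbers” idea concrete, here is a small sketch I put together (my own illustration, not code from the talk). It loads a photo as an array of pixel values and runs it through a convolutional network pretrained on ImageNet; the file name cat.jpg is just a placeholder for any local image, and it assumes Pillow, NumPy, torch, and torchvision are installed.

```python
# Sketch: an image is just a grid of numbers that a model can operate on.
# "cat.jpg" is a placeholder name for any image file on disk.
import numpy as np
import torch
from PIL import Image
from torchvision import models, transforms

img = Image.open("cat.jpg").convert("RGB")
pixels = np.array(img)
print(pixels.shape)  # (height, width, 3): every pixel is three numbers

# A convolutional network pretrained on ImageNet maps those numbers
# to scores over 1,000 object classes.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.eval()
with torch.no_grad():
    scores = model(preprocess(img).unsqueeze(0))
print(scores.argmax(dim=1).item())  # index of the predicted ImageNet class
```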

The next level of a kid’s development, beyond plain recognition, is crafting stories around images (for example, a full sentence or a paragraph describing a scene). She talked about how AI models could evolve to do such tasks too.
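As a taste of what that looks like today, here is a small captioning sketch using the open-source BLIP model via Hugging Face transformers (my own example, not one of the models Dr. Li described). It assumes transformers, torch, and Pillow are installed; scene.jpg is a placeholder for any photo.

```python
# Sketch: generating a one-sentence "story" for an image with BLIP.
# "scene.jpg" is a placeholder for any local photo.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("scene.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```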

She then concluded her talk with a discussion of applications of AI and computer vision, for example using it to detect symptoms of disease and to monitor and manage systems. She also pointed out that, like any other technology, AI is a tool and can be a double-edged sword, to be used with careful consideration of ethics and privacy. She ended her talk by telling us that we were all the future of AI, leaving me and all the other students very inspired and excited!

Stanford HAI Demos

This was followed by a demo session from Ruohan Gao and Ruohan Zhang, postdoctoral researchers at Stanford working on computer vision and machine learning. They talked about their work on using multi-sensory input to help robots understand their world, for example using both sound and vision to help a robot fill blocks with objects or insert a cylindrical peg into 3D-printed bases. They also talked about KiloOSF VisionNet, an AI model for rendering objects as images, and about robotics simulators as a safe, fair, and reproducible playground for AI models to learn in. They also discussed reinforcement learning, where agents learn from reward signals, and how their robots could accumulate over 100 years of experience in just a few days! I had always thought robots could learn faster than humans, but it was very cool to see just how much faster. I think this rapid speed of learning might even help robots surpass humans, completing tasks that humans cannot do today.
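To show what “learning from rewards” means, here is a tiny toy example I wrote (nothing to do with their actual robot setup): tabular Q-learning in a made-up six-cell corridor, where the agent starts at cell 0 and only gets a reward when it reaches the last cell.

```python
# Toy reinforcement learning: tabular Q-learning in a 6-cell corridor.
# The agent starts at cell 0 and is rewarded only upon reaching cell 5.
import random

N_STATES = 6
ACTIONS = [-1, +1]                         # step left or right
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # one value per (state, action)
alpha, gamma, eps = 0.5, 0.9, 0.1          # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Explore sometimes; otherwise take the best-known action.
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0  # reward only at the goal
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q])  # learned values increase toward the goal
```

Even this toy agent needs hundreds of episodes to learn a six-cell corridor; a simulator simply lets robots run through vastly more episodes than the real world ever could, which is how they rack up “years” of experience in days.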

A Virtual Social

We ended the day with a one-hour social (virtual, of course!) where we met students from other groups, got to know each other, and played games.

Overall, a very exciting day one!

Continue reading for Day 2.


Nidhi Parthasarathy

High schooler from San Jose, CA, passionate about STEM/technology and applications for societal good. Avid coder and dancer.