AI4All Day 10: Working with Our Model, The Future of Technology, AI for Mental Health

Nidhi Parthasarathy
4 min read · Aug 20, 2022


Nidhi Parthasarathy, Monday, July 11th 2022

Project Time!

The last week of the program!

Today, in the first session, we continued working on our project. We decided not to flatten the pictures (since it took too much time) and instead to use ResNet18, a pretrained CNN, as our model. However, to feed our pictures to ResNet18, we had to convert them from grayscale to color (RGB), so we worked on adding the extra channel dimensions to each picture. Once we got the CNN model working, we trained it for 10 epochs and began our testing.
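The grayscale-to-RGB step just replicates the single channel three times so each picture matches the 3-channel input shape ResNet18 expects. A minimal sketch of that idea (using NumPy for illustration; in a real PyTorch pipeline the same thing can be done on tensors or with a torchvision transform):

```python
import numpy as np

def gray_to_rgb(img):
    """Replicate a single grayscale channel into 3 identical channels.

    img: array of shape (H, W); returns an array of shape (H, W, 3).
    """
    return np.repeat(img[:, :, np.newaxis], 3, axis=2)

# A tiny 2x2 grayscale "image" with made-up pixel values
gray = np.array([[0, 128], [255, 64]], dtype=np.uint8)
rgb = gray_to_rgb(gray)
print(rgb.shape)  # the image now has a third (channel) dimension
```

Since all three channels are identical copies, the converted image carries no new information; it only satisfies the pretrained model's expected input shape.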

Project Screenshot from Analysis Codelab

Shaping The Future Of Technology - Stacy Hobson

In the afternoon, we had a really interesting keynote from Stacy Hobson, Director of Responsible and Inclusive Technologies Research at IBM. Her presentation was on shaping the future of technology.

She started off by introducing herself: Stacy has a background in Computer Science, Neuroscience and Cognitive Science. She has worked in IBM research on numerous projects, and now leads a research team focused on Responsible and Inclusive Technologies.

Her presentation was very interactive. She started the talk by asking us how technology makes our lives better. We came to the conclusion that technology benefits our lives in four ways: access to information, communication (Zoom, chat, SMS), efficiency (e.g., more efficient ways to do surgery), and safety (e.g., radars and sensors in cars, autonomous vehicles).

She then discussed AI-supported decision-making in six different categories: education (where AI is used to help make decisions about college entrance requirements, etc.), employment (where AI gives first-level recommendations on whom to consider for a job), financial services (loans, mortgages, etc.), healthcare, insurance, and social services.

Then, she asked us about what we thought the drawbacks of AI were. We had a lively discussion about different examples of AI taking shortcuts or being biased. Stacy pointed out how these could potentially result in significant consequences (loss of job opportunities, being separated from families, or even early death). These problems occurred because of societal bias, non-inclusive design considerations, non-representative datasets, and poor model design. Then, she talked about how her research aims to make AI responsible and inclusive, identify informed approaches to reduce AI harm, and develop innovations to help address biases and racial/social inequities.

This was one of the examples Stacy gave about the impacts of bias.

She also talked about how her work drives positive technology outcomes for all: talking to people with deep knowledge of advanced and emerging areas, building access to and knowledge of diverse communities, and staying open to widening their perspectives.

She ended with her request to us. She asked us to think about how we envision the future, how technologies are used in this world, and what we would do to encourage a better future for AI.

Bandits and Reinforcement Learning for Mental Health - Scott Fleming

Next, Scott Fleming (a PhD student in biomedical informatics at Stanford) talked to us about “Bandits and Reinforcement Learning for Mental Health”.

He started by giving an overview of his experience in healthcare. He discussed how the most efficient and effective healthcare systems make many decisions in advance, codifying them as clinical guidelines.

Good health care is basically good decision making; good decision making comes from good information; and good information comes from many places, including randomized controlled trials. Yet it takes 17 years for research evidence to reach clinical practice! He also talked about the good, the bad, and the ugly of randomized controlled trials. The “good” was that they control for confounding factors and provide “direct,” “causal” evidence on the efficacy of different treatments, interventions, care delivery methods, etc. The “bad” was that they are not practical or feasible in all situations (especially considering the large product space of patients × treatments). The “ugly” was that only a very small fraction of disease treatments in America are based on clinical trials.

Only 10–20% of treatments are based on randomized controlled trials, and only 6% of asthma patients are even eligible for them. He went into more detail on each of these topics in the lecture. Overall, it was a very interesting lecture and I learned a lot!
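As a rough illustration of the “bandit” idea from the talk (not Scott’s actual setup, and with made-up reward probabilities): an agent repeatedly chooses among options, say candidate interventions, and updates its estimate of each option’s payoff, balancing exploration of untried options against exploitation of the best-looking one. A toy epsilon-greedy sketch:

```python
import random

def epsilon_greedy_bandit(reward_probs, steps=10000, epsilon=0.1, seed=0):
    """Toy epsilon-greedy bandit.

    reward_probs: hypothetical success rate of each arm (e.g. intervention).
    With probability epsilon we explore a random arm; otherwise we pull the
    arm with the highest estimated value so far.
    """
    rng = random.Random(seed)
    n = len(reward_probs)
    counts = [0] * n      # number of pulls per arm
    values = [0.0] * n    # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                          # explore
        else:
            arm = max(range(n), key=lambda a: values[a])    # exploit
        reward = 1.0 if rng.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return counts, values

# Three hypothetical arms; over many steps the best one (0.7) should
# attract most of the pulls.
counts, values = epsilon_greedy_bandit([0.3, 0.5, 0.7])
```

Unlike a randomized controlled trial, which splits participants into fixed groups up front, a bandit adapts its allocation as evidence accumulates, which is part of why these methods are attractive when trials are impractical.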

Read on for day 11.
