Special Feature

All's not fair with AI

  • from Shaastra :: vol 02 issue 06 :: Nov - Dec 2023

With AI predictions increasingly impacting people's lives, developers are looking at biases in models, and working to eliminate stereotypes.

It became increasingly clear to venture capitalist Raji Baskaran as she surfed the net that Google preferred 'Hari' to 'Hamsa'. Baskaran had done a quick search to show Hamsa Balakrishnan's work profile to an investor interested in a start-up run by the Massachusetts Institute of Technology (MIT) Professor. But instead of Hamsa, the search engine invariably threw up the name of her brother, Hari, also a Professor at MIT.

This experience in the summer of 2019 in Portland, U.S., drew Baskaran's attention to a yawning gender gap in data scraped from the internet. So, in 2022, she set up Superbloom Studios to bridge these gaps in the digital world. The company is now working on a long-term, open-source project called Hidden Voices, which seeks to reduce gender biases in search algorithms by adding 10,000 women's biography drafts to a not-for-profit library such as Wikipedia.

Over 50% of Wikipedia users are women, but only 15% of its editors are women – and fewer than 20% of its biographies are of living women, points out Baskaran. Search algorithms develop a bias based on statistics to give "relevant" results, she adds. Better-represented groups (men in science, in this case) are likely to show up more in searches, and, consequently, likely to be quoted more, which increases their representation further. "It's a chicken-and-egg problem," Baskaran says.
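The self-reinforcing loop Baskaran describes can be sketched in a few lines of code. The toy simulation below is purely illustrative (it is not from Superbloom's work, and all numbers are hypothetical): a made-up ranker gives already-visible profiles disproportionately more exposure, so an initial 80/20 split in citations keeps widening.

def simulate_feedback(citations, rounds=10, new_per_round=100, position_bias=2.0):
    """Allocate new citations in proportion to visibility ** position_bias."""
    counts = dict(citations)
    for _ in range(rounds):
        # Position bias: exposure grows faster than linearly with existing
        # visibility, so the better-represented group captures most new citations.
        weights = {g: c ** position_bias for g, c in counts.items()}
        total = sum(weights.values())
        for group in counts:
            counts[group] += new_per_round * weights[group] / total
    return counts

start = {"well_represented": 80, "under_represented": 20}  # hypothetical starting split
end = simulate_feedback(start)
share = end["under_represented"] / sum(end.values())
print(f"under-represented share after 10 rounds: {share:.1%}")  # falls well below the initial 20%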

With Hidden Voices, she hopes to make the data available to artificial intelligence (AI) models more representative. "With the help of expert volunteers who point us to credible sources, we use ML (machine learning) models that can collate data to create Wikipedia-style biographies of underrepresented minorities, starting with women in STEM (Science, Technology, Engineering, and Mathematics)," she says.

The impact of human biases seeping into AI extends beyond search algorithms. Predictive ML models today are widely relied on to support sound judgements. Even before humans enter the decision-making loop, AI predicts a person's chances of, say, being shortlisted for a job, being eligible for a housing loan, or even developing a certain type of cancer.

With AI outcomes influencing decisions, 'fairness' is a key metric to be added to a model's parameters, along with accuracy, size and efficiency. To ascertain whether a model is fair, a person would need to understand how AI algorithms arrive at their outcomes.
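What reporting such a metric might look like is sketched below. This is a generic illustration, not CeRAI's or anyone else's method: it computes the "demographic parity difference" (the gap in positive-prediction rates between two groups), one of several common definitions of fairness, alongside plain accuracy, on made-up shortlisting predictions.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_difference(y_pred, groups):
    """Largest gap between groups in the rate of positive (e.g. 'shortlist') predictions."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gi in zip(y_pred, groups) if gi == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical shortlisting outcomes for eight applicants from two groups.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # who actually merited a shortlist
y_pred = [1, 0, 1, 1, 0, 0, 0, 0]   # what the model predicted
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap, rates = demographic_parity_difference(y_pred, groups)
print(f"accuracy: {accuracy(y_true, y_pred):.2f}")   # 0.88 -- looks like a good model
print(f"positive rate by group: {rates}")            # group A: 0.75, group B: 0.00
print(f"demographic parity difference: {gap:.2f}")   # 0.75 -- yet it never shortlists group B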

With this as part of its mission, the Centre for Responsible AI (CeRAI) was set up in Chennai this year. "Machine learning works because it is able to generalise by making some assumptions about what is similar and what is dissimilar," says B. Ravindran, head of CeRAI. Bias, in ML, is essentially classification: a way of learning about the world through experience.

"The fact that two people of the same age group, who work in the same place, and like the same author, might like the same new book, is not offensive. But when you say that two people of the same ethnicity will only like a certain kind of food, that becomes problematic," Ravindran explains. And in India, with its many ethnicities and religions, even the idea of what is problematic might change from State to State, district to district. "With the diversity in India, a lot of the bias is not codified — it is implicit. You know a stereotype when you see it, but there isn't a legal characterisation of what bias is." 
