The majority of us use artificial intelligence every day — without even realizing it. Like when Google predicts your search phrase or you issue a command to Siri or you scroll through ads and articles on your Facebook feed.
And that, says AI technologist Kriti Sharma, is dangerous.
“Despite the common public perception that algorithms aren’t biased like humans, in reality, they are learning racist and sexist behavior from existing data and the bias of their creators.
“AI is even reinforcing human stereotypes.”
This story is part of a series about the intersection of gender and language, "From 'Mx.' to 'hen': When 'masculine' and 'feminine' words aren't enough."
London-based Sharma, who is currently working with the Obama Foundation and was recently named on the "Forbes 30 under 30 Europe" list, points to popular voice assistants such as Siri and Alexa.

"They have all been given obedient, servile, female personalities. They are turning our lights on and off, ordering our shopping. Whereas for more high-powered tasks, such as making business decisions, AI is often given a male personality — take IBM’s Watson or Salesforce’s Einstein.”
As AI becomes more ingrained in our daily lives, Sharma is determined to ensure technology is not building a prejudiced future.
“A child growing up in an AI-powered world today could be learning to bark orders at a female voice assistant — I think we all would agree this is dangerous.”
One of the biggest issues, she says, is that the workforce in technology fields, and particularly in AI, is male-dominated.
In the US, only 5 percent of tech startups are owned by women.
“I strongly believe if we had more diverse technology teams, these issues would have been detected and acted upon much earlier — possibly they would never have even happened.”
Sharma cites an example from Boston University, where researchers found that an AI system trained on text from Google News learned to associate the word "programmer" with men and "homemaker" with women.
There are also economic ramifications of biased AI.
A 2015 study by Carnegie Mellon University researchers found that Google showed ads for high-paying jobs to men far more often than to women.
“The problem is, there isn’t a single worst offender,” Sharma says. “It’s endemic to the tech industry. The biggest challenge we’re facing is that we will end up creating more inequality in the fourth industrial revolution by perpetuating the bias that already exists, rather than use technology to solve issues of gender, race and age inequality.”
Sharma created the world’s first personal chatbot for business finance at British company Sage, where she is vice president of AI, and hired the company’s first “conversation designer” — a role specifically for analyzing voice tones and personalities of AI assistants.
When Sharma first started developing AI at Sage, she proposed a gender-neutral personality for the company’s new assistant — named Pegg.
“Pegg is proud of being a bot and does not pretend to be human,” she explains. “Initially, there was a lack of awareness within the company and outside world to stereotypes in AI but I found it very encouraging that I got a very welcoming response to my effort.”
Sage went on to publish "The Ethics of Code: Developing AI for Business with Five Core Principles," a set of guidelines for building ethical AI.
Another goal of Sharma’s work is to make AI more transparent to the public.
“As consumers, we are not informed of why the machine recommended a service or product to us. In the last 18 months, due to major sociopolitical events, such as the influence of AI-powered voter targeting and the spread of misinformation in the recent US elections, this is becoming a more recognized issue. It has also been brought to the forefront of public conversations by the awareness of phenomena like fake news.”
Although Sharma says it’s disappointing that there have been no industrywide policies or initiatives to confront the problem head-on, she’s continuing to call on CEOs and business boards to take action and develop inclusive, ethical AI.
From PRI's The World ©2017 PRI