Algorithmic Bias: A Case Study of Facebook Ads

Date and time: Tuesday, May 28, 2019

Measuring the discrimination that data-driven algorithms can cause, and finding ways to mitigate it, has become one of the most important recent lines of work in computer science. In this talk, Muhammad Ali, computer scientist and Ph.D. student at Northeastern University, will give a general introduction to the problem of algorithmic bias and present a recent case study of Facebook ads. By launching ads and observing who they reach, his research measures how, even when the advertiser specifies no targeting criteria at all, the platform's powerful machine learning algorithms skew an ad's delivery along gender and race lines on their own, including for ads in legally protected categories such as employment and housing. The results add to the growing argument that, alongside content creators, internet platforms share responsibility for the content their users see.
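As a rough illustration of the kind of measurement this line of research involves (not the speaker's actual pipeline), the short Python sketch below computes a simple delivery-skew statistic from per-ad delivery breakdowns; the ad names, field names, and counts are hypothetical and exist only for the example.

# Hypothetical per-ad delivery breakdowns: impressions by gender.
# Ad IDs and numbers are illustrative, not real measurements.
ads = {
    "job_ad_lumber":   {"men": 4200, "women": 600},
    "job_ad_nursing":  {"men": 500,  "women": 3900},
    "housing_ad_rent": {"men": 2100, "women": 2300},
}

def delivery_skew(breakdown):
    """Fraction of impressions delivered to women (0.5 = even delivery)."""
    total = breakdown["men"] + breakdown["women"]
    return breakdown["women"] / total if total else 0.5

for ad_id, breakdown in ads.items():
    print(f"{ad_id}: {delivery_skew(breakdown):.2f} of impressions went to women")

A statistic like this, compared across ads that were launched with identical (or empty) targeting criteria, is one simple way to show that any difference in who sees each ad comes from the platform's delivery algorithms rather than from the advertiser's choices.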

Muhammad Ali is a Ph.D. student at Northeastern University's Khoury College of Computer Sciences. His research revolves around the novel area of algorithmic auditing: measuring how large-scale, data-driven algorithms can exhibit biased or discriminatory behavior toward certain demographic groups. He works both to measure such behavior and to find ways to curtail it. He has an MS in computer science from Saarland University and experience working at the Max Planck Institute for Software Systems in Germany. When not working, he likes to nerd out over indie video games and South Asian food. He's on Twitter as @lukshmichowk.