"Are Face-Detection Cameras Racist?" by Adam Rose
Rose's article details how face-detection features in cameras and webcams often fail for people of color. For example, the Nikon Coolpix S630 digital camera repeatedly reported that Joz Wang, a Taiwanese-American strategy consultant, had blinked even though her eyes were open. The webcams in Hewlett-Packard's computers had ongoing problems detecting black people's faces and movements, an issue brought to light in a YouTube video called "HP computers are racist."
"New Algorithms Perpetuate Old Biases in Child Welfare Cases" by Elizabeth Brico
Elizabeth Brico -- a Cuban mental health blogger -- believes Florida's use of predictive analytics in its child welfare system determined her risk status and influenced her investigator's recommendation to remove her children from her custody. Brico and her children have suffered in the wake of their separation, and her situation is hardly an exception: she is one of many mothers of color judged unfairly by an algorithm, and by the humans trying to make decisions from the data it provides.
"The Algorithm That Helped Google Translate Become Sexist" by Parmy Olson
Typing "Asian girls" into Google brings up pornographic and highly sexualized results
After Noble brought up how typing "black girls" into Google brought back pornographic results, Google corrected the issue. The same can't be said for typing "Asian girls," but Noble begs a bigger question here. Why does typing "____ girls" into Google bring up porn even when the individual never typed porn into the search bar? Many people believe that the content that pops up on the first page of Google is the most popular/credible, but that doesn't explain this disturbing trend.
"How We Analyzed the COMPAS Recidivism Algorithm" by ProPublica
In 2016, ProPublica analyzed COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a computer program used in courthouses to help inform criminal sentencing. Black defendants who did not reoffend were nearly twice as likely as their white counterparts to be misclassified by the software as higher risk. ProPublica found numerous other instances of bias within the program as well.
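ProPublica's "misclassified as higher risk" claim is about false positive rates computed separately by race. A minimal sketch of that calculation, assuming a hypothetical CSV with illustrative column names (the real analysis used ProPublica's published Broward County dataset, which has a different schema):

```python
import pandas as pd

# Hypothetical input: one row per defendant, with the COMPAS risk flag
# and the observed two-year recidivism outcome. Column names here are
# illustrative, not ProPublica's actual schema.
df = pd.read_csv("compas_scores.csv")

# False positive rate: among defendants who did NOT reoffend,
# what fraction were flagged as high risk?
no_reoffense = df[df["reoffended"] == 0]
fpr_by_race = no_reoffense.groupby("race")["high_risk"].mean()

print(fpr_by_race)
# ProPublica reported roughly 45% for black defendants vs. 23% for
# white defendants -- the "twice as likely to be misclassified" finding.
```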
"A beauty contest was judged by AI and the robots didn't like dark skin " by Sam Levin
Beauty.AI was the first international beauty contest judged by "machines." The algorithm was supposed to rely on objective factors such as facial symmetry and wrinkles to identify winners. Approximately 6,000 people from more than 100 countries submitted photos, including large groups from India and Africa, yet of the 44 winners, nearly all were white, a handful were Asian, and only one had dark skin.
"Education with a Debt Sentence: For-Profit Colleges as American Dream Crushers and Factories of Debt" by Hannah Appel and Astra Taylor
For-profit colleges enroll the highest numbers of African-American students, and not because of a push for more diversity in higher education. Recruiters at these schools are trained to prey on "isolated, impatient" individuals with low self-esteem, who have few supporters in their lives and a grim outlook on the future. Some research shows that degrees from these schools are worth about as much as a high school diploma in the workplace, leaving already underprivileged groups with even more debt. The weapon that pulls these students in? Google's targeted advertising. The University of Phoenix is one of Google's largest advertisers, spending up to $200,000 a day on ads while spending as little as $700 per student per year.
"Artificial Intelligence Is Now Used to Predict Crime. But Is It Biased?" by Randy Rieland
PredPol, a predictive policing software package, is designed to help police stop crimes before they occur by analyzing the locations of previous arrests. It is used by more than 60 police forces in the US. However, the American Civil Liberties Union (ACLU), the Brennan Center for Justice, and other civil rights organizations have raised concerns about bias. Because the software is trained on historical arrest data, police decisions, rather than actual reported crimes, can inform the algorithm, creating a feedback loop. That loop could lead to certain neighborhoods being inappropriately labeled "bad" and others "good," and police may be more inclined to use force in neighborhoods they believe are "bad."
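The feedback-loop concern is easy to see in a toy simulation. This is not PredPol's actual algorithm (which is proprietary); it is a deliberately simplified sketch of the mechanism the critics describe: patrols go where past arrests were, and arrests can only be recorded where patrols go.

```python
import random

random.seed(0)

# Toy model: two neighborhoods with IDENTICAL true crime rates.
# Arrests only happen where an officer is patrolling, and the patrol
# assignment follows historical arrest counts -- the feedback loop.
TRUE_CRIME_RATE = 0.3          # same in both neighborhoods
arrests = [1, 1]               # start with equal (tiny) histories

for day in range(1000):
    # Send the single patrol to the neighborhood with more past arrests.
    patrolled = 0 if arrests[0] >= arrests[1] else 1
    # Crime occurs at the same rate everywhere, but it is only
    # observed (recorded as an arrest) where the patrol is.
    if random.random() < TRUE_CRIME_RATE:
        arrests[patrolled] += 1

print(arrests)
# One neighborhood accumulates nearly all the arrests despite identical
# crime rates: the data ends up reflecting patrol decisions, not crime.
```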
"How algorithms rule our working lives" by Cathy O'Neil
O'Neil's article follows Kyle Behm, a Vanderbilt graduate. A year and a half after receiving treatment for bipolar disorder, Behm applied for a job at a Kroger supermarket. He had a friend who could put in a good word for him and seemingly had a strong chance of landing the job, yet he was never called back for an interview. His friend explained that a personality test -- created by Kronos, a workforce management company based outside Boston -- had screened him out. Behm recalled that the test resembled the "five factor model" tests he took during his treatment for bipolar disorder. He applied to other minimum-wage jobs and ran into the same wall, and eventually filed a class-action lawsuit against seven companies, including Home Depot and Walgreens, alleging that the use of the personality test was unlawful.
"Researchers find gender and racial bias in Amazon’s facial recognition software, widely used by cops" by Nicole Karlis
"Questioning the Fairness of Targeting Ads Online" by Carnegie Mellon University