{"id":77271,"date":"2024-10-31T11:06:36","date_gmt":"2024-10-31T18:06:36","guid":{"rendered":"https:\/\/blog.mozilla.org\/?p=77271"},"modified":"2024-10-31T11:07:57","modified_gmt":"2024-10-31T18:07:57","slug":"ai-bias-gemma-galdon-clavell","status":"publish","type":"post","link":"https:\/\/blog.mozilla.org\/en\/mozilla\/ai\/ai-bias-gemma-galdon-clavell\/","title":{"rendered":"The AI problem we can\u2019t ignore"},"content":{"rendered":"\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"How Gemma Galdon-Clavell beat the odds to lead AI reform\" width=\"640\" height=\"360\" src=\"https:\/\/www.youtube.com\/embed\/b-BxQe4RZc0?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p>In August 2020, as the pandemic confined people to their homes, the <a href=\"https:\/\/www.bbc.com\/news\/explainers-53807730\" target=\"_blank\" rel=\"noreferrer noopener\">U.K. canceled A-level exams<\/a> and turned to an algorithm to calculate grades, key for university admissions. Based on historical data that reflected the resource advantages of private schools, the algorithm disproportionately downgraded state students. Those who attended private schools, meanwhile, received inflated grades. News of the results set off widespread backlash. The <a href=\"https:\/\/www.cnn.com\/2020\/08\/23\/tech\/algorithms-bias-inequality-intl-gbr\/index.html\" target=\"_blank\" rel=\"noreferrer noopener\">system reinforced social inequities<\/a>, critics said.<\/p>\n\n\n\n<p>This isn\u2019t just a one-off mistake \u2013 it\u2019s a sign of AI bias creeping into our lives, according to Gemma Galdon-Clavell, a tech policy expert and one of Mozilla\u2019s 2025 <a href=\"https:\/\/rise25.mozilla.org\/\">Rise25<\/a> honorees. Whether it\u2019s deciding who gets into college or a job, who qualifies for a loan, or how health care is distributed, bias in AI can set back efforts toward a more equitable society.<\/p>\n\n\n\n<p>In an opinion piece for Context by the Thomson Reuters Foundation, Gemma asks us to consider the consequences of not addressing this issue. She argues that bias and fairness are the biggest yet often overlooked threats of AI. <strong>You can<\/strong> <strong><a href=\"https:\/\/www.context.news\/ai\/opinion\/ais-dark-secret-its-rolling-back-progress-on-equality\" target=\"_blank\" rel=\"noreferrer noopener\">read her essay here<\/a>.\u00a0<\/strong><\/p>\n\n\n\n<p>We chatted with Gemma about her piece below.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Can you give examples of how AI is already affecting us?<\/strong><\/h2>\n\n\n\n<p>AI is involved in nearly everything \u2014 whether you\u2019re applying for a job, seeing a doctor, or applying for housing or benefits. Your resume might be screened by an AI, your wait time at the hospital could be determined by an AI triage system, and decisions about loans or mortgages are often assisted by AI. It\u2019s woven into so many aspects of decision-making, but we don\u2019t always see it.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Why is bias in AI so problematic?<\/strong><\/h2>\n\n\n\n<p>AI systems look for patterns and then replicate them. 
These patterns are based on majority data, which means that minorities \u2014 people who don\u2019t fit the majority patterns \u2014 are often disadvantaged. Without specific measures built into AI systems to address this, they will inevitably reinforce existing biases. Bias is probably the most dangerous technical challenge in AI, and it\u2019s not being tackled head-on.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>How can we address these issues?<\/strong><\/h2>\n\n\n\n<p>At <a href=\"https:\/\/eticas.ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">Eticas<\/a>, we build software to identify outliers \u2014 people who don\u2019t fit into majority patterns. We assess whether these outliers are relevant and make sure they aren\u2019t excluded from positive outcomes. We also run a nonprofit that helps communities affected by biased AI systems. If a community feels they\u2019ve been negatively impacted by an AI system, we work with them to reverse-engineer it, helping them understand how it works and giving them the tools to advocate for fairer systems.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What can someone do if an AI system affects them, but they don\u2019t fully understand how it works?<\/strong><\/h2>\n\n\n\n<p>Unfortunately, not much right now. Often, people don\u2019t even know an AI system made a decision about their lives. And there aren\u2019t many mechanisms in place for contesting those decisions. It\u2019s different from buying a faulty product, where you have recourse. If AI makes a decision you don\u2019t agree with, there\u2019s very little you can do. That\u2019s one of the biggest challenges we need to address \u2014 creating systems of accountability for when AI makes mistakes.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>You\u2019ve highlighted the challenges. What gives you hope about the future of AI?<\/strong><\/h2>\n\n\n\n<p>The progress of our work on AI auditing! For years now we&#8217;ve been showing how there is an alternative AI future, one where AI products are built with trust and safety at heart, where AI audits are seen as proof of responsibility and accountability \u2014 and ultimately, safety. I often mention how my work is to build the seatbelts of AI, the pieces that make innovation safer and better. A world where we find non-audited AI as unthinkable as cars without seatbelts or brakes, that&#8217;s an AI future worth fighting for.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In August 2020, as the pandemic confined people to their homes, the U.K. canceled A-level exams and turned to an algorithm to calculate grades, key for university admissions. Based on historical data that reflected the resource advantages of private schools, the algorithm disproportionately downgraded state students. Those who attended private schools, meanwhile, received inflated grades. 