GRACE: Global Review of AI Community Ethics
https://ojs.stanford.edu/ojs/index.php/grace
GRACE: Global Review of AI Community Ethics is a new peer-reviewed, international journal at Stanford University. An open-access journal indexed in Google Scholar, GRACE offers a unique intellectual forum for AI Ethics practitioners to share their work.

OUR SECOND ISSUE IS NOW LIVE

Vol. 2 No. 1 (2024): AI in Education, Culture, Finance, and War

Cover image: blue and orange picture of a student protest with the word "Justice" in white letters.

Published: 2024-01-22

STANFORD STUDENTS FOR ISRAELI TECH
https://ojs.stanford.edu/ojs/index.php/grace/article/view/3231
<p><span style="font-weight: 400;">This paper affirms Israeli technology and its potential to improve the quality of life for both Israelis and Palestinians. It argues the choice lies in the Palestinians’ hands —</span><span style="font-weight: 400;"> </span><span style="font-weight: 400;">renounce Hamas, renounce terror, acknowledge Israel’s right to exist, and we can move toward peace. Focusing on the Israel-Hamas war, we examine efforts to delegitimize Israel, its AI-driven tech, and its right to self-defense. Anti-Zionists and tech critics decry Israel as a “racist,” “settler colonialist” project armed with tech that enforces its “apartheid.” Yet, they promote their own ethnic “Arab Palestine” “from the river to the sea,” as though anti-racist justice consists in statehood for Arab Palestinians but not any other inhabitants of the land. Through a close reading of anti-Zionist work and the key influences of Palestinian scholars Edward W. Said and Elia Zureik, we demonstrate that the settler colonialism thesis, which claims to be an “objective, historical, secular” lens, is flawed and antisemitic. This is a long read. We hope to do justice to Said and Zureik’s work and seriously address criticisms of Israel, its technology, and Palestinian loss of life, before offering our positive views moving forward. Contrary to the anti-Zionist critics and the </span><span style="font-weight: 400;">Boycott, Divestment, and Sanctions (BDS) movement, we believe that Palestinian suffering can end without delegitimizing Israel. Most importantly, normalization of Palestinian and Israeli relations can provide Palestinians self-determination, freedom, and equality. By normalization, we mean that neither side will attain the all-or-nothing demands of groups like BDS or far right Israeli organizations, and that instead both will gain acceptance and equality. After October 7, 2023, such a coexistence appears increasingly fraught, but remains possible. </span><span style="font-weight: 400;">We end with our view of Israeli technology as an important component for an Israeli and Palestinian future.</span></p> <p><br /><br /></p>
Frameworks STANFORD STUDENTS FOR ISRAELI TECH
Copyright (c) 2024 STUDENTS FOR ISRAELI TECH Starkman
2024-01-22

STANFORD SIT-IN TO STOP GENOCIDE
https://ojs.stanford.edu/ojs/index.php/grace/article/view/3230
The Stanford Sit-In to Stop Genocide (https://www.instagram.com/sit_in_to_stop_genocide/?hl=en) is a collective of Stanford University undergraduates, graduate students, alumni, postdocs, faculty, and community members standing in solidarity with Palestinians against Israeli occupation, apartheid, and genocide. We sustained the longest demonstration in Stanford history, a 120-day overnight protest in White Plaza calling on the university to divest from Israel. Our paper begins with an overview of Israeli settler colonialism, occupation, apartheid, and the current genocide in Gaza. A crucial component of Israel's occupation is the US-backed Israeli military-industrial-technology complex. We criticize the unethical use of AI biometric surveillance, brutal AI-equipped weapons systems, and cyberwarfare against human rights activists, as well as censorship of Palestinian rights activists. Given the deep complicity of tech industries in the genocide of Palestinians, Stanford's foundational role in Silicon Valley, and the historical successes of worker- and student-led divestment movements, we argue that tech workers and Stanford community members have a crucial role to play in ending our institutions' support for genocide. We support the global divestment movement, draw parallels to the anti-apartheid movement in South Africa, and advocate for economic and political pressures as mechanisms for securing Palestinian liberation. We call on readers to join the movement for Stanford to divest from companies supporting Israeli apartheid and push for an ethics of engineering that refuses complicity in all state and settler colonial violence.
Frameworks STANFORD SIT-IN
Copyright (c) 2024 STANFORD SIT-IN
2024-01-22

Regulating FinTech: The Path to Actual Financial Inclusion in the United States
https://ojs.stanford.edu/ojs/index.php/grace/article/view/3228
<p><span style="font-weight: 400;">Over the past decade, financial technology (FinTech) has redefined the financial sector. FinTech refers to algorithms, software, applications, and other technologies that work to improve and automate financial services. This paper investigates FinTech’s role in financial inclusion in the United States with data and qualitative analysis from government research and peer-reviewed reports. It concludes that despite FinTech’s growth and success over the past decade, the industry has made little impact on financial inclusion for Black, Hispanic, and lower-class Americans. Some FinTech products trap these groups into cycles of debt, which further extends generational cycles of wealth inequality. The first section of this paper establishes the landscape of wealth inequality in the United States and analyzes the FinTech revolution. The following section examines how lending and algorithmic bias are reflected in FinTech services. Most notably, this paper concludes with solutions of how FinTech companies and the government can work to correct this inherent discrimination. FinTech companies can partner with nonprofits and introduce socially conscious practices to expand financial resources. The government must introduce FinTech-specific legislation and enforce current laws to protect underserved Americans and fix the inequalities that they helped create. </span></p>
Social Impact Papers Hayden Thompson
Copyright (c) 2024 Hayden Thompson
2024-01-22

Preparing Ghana for the Artificial Intelligence Ghanaians Want
https://ojs.stanford.edu/ojs/index.php/grace/article/view/3227
How much and which AI does Ghana need when its challenges remain largely infrastructural? What kind of AI does Ghana need in healthcare, agriculture, and education when there are still not enough hospitals, roads, and schools? Many of the innovations from tech corporations in the Global North envision products of no immediate use to Africans. What would we do with generative models in classrooms when we still need more classrooms and teachers? Why would we need autonomous vehicles when many of our roads remain unpaved? These questions show that too often the Global North wants to test its technologies on our populations and gather our data, while offering these in the guise of philanthropy. What we need is investment in infrastructure, which has historically been uneven, from the Chinese with their Belt and Road Initiative to Americans who have claimed they want to provide the "last mile" of Internet connectivity when in fact the "first mile" remains unreliable. This paper interviews Ghanaians who work in tech and considers the kinds of infrastructural investments that help Ghana and enable our participation in building the algorithms we want that will serve Ghanaians.
Social Impact Papers George Birikorang, Bernard Birikorang
Copyright (c) 2024 George Birikorang
2024-01-22

AI & Copyright: A Case Study of the Music Industry
https://ojs.stanford.edu/ojs/index.php/grace/article/view/3226
Recently, artists of all types have been questioning what generative AI means for their livelihoods. Historically, progress in technology has led to fear of displacement among artists: the advent of photography was felt as a great threat by portrait painters. However, despite historical precedents, a novel question emerges in the way that generative AI systems are trained. Namely, the training of these systems requires models to be fed incomprehensibly large amounts of data, which results in copyrighted materials being used. When models are operated by commercial agents, we must ask ourselves: what should the rights be of the human actors who have produced creative work which is used, without their permission or credit, to train AI systems deployed for the financial gain of others? Answering these questions will help regulators to create better policies and lead technologists to design more human-centered technology. Already, many legal scholars, technologists, and corporate lawyers have offered opinions on this topic. However, missing from this conversation are the voices of actual creative workers themselves. In the following paper, I develop a set of principles outlining what the rights of artists should be with respect to the use of their work in the training and deployment of generative AI systems. Namely, I examine the music industry to understand artists' perspectives in order to arrive at principles of (a) increased inter-stakeholder communication, (b) dataset transparency requirements, (c) the ability for an artist to opt out of a training dataset, and (d) fair application of "fair use" law.
Social Impact Papers Lila Shroff
Copyright (c) 2024 Lila Shroff
2024-01-22

Technology's Dual Role in Language Marginalization and Revitalization
https://ojs.stanford.edu/ojs/index.php/grace/article/view/3225
This article provides a framework for approaching Indigenous language research and the development of digital language tools and resources. According to an estimate by UNESCO, 43% of languages used in the world are endangered, many being forced out because of settler linguistic colonialism. More recently, there has been a language shift toward majority languages online. Past scholars who have discussed technology's impact on minority languages have focused on whether these tools have supported language relearning or pushed a language shift. But this often ignores the community perspective and incentives in language research and the building of these tools. I interview Wesley Leonard, a citizen of the Miami Tribe of Oklahoma and a leading scholar in the discussion of Indigenous data sovereignty and language reclamation, to construct a framework for digital minority language research and development. The key principles for this research are data usage consent, data accessibility and removability, and constant ongoing communication with the language community about development goals. I then apply this framework to a case study of Google Woolaroo and Kupu, two visual language translation apps for minority language communities. While Woolaroo follows many of these principles, Kupu has proven itself to be a more effective tool because it was designed in tandem with the Māori community for classroom use, which demonstrates the importance of community-driven development in building effective language resources.
Social Impact Papers Thomas Yim
Copyright (c) 2024 Thomas Yim
2024-01-22

Demystify ChatGPT: Anthropomorphism around generative AI
https://ojs.stanford.edu/ojs/index.php/grace/article/view/3222
<p><span style="font-weight: 400;">The recent release of Large Language Models GPT, DALLE, and Bard undoubtedly marks the advent of the era of generative artificial intelligence. Different from traditional AI systems designed specifically for specific tasks, generative AIs are able to handle cross-context, general-purpose tasks. The fact that they can better mimic humankind leads to a heightened propensity of anthropomorphism around it. Anthropomorphism refers to the attribution of human-like qualities and intentions to AI systems. However, whether it is justified to compare AI systems to human intelligence in the case of generative AI has rarely been discussed in the current literature. I 1) identify the differences between generative AI and traditional AI from a technological perspective, 2) take a conceptual analysis by drawing on Chomsky’s theory of “CALU” to illustrate why generative AI is still not comparable to human intelligence, 3) conduct both quantitative and qualitative analysis to study the current manifestation of anthropomorphism around generative AI in the public discourse. My argument is that although perceived as an AI system that can perform cross-context, general-purpose tasks, generative AI is still not comparable to human beings as its “computational” nature is fundamentally different from the “creative” nature of human languages. Thus, the anthropomorphism around generative AI is not justified, leading to false expectations of AI systems and overblown fears towards them. The rhetorics around generative AI are of great importance because they shape how the public perceives AI. We need to “demystify” the AI systems such that the public representations of generative AI are genuine, complete, and authentic.</span></p>
Social Impact Papers
Keywords: AI, anthropomorphism, generative AI
Junyi (Joey) Ji
Copyright (c) 2024 Junyi (Joey) Ji
2024-01-22

Queer Bias in Natural Language Processing: Towards More Expansive Frameworks of Gender and Sexuality in NLP Bias Research
https://ojs.stanford.edu/ojs/index.php/grace/article/view/3221
<p><span style="font-weight: 400;">Research in the growing field of NLP bias has made significant progress related to characteristics such as race and (binary) gender. However, bias with respect to queer communities and experiences has been critically underexplored. In this paper, I review sources of bias and describe the unique risks that biased NLP systems pose for queer individuals. I break down the social and computational factors which act as barriers to research in queer bias and discuss the importance of continued involvement with queer stakeholders within the research process. I then review common models of gender and sexuality in NLP bias research and argue how cis- and heteronormative assumptions as the standard in NLP academic frontiers continues to perpetuate research which excludes queer experiences. Finally, I review emerging methods and successes in evaluating queer bias in NLP systems, setting out recommendations on how to expand from these works and pointing towards a framework for future work in queer bias.</span></p>
Social Impact Papers Amy (Azure) Zhou
Copyright (c) 2024 Amy (Azure) Zhou
2024-01-22

Editors' Introduction
https://ojs.stanford.edu/ojs/index.php/grace/article/view/3220
GRACE, Global Review of AI Community Ethics (https://ojs.stanford.edu/ojs/index.php/grace), a Stanford student-run journal mentored by Dr. Harriett Jernigan (https://profiles.stanford.edu/harriett-jernigan) in the Program in Writing and Rhetoric (https://pwr.stanford.edu/) and the Notation in Cultural Rhetorics (https://pwrnotations.stanford.edu/about/about-ncr), provides a unique venue for young scholars, undergraduates, and early career researchers writing about justice and tech from global perspectives. This year, for volume two, GRACE received more than 3,000 papers from young scholars. Many submissions addressed problems related to generative models, but most fell outside the range of global frameworks and social impacts, which is the focus of GRACE. Such interest confirmed for us the need for more venues for early career graduate students, college students, and high school students. We selected these seven exceptional social impact essays, all reflecting on generative models.
Editors' Introduction Nour Mary Aissaoui, Wilhelmina Onyothi Nekoto, Muhammad Khattak
Copyright (c) 2024 Nour Aissaoui, Wilhelmina Onyothi Nekoto, Muhammad Khattak
2024-01-22

Interview with Misgina Gebretsadik
https://ojs.stanford.edu/ojs/index.php/grace/article/view/3219
Interview with Misgina Gebretsadik
Computer Science student, Mekelle Institute of Technology

GRACE: Tell us about your life and how you got interested in computer science.
Research Notes and Commentary Misgina Gebretsadik
Copyright (c) 2024 Misgina Gebretsadik
2024-01-22

Interview with Bonaventure Dossou
https://ojs.stanford.edu/ojs/index.php/grace/article/view/3218
Interview with Bonaventure Dossou
Lelapa.ai, lead researcher; Computer Science Ph.D. student at McGill University

GRACE: Tell us about Lelapa.ai (https://lelapa.ai/) and your experiences at the Deep Learning Indaba (https://deeplearningindaba.com/2023/). What did you present?
Research Notes and Commentary Bonaventure Dossou
Copyright (c) 2024 Bonaventure Dossou
2024-01-22

Interview with Dr. Paul Azunre
https://ojs.stanford.edu/ojs/index.php/grace/article/view/3217
<p><span style="font-weight: 400;">Interview with Dr. Paul Azunre</span></p> <p><span style="font-weight: 400;">Founder</span><a href="https://twitter.com/GhanaNLP"><span style="font-weight: 400;"> </span><span style="font-weight: 400;">@GhanaNLP</span></a></p>
Research Notes and Commentary Paul Azunre
Copyright (c) 2024 Paul Azunre
2024-01-22

AI Cold War with China?
https://ojs.stanford.edu/ojs/index.php/grace/article/view/3216
<p><span style="font-weight: 400;">Deeply concerned about innovation and national security, some Silicon Valley observers, like Eric Schmidt and others, view competition with China as a far greater threat to the United States than the many public harms an unregulated AI poses. They argue we must devote ourselves to winning the U.S.-China competition at any cost and worry that ethical inquiry is a distraction from this political reality. But the choice is not merely between political realism and normative reasoning. In the United States, which enjoys decentralized pluralist discussions as opposed to China’s centralized government mandates, one can hear both political realist and normative ethical positions. Americans of widely disparate perspectives debate how to create high-quality datasets that address misinformation, bias, harm, and labor issues while working towards developing models that better serve the diverse publics they impact. AI Ethics in the United States is not merely one group mounting a naive political distraction, but rather many competing voices from industry, government, academia, NGOs, and social activism (Bender et al., 2021; Weinstein et al., 2021; Birhane, 2021). As this paper considers the diverse viewpoints on AI ethics, it argues for an American advantage over China because, in our context, even while clearly admitting all of America’s historical, political, social, and economic flaws, inclusion and pluralism are possible. Amid all this noisy American discussion, it remains possible to adopt a potentially Rawlsian perspective, where one could argue that the American conversation on AI ethics need not cement itself into any one ideology. Through a Rawlsian “overlapping consensus” on the many possibilities of algorithmic harm, Americans might share their “considerable differences in citizens’ conceptions of justice” and agree on productive paths forward despite their differing ideologies (Rawls, 2020). </span></p>
Social Impact Papers Jonathan Xue, Lifu Guo
Copyright (c) 2024 Jonathan Xue
2024-01-22

Computer Science as a Black Vocation
https://ojs.stanford.edu/ojs/index.php/grace/article/view/3214
<p><span style="font-weight: 400;">What sort of education will best uplift Black people in America? This longstanding question continues in American universities today, beginning with the famous debate between the first president of the Tuskegee Institute, African American Educator, Booker T. Washington (1856 –1915) and Harvard-educated sociologist and philosopher, W.E.B. Du Bois (1868 – 1963). While Washington’s (1901) </span><em><span style="font-weight: 400;">Up From Slavery</span></em><span style="font-weight: 400;"> drew on abolitionist Frederick Douglass’ 1853 essay “Learn Trades or Starve,” Du Bois argued in his seminal </span><em><span style="font-weight: 400;">The Souls of Black Folk</span></em><span style="font-weight: 400;"> (1903) that a liberal education would best build cultural competence to enable Black people to enter positions of power in a white-dominated society. In fact, both men understood how mastering technology could offer a key to financial well-being and autonomy for Black people, but they famously disagreed on what kind of education would secure equity and empowerment. </span></p> <p><span style="font-weight: 400;">This debate continues among Black students in higher education today: Is computer science merely a technical vocational skill? Or does it have the potential to offer a uniquely Black vocation that imparts wealth and enables us to build cultural competency as well? Do we even want cultural competency, as Du Bois defined it, or should we follow others who have developed newer, more meaningful models of knowledge and Black belonging? Our paper examines the persistence of the Booker T. Washington vs. W.E.B. DuBois debate and builds on Duke Professor Alicia Nicki Washington’s redefinition of cultural competency to argue that CS “vocational” training is never separated from cultural education. We also draw on an important new study from von Vacano et al. (2022) that provides methods for more inclusive STEM education for students from historically marginalized groups. Through a critical review of the debate and empirical survey of 135 Black computer science graduates at different American institutions of higher education, we demonstrate how computer science can become a better vocation for Black Americans. Our paper reconsiders the importance of vocational training in higher education and demonstrates that the many cultural roadblocks Washington and Du Bois identified persist, while both the meaning of vocational and higher education have transformed. Where the Washington-Dubois debate made “vocation” sound antithetical to higher education, we believe it plays an important role in university and college education, and that for us “vocation,” especially with respect to Black empowerment, is no mere acclimation to industry, but rather a calling to serve our community. </span></p> <p><span style="font-weight: 400;"> </span></p> <p> </p>
Frameworks Shawn Filer, Christian Davis
Copyright (c) 2024 Christian Davis
2024-01-22

Editors' Introduction
https://ojs.stanford.edu/ojs/index.php/grace/article/view/2596
Editors' Introduction GRACE Editors
Copyright (c) 2023 Dr. Harriett Jernigan
2023-02-17

Centering Africans in the Digital Scramble for Africa
https://ojs.stanford.edu/ojs/index.php/grace/article/view/2594
Once again, western superpowers are scrambling to conquer Africa's resources. Unlike in previous centuries, however, the resource is data. In their zeal to improve the accuracy of machine learning and AI algorithms, western Big Tech giants have resorted to expanding their reach to the dataset-rich African continent, where inadequate or non-existent data protection laws fail to protect local populations. Many African nations find themselves unprepared for the onslaught because pressing infrastructural issues have left them little capacity to prepare for this new form of colonization. Until now, data issues were considered "soft" or western problems of less immediate concern. In fact, Africans were not wrong in this focus. Infrastructure and education are indeed the main concerns, but simultaneously western tech companies are conducting a new digital colonialism, bringing in algorithmic technologies that cause more harm than the benefits they claim to deliver. Some African expatriates like Dr. Timnit Gebru, Dr. Abeba Birhane, Dr. Rediet Abebe, and others offer trenchant analyses of the current situation of digital colonization. This paper adopts their criticisms to argue that African nations must protect themselves against colonization and instead choose informed, publicly debated, African-generated initiatives that advance African education and digital infrastructure. I call for a strategy that centers Africa: one that educates Africans in their schools as well as all public institutions, so that when tech colonizers come offering supposedly philanthropic benefits, Africans are prepared to negotiate their interests and shape their own digital communities as they wish.
Research Notes and Commentary Wayne Chinganga
Copyright (c) 2023 Wayne Chinganga
2023-02-17

Algorithmic Palestine
https://ojs.stanford.edu/ojs/index.php/grace/article/view/2593
Algorithmic technology rarely serves Palestine, but it can, if Palestinians have sovereignty over their data, language, models, land, and lives. For the last 75 years, since the creation of the State of Israel and the Nakba, Palestine has lacked sovereignty over all its basic living conditions. Currently, most algorithmic technology serves Israel's occupation and surveillance of every sphere of Palestinian existence. Our paper first describes the current algorithmic conditions imposed on Palestine, showing how Silicon Valley Big Tech firms collude with the occupation even as they develop programs for Palestinians that claim technology itself can lead to liberation. Rejecting such tech solutionism, we show the promising work Palestinians currently produce and delineate next steps for a free and thriving Palestine that include an intentional use of technology on terms Palestinians set themselves. We believe education and technology are essential to this end, and with a clear understanding of the many potential harms of algorithms, we propose that Palestinians design and maintain their own to serve their communities. Will algorithms free Palestine? No. But they can help build a free Palestine, which could include the right of return and/or full sovereignty over our contiguous lands, free of Israel's divisions, surveillance, and administration.
Social Impact Papers Lara Hafez, Maryam Khalil, Ronnie Hafez
Copyright (c) 2023 Lara Hafez, Maryam Khalil, Ronnie Hafez
2023-02-16

The Algorithmic Bias and Misrepresentation of Mixed Race Identities
https://ojs.stanford.edu/ojs/index.php/grace/article/view/2592
Artificial intelligence (AI) is a rapidly advancing technology that has prompted breakthroughs in many fields, from DeepMind's AlphaFold, which solved the seemingly insurmountable 50-year-old protein folding problem, to OpenAI's powerful GPT-3 language model, which used the internet to learn how to generate natural language like a human. However, this seemingly unbounded progress is intimately intertwined with a plethora of ethical concerns about racial bias and discrimination.
Social Impact Papers Sergio Charles
Copyright (c) 2023 Sergio Charles
2023-02-16

Reimagining the Data Subject in GDPR
https://ojs.stanford.edu/ojs/index.php/grace/article/view/2591
Scholarship centering marginalized groups has long questioned the universality of a "default" person. In her landmark text Justice, Gender, and the Family, feminist philosopher Susan Moller Okin argues that "almost all current theories continue to assume that the 'individual' who is the basic subject of their theories is the male head of a fairly traditional household."[1] Professor Donna Haraway's framework of the "view from nowhere" shows that a supposedly universal perspective, which she calls the "God trick," can actually shield a "very specific position (male, white, heterosexual, human)."[2] In the words of bestselling author Caroline Criado Perez, "this reality is inescapable for anyone whose identity does not go without saying, for anyone whose needs and perspective are routinely forgotten. For anyone who is used to jarring up against a world that has not been designed around them and their needs."[3] Political philosopher Iris Marion Young writes that "[t]he privileged groups lose their particularity; in assuming the position of the scientific subject they become disembodied, transcending particularity and materiality, agents of a universal view from nowhere. The oppressed groups, on the other hand, are locked in their objectified bodies, blind, dumb, and passive."[4] Without explicitly mentioning race, gender, or any other aspect of identity, an abstract conception of subjecthood runs the risk of insidiously adopting the identity at the top of the societal hierarchy. When a certain level of abstraction is necessary, is there a way to universalize more inclusively, to affirmatively elevate the worldviews of those traditionally left out of the narrative?
Social Impact Papers Ananya Karthik
Copyright (c) 2023 Ananya Karthik
2023-02-16

Review: Data Conscience: Algorithmic Siege on our Humanity
https://ojs.stanford.edu/ojs/index.php/grace/article/view/2588
Among the increasing number of AI ethics conversations, leading computer scientist and data equity strategist Dr. Brandeis Hill Marshall's important new book, Data Conscience: Algorithmic Siege on Our Humanity, offers a unique perspective: actionable, how-to strategies for engineers and AI ethicists to mitigate AI harms. Unlike many ethics conversations in tech, which tend toward either heavy theory or technical jargon, Marshall addresses her readers in an accessible, welcoming style, articulating both the problems of social impact and the pitfalls of building algorithms. In a clear, optimistic tone, she motivates her readers to listen and learn more. Most importantly, she tells them they belong. Non-technical beginners will find it easy to join the discussion, more advanced audiences will rethink their assumptions about the potential of algorithms, and everyone will become more aware of the necessity of including marginalized groups in the end-to-end algorithmic development process.
Reviews Alyssa Jones, Alexis Mack
Copyright (c) 2023 Alyssa Jones, Alexis Mack
2023-02-14

Review: Viral Justice: How We Grow the World We Want
https://ojs.stanford.edu/ojs/index.php/grace/article/view/2587
Viral Justice is a lexicon of racial injustice in the United States, interweaving historical and contemporary case studies, academic research, and autobiographical testimony. Throughout seven chapters, Princeton Professor of African American Studies and Global Health Ruha Benjamin exposes the mutually reinforcing mechanisms of oppression upholding white supremacy and urges readers to question the narratives that feed them. We enter Benjamin's story through the front door of her childhood home. The White House is the name of her grandmother's small, weathered residence in South Central Los Angeles, situating the reader in the overpoliced, politicized existence of a Black family. It is through this autobiographical lens that Ruha Benjamin introduces her grandmother's abolitionist affirmations, which developed in response to the incessant racism, exclusion, and neglect of the Black community and precipitated her own political awakening.
Reviews
Keywords: Book review, Viral Justice, Ruha Benjamin
Sayo Lyra Stefflbauer
Copyright (c) 2023 Sayo Lyra Stefflbauer
2023-02-14

Decolonizing NLP for "Low-resource Languages"
https://ojs.stanford.edu/ojs/index.php/grace/article/view/2584
<p><span style="font-weight: 400;">Today African languages are spoken by more than a billion people, yet in the world of machine translation and </span><span style="font-weight: 400;">natural language processing (NLP),</span><span style="font-weight: 400;"> these are considered </span><span style="font-weight: 400;">“low-resource languages” (LRLs) because they lack the same level of data, linguistic resources, computerization, and researcher expertise as “high resource languages” such as French and English (Cieri, 2016). The </span><span style="font-weight: 400;">reasons African languages remain still “low resource,” however, extend far beyond issues of data availability and instead reflect marginalization in a global society dominated by Western technology (Nekoto et al., 2020). Indeed, of the 7000 languages currently in use worldwide, over 2000 of these are African languages, yet machine translation focuses on a mere 20 global languages (Joshi, et al. 2020). </span><span style="font-weight: 400;">As Africans build data sets for their languages, they continue to struggle to gain agency over their own data and stories (Abebe, et al., 2021). </span><span style="font-weight: 400;">Given the history of African colonialism and its linguistic domination, Dr. Abebe Birhane’s article, “Algorithmic Injustice: A Relational Ethics Approach,” (Birhane, 2020) offers an important framework for developing machine translation for “low-resource” African languages.</span><span style="font-weight: 400;"> Our response to Birhane considers the impact of NLP on Africa, and applies Birhane’s ethics to support the project of decolonization of African data and data subjects. </span></p>
Frameworks Tolúlọpẹ́ Ògúnrẹ̀mí, Wilhelmina Onyothi Nekoto, Saron Samuel
Copyright (c) 2023 Tolúlọpẹ́ Ògúnrẹ̀mí, Wilhelmina Onyothi Nekoto, Saron Samuel
2023-02-14

Analytic Relationality and the Relational Ethics of the Global South
https://ojs.stanford.edu/ojs/index.php/grace/article/view/2583
<p><span style="font-weight: 400;">In this paper we consider how the work of Abeba Birhane relates to other theories of relationality from European and North American analytical philosophy, and ask whether, given Birhane’s critical perspective on western rationalism, analytic frameworks might be at all compatible. Analytic descriptions of relational ethics draw on Watson, Smith, Scanlon, Darwall, and Bovens’ theories of relationality and “being held responsible,” when an actor contravenes the norms of a relationship with others. Armed with these frameworks, analytical philosophers hope to critically evaluate and—eventually—regulate the global political economies of data and computing industries. Yet, critics like Birhane argue that such frameworks remain mired in the colonial project of western rationality, which is complicit in the digital colonization of the Global South. Even if these analytical relational frameworks address fora which debate accountability about the “many hands” responsible for algorithms, they refer to individual actors and responsibility within western corporate and institutional structures. Moreover, they presume an equal moral status for all actors, which in reality is often not the case since western technology disproportionately harms communities in the Global South. Meanwhile, relational ethics as Birhane formulates them offer an alternative view that arises from communities. These two philosophical approaches to relationality, while often at odds with one another, do share some conceptual histories and potential compatibilities. We argue the analytic enterprise can ground policy work with clear definitions of contested terms like “harm,” “understanding,” and “responsibility.” Birhane’s relational ethics also draw on both western rationality and concepts of lived experience in these communities. The synthesis of these two types of relationality may help develop an inclusive, actionable, enforceable AI ethics. Still, it is important to remember that in the Global South the relational focus is communities and their well-being, something analytical frameworks aspire to but have yet to adequately address.</span></p>
Frameworks
Keywords: Relational ethics, Analytic Relationality, Abeba Birhane, AI Ethics
Julia Kwak, Nakeema Stefflbauer
Copyright (c) 2023 Julia Kwak, Nakeema Stefflbauer
2023-02-14

Against Relationality
https://ojs.stanford.edu/ojs/index.php/grace/article/view/2582
For over 100 years, social critics have decried the transformation of the west into a mechanized and mathematical society, not only in terms of technology, but also because values are increasingly assessed quantitatively without much regard for human existential and spiritual fulfillment. Dr. Abeba Birhane's "Algorithmic Injustice: A Relational Ethics Approach" (2021) comments on this societal mechanization in the context of machine learning's effects on marginalized communities. She argues that the western rationalist position creates a "veneer of objectivity" and positions itself as "value-free, neutral, and amoral," while leading to the harmful social impacts of "historical inequalities" and "asymmetrical power hierarchies," which are mathematicised by western thought. According to Birhane, we should be critical of rationality and consider "the lived experience of marginalized communities." If we practice a relational ethics, we can attain a better qualitative assessment of AI's social harms. Yet while Birhane presents relational ethics as an alternative to western rational quantitative systems of power, her own methodology derives significantly from the western sources she blames.
Frameworks Mimi St. Johns
Copyright (c) 2023 Mimi St. Johns
2023-02-14

GRACE Editors' Introduction to the Frameworks Section
https://ojs.stanford.edu/ojs/index.php/grace/article/view/2581
Dr. Abeba Birhane's provocative AI ethics paper, "Algorithmic Injustice: A Relational Ethics Approach," draws on frameworks too often neglected in AI ethics studies. Her important work on race, justice, and ethics frameworks for machine learning algorithms calls for the inclusion of relational Sub-Saharan African philosophies in the AI ethics curriculum. Delineating the ethical limitations of European individualist rationality as a definition of personhood, especially in marginalized communities and on the African continent, Birhane shows how traditionally European frameworks fail to address the perspectives of those whom AI most impacts. Following many important African philosophers like Mogobe B. Ramose, Emmanuel Chukwudi Eze, Ifeanyi Menkiti, Sabelo Mhlambi, and others, Birhane offers the AI ethics community important insights from African relational ethics, which link one's personhood to the personhood of others, and shows that to talk about AI harms one must understand the communal relational perspective.
Frameworks
Keywords: Relational Ethics, AI Ethics, Algorithms
Bethel Bayrau, Wayne Chinganga
Copyright (c) 2023 Bethel Bayrau, Wayne Chinganga
2023-02-14

ChatGPT is not all you need. A State of the Art Review of large Generative AI models
https://ojs.stanford.edu/ojs/index.php/grace/article/view/2572
During the last two years, a plethora of large generative models such as ChatGPT and Stable Diffusion have been published. Concretely, these models are able to perform tasks such as serving as general question-answering systems or automatically creating artistic images, capabilities that are revolutionizing several sectors. Consequently, the implications these generative models have for industry and society are enormous, as several job positions may be transformed. For example, generative AI is capable of transforming, effectively and creatively, text to images, like the DALL-E 2 model; text to 3D images, like the Dreamfusion model; images to text, like the Flamingo model; text to video, like the Phenaki model; text to audio, like the AudioLM model; text to other text, like ChatGPT; text to code, like the Codex model; text to scientific text, like the Galactica model; or even creating algorithms, like AlphaTensor. This work attempts to describe concisely the main models and sectors affected by generative AI and to provide a taxonomy of the main generative models published recently.
Reviews Roberto Gozalo-Brizuela, Eduardo C. Garrido-Merchán
Copyright (c) 2023 Roberto Gozalo-Brizuela, Eduardo C. Garrido-Merchán
2023-02-06

Kiwibots on Kampus
https://ojs.stanford.edu/ojs/index.php/grace/article/view/2375
As of spring 2022, Howard University deployed autonomous food-delivery vehicles in partnership with Kiwibot. Many members of the Howard community were not made aware of the partnership until the Kiwibots began traveling around campus. The sudden appearance of the technology bred skepticism as well as general interest in understanding the technology. To provide information to the Howard community, this article details some foundational knowledge about the Kiwibots and the university's partnership. Based on that foundational information, it then raises several ethical concerns about the Kiwibots and how they relate to larger trends of technological deployment in Black communities and spaces. As a first step, explanations of how the Kiwibot systems process image data and other personally identifiable information (PII) need to be more transparently shared. A deeper interrogation of university corporate partnerships, and especially partnerships with minority institutions, is ultimately necessary for addressing the ethical concerns related to the technology.
Research Notes and Commentary Teanna Barrett
Copyright (c) 2023 Teanna Barrett
2023-02-16

Mitigating Racial Bias in Healthcare AI Development
https://ojs.stanford.edu/ojs/index.php/grace/article/view/2328
<p><span style="font-weight: 400;">Physicians are guided by the principle, “first, do no harm,” but in Silicon Valley, software developers embrace a different motto, “move fast, and break things.” These contrasting philosophies clash in healthcare, where machine learning (ML) and artificial intelligence (AI) are becoming increasingly influential. The unintentional incorporation of bias in AI development and deployment can be severely damaging to patients’ wellbeing. Our research will review the ways bias in healthcare AI, specifically racial bias, affects patients and current regulations to prevent bias. We will investigate this information to make professional, developmental, and legislative recommendations for stakeholders in healthcare AI to mitigate bias in their work.</span></p>
Social Impact Papers Athena Xue, Jodie Meng, Casey Nguyen
Copyright (c) 2023 Athena Xue, Jodie Meng, Casey Nguyen
2023-02-16