Artificial Intelligence (AI) Systems, the Poor, and Consent: A Feminist Anti-Colonial Lens to Digitalized Surveillance

September 18, 2023

I am a current doctoral student in the School of Social Welfare. My research interests are in exploring the impact of neoliberal policies and the privatization of education and healthcare on the quality of life and well-being of working-class BIPOC families. With a deep interest in social justice, but zero coding experience whatsoever, I applied to the Data Science for Social Justice workshop this past summer – and I am grateful for the opportunity to have participated. What I thought would be a summer of coding (which it definitely was!) also turned into a critical and important reflection on the ethics and harms surrounding the very technologies we were using and learning about, such as AI systems. It was special to have a space within the institution to have these conversations with talented, diverse, authentic, and creative peers. Overall, the workshop introduced me to critical frameworks for talking about these technologies and their potential harms and impacts, especially on marginalized groups. Below, I share a summary of how a feminist, anti-colonial lens on digitalized surveillance can help frame our understanding of consent and support a more nuanced discourse on the protection of vulnerable individuals.

Today’s digital age has created a sea of endless datafication, where our everyday interactions, actions, and conversations are turned into data. The advancement of automated artificial intelligence (AI) systems, and the infrastructure on which they are created and trained, has catapulted us into an era of constant monitoring and surveillance. This state of constant data collection raises two important questions:

  1. What are the ethics of constant data collection?

  2. Who is most vulnerable and at risk of having these technologies weaponized against them?

Feminist and anti-colonial theories can provide a framework for highlighting and assessing the power dynamics surrounding the consent of our data bodies. In this blog post, I discuss Varon and Peña’s (2021) “Artificial intelligence and consent: a feminist anti-colonial critique,” which offers a feminist, anti-colonial perspective on digitalized surveillance, consent, and their potential harm to vulnerable groups.

AI systems are making automated decisions in our governments, and the consequences can be much more dire than mere “privacy concerns” (Eubanks, 2018). Philip Alston, former United Nations Special Rapporteur on Extreme Poverty and Human Rights, coined the term “Digital Welfare States,” bringing attention to how “systems of social protection and assistance are increasingly driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish” (UNGA, 2019). These harms have also been discussed at length by Virginia Eubanks (2018), who shows in her book “Automating Inequality” how, increasingly, “poor and working-class people are targeted by new tools of digital poverty management.” Eubanks (2018) also presents examples detailing how automated decision-making tools are used in finance, employment, healthcare, and policing – with the most invasive and punitive technologies reserved for the poor.

Consent is an important concept in data protection legislation and policies. However, poor people rarely have the opportunity to opt out of the data collection processes surrounding welfare services (Muñoz Arce, 2019). They are also unable to withdraw consent, as access to those services is conditioned on datafication. This calls the notion of consent into question at its core: if there is no power to say “no,” is consent valid? Today’s leading digital consent model is sometimes described as a “binary,” or opt-out, system. In other words, folx are often expected to choose between “all” or “nothing,” which points to the power imbalances and coercive nature of this form of consent. This is sometimes referred to as a neoliberal approach to consent, which assumes that we are all autonomous and free individuals (Cohen, 2018). This framing, however, disregards the unequal power relations vulnerable people face – they may not actually have the ability to say “no.” This mode of thinking also aligns with the idea of the free market, where personal information can be collected as part of the price of using these technologies (Nissenbaum, 2011).

Therefore, anti-colonial feminist frameworks of consent should be re-centered in data collection and privacy protocols. Varon & Peña (2021) argue that “only collectively, it might be possible to partially redress power imbalance and actually question the path of some tech developments.” Anti-colonial perspectives help reframe the argument around a notion of collectivity and the right to self-determination. The right to self-determination has also been reinforced in the United Nations Declaration on the Rights of Indigenous Peoples, which recognizes that “indigenous peoples have suffered from historic injustices as a result of, inter alia, their colonization and dispossession of their lands, territories and resources, thus preventing them from exercising, in particular, their right to development in accordance with their own needs and interests” (UNGA, 2007).

Varon & Peña (2021) argue that we should extend this framework to the conversation about consent, recentering it on self-determination and collectivity (over individualism). In other words, we should ask ourselves how Indigenous ontologies and epistemologies can help guide our AI systems. To this point, the authors brilliantly note that modern ethical debates about mitigating bias and harm in AI systems are often human-centered, directly contradicting many Indigenous epistemologies that refuse to elevate human beings above all other living beings (Abdilla et al., 2021). The Indigenous Protocols and Artificial Intelligence (IP//AI) Incubator, for instance, developed the Country Centered Design (CCD) framework as an alternative to the human-centered design process (Lewis, 2020). This Indigenous-led process comprises four key cycles – culture, research, strategy, and technology – which reflect the nature of our relationships with natural, complex systems. The framework assumes the perspective that “you can never stand outside a system and oversee or intervene - you must embrace the fact that you are part of that system,” all while centering the needs of “Country” or “Land” and respecting its agency and autonomy as an intelligent entity (Abdilla et al., 2021, p. 9). Here, we see a clear departure from the binary, traditional Western approach to consent and data design, which is often focused on the individual.

Both feminist and anti-colonial frameworks offer a nuanced perspective on how we should approach mitigating some of the harms these technologies may pose. Anti-colonial approaches imply “dismantling violent impositions” (Varon & Peña, 2021, p. 22), such as the modern-day status quo around consent and privacy. An anti-colonial approach should be based on inclusion from the start of an AI system’s creation process, and on a willingness to embrace multiplicity and plurality rather than the individualistic, power-imbalanced method that prevails today.

References

  1. Abdilla, A., Keller, M., Shaw, R., & Yunkaporta, T. (2021). Out of the Black Box: Indigenous protocols for AI. Old Ways, New.

  2. Cohen, J. E. (2018). Turning Privacy Inside Out. Theoretical Inquiries in Law, 20(1). https://ssrn.com/abstract=3162178

  3. Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor (First Edition). St. Martin’s Press.

  4. Lewis, J. E. (Ed.). (2020). Indigenous Protocol and Artificial Intelligence Position Paper. The Initiative for Indigenous Futures and the Canadian Institute for Advanced Research (CIFAR). Honolulu, Hawaiʻi.

  5. Muñoz Arce, G. (2019). The neoliberal turn in Chilean social work: Frontline struggles against individualism and fragmentation. European Journal of Social Work, 22(2), 289–300. https://doi.org/10.1080/13691457.2018.1529657

  6. Nissenbaum, H. (2011). A Contextual Approach to Privacy Online. Daedalus, 140(4), 32–48. https://doi.org/10.1162/DAED_a_00113

  7. Peña, P., & Varon, J. (2019). Consent to our Data Bodies: Lessons from feminist theories to enforce data protection [Medium Post]. Coding Rights. https://medium.com/codingrights/the-ability-to-say-no-on-the-internet-b4bdebdf46d7

  8. Peña, P., & Varon, J. (2019). Decolonizing AI: a transfeminist approach to data and social justice [Global Information Society Watch 2019]. Association for Progressive Communications. https://www.giswatch.org/node/6203

  9. United Nations General Assembly. (2007). United Nations Declaration on the Rights of Indigenous Peoples. https://undocs.org/A/RES/61/295

  10. United Nations General Assembly. (2019). Report of the Special Rapporteur on extreme poverty and human rights. https://undocs.org/A/74/493

  11. Varon, J. & Peña, P. (2021). Artificial intelligence and consent: a feminist anti-colonial critique. Internet Policy Review, 10(4). https://doi.org/10.14763/2021.4.1602