Artificial Intelligence, Ethics, Society, and Gender Studies: Power, Bias, and the Politics of Intelligent Machines
DOI: https://doi.org/10.53573/rhimrj.2026.v13n03.014

Keywords: artificial intelligence, AI ethics, gender studies, algorithmic bias, feminist technology studies, data justice, surveillance, intersectionality, decolonial AI, labour

Abstract
The rapid spread of artificial intelligence (AI) technologies across social, economic, political, and cultural life has made the ethics of such systems an urgent academic and public concern. This paper situates AI ethics within feminist theory, gender studies, critical race theory, and the sociology of technology, arguing that algorithmic bias, data justice, surveillance, labour, and the regulation of intelligent machines cannot be understood apart from longstanding structures of gender, race, class, and coloniality. Drawing on interdisciplinary scholarship in computer science, philosophy, feminist science and technology studies, political economy, and legal theory, the article provides a thorough review of the key theoretical frameworks and empirical research traditions. It examines the gendered and racialised dimensions of AI research and deployment, the politics of data and representation, the implications of AI for labour and care work, the role of AI in surveillance and bodily sovereignty, and the challenges of building ethical and equitable AI governance systems. The article contends that a genuinely ethical approach to AI must be feminist, intersectional, and decolonial, attending not only to technical design but also to the social relations of production, deployment, and contestation that give AI systems their real-world meanings.