Posted on 11 May 2021 by Alexander Tagesson (Division of Cognitive Science) and Juan Ocampo (Department of Business Administration).
The views expressed in this publication are those of the authors and do not necessarily represent those of the Agenda 2030 Graduate School or Lund University. The present document is being issued without formal editing.
This post is part of a blog post series on AI and sustainability.
AI ethics is a broad field that concerns itself with questions spanning several sub-domains. In this post we want to highlight some of the issues regarding the ethical development and use of artificial intelligence (AI), as we believe there is significant overlap with the sustainable development and use of AI. For starters, experts from a range of backgrounds are currently discussing the challenges and risks that AI poses to humanity[1]. A growing number of research institutes focus on understanding and mitigating the potential risks connected to AI, including weaponization, accidents, and security breaches[2]. It is imperative that we talk about ethics and transparency alongside AI.
Hagendorff[3] performed a meta-analysis of articles discussing the most common ethical guidelines regarding AI. Most articles discuss accountability, privacy, and fairness, and these guidelines are most often operationalized mathematically and implemented as technical solutions. This runs the risk of disconnecting the guidelines from their wider societal context and treating them instead as technical problems with technical fixes. The articles also often mention the potential benefits of using AI for sustainable development, but none mentions that AI practices may be working against sustainability goals: creating e-waste, depleting natural resources, consuming energy, and forcing people into becoming “clickworkers” [4] [5] [6] [7] [8].
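To make the idea of a mathematical operationalization concrete, here is a minimal sketch, our own rather than something drawn from the papers above, of one common fairness metric, the demographic parity difference. All function names and data in it are hypothetical.

```python
# Minimal sketch of how a "fairness" guideline is often reduced to a number.
# Demographic parity compares the rate of positive decisions across groups.
# All data here is hypothetical and only illustrates the calculation.

def positive_rate(decisions):
    """Share of positive (e.g. 'loan approved') decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_a, decisions_b):
    """Absolute gap in positive-decision rates between two groups.
    Zero counts as 'fair' under this (narrow) metric."""
    return abs(positive_rate(decisions_a) - positive_rate(decisions_b))

# Hypothetical decisions (1 = approved, 0 = denied) for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 37.5% approved

print(f"Demographic parity difference: "
      f"{demographic_parity_difference(group_a, group_b):.3f}")  # 0.375
```

Note how little such a metric sees: a system can be tuned until this gap is small while the energy it consumes, the e-waste it generates, and the labour behind its training data never enter the equation. That is exactly the disconnect Hagendorff describes.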
Cathy O’Neil talks about “weapons of math destruction”. Building on her experience in the financial world, O’Neil questions the quality and reliability of algorithms that are presented as objective and reliable decision-making instruments. In reality, however, some of these algorithms have “encoded human prejudice, misunderstandings, and bias in systems that are increasingly managing our lives”[9]. It is important, then, that programmers, users, and the public in general become aware of the way in which algorithms are creating the future world that we will inhabit together[10]. If we can no longer be certain that we will be able to control an AI system, or even ensure that its algorithms are doing what they are meant to do[11], the ethical choice is likely to refrain from creating that system[12], and that choice ought also to be considered the most sustainable one, given the potential risks and uncertainties for human life.
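How does prejudice get “encoded” in the first place? The toy sketch below, entirely hypothetical, shows the mechanism at its simplest: a “model” that merely learns the most common past outcome per neighbourhood faithfully reproduces whatever bias the historical decisions contain.

```python
from collections import Counter

# Hypothetical past hiring decisions: (neighbourhood, hired?).
# The history itself is prejudiced against neighbourhood "B".
history = [("A", 1), ("A", 1), ("A", 0),
           ("B", 0), ("B", 0), ("B", 0)]

def train_majority_rule(data):
    """'Learn' the most common outcome per neighbourhood, a stand-in
    for any model fit to biased historical labels."""
    outcomes = {}
    for area, label in data:
        outcomes.setdefault(area, []).append(label)
    return {area: Counter(labels).most_common(1)[0][0]
            for area, labels in outcomes.items()}

model = train_majority_rule(history)
print(model)  # {'A': 1, 'B': 0}: yesterday's prejudice, now automated
```

Real systems are vastly more sophisticated, but the dynamic O’Neil describes is the same: models fit to biased decisions launder that bias into seemingly objective output.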
Another issue concerning AI ethics is the “technological race” between AI superpowers, in which actors race each other toward AI supremacy while regulators lag behind. Within this competitive atmosphere, actors focus on developing new technology and become more willing to take risks and to set aside ethical considerations[13]. Moreover, most of the big tech companies do not seem to care about ethical artificial intelligence beyond using it as a marketing strategy. They face no real consequences for developing algorithms that produce unethical behaviour[14], and at the same time they silence ethical considerations in ways that may hinder sustainable development[15].
It is not an easy task to keep up with Big Tech; nevertheless, regulators are setting rules in order to avoid undesirable outcomes and to promote the use of AI for solving societal challenges. For example, the EU has proposed the “Artificial Intelligence Act”[16] with the objective of ensuring: (i) respect for existing law on fundamental rights; (ii) legal certainty for investment; (iii) governance and enforcement of the law; and (iv) suitable market development. If you are interested in how policies around AI are being implemented, we invite you to read the book “Human-centred AI in the EU”[17]. In this compilation, Stefan Larsson and his colleagues discuss how different member states of the European Union (e.g. Poland, Italy, and the Nordic states) have implemented and adopted AI policies.
Given the ethical risks around AI, bringing forward some of these considerations was a first and necessary step. However, there are also many opportunities for AI in the context of sustainable development, and these will be part of this series of posts as well. In the coming weeks we will look at more specific cases of how AI is being used in different socio-economic and environmental fields. Far from being rigorous, this post is first and foremost an invitation to readers to be open and critical about the possibilities and risks that AI poses, and an overall introduction to what you will encounter in future posts.
[1] See the discussion between Daniel Kahneman and Yuval Noah Harari for different perspectives on the topic: https://www.youtube.com/watch?v=P-NMhsS7VRQ
[2] https://futureoflife.org/; https://www.cser.ac.uk/
[3] Hagendorff, Thilo. 2020. “The Ethics of AI Ethics: An Evaluation of Guidelines.” Minds and Machines 30: 99–120. https://doi.org/10.1007/s11023-020-09517-8
[4] Crawford, Kate, and Vladan Joler. 2018. “Anatomy of an AI System.” Accessed April 30, 2021. https://anatomyof.ai/.
[5] Irani, Lilly. 2016. “The Hidden Faces of Automation.” XRDS 23 (2): 34–37.
[6] Veglis, Andreas. 2014. “Moderation Techniques for Social Media Content.” In Social Computing and Social Media, 137–48. Cham: Springer International Publishing.
[7] Fang, Lee. 2019. “Google Hired Gig Economy Workers to Improve Artificial Intelligence in Controversial Drone-Targeting Project.” Accessed February 13, 2019. https://theintercept.com/2019/02/04/google-ai-project-maven-figure-eight/.
[8] Casilli, Antonio A. 2017. “Digital Labor Studies Go Global: Toward a Digital Decolonial Turn.” International Journal of Communication 11: 3934–3954.
[9] O’Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown Publishing Group.
[10] https://www.ssec.wisc.edu/~billh/g/hibbard_agi_workshop.pdf
[11] This is sometimes referred to as “glass box” (as opposed to “black box”) AI. For more, see Rai, Arun. 2020. “Explainable AI: From Black Box to Glass Box.” Journal of the Academy of Marketing Science 48: 137–141. https://doi.org/10.1007/s11747-019-00710-5; or https://www2.deloitte.com/nl/nl/pages/innovatie/artikelen/unboxing-the-box-with-glassbox-a-toolkit-to-create-transparency-in-artificial-intelligence.html
[12] https://futureoflife.org/ai-principles/
[13] Hagendorff, Thilo. 2020. “The Ethics of AI Ethics: An Evaluation of Guidelines.” Minds and Machines 30: 99–120. https://doi.org/10.1007/s11023-020-09517-8
[14] Ibid.
[15] https://www.washingtonpost.com/technology/2020/12/23/google-timnit-gebru-ai-ethics/
[16] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
[17] Larsson, Stefan, Claire Ingram Bogusz, and Jonas Andersson Schwarz, eds. 2020. Human-Centred AI in the EU: Trustworthiness as a Strategic Priority in the European Member States. European Liberal Forum asbl. https://portal.research.lu.se/portal/files/87349373/Larsson_et_al_2020_AI_in_the_EU_final.pdf