SCIENCE AND ENGINEERING ETHICS

Rethinking Geoengineering Governance Utilizing the Playing God Argument: Considerations of Knowledge, Control, and Benevolence
Dreiman J and Green BP
Horizon Scan of Emerging Issues at the Intersection of National Security, Artificial Intelligence, and Human Performance Enhancement
Hereth B, de Boisboissel G, Bricknell MC, Brincker M, Casebeer W, Davidovic J, Davis J, Earl J, Eisikovits N, Feldman D, Garcia LF, Gilbert F, Guérin V, Henschke A, Hughes J, Lambert D, Latheef S, Moreno JD, Peebles IS, T Pham M, Pindyck S, Rudyak I, Shinomiya N, Shortland ND, Sparrow R, Stramondo J, Tabouy L, Tubig P, Whetham D and Evans NG
Choosing Less Harmful Alternatives: The Ethics of Harm Reduction in Emerging Technologies
Turner C
Compliance with Clinical Guidelines and AI-Based Clinical Decision Support Systems: Implications for Ethics and Trust
Pardoux É and Kerasidou A
Artificial intelligence (AI) is gradually transforming healthcare. However, despite its promised benefits, AI in healthcare also raises a number of ethical, legal and social concerns. Compliance by design (CbD) has been proposed as one way of addressing some of these concerns. In the context of healthcare, CbD efforts could focus on building compliance with existing clinical guidelines (CGs), given that they provide the best practices identified according to evidence-based medicine. In this paper we use the example of AI-based clinical decision support systems (CDSS) to theoretically examine whether medical AI tools could be designed to be inherently compliant with CGs, and the implications for ethics and trust. We argue that AI-based CDSS systematically complying with CGs when applied to specific patient cases are not desirable, as CGs, despite their usefulness in guiding medical decision-making, are only recommendations on how to diagnose and treat medical conditions. We thus propose a new understanding of CbD for CGs as a sociotechnical program supported by AI that applies to the whole clinical decision-making process, rather than understanding CbD for CGs as a process located only within the AI tool. This implies taking into account emerging knowledge from actual clinical practices to put CGs in perspective, reflexivity from users regarding the information needed for decision-making, as well as a shift in the design culture, from AI as a stand-alone tool to AI as an in-situ service located within particular healthcare settings.
Responsibility Gaps, LLMs & Organisations: Many Agents, Many Levels, and Many Interactions
Constantinescu M and Kaptein M
In this article, we propose a business ethics-inspired approach to address the distribution dimension of responsibility gaps introduced by general-purpose AI models, particularly large language models (LLMs). We argue that the pervasive deployment of LLMs exacerbates the long-standing problem of “many hands” in business ethics, which concerns the challenge of allocating moral responsibility for collective outcomes. In response to this issue, we introduce the “many-agents-many-levels-many-interactions” approach, labelled M3, which addresses responsibility gaps in LLM deployment by considering the complex web of interactions among diverse types of agents operating across multiple levels of action. The M3 approach demonstrates that responsibility distribution is not merely a function of agents’ roles or causal proximity, but primarily of the range and depth of their interactions. Contrary to reductionist views that suggest such complexity inevitably diffuses responsibility to the point of its disappearance, we argue that these interactions provide normative grounds for safeguarding the attribution of responsibility to agents. Central to the M3 approach is identifying agents who serve as nodes of interaction and therefore emerge as key loci of responsibility due to their capacity to influence others across different levels. We position LLM-developing organisations as an example of such agents. As nodes of interactions, LLM-developing organisations exert substantial influence over other agents and should be attributed broader responsibility for harmful outcomes of LLMs. The M3 approach thus offers a normative and practical tool for bridging potential gaps in the distribution of responsibility for LLM deployment.
What's Economics Got to Do with It? Providing Theoretical Clarity on ELSA of AI
Ryan M and Blok V
While research in the ethics of artificial intelligence (AI) has grown recently, the relationship between AI's ethical and economic dimensions is under-researched. This is surprising, given the considerable investments in AI by Big Tech companies (e.g., Microsoft, Meta and IBM) and their ambiguous role in today's public debate on AI. After the second Trump election, this ambiguity has resulted in industry opposition to rules and regulations (e.g., disinvestments in moderation facilities at social media platforms and calls for deregulation). AI ethics must respond to the economic underpinnings of this situation. While the economic dimension of AI ethics has also surfaced in recent funding schemes (e.g., investment in 30 ethical, legal, and social (ELSA) labs), there is ambiguity in how these AI ELSA labs should respond to economic aspects. This paper examines the role of economics in responsible AI research, using the case of the ELSA lab approach. The four features of ELSA (proximity, anticipation, interdisciplinarity, and interactivity) serve as a point of departure to demonstrate how economics can be integrated within the ELSA framework of AI. This paper proposes that economics should be integrated within these four ELSA features to implement responsible AI successfully.
Embodiment, Relationships, and Sexuality: An Ethical Analysis of Extended Reality Technologies
Ramirez EJ, Clark L, Campbell S, Dreiman J, Clay D, Gupta R and Jennett S
Communication technologies change the way we relate to each other and ourselves. In this essay we analyze the effects that extended reality (XR) technologies are likely to have on conceptions of the self, romantic relationships, and other associated concepts like sexual orientation. While these technologies are in their infancy, key psychological and philosophical concepts are already being explored. We begin by defining extended reality and the family of technologies that make it possible. We pay special attention to the way these immersive technologies ground experiences of presence which can become virtually real. These experiences provide a useful framework for understanding the phenomenon of XR embodiment. XR embodiment, the experience of one's self as embodied in XR, opens up the possibility of blended physical and digital narrative selves which form the basis of new forms of relationships. In a future where XR is incorporated into the basic social and political structures of society, XR embodiment and virtually real experiences challenge normative concepts like sex and sexual orientation. Contemporary conceptions of the self, sex, consent, and love emerged in purely physical contexts to help us navigate the limitations of physical embodiment. XR embodiment requires a new ethical framework to make room for these possibilities. We end the paper by assessing the ethical risks XR embodiment can introduce for XR developers and researchers.
Promoting User Involvement to Foster Technological Citizenship in the Digitizing Healthcare Domain
Gardenier AM, Cramer I and van Est R
Artificial intelligence (AI) is playing an increasingly prominent role in healthcare technology, particularly in patient monitoring and diagnosis. While AI offers significant benefits, it raises concerns about the patient-provider relationship and key care values. To mitigate these risks and align technology with societal values, experts stress the importance of user involvement in technology development. However, input from patients or nurses during AI development remains uncommon. The rapidly digitizing healthcare sector thus exemplifies a context where technological citizenship is not yet thriving, underscoring the need for improvement in this area. This article examines a case study in which nurses contributed to the development of an AI-driven monitoring camera for the cardiothoracic surgery department. Based on interview data, our study shows that involving users, particularly nurses, can lead to improved innovation of a socio-technical care practice, greater recognition of the importance of user involvement among technology developers, and increased empowerment of nurses with regard to the technology that impacts their work practice. To promote technological citizenship in the evolving healthcare landscape, this article offers recommendations for fostering greater user involvement during the development and implementation of new technologies.
Mele's Digital Zygote: Developer Responsibility for Neural Networks
Søgaard A and Stamatiou F
Should developers be held responsible for the predictions of their neural networks, and if not, does that introduce a responsibility gap? The claim that neural networks introduce a responsibility gap has seen significant pushback, with philosophers arguing that the gap can be bridged or that it never existed in the first place. We show how the responsibility gap turns on whether we can distinguish between foreseeable and unforeseeable neural network predictions. Empirical facts about neural networks tell us we cannot, which seems to force developers to either assume full responsibility or no responsibility at all, introducing a responsibility gap, unless, of course, the same empirical facts hold true of humans, in which case there is no gap and the trouble lies simply with the classical notion of responsibility. We revisit and revise Mele's Zygote, as well as the famous Palsgraf case, and argue that, in fact, what complicates responsibility assignment for neural networks also complicates responsibility assignment for humans, and that humans seem to confront us with the same all-or-nothing dilemma. Thus, we agree there is no technology-induced responsibility gap (there was no gap in the first place), but for slightly different reasons than our predecessors.
Societal Readiness Thinking Process 2.0: Incorporating Epistemic Reflexivity for Responsible Innovation
Braun R, Bernstein MJ, Chakraborty A, Starkbaum J and Winkler F
Frameworks for ascertaining the societal dimensions of research and innovation (R&I), such as the Societal Readiness Thinking Tool (SRTT), have supported reflection on ethics and responsibility but often risk reducing reflexivity to procedural checklists or impact assessments. This paper develops an enhanced version, the reflexive SRTT 2.0 process, by incorporating concepts of epistemic reflexivity and ethnomethodological sensitivity. We introduce the concept of reflexive societal readiness, which understands readiness as a situated, ongoing accomplishment shaped by both local practices and institutional "relations of ruling." Drawing on ethnomethodological observations, reflexive questionnaires, and an initial workshop in the Horizon Europe project AGRO4AGRI, we examined how researchers engaged with reflexivity in practice. Our findings reveal three recurring patterns: reflexivity was often deflected through reliance on methodological safeguards, outsourced to societal impact experts or stakeholders, and substituted with compliance to regulatory frameworks or dominant imaginaries of sustainability and competitiveness. These practices uphold internal project orders and limit the potential for interdisciplinary learning and critical engagement. To address these obstacles, SRTT 2.0 proposes a reflexive process combining (a) observation of situated practices, (b) reflexive questioning that foregrounds individual positionalities, and (c) workshops that foster collaborative and institutional learning. This design enables researchers to critically interrogate their assumptions, engage more meaningfully with inclusion, and question the sociotechnical imaginaries shaping their work. We argue that embedding such reflexive processes into project lifecycles can extend and strengthen Responsible Research and Innovation (RRI) frameworks by cultivating collaborative, empathetic, and institutional learning. While challenges remain, SRTT 2.0 offers a transferable pathway for fostering more reflexive and responsible innovation practices.
Addressing Autonomy Risks in Generative Chatbots with the Socratic Method
Lu W and Hu Z
Autonomy is a fundamental ethical principle in artificial intelligence (AI) ethics. Current discussions regarding autonomy-related risks in human-AI interaction, as well as potential mitigation strategies, have mainly focused on recommendation systems and algorithmic decision-making systems. However, systematic analyses of the autonomy issues posed by newly emerging generative AI or chatbots (such as ChatGPT) remain scarce. This paper aims to bridge this gap and proposes a Socratic method-based chatbot, herein designated SocrAI, as a potential countermeasure informed by recent technological advancements. We identify two primary forms of autonomy risk associated with generative chatbots (false mental states and cognitive deskilling) and, through an examination of their underlying causes, argue that the Socratic method offers a plausible means of mitigation. The paper further assesses the feasibility of employing the Socratic method within generative chatbots to preserve and support users' autonomy and outlines the prospective implementation of SocrAI together with directions for future work. SocrAI represents a novel attempt to strengthen human agency, one that encourages users to become self-initiating inquirers who, with the assistance of AI, actively engage their abilities in the pursuit of answers.
Filling the Responsibility Gap: Agency and Responsibility in the Technological Age
Xia YH
The rise of automation, such as artificial intelligence, has significantly reduced human agency in artifact-involving actions, thereby giving rise to a moral dilemma known as the responsibility gap. In this dilemma, morally undesirable consequences arise from technology, yet no one is held accountable because individuals lack control over their actions and their outcomes, as well as the ability to predict these outcomes, rendering them morally blameless. This paper proposes a framework based on externalist agency and distributed responsibility, aiming to address this problem by assigning accountability to artifacts. Externalist agency holds that agency is a gradient concept rather than an all-or-nothing one. Between fully autonomous agency and non-agency, there are three intermediate gradients: distributed agency, constrained agency, and derivative agency. Distributed responsibility holds that responsibility should be allocated according to these gradients of agency. For both humans and artifacts: if an agent possesses derivative agency, forgiveness is warranted; if it possesses distributed agency, punishment is justifiable. This paper further suggests that punishment for artifacts can take the form of punishment for technological lineage, that is, modifying the design of technologies to guide their evolution toward safer and more ethical directions.
A Case Study in Arts-Informed Ethics Education in the Nuclear and Radiological Sciences
Martinez NE, Donaher SE, Nagata JS and Shuller-Nickles L
There is a need for cross-disciplinary researchers and professionals in the radiological sciences who can navigate complex interconnected ethical-social-technical issues, communicate across a wide audience in consideration of multiple stakeholder perspectives, and remain self-critical, aware, and reflective of the field with the intent of continuous improvement within the broader profession. Given that traditional curricula in the nuclear and radiological sciences emphasize the technological and scientific aspects of radioactivity and ionizing radiation, a graduate-level course in "nuclear culture" was developed that employs various forms of art, expression, and material culture as a vehicle for encouraging deeper, dedicated reflection on related social and ethical issues. This paper provides a description of the course structure and representative content, along with discussion of student perceptions, as a case study in the use of an arts-informed approach (that is, a STEAM-based approach) to guiding students through perspective-taking, self-expression, and authentic and critical evaluation of broad issues surrounding the current and historical use of radiation. The course consists of discussions, reflective writing, and projects, supplemented with hands-on activities. Student perceptions of the course, elucidated through thematic analysis of post-hoc surveys, revolved around: novel educational approaches; student engagement and validation; ethics, morals, and empathy; and the societal impact of nuclear technology. Review of student and instructor perceptions suggests that art in various forms can be incorporated into graduate-level curricula to improve the educational experience of nuclear-focused students and promote deeper reflection and understanding of social and ethical issues related to their chosen field.
From Nano to Quantum: Ethics Through a Lens of Continuity
Shelley-Egan C and De Jong E
A significant amount of scholarship and funding has been dedicated to ethical and social studies of new and emerging science and technology (NEST), from nanotechnology to synthetic biology and artificial intelligence. Quantum technologies comprise the latest NEST attracting interest from scholarship in the social sciences and humanities. While there is a small community now emerging around broader discussion of quantum technologies in society, the concepts of ethics of quantum technologies and responsible innovation are still fluid. In this article, we argue that lessons from previous instances of NEST can offer important insights into the early stages of quantum technology discourse and development. In the embryonic stages of discourse around NEST, there is often an undue emphasis on the novelty of ethical issues, leading to speculation and misplaced resources and energy. Using a lens of continuity, we revisit experiences and lessons from nanotechnology discourse. Zooming in on key characteristics of the nanoethics discourse, we use these features as analytical tools with which to assess and analyse emerging discourse around quantum technologies. We point to continuities between nano and quantum discourse, including the focus on 'responsible' or 'good' technology; the intensification of ethical issues brought about by enabling technologies; the limitations and risks of speculative ethics; the effects of ambivalence on the framing of ethics; and the importance of paying attention to the present. These issues are taken forward to avoid 'reinventing the wheel' and to offer guidance in shaping the ethics discourse around quantum technologies into a more focused and effective debate.
Ethics Readiness of Technology: The Case for Aligning Ethical Approaches with Technological Maturity
de Jong E
The ethics of emerging technologies faces an anticipation dilemma: engaging too early risks overly speculative concerns, while engaging too late may forfeit the chance to shape a technology's trajectory. Despite various methods to address this challenge, no framework exists to assess their suitability across different stages of technological development. This paper proposes such a framework. I conceptualise two main ethical approaches: outcomes-oriented ethics, which assesses the potential consequences of a technology's materialisation, and meaning-oriented ethics, which examines how (social) meaning is attributed to a technology. I argue that the strengths and limitations of outcomes- and meaning-oriented ethics depend on the uncertainties surrounding a technology, which shift as it matures. To capture this evolution, I introduce the concept of ethics readiness: the readiness of a technology to undergo detailed ethical scrutiny. Building on the widely known Technology Readiness Levels (TRLs), I propose Ethics Readiness Levels (ERLs) to illustrate how the suitability of ethical approaches evolves with a technology's development. At lower ERLs, where uncertainties are most pronounced, meaning-oriented ethics proves more effective, while at higher ERLs, as impacts become clearer, outcomes-oriented ethics gains relevance. By linking Ethics Readiness to Technology Readiness, this framework underscores that the appropriateness of ethical approaches evolves alongside technological maturity, ensuring scrutiny remains grounded and relevant. Finally, I demonstrate the practical value of this framework by applying it to quantum technologies, showing how Ethics Readiness can guide effective ethical engagement.
Big Data, Machine Learning, and Personalization in Health Systems: Ethical Issues and Emerging Trade-Offs
Canali S, Falcetta A, Pavan M, Roveri M and Schiaffonati V
The use of big data and machine learning has been discussed in an expanding literature, detailing concerns about ethical issues and societal implications. In this paper we focus on big data and machine learning in the context of health systems and with the specific purpose of personalization. While personalization is considered very promising in this context, by focusing on concrete uses of personalized models for glucose monitoring and anomaly detection we identify issues that emerge with personalized models and show that personalization is neither necessarily nor always a positive development. We argue that there is a new problem of trade-offs between the expected benefits of personalization and new and exacerbated issues, results that have serious implications for mitigation strategies and for ethical concerns about big data and machine learning.
Hidden Agents, Explicit Obligations: A Linguistic Analysis of AI Ethics Guidelines
Griffin TA, Goorman R, Green BP and Welie JVM
Since 2013, many organizations, governments, and coalitions have issued ethics guidelines aimed at achieving ethically sound artificial intelligence (AI). The literature evaluating these guidelines has so far focused more on what is in them (e.g., principles) than on who is expected to enact them (e.g., developers). We argue that ethical agency in AI ethics guidelines has been under-scrutinized in the literature, and we seek here to fill that gap. This study relies on transitivity analysis to evaluate 87 operational ethics guidelines for the representation of moral agents and their agency. We identified normative key words, their linguistic function, and agency attribution in 6,935 statements. Our findings reveal 11 distinct agents, with deployers, developers, and AI systems being the most frequently invoked. However, the ethical agency attributed to developers and deployers was overwhelmingly implied, while the tasks assigned to them were more often normative than descriptive. That the agency of the two most powerful agents in AI development is so often hidden in ethics guidelines reveals that the challenges associated with implementing AI ethics guidelines do not stem merely from the “principles to practice” problem, but from a more deeply rooted issue regarding how guideline authors conceive of ethical agency. We evaluate the findings through the dual lenses of agent backgrounding and agent suppression and conclude with advice for authors of ethics guidelines.
After Harm: A Plea for Moral Repair after Algorithms Have Failed
Wong PH and Rieder G
In response to growing concerns over the societal impacts of AI and algorithmic decision-making, current scholarly and legal efforts have mainly focused on identifying risks and implementing safeguards against harmful consequences, with regulations seeking to ensure that systems are secure, trustworthy, and ethical. This preventative approach, however, rests on the assumption that algorithmic harm can essentially be avoided by specifying rules and requirements that protect against potential dangers. Consequently, comparatively little attention has been given to post-harm scenarios, i.e., cases and situations where individuals have already been harmed by an algorithmic system. We contend that this inattention to the aftermath of harm constitutes a major blind spot in AI ethics and governance and propose the notion of algorithmic imprint as a sensitizing concept for understanding both the nature and potential longer-term effects of algorithmic harm. Arguing that neither the decommissioning of harmful systems nor the reversal of damaging decisions is sufficient to fully address these effects, we suggest that a more comprehensive response to algorithmic harm requires engaging in discussions on moral repair, offering directions on what such a plea for moral repair ultimately entails.
Who Cares for Space Debris? Conflicting Logics of Security and Sustainability in Space Situational Awareness Practices
Klimburg-Witjes N, Strycker K and Braun V
Satellites and space technologies enable global communication, navigation, and weather forecasting, and are vital for financial systems, disaster management, climate monitoring, military missions, and much more. Yet, decades of spaceflight activities have left an ever-growing formation of debris (rocket parts, defunct satellites, propellant residues, and more) in Earth's orbits. A congested outer space has now taken the shape of a haunting specter. Hurtling through space at extremely high velocities, space debris has become a risk for active satellites and space infrastructures alike. This article offers a novel perspective on the security legacies and infrastructures of space debris mitigation and how these affect current and future space debris detection, knowledge production, and mitigation practices. Acknowledging that space debris is not just a technical challenge but an ethico-political problem, we develop a transdisciplinary approach that links social science to aerospace engineering and to practical insights and experiences from the European Space Agency's (ESA) Space Debris Office. Specifically, we examine the role of secrecy and (mis)trust between international space agencies and how these complicate space situational awareness practices. Attending to the "mundane" practices through which space debris experts cope with uncertainty and security logics offers a crucial starting point for developing an ethical approach that prioritizes care and responsibility for innovation over ever more technological fixes to socio-political problems. Space debris encapsulates our historical and cultural value constellations, prompting us to reflect on sustainability and responsibility for Earth-Space systems in the future.
Principles and Framework for the Operationalisation of Meaningful Human Control Over Autonomous Systems
Calvert SC
In response to a plethora of different and seemingly diverging expansions of Meaningful Human Control (MHC) for use in practice, this paper proposes an alignment for the operationalisation of MHC for autonomous systems by setting out operational principles for MHC and introducing a generic framework for its application. The increasing integration of autonomous systems in various domains emphasises a critical need to maintain human control to ensure safety, accountability, and the responsible and ethical operation of these systems. The concept of MHC offers an ideal basis for the design and evaluation of human control over autonomous systems, while considering human and technology capabilities. Through conceptual synthesis of existing literature and investigation across various domains and related concepts, principles for the operationalisation of MHC are set out to provide tangible guidelines for researchers and practitioners aiming to implement MHC in their systems. The proposed framework dissects generic components of systems and their subsystems, aligned with different agents, stakeholders, and processes at different levels of proximity to an autonomous technology. The framework is domain-agnostic, emphasizing the universal applicability of the MHC principles irrespective of the technological context, paving the way for safer and more responsible autonomous systems.
A Social Disruptiveness-Based Approach to AI Governance: Complementing the Risk-Based Approach of the AI Act
Marchiori S, Hopster JKG, Puzio A, Riemsdijk MBV, Kraaijeveld SR, Lundgren B, Viehoff J and Frank LE
The AI Act advances a risk-based approach to the legal regulation of AI systems in the European Union. While we support this development, we argue that adequate AI governance requires paying attention to the broader implications of AI systems on the socio-technical landscape in which they are designed, developed, and used. In addition to risk-based impact assessments, this involves coming to terms with the socially disruptive implications of AI, which should be governed and guided in a dynamic ecosystem of regulation, law, ethics, and evolving human practice. In this paper, we outline a 'social disruptiveness-based' approach to AI governance aimed at addressing disruptions by AI that are not easily captured by legal regulation, but that are nonetheless of great societal and ethical concern. We argue that integrating the AI Act risk-based approach with a social disruptiveness-based approach can offer a more nuanced understanding of the dimensions of impact of AI systems on society at large, thus enhancing the governance of AI and other socially disruptive technologies.