International Conference on Computer Ethics
http://journals.library.iit.edu/index.php/CEPE2023
<p>The <strong>CEPE</strong> conference series is recognized as one of the premier international events on computer and information ethics, attended by delegates from all over the world.</p> <p>Conferences are held about every 24 months, alternating between Europe and the United States. INSEIT is the main organizer and sponsor of the CEPE series.</p> <p>CEPE 2023, the International Conference on Computer Ethics: Philosophical Enquiry 2023, was hosted by Illinois Institute of Technology in Chicago, IL.</p>
Illinois Institute of Technology
en-US
International Conference on Computer Ethics
Theoretical Underpinnings of Virtual Reality: From Second Life to Meta
http://journals.library.iit.edu/index.php/CEPE2023/article/view/279
<p>Since Facebook’s transition and rebranding to ‘Meta’ in October 2021, there has been renewed academic and societal interest in the notions of ‘metaverse,’ ‘virtual reality’ (VR), and ‘virtuality’ (see e.g., Novak, 2022; Gent, 2022). This renewed interest recalls the debates around three-dimensional social virtual worlds such as Second Life in 2007.</p> <p>This paper has a two-fold conceptual aim. <em>First</em>, it presents a critical synthesis of how late-twentieth and twenty-first-century philosophers and media theorists have conceptualised virtuality and its relation to reality, in the context of VR. The analysis carefully distinguishes seven theories.</p> <p>The <em>second </em>part focuses on a comparison (similarities and dissimilarities) between Second Life and Meta. The starting points are four conceptualisations of virtuality: an ontological, a phenomenological (in terms of subjective embodied experience), a cultural, and a technological conceptualisation (e.g., VR; augmented reality).</p> <p>Ultimately, both aims and parts seek to contribute to a better and more nuanced understanding of the theoretical underpinnings of the current academic and societal discussions about Meta.</p>Kathleen Gabriels
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Trust Through Explanation? On the claim for explainable medical decision support systems
http://journals.library.iit.edu/index.php/CEPE2023/article/view/263
<p>Extended Abstract</p>Sebastian Schleidgen
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Framing Effects in the Operationalization of Differential Privacy Systems as Code-Driven Law
http://journals.library.iit.edu/index.php/CEPE2023/article/view/264
<p>n/a</p>Jeremy Seeman
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
A Labor History of Health Records: On Medical Scribes and the Ethics of Automation
http://journals.library.iit.edu/index.php/CEPE2023/article/view/265
<p>This paper explores the human labor demands that underpin the utility of patient health records. I examine where these labor demands originated historically, and I consider how they might evolve, given the recent rise of artificial intelligence (AI) being developed to automate the collection and categorization of patient health information. Using a sociotechnical framework, the paper identifies a complicated paradox: the labor of medical scribes has become crucial for the benefits of electronic health records (EHR) to be realized; simultaneously, scribe work has been regarded in medical literature as inconspicuous and transitory, a stopgap measure wholly replaceable by a more efficient solution. The paper thus critically interrogates the premise that automation can replicate and replace scribe labor, and it examines the ethics of moving toward a fuller reliance on AI.</p>Sara M. B. Simon
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Responsibility Before Freedom: closing the responsibility gaps for autonomous machines
http://journals.library.iit.edu/index.php/CEPE2023/article/view/266
<p>The introduction of autonomous machines (AMs) in human domains has raised challenging questions about the attribution of responsibility, referred to as the <em>responsibility gap</em>. In this paper, we address the gap by positing that entities cannot be granted the freedom of action unless they can also recognise the same right for others—and be subject to blame or punishment in cases of undermining the rights of others. Since AMs fail to meet this criterion, the users who utilize an AM to pursue their goals can instead grant the machine <em>their </em>(the user’s) right to act autonomously on their behalf. Thus, an AM’s right to act freely hinges on the user’s duty to recognise others’ right to be free. Since responsibility is attributed <em>before </em>an entity is given the freedom to act, the responsibility gap only arises when we ignore the fact that AMs have no right to act freely on their own. We also discuss some attractive features of the approach, address some potential objections, and compare our theory to existing proposals. We conclude by arguing that holding users responsible for the behaviour of AMs promotes a responsible use of AI while indirectly motivating companies to make safer machines.</p>Shervin Mirzaeighazi, Jakob Stenseke
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Social Media Algorithms and Social Distrust
http://journals.library.iit.edu/index.php/CEPE2023/article/view/267
<p>Extended Abstract</p>Heather Stewart
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
War or Peace Between Humanity and Artificial Intelligence
http://journals.library.iit.edu/index.php/CEPE2023/article/view/268
<p>Extended Abstract</p>Wolfhart Totschnig
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
An Examination of Doctors’ Attitude Toward Medical AI: Turkey Sample
http://journals.library.iit.edu/index.php/CEPE2023/article/view/269
<p>Extended Abstract</p>Seda Turan
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Causes and Reasons – Decisions, Responsibility, and Trust in Techno-Social Interactions
http://journals.library.iit.edu/index.php/CEPE2023/article/view/270
<p>The interaction between humans and AI creates a new type of interaction that goes beyond subject–object relations. AI technologies cannot always be described as conventional objects, due to their capacity for activity and their black-box character. An additional category is created, which is outlined by the ‘<em>sobject approach’</em>. This creates the opportunity to study the human-like characteristics of the interaction on the part of the AI. The ‘<em>social’ </em>possibilities of AI can thus be brought into focus by referring to ‘<em>techno-social’ </em>rather than ‘<em>social’ </em>interactions, since these possibilities differ from human sociality but exist in the human-social lifeworld. If an AI is a techno-social interaction partner, it can ‘<em>act’ </em>and make ‘<em>decisions’</em>. The additional category can therefore be used to investigate what types of decisions there are, whether they are based on ‘reasons or causes’, whether they can be ‘trusted’, and whether one can assign or delegate ‘<em>responsibility’ </em>to such technology. Thus, classical ethical questions regarding subjective categories like decision-making, trust and trustworthiness, and responsibility can be rethought for somewhat human-like but not human technologies like AI.</p>Larissa Ullmann
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Rebalancing the digital convenience equation through narrative imagination
http://journals.library.iit.edu/index.php/CEPE2023/article/view/271
<p>Extended Abstract</p>Fernando Nascimento, Anya Workman
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
The Normative Side of Building Friendship with AI Companions
http://journals.library.iit.edu/index.php/CEPE2023/article/view/272
<p>Extended Abstract</p>Tugba Yoldas
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
MIIND and HEART: Measuring and designing for thicker qualities of user experience
http://journals.library.iit.edu/index.php/CEPE2023/article/view/274
<p>In this paper, we discuss our interdisciplinary approach to developing a new framework for evaluating how design elements of digital technologies interact with joy in user experience. We explain why this framework is needed, given a distinction between thin and thick engagement in user experience. We elaborate on our framework in light of four case studies we conducted or supervised, showing how cognitive and normative elements of user experience might be better engaged with in UX frameworks, and in the design and use of technology more generally.</p>Rachel Siow Robertson, Jennifer George, Matthew Kuan Johnson
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Digital Transformations of Democracy: Requirements for Successful Problem Solving in the Age of Anthropocene
http://journals.library.iit.edu/index.php/CEPE2023/article/view/275
<p>Extended Abstract</p>Jan-Philipp Kruse
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Understanding Freedom in the Age of Machines: What Does It Mean to Be Digitally Free?
http://journals.library.iit.edu/index.php/CEPE2023/article/view/276
<p>The 21st century has ushered in a spate of disruptive digital technologies that are enabling us to exercise our freedoms in new ways. But with these new freedoms come new challenges, new ways in which these technologies are actually making us <em>unfree</em>, and we still don’t have a full picture of what that means. So how are we to engage with these technologies in the meantime? I will not suggest that we get rid of them, but I will explore the idea that we can exercise our freedoms not only by using these technologies but also by choosing <em>not </em>to use them. But that in turn raises a new question, namely: What exactly is this negative freedom from digital technologies, and how plausibly can we exercise it? In exploring this question, I consider the language of rights that has been used to articulate this negative freedom—with a focus on the rights set forth in data-protection, labour, and administrative law—suggesting that while these rights are far from the perfect tool for exercising our freedom from digital technologies, they nonetheless can go a long way toward that end.</p>Migle Laukyte
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Registering AI-generated Patents: A Revolution in Distributive Justice?
http://journals.library.iit.edu/index.php/CEPE2023/article/view/277
<p>Extended Abstract</p>Naama Daniel
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Data After Death: Remembrance and Resurrection
http://journals.library.iit.edu/index.php/CEPE2023/article/view/278
<p>Extended Abstract</p>Alexis Elder
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Including a Social Perspective In AI Ethics: The Contribution of a Dialogue Between American Pragmatism and Critical Theory
http://journals.library.iit.edu/index.php/CEPE2023/article/view/262
<p>Throughout the history of moral philosophy, theoretical postures have been privileged. Modern ethics is no exception and is indeed characterized by the predominance of voluntarist and universalist frameworks, which are primarily concerned with the actions of the moral agent, with no real regard for the conditions of possibility necessary for the effective realization of moral actions. Recent developments in applied ethics have shown that an integral application of classical ethical frameworks does not adequately address the new moral dilemmas emerging from our different spheres of activity. Artificial intelligence (AI) ethics once again demonstrates the inadequacy of traditional ethical frameworks to deal with the many ethical issues related to the pervasiveness of AI systems. Indeed, the dominant theories in ethics fail to take account of the shared responsibility that characterizes the moral obligations we have towards AI systems. The particularity of pragmatist ethics is that it aims at a practical intervention without, however, renouncing the conceptual clarifications necessary for such an intervention. We will demonstrate how the characteristics of pragmatist ethics avoid certain pitfalls in AI ethics and provide a conceptual framework particularly well suited to address the ethical issues related to the increasing use of AI systems in our societies.</p>Andréane Sabourin Laflamme, Frédérick Bruneault
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Deepfakes, Public Announcements, and Political Mobilization
http://journals.library.iit.edu/index.php/CEPE2023/article/view/280
<p>Extended Abstract</p>Megan Hyska
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Stop at red? Engineering meets ethics
http://journals.library.iit.edu/index.php/CEPE2023/article/view/281
<p>Over the past few years, artificial intelligence has fueled a revolution in several scientific fields. Intelligent agents can now give medical advice, translate spoken language, recommend news, and drive different types of vehicles, to name but a few. Some of these agents need to interact with humans and, hence, need to adhere to their social norms. Safety engineers have always worked with critical systems in which catastrophic failures can occur. They need to make ethical decisions in order to keep the system under some acceptable risk level. In this paper, we will propose an approach to give a value to contrary-to-duty behaviors by introducing a risk aversion factor. We will make use of decision theory with uncertain consequences together with a risk matrix used by safety engineers. We will successfully exemplify this approach with the problem in which an autonomous car needs to decide whether to run a red light or not.</p>Ignacio D. Lopez-Miguel
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Where Law and Ethics Meet: A Systematic Review of Ethics Guidelines and Proposed Legal Frameworks on AI
http://journals.library.iit.edu/index.php/CEPE2023/article/view/282
<p>Extended Abstract</p>Désirée Martin, Michael W. Schmidt
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
People’s Perception and Expectation of Moral Settings in Autonomous Vehicles: An Australian Case
http://journals.library.iit.edu/index.php/CEPE2023/article/view/284
<p>While Autonomous Vehicles (AVs) can handle the majority of driving situations with relative ease, it is indeed challenging to design a system whose safety performance will fit every situation. Technology errors, misaligned sensors, malicious actors and bad weather can all contribute to imminent collisions. If we assume that the widespread use and adoption of AVs is a necessary condition of the many societal benefits that these vehicles have promised to offer, then it is quite clear that any reasonable ethics policy should also consider the various user expectations with which these vehicles interact, and the larger societies in which they are implemented. In this paper we aim to evaluate Australians’ perceptions and expectations of personal AVs in relation to various ethical settings. We do this using a survey questionnaire, where the participants are shown 6 dilemma situations involving an AV and are asked to decide which outcome is the most acceptable to them. We have designed the survey questions with consideration for previous research and have excluded any selection criteria which we believed were biased or redundant in nature. We enhanced our questionnaire by informing participants about the legal implications of each crash scenario. We also provided participants with a randomised choice which we named an Objective Decision System (ODS). If selected, the AV would consider all possible outcomes for a given crash scenario and choose one at random. The randomised decision is non-weighted, which means that all possible outcomes are treated equally. We use the survey analysis to list and prioritise Australians’ preferences for personal AVs dealing with an ethical dilemma, which can help manufacturers in programming AVs and governments in developing AV policies.
Finally, we make some recommendations for future researchers, as we believe such questionnaires can help arouse people’s curiosity about the various ways that an AV could be programmed to deal with a dilemma situation and would encourage AV adoption.</p>Amir Rafiee, Hugh Breakey, Yong Wu, Abdul Sattar
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Overtrust in Algorithms: An online behavioral study on trust and reliance in AI advice
http://journals.library.iit.edu/index.php/CEPE2023/article/view/285
<p>Extended Abstract</p>Phillipp Schreck, Artur Klingbell, Cassandra Grüzner
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Toward Substantive Models of Rational Agency in the Design of Autonomous AI
http://journals.library.iit.edu/index.php/CEPE2023/article/view/286
<p>Extended Abstract</p>Ava Thomas Wright, Jacob Sparks
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Ex-post Approaches to Privacy: Trust Norms to Realize the Social Dimension of Privacy
http://journals.library.iit.edu/index.php/CEPE2023/article/view/287
<p>Extended Abstract</p>Haleh Asgarinia
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Improving AI-mediated Hate Speech Detection: A Genuine Ethical Dilemma
http://journals.library.iit.edu/index.php/CEPE2023/article/view/289
<p>AI-mediated hate speech detection is indispensable for contemporary communication platforms. But it has known deficiencies in terms of bias and context-awareness. I argue that improving on these known deficiencies leads into a genuine ethical dilemma: It will increase the epistemic and social utility of these platforms, while also helping bad faith political and corporate actors to suppress unwelcome speech more swiftly and efficiently.</p>Maren Behrensen
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-12
Do we have Procreative Obligations to AI Superbeneficiaries?
http://journals.library.iit.edu/index.php/CEPE2023/article/view/290
<p>This paper concerns itself primarily with questions about our obligations to AI superbeneficiaries – entities with inherently valuable interests that exceed those of humans in terms of quality and/or quantity. Specifically, this paper deals with questions about whether we have any obligations to bring AI superbeneficiaries into existence, especially if it turns out that human well-being might very well be at stake. I employ an anti-natalist argument to establish that we have all-things-considered moral obligations against bringing AI superbeneficiaries into existence because of the existential risk they pose to their own survival as well as to the survival of humanity.</p>Sherri Conklin
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-12
Humanity Compatible: Aligning Autonomous AI with Kantian Respect for Humanity
http://journals.library.iit.edu/index.php/CEPE2023/article/view/292
<p>Extended Abstract</p>Ava Thomas Wright
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-16
Acceleration AI Ethics and the Debate between Stability AI’s Diffusion and OpenAI’s Dall-E
http://journals.library.iit.edu/index.php/CEPE2023/article/view/293
<p>One objection to conventional AI ethics is that it slows innovation. This presentation responds by reconfiguring ethics as an innovation accelerator. The critical elements develop from a contrast between Stability AI’s Diffusion and OpenAI’s Dall-E. By analyzing the divergent values underlying their opposed strategies for development and deployment, five conceptions are identified as common to acceleration ethics. Uncertainty is understood as positive and encouraging, rather than discouraging. Innovation is conceived as intrinsically valuable, instead of worthwhile only as mediated by social effects. AI problems are solved by more AI, not less. Permissions and restrictions governing AI emerge from a decentralized process, instead of a unified authority. The work of ethics is embedded in AI development and application, instead of functioning from outside. Together, these attitudes and practices remake ethics as provoking rather than restraining artificial intelligence.</p>James Brusseau
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-23
Moral Attribution in Moral Turing Test
http://journals.library.iit.edu/index.php/CEPE2023/article/view/294
<p>This paper argues that the Moral Turing Test (MTT), developed by Allen et al. for evaluating morality in AI systems, is inaptly designed. Different versions of the MTT focus on the conversational ability of an agent but not on the performance of morally significant actions. Arnold and Scheutz also argue against the MTT, stating that without a focus on the performance of morally significant actions, the MTT is insufficient. Morality is mainly about morally relevant actions; it matters little how good a person is at merely conversing about them. When discussing morality, we consider an agent’s ability to perform specific actions in a given moral situation. We show that Allen et al. do not take into account the distinction between the performance of moral attribution and the performance of the morally relevant action. This distinction gives a robust account of assessing the morality of an AI system in the MTT.</p>Jolly Thomas, Mubarak Hussain
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-23
Intercultural Information Ethics Applied To The Data Colonialism Concept
http://journals.library.iit.edu/index.php/CEPE2023/article/view/295
<p>There is a universalized and socially accepted view of order and totality focused on the processing of personal data, categorizing subjects and configuring criticism in a European view that sustains the dynamics of modernity, giving rise to what is called Data Colonialism. This paper explores the relationship between Data Colonialism and Intercultural Information Ethics (IIE), focusing on whether these concepts are connected. The study argues that just as industrial capitalism transformed society by commodifying labor, data capitalism is changing society by commodifying human life through collecting, controlling, and exploiting personal data. This practice contributes to class division and digital colonialism, where digital territories become sites of extraction and exploitation. Data Colonialism and IIE both address issues of informational justice in diverse cultural contexts. IIE can provide insights into analyzing these relationships from the perspective of local cultures on privacy, informed consent, and information sharing, which differ greatly between cultures. IIE seeks to understand and respect different cultural perspectives on information, while Data Colonialism refers to companies and governments exploiting personal data without consent and reproducing colonial power relations. An intercultural ethical approach to information can help analyze the effects of data colonialism and promote justice and equity in different cultural contexts. By recognizing these colonization processes in the digital age, with their ethical implications for the transit of information and cultural differences, we propose to think about this complex network from the perspective of the Intercultural Ethics of Information.</p>Jonas Ferrigolo Melo
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-23
Smart Machines and Wise Guys: Can Intelligent Machines be Wise?
http://journals.library.iit.edu/index.php/CEPE2023/article/view/296
<p>Extended Abstract</p>Edward Spence
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-25
A five-step ethical decision-making model for self-driving vehicles: Which (ethical) theories could guide the process and what values need further investigation?
http://journals.library.iit.edu/index.php/CEPE2023/article/view/297
<p>By choosing a specific trajectory (especially in accident situations), self-driving vehicles (SDVs) will implicitly distribute risks among traffic participants and influence the determination of traffic victims. Acknowledging the normative significance of SDVs’ programming, policymakers and scholars have conceptualized what constitutes ethical decision-making for SDVs. Based on these insights and requirements formulated in contemporary literature and policy drafts, this article proposes a five-step ethical decision model for SDVs during hazardous situations. In particular, this model states a clear sequence of steps, indicates the guiding (ethical) theories that inform each step, and points out a list of values that need further investigation. This model, although neither exhaustive nor definitive, aims to contribute to the scholarly debate on computational ethics (especially in the field of autonomous driving) and serves practitioners in the automotive sector by providing a decision-making process for SDVs during hazard situations that approximates compliance with ethical theories, shared principles and policymakers’ demands. In the future, assessing the actual impact, effectiveness and admissibility of implementing the theories, values and process sketched here requires an empirical evaluation and testing of the overall decision-making model.</p>Franziska Poszler, Maximilian Geisslinger, Christoph Lütge
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-25
Beyond Turing: ethical effects of large language models
http://journals.library.iit.edu/index.php/CEPE2023/article/view/244
<p>Extended Abstract</p>Alexei Grinbaum, Laurynas Adomaitis
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Psychotherapist Bots: Transference and Countertransference Issues
http://journals.library.iit.edu/index.php/CEPE2023/article/view/228
<p>There is a rapid advancement in the development of psychotherapist bots that are based on artificial intelligence. Chatbots and robots may facilitate treatment by reducing barriers and increasing accessibility. Researchers have shown that psychological bots play an effective role similar to traditional face-to-face psychotherapy in reducing depression and anxiety symptoms. Due to the rapid advancement of psychotherapy technology, therapeutic chatbots are likely to become widely used in the near future. In this context, it is essential to consider both the ethical and clinical aspects of bots and chatbots as mental healthcare improvement assistants. The first part of this abstract outlines the concept of transference and countertransference in human-psychotherapist bot interactions. In this novel form of therapy, topics like transference and countertransference need to be discussed, as well as concepts such as empathy, acceptance, judgment, and safety in therapeutic relationships. We attempt to draw attention to the need to revisit clinical and ethical issues related to the interactions between humans and psychotherapist bots.</p>Fatemeh Amirkhani, Zahra Norouzi, Saeedeh Babaii
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Surveillance Culture and Fundamental Rights: The Excluded and the Beneficiaries
http://journals.library.iit.edu/index.php/CEPE2023/article/view/229
<p>Surveillance has progressively grown in social life in the 20th and 21st centuries. This happened partly because of the adoption of multiple sensors that can extract, collect and analyze an enormous volume of data. This expressive data volume, variety, and processing velocity are known as big data. The increasing adoption of big data and models based on algorithmic intelligence has a massive impact on society because of its dissemination among social spheres through relations between the public and private sectors. This paper aims to discuss surveillance culture and its consequences for fundamental rights such as privacy and freedom of speech. In addition, it is intended to debate the excluded and the beneficiaries of a surveillance society. The methodological approach is the literature review. The conclusion relies on the need for intercultural ethics to strengthen the right to privacy, in order to guarantee not only privacy itself but multiple fundamental rights today.</p>Camila Costa, Jonas Ferrigolo Melo
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Governance Conflicts and Public Court Records
http://journals.library.iit.edu/index.php/CEPE2023/article/view/230
<p>Datafication of society has heavily influenced the way in which we use technology and how technology is designed. Social informatics research illustrates that technology use by diverse groups is not neutral. With the increased usage of technology, critical analysis of data and its uses becomes a more significant topic of research. Data governance, as one priority research subarea, varies widely as it is influenced by those who have the power to control it, raising many research questions. Given that data governance differs across types of data, what does that mean for data that is used to train models? How do the implications of data governance shape what models are trained on? This paper seeks to evaluate that relationship through multi-method content analysis of governance documents regarding data and access to public court records in Illinois and California. It seeks to address the gap in research surrounding the ethical impacts of data governance and how those impacts can have larger implications, both positive and negative.</p>Kyra Milan Abrams, Madelyn Rose Sanfilippo
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Can Large Language Models as Chatbots be Social Agents?
http://journals.library.iit.edu/index.php/CEPE2023/article/view/232
<p>Extended Abstract</p>Syed Abumusab
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Sex-bots and touch: what does it all mean for our (human) identity?
http://journals.library.iit.edu/index.php/CEPE2023/article/view/233
<p>Extended Abstract</p>Iva Apostolova
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
A Feminist View of Medical AI Harm
http://journals.library.iit.edu/index.php/CEPE2023/article/view/234
<p>Extended Abstract</p>Clair Baleshta
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Freeing Digital Images at Last?
http://journals.library.iit.edu/index.php/CEPE2023/article/view/235
<p>Extended Abstract</p>Maria Bottis
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Democratic Culture and the Automation of Information: What is really at stake?
http://journals.library.iit.edu/index.php/CEPE2023/article/view/236
<p>Extended Abstract</p>Jason Branford, Eloise Soulier, Laura Fichtner
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Experiencing AI and the Relational ‘Turn’ in AI Ethics
http://journals.library.iit.edu/index.php/CEPE2023/article/view/237
<p>Extended Abstract</p>Jason Branford
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10
Epistemic Injustice and Algorithmic Epistemic Injustice in Healthcare
http://journals.library.iit.edu/index.php/CEPE2023/article/view/238
<p>Extended Abstract</p>Jeffrey Byrnes, Andrew Spear
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 Does it (morally) matter whether the AI machine is conscious?
http://journals.library.iit.edu/index.php/CEPE2023/article/view/239
<p>Extended abstract</p>Kamil Cekiera
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 How do Decision Support Systems Nudge?
http://journals.library.iit.edu/index.php/CEPE2023/article/view/240
<p>Extended Abstract</p>F Pedrazzoli, F.A. D’Asaro, M Badino
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 The Overdemandingness of AI Ethics
http://journals.library.iit.edu/index.php/CEPE2023/article/view/241
<p>Extended abstract</p>Susan Dwyer
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 Deepfakes and Dishonesty
http://journals.library.iit.edu/index.php/CEPE2023/article/view/242
<p>Extended Abstract</p>Tobias Flattery, Christian Miller
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 On the Possibility of Moral Machines: A Reply to Robert Sparrow
http://journals.library.iit.edu/index.php/CEPE2023/article/view/243
<p>Extended Abstract</p>Dane Leigh Gogoshin
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 What is AI Ethics? Ethics as means of self-regulation and the need for critical reflection
http://journals.library.iit.edu/index.php/CEPE2023/article/view/227
<p>In the wake of the recent digital transformation, AI ethics has been put into practice as a means of self-regulation. Current initiatives of ethical self-regulation can be distinguished into different ethical practices, namely ethics as rule setting (codes of conduct), ethics as rule following (value-oriented development), and ethics as rule compliance checking (boards and audits). Drawing from the literature, I demonstrate that these forms of AI ethics are in constant need of normative reflection and deliberation, although the structural conditions under which they are enacted leave very little room to do so. Accordingly, the AI community should think more about how to establish institutional frameworks that can be conducive to cultivating ethics as critical reflection and deliberation.</p>Suzana Alpsancar
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 When AI Moves Downstream
http://journals.library.iit.edu/index.php/CEPE2023/article/view/245
<p>After computing professionals design, develop, and deploy software, what is their responsibility for subsequent uses of that software “downstream” by others? Furthermore, does it matter ethically if the software in question is considered to be artificially intelligent (AI)? The authors have previously developed a model to explore downstream accountability, called the Software Responsibility Attribution System (SRAS). In this paper, we explore three recent publications relevant to downstream accountability, focusing particularly on examples of AI software. Based on our understanding of the three papers, we suggest refinements of SRAS.</p>Frances Grodzinsky, Keith Miller, Marty Wolf
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 AI Opacity vs Patient's Autonomy in Decision-Making
http://journals.library.iit.edu/index.php/CEPE2023/article/view/247
<p>Extended Abstract</p>Jose Luis Guerrero Quinones
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 Automation, Trust, Responsibility in Algorithmic Warfare
http://journals.library.iit.edu/index.php/CEPE2023/article/view/248
<p>Extended Abstract</p>Stefka Hristova
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 Unintelligible Artificial Intelligence and Virtue Ethics
http://journals.library.iit.edu/index.php/CEPE2023/article/view/249
<p>Extended Abstract</p>Mahdi Khalili
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 European or Universal? The European Declaration of Digital Rights in a global context
http://journals.library.iit.edu/index.php/CEPE2023/article/view/250
<p>This paper examines the potential universality of the European Declaration of Digital Rights, which was proposed to protect fundamental rights within the European Union. The investigation provides insights regarding the status of the Declaration as a position statement with a limited binding character, although it could serve as a reference point for future legislation at various levels. Digital Rights (DR) have the character of general principles, and the challenge in implementing them is the question of <em>enforceability</em>. The paper also considers universality in its philosophical, social, ethical, and legal dimensions. Cultural and contextual differences present a challenge to establishing a globally accepted and similarly interpreted set of fundamental rights. Moreover, social values and familiarity with technologies may strongly influence public opinion. Additionally, the interpretation of ethical principles varies across different societies, for whom the question of DR is not uniformly relevant or urgent. Legitimacy is a crucial factor in universality, and Peter Wahlgren’s model provides a useful analytical framework that includes political, legal, cultural, functional, and internal rationalities as crucial facets of legitimacy. The paper suggests two paths for future discussions of universality in DR: a <em>normative approach </em>that focuses on identifying fundamental and universal values, and an <em>empirical approach </em>that seeks values that are widely accepted and uses them as criteria for universality.</p>Maksymilian Kuźmicz
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 Building an Algorithmic Advisory System for Moral Decision Making in the Health-Care Sector
http://journals.library.iit.edu/index.php/CEPE2023/article/view/251
<p>Extended Abstract</p>Lukas J. Meier, Alice Hein
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 From HHI to HRI: Which Facets of Ethical Decision-Making Should Inform a Robot?
http://journals.library.iit.edu/index.php/CEPE2023/article/view/252
<p>Extended Abstract</p>Jason Borenstein, Arthur Melo Cruz, Alan Wagner, Ronald Arkin
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 AI explicability in medicine and healthcare: fighting against the return to the paternalism
http://journals.library.iit.edu/index.php/CEPE2023/article/view/253
<p>Extended Abstract</p>Lorella Meola
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 Can we be friends with AI? What risks would arise from the proliferation of such friendships?
http://journals.library.iit.edu/index.php/CEPE2023/article/view/254
<p>In this paper we analyse friendships between humans and artificial intelligences, exploring the various arguments that have been or could be offered against the value of such friendships, and arguing that these objections do not stand up to critical scrutiny. As such, we argue that there is no good in-principle reason to oppose the development of human-AI friendships (although there may be some practical, defeasible reasons to worry about such friendships becoming widespread). If we are right, there are important implications for how friendship is conceptualised and valued in modern times. Furthermore, if human-AI friendships are in principle valuable, the moral responsibilities for how governments and corporations should act with regard to AI friends are quite different from those generated by human-AI friendships being disvaluable.</p>Nick Munn, Dan Weijers
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 Interpreting Ordinary Uses of Psychological and Moral Terms in the AI
http://journals.library.iit.edu/index.php/CEPE2023/article/view/255
<p>Extended Abstract</p>Hyungrae Noh
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 Which Method for Engineering Concepts and Technologies?
http://journals.library.iit.edu/index.php/CEPE2023/article/view/256
<p>Extended Abstract</p>Irene Olivero
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 Epistemology and Algo-reliabilism: A Pathway to Sound Ethical Artificial Intelligence
http://journals.library.iit.edu/index.php/CEPE2023/article/view/257
<p>Extended Abstract</p>Helen Titilola Olojede
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 An Investigation in the (In)Visibility of Shadowbanning
http://journals.library.iit.edu/index.php/CEPE2023/article/view/259
<p>Extended Abstract</p>Amanda Pinto
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 Ethical and governance considerations for genomic data sharing in the development of medical technologies for melanoma - The iToBoS Project.
http://journals.library.iit.edu/index.php/CEPE2023/article/view/260
<p>Balancing the risks and benefits of using genomics data in health service provision is a complex task. Social, ethical, and legal considerations are nuanced, often complicated by the fact that regulations lag behind the rapid pace of technological development. Ethical considerations such as autonomy, beneficence, and non-maleficence are weighed against (and within) complex concepts such as privacy, security, safety, and proportionality. This paper will discuss the European H2020-funded project iToBoS<a href="#_ftn1" name="_ftnref1"><sup>[1]</sup></a>, in which an AI diagnostic platform for the early detection of melanoma is being developed. Ensuring that the project's solutions are produced in an ethically and socially responsible manner, with regulatory compliance at their core, is one of the project's primary goals. This paper will communicate the existing tensions within the health sector, including between the European Commission’s desire for open data – governed through its proposed Digital Strategy and practically achieved through the creation of a European Health Data Space<a href="#_ftn2" name="_ftnref2"><sup>[2]</sup></a> – and the risks inherent in the generalised sharing of genomics (and other health-related) data.</p> <p><a href="#_ftnref1" name="_ftn1"><sup>[1]</sup></a> This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 965221. More information may be found at: https://itobos.eu/</p> <p><a href="#_ftnref2" name="_ftn2"><sup>[2]</sup></a> https://health.ec.europa.eu/ehealth-digital-health-and-care/european-health-data-space_en</p>Robin Renwick, Niamh Aspell
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11 Do Children Dream of Connected Watches?
http://journals.library.iit.edu/index.php/CEPE2023/article/view/261
<p>As our society relies increasingly on artificial intelligence in day-to-day life, we have very limited knowledge and control of its uses and consequences for our representations, values, behaviors, lifestyles, etc. However, it deeply affects and shapes our relationship to the world: how we interact with others and our environment (e.g. <em>smart</em> devices that we wear or have installed in our homes), how we perceive space and time, what control we have over our sleep, health, etc. Since AI designs and uses are developed by companies that are mainly economically motivated, the examination of the conceptual, anthropological, and moral impacts of the emergent experiences to which our interactions with AI contribute is generally overlooked. There is an urgent need to study these issues in order to give citizens back the possibility to decide in which society they want to live, what values should be promoted, and what relationships to others and the environment should be favored. In this paper, I study the power relations between humans and connected objects.</p>Lisa Roux
Copyright (c) 2023 International Conference on Computer Ethics
2023-05-10 2023-05-10 11