17 Jul 25
YAF Lecture

Democracy and AI

This YAF lecture explores how the use of Generative Artificial Intelligence tools influences society at large and public discourse, and asks what regulatory measures or other societal adaptations are necessary to sustain a democratic society. After two keynote talks by the distinguished speakers, a moderated discussion will follow.

Welcome address: Prof. Dr. Stefan Oeter
Professor for Public Law, International Law, and Foreign Public Law, Universität Hamburg and Vice-President of the Akademie der Wissenschaften in Hamburg

  • Talk: Prof. Dr. Judith Simon
    Professor for Ethics in Information Technology, Universität Hamburg
    Dis/Trusting Generative AI? Assessing four types of deception through Generative AI – Large Language Models and other types of Generative AI have taken the world by storm, intensifying debates on the ethics of AI in general and notions of trustworthy Generative AI in particular. In my talk, I propose the notion of quadruple deception to capture a distinctive feature of Generative AI with significant epistemological, ethical and political implications: 1) deception regarding the ontological status of one’s interactional counterpart, 2) deception regarding the capacities of AI, 3) deception through content created with Generative AI, as well as 4) deception resulting from the embedding of Generative AI into all sorts of software systems. Arguing that deception poses a threat to placing trust wisely, I assess the epistemic, ethical and political implications of these four types of deception. I will end with some conclusions on how to increase the trustworthiness of Generative AI to enable more justified trust in such technologies.
     
  • Talk: Prof. Dr. Sandra Wachter
    Professor of Technology and Regulation, University of Oxford & Hasso Plattner Institute
    Do large language models have a legal duty to tell the truth? – Large language models (LLMs) do not distinguish between fact and fiction. They will return an answer to almost any prompt, yet factually incorrect responses are commonplace. Our tendency to anthropomorphise machines and trust models as human-like truth tellers, consuming and spreading the bad information they produce in the process, is uniquely worrying. They are not, strictly speaking, designed to tell the truth. Yet they are deployed in many sectors where truth and detail matter, such as education, science, health, the media, law, and finance. I coin the idea of “careless speech” as a new type of harm created by LLMs that poses cumulative, long-term risks to science, education, and shared social truth in democratic societies. These subtle mistruths are poised to cumulatively degrade and homogenize knowledge over time. This raises the question: do large language models have a legal duty to tell the truth? I will show the prevalence of hallucinations, and I will assess the existence of truth-related obligations in EU human rights law and the Artificial Intelligence Act, the Digital Services Act, the Product Liability Directive, and the Artificial Intelligence Liability Directive. I will close by proposing ideas for how to reduce hallucinations in LLMs.
  • Moderation of a short panel with Prof. Simon and Prof. Wachter: Prof. Dr. Christian Herzog
    Professor for the Ethical, Legal and Social Aspects of AI, Universität zu Lübeck, and Young Academy Fellow (YAF) of the Akademie der Wissenschaften in Hamburg


Participation is free but registration is required. You can register here:
https://cloud.adwhh.de/index.php/apps/forms/s/XAzYtEA3y5s6Pc6t38ot9tqf

You will receive a confirmation within 24 hours. Note that space is limited and spots are given on a first-come, first-served basis. For questions or an alternative way of registering, please send an email to d.sarikaya(at)uni-luebeck.de. Any changes will also be communicated via www.awhamburg.de.

West Wing of the Main building (ESA West) of the Universität Hamburg, Room 221
Edmund-Siemers-Allee 1
20146 Hamburg

Thursday, 17 July 2025 at 18:15



Contact

Veronika Schopka
Phone: +49 40 42948669-12
Email: veranstaltungen(at)awhamburg.de

If you would like to be informed regularly about the Akademie's public events, please write to veranstaltungen(at)awhamburg.de and share your contact details with us.