
Hybrid Workshop

LLMs and the Patterns of Human Language Use

DATE: 29-30 August 2024

LOCATION: Weizenbaum-Institut, TU Berlin & online

ORGANIZERS: Anna Strasser, Bettina Berendt, Christoph Durt, Sybille Krämer

funded by the DFG

CALL FOR PAPERS

– Hybrid Workshop in Berlin: 'LLMs and the Patterns of Human Language Use' –
Deadline: 15.4.2024

LLMs and the Patterns of Human Language Use
Large Language Models (LLMs) such as ChatGPT and other generative AI systems are the subject of widespread discussions. They are often used to produce output that 'makes sense' in the context of a prompt, such as completing or modifying a text, answering questions, or generating an image or video from a description. However, little is yet known about the possibilities and implications of human-sounding machines entering human communication. The seemingly human-like output of LLMs masks a fundamental difference: LLMs model statistical patterns in huge text corpora, patterns that humans are not normally aware of. Humans do perceive patterns at various levels, but when we produce ordinary language, we do not explicitly compute statistical frequency distributions.
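To make the contrast concrete, here is a minimal, purely illustrative sketch (not part of the workshop materials) of what 'explicitly computing statistical frequency distributions' over text means in the simplest possible case: a bigram model over a toy corpus. Actual LLMs learn vastly richer patterns with neural networks trained on huge corpora, but the underlying idea of modeling which tokens tend to follow which is the same:

    # Illustrative sketch only: a toy bigram model of next-word frequencies.
    # Real LLMs use neural networks over huge corpora; this merely shows what
    # an explicit statistical frequency distribution over text looks like.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate".split()  # toy corpus (assumption)

    # Count how often each word follows each preceding word.
    bigram_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigram_counts[prev][nxt] += 1

    # Convert counts into a conditional frequency distribution P(next | prev).
    def next_word_distribution(prev):
        counts = bigram_counts[prev]
        total = sum(counts.values())
        return {word: n / total for word, n in counts.items()}

    print(next_word_distribution("the"))  # {'cat': 0.666..., 'mat': 0.333...}

Humans producing the sentence "the cat sat on the mat" perform nothing like this explicit counting; the sketch simply makes visible the kind of computation that stands behind LLM text production.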
The workshop aims at an interdisciplinary, philosophical understanding of how LLMs process statistical patterns and of the possible function of these patterns in communicative exchange. In the controversy about the communicative potential of LLMs, we start from the thesis that LLMs do not understand meaning and investigate the extent to which they can nevertheless play a role in communication when people interact with them. To this end, concrete examples of LLM applications, such as the use of LLMs in software engineering and for whistleblower protection, will be explained by and discussed with their developers. This is important not only for a better understanding of the kinds of exchanges that are possible with LLMs, but also for the question of how far we can trust them and which uses are ethically acceptable.

Confirmed speakers are:
Bettina Berendt & Dimitri Staufer, Stefania Centrone, Christoph Durt & Tobias Hey, Elena Esposito, David Gunkel, Sybille Krämer, Geoffrey Rockwell, Anna Strasser

Some presentation slots are reserved for submissions to this CFP.

We are looking for international contributions from computer science and philosophy and invite you to submit an extended abstract. Given the interdisciplinary nature of the subject, we encourage joint presentations by researchers from different disciplines.

To address these issues, the following questions could serve as a starting point:
(1) Are there certain types of language games that can be modeled almost perfectly by LLMs, and are there others that resist computational modeling?
(2) What kinds of patterns in human language use, widely recognized as key features of cultural evolution, can be modeled by computations of statistical patterns?
(3) What is the relationship between patterns and rules?
(4) What role do patterns play for LLMs, and what role do they play for humans?
(5) Are there examples of successful human-human communication where understanding cannot be attributed to all participants?
(6) Given that the text production of LLMs is so radically different from that of humans, to what extent can communicative principles such as trust and reliability be applied to human-machine interaction?

  • Please send an abstract of 500 to at most 1,000 words as a PDF or Word document, plus a short biographical note, to: berlinerdenkwerkstatt@gmail.com
  • Please use the following subject line when submitting: Submission for LLMs in Berlin
  • Deadline: 15.4.2024

The workshop is funded by the DFG, and travel & accommodation costs can be reimbursed under the usual conditions. For environmental reasons, however, we also welcome remote participation, especially as an alternative to transatlantic flights.

Confirmed speakers with preliminary titles:

  • SYBILLE KRÄMER (Leuphana University of Lüneburg): How should the generative power of LLMs be interpreted? A reflection on the 'cultural technique of flattening' as an anthropotechnical potential.
  • BETTINA BERENDT (TU Berlin / Weizenbaum Institute) & DIMITRI STAUFER (TU Berlin / Weizenbaum Institute): Anonymizing without losing communicative intent? LLMs, whistleblowing, and risk-utility tradeoffs.
  • ANNA STRASSER (DenkWerkstatt Berlin / LMU Munich): What you can learn from developmental psychology for dealing with non-understanding LLMs
  • CHRISTOPH DURT (University of Heidelberg) & TOBIAS HEY (Karlsruhe Institute of Technology (KIT)): LLMs in software engineering: trace link recovery & the problem of relevance
  • ELENA ESPOSITO (Bielefeld University / University of Modena-Reggio Emilia): Communication with nonunderstandable machines
  • GEOFFREY ROCKWELL (University of Alberta): ChatGPT: Chatbots can help us rediscover the rich history of dialogue
  • DAVID GUNKEL (Northern Illinois University): Literary Theory for LLMs
  • STEFANIA CENTRONE (TU Munich): Machine Translation, Problem Solving, Pattern Recognition: An Historical-Phenomenological Analysis

THE ORGANIZERS

ANNA STRASSER (DenkWerkstatt Berlin / LMU Munich)
BETTINA BERENDT (TU Berlin / Weizenbaum Institute)
CHRISTOPH DURT (University of Heidelberg)
SYBILLE KRÄMER (Leuphana University of Lüneburg)


Longer description

LLMs based on generative AI are often used to produce output that makes sense in relation to a prompt, such as completing or modifying a text, or producing an image or video from a description. But the apparently human-like output masks a fundamental difference: LLMs model statistical patterns in huge corpora of text, patterns that humans are usually either unaware of or only tacitly aware of. Humans do experience patterns at various levels, often quite vividly, but when we produce ordinary language, we do not explicitly compute statistical patterns.
Rather, people make sense of language, although even within a discourse on the same topic, the degree and manner of understanding can vary widely between people. However, meaningful exchange is still possible to some extent, even if the participants have very different understandings of the topic, and some may have no understanding at all. By exploiting statistical patterns in large corpora of text, LLMs produce text that is – to an astonishing degree – grammatical and meaningful to us, and one can expect further surprises. The relationship between meaningful language use and statistical patterns is an open question, and considering it in the context of LLMs promises new insights. This will be important not only for a better understanding of the kinds of exchanges possible with LLMs, but also for questions about how much we can trust them and what uses are ethical.
In this international and interdisciplinary workshop, we will discuss the ways in which, despite the fundamental difference in text production, LLMs can still participate in human language games. Are there certain types of language games that can be modeled almost perfectly by LLMs, and are there others that resist computational modeling? It is widely recognized that patterns are an important feature of human cultural development. What kinds of patterns in human language use can be modeled by computations on statistical patterns? What is the relationship between patterns and rules? What is the role of patterns for LLMs, and what is their role in experience and language? Since LLM text production is so radically different from that of humans, can communicative principles such as trust and reliability apply to human-machine interaction? We will discuss these and other questions, both in terms of the fundamental philosophical issues involved and in terms of concrete new and future applications of LLMs. Philosophical insights on the topic will be brought into discourse with the experience of computer scientists developing new applications.