The Interdisciplinary Institute for Societal Computing offers a regular Lecture Series to bring together researchers from different academic fields to analyze and discuss the broad topic of society and technology. The Lecture Series is designed as a laboratory of interdisciplinary research to encourage cooperation and new research approaches. The series features a mix of speakers from Computer Science, Social Science, and Digital Humanities.
April 11, 2025
Pranav A (NLP & AI Ethics, University of Hamburg)
Policy Impact: From AI Regulation to Academic Inclusion
April 25, 2025
Asmelash Teka Hadgu (Low-resource NLP, DAIR/lesan.ai)
Beyond the AI Hype: Designing Machine Learning Systems that Serve Communities
May 9, 2025
Theresa Gessler (Political Science, Europa-Universität Viadrina)
Six Degrees of Slavery: Measuring Slave-Ownership and Elite Persistence in Britain
May 23, 2025
Jan Lause (Computational Neuroscience, University of Tübingen)
Delving into ChatGPT usage in academic writing through excess vocabulary
June 6, 2025
Tanise Ceron (NLP, Bocconi University)
How to reinforce democratic values in language systems?
June 27, 2025
Zeerak Talat (Responsible Machine Learning and AI, University of Edinburgh)
Is machine learning a Woke, Neutral, or Fascist technic?
July 4, 2025
Isabel Valera (Computer Science, Saarland University)
Society-centered AI: An Integrative Perspective on Algorithmic Fairness
The Lecture Series takes place in building E1 7, Room 3.23, on the campus of Saarland University, from 12:00 to 13:00.
If you want to meet one of our speakers on the day of the event, please contact us: hello[@]i2sc.net
If you are interested in our events and want to stay up to date, please subscribe to our mailing list here.
For guest lectures outside this lecture series, please see our Guest Lectures page.
April 11, 2025
Policy Impact: From AI Regulation to Academic Inclusion
This talk examines how policies affect people across multiple domains. I analyze surveillance loopholes that German regulators introduced into the EU AI Act, which potentially compromise civil liberties. The discussion then critiques academic name-change policies that create unnecessary barriers for transgender and queer scholars. Finally, I evaluate academic publishing frameworks that exclude underrepresented communities and propose practical reforms to create more equitable spaces. Throughout, I demonstrate how seemingly neutral policies can cause harm when developed without input from marginalized communities.
April 25, 2025
Beyond the AI Hype: Designing Machine Learning Systems that Serve Communities
Technological advancement, particularly Artificial Intelligence, is often accompanied by significant hype and grand promises. But who do these technologies serve? I will motivate this talk by conducting a reality check of current technologies, such as chatbots, social media platforms, search engines, and knowledge bases for languages spoken by millions in the Horn of Africa. Drawing from hands-on experience, I will then explore the development of machine learning systems for underrepresented languages, focusing on Machine Translation and Automatic Speech Recognition for Amharic and Tigrinya. These case studies offer insight into the technical and social challenges involved, as well as the broader implications for the communities these technologies are intended to serve. I will conclude by discussing how we evaluate machine learning systems. Evaluation is central to defining scientific progress in the field, with benchmarks playing a critical role. However, benchmarks can also be strategically manipulated or misused. I will highlight how these issues manifest in practice and what they reveal about the incentives and priorities driving current AI research.
May 9, 2025
Six Degrees of Slavery: Measuring Slave-Ownership and Elite Persistence in Britain
Although the legacies of slavery in Britain have received increasing attention in recent years, little evidence exists on how such legacies have shaped the country’s political elite. This project seeks to quantify and chart these ties over time, using a novel computational technique harnessing Wikidata, a knowledge graph closely connected to Wikipedia, as a means of operationalizing historical proximity between individuals. This technique enables the measurement of personal proximity to slavery, including via familial and social connections. Aggregating this data for politicians across a series of historical parliaments demonstrates how links to slavery persisted via dynastic and other social ties into the 20th Century and the extent to which ties to slavery still exist among contemporary politicians. We demonstrate how slavery-backed networks have persisted more thoroughly in the Conservative Party and the House of Lords than in other institutions, and explore the robustness of this persistence to historical periods of reform. The results emphasize the extent to which slavery and its legacies continue to play an active role in British politics, as well as offering an efficient and reliable new method for measuring social proximity using online data.
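The proximity measure sketched in the abstract can be illustrated as a shortest-path computation over a knowledge graph. The toy sketch below uses a hypothetical graph with made-up node names; the actual project operationalizes these ties via Wikidata, not this structure:

```python
from collections import deque

def degrees_of_separation(graph, source, target):
    """Breadth-first search over an undirected tie graph, returning
    the number of hops between two individuals (None if unconnected)."""
    if source == target:
        return 0
    seen = {source}
    queue = deque([(source, 0)])
    while queue:
        node, dist = queue.popleft()
        for neighbour in graph.get(node, ()):
            if neighbour == target:
                return dist + 1
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None

# Hypothetical familial/social ties -- illustrative only, not real Wikidata entries.
ties = {
    "politician_A": ["ancestor_B"],
    "ancestor_B": ["politician_A", "slave_owner_C"],
    "slave_owner_C": ["ancestor_B"],
}

print(degrees_of_separation(ties, "politician_A", "slave_owner_C"))  # 2
```

Aggregating such hop counts across all members of a parliament would then give a per-institution measure of proximity to slavery, as described in the abstract.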
May 23, 2025
Delving into ChatGPT usage in academic writing through excess vocabulary
Recent large language models (LLMs) can generate and revise text with human-level performance and have been widely commercialized in systems like ChatGPT. These models come with clear limitations: they can produce inaccurate information, reinforce existing biases, and be easily misused. Yet, many scientists have been using them to assist their scholarly writing. How widespread is LLM usage in the academic literature currently? To answer this question, we use an unbiased, large-scale approach, free from any assumptions about academic LLM usage. We study vocabulary changes in 14 million PubMed abstracts from 2010-2024 and show how the appearance of LLMs led to an abrupt increase in the frequency of certain style words. Our analysis based on excess word usage suggests that at least 10% of 2024 abstracts were processed with LLMs. This lower bound differed across disciplines, countries, and journals, and was as high as 30% for some PubMed sub-corpora. We show that the appearance of LLM-based writing assistants has had an unprecedented impact on the scientific literature, surpassing the effect of major world events such as the COVID-19 pandemic.
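The excess-vocabulary idea can be sketched as comparing a word's observed frequency against a counterfactual frequency extrapolated from pre-LLM years. The numbers and the linear extrapolation below are illustrative assumptions, not figures or methods taken from the study:

```python
def excess_frequency(observed, expected):
    """Excess usage of a word: observed frequency minus the
    counterfactual frequency predicted from pre-LLM trends.
    Positive values indicate usage above trend."""
    return observed - expected

# Hypothetical per-10,000-abstract frequencies for a style word
# (e.g. "delve"); values are made up for illustration.
freq_2021, freq_2022 = 2.0, 2.4
expected_2024 = freq_2022 + 2 * (freq_2022 - freq_2021)  # naive linear extrapolation
observed_2024 = 25.0

print(excess_frequency(observed_2024, expected_2024))
```

Summing such excesses over many style words yields a lower bound on the share of abstracts processed with LLMs, which is the quantity the abstract reports.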
June 6, 2025
How to reinforce democratic values in language systems?
Two key values in democratic societies are to facilitate easy access to a broad range of information and, in parallel, to foster an environment where a diverse range of opinions can thrive. In this presentation, I will talk about two sides of my research that touch on these aspects: investigating how to diversify news recommendations and understanding the stakes of accessing information through language systems such as ChatGPT. I will present ways to provide users with different perspectives and discuss the challenges involved. Finally, I will present initial results on how users look for information via language systems in contrast with traditional web search engines.
June 27, 2025
Is machine learning a Woke, Neutral, or Fascist technic?
Since the mid-2010s there have been concerns about the social biases embedded in the representational space of machine learning models, which have led to a flourishing field addressing the social harms of machine learning. This, in turn, has led to conflicting claims of machine learning being discriminatory, "woke," and value-neutral. In this lecture I will discuss the history of natural language processing and machine learning, and examine methods for developing machine learning tools for human and social data. I argue that claims of wokeness or even neutrality inherently misunderstand (a) the methods used by machine learning and (b) the social and historical contexts under which machine learning technologies are developed. Finally, I will close with a discussion of how machine learning infrastructures maintain and reify colonial legacies of displacing costs onto colonial bodies while centralizing benefits in the heart of the empire.
July 4, 2025
Society-centered AI: An Integrative Perspective on Algorithmic Fairness
In this talk, I will share my never-ending learning journey on algorithmic fairness. I will give an overview of fairness in algorithmic decision-making, reviewing the progress and wrong assumptions made along the way, which have led to new and fascinating research questions. Most of these questions remain open to this day and become even more challenging in the era of generative AI. Thus, this talk will provide only a few answers but many open challenges to motivate the need for a paradigm shift from owner-centered to society-centered AI. With society-centered AI, I aim to bring the values, goals, and needs of all relevant stakeholders into AI development as first-class citizens to ensure that these new technologies are at the service of society.