Prediction is the new horizon of research on artificial forms of intelligence. Indeed, ChatGPT and generative AI have sparked a heated (and often alarmed) debate about their alleged intelligence, a debate rooted in what is in fact their ability to predict the most appropriate or likely continuation of a conversation. A crucial aspect of this new kind of prediction is that it differs profoundly from the probabilistic forms of forecasting familiar to our society since modernity. Our conference brings together leading researchers to investigate its innovative features.
The predictive ability of algorithms is becoming increasingly independent of understanding. Advanced machine learning models work with large amounts of data that they do not understand, and are often themselves incomprehensible. Yet their results have immediate practical relevance because in many cases they refer not to populations and averages but to specific individual cases: single patients, credit applicants, risky technologies. New forms of transparency are needed, and the emerging field of Explainable AI addresses this, but we also need reliability criteria that allow us to trust the results of processes we do not understand.
From a sociological perspective, this raises urgent problems of control, regulation, management of opacity, and identification of bias and its consequences in the implementation of results. The conference investigates these issues from an interdisciplinary perspective, combining the analysis of algorithmic prediction with current debates on its concrete impact on different social domains: on medicine, insurance, and law; on policing practices; on public policy; and on the media. The goal is to provide elements for a comprehensive analysis of prediction in digital society.
This interdisciplinary workshop addressed head-on the relation between social theory and insurance. To foreground, articulate, and develop the conceptual presuppositions of research in this field, we brought together sociologists, historians, legal scholars, and actuarial mathematicians whose recent and current work touches on these issues: Tom Baker (Pennsylvania), Laurence Barry (Paris), Stephen J. Collier (Berkeley), Alberto Cevolini (Bologna), Elena Esposito (Bielefeld), Pierre François (Paris), Geoffrey W. Clark (New York), Niels Viggo-Haueter (Zürich), Ine Van Hoyweghen (Leuven), Rebecca Elliott (London), Liz McFall (Edinburgh), Turo-Kimmo Lehtonen (Tampere), Vera Linke (Hamburg), Maiju Tanninen (Tampere), Gert Meyers (Leuven), Giancarlo Corsi (Bologna), Jonas Mieke (Bielefeld).
The written record of the workshop can be found here.
The traditional business model of insurance is going through a disruptive change. InsurTech and predictive analytics are raising high expectations because they can offer personalized policy premiums adapted to the individual level of risk. While the personalization of premium setting represents an opportunity, however, it could also become a threat, because it calls into question the principle of risk pooling and spreading on which the whole insurance mechanism is based. This international interdisciplinary workshop brought together actuaries, practitioners, and sociologists to address the possible socio-economic, ethical, and political consequences of algorithmic techniques in the field of insurance. The program can be found here.