Precision Medicine (PM) uses advanced Machine Learning (ML) techniques and big data to develop personalized treatments, but healthcare still relies on traditional statistical procedures not targeted at individuals. This study investigates the impact of ML on epidemiology.
A quantitative analysis of the articles in PubMed for the years 2000–2019 was conducted to investigate the use of statistical methods and ML in epidemiology. Using structural topic modelling, two groups of topics were identified and analysed over time: topics closer to the clinical side of epidemiology and topics closer to the population side.
The prevalence curve of topics associated with population epidemiology largely corresponds to the curve of the related statistical methods, while the more dynamic curve of clinical epidemiology broadly follows the trend of algorithmic methods.
The findings suggest that a renewed separation between clinical epidemiology and population epidemiology is emerging, with clinical epidemiology taking more advantage of recent developments in algorithmic techniques and moving closer to bioinformatics, whereas population epidemiology seems to be slower in this innovation.
ESPOSITO, E., ANGELINI, P. & SCHNEIDER, S. (2024). Precision Epidemiology. A Computational Analysis of the Impact of Algorithmic Prediction on the Relationship Between Population Epidemiology and Clinical Epidemiology. Int J Public Health 69:1607396. doi: 10.3389/ijph.2024.1607396.
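For readers curious about the mechanics of such an analysis, the sketch below illustrates the general idea of estimating yearly topic prevalence in a corpus of PubMed abstracts and comparing two groups of topics over time. It is a simplified stand-in only: the study relies on structural topic modelling, whereas this sketch uses plain LDA from scikit-learn, and the corpus, number of topics and topic groupings shown here are hypothetical.

```python
# Simplified sketch: yearly topic prevalence for two groups of topics.
# Plain LDA is used here as a stand-in for structural topic modelling;
# the abstracts, years, topic count and topic groups are hypothetical.

from collections import defaultdict

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical input: one abstract per entry, with its year of publication.
abstracts = [
    "cohort incidence risk factor exposure population mortality follow up",
    "classifier training validation accuracy deep learning prediction model",
]
years = [2003, 2017]

# Bag-of-words representation of the abstracts.
vectorizer = CountVectorizer(stop_words="english", max_features=5000)
X = vectorizer.fit_transform(abstracts)

# Fit the topic model; each row of doc_topics holds a document's topic proportions.
lda = LatentDirichletAllocation(n_components=20, random_state=0)
doc_topics = lda.fit_transform(X)

# Hypothetical assignment of topics to the two groups discussed in the paper.
population_topics = [0, 3, 7]
clinical_topics = [1, 5, 12]

# Average topic proportions per publication year.
totals = defaultdict(lambda: np.zeros(lda.n_components))
counts = defaultdict(int)
for theta, year in zip(doc_topics, years):
    totals[year] += theta
    counts[year] += 1

for year in sorted(totals):
    mean_theta = totals[year] / counts[year]
    print(year,
          "population:", round(float(mean_theta[population_topics].sum()), 3),
          "clinical:", round(float(mean_theta[clinical_topics].sum()), 3))
```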
Are ChatGPT and generative AI a threat or an opportunity for our civilization? The latest algorithms, which seem to become ever more intelligent, intervene in every aspect of our lives and are increasingly difficult for humans to comprehend. Should we be worried, and are we worrying about the right things? How can we control machines we do not understand? When the focus of AI shifts from intelligence to communication, quite different questions arise: since algorithms no longer try to reproduce human intelligence, they have learned to become ever more competent and efficient communication partners. It is now up to us to learn how to communicate with them.
ESPOSITO, E. (2024). Kommunikation mit unverständlichen Maschinen. Unruhe Bewahren. Wien-Salzburg: Residenz Verlag, 2024. https://www.residenzverlag.com/buch/kommunikation-mit-unverstandlichen-maschinen.
Open Access: https://pub.uni-bielefeld.de/record/2988982.
Symposium on Explaining Machines, with contributions by Elena Esposito, Jean-Marie John-Mathews, Dominique Cardon, David Weinberger, Mireille Hildebrandt, Bernhard Rieder, Geoff Gordon, Giovanni Sileno.
Explaining Machines: Social Management of Incomprehensible Algorithms. In Sociologica, 16(3), 2022. Symposium issue: https://sociologica.unibo.it/issue/view/1141.
Elena Esposito's introduction locates the debate about Explainable AI in the history of reflection on AI and outlines the issues discussed in the contributions. ESPOSITO, E. (2022). Explaining Machines: Social Management of Incomprehensible Algorithms. Introduction. Sociologica, 16(3): 1–4. https://sociologica.unibo.it/article/view/16265.
The growing digitisation in our society also affects policing, which tends to make use of increasingly refined algorithmic tools based on abstract technologies. But the abstraction of technology, we argue, does not necessarily entail an increase in abstraction of police work. This paper contrasts the ‘abstract police’ debate with an analysis of police practices that use digital technologies to achieve greater precision. While the notion of abstract police assumes that computerisation distances police officers from their community, our empirical investigation of a geo-analysis unit in a German Land Office of Criminal Investigation shows that the adoption of abstract procedures does not by itself imply a detachment from local reference and community contact. What we call contextual reference can be productively combined with the impersonality and anonymity of algorithmic procedures, leading also to more effective and focused forms of collaboration with local entities. On the basis of our empirical results, we suggest a more nuanced understanding of the digitalisation of police work. Rather than leading to a progressive estrangement from the community of reference, the use of digital techniques can enable experimentation with innovative forms of ‘precision policing’, particularly in the field of crime prevention.
EGBERT, S. & ESPOSITO, E. (2024). Algorithmic crime prevention. From abstract police to precision policing. Policing and Society, 1–14. https://doi.org/10.1080/10439463.2024.2326516.
Although humans have long sought to produce the most accurate predictions possible about the future and apply them to make decisions in the present, recent developments in machine learning have led to predictions assuming an even more important role in society. There is hardly any societal field in which algorithmic forecasts are not carried out and hardly anyone who has not become the subject of forecasts in their life. Against this backdrop, this chapter presents the existing literature on the topic and reflects on how predictive analytics should be understood from a sociological perspective and on the consequences with which it may be associated. It will show that predictive analytics needs to be grasped as a socio-technical constellation because any genuine reference to the future does not result from the technical-analytical process itself but comes about only through human interpretation of the results as future oriented. In addition, the chapter discusses the human role in the collection of data, in the process of algorithmic model construction, and in the selection and creation of patterns. It also highlights the fact that predictions cannot be separated analytically from their final implementation, as this has important consequences that often follow a circular logic and can, ultimately, have discriminatory consequences.
EGBERT, S. (2023). Predictive Analytics: A Sociological Perspective. In Christian Borch & Juan Pablo Pardo-Guerra (eds.), The Oxford Handbook of the Sociology of Machine Learning. https://doi.org/10.1093/oxfordhb/9780197653609.013.32.
Attempts to generate knowledge about the future and to make it usable for decisions in the present have existed for a long time in human history. Modern societies, however, are characterized by a particularly close relationship to the future and use numerous possibilities of (scientific) foreknowledge production to colonize it. Nonetheless, with recent advances in machine learning fuelled by predictive analytics, approaches to predicting the future in order to optimize strategies and actions in the present are becoming even more important. Against this backdrop, I analyse the application of predictive analytics as “prediction regimes,” utilizing the Foucauldian governmentality approach. It is argued that predictive algorithms serve as “rendering devices,” making the future calculable and, hence, governable in the present.
EGBERT, S. (2024). Algorithmic Futures: Governmentality and Prediction Regimes. In S. Egbert, J. Jarke, B. Prietl, Y. Boeva, H. Heuer, & M. Arnold (Eds.), Algorithmic Regimes: Methods, Interactions, and Politics (pp. 265–286). Amsterdam University Press. https://doi.org/10.2307/jj.11895528.15
This article presents the historical evolution and current state of predictive policing in Germany and the more recent development towards platformised policing. At the same time, the argument is made that the early hype surrounding predictive policing was the prerequisite for the current strong demand for the use of data integration and analysis platforms by the police in Germany. This is because the hype surrounding predictive policing has made leading politicians and police officers more receptive to the opportunities and potential of artificial intelligence, machine learning, etc., and has thus laid the foundations for the implementation of more complex platform algorithms. The integration and analysis platforms have a broader influence on police work than the classic spatial predictive policing tools, as they influence not only patrol activities but also investigative activities, as shown by the example of the market leader, the Gotham platform from the US software company Palantir Technologies, in contrast to the predictive policing software PRECOBS. However, these platforms are also associated with substantial risks, which are discussed in the conclusion. With regard to juvenile delinquency, the development of predictive policing into analysis platforms means that young people are increasingly becoming the focus of police algorithms: the spatial predictive policing practised in Germany to date does not target individuals, whereas integration and analysis platforms do.
EGBERT, S. (2024). Algorithmisches Polizieren in Deutschland: Von Predictive Policing zu plattformisierter Polizeiarbeit. Zeitschrift für Jugendkriminalrecht und Jugendhilfe (2): 122-130.
Is the future simply open or can it be made more or less open? The awareness of the uncontrollable impact of present action on the future has recently raised a debate about the risks of innovation and rational planning. Relying on Luhmann’s concept of defuturization, the article confronts the two approaches of future-making and preparedness and proposes to combine them with reference to the management of innovation. This adds a purposeful dimension to the discourse about preparedness, aimed so far only at confronting damaging events: one can also be prepared to seize and exploit novel opportunities.
ESPOSITO, E. (2024). Can we use the open future? Preparedness and innovation in times of self-generated uncertainty. European Journal of Social Theory.
The openness of the future is rightly considered one of the qualifying aspects of the temporality of modern society. The open future, which does not yet exist in the present, implies radical unpredictability. This article discusses how, in the last few centuries, the resulting uncertainty has been managed with probabilistic tools that compute present information about the future in a controlled way. The probabilistic approach has always been plagued by three fundamental problems: performativity, the need for individualization, and the opacity of predictions. We contrast this approach with recent forms of algorithmic forecasting, which seem to turn these problems into resources and produce an innovative form of prediction. But can a predicted future still be an open future? We explore this specific contemporary modality of historical futures by examining the recent debate about the notion of actionability in precision medicine, which focuses on a form of individualized prediction that enables direct intervention in the future it predicts.
ESPOSITO, E., HOFMANN, D. & COLONI, C. (2023). Can a predicted future still be an open future? Algorithmic forecasts and actionability in precision medicine. History & Theory 0(0) (September 2023).
With the increasing digitalisation of society, work is changing in many areas, the police included. This applies not only to changing crime structures and newly emerging fields of offences such as cybercrime (see e.g. Rüdiger/Bayerl 2018), but also to the way the police generate knowledge and translate it into police practice, a development also referred to as the datafication of police work. Software programs, and with them algorithms, have been used in police work for several decades, for instance in electronic case-processing systems or word-processing software. With the current push towards digitalisation, however, algorithmic analysis procedures that operate with self-learning systems (machine learning) are now also being implemented by the police, which has substantial epistemic as well as practical implications. In what follows, we examine this thesis more closely with a focus on police forecasting work (predictive policing). We proceed as follows: first, we briefly outline the basics of machine learning algorithms and the related phenomenon of big data. We then describe the main features of predictive policing and discuss the central consequences of this new way of generating (crime) forecasts. We relate this to the increasing entanglement of prevention and repression associated with the use of such machine learning procedures.
EGBERT, S., ESPOSITO, E. & HEIMSTÄDT, M. (2023). Das Polizieren der Zukunft und die Zukunft der Polizei. In: Polizei.Wissen 6 (2): 20-23
In many organisations, algorithms that generate predictions are used in the course of digitalisation. This article analyses the impact of this algorithmisation on decisions in the police using the example of predictive policing. Predictive policing is the increasing use of forecasting software to predict and prevent criminal behaviour in police organisations. Based on the differentiation of two variants of police predictive software, which are characterised by a different degree of comprehensibility for their users, the article examines the effects of these algorithms on central decision-making premises of police organisations: Programmes, communication channels and people. The analysis provides an outlook on the consequences of the future development of predictive software for the police as an organisation.
EGBERT, S., ESPOSITO, E. & HEIMSTÄDT, M. (2022). Predictive Policing und die algorithmische Rekonfiguration polizeilicher Entscheidungen. In: Soziale Systeme 26(1-2): 189-216.
Predictive policing, i.e. the police application of algorithmic data analysis procedures in order to generate and implement operational forecasts regarding spatio-temporal and/or personal risks of future crime implies an increasing interweaving of preventive and repressive police measures. This is because the operational forecasts that are considered central to predictive policing, which can be translated more or less directly into police action, represent a (further) temporal advance of police intervention, in that crime risks that can be narrowed down in terms of space-time or group or person can be processed with preventive intent. This temporal reconfiguration of police action has a momentous effect on the relationship between prevention and repression, since in the context of targeted and very concrete prevention efforts, police measures are carried out that, by definition, take place before crimes are carried out and are not based on the existence of concrete dangers, since the quality of prediction is not sufficient for this. Nevertheless, these measures can have a repressive quality. In a legal sense, in that offences are dealt with proactively. In a broader sense, oriented towards the semantic content of the concept of repression, in that despite preventive intent they contain repressive elements associated with coercion, which refers to the reduction of the possibilities of action of the affected citizens – not only (inclined) offenders (chilling effects). Overall, it is evident that the strict legal-terminological separation between prevention and repression is not suitable for adequately capturing future-oriented policing via operational forecasts in predictive policing. Rather, predictive policing is to be described as a prepressive practice that operates on the border between police and criminal law and has both preventive and repressive components.
EGBERT, S. (2022). Predictive Policing und die Neukonfiguration des Verhältnisses von Prävention und Repression. In: Feltes, Thomas; Klaas, Katrin; Thüne, Martin (Hrsg.): Digitale Polizei. Frankfurt am Main: Verlag für Polizeiwissenschaft, S. 113-129.
Algorithmic predictions are used in insurance to assess the risk exposure of potential customers. This article examines the impact of digital tools on the field of motor insurance, where telematics devices produce data about policyholders’ driving styles. The individual’s resulting behavioural score is combined with their actuarial score to determine the price of the policy or additional incentives. Current experimentation is moving in the direction of proactivity: instead of waiting for a claim to arise, insurance companies engage in coaching and other interventions to mitigate risk. The article explores the potential consequences of these practices on the social function of insurance, which makes risks bearable by socialising them over a pool of insured individuals. The introduction of behavioural variables and the corresponding idea of fairness could instead isolate individuals in their exposure to risk and affect their attitude towards future initiatives.
CEVOLINI, A. & ESPOSITO, E. (2022). From Actuarial to Behavioral Valuation. The impact of telematics on motor insurance. Valuation Studies 9(1): 109-139.
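As a purely illustrative aside, the sketch below shows the kind of pricing logic described in the abstract: a premium that starts from an actuarial, pool-based score and is adjusted by a behavioural score derived from telematics driving data. All variable names, weights and loadings here are hypothetical and are not taken from any insurer's actual model.

```python
# Hypothetical sketch of combining an actuarial score with a telematics-based
# behavioural score to set a motor insurance premium. Weights and loadings
# are illustrative assumptions, not an insurer's actual rating model.

from dataclasses import dataclass


@dataclass
class Policyholder:
    actuarial_score: float      # expected claim cost from classic rating factors
    harsh_braking_rate: float   # telematics events per 100 km
    night_driving_share: float  # share of km driven at night (0..1)
    speeding_share: float       # share of km driven above the speed limit (0..1)


def behavioural_score(p: Policyholder) -> float:
    """Map telematics indicators to a multiplier around 1.0 (hypothetical weights)."""
    score = 1.0
    score += 0.05 * p.harsh_braking_rate
    score += 0.10 * p.night_driving_share
    score += 0.20 * p.speeding_share
    return score


def premium(p: Policyholder, loading: float = 1.25) -> float:
    """Combine the actuarial score with the behavioural multiplier and a cost loading."""
    return p.actuarial_score * behavioural_score(p) * loading


careful = Policyholder(actuarial_score=400.0, harsh_braking_rate=0.2,
                       night_driving_share=0.05, speeding_share=0.02)
risky = Policyholder(actuarial_score=400.0, harsh_braking_rate=2.0,
                     night_driving_share=0.30, speeding_share=0.15)

print(round(premium(careful), 2))  # same actuarial profile, lower price
print(round(premium(risky), 2))    # same actuarial profile, higher price
```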
A proposal that we think about digital technologies such as machine learning not in terms of artificial intelligence but as artificial communication.
Algorithms that work with deep learning and big data are getting so much better at doing so many things that it makes us uncomfortable. How can a device know what our favorite songs are, or what we should write in an email? Have machines become too smart? In Artificial Communication, Elena Esposito argues that drawing this sort of analogy between algorithms and human intelligence is misleading. If machines contribute to social intelligence, it will not be because they have learned how to think like us but because we have learned how to communicate with them. Esposito proposes that we think of “smart” machines not in terms of artificial intelligence but in terms of artificial communication.
To do this, we need a concept of communication that can take into account the possibility that a communication partner may be not a human being but an algorithm—which is not random and is completely controlled, although not by the processes of the human mind. Esposito investigates this by examining the use of algorithms in different areas of social life. She explores the proliferation of lists (and lists of lists) online, explaining that the web works on the basis of lists to produce further lists; the use of visualization; digital profiling and algorithmic individualization, which personalize a mass medium with playlists and recommendations; and the implications of the “right to be forgotten.” Finally, she considers how photographs today seem to be used to escape the present rather than to preserve a memory.
ESPOSITO, E. (2022). Artificial Communication: How Algorithms Produce Social Intelligence. Cambridge (MA), London: MIT Press.
Italian translation: Comunicazione artificiale. Come gli algoritmi producono l'intelligenza sociale. Milano: Bocconi University Press, 2022.
Chinese translation: 人工沟通:算法如何生产社会智能 by Weng Zhuangzhuang. Shanghai: Shanghai Jiaotong University Press, 2023.
Spanish translation: Santiago (Chile): Fondo de Cultura Económica, forthcoming.
A review of the book can be found at E&T.
Interviews with Elena Esposito about her book can be listened to on The Neutral Ground Podcast and New Books Network.
One of the main issues underlying insurance contracts is moral hazard: if people are insured, their exposure to dangers could increase because they have fewer incentives to try to prevent accidents from happening. Digital technologies promise to transform the way insurance companies deal with moral hazard. On the one hand, these technologies monitor individual behaviour; on the other, they produce data which, in turn, are used to involve policyholders in coaching programs. A case in point is telematics motor insurance. If one looks more closely at coaching programs, however, things look different.
CEVOLINI, A. (2022). Coaching strategies in telematics motor insurance: control or motivation? How insurers try to be proactive in risk mitigation. Movingdots, 21.03.2022.
Dealing with opaque machine learning techniques, the crucial question has become the interpretability of the work of algorithms and their results. The paper argues that the shift towards interpretation requires a move from artificial intelligence to an innovative form of artificial communication. In many cases the goal of explanation is not to reveal the procedures of the machines but to communicate with them and obtain relevant and controlled information. As human explanations do not require transparency of neural connections or thought processes, so algorithmic explanations do not have to disclose the operations of the machine but have to produce reformulations that make sense to their interlocutors. This move has important consequences for legal communication, where ambiguity plays a fundamental role. The problem of interpretation in legal arguments, the paper argues, is not that algorithms do not explain enough but that they must explain too much and too precisely, constraining freedom of interpretation and the contestability of legal decisions. The consequence might be a possible limitation of the autonomy of legal communication that underpins the modern rule of law.
ESPOSITO, E. (2021). Transparency versus explanation: The role of ambiguity in legal AI. Journal of Cross-disciplinary Research in Computational Law, Vol. 1 No. 1 (Nov. 2021).
While insurance was originally devised as a safety net that steps in to compensate for financial losses after an accident has occurred, the information generated by sensors and digital devices now offers insurance companies the opportunity to transform their business by considering prevention. We discuss a new form of risk analytics based on big data and algorithmic prediction in the insurance sector to determine whether accidents could indeed be prevented before they occur, as some now claim is possible. We will use the example of motor insurance where risk analytics is more advanced. Finally, we draw conclusions about insurance’s new preventive role and the effect it may have on the policyholders’ behavior.
GUILLEN, M. & CEVOLINI, A. (2021). Using risk analytics to prevent accidents before they occur – the future of insurance. The Capco Institute Journal of Financial Transformation, Vol. 54 (Nov. 2021): 76-83.
By introducing us into core concepts of Niklas Luhmann’s theory of social systems, Elena Esposito shows their relevance for contemporary social sciences and the study of unsettled times. Contending that society is made not by people but by what connects them - as Luhmann does with his concept of communication - creates a fertile ground for addressing societal challenges as diverse as the Corona pandemic or the algorithmic revolution. Esposito more broadly sees in systems theory a relevant contribution to critical theory and a genuine alternative to its Frankfurt School version, while extending its reach to further conceptual refinement and new empirical issues. Fueling such refinement is her analysis of time and the complex intertwinement between past, present and future - a core issue that runs throughout her work. Her current study on the future as a prediction caught between science and divination offers a fascinating empirical case for it, drawing a thought-provoking parallel between the way algorithmic predictions are constructed today and how divinatory predictions were constructed in ancient times.
ESPOSITO, E., SOLD, K. & ZIMMERMANN, B. (2021). Systems Theory and Algorithmic Futures: Interview with Elena Esposito. Constructivist Foundations 16(3): 356–361.
Digital prediction tools increasingly complement or replace other practices of coping with an uncertain future. The current COVID-19 pandemic, it seems, is further accelerating the spread of prediction. The prediction of the pandemic yields a pandemic of prediction. In this paper, we explore this dynamic, focusing on contagion models and their transmission back and forth between two domains of society: public health and public safety. We connect this movement with a fundamental duality in the prevention of contagion risk concerning the two sides of being-at-risk and being-a-risk. Both in the spread of a disease and in the spread of criminal behavior, a person at risk can be a risk to others and vice versa. Based on key examples, from this perspective we observe and interpret a circular movement in three phases. In the past, contagion models have moved from public health to public safety, as in the case of the Strategic Subject List used in the policing activity of the Chicago Police Department. In the present COVID-19 pandemic, the analytic tools of policing wander to the domain of public health – exemplary of this movement is the cooperation between the data infrastructure firm Palantir and the UK government’s public health system NHS. The expectation that in the future the predictive capacities of digital contact tracing apps might spill over from public health to policing is currently shaping the development and use of tools such as the Corona-Warn-App in Germany. In all these cases, the challenge of pandemic governance lies in managing the connections and the exchanges between the two areas of public health and public safety while at the same time keeping the autonomy of each.
HEIMSTÄDT, M., EGBERT, S. & ESPOSITO, E. (2021). A Pandemic of Prediction: On the Circulation of Contagion Models between Public Health and Public Safety. Sociologica, 14(3): 1-24.
The new insurance business model driven by digital technologies is promising because, among many other things, it allows insurers to profile customers in detail and offer them increasingly personalized solutions. From a sociological standpoint, however, this raises a number of social issues that are worth investigating further.
CEVOLINI, A. (2020). Insurtech tra rischio e mutualità. Insurance Review, 79, November: 54-57.
https://www.insurancereview.it/insurance/contenuti/speciale/1913/insurtech-tra-rischio-e-mutualita
Big Data seems to reverse the information asymmetry between insurance companies and policyholders. Through the growing development of InsurTech, insurance companies might know more about the policyholder than the policyholder knows about herself. This reversal leads to complex issues of privacy, transparency and circularity of information. What is now called Insurance-of-Things, moreover, could have a disruptive impact on the insurance business, marking a turning point from a reactive approach to a proactive approach. The information available through algorithms could allow the insurer to know future damages in advance and to move from a compensatory approach to a preventive approach. In this paper, we briefly show how these changes could redefine business models, social performance and technical skills in the insurance sector.
CEVOLINI A. & ESPOSITO E. (2020). Il futuro dell'assicurazione. Opportunità e minacce delle tecnologie digitali nell'assicurazione del futuro. Futuri. Rivista Italiana di Future Studies, 13(7): 51-56.
The use of algorithmic prediction in insurance is regarded as the beginning of a new era, because it promises to personalise insurance policies and premiums on the basis of individual behaviour and level of risk. The core idea is that the price of the policy would no longer refer to the calculated uncertainty of a pool of policyholders, with the consequence that everyone would have to pay only for her real exposure to risk. For insurance, however, uncertainty is not only a problem - shared uncertainty is a resource. The availability of individual risk information could undermine the principle of risk-pooling and risk-spreading on which insurance is based. The article examines this disruptive change first by exploring the possible consequences of the use of predictive algorithms to set insurance premiums. Will it endanger the principle of mutualisation of risks, producing new forms of discrimination and exclusion from coverage? In a second step, we analyse how the relationship between the insurer and the policyholder changes when the customer knows that the company has voluminous, and continuously updated, data about her real behaviour.
CEVOLINI, A. & ESPOSITO, E. (2020). From Pool to Profile: Social Consequences of Algorithmic Prediction in Insurance. Big Data & Society.
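A minimal numerical illustration of the shift from pool to profile discussed in the article: under risk pooling every policyholder pays the average expected loss of the pool, whereas under individualized, prediction-based pricing each pays something close to her own predicted expected loss. The claim probabilities and amounts below are hypothetical.

```python
# Hypothetical comparison of a pooled premium with individualized,
# prediction-based premiums. Probabilities and claim amount are illustrative.

# Individual annual claim probabilities for four policyholders and a fixed claim amount.
claim_probabilities = [0.02, 0.05, 0.10, 0.20]
claim_amount = 10_000

# Pooled premium: uncertainty is shared, everyone pays the same average expected loss.
pooled_premium = sum(p * claim_amount for p in claim_probabilities) / len(claim_probabilities)

# Profiled premiums: each policyholder pays her individually predicted expected loss.
profiled_premiums = [p * claim_amount for p in claim_probabilities]

print("pooled:", pooled_premium)       # 925.0 for everyone in the pool
print("profiled:", profiled_premiums)  # from 200.0 to 2000.0, spread across individuals
```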
The common response to a global emergency is a call for coordination. The paper argues, referring to systems theory, that the problem of our functionally differentiated society is not lack of integration, but rather an excess of integration. In dealing with threats that come from the environment, the opportunities for rationality in society lie in the maintenance and exploitation of differences, not in their elimination.
ESPOSITO, E. (2020). Systemic Integration and the Need for De-Integration in Pandemic Times. Sociologica, 14(1): 3-20.