Problem statement

All societal trust relations rely on complex social structures. We can describe these as trust infrastructures, whose role is to enable and facilitate the emergence and sustained existence of interpersonal and institutional trust relations. The three most prominent trust infrastructures have a long history, yet digitization poses fundamental challenges to all of them.

  1. Interpersonal, communal trust networks are the most ancient of trust infrastructures. They have traditionally been based on, for example, familial, ethnic, religious, or tribal relations; professional associations; and epistemic or value communities, which create a shared understanding of the world and of each other. While such interpersonal networks still form the backbone of trust in society, many interpersonal, communal relations have been remediated by digital technologies, such as those of remote work and education. It would be naïve to think that the digitization of our social life leaves these interpersonal, communal trust relations unaffected. Interpersonal trust networks also play a key role in technology adoption by providing faith in early, untested innovation.
  2. Public trust infrastructures are abstract, institutionalized frameworks that produce trust in modern societies. Public institutions, such as public education and public service media; fair, transparent, accountable, disinterested public administration; the legislative, judicial, and law-enforcement bodies of the state, with their commitment to public values such as the rule of law, fundamental rights, and principles of good governance; as well as societal institutions, such as the press, science, and academia, help create trust in social, political, and economic relations. Public trust infrastructures (1) can contribute to the trustworthiness of digital technologies (through, for example, regulation), and (2) incorporate – often untrustworthy – digital technologies into their everyday operations, such as predictive policing algorithms or digital education tools.
  3. Private trust infrastructures produce and offer trust as a commodity on the marketplace. Private trust producers, such as lawyers, auditors, accountants, credit rating agencies, insurers, and banks, but also commercial brands, offer signals and safeguards of trustworthiness for a fee. In the last two decades, novel, highly technological private trust infrastructures have emerged. Online reputation management services, distributed ledgers, and AI-based predictive systems help strangers engage in trust-necessitating social and economic interactions on a planetary scale.

Digitization both transforms how communal, public, and private trust infrastructures function in society and upsets the relationship between these forms of trust production. Private technological trust producers try to capture and monetize an ever wider scope of societal trust relations, from the deeply interpersonal (such as intimacy via dating apps) to the highly impersonal (such as the global flow of goods via e-commerce platforms). Many of them are designed with the explicit goal of disrupting the state and many of its public functions, including policing, education, currency control, healthcare, and the organization of democratic processes. Yet the trustworthiness of these digital trust infrastructures seems to depend heavily on the public trust infrastructures. European regulation, such as the GDPR, the upcoming Data Act, AI Act, and Digital Services Act, and the Digital Single Market Directive, is aimed at increasing the trustworthiness of the private technological actors that produce trust in our interpersonal, social, cultural, economic, and political relations.

These three modalities of trust infrastructure correspond to the three pillars the RPA rests upon: the human, the institutional, and the technological aspects of trust and trustworthiness. Each pillar addresses digital trust from its own perspective, but with a strong focus on the cross-cutting questions and interdependencies:

Pillar I. looks at the human aspects of trust: What consequences may flow from the digital remediation of pre-existing interpersonal trust relations? What role do digital intermediaries play in the formation and maintenance of interpersonal trust and distrust relations? In relation to trusting technology: What individual and social factors shape trustworthiness perceptions, trusting attitudes, and behavior towards digital technologies? How do trust and distrust in technology affect the way we interact with it? Postdoc research projects #1 and #3 will address research questions in this pillar.

Pillar II. addresses institutional questions, such as: How does the perceived trustworthiness of digital technologies affect trust in the institutions (such as municipalities, welfare administration, the tax office, courts, and the police) that use them? And questions related to the trustworthiness of digital technologies: How do different institutions contribute to the trustworthiness of digital technologies, via, for example, regulation, oversight, accountability, and transparency? What is the role of markets (through competition, cooperation, standards, interoperability, and information aggregation) in establishing trustworthy digital technologies? Postdoc research projects #2, #3, and #4 will address questions in this pillar.

Pillar III. addresses questions specific to the design and operation of (private) digital techno-social systems: How do we define trustworthiness in the technological context? What are the limits of technological design when technologies may pose unknown, long-term risks and harms? What technical factors can ensure the trustworthiness of digital technologies? How do the various technical definitions map onto perceived and other measures of the trustworthiness of digital technologies? What are the limits of purely technical approaches? What is the role of other trustworthiness safeguards, such as regulation, competition, and governance, in closing the technical gaps? How can technical and non-technical trustworthiness guarantees strengthen each other? What is the role of policy and firm-level governance in ensuring the trustworthiness of technology infrastructures? How do the risks and potential harms flowing from technology translate into transformations of human and institutional trust dynamics, as well as into problems and crises? Postdoc research projects #1 and #2 will address questions in this pillar.

Trust research today suffers from several shortcomings. First, trust is studied in a siloed, fragmented manner: computer scientists, lawyers, economists, cognitive scientists, philosophers, sociologists, and (organizational/social) psychologists study various aspects of trust and trustworthiness mostly in isolation from one another. Second, trust research often focuses on narrowly defined domains, such as trust in news or in science; specific technical issues, such as computer security; specific regulations, such as the European Digital Services Act; specific technologies, such as AI and recommender systems; or particular firms, such as Facebook.