Governance of Trust

Trust is evolving from a social virtue to a system feature — verified and embedded in code. Zero Trust, DAOs, and Privacy-by-Design offer new models for governance, but tech can’t replace ethics. Code builds structure; people give it meaning.


In De Amicitia, Cicero writes that the whole duty of friendship rests upon trust (fides), underscoring a simple but enduring truth: trust is the glue that binds people — and societies — together. Writing at the outset of the Roman Republic's downfall, Cicero sought to define how one might live with integrity amid political collapse. Drawing from Stoic philosophy, he championed a life guided by moral principle — doing what is right not because it is easy or rewarding, but because it is right. In his worldview, trust is neither passive nor naïve; it is a conscious, reasoned act grounded in virtue and emotional detachment (ἀπάθεια).

Today, our digitally hyper-connected world presents a paradox: the more interconnected we become, the less we can rely on implicit or interpersonal trust. As systems grow more complex, so too do the opportunities for manipulation, surveillance, and systemic failure. Like Cicero in his time, we are called to rethink what it means to build and sustain trust — not just between individuals, but within the architectures of modern governance, including in the digital realm.

This post explores how emerging technologies are reshaping the very concept of trust — from something socially granted to something procedurally designed, verified, and sustained. In digital environments, trust is being re-engineered through models such as Zero Trust Architecture, Decentralized Autonomous Organizations (DAOs), and Privacy-by-Design platforms. These innovations aim to embed trust into systems themselves, but they also raise a question: What do we lose when trust is no longer a relational virtue, but a technical feature?

This post also extends our earlier discussion on Diplomacy 2.0 and peace-building frameworks, probing how emerging technologies might support ethical governance in both digital and physical domains. At the heart of this inquiry remains trust.

Just as Cicero sought to uphold civic virtue in a disintegrating republic, we now ask: Is zero-trust governance a betrayal of his ideals — or their evolution? By building systems based on verification and decentralisation, are we undermining trust — or creating a smarter, more resilient version fit for our age?

This post concludes with a reflection on the enduring human role in shaping trust, arguing that while technology can support trustworthiness, it cannot replace moral responsibility or the shared values that make trust meaningful.

In Code we Trust

In traditional institutions, trust is often granted by default through proximity, familiarity, hierarchy or status. But as systems scale, fragment, and digitise, implicit trust becomes difficult to sustain. The question is no longer who is trusted, but how trust is granted, verified, and maintained.

Two emerging technologies illustrate this shift from trust by assumption to trust by design:

  • Zero-Trust Architecture (ZTA): Originally a cybersecurity model, it assumes no inherent trust in users, systems, or networks. Its guiding principle — “never trust, always verify” — requires strict identity controls, least-privilege access, and continuous authentication. Trust is not abolished but disciplined: made transparent, auditable, and resistant to both human error and malicious intent.
  • Decentralized Autonomous Organizations (DAOs): Shifting from infrastructure to governance, DAOs embody a model of coordination that minimises the need for trust. These blockchain-native entities run on smart contracts — self-executing code that enforces rules without centralized authority. Decision-making is token-based and transparent; resources are allocated by predefined logic; and all actions are permanently recorded on-chain. DAOs aim to embed accountability directly into code, reducing reliance on institutional legacies, personal relationships, or regulatory enforcement.
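To make the “never trust, always verify” principle concrete, here is a minimal sketch of a zero-trust access check: every request must present a verified identity and pass a least-privilege policy lookup, with denial as the default. All names here (check_access, POLICY, the token values) are illustrative assumptions, not taken from any specific ZTA product.

```python
# Minimal zero-trust access check: no request is trusted by default.
# Names and data are hypothetical, for illustration only.

# Role -> permitted actions (least privilege: anything unlisted is denied).
POLICY = {
    "analyst": {"read:reports"},
    "admin": {"read:reports", "write:reports"},
}

# Sessions verified out-of-band (e.g. via MFA); absent tokens are untrusted.
VERIFIED_SESSIONS = {"token-123": "analyst"}

def check_access(session_token: str, action: str) -> bool:
    """Authenticate the caller, then authorise the specific action."""
    role = VERIFIED_SESSIONS.get(session_token)   # verify identity first
    if role is None:
        return False                              # never trust by default
    return action in POLICY.get(role, set())      # least-privilege check

print(check_access("token-123", "read:reports"))   # True: verified + permitted
print(check_access("token-123", "write:reports"))  # False: not permitted
print(check_access("bad-token", "read:reports"))   # False: unverified
```

The key design point is that trust is evaluated per request, against identity and policy, never inferred from network location or prior access.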

Together, these models signal a broader shift: trust is no longer assumed, but systematically engineered. While they offer promising tools to manage complexity and mitigate risk, they remain nascent and experimental — particularly in how they re-imagine governance and institutional design.
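The DAO model described above — decisions taken by token-weighted vote under predefined logic — can be sketched in a few lines. Real DAOs execute this as smart contracts on-chain; the balances, quorum rule, and simple-majority threshold below are toy assumptions for illustration.

```python
# Toy token-weighted DAO vote: the outcome follows predefined logic,
# not any administrator's discretion. Names and rules are illustrative.

TOKEN_BALANCES = {"alice": 60, "bob": 25, "carol": 15}  # voting power
QUORUM = 0.5  # fraction of total token supply that must vote

def tally(votes: dict) -> str:
    """votes maps holder -> 'yes'/'no'; returns the proposal outcome."""
    supply = sum(TOKEN_BALANCES.values())
    cast = sum(TOKEN_BALANCES[v] for v in votes)
    if cast / supply < QUORUM:
        return "no-quorum"
    yes = sum(TOKEN_BALANCES[v] for v, b in votes.items() if b == "yes")
    return "passed" if yes * 2 > cast else "rejected"

print(tally({"alice": "yes", "bob": "no"}))  # 'passed': 60 of 85 cast votes
print(tally({"carol": "no"}))                # 'no-quorum': only 15 of 100 cast
```

Everything that would normally rest on institutional discretion — who may vote, how much each vote weighs, when a proposal passes — is fixed in advance by code.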

The Ethics of Digital Trust

Both Zero-Trust Architectures and DAO models challenge traditional structures of authority. But they also surface a deeper question: Can systems that verify trust also carry the values that make trust meaningful?

This is where Privacy-by-Design comes into play. It refers to technologies that embed ethical principles — such as consent, autonomy, and accountability — into the very architecture of digital systems, especially as trust shifts from being implicit to procedural. This growing ecosystem of Privacy-by-Design tools doesn’t just enforce rules; it's designed to safeguard individual rights in environments where trust can no longer be assumed. Examples include:

  • Zero-Knowledge Proofs (ZKPs): Allow individuals to prove facts (e.g., age or credentials) without revealing personal information. Designed to ensure that even the platform cannot access users’ private data, they redefine trust as protection from the system — not just trust in the system. Rather than eliminating trust, they embed it structurally, even in the event of compromise.
  • Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs): Support self-sovereign identity — allowing users to authenticate without relying on centralized authorities.
  • Tor (The Onion Router): Enables anonymous communication by routing internet traffic through multiple encrypted layers, shielding users’ identity and location from surveillance or tracking.
  • InterPlanetary File System (IPFS): Decentralizes file storage, allowing users to share and access content without dependence on centralized servers or control points.
  • The Fediverse: Built on open standards like ActivityPub, it enables federated social networking across platforms like Mastodon, offering user-controlled alternatives to corporate-owned spaces with greater autonomy over content, visibility, and moderation.
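To give a flavour of how a zero-knowledge proof works, here is a toy Schnorr-style identification protocol: the prover convinces the verifier that she knows a secret exponent x without ever revealing it. The tiny parameters (p = 23, q = 11) are purely illustrative — real ZKPs use cryptographically large groups and audited libraries, never hand-rolled code like this.

```python
import random

# Toy Schnorr identification: prove knowledge of x, where y = g^x mod p,
# without revealing x. Demo-sized numbers — NOT cryptographically safe.
p, q, g = 23, 11, 2          # g generates a subgroup of order q mod p
x = 7                        # prover's secret
y = pow(g, x, p)             # prover's public key

# 1. Commitment: prover picks random r and sends t = g^r mod p.
r = random.randrange(1, q)
t = pow(g, r, p)

# 2. Challenge: verifier sends a random c.
c = random.randrange(1, q)

# 3. Response: prover sends s = r + c*x mod q (s leaks nothing about x,
#    because r acts as a one-time pad over the exponent group).
s = (r + c * x) % q

# 4. Verification: g^s == t * y^c (mod p) holds exactly when the prover
#    knows x, since g^(r + c*x) = g^r * (g^x)^c.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted; x was never revealed")
```

The verifier learns that the claim is true — and nothing else, which is precisely the property the ZKP bullet above describes.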

Together, these technologies allow users to participate in digital systems governed by shared rules, while preserving their privacy and autonomy. But these tools are still in their early stages. Adoption is limited, usability remains a barrier, and long-term economic viability is unclear. Meanwhile, centralised platforms continue to dominate, offering frictionless experiences, seamless integration, and proven business models. The alternatives are promising, but still operationally incomplete.

So where does that leave us?

While Privacy-by-Design can embed fairness, consent, and user control into the foundations of digital infrastructure, code cannot fully replace the ethical and social foundations of trust. Technology can verify identities and enforce permissions, but it cannot define values, build consensus, or guarantee accountability. These still depend on human judgment, shared norms, and a sense of responsibility. As Cicero observed, trust is not merely contractual — it is moral.

Rethinking Governance with AI

Artificial intelligence is emerging as a powerful force in digital governance, not only optimising systems but also re-imagining how institutions build legitimacy, responsiveness, and trust.

In our last post, we explored Diplomacy 2.0 — an evolving paradigm where international cooperation, conflict resolution, and global engagement are enhanced by AI. Here, technology becomes not the negotiator, but the amplifier of more inclusive and responsive diplomacy.

Across contexts — from multilateral diplomacy to corporate governance — AI is already simulating complex scenarios, translating across languages in real time, enhancing decision-making, and detecting cyber threats. These capabilities support faster, more effective responses to emerging challenges. Within organizations, AI bolsters strategic foresight, automates compliance, and reveals hidden risks — contributing to more accountable and informed leadership.

AI’s intersection with decentralised governance, particularly DAOs, is especially compelling. While smart contracts form the procedural backbone, AI augments these systems by verifying identities, automating workflows, and flagging anomalies — helping maintain integrity without the need for centralised control.
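As a sketch of the anomaly-flagging idea, the toy function below scores new DAO treasury payouts by how far they deviate from the historical mean (a simple z-score). This is a stand-in for the far richer learned models a real deployment would use; the function name, data, and 3-sigma threshold are all assumptions.

```python
import statistics

# Toy anomaly flagging for DAO treasury payouts: flag any amount more
# than `threshold` standard deviations from the historical mean, so a
# human can review it before execution. Illustrative only.

def flag_anomalies(history: list[float], new_payouts: list[float],
                   threshold: float = 3.0) -> list[float]:
    """Return payouts whose z-score against history exceeds threshold."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [a for a in new_payouts if abs(a - mean) / stdev > threshold]

history = [100, 120, 95, 110, 105, 98, 115, 102]   # past payout amounts
suspect = flag_anomalies(history, [108, 5000, 99])
print(suspect)  # the 5000 payout is flagged for human review
```

The point is the division of labour: code enforces the rules, statistics surface the outliers, and people decide what the outliers mean.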

But AI governance requires more than technical safeguards. Like all digital infrastructure, AI governance must be rooted in ethics, rights-based principles, and cultural inclusivity. AI systems should empower individuals and communities — ensuring they retain meaningful control over their personal and collective data. Rights-based data governance respects consent, ensures equitable access, and embeds accountability into system design. In an era of pervasive surveillance, the right of individuals and communities to control their data is more urgent than ever.

Conclusion

As digital systems evolve, trust is not disappearing — it’s being redefined. In a world of verifiable transactions and encoded identities, trust shifts from tacit reliance to active, informed confidence.

Technology won’t replace trust — but it can help create the conditions to rebuild it on stronger foundations. Smart contracts automate enforcement. Distributed ledgers ensure data integrity and auditability. AI offers predictive insight. But it is still people — fallible, diverse, and complex — who bring meaning, intention, and values to these systems.

AI complements a broader shift towards zero-trust and privacy-by-design systems. Yet no architecture, no matter how advanced, can replace the need for human purpose. Machines can reveal patterns and enforce rules; only people can define meaning, values, and direction.

As Cicero placed fides at the heart of civic life, we too must ensure that trust remains anchored in human responsibility, even in an age shaped by autonomous systems and algorithmic decision-making. Trust is not a relic; it is a prerequisite for legitimacy, cooperation, and lasting peace. Once fragile and often broken, trust can now become a shared construct – one we intentionally design to foster understanding, strengthen diplomacy, and sustain connection in an increasingly interdependent world.