A conversation inspired by a wall-hanging.
By Ellie Rennie and Michael Zargham, originally shared on Ellie Rennie's Medium account on October 23, 2025.
Earlier this year, we were invited to give a talk on Knowledge Organisation Infrastructure (KOI) on short notice. In that talk we referred to KOI as enabling digital twins for communities, meaning that it can show how disparate knowledge within an organisation is connected. Sitting in an Uber immediately after the talk, we realised that the metaphor didn’t fit.
A digital twin is designed to predict, to pre-empt, to assert control. Engineers build them to monitor what is happening or to test what might happen before it happens. As BlockScience has written, that is “useful enough to be dangerous” [1]. A model and the world it tries to mirror can never quite align; the gap between them widens with every iteration, requiring humans outside the model to recalibrate it. The more we treat the model as the thing it represents, the more we risk acting inside the simulation instead of the world being simulated.
The Mirror
A few months later, in Paris, we returned to this conversation after having published this paper on KOI-enabled Artificial Organisational Intelligence (AOI). The BlockchainGov symposium was being held near the Panthéon, not far from the Musée de Cluny — home to The Lady and the Unicorn tapestries. We stopped by. In one tapestry, the lady holds up a mirror for the unicorn: a magical creature, assisted to see itself, possibly for the first time.
It struck us that our AOI model is more like that mirror than like a digital twin. A mirror offers feedback, not prediction. It doesn’t model or forecast, but allows the viewer to see itself and to see itself seeing. What appears in a mirror is continuous and co-present, not displaced into another space or timeline. It allows for self-attention, self-correction, even self-care.
While the mirror is a better analogy, it can also be negatively construed. As feminist thinkers have noted, the mirror metaphor reinforces the idea of a bounded subject: the self who looks, recognises, and regulates. Philosopher and physicist Karen Barad considers such reflection as belonging to an ontology of separateness, in which discrete entities observe each other from a distance.
Diffraction
Barad offers an alternative: diffraction. In physics, diffraction describes the interference patterns created when waves encounter one another. In Barad’s philosophy, it becomes a way of understanding how differences emerge through relation rather than separation. Where mirrors reproduce sameness, diffraction shows how things come to matter by overlapping and interfering. Barad writes:
[T]he quantum understanding of diffraction troubles the very notion of dichotomy — cutting into two — as a singular act of absolute differentiation, fracturing this from that, now from then. [2]
And (elsewhere) asks:
What if we were to recognize that differentiating is a material act that is not about radical separation, but on the contrary, about making connections and commitments? [3]
Diffractive apparatuses are arrangements through which relations themselves become visible. They show that knowing and being are co-produced, that the observer and the observed are entangled in the same ongoing event.
Perhaps AOI is less a mirror reflecting an organisation back to itself, and more an apparatus through which relations can be perceived. Thanks to KOI, what appears in AOI isn’t an external image, but the material traces of contributions, knowledge and practices (the ways decisions, norms, and behaviours intersect). Diffraction is a helpful concept because AOI and KOI don’t just represent the organisation but take part in its continual becoming. It helps a group notice the patterns it is already making together, which enables self-attention and self-regulation.

Terraforming
Our colleagues in ADM+S write that digital twins belong to what Foucault first called “environmentality”: systems that sense, simulate, and steer behaviour through the management of conditions [4]. Rather than governing the environment from above, KOI invites participants to terraform from within: to shape the ground rules of the system together in order to form AOI.
Terraforming here isn’t about making distant worlds habitable. As with environmentality, it involves adjusting the game itself (the environment, which may be the organisation) rather than enforcing rules directly on the players (elaborated here). The difference with terraforming is that the players are making those changes collectively rather than having the rules change on them without their knowing.
Conclusion
KOI (and AOI enabled by KOI) don’t exist to model or predict a group; they become part of the infrastructure through which a group is continually composed. They help groups ‘see’ relation rather than representation, and adjust with that co-present view before them. Like the unicorn before its mirror, the group gets to self-reflect on its physical and non-physical (magical) properties. Like the tapestry, this is a woven and intricate thing to depict.
References
[1] Zargham, M., Sisson, D., David, S. J. D., Friedman, A. D., & Cordes, R. (2024). Comments Submitted by BlockScience, University of Washington APL Information Risk and Synthetic Intelligence Research Initiative (IRSIRI), Cognitive Security and Education Forum (COGSEC), and the Active Inference Institute (AII) to the Networking and Information Technology Research and Development National Coordination Office’s Request for Comment on The Creation of a National Digital Twins R&D Strategic Plan, NITRD-2024-13379. https://doi.org/10.5281/zenodo.13273682
[2] Barad, K. (2014). Diffracting Diffraction: Cutting Together-Apart. Parallax, 20(3), 168–187. https://doi.org/10.1080/13534645.2014.927623
[3] Barad, K. (2010). Quantum Entanglements and Hauntological Relations of Inheritance: Dis/continuities, SpaceTime Enfoldings, and Justice-to-Come. Derrida Today, 3(2), 240–268. https://doi.org/10.3366/drt.2010.0206
[4] Andrejevic, M., Horn, Z. E., & Richardson, M. (2025). Value from digital twins. Platforms & Society, 2. https://journals.sagepub.com/doi/full/10.1177/29768624251358648
About BlockScience
BlockScience® is a systems engineering firm that operationalizes emerging technologies for high reliability organizations. We partner with organizations in healthcare, energy, finance, and government to develop and integrate new capabilities while maintaining reliable operations. Operationalization doesn’t end with due diligence and procurement; it includes integration, calibration and validation. Our R&D practice goes beyond exploration: We operationalize emerging technologies within our own organization first, in order to differentiate between impressive demonstrations and practical solutions. This hands-on experience, combined with rigorous engineering discipline, enables us to cut through hype and provide honest assessments of organizational readiness and technological fitness-for-purpose. We support accountable executives with responsibility for making complex technologies work within their operational constraints, ensuring that people, processes, and tools function together reliably.
