By Ellie Renning, originally posted on her Medium account on September 22, 2025.
The following is derived from a talk I gave at the EPIC 2025 conference in Helsinki. You can read the full (co-authored) paper here. For a deeper dive into our ethnography tools (Telescope bot, Obsidian vault and plugin for KOI), see this recent post.
In his EPIC 2025 keynote discussion, Peter Sarlin of Silo AI pointed out that generative AI is best framed as the first massively successful consumer-facing AI product. Part of its innovation is an interface that makes it easy to query and elicit responses. Products like ChatGPT have been so successful that we risk mistaking such products for the full scope of what AI is or might be.
As our EPIC paper argues, a problem with these products is that they strip away key processes that shape knowledge and determine how it is taken up in the world, including shared rules and group practices. Organisations are devices through which we enact this kind of social ordering. They coordinate their members’ actions through internal institutional structures.
We are asking: What if AI doesn’t erase organisational rules, practices and beliefs, but is informed and bound by them?
This is the horizon of what we are calling Artificial Organisational Intelligence (AOI).
What AOI Is (and Isn’t)
AOI is not about AI doing the job of organisations. It’s not about replacing firms, universities, or communities with chatbots.
Instead, it’s about making organisation-level knowledge machine-readable so that it becomes accessible and legible to humans through AI. What if you could talk directly to an organisation, rather than just to one of its representatives?
The intelligence of organisations lives not just in their intellectual assets but also in their routines (Nelson & Winter), in the boundaries they set and maintain (Star & Bowker), and in the everyday improvisations and tacit skills that ethnographers have long documented (Orr). Other presenters at EPIC pointed out that the expertise held by teams is not necessarily known even to their managers.
AOI asks: how do we stitch knowledge objects, rules, routines, and practices together in ways that are machine-readable, without flattening them into a single database or losing what makes organisations (and teams) distinct and more than the sum of the individuals involved?
Building the Infrastructure
To achieve AOI, we need to connect two things:
- The systems where organisational knowledge already resides (such as Slack, GitHub, decision-making platforms, policy registers, memos and document repositories).
- The local practices and tacit insights held by teams, community members, and researchers (keeping in mind that these too evolve in conjunction with automation processes).
We cannot get to AOI until the underlying infrastructure exists to connect organisational knowledge objects with the governance of that knowledge. (Note that I use the word governance broadly here: how organisations sort, share and protect their knowledge matters for their ability to survive, stay on mission, and adapt).
That is the focus of our project. BlockScience is building KOI (Knowledge Organisation Infrastructure) as this foundation.
KOI: A Protocol for Organisational Knowledge
KOI connects knowledge where it already lives. It is a modular architecture built around nodes, each responsible for an observed space: a node processes what is going on there and pipes it on to other nodes. It also uses reference identifiers (RIDs) to order and describe the knowledge objects that nodes process. You can read more about the technical dimensions of KOI on BlockScience’s blog and Metagov’s FAQ.
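To make the node-and-RID picture concrete, here is a minimal sketch in Python. It is not the actual KOI codebase or API: the Node and KnowledgeObject classes and the RID string format are illustrative stand-ins for the idea that a node observes a space, describes what it finds with an identifier, and pipes it on to other nodes.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeObject:
    rid: str       # reference identifier; the format used below is illustrative
    source: str    # the observed space the object came from
    payload: dict  # the content plus whatever context the node attaches

@dataclass
class Node:
    name: str
    subscribers: list = field(default_factory=list)

    def observe(self, source: str, payload: dict) -> KnowledgeObject:
        # Mint an RID that orders and describes the object, then pipe it on.
        rid = f"rid:{source}:{payload['id']}"
        obj = KnowledgeObject(rid=rid, source=source, payload=payload)
        self.pipe(obj)
        return obj

    def pipe(self, obj: KnowledgeObject) -> None:
        # Forward the object to every downstream node subscribed to this one.
        for downstream in self.subscribers:
            downstream.receive(obj)

    def receive(self, obj: KnowledgeObject) -> None:
        print(f"{self.name} received {obj.rid} from {obj.source}")

# Usage: a Metagov node observes a Slack message and pipes it to a BlockScience node.
metagov = Node("metagov-node")
blockscience = Node("blockscience-node")
metagov.subscribers.append(blockscience)
metagov.observe("slack.message", {"id": "C123/456", "text": "Notes on KOI"})
```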
One consequence is that organisations can share knowledge with each other without conforming to a single ontology. We can connect tools, which might include generative AI, to assist in translating across organisations’ ontologies.
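As a toy illustration of translating across ontologies without imposing one on the other, consider the sketch below. The label mapping is entirely made up; in practice such a mapping might be proposed and maintained by a connected tool, possibly a generative model, rather than written by hand.

```python
# Hypothetical label mapping between two organisations' ways of sorting notes.
metagov_to_blockscience = {
    "working-group-note": "project-memo",
    "community-call": "meeting-record",
}

def translate(label: str, mapping: dict) -> str:
    # Fall back to the original label when no translation exists, so nothing
    # is silently flattened into the other organisation's scheme.
    return mapping.get(label, label)

print(translate("community-call", metagov_to_blockscience))  # meeting-record
print(translate("ritual-notes", metagov_to_blockscience))    # ritual-notes (kept as-is)
```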
This matters because every organisation has its own distinctive ways of sorting and governing. AOI depends on preserving those differences.
Rather than moving raw data, KOI helps us label, route, and govern knowledge with its context and rules intact. This makes it machine-readable with enforceable boundaries, which lays the groundwork for AOI. KOI is therefore a path to regulative rather than generative AI: organisations can set and enforce boundaries that machines must respect.
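The sketch below is one way to picture ‘enforceable boundaries’: each knowledge object carries its governing rules, and a router refuses to move it anywhere those rules do not allow. The Policy fields and the route and deliver functions are hypothetical, not KOI’s actual schema or transport.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    allowed_orgs: set[str]   # which organisations may receive this object
    requires_consent: bool   # e.g. the Slack author must have opted in

@dataclass
class GovernedObject:
    rid: str
    payload: dict
    policy: Policy

def deliver(obj: GovernedObject, destination_org: str) -> None:
    # Stand-in for whatever transport actually moves the object.
    print(f"sending {obj.rid} to {destination_org}")

def route(obj: GovernedObject, destination_org: str, has_consent: bool) -> bool:
    """Move the object only if its own rules permit it."""
    if destination_org not in obj.policy.allowed_orgs:
        return False   # boundary: organisation not on the allow-list
    if obj.policy.requires_consent and not has_consent:
        return False   # boundary: the author has not consented
    deliver(obj, destination_org)
    return True

# Usage: the rules travel with the object, so the boundary holds wherever it goes.
note = GovernedObject(
    rid="rid:slack.message:metagov/abc123",
    payload={"text": "Discussion of KOI governance"},
    policy=Policy(allowed_orgs={"blockscience"}, requires_consent=True),
)
route(note, "blockscience", has_consent=True)    # delivered
route(note, "some-other-org", has_consent=True)  # refused
```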
Where Ethnography Comes In
Ethnography has always been about observing what is otherwise unseen, about making practices visible. In second-order cybernetics, the observer is part of the system. Our work takes that seriously: ethnographers aren’t just describing AOI, we are co-constructing it through our work.
We have begun using KOI to connect our own collecting, contextualising, and curating of organisational knowledge and events back into the organisations and communities we are working with. What began as tools for ethnography has turned into a set of experiments in AOI.
In the demo we produced for EPIC, ethnographic data flows through KOI across three organisations: BlockScience, Metagov and ADM+S (our research team).
Our developer Luke Miller has two Obsidian vaults running, containing real notes from ethnographers Kelsie Nabben and Brooke Ann Coco. The notes are built from comments in Slack channels that people have consented to share (via our Telescope bot), to which the ethnographers have added field notes, links and quotes. We developed an Obsidian plugin that connects the vaults into KOI so that the nodes can communicate with each other.
The outcome is that discussions about KOI from Metagov are sent to BlockScience’s KOI node, where they are grouped with ethnographic notes from BlockScience’s own Slack channel and other notes the ethnographers have chosen to share.
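In code, that flow might look roughly like the sketch below, with simplified, made-up data standing in for what Telescope and the Obsidian plugin actually record: consented comments from Metagov’s Slack are grouped at the receiving node with BlockScience’s own ethnographic notes that share a topic tag.

```python
from collections import defaultdict

metagov_comments = [
    {"text": "Thread on KOI node design", "topic": "koi", "consented": True},
    {"text": "Unrelated channel chatter", "topic": "other", "consented": False},
]

blockscience_notes = [
    {"text": "Field note: how the team talks about KOI", "topic": "koi"},
]

def sync_to_node(comments, notes):
    """Group consented comments and ethnographic notes by shared topic."""
    grouped = defaultdict(list)
    for c in comments:
        if c["consented"]:  # only what people have agreed to share
            grouped[c["topic"]].append(("metagov/slack", c["text"]))
    for n in notes:
        grouped[n["topic"]].append(("blockscience/obsidian", n["text"]))
    return dict(grouped)

print(sync_to_node(metagov_comments, blockscience_notes))
# {'koi': [('metagov/slack', 'Thread on KOI node design'),
#          ('blockscience/obsidian', 'Field note: how the team talks about KOI')]}
```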
The result is a live cybernetic feedback loop, where ethnographic insights become part of the organisation’s intelligence.
AOI depends on such feedback loops: systems that sense, interpret, and act on information in ways that can be governed. Traditionally, AI governance talks about the human-in-the-loop. But that tends to treat the human as a last-minute check — a kind of moral authority.
AOI is about building the loop itself: specifying the pipes, the permissions, the shared practices that make feedback possible.
Implications for Ethnography
As we discuss in our written paper, this changes ethnography itself, with implications for how we think about observation, interpretation and inscription. Importantly, ethnography is no longer about a single authored narrative output, but about an ongoing, dynamic and participative process of knowledge production.
Conclusion
- AOI is not about replacing organisations with AI, but making their knowledge legible, governable, and usable by both humans and machines.
- Getting there requires infrastructures like KOI that take us toward regulative (not generative) AI.
- Ethnographic practice is central because it observes and inscribes the tacit and invisible, and helps build the loops that make AOI possible.
In one possible future, organisations could maintain their own local systems and practices, operating as a mesh with other organisations so that boundaries are not barriers to knowledge expansion, but help advance the productive capability of group endeavour.
To connect this to Pauliina Rautio’s keynote, what if the organisation here was not a firm but a land regeneration mission, where the contributions of animals, plants and humans are machine-readable? What if you could talk to the place itself? What if such ‘organisations’ could talk directly to each other, learn from each other?
It is still early days for our work. If you are interested in learning more, please reach out.

The paper the talk is based on was co-authored with Kelsie Nabben, Michael Zargham, Jason Potts, and Brooke Ann Coco, and builds on technical work by Luke Miller and Matthew Green. Elianna DeSota and David Sisson are also core contributors to the project, which is a collaboration between ADM+S, BlockScience and Metagov.
About BlockScience
BlockScience® is a systems engineering firm that operationalizes emerging technologies for high reliability organizations. We partner with organizations in healthcare, energy, finance, and government to develop and integrate new capabilities while maintaining reliable operations. Operationalization doesn’t end with due diligence and procurement; it includes integration, calibration and validation. Our R&D practice goes beyond exploration: We operationalize emerging technologies within our own organization first, in order to differentiate between impressive demonstrations and practical solutions. This hands-on experience, combined with rigorous engineering discipline, enables us to cut through hype and provide honest assessments of organizational readiness and technological fitness-for-purpose. We support accountable executives with responsibility for making complex technologies work within their operational constraints, ensuring that people, processes, and tools function together reliably.
