Market Design Lessons for ‘Safety First’ Institutional Engineering
Cover image by Maxim Berg @maxberg via Unsplash.com
Engineering, Institutional Design, Economics, Artificial Intelligence, Automation

What can systems engineering and institutional design teach us about how to tackle disruptive technologies such as automation and AI?

In this piece, we briefly examine historical parallels from market design to highlight a ‘safety first’, civil engineering-style perspective on institutional design. This approach builds upon requirements and identifications to support Model-Based Systems Engineering (MBSE) as a design pattern for the age of digitization and algorithmic decision-making. By doing so, typical pitfalls associated with the incorporation of new technologies can be avoided, while still benefiting from the novel — and exciting — prospects such technologies may afford.

Artificial Intelligence (AI) & Programmatic Automation

Two technological innovations with a long history of both theoretical research and practical application, artificial intelligence (AI) and programmatic automation (such as that found in cryptographic smart contracts), are coming to fruition in the first quarter of the 21st century.

Each innovation has, separately and perhaps soon collectively, precipitated the disruption not just of its own previously narrowly defined research and application areas, but also of the general ‘life’ of the digital society.

It is at present unknown what the medium-to-long term repercussions will be for, e.g., the labor market, economic activity, and education and skill development, but the general societal consensus appears simultaneously to embrace the potential benefits these technologies offer, while holding at arm’s length the harm introduced (or perceived to be introduced) by their rapid adoption.

This poses a conundrum to the adoptee — how can these technologies be incorporated by institutions in a safe manner, while at the same time exploiting their synergies, positive externalities, and potentially paradigm-shifting benefits?

Systems Perspective of Institutional Design for Safety

Here and in what follows, we adopt a systems perspective on institutional design for safety, because these technological innovations are necessarily incorporated into institutions with humans ‘in the loop’.

With humans 'in the loop', the safety of an institution's participants is paramount, and must be evaluated with respect to how the corresponding system behaves as a whole, and not solely with regard to its individual components (cf. e.g. Leveson [2016] for a detailed and comprehensive discussion of the systems approach to safety).

Fortunately, a potential solution to this conundrum rests in a well-defined methodological approach to institutional design, one that renders disruptive technologies beneficial by founding their adoption upon safety. This approach has been called “economic engineering” in the economics literature (cf. Roth [2002]) and leverages methodologies from fields such as civil engineering to design and implement safety-first institutional foundations.

Thinking of potentially disruptive technologies as new ‘materials’ allows the designer to consider new, safe ‘structures’ from which beneficial solutions using these (and other) technologies can emerge (cf. Gordon [2009], who describes how it is structural properties, rather than material properties, that inform system safety).

This approach is proactive rather than, as is currently evident, reactive. By designing an institutional framework for safety first, and then optimizing conditional upon safety, the designer avoids the error-prone tâtonnement toward optimal solutions based upon incomplete (or inconsistent) design properties in the face of technological disruption.

Similar insights have been offered regarding institutional and constitutional design in the past (cf. e.g. Koppenjan and Groenewegen 2005, Zargham et al. 2023). To understand this perspective better, parallels may be drawn from other institutions where technological innovation has disrupted ecosystems previously accepted as ‘safe’, or as having a well-defined risk profile (and hence open to risk management).

As motivating examples, we shall consider below the adoption of safety-first solutions for both financial and non-financial markets, when faced with mass digitization and the advent of high-speed communication networks.

Safety in Financial Markets

In financial markets, a series of stock market crashes (Black Monday in October of 1987, the Financial Crisis of 2008, the ‘Flash Crash’ of 2010, and the COVID Pandemic crash of 2020) each prompted a post-mortem that introduced restrictions on the market’s functioning in the hope of mitigating future crashes.

These restrictions on the market’s functioning may be divided into structural, behavioral, and regulatory areas, each acting as a method to provide a ‘safety first’ approach to an organically created ecosystem.

The most straightforward structural solution was the introduction and refinement of the financial market ‘circuit breaker’ (e.g. Kim 2004). This may be seen as a restriction on how the system updates state: state updates (price changes) are shut down when one or more safety criteria are violated. Such criteria include large-volume orders within a short period of time, large discrepancies between current and futures prices, and high volume or price volatility.

When a circuit breaker is triggered, the system stops updating and (it is thought) provides time for market participants to “take a breather” and rationally ingest market information, without permanently disrupting the market’s function of price discovery.
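
As a rough illustration (not a description of any actual exchange's rulebook), the logic of a circuit breaker can be sketched as a guard on state updates: a proposed price change is accepted only if it satisfies the safety criteria, and otherwise trading is halted for a cooling-off period. The thresholds, the halt duration, and the `MarketState` structure below are illustrative assumptions rather than real exchange parameters.

```python
from dataclasses import dataclass

# Illustrative thresholds only; real exchanges use tiered, regulator-defined limits.
MAX_PRICE_MOVE = 0.07     # halt if price moves more than 7% from the reference price
MAX_VOLUME_SPIKE = 5.0    # halt if interval volume exceeds 5x its recent average
HALT_SECONDS = 15 * 60    # cooling-off period

@dataclass
class MarketState:
    reference_price: float   # e.g. the previous close
    avg_volume: float        # recent average traded volume per interval
    halted_until: float = 0.0

def apply_update(state: MarketState, new_price: float, interval_volume: float,
                 now: float) -> bool:
    """Return True if the state update (price change) is accepted,
    False if a circuit breaker halts trading instead."""
    if now < state.halted_until:
        return False  # market is already halted; reject state updates

    price_move = abs(new_price - state.reference_price) / state.reference_price
    volume_spike = interval_volume / state.avg_volume

    if price_move > MAX_PRICE_MOVE or volume_spike > MAX_VOLUME_SPIKE:
        # A safety criterion is violated: stop updating state for a cooling-off period.
        state.halted_until = now + HALT_SECONDS
        return False

    return True  # the update satisfies the safety criteria and is accepted

# Example: a 10% drop against a reference price of 100 trips the breaker.
state = MarketState(reference_price=100.0, avg_volume=1_000.0)
print(apply_update(state, new_price=90.0, interval_volume=1_200.0, now=0.0))  # False
```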

During the 2008 Financial Crisis, restrictions on behavior were also implemented, i.e. parts of the set of actions available to participants were removed, temporarily or permanently, in an effort to preserve market function. The most prominent of these restrictions was the partial or total ban on short selling, i.e. borrowing shares and selling them in the expectation that the share price will fall, so that they can later be repurchased more cheaply, returned to the lender, and the difference kept as profit.

Short selling is thought to signal to other market participants that negative information on the health of a company has been received, causing risk-averse participants to sell that company’s shares and bring about a self-fulfilling equilibrium in which the share price does indeed fall, regardless of whether negative information ever actually existed. Restricting short selling is an attempt to break the chain of expectations formation leading to this self-fulfilling equilibrium.

Finally, in the wake of the 2008 Financial Crisis and the 2020 COVID Pandemic crash, financial regulations have been changed to require tighter standards of market participants (cf. e.g. Jones and Zeitz 2017). This is again structural, in the sense that participants (for example, banks) are now required to satisfy higher capital requirements, greater financial disclosure, and so on, before they are allowed to participate.

But the reasoning behind this is incentive-based: failure to adhere to these standards will open the entity to legal proceedings and prosecution, which (it is surmised) the entity would prefer to avoid. Thus, regulation changes the incentive landscape (rather than the structural or admissible action landscapes) and is a method of steering participants away from failure states and toward a “safe” region for the resulting market trajectories.

Safety in Non-Financial Markets

Nobel Laureate Alvin Roth (Roth 2002, 2008) has been a strong proponent of safety design for non-financial markets, such as the market for medical residents, for school selection, and for organ transplantation, where the direct buying and selling of goods and services (i.e. using price discovery alone as an allocation mechanism) is deemed unethical or impractical, or both.

In each case there is a delicate balance between qualitative criteria and quantitative criteria for success or failure of the resulting market mechanism. Some qualitative criteria might loosely be gathered under the umbrella of Hippocrates’ entreaty to “do no harm”, while other criteria might focus upon a measure of “fairness”. Quantitative criteria would include measures such as the percentage of students matched to a school, or the percentage of two-way (or three-way) organ transplantations.
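
For concreteness, the sketch below implements applicant-proposing deferred acceptance, the Gale-Shapley algorithm at the core of the resident-matching and school-choice designs discussed by Roth, together with one of the quantitative criteria mentioned above (the percentage of applicants matched). The preference lists and capacities are invented for illustration; real-world designs must additionally handle couples, quotas, ties and other complications that are omitted here.

```python
def deferred_acceptance(applicant_prefs, school_prefs, capacities):
    """Applicant-proposing deferred acceptance (Gale-Shapley).
    applicant_prefs: dict applicant -> ordered list of acceptable schools
    school_prefs:    dict school -> ordered list of acceptable applicants
    capacities:      dict school -> number of seats
    Returns a dict applicant -> school (unmatched applicants are omitted)."""
    rank = {s: {a: i for i, a in enumerate(p)} for s, p in school_prefs.items()}
    next_choice = {a: 0 for a in applicant_prefs}   # index of next school to propose to
    held = {s: [] for s in school_prefs}            # applicants tentatively held by each school
    free = [a for a in applicant_prefs if applicant_prefs[a]]

    while free:
        a = free.pop()
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                                # applicant has exhausted their list
        s = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        if a not in rank[s]:
            free.append(a)                          # applicant is unacceptable to this school
            continue
        held[s].append(a)
        held[s].sort(key=lambda x: rank[s][x])      # school keeps its most-preferred applicants
        while len(held[s]) > capacities[s]:
            free.append(held[s].pop())              # reject the least-preferred held applicant

    return {a: s for s, holders in held.items() for a in holders}

# Invented example: three applicants, two schools with one seat each.
applicants = {"ann": ["north", "south"], "bob": ["north"], "cai": ["south", "north"]}
schools = {"north": ["bob", "ann", "cai"], "south": ["ann", "cai"]}
match = deferred_acceptance(applicants, schools, {"north": 1, "south": 1})
pct_matched = 100 * len(match) / len(applicants)    # a quantitative success criterion
print(match, f"{pct_matched:.0f}% matched")
```

The qualitative criteria enter when choosing between such mechanisms in the first place: applicant-proposing deferred acceptance is attractive partly because it produces a stable matching and gives applicants no incentive to misreport their preferences, properties that sit under the “do no harm” and “fairness” umbrellas above.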

It has been, and remains, a challenge to properly identify the circumstances in which a particular mechanism may be applied to promote system safety. While qualitative criteria such as those above may be ethically ‘clear’, in practice they are not necessarily clear to all unless the underlying assumptions regarding e.g. institutions, enforcement etc. are properly understood. As technology (such as high-speed mobile communications and digitized logistics) has advanced, misplaced or incorrect assumptions have often come into conflict with reality. This is exemplified by the existence, in many parts of the world, of a black market for organs (cf. e.g. Koplin 2014), where price is used as an allocation mechanism in spite of the ethically negative incentives it creates, such as rewarding organ theft. Understanding the context within which certain mechanisms can be expected to perform as intended is thus another criterion that must be considered when selecting between different mechanisms or mechanism classes.

Understanding these criteria (their context, their assumptions, etc.) elevates them to requirements that a system (and its corresponding institutions) must satisfy in order to be safe. Requirements are the foundation of any engineering, be it civil or economic. Basing an optimal solution to a market or non-market allocation problem upon requirements, and measuring the solution’s performance against specific qualitative and quantitative criteria, provides a methodological approach that can inform the design of institutions in the face of technological innovation.

Institution Design: Requirements & Identifications

Requirements

From the above discussion of financial and non-financial markets may be distilled the following classification of requirements governing the design of an institution confronted with disruptive technological innovation:

  1. Understand the externalities, positive and negative, which affect other ecosystems (or parts of the same ecosystem) and, to the extent that those ecosystems affect the system under design, adapt the resulting design to account for these effects. For example, high-frequency trading (HFT) in financial markets dramatically lessened the time between a strategy’s modification and its execution, leading to volatility clustering and illiquidity when market participants executed updated strategies at the same time. The resulting contagion within a financial system, such as that observed during the 2008 Financial Crisis, created macroeconomic spillovers that, when not taken into consideration, amplified financial market corrections within the larger economy (cf. O’Hara 1995, 2015).
  2. Understand the system’s environment, particularly with respect to its boundaries and interfaces with other areas. Reliance upon high-speed communications in the development of financial markets throughout the 1980s and 1990s, first with high-speed WATS lines and then via the Internet, meant that the boundary between market and non-market information (both in space and time) was moved. This led to the exploitation of larger data sets, both in types of information and quantity of information, which heralded new forms of analysis and entirely new forms of trading behavior, such as algorithmic trading in financial markets and regional-to-national expansion for organ exchange.
  3. Understand how and in what ways a system can fail, before it fails, to ensure that the resulting design can address failure states without an associated social cost. In financial markets, the advent of HFT created outcomes that were not known to be reachable before they occurred, such as the 2010 ‘Flash Crash’. This resulted in an extension of the circuit breaker mechanism to the HFT domain (cf. Sifat and Mohamad 2020, Chen et al. 2023). In non-financial markets, a breakdown of the matching process between medical residents and hospitals in the United States in the 1940s had to actually occur before a deeper understanding was attempted and an institutional solution designed and achieved (Roth 1984, Roth and Peranson 1999). Passing from these reactive solutions to an understanding of which states are reachable, and under what conditions, remains an active area of research for both financial and non-financial markets, even to the point of leveraging disruptive technologies such as machine learning (e.g. Easley et al. 2021).

Requirements serve to bound the scope and the complexity of the design problem. They form the ‘canvas’, as it were, upon which the system structure is drawn, without restricting the designer’s choices of what to design (apart from those choices which fail to provide safety).

Identifications

After understanding the above, the actual design process of an institution may proceed from the perspective of safety first:

4. Using theory and simulations, identify the reachable states of the system that an institution must be aware of (a minimal simulation sketch is given after this list). These include failure states, undesirable states (such as liquidity traps), and trajectories with undesirable properties (such as extreme volatility). Without an understanding of what is possible, given the ways that participants can interact with the system, it is impossible to design a system whose restricted set of reachable states is desired or preferred. A safety-first approach only accepts designs which meet a well-defined and unambiguous set of requirements, such as minimum standards for quality, maximum acceptable risk in non-failure states, and guarantees (again, up to some level of risk tolerance) that the system is free of catastrophic failure states for any initial conditions and interactions with the system’s environment.

5. Again assisted by theory (particularly incentive theory) and simulations, identify the governance surface that allows stakeholders in the success of the system (who may themselves be participants) to adjust the system’s dynamics in a clearly defined, unambiguous and transparent fashion (cf. e.g. Ramirez 2019, Zargham et al. 2021); a minimal sketch of such a surface is given below. This may be especially challenging in decentralized environments, and requires an institutional arrangement which can incorporate both explicit/formal procedures (rules) and implicit/informal procedures (norms). But it is also a challenge for centralized environments, as the institution(s) acting in a governance capacity may be ill-suited to monitoring and assessing the disruption brought by technological innovation. (An example is the economic cost associated with the long gestation time between the widespread use of crypto assets in 2018 and the European Union’s eventual adoption of MiCA, the Markets in Crypto-Assets regulatory framework, in May of 2023. Comparable regulation in the United States has yet to appear as of this writing.)

6. Based upon the above, identify the manner in which disruptive technologies are incorporated into the existing ‘materials’ used to construct the institution’s ‘structure’. Rather than relying upon haphazard application or potentially myopic gains, building a model in which short-, medium- and long-run outcomes can be simulated and tested allows one to introduce innovation in a safety-first manner.
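
As a minimal illustration of identification 4, the sketch below sweeps a toy price process over many randomly drawn trajectories and records which of them reach a hypothetical failure state (the price collapsing below a floor) or exhibit excessive volatility. The dynamics, thresholds and parameter values are invented for illustration; in practice this role is played by calibrated models and purpose-built simulation frameworks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy dynamics: a market whose price follows a random walk, with a
# "stressed regime" of forced selling once the price has fallen far enough.
N_RUNS, N_STEPS = 2_000, 250
PRICE_FLOOR = 0.5    # hypothetical failure state: price halves from its starting value
VOL_LIMIT = 0.02     # hypothetical threshold for an "undesirable" volatility level

failures = high_volatility = 0
for _ in range(N_RUNS):
    price, returns = 1.0, []
    for _ in range(N_STEPS):
        if price < 0.8:
            shock = rng.normal(-0.005, 0.03)   # stressed regime: downward drift, higher volatility
        else:
            shock = rng.normal(0.0, 0.01)      # normal regime
        price *= 1.0 + shock
        returns.append(shock)
        if price < PRICE_FLOOR:
            failures += 1                      # a failure state is reachable on this trajectory
            break
    else:
        if np.std(returns) > VOL_LIMIT:
            high_volatility += 1               # undesirable (but non-failure) trajectory

print(f"trajectories reaching the failure state: {failures / N_RUNS:.1%}")
print(f"surviving trajectories with excessive volatility: {high_volatility / N_RUNS:.1%}")
```

A candidate design would then be iterated (for example, by adding a circuit-breaker-style guard to the state update) until the estimated probability of reaching the failure state falls within the stated risk tolerance.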

Identifications provide the institution designer with an understanding of the possible outcomes associated with both technological implementation and its associated change management through governance. They help to ensure that technology is adopted for its contribution to structural properties, rather than for its material properties, and they temper the desire to add new technology simply because it is new. This is why institutional design (including market design) may be identified with system design within the context of technological innovation.
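
One way to make the governance surface of identification 5 concrete is to enumerate exactly which parameters stakeholders may adjust, within which admissible bounds, under what authorization rule, and with every change logged. The parameter names, bounds and quorum rule below are hypothetical, chosen only to illustrate the idea of a clearly defined, unambiguous and transparent adjustment procedure.

```python
from dataclasses import dataclass, field

@dataclass
class GovernedParameter:
    """A single dial on the governance surface: named, bounded, and auditable."""
    name: str
    value: float
    min_value: float
    max_value: float

@dataclass
class GovernanceSurface:
    parameters: dict
    quorum: int = 3                          # hypothetical: approvals required per change
    log: list = field(default_factory=list)  # transparent record of every proposed change

    def propose_change(self, name: str, new_value: float, approvals: int) -> bool:
        p = self.parameters[name]
        if approvals < self.quorum:
            self.log.append((name, new_value, "rejected: insufficient approvals"))
            return False
        if not (p.min_value <= new_value <= p.max_value):
            self.log.append((name, new_value, "rejected: outside admissible bounds"))
            return False
        self.log.append((name, new_value, f"accepted (was {p.value})"))
        p.value = new_value
        return True

# Hypothetical governance surface for a circuit-breaker-style institution.
surface = GovernanceSurface(parameters={
    "max_price_move": GovernedParameter("max_price_move", 0.07, 0.01, 0.20),
    "halt_minutes":   GovernedParameter("halt_minutes", 15, 5, 60),
})
surface.propose_change("max_price_move", 0.10, approvals=4)   # accepted
surface.propose_change("max_price_move", 0.50, approvals=4)   # rejected: out of bounds
for entry in surface.log:
    print(entry)
```

Everything outside the enumerated parameters and their bounds is, by construction, not adjustable through governance, which is what makes the surface well defined.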

Practical Tools for Institutional Design

From a practical point of view, there is much in common between institutional design, as described above, and Model-Based Systems Engineering (MBSE), although historically the two have originated from (in some cases vastly) different spheres of influence. MBSE (cf. e.g. Madni and Sievers 2018, Shevchenko 2020, and Zargham 2023 for a perspective amalgamating MBSE into MBID, model-based institutional design) proceeds by elevating identifications 4–6 above to requirements in their own right that must be satisfied, given the understanding conveyed by requirements 1–3.

From this viewpoint, the twin disruptive technologies of programmatic automation and AI are essentially ‘materials’. These are used in the creation of the institutional ‘structure’ that must withstand (i.e. avoid the failure states associated with) different ‘loads’ from the institution’s environment, its associated externalities, and its internal forces.

The application of MBSE prevents a perspective which is either too abstract, in the sense of washing out externalities or blurring the interface between the institution and its environment, or too detailed, where individual components of an institution are ‘optimized’ without considering how these components are meant to work together. A systems engineering perspective provides a ‘zoomable window’ that can see both the forest and its trees, which is critical to identifying the collective failure states and reachable states of the institution at both the component and system levels, and their meso-level interactions.
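
Loosely in the spirit of MBSE, though not tied to any particular toolchain, identifications 4-6 can be operationalized as executable checks evaluated against simulated model outputs, with a candidate institutional design accepted only if every check passes on every trajectory. The requirement names, thresholds and record fields below are illustrative assumptions.

```python
# Requirements expressed as executable checks over simulated model outputs
# (names, thresholds, and record fields are illustrative, not drawn from the article).

def no_catastrophic_failure(run):
    return not run["reached_failure_state"]

def volatility_within_tolerance(run, limit=0.02):
    return run["volatility"] <= limit

def governance_changes_logged(run):
    return run["unlogged_parameter_changes"] == 0

REQUIREMENTS = [no_catastrophic_failure, volatility_within_tolerance,
                governance_changes_logged]

def accept_design(simulated_runs):
    """Safety-first acceptance rule: every requirement must hold on every trajectory."""
    return all(req(run) for run in simulated_runs for req in REQUIREMENTS)

# Toy simulated output for two trajectories of a candidate design.
runs = [
    {"reached_failure_state": False, "volatility": 0.012, "unlogged_parameter_changes": 0},
    {"reached_failure_state": False, "volatility": 0.031, "unlogged_parameter_changes": 0},
]
print(accept_design(runs))  # False: the second trajectory violates the volatility requirement
```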

Designing and Building for ‘Great Good’

It has been argued in this essay that disruptive technologies such as AI and programmatic automation should be thought of as the substrate upon which novel institutions may be built, akin to engineering a structure with desired properties (and requirements) from materials with their own (possibly different) properties.

This engineering viewpoint on institutional design is most fruitful when using a model-based systems perspective, ensuring that system requirements and associated identifications are not 1) missing, 2) misapplied, or 3) assumed, the last being a particular danger when technology properties are elevated to system properties because they are “obvious”.

Avoiding these and other pitfalls of constructing new structures from novel materials requires a safety-first approach to institutional design: safety for stakeholder objectives, participant investment and values, regulatory applications, and governance effects. To assume that things are safe because “the technology says so” is capricious; to wilfully ignore the (hard) work required to engineer safety is egregious.


Article by Jamsheed Shorish with contributions by Michael Zargham and Ilan Ben-Meir, and editing and publication by Jessica Zartler.

References

Chen, H., A. Petukhov, J. Wang and H. Xing (2023). “The Dark Side of Circuit Breakers”, Journal of Finance, forthcoming.

Easley, D., M. López de Prado, M. O’Hara and Z. Zhang (2021). “Microstructure in the Machine Age”, Review of Financial Studies 34(7), pp. 3316–3363.

Gordon, J. E. (2009). Structures: Or Why Things Don’t Fall Down. Da Capo Press.

Jones, E. and A. O. Zeitz (2017). “The Limits of Globalizing Basel Banking Standards”, Journal of Financial Regulation 3(1), pp. 89–124.

Kim, Y. H. (2004). “What Makes Circuit Breakers Attractive to Financial Markets? A Survey”, Financial Markets, Institutions and Instruments 13(3), pp. 109–146.

Koplin, J. (2014). “Assessing the Likely Harms to Kidney Vendors in Regulated Organ Markets”, The American Journal of Bioethics 14(10), pp. 7–18.

Koppenjan, J. and J. Groenewegen (2005). “Institutional design for complex technological systems”, International Journal of Technology, Policy and Management 5(3), pp. 240–257.

Leveson, N. G. (2016). Engineering a Safer World: Systems Thinking Applied to Safety. MIT Press.

Madhavan, A. (2000). “Market microstructure: A survey”, Journal of Financial Markets 3, pp. 205–228.

Madni, A. M. and M. Sievers (2018). “Model-based systems engineering: Motivation, current status, and research opportunities”, Systems Engineering 21(3), pp. 172–190.

O’Hara, M. (1995). Market Microstructure Theory. Blackwell.

O’Hara, M. (2015). “High frequency market microstructure”, Journal of Financial Economics 116(2), pp. 257–270.

Ramirez, B. (2019). “The Graph Network In Depth — Part 2”, The Graph Blog, retrieved May 23rd, 2023.

Roth, A. (1984). “The Evolution of the Labor Market for Medical Interns and Residents: A Case Study in Game Theory”, Journal of Political Economy 92(6), pp. 991–1016.

Roth, A. and E. Peranson (1999). “The Redesign of the Matching Market for American Physicians: Some Engineering Aspects of Economic Design”, American Economic Review 89(4), pp. 748–780.

Roth, A. (2002). “The Economist as Engineer: Game Theory, Experimentation, and Computation as Tools for Design Economics”, Econometrica 70(4), pp. 1341–1378.

Roth, A. (2008). “What Have We Learned from Market Design?”, The Economic Journal 118(527), pp. 285–310.

Shevchenko, N. (2020). “An Introduction to Model-Based Systems Engineering (MBSE)”, SEI Blog (Software Engineering Institute, Carnegie Mellon University), retrieved April 25th 2023.

Sifat, I. M. and A. Mohamad (2020). “A survey on the magnet effect of circuit breakers in financial markets”, International Review of Economics and Finance 69, pp. 138–151.

Stiglitz, J. E. (2010). “Risk and Global Economic Architecture: Why Full Financial Integration May Be Undesirable”, American Economic Review 100(2), pp. 388–392.

Zargham, M. et al. (2021). “Summoning the Money God”, Medium, retrieved May 23rd 2023.

Zargham, M. (2023). “Model-Based Institutional Design”, Medium, retrieved May 20th 2023.

Zargham, M., E. Alston, K. Nabben and I. Ben-Meir (2023). “What Constitutes a Constitution?”, Medium, retrieved April 24th 2023.


About BlockScience

BlockScience® is a complex systems engineering, R&D, and analytics firm. By integrating ethnography, applied mathematics, and computational science, we analyze and design safe and resilient socio-technical systems. With deep expertise in Market Design, Distributed Systems, and AI, we provide engineering, design, and analytics services to a wide range of clients including for-profit, non-profit, academic, and government organizations.
