Thoughts on Trajectory, Algorithms, Engineering, DAOs and Functional Misalignment
Elevation / Incline / Cycling Heatmap for Brasilia, Brazil



An RMIT Podcast featuring Danilo Lessa Bernardineli

On June 14, 2022, a conversation between Kelsie Nabben and myself (Danilo Lessa Bernardineli) went live as a ‘Mint & Burn’ podcast episode on the dual topics of Engineering Methods in Web3 and Sub-DAOs of DAOs, using the Gitcoin Fraud Detection & Defense (FDD) Working Group as a case study. This piece consists of re-formatted questions and answers, as well as some free-styled contributions, in an effort to disseminate some thoughts and insights.

The questions posed by Kelsie covered my trajectory at BlockScience and in Web3, the relationship between algorithms and governance, what it takes to be an Engineer in the Web3 space at large, and commentaries on Viable Systems, GitcoinDAO and the challenges of progressive decentralization as seen in the saga of the proposed Sybil Detection DAO.

The podcast links are as follows: Spotify and Apple Podcasts

On the Trajectory

What is BlockScience

BlockScience is a firm that engages in Complex Systems Engineering. What is that? By Complex Systems we mean that we’re interested in designs that involve a variety of fields and scales. We want to build economies and mechanisms that rely on insights from all the combined sciences: economics, social sciences, data science, computational science and so on.

What we’re interested in at BlockScience

We want to have an in-depth understanding of what happens in the macro universe of the economy without discarding the micro, atomic structure of people’s behavior. That’s the complex portion of the work. The engineering portion is about delivering actionable knowledge and recommendations based on our best judgment, keeping in mind all the participants’ expectations and desirables.

A shard of my academic background

It will become obvious why I’m here. I’m an atmospheric physicist by training. My academic background is about calibrating observable physical measurements so that climate models have a representative input of the world. There are a lot of commonalities between the practical goals of climate extrapolation under an array of scenarios and those of token engineering. Both deal with using a blend of theoretical and data-driven methods, encapsulated in simulations, in order to measure future outcomes given certain decisions and interventions.

A shard of my personal background

There’s also a warm component to why I’m interested in studying complex socio-economical systems. I’m from the Brazilian Amazon. I lived my entire childhood and early years in the city of Ariquemes, Rondônia, and I watched very closely the social, economical and environmental change that happens at the edge of society’s frontiers as they are incepted and matured. I’m acutely aware of how incentives that are overly detached from sensing the edges can lead to generalized collapse. Both cohesion and collapse are emergent dynamical properties. The desire to understand world dynamics formally and as unbiasedly as possible led me to pursue an interdisciplinary track alongside a physics education.

Getting into Crypto

That being said, I usually joke that this was some years ago, because my interest has shifted from Climate Change to Economic Change. Web3 is especially instigating from a complex systems perspective because it combines three things. First, it is open by design: all interactions and events are public, and this generates a wealth of behavioural data that is novel in the scientific world. Second, it provides one of the most exciting social experiments ever done. The fact that anyone on the internet can theoretically participate in it is particularly instigating. Third, there is a lot of potential from a cyberneticist point of view: tokens can have an interesting interpretation as a unit of energy for a specific functional cluster. This is not uncontroversial, however. Some people have dubbed this “unit of energy” view as analogous to hyper-capitalism.

Getting into BlockScience

It’s definitely very exciting to be in my position, being able to observe the real-time development of all of that! As for how I actually got into Web3: it was through BlockScience! I was trying to reproduce some research using causal models for informing public policy around bike adoption in Auckland, New Zealand, and I was shocked by the lack of decent open source libraries for doing complex systems simulations. Most of the tools were proprietary and had severe limitations. The best open-source tool that I could find was cadCAD, and from there I went down the crypto rabbit hole, mostly because there is a huge demand (and pressure!) for building economies that are sustainable and work for everyone.

On Algorithms in Decentralized Systems

Setting-up for Algorithmic Training

The function of algorithms to humans

Algorithms have a dual function in any circumstance: they can be information processors and process automators. The distinguishing aspect of decentralized systems is related to distributed consensus: that is, the operation of the algorithm is done on multiple instances rather than on a central node.

Algorithms as information reducers

By processing information I mean that they can have the role of generating data or reducing data. Simulations are an example of data generation: provided an initial configuration, a simulation can generate an effectively infinite amount of data on which analysis can be done, which is itself an example of information reduction. All analyses, in some way or another, consist of transforming, grouping and aggregating data.
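
To make the two roles concrete, here is a minimal sketch in plain Python (illustrative only, and not tied to any particular library such as cadCAD): the simulation loop generates data from an initial configuration, and the aggregation at the end reduces it back to a handful of numbers.

```python
import random
import statistics

def simulate(initial_value: float, steps: int) -> list[float]:
    """Information generation: one random-walk trajectory."""
    trajectory = [initial_value]
    for _ in range(steps):
        trajectory.append(trajectory[-1] + random.gauss(0.0, 1.0))
    return trajectory

# Generation: many Monte Carlo runs from a single initial configuration.
runs = [simulate(initial_value=100.0, steps=50) for _ in range(1_000)]

# Reduction: aggregate the wealth of generated data into a few numbers.
final_values = [run[-1] for run in runs]
print("mean final value:", statistics.mean(final_values))
print("stdev of final value:", statistics.stdev(final_values))
```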

To reduce information is to increase how much data we can consume

In this information processor sense, algorithms can really be understood as cognitive extensors of what we do. Simulations are the computational analogy to our own imagination, and computational statistics are the analogy to our own analytical process.

Algorithms as automators

As for the process automation sense, algorithms can be understood as directly replacing our labour, like automating boring tasks away given a rule set. Those tend to be simultaneously the most useful and the most dangerous algorithms, especially when there’s extensive information processing happening inside them. It’s a recurring topic in the data science field to talk about “Weapons of Math Destruction”, a term for algorithms that automate human judgement without any explicit feedback loop. Left unattended, they tend to reinforce their bias every time they take an action, which in the short term makes the algorithm self-justifying.
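
As a toy illustration of that self-justifying behaviour (my own hypothetical example, not one discussed in the podcast), consider an automated approver that only observes outcomes for the cases it approves. The group that starts with a biased estimate is always rejected, so no evidence ever arrives to correct the bias:

```python
import random

random.seed(0)
true_success_rate = {"group_a": 0.6, "group_b": 0.6}  # both groups equally good
estimated_rate = {"group_a": 0.6, "group_b": 0.3}     # the algorithm starts out biased

for _ in range(10_000):
    group = random.choice(["group_a", "group_b"])
    if estimated_rate[group] < 0.5:
        continue  # rejected: the outcome is never observed, so no update happens
    outcome = random.random() < true_success_rate[group]
    # A simple running update, only ever performed on approved cases.
    estimated_rate[group] += 0.01 * (outcome - estimated_rate[group])

print(estimated_rate)  # group_b stays stuck at its initial biased estimate
```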

The danger of automating on decentralized systems

This is especially dangerous in decentralized systems, in which there is no single point to which you can appeal. Decentralizing a self-reinforcing algorithm without a feedback loop perpetuates the bias it is propagating. It is almost like a governance hydra, where bad decisions justify even worse decisions in sequence. It is an old adage in computer science that to err is human, but to really break things you need a computer. By making the computer decentralized without feedback loops, you’re also removing the shutdown button (or multiplying it by the thousands and shipping it across the world!).

Being cautious on algorithmic danger

We should be actively aware of and cautious about that. Algorithms can really scale up our information processing abilities. They can allow us to make faster and better judgements. The danger arises when we use them to completely remove human involvement from decisions that affect other humans.

What Algorithms do not do: warm processing

As for what algorithms do not do: they’re actually quite limited! For all the information processing capacity that they have, they have quite a constraint on the input, or sensing, side. This is related to the concept of “Warm Data”, and it is a definite comparative advantage that humans have over machines. The room in an algorithm’s logic for fuzziness and ambiguity tends to be quite restricted in all aspects: the input, the transformation and the output.

Warm processing: an open problem that’s being worked on

A lot of work has gone into this through Bayesian formalisms and Causal Inference, and to be fair, a lot of process-automating algorithms could benefit from adopting them more consciously. However, we are still not there yet. Bayesian statistics is still hard for most people (and coders) to grasp, and machines are still slow when using it at each step.
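
For a flavor of what that adoption can look like, here is a minimal Beta-Binomial sketch (a standard textbook construction, chosen by me for illustration): instead of committing to a point estimate, the algorithm carries an explicit posterior with a measure of uncertainty that narrows as evidence arrives.

```python
# A uniform Beta(1, 1) prior over an unknown success probability.
alpha, beta = 1.0, 1.0

# Observed evidence (illustrative numbers).
successes, failures = 7, 3

# Conjugate update: the posterior is Beta(alpha + successes, beta + failures).
alpha += successes
beta += failures

mean = alpha / (alpha + beta)
variance = (alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))
print(f"posterior mean {mean:.2f} +/- {variance ** 0.5:.2f}")
```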

Auto-pilots vs Pilots

We should be really distrustful of algorithms as full substitutes for active human participation in fuzzy problems. It is just like an autopilot: it will probably do OK, or even better than human pilots, in most common circumstances. But I wouldn’t trust it if the engine is catching fire and the plane needs to land on a patch of grass in the middle of a forest!

The solution is Symbiosis: Cyborg Pilots

You know what is even better than human pilots and autopilots? Cyborg pilots: human pilots armed with the best possible data about the environment they’re dealing with, complete with machine-provided suggestions and measures of uncertainty. In that way, we are able to mix the algorithm’s capacity for information processing and automation with the ample warm dataset that a human brain contains.

Not surprisingly, this is the direction in which most avionics are going, even though algorithmic autopilots have been around for a while.

On Complex Systems Engineering in Web3

Revamping a Wheel Hub

What is Engineering, actually?

First we need to make sure that Engineering is well understood as a concept. Engineering is about the entire life-cycle of designing, validating, implementing and maintaining systems through replicable and methodical processes. The life-cycle is not linear, and learnings from each step and each project inform the others.

What makes BlockScience an Engineering Firm

Since our inception, BlockScience has pushed for Web3 projects to follow a more structured Engineering Design Process, which is a way to encapsulate the loops in a typical Web3 project. The specific approach depends heavily on the project stage and the intended goals.

What Engineering Design looks like

When designing systems from scratch, the approach tends to involve a mix of warm data collection and formalism building. The underlying idea is to properly map the set of desirables and properties into specifications that allow for more rigorous treatment, like mathematical specifications or computational models.

How Engineering Validation & Maintenance Works

This is different when validating and maintaining existing systems. The approach then tends to be more familiar to data science workflows. In this case, we often create a Digital Twin of the real system and use it to do what we call “Grey Box” extrapolations. The general idea is that we mix fundamental knowledge about how the system works with data-driven methods, so that we’re able to predict the system’s trajectories under an array of scenarios with more realism than if we used black box models. This can be very useful when investigating extreme scenarios or changes in the system’s inner workings, as the main limitation of most time-series black box models is the assumption that there is some well-characterized distribution behind the data.
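
To give a feel for the grey-box idea, here is a hedged sketch in plain Python (the dynamics, numbers and names are all invented for illustration; a real Digital Twin is far richer): a mechanistic update rule carries the fundamental knowledge, and a data-driven correction term is fitted to the observed residuals before extrapolating.

```python
import statistics

def mechanistic_step(supply: float, issuance_rate: float) -> float:
    """White-box part: fundamental knowledge of the system,
    e.g. a known token issuance schedule."""
    return supply * (1.0 + issuance_rate)

# Observed history from the "real" system (illustrative numbers).
observed = [100.0, 102.5, 105.2, 107.8, 110.6]
issuance_rate = 0.02

# Black-box part: fit the residual between mechanism and data,
# here with the simplest possible estimator (a mean correction term).
residuals = [
    obs - mechanistic_step(prev, issuance_rate)
    for prev, obs in zip(observed, observed[1:])
]
correction = statistics.mean(residuals)

def grey_box_step(supply: float) -> float:
    """Grey-box prediction: mechanism plus learned correction."""
    return mechanistic_step(supply, issuance_rate) + correction

# Extrapolate a few steps beyond the observed data.
state = observed[-1]
for _ in range(3):
    state = grey_box_step(state)
    print(round(state, 2))
```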

The essence of Engineering

Engineering as a discipline is about the continuous and repetitive use of scientific workflows to provide the best possible understanding and implementation of real-world systems. The specific workflow and methodologies are conditional on the real-world problem, just as one would use quantum physics to understand atomic behavior or economic history to understand the context, trajectories and pitfalls of aggregate choices.

Why BlockScience is an Awesome Engineering Firm

We’re very fortunate at BlockScience to have a large diversity of experts across multiple disciplines and regions. This definitely gives us a comparative advantage in quickly identifying which workflow and method can provide the most fruitful results for most Web3 engineering problems.

On DAOs

Farmers lunching at Comuna da Terra Irmã Alberta — Jordanésia / SP, Brazil

Literal DAO vs Crypto DAO

It is always important to distinguish the literal DAO (an organization that is autonomous and decentralized in itself) from what we usually understand as a (crypto) DAO: a literal DAO with the property of using blockchain as a technological enabler for some of its attributes. We can call the latter the crypto DAO, but in practice it is easier to just call it a “DAO” to avoid confusion.

What is a Literal DAO

Literal DAOs are the OGs of human collective endeavours, especially the ones which are implicit rather than explicit. Take cultural customs, for instance. They’re usually hard to suppress due to a lack of central nodes, hard to control because they’re emergent, and they’re organized because everyone who’s into them will recognize and understand them. In that sense, most cultural identities can be understood as forms of literal DAOs.

How to identify literal DAOs in real world

Another form of literal DAO is well-functioning participatory governments and enterprises. In fact, the more democratic and inclusive a conventional organization is, the more it will have the literal DAO properties!

Why Crypto DAOs are innovative

So what about crypto DAOs? In my perspective, they’re an interesting social experiment to achieve something that historically has been very hard to do: literal DAOs that achieve a trifecta of attributes that is very hard to reconcile. Those attributes are explicit and transparent rules for participation, scalability beyond local or regional boundaries, and robustness against external interventions. I would even argue that the closest thing that has come to that is organized religion!

Where Crypto DAOs are and the challenge of recognizing breakthroughs when they do happen

As for what DAOs are now compared to their potential? Way more concrete and real than most people would have imagined 5 years ago! The hard thing about any knowledge breakthrough is that there’s a whole difference between the breakthrough being made (or even knowing that it was made) and the breakthrough becoming so adopted that it turns invisible. I do believe that crypto DAOs will reach their potential when they’re so obvious that almost no one talks about them: they just become part of our lives! Just like no one talks about General Relativity when looking at their cell phone’s location (we barely remember that there’s a constellation of satellites involved!).

Why it is challenging to be in a place where breakthroughs are imminent

It’s not easy to participate in an environment where cutting-edge developments are being made routinely. There’s a constant influx of new ideas, proposals and developments, and they tend to come in with their own set of novel terminology and context. You can be sure that it was the same situation when people were racking their brains to communicate how to use General Relativity to describe atomic clock synchronisation on orbiting satellites without the phrase “energy-momentum tensor”! And we still need to consider that the general problem statement of crypto DAOs is far warmer and more socially directed.

Guesses at possible points where the breakthrough has happened

Pinpointing where the breakthrough is (or breakthroughs are), if it happens, is hard a priori. Maybe the launch of 1Hive Gardens will prove to be one of those small steps for the developers but a giant leap for crypto history? Or the trajectory of Filecoin, which, although not a DAO itself, provides critical infrastructure for a lot of to-be DAOs, just as electricity is fundamental for modern societies? The Graph and Gitcoin can also be understood as infrastructure providers, but in those cases they do have a managing DAO, which will provide interesting cases for future comparative studies. Maybe the breakthrough is associated with the emergence of specific groups? At BlockScience, we definitely aim to make sure that developments follow sustainable patterns in terms of decisions and implementations.

On Functional Alignment & Viable Systems using GitcoinDAO and FDD as a case study

Children fishing on Rio Canaã - Ariquemes / RO, Brazil

What is Functional Alignment

To understand “Functional Alignment”, we need to define the function of a viable system. It is related to its purpose: the optimization objective that it wants to maximize.

What is a System

Systems can be inspected as a whole or by their parts. It is also important to be unambiguous about what each system is. There’s an old adage in systems engineering that “a system is what it does”, in the sense that, given suitable inputs and resulting outcomes, the system is what it is accomplishing.

What is GitcoinDAO function and what is Gitcoin Grants

GitcoinDAO as a system exists to create a “web of jobs”: a place where funding for public goods can be found in a decentralized manner. One of the means by which this is achieved is the Gitcoin Grants subsystem, which uses periodic funding rounds in which donor funds are allocated to participating projects, using the users’ own contributions as weights for how much to distribute to each project.

Why Fraud Detection is required for Gitcoin Grants

The Grants subsystem is vulnerable to attacks:

1. Fake grants can be created to funnel matching funds.
2. The contribution weighting uses Quadratic Funding to map contributions to match weights, which puts a premium on the number of contributions rather than the volume of contributions (see the sketch below).
3. Collusion is possible, in which multiple grants are created to funnel the matching funds to the same cluster of people.

Given those challenges, the Grants subsystem requires a Fraud Detection subsystem to make sure that all those attacks are accounted for.
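
To see why the quadratic weighting invites Sybil attacks, here is a minimal sketch of quadratic-funding-style match weights (the textbook CLR form; round-specific details such as match caps and normalization are omitted): splitting one contribution across many fake accounts massively inflates the weight.

```python
import math

def match_weight(contributions: list[float]) -> float:
    """Match weight ~ (sum of square roots of individual contributions)^2."""
    return sum(math.sqrt(c) for c in contributions) ** 2

honest = match_weight([100.0])       # one donor contributes 100
sybil = match_weight([1.0] * 100)    # the same 100, split across 100 fake donors

print(honest)  # 100.0
print(sybil)   # 10000.0 -> same money, 100x the match weight
```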

The Functional Mismatch between a Fraud Detection Group and GitcoinDAO

The fundamental functions of GitcoinDAO and FDD are not the same. GitcoinDAO’s core function is to provide public goods funding, for which doing fraud detection is a requirement: something critical that needs to be unblocked, but not much more than that. FDD’s core function is to provide the best possible fraud detection service for GitcoinDAO, and the more it does, the better.

Where the Trouble begins: Functional Mismatch coupled with Autonomy

It is in that sense that we can discuss “functional alignment”. An autonomous system which is free to optimize its own objective can make decisions that will not make sense for the function of a system with different objectives.

Managing the Functional Mismatch

Given that FDD is a subsystem of GitcoinDAO, three extreme outcomes are possible.

1. GitcoinDAO can simply tolerate the functional misalignment, which tends to be easier in bull markets.
2. GitcoinDAO can decide to reduce FDD’s autonomy by making it return to doing just the “basics”. This makes the FDD subsystem more coupled with and dependent on the DAO as a whole. This solves the functional misalignment through merging.
3. GitcoinDAO and FDD become separate entities, in which case both parties’ dependencies are parametrized and made explicit. This is sort of like making FDD a service provider to GitcoinDAO rather than an explicit part of it. Functional misalignment is resolved in this case through forking.

In practice, things are not as extreme and as abrupt. The real world evolution tends to be a gradual mix with differing weights of the extreme cases.

Where Functional Mismatch is common: Infrastructure Goals vs Consumer Goals

Based on the GitcoinDAO and FDD mismatch, a tale was written: The Tale of the Decentralized Common Goods Factory and its own Sybil Detection Generator, which is a lightweight way of explaining the functional misalignment. Infrastructure users and funders rarely want to optimise it: they just want it to work “good enough” so that they get unblocked. The reason someone wants good roads is not pride in engineering excellence, but rather the absence of cracks and potholes!

Why Functional Mismatch is so common

It is not an easy thing to avoid potholes altogether! It’s not uncommon for all sorts of endeavours to need to build out their own infrastructure because they require levels of performance that are simply not offered anywhere else. That was the case with GitcoinDAO: the only way to meet the quality criteria required for sybil detection was to develop their own solution.

The Driver for Users to build their own Infrastructure: Owning the Quality

What were the quality criteria? Things that are really fundamental in my opinion, but that keep getting overlooked because of the processual complexity behind them. It is simply about putting humans in the loop! About having infrastructure that can be operated, maintained and improved by the same people who also consume it.

Conclusion

As mentioned previously, the content above consists of reformatted answers. If you listen to the actual podcast, the content is going to be different, with different intricacies! So if you liked this post, be sure to listen to the original podcast and follow all the contributors that made this possible.

Special thanks to Kelsie Nabben, Jeff Emmett, Jessica Zartler and Michael Zargham for all their direct and indirect contributions. Also, special acknowledgements to BlockScience, GitcoinDAO, Disruption Joe, and all the FDD squad at large for their thoughtful interactions.
