Dominik's Lessons from the Fellowship Part 2

Written by
Dominik Tilman
Published on
March 14, 2024

Searching for signals of trust in an ocean of noise

Kicking off month two of the Co.Lab Fellowship: if I had to sum up the first month in a word, it’d be “overwhelmed.” But not in a bad way. It’s just that it’s been no small feat to dive deep into my research, sift through the mountain of information, categorize it, and then explain it in plain English.

For context, I’m building TrustLevel here, working on reputation protocols that aim to ensure the reliability of information and contributions. This grant specifically has me zooming in on how trustworthy reviews of proposals can make or break the funding success of decentralized innovation funds.

In this second article, I’ll zoom out to the core reasons information often falls short in the digital and decentralized worlds.

And hey, if you skipped my first article, no stress — catch up here:

Innovating Trust in Web3: Pioneering the Future of Decentralized Fundraising

Two key learnings that bring subtle but decisive change

Since I already had a solution in mind for developing a reputation system to improve the reliability of information, my fellowship adventure started with finding my niche in a field full of opportunities and use cases.

And during the first month, two major things happened:

First, I identified ‘Decentralized Innovation Funds’ as an ideal entry market where I could find everything I needed to implement a reputation system:

  • Active Communities: It is their activities that make it possible to build a reputation system in the first place, and in return they can benefit from such a system.
  • Rich Data: Countless community contributions and interactions generate rich data sets that form the basis of a reputation system and make it possible to test, with real data, whether the system can identify people who make high-quality contributions.
  • Easier Metrics: How difficult reputation is to measure depends on the use case. With clearly defined proposal outcomes, measurement is much easier than, say, validating the reliability of news sources.
  • Strong Need: Voters have a strong need for better decision-making support, especially in very open systems with almost no entry barriers for proposers, reviewers, and voters.
  • Market Entry: I already have active projects and partners in that field, which should make it easier to enter the market.

Second, I realized that the term ‘reputation’ is associated with too many different concepts, and that my pitch, ‘Reputation system for reliable information’, is too abstract and therefore needs additional explanation. As a result, I decided to look at the big picture across the many potential use cases and search for common patterns: what kind of information is unreliable, and how can a reputation system improve it? You can find out how that turned out at the end…

Redrawing the Map — The Context Revisited

So, what exactly do we mean when we talk about ‘information’ in these different digital domains? Broadly defined, it is any content that helps us form a perception of a particular topic and make a decision or take an action based on it. In short: any kind of presentation of a subject, be it an idea, project, research paper, news report, product, or person.

Let’s take a quick look at a few different industries, the kinds of unreliable information they deal with, and the impact it has:

  • Innovation Funding: Especially in a decentralized environment, unreliable evaluations of project proposals can lead to a misallocation of resources and thus stifle the very innovation the funds are supposed to promote. The same applies to the assessment of project outcomes: without reliable outcome assessments, the impact, and therefore the effectiveness, of funding allocations remains unclear.
  • Academic Research: The peer review process is essential to ensure the reliability of published work. However, biased peer reviews and often non-transparent review processes jeopardize the foundation of scientific work, as studies cannot be independently verified, leading to the dissemination of unreliable or erroneous research results.
  • News Media: The reliability of news and reports is under constant threat from misinformation, biased reporting, and sensationalism fueled by the race to attract viewers. This undermines public trust and the fundamental role of journalism in democracy.
  • E-commerce: Fake reviews and dubious sellers undermine consumer trust.
  • DeFi (Decentralized Finance): Unreliable assessments of borrowers and lending protocols can cause lenders to lose significant investments or prevent new borrowers from entering the sector.
  • DAO Governance: In the unique ecosystem of DAOs, where governance is decentralized and decisions are made collectively based on voting, the accuracy and credibility of information is critical. Misleading data or flawed verification processes can skew voting results, distort the evaluation of proposals and inaccurately represent members’ contributions.
  • Collaborative Projects: Especially in remote or decentralized environments, the reliability and authenticity of contributors and their contributions are critical. Misrepresented skills and false evaluations of contributions reduce project quality and damage morale and trust within teams.

Navigating the Fog — Why Is Information Not Reliable?

If we take a closer look at these areas, we can see a few main causes for the unreliability of information. To create a reputation system, it is essential to find solutions to these causes:

  1. Information Overload: The overwhelming amount of online data, combined with the complexity of certain areas, makes it all but impossible for an individual to accurately evaluate or categorize a single piece of information, let alone distinguish unreliable sources from reliable ones.
  2. Lack of Verification Mechanisms: Without robust systems to verify the authenticity and accuracy of information, or the reliability of its sources, digital platforms are vulnerable to the spread of misinformation. Biased or deliberately misleading reviews, whether in academic research, decentralized innovation funds, or e-commerce, undermine reliability, skew perceptions, and lead to the promotion of undeserving work or products.
  3. Anonymity and Lack of Accountability: The anonymity afforded by digital platforms, while promoting freedom and participation, often comes at the cost of accountability. This allows individuals to spread false information or engage in deceptive practices without direct repercussions.
  4. Counter-productive Economic Incentives: The drive for clicks, views, and sales can incentivize the creation and dissemination of sensational or outright false information. This is particularly evident in the news industry and e-commerce, where sensationalism and fake reviews can significantly impact consumer behavior.

Navigating the Waters — Starting with my Beachhead Market

Alright, let’s get to the heart of the matter. We’ve zoomed out to see the bigger picture, untangling the web of information and pinpointing exactly why it often slips through our fingers in the digital world. Now it’s time to zoom back in, into the heart of our journey — the world of decentralized innovation funds.

When we talk about decentralized innovation funds, there are of course many possible ways to run them. We assume a very open system here, where anyone can take on any role, especially in the reviewing and voting process.

We have the proposers, who have big ideas and are looking for funding for their projects. Then there are the reviewers, who evaluate the merit of these ideas with a critical eye. And let’s not forget the voters, the decision-makers.
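To make this setup concrete, here is a minimal sketch of how the three roles and the artifacts they produce could be modeled. Everything in it, from the class names to the 0-to-1 review score, is an illustrative assumption of mine, not part of any particular fund’s implementation:

```python
from dataclasses import dataclass

# Minimal, illustrative model of a very open innovation fund:
# anyone can propose, review, or vote.

@dataclass
class Proposal:
    proposal_id: str
    proposer: str   # anyone can submit an idea...
    summary: str

@dataclass
class Review:
    proposal_id: str
    reviewer: str   # ...anyone can review it...
    score: float    # assumed scale: 0.0 (poor) to 1.0 (excellent)
    rationale: str

@dataclass
class Vote:
    proposal_id: str
    voter: str      # ...and anyone can vote on it.
    approve: bool
```

Note that nothing in this open model ties a review’s influence to the reviewer’s track record; that missing link is exactly where the following challenges come from.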

And this is where the challenges lie with unreliable information:

The voter’s dilemma:

Imagine this: a flood of projects so vast and technical it’s dizzying. For most, sifting through the pile and separating the wheat from the chaff is a Herculean task. And that’s the crux of the matter: how do you pick the good ones when you’re drowning in options?

The reviewer’s problem:

Then there’s the issue of trust. In an open system, anyone can throw their hat in the ring as a reviewer, regardless of their expertise (or lack thereof). There is also often an incentive problem: an excellent analysis is rewarded just as much as a half-hearted attempt. The result? It takes too much effort to stand out from the fog of reviews.

The proposer’s challenge:

On the other hand, proposers face a difficult task. When reviews are unreliable and voters cannot rely on them, decisions come down to a popularity contest. It’s often the big names that get selected, overshadowing quality projects. The problem with this? Fame doesn’t always equate to merit. The alternative is to spend months on politics, selling your idea to various people, with the risk that it still fails. For outsiders with top-notch proposals, this can be discouraging, so many brilliant ideas are simply lost, and with them the chance for true innovation within the ecosystem.

Mapping the problem:

The interaction between the different players is shown more clearly in the following map. Without reliable decision-making support, it is difficult for voters to make a good choice. In many ecosystems, instead of a completely open assessment process, a committee or a team of experts either makes the decision itself or gives some kind of recommendation to the voters. In certain cases this can lead to better results in the short term, but it doesn’t scale: eventually you need to evaluate the committees instead of the reviewers, and you are back where you started.

→ Quick video tour through the map: Mechanics of Decentralized Innovation Funds
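To make ‘reliable decision-making support’ a bit more tangible, here is a minimal sketch of the kind of mechanism I keep circling around: review scores weighted by reviewer reputation, and reputations updated from the clearly defined proposal outcomes mentioned earlier. The weighting scheme, the default reputation for unknown reviewers, and the simple accuracy-based update are illustrative assumptions, not TrustLevel’s actual protocol (the `Review` class comes from the sketch above):

```python
def weighted_proposal_score(reviews, reputation, default_rep=0.1):
    """Aggregate review scores for one proposal, weighting each review
    by its author's reputation (unknown reviewers get a low default
    weight instead of zero influence)."""
    total = sum(reputation.get(r.reviewer, default_rep) for r in reviews)
    if total == 0:
        return None  # no usable signal
    return sum(reputation.get(r.reviewer, default_rep) * r.score
               for r in reviews) / total

def update_reputations(reviews, outcome, reputation, rate=0.1, default_rep=0.1):
    """Once a funded project's outcome is known (0.0 = failure,
    1.0 = success), nudge each reviewer's reputation toward how
    accurately their score predicted that outcome."""
    for r in reviews:
        accuracy = 1.0 - abs(r.score - outcome)
        old = reputation.get(r.reviewer, default_rep)
        reputation[r.reviewer] = (1 - rate) * old + rate * accuracy

# Illustrative run: an established reviewer and a newcomer disagree.
reputation = {"alice": 0.8, "bob": 0.3}
reviews = [Review("p1", "alice", 0.8, "solid team, clear milestones"),
           Review("p1", "bob", 0.1, "don't like it")]

print(weighted_proposal_score(reviews, reputation))   # ~0.61, dominated by alice
update_reputations(reviews, outcome=0.9, reputation=reputation)
print(reputation)  # alice rises to 0.81, bob drops to 0.29
```

Even this toy version hints at why such a system could scale where committees don’t: the evaluation of the evaluators is automatic and happens per reviewer, so it grows with the community instead of with a hand-picked panel.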

What is my new pitch?

One important realization from the conversations of the last few weeks: I am talking more and more about a “reputation-based evaluation (or reviewing) system (or protocol) for verifying (or rating) the reliability of information”. As a result, my conversation partners have understood much more quickly what I want to achieve with TrustLevel.

But there are so many related terms: assess, review, validate, verify, evaluate, rate, and so on; or system, protocol, software, platform. Let’s see how that evolves…

What’s next?

The countdown begins: the last month of the fellowship is here. What lies ahead? A culmination of insights, breakthroughs, and a few surprises along the way. The grand finale? The comprehensive paper that’s been brewing.

Until then, keep your eyes peeled — and follow me and RnDAO on X for the latest information and updates as we bring this journey to an exciting conclusion.