Arbitrum Hackathon Continuation Program Report

Outcomes, Learnings, and Recommendations
The Hackathon Continuation Program ran for six months and supported hackathon teams through structured customer validation (Phase 1) and MVP development (Phase 2). The program began with four teams and advanced two ventures into Phase 2.
This post consolidates the key outcomes, learnings, and operator recommendations from the pilot.
Venture is a long-term game, often taking 7+ years before ROI can be measured. Portfolios diversify across 50+ investments, and even the best allocators see high failure rates, as venture is a game where most good bets fail but a small percentage succeeds big. Our pilot, with only four ventures at the start and only two Phase 2 slots, is too small a sample to properly assess the ROI a full program could yield. However, it is highly useful for identifying challenges and collecting leading indicators.
Program Overview
- Duration: 6 months
- Phase 1 (3 months): Customer discovery and validation
- Phase 2 (3 months): MVP development and early go-to-market
- Starting ventures: 4
- Advanced to Phase 2: 2 (FairAI, Contribo)
- Program structure: Investment-based, including SAFE + token warrants
The program tested a venture-origination methodology for Arbitrum, with the goal of generating long-term network value through new applications, protocol usage, and aligned venture ownership.
Program Value
Multiple Web3 ecosystems suffer from deal flow shortages for accelerators and VC investment (e.g. Safe and Polygon closing down accelerators, Web3 VCs complaining of a lack of investable projects, our own experience running the Arbitrum Ecosystem Pitch Day). With AI hype drawing talent away from Web3, and strong competition between blockchain ecosystems, there is significant pressure to find methods with better ROI than larger grants for originating or attracting applications and protocols.
This program tested a methodology to originate ventures on Arbitrum and capture ROI through apps/protocols that increase usage of the Arbitrum network (network fees), plus equity and tokens in the corresponding ventures.
While difficult, developing a repeatable venture-origination engine represents meaningful long-term leverage for Arbitrum.
Pilot’s Outcomes and Recommendations
The investment in this program resulted in the creation of one revenue-generating venture (FairAI) and one venture with users but no revenue yet (Contribo) on Arbitrum. The Arbitrum Foundation holds a SAFE and Token Warrant on these ventures.
- FairAI now has contracts worth $10k+ MRR and is advancing its second enterprise deal, showing satisfactory performance.
- Contribo has had to iterate on its original product hypothesis and lost a cofounder, but is now making progress with its first pilot.
In addition to the ventures, significant data on program performance was generated and concrete opportunities to address program limitations were identified.
Given the high value of developing a robust venture-building methodology in Arbitrum and the moderate but non-negligible signs of program success, our recommendation is NOT to scale the program, but to conduct a second pilot that implements these learnings at a small scale and then re-assess.
Scaling the program to a statistically significant size could follow, if the second pilot succeeds in addressing the talent bottleneck of the first and program outcomes improve as a result.
Phase Results Summary
Phase 1 (Customer Validation)
- Started with 4 teams
- 1 venture terminated early due to team capability issues (Nightly)
- 1 venture completed validation but was not advanced due to limited Arbitrum alignment (WeLivedIt)
- 2 ventures (FairAI, Contribo) advanced based on positive customer signals
Phase 2 (MVP Development)
- FairAI: Secured first enterprise client with 2-year contract worth $120k+ annually, achieving ~$10k MRR within 6 months. Validated MVP concept and architecture, now in implementation phase, targeting the manufacturing sector’s knowledge retention and optimisation.
- Contribo: Launched MVP pilot (contribo.xyz/pilot) with two prototypes to validate the core hiring hypothesis and acquire initial users. Currently conducting outreach for additional design partners. Recruiting a business-side cofounder to accelerate go-to-market, after the original cofounder departed.
Key Improvement Opportunities
1. Talent Attraction Constraints
The primary bottleneck was talent quality and availability. Contributing factors included:
- Very limited marketing budget for the hackathon
- Hackathon formats attracting low-commitment participants
- Inability to advertise the continuation program during the hackathon: the program had not yet been approved when the hackathon ran, so its appeal was limited to the hackathon prize rather than the more significant continuation-program investment
- Lower investment amounts compared to comparable venture programs
- A perception among the builders we spoke with that Arbitrum has less "community" (i.e., entrepreneurial talent attraction) than e.g. Base and Solana
Competition for program spots was lower than intended. While some applicants were strong, the overall pool was limited and we couldn't fill vacated spots fast enough; after Nightly showed commitment issues, capital was returned to the DAO rather than being deployed to a replacement team.
Given these significant limitations, the program outcomes are encouraging and there is a clear path for improvement. We recommend an iteration that includes funding for a variety of entrepreneurial talent attraction experiments, connected to a re-run of the program (with adjustments).
2. Cash Management and Coordination
It took a significant amount of time to align on the fund management system and legal contracts with the Arbitrum Foundation and other parties. Market volatility left the program underfunded. As a result, the program team was severely distracted from its role supporting the ventures, instead engaging with multiple internal parties to find a solution and ultimately having to campaign for a DAO proposal to top up the program funds.
The upside: legal and operational templates now exist, reducing friction for future programs.
3. Hackathon Mindset vs Startup Mindset
Hackathon teams were focused on building, but lacked defined customer problems or validated markets. This was expected to some degree, and Phase 1 was designed to rectify it. However, the challenges ran deeper than expected, resulting in slower progress and additional demands on venture support.
The hackathon format attracts talent but primes participants with the wrong mindset, ultimately proving counterproductive. We recommend reducing the role of hackathons in future venture programs.
4. Problem Selection vs Solution Building
We were forced to discard applicants who had the right profile but hadn't yet identified a suitable idea. Several of the best founders were still in the research stage (rather than rushing to build the wrong idea). Conversely, many candidates were too attached to ideas with low potential. Grant programs face a similar problem, which has led to the practice of publishing idea lists. However, most of these lists lack market assessments and a deeper understanding of the viability of the opportunities. As such, they are of limited value and can even be counterproductive.
RnDAO has tested two approaches to address this challenge. Through the 2024 Research Fellowship program, we invalidated the concept of mentoring founders to do foundational research. And now with the Hackathon Continuation Program, we tested mentoring founders on validation skills after they have picked a problem. The results are improving, but we still see a gap in supporting founders to select the right problem. We recommend moving further towards the venture studio model, where the Studio research team can do foundational research and shortlist opportunities in collaboration with founders.
What Worked Well
1. Expert, Hands-On Support
Founders consistently highlighted:
- Customer development mentorship
- A learning-in-public approach (creating accountability and transparency)
- Flexible, customised support over rigid workshop schedules
Hands-on support delivered significantly more value than peer-only learning, though peer interaction remains useful for talent attraction.
2. Stage gates and hands-on monitoring
Data from the venture support team allowed a deeper understanding of venture potential than is typically available in an accelerator. Combined with a legal contract based on staged deployment of capital, these insights allowed us to cut funding early when needed and thus optimise capital allocation. One limitation: a startup can occasionally benefit from more time at its current stage, rather than either a funding cut or graduation to the next stage. Such flexibility would likely have improved outcomes for Contribo. We recommend keeping the stage gates and hands-on monitoring, while adding the option to extend Phase 1 by an additional three months (continuing the basic stipend to support further validation work before more significant funding) for promising teams that need more validation time.
Major Recommendations
1. Move Away from Hackathon-First Approach
- Use hackathons only for community building
- Do not require hackathon participation for eligibility
- Introduce research-driven Opportunity Briefs as founder RFPs
- Emphasise validation before product building
2. Restructure Talent Acquisition
- Invest meaningfully in entrepreneur-focused marketing and community building, e.g. speed networking events, content programs for aspiring entrepreneurs, hacker houses, cofounder matchmaking programs, etc.
- Increase investment size to attract higher-calibre talent ($100-150K initial, $250-500K potential follow-on)
- Screen for founder mindset, not just technical skills
3. Program Design
- Expand candidate pool beyond hackathon participants
- Maintain expert hands-on support
- Maintain peer-to-peer support, but only as a secondary system
- Introduce AI-enabled outreach training earlier
- Improve program flexibility by enabling phase extensions while maintaining stage gates in the contract structure
- Perform problem research pre-program
Conclusion
The program revealed that hackathons are poorly aligned with venture development needs. While the hands-on support model proved valuable, the talent pipeline requires redesign, and underinvestment in talent attraction must be addressed. This is a common challenge across Web3; addressing it would position Arbitrum in a leading role.
Despite important limitations, the program showed moderate success, making us hopeful of Arbitrum becoming highly capable of originating ventures.
Our recommendation is NOT to scale the program, but to conduct a second pilot that implements these learnings at a small scale and re-assess before scaling.
Find the full report with all details on the Arbitrum Forum.
