Priority Access · Colosseum Frontier

SWARM

Canteen's exclusive priority access for Colosseum's Frontier Hackathon. $25,000 in additional prizes, dedicated mentorship, and direct access to Canteen's network.

Selective cohort · Apply to join.

Prize pool
$25K
Dates
Apr 6 – May 11
Format
Online · 4 weeks
Access
Apply to join
Canteen
Research & Technology · New York City
Canteen is a research and technology firm at the intersection of AI, crypto, and payments. We publish the research, run the experiments, and back the builders doing the same.
Next
Steps
01
Join our Discord
02
Register for Frontier (be sure to use this link with the referral to be eligible for SWARM prizes)
03
Get the SWARM CLI to submit updates - and (maybe) win some hidden prizes along the way!
uv tool install git+https://github.com/the-canteen-dev/SWARM-cli.git
04
Submit your project on Frontier
You will need: a product demo (live working product), a founder pitch video (who you are, what you built and why), and a public link to your GitHub repo. You will also be asked traction questions — e.g. how many users, labs, or companies you have onboarded.
05
Win! And (maybe) build an amazing company!
The best SWARM projects compete for prizes, Canteen support, and Colosseum's accelerator pipeline.
Prizes
& Access
02
$250k
in follow-on support from Colosseum's venture fund

These investments are offered through Colosseum's Frontier and are in addition to SWARM prizes. The best projects compete for both.

01
$25,000 in prizes
Awarded to standout SWARM projects, stacked on top of everything Frontier already offers.
02
Canteen access, start to finish
Direct lines to Canteen researchers and founders for the full four weeks. Real conversations, not scheduled office hours.
03
Priority programming
Dedicated sessions, workshops, and briefings for SWARM builders throughout the four weeks.
04
The Canteen network
Introductions to operators, investors, and builders already working on the agent economy. No cold outreach required.
Priority
Pass
03
The hackathon
Colosseum Frontier

Colosseum's flagship hackathon. Thousands of builders competing for serious prizes and an accelerator pipeline that has launched some of the most important companies in the Solana ecosystem.

All SWARM builders compete in Frontier
Your priority pass
SWARM

Canteen's program running alongside Frontier. Selected builders get $25,000 in additional prizes, dedicated mentorship, and Canteen's full research and founder network.

Selective cohort · Apply below
Ideas
04

RFBs — "Requests for Builders" — are our version of YC's Requests for Startups. Six open problems in the agent economy that we think are worth solving. If one excites you, treat it as extra validation to dive in — but you don't need to work on any of these to participate in SWARM. The best submissions are always the ones you care most about.

What builders create
  • Agent marketplaces with reputation and ranking algorithms
  • Agent recommendation systems, collaborative filtering for AI
  • Quality scoring mechanisms where other agents rate your agent
  • Agent advertising platforms: how do agents market to agents?
Questions worth exploring
  • Do agents form cliques and networks like humans?
  • Can agent reputation be gamed? How do you prevent Sybil attacks?
  • Do specialized agent networks emerge — trading agents only talk to trading agents?
Example

"AgentGraph" — Yelp for AI agents. Agents build profiles, other agents review them, and on-chain payment history serves as the reputation signal.
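One way to make payment history a reputation signal is to weight each review by how much the reviewer has actually paid the reviewed agent. A minimal sketch, assuming a simple review record with a `paid_volume` field (all names here are illustrative, not a real API):

```python
# Toy reputation score for an "AgentGraph"-style registry: reviews are
# weighted by the reviewer's on-chain payment volume with the reviewed
# agent, so cheap Sybil reviews carry almost no weight.
def reputation(reviews: list[dict]) -> float:
    """Each review: {"rating": 1-5, "paid_volume": lamports settled on-chain}."""
    total_weight = sum(r["paid_volume"] for r in reviews)
    if total_weight == 0:
        return 0.0
    return sum(r["rating"] * r["paid_volume"] for r in reviews) / total_weight

reviews = [
    {"rating": 5, "paid_volume": 1_000_000},  # real, paying customer
    {"rating": 1, "paid_volume": 100},        # near-zero-volume Sybil review
]
print(round(reputation(reviews), 3))  # stays close to 5: the Sybil barely moves it
```

Payment-weighted averaging is only one Sybil defense; stake-based or graph-based schemes are natural extensions.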

Traction metrics
  • Number of agents and agent startups using your product for discovery and reputation
  • Agent-to-agent transactions
  • Network density and structure
  • Reputation score distribution
  • Review authenticity rate
What builders create
  • Voice agents that negotiate deals with other voice agents
  • Live agent auctions where agents bid against each other in real-time
  • Streaming payment systems — pay-per-second for agent services
  • Real-time collaboration tools where agents work together and split payment dynamically
Real-time use cases
  • Shopping agent calls a merchant agent, negotiates bulk discount in real-time
  • Live compute marketplace: AI needs GPU now, agents bid using instant finality
  • Streaming analysis: agent provides live market commentary, paid per second
  • Multi-agent brainstorming: several agents collaborate, payment splits dynamically
Example

"AgentCall" — Voice agents that negotiate with each other. Shopping agent calls merchant agent, negotiates bulk discount in real-time using Solana's instant finality.

Traction metrics
  • Number of agents and agent startups using your product for coordination
  • Number of real-time agent interactions
What builders create
  • Agent advertising platforms where agents pay to promote to other agents
  • Priority queues where paying more means faster responses from busy agents
  • Agent influencer networks where popular agents endorse services
  • Attention bidding systems where agents auction their attention
Questions worth exploring
  • Do agents develop brands like humans?
  • Will there be agent influencers that other agents follow?
  • Can you A/B test messaging on agents?
  • What is the ROI of agent-to-agent marketing?
Example

"AgentAds" — Google Ads but for AI agents. Providers bid for placement when agents search for services. Solana's low fees enable $0.001 attention bids.
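A placement auction in this spirit is easy to prototype off-chain first. A hedged sketch of a second-price auction (the winner pays the runner-up's bid, which keeps truthful bidding incentive-compatible); provider names and bid sizes are made up:

```python
# Second-price "attention auction": providers bid for placement when an
# agent searches for a service; micro-bids like $0.001 only clear because
# settlement fees are lower still.
def run_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Return (winning provider, price paid = second-highest bid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

bids = {"gpu-broker": 0.004, "data-vendor": 0.001, "api-reseller": 0.002}
print(run_auction(bids))  # ('gpu-broker', 0.002)
```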

Traction metrics
  • How many advertisers can you onboard?
  • How many ads can you create, and how effective are they — how many agents convert?
  • Cost per agent acquisition
What builders create
  • Agent simulation environments with real money on Solana
  • Tools to observe and analyze agent network dynamics
  • Intervention mechanisms: inject agents, change incentives, observe outcomes
  • Agent economy analytics: who is trading with whom, and what patterns emerge
Research questions
  • Do agent economies develop specialization and division of labor?
  • Can you predict agent behavior from network structure?
  • Do agents form cartels or monopolies?
  • What economic structures emerge naturally?
  • How do agent economies differ from human economies?
Example

"AgentWorld" — Simulation environment where 100 AI agents with different goals interact on Solana. Researchers observe what economic structures emerge using real SOL/USDC with safety limits.

Traction metrics
  • Number of frontier labs interested in using your simulation environment to explore emergent agent dynamics
  • If you built evals with emergent agents, how many LOIs can you get from labs?
What builders create
  • Multi-agent coordination protocols inspired by workflow orchestration
  • Real-time task decomposition where agents bid and self-assemble into teams
  • Dynamic payment splitting based on contribution, effort, and quality
  • Agent swarm controllers where one agent orchestrates many specialized agents
Use cases
  • Research: coordinator agent recruits researcher, writer, and editor agents
  • Complex analysis: data, modeling, and visualization agents collaborate
  • Content production: idea agent, writer agent, editor agent, and designer agent in sequence
  • Live event coverage: multiple agents streaming different angles, paid by contribution
Example

"SwarmOS" — Operating system for agent coordination. Submit a complex task, agents self-assemble into a team using a Solana marketplace, complete the work, and auto-split payment based on contribution tracked on-chain.
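The "auto-split payment based on contribution" step can be sketched in a few lines. This is a toy splitter, not SwarmOS's actual logic; agent names and the dust-handling rule are assumptions:

```python
# Divide a bounty in proportion to each agent's tracked contribution,
# paying out integer lamports and assigning the rounding remainder to
# the largest contributor.
def split_payment(bounty: int, contributions: dict[str, float]) -> dict[str, int]:
    total = sum(contributions.values())
    shares = {a: int(bounty * c / total) for a, c in contributions.items()}
    top = max(contributions, key=contributions.get)
    shares[top] += bounty - sum(shares.values())  # rounding dust goes to top
    return shares

team = {"researcher": 5.0, "writer": 3.0, "editor": 2.0}
print(split_payment(1_000_003, team))  # shares always sum to the full bounty
```

On-chain, the same arithmetic would run inside the escrow program so no coordinator can skim the remainder.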

Traction metrics
  • If you built evals with agent orchestration, how many LOIs can you get from labs?
  • Number of agent runtimes using your orchestration workflow
  • Number of users for your research, content production, or complex analysis tool

We defaulted to chat because it was easy, not because it's right. If agents are becoming economic and cognitive actors embedded across our daily lives — monitoring our health, managing our finances, coordinating our logistics — the text box is increasingly the wrong surface.

What builders create
  • Voice-native agent interfaces that negotiate, inform, and act without a screen (wearables, smart home, car)
  • IoT-embedded agents that sense physical state and act proactively (your agent notices your fridge is empty and coordinates delivery)
  • Computer vision pipelines where an agent interprets a photo or short video as the primary input modality (receipt scan → expense filed; whiteboard photo → tasks created)
  • Ambient agents that surface information at the right moment rather than waiting to be queried (push vs. pull)
  • Multi-modal interaction primitives: when should an agent speak vs. display vs. vibrate vs. do nothing?
Questions worth exploring
  • When is not interacting the right agent behavior? How do agents learn user attention budgets?
  • Do different demographics develop fundamentally different preferred interaction modalities?
  • What's the latency/richness tradeoff across modalities on Solana's settlement layer?
  • Can agents develop a "social sense" — knowing not to interrupt during a meeting vs. flagging something urgent?
Example

"AmbientAgent" — an agent layer across phone, watch, and home speaker that routes information and requests to the right surface at the right moment. Agent receives a calendar conflict: it whispers in your earbuds during your commute, not via a chat window at your desk. Uses Solana to settle coordination between the notification agent, the calendar agent, and any third-party service agents involved in resolving the conflict.

Traction metrics
  • Daily active users across your deployed surfaces (phone, watch, home speaker, car)
  • Early conversations with hardware or OS partners (wearable OEMs, smart home platforms, automotive) — even a first meeting is a strong signal here
  • Transaction volume settled through agent-initiated actions
  • User retention and engagement depth across modalities
Research
05

Research that points directly to buildable products. Read more about our take on mass agent networks here.

Li's work gives us a concrete design question: when should a swarm deliberate vs. aggregate? The hack here is building that decision as an on-chain primitive — the mechanism selector (debate or vote) lives in a smart contract, debate transcripts are hashed and stored as receipts, and payment releases only after the chosen mechanism reaches quorum. This turns a research finding into auditable, trust-minimized coordination infrastructure. The interesting product wedge is selling it to agent runtime providers (e.g. LangGraph, CrewAI) as a drop-in arbitration layer.

Extreme KV cache compression may enable agent memory to be stored on-chain for the first time. This would give agents verifiable, persistent memory that survives across sessions and providers — no single inference provider controls the memory state. The hack is a PoC that snapshots a compressed KV cache (using TurboQuant's technique) at the end of each agent session, pins it to Arweave/Filecoin with a hash on Solana, and rehydrates it at session start. If retrieval quality holds up at extreme compression ratios, this is the missing piece for truly persistent agent identity.
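The snapshot/rehydrate loop itself is simple to sketch. In this toy version, `zlib` stands in for TurboQuant-style compression and plain dicts stand in for Arweave and Solana — all stand-ins, not real integrations:

```python
# Snapshot a (compressed) cache off-chain, anchor only its hash on-chain,
# then rehydrate and verify integrity at the next session start.
import hashlib, pickle, zlib

arweave = {}   # content-addressed blob store stand-in
solana = {}    # on-chain hash anchor stand-in

def snapshot(agent_id: str, kv_cache: dict) -> str:
    blob = zlib.compress(pickle.dumps(kv_cache))
    digest = hashlib.sha256(blob).hexdigest()
    arweave[digest] = blob           # pin the compressed cache off-chain
    solana[agent_id] = digest        # anchor only the 32-byte hash on-chain
    return digest

def rehydrate(agent_id: str) -> dict:
    digest = solana[agent_id]
    blob = arweave[digest]
    assert hashlib.sha256(blob).hexdigest() == digest  # verify integrity
    return pickle.loads(zlib.decompress(blob))

cache = {"layer0": [0.1, 0.2], "layer1": [0.3]}
snapshot("agent-7", cache)
assert rehydrate("agent-7") == cache
```

The real PoC's open question is exactly the one above: whether retrieval quality survives the lossy compression step this sketch omits.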

Craftium provides a Minecraft-based framework for building diverse multi-agent environments in a composable way. Can we rewrite the environment core in a Rust-based game engine (Bevy is the obvious choice given its ECS architecture and WASM compilation target)? This creates a permissionless arena for agent training and evaluation.

OASIS simulates one million agents reacting to a seed event. MiroFish wraps this into a structured report. The gap: there's no way to bet on whether the simulation is right. The hack is to wire MiroFish's output — e.g. "68% of simulated agents react negatively to this policy in 72 hours" — into a prediction market where the resolution criterion is measured real-world sentiment (X API, Farcaster, Reddit). The simulation run is committed on-chain before the real-world event unfolds, making it tamper-evident. The interesting research question OASIS itself raises: at what agent-count threshold does the simulation's predicted sentiment distribution start matching real observed distributions? You'd have ground truth data to actually answer this.
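The commit-then-resolve flow above reduces to a small protocol. A hedged sketch, with a list standing in for on-chain commitments and a made-up prediction record (the resolution criterion and tolerance are assumptions):

```python
# Commit the simulation's prediction hash before the real-world event,
# then resolve the market against observed sentiment afterward.
import hashlib, json

chain = []  # ordered log standing in for on-chain commitments

def commit_prediction(pred: dict) -> str:
    digest = hashlib.sha256(json.dumps(pred, sort_keys=True).encode()).hexdigest()
    chain.append(digest)  # committed before the event → tamper-evident
    return digest

def resolve(pred: dict, observed_negative: float, tolerance: float = 0.10) -> bool:
    # Reveal: verify the commitment exists, then score prediction vs. measurement.
    digest = hashlib.sha256(json.dumps(pred, sort_keys=True).encode()).hexdigest()
    assert digest in chain, "prediction was not committed up front"
    return abs(pred["negative_share"] - observed_negative) <= tolerance

prediction = {"event": "policy-X", "negative_share": 0.68, "window_h": 72}
commit_prediction(prediction)
print(resolve(prediction, observed_negative=0.63))  # True (within tolerance)
```

A real market would settle stakes on the boolean; the commitment ordering is what makes the simulation's claim checkable after the fact.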

MiroFish runs "dual-platform parallel" simulations — agents act across two synthetic social platforms simultaneously. This is a natural structure for zkVM-verifiable replays: run the simulation once, generate a proof that the reported emergent behaviors follow deterministically from the initial graph state and the model weights used. Builders could then ship simulation results to clients who verify the proof without re-running the full simulation. The practical bottleneck is that LLM inference isn't cheaply provable today — but MiroFish-Offline's Ollama backend with a quantized local model (qwen2.5 at 4-bit) is small enough that zkML proof generation becomes at least tractable for short simulation windows.

How we
judge
06

These weightings are recommendations. Our judges have the final say, and the best projects tend to break all the rules.

40%
Innovation
Novel approach, emergent behaviors, genuine research insights. Something that did not exist before you built it.
30%
Agentic Sophistication
How much does the AI actually decide versus just automate?
Full autonomy — AI manages everything
Meaningful agency, partial autonomy
AI features but not truly agentic
30%
Traction
Real users and real transactions during the event. Four weeks is your window to prove it works.
The
Stack
07