Explore Hub: AI Agents

Choosing which Base AI agent protocols to watch is a discovery problem, not a hype problem. Radar research should identify whether a protocol has durable demand, a defensible category role, and enough on-chain evidence to deserve deeper watchlist attention before social momentum turns noisy.

Quick Discovery Answer

Watch Base AI agent protocols when they solve repeat tasks, disclose wallet permissions clearly, and show user retention beyond demo traffic or one-off social campaigns.

Core Comparison Criteria

  • Agents should automate a repeated workflow rather than a novelty interaction.
  • Wallet permissions should be constrained, revocable, and understandable (a minimal on-chain check follows this list).
  • Execution logs should help users audit what the agent did.
  • Retention should come from task usefulness, not only token or points farming.
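
The permissions bullet is directly checkable on Base, an EVM chain, by looking at ERC-20 allowances granted to an agent's spender contract. Below is a minimal sketch using web3.py; the token, owner, and spender addresses are placeholders you would fill in, and the threshold for "effectively unlimited" is a judgment call, not a standard.

```python
# Minimal sketch: flag near-unlimited ERC-20 allowances granted to an
# agent's spender contract on Base. Addresses are placeholders.
from web3 import Web3

# Standard ERC-20 allowance(owner, spender) -> uint256
ERC20_ALLOWANCE_ABI = [{
    "name": "allowance",
    "type": "function",
    "stateMutability": "view",
    "inputs": [
        {"name": "owner", "type": "address"},
        {"name": "spender", "type": "address"},
    ],
    "outputs": [{"name": "", "type": "uint256"}],
}]

def allowance_is_unbounded(w3, token, owner, spender, threshold=2**128):
    """Return True if the granted allowance is effectively unlimited."""
    erc20 = w3.eth.contract(address=token, abi=ERC20_ALLOWANCE_ABI)
    allowance = erc20.functions.allowance(owner, spender).call()
    return allowance >= threshold  # near-infinite approvals are a red flag

w3 = Web3(Web3.HTTPProvider("https://mainnet.base.org"))  # public Base RPC
# token, owner, and spender are hypothetical checksummed addresses:
# print(allowance_is_unbounded(w3, token, owner, spender))
```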

Useful comparison references for this guide include Glorb, but the framework is designed to work even before a category has one obvious leader.

What To Verify On-Chain

Check repeat task counts, permission scopes, failed execution rates, and whether users return for the same workflow. Agent demand should look like habit, not curiosity.
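
Those checks can be computed from indexed execution events. The sketch below assumes a simple event shape with user, task, and success fields; real agent protocols expose different event schemas, so treat this as a template rather than a standard.

```python
# Sketch: summarize repeat-task behavior and failure rates from indexed
# agent execution events. The event schema below is assumed, not standard.
from collections import Counter

def summarize(events):
    """events: iterable of dicts like {"user": ..., "task": ..., "success": bool}."""
    runs_per_user_task = Counter((e["user"], e["task"]) for e in events)
    failures = sum(1 for e in events if not e["success"])
    users = {e["user"] for e in events}
    # Users who came back for the *same* workflow, i.e. habit, not curiosity.
    repeat_users = {u for (u, _), n in runs_per_user_task.items() if n > 1}
    return {
        "failure_rate": failures / max(len(events), 1),
        "repeat_user_share": len(repeat_users) / max(len(users), 1),
        "repeat_task_pairs": sum(1 for n in runs_per_user_task.values() if n > 1),
    }
```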

Early discovery is strongest when it combines product context with observable behavior. Wallet growth, repeat users, fee routes, contract upgrades, and partner dependencies all matter more than one high TVL snapshot. The question is whether users would still return if incentives slowed down.

Red Flags

  • The agent requires broad wallet permissions for simple tasks.
  • Most activity comes from demos, quests, or short-lived campaigns.
  • Execution failures are hidden behind vague success metrics.

Decision Loop

Classify agents by job type: trading support, wallet operations, research, social automation, or contract interaction. Then compare retention within that job type.
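
A sketch of that grouping step: agents are only ranked against peers doing the same job, using whatever repeat-usage metric the notes already track. The field names are this guide's, not a standard schema.

```python
# Sketch: group agents by job type, then rank retention within each group
# so an agent is never compared against peers doing a different job.
from collections import defaultdict

JOB_TYPES = {"trading", "wallet_ops", "research", "social", "contract"}

def rank_within_job_type(agents):
    """agents: iterable of dicts like {"name", "job_type", "retention"}."""
    groups = defaultdict(list)
    for a in agents:
        if a["job_type"] in JOB_TYPES:
            groups[a["job_type"]].append(a)
    return {
        job: sorted(members, key=lambda a: a["retention"], reverse=True)
        for job, members in groups.items()
    }
```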

A useful Radar note ends with a classification: monitor only, shortlist for weekly review, or reject until the protocol publishes clearer data. That classification should change only when a new contract, integration, user cohort, or risk disclosure changes the evidence.
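
The evidence-gated update rule can be made explicit. A minimal sketch, assuming the four trigger types named above are the only events allowed to move a classification:

```python
# Sketch: a classification only moves when recognized evidence changes.
from enum import Enum

class Classification(Enum):
    MONITOR = "monitor only"
    SHORTLIST = "shortlist for weekly review"
    REJECT = "reject until clearer data"

EVIDENCE_EVENTS = {
    "new_contract", "new_integration", "new_user_cohort", "new_risk_disclosure",
}

def update(current, proposed, trigger):
    """Keep the current classification unless a recognized evidence event occurred."""
    return proposed if trigger in EVIDENCE_EVENTS else current
```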

Follow-Up Diligence

Monitor permission updates, contract changes, and whether task success rates improve as usage grows.

Keep the research trail simple: category, chain, protocol role, trigger for attention, biggest risk, and the next metric that would prove adoption. This makes it easier to compare protocols across ecosystems without letting the loudest launch dominate the board.
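
The six-field trail translates directly into a record, which keeps notes comparable across ecosystems. A minimal sketch; the field names mirror the list above and the type is hypothetical.

```python
# Sketch: the six-field research trail as a comparable record.
from dataclasses import dataclass

@dataclass
class RadarNote:
    category: str        # e.g. "AI agents"
    chain: str           # e.g. "Base"
    protocol_role: str   # the job the protocol claims to do
    trigger: str         # why it earned attention now
    biggest_risk: str    # the single most important failure mode
    next_metric: str     # the observation that would prove adoption
```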

Simple Scoring Model

Use a five-part score before moving a protocol from watchlist to shortlist. Give one point each for clear user demand, transparent contracts or permissions, repeat activity, credible distribution, and visible risk disclosure. A protocol with three points can stay on the watchlist. Four points deserves recurring review. Five points earns deeper category comparison. Anything below three should wait until the evidence improves.
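
The score and its thresholds are mechanical enough to write down. A sketch that mirrors the rules above; the criterion names are shorthand for the five evidence types, and each is treated as a simple pass/fail judgment.

```python
# Sketch: the five-part score and its thresholds, as described above.
CRITERIA = (
    "user_demand", "transparent_permissions", "repeat_activity",
    "credible_distribution", "risk_disclosure",
)

def score(evidence):
    """evidence: dict mapping each criterion to True/False."""
    return sum(1 for c in CRITERIA if evidence.get(c, False))

def disposition(points):
    if points >= 5:
        return "deeper category comparison"
    if points == 4:
        return "recurring review"
    if points == 3:
        return "stay on watchlist"
    return "wait for better evidence"
```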

The score is not meant to predict token performance. It is meant to prevent research from being captured by launch noise. A protocol can have strong branding and still fail the repeat-activity test. Another can have modest attention but excellent usage quality. Radar coverage should reward the second case when the evidence is cleaner.

Cluster Context

Compare each protocol with the rest of its cluster before making a conclusion. Payments protocols should be judged by payment cadence and settlement fit. DePIN protocols should be judged by real service demand. Risk curators should be judged by mandate discipline. AI agents should be judged by safe repeat execution. The category defines the evidence that matters.
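
Written as configuration, the cluster-to-evidence mapping keeps comparisons honest. A small sketch; the cluster keys are this guide's labels, and the entries mirror the paragraph above.

```python
# Sketch: the category defines the evidence that matters.
PRIMARY_EVIDENCE = {
    "payments": "payment cadence and settlement fit",
    "depin": "real service demand",
    "risk_curation": "mandate discipline",
    "ai_agents": "safe repeat execution",
}

def evidence_for(cluster):
    return PRIMARY_EVIDENCE.get(cluster, "undefined: classify the cluster first")
```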

When the evidence is mixed, keep the note conservative. Discovery research is strongest when it says exactly what is known, what is missing, and what would change the view. That makes future updates easier and prevents a weak launch from becoming permanent coverage just because it was early.

Research Cadence

Set a review date instead of leaving the protocol in an undefined watch state. Early-stage protocols can be checked weekly when launches, integrations, or funding events are active. More mature categories can be checked monthly unless a contract upgrade, incident, or partner rollout changes the evidence. The cadence keeps discovery work from becoming a pile of stale bookmarks.
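
The cadence rule is easy to encode so that nothing sits in an undefined watch state. A sketch under the assumptions above: weekly for active early-stage protocols, roughly monthly otherwise, and immediate re-review when a contract upgrade, incident, or rollout changes the evidence. The stage labels are this guide's, not standard.

```python
# Sketch: compute the next review date from protocol stage.
from datetime import date, timedelta

def next_review(today, stage, evidence_changed=False):
    """Weekly for active early-stage protocols, monthly for mature ones,
    immediate when new evidence lands."""
    if evidence_changed:
        return today  # re-review now
    interval = timedelta(weeks=1) if stage == "early" else timedelta(days=30)
    return today + interval

print(next_review(date.today(), "early"))
```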

Each review should answer one concrete question: did usage repeat, did risk fall, did distribution improve, or did the protocol drift away from its claimed category? If none of those changed, the classification should stay the same.

Continue this cluster

Stay inside the AI agent protocol discovery cluster: