Architecture and implementation
How the mechanism is built, the decisions behind it, and where to explore the code.
This is conceptual work and experimentation. The code demonstrates ideas but has not been reviewed for security, scalability, or reliability. If you're considering building something real with these concepts, please seek expert input first.
Architecture Overview
Four core components: Capability Description, Matching Engine, Trust System, Exchange Orchestration.
1. Capability Description: natural language → matchable terms
2. Matching Engine: graph construction → cycle detection → chain ranking
3. Trust System: identity verification → track record → network position
4. Exchange Orchestration: proposal → confirmation → execution → satisfaction
The Surplus Exchange Protocol has four core components:
Capability Description
AI-powered capability translation interprets natural-language descriptions of surplus and needs into matchable terms. Participants describe what they have and what they need in their own words — the system handles vocabulary differences across industries and contexts.
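As a rough sketch, translation can be thought of as mapping free text to a normalised, comparable structure. The shapes and stub logic below are illustrative assumptions, not the actual `src/capability/` API:

```typescript
// Illustrative shapes only — the real translator calls an AI model and
// lives in src/capability/; these names are assumptions.
interface MatchableCapability {
  canonicalTerms: string[]; // normalised vocabulary used for matching
  category: string;
  confidence: number;       // 0..1: how certain the translator is
}

// Stub translator: lowercases and keeps substantive words. The real system
// would resolve synonyms ("pitch deck design" ~ "presentation help").
function translate(rawText: string): MatchableCapability {
  const canonicalTerms = rawText
    .toLowerCase()
    .split(/\W+/)
    .filter((t) => t.length > 3);
  return { canonicalTerms, category: "professional_services", confidence: 0.5 };
}

const result = translate("Pitch deck design and presentation help");
// result.canonicalTerms: ["pitch", "deck", "design", "presentation", "help"]
```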
Matching Engine
Constructs a directed graph of offerings and needs, then uses cycle detection to find viable exchange chains.
Trust System
Calculates trust scores from identity verification, exchange history, and network position. Enforces graduated exposure for new participants.
Exchange Orchestration
Manages the lifecycle of exchange chains from proposal through completion. Participants can confirm, propose adjustments, or decline. Tracks delivery status, satisfaction signals, and provides a clear escalation path when exchanges get stuck.
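The lifecycle above can be sketched as a small state machine. The state names and transitions here are an illustrative reading of the proposal → confirmation → execution → satisfaction flow, not the actual `src/protocol/` implementation:

```typescript
// Hypothetical lifecycle states and allowed transitions — a sketch only.
type ChainState =
  | "proposed" | "confirmed" | "executing" | "completed" | "declined" | "stuck";

const transitions: Record<ChainState, ChainState[]> = {
  proposed: ["confirmed", "declined"], // participants confirm or decline
  confirmed: ["executing"],
  executing: ["completed", "stuck"],   // "stuck" triggers the escalation path
  completed: [],
  declined: [],
  stuck: ["executing", "declined"],    // escalation can resume or unwind
};

function canTransition(from: ChainState, to: ChainState): boolean {
  return transitions[from].includes(to);
}
```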
Matching Algorithm
Graph construction, constraint filtering, multi-dimensional scoring, and chain ranking.
1. Graph: build network of offerings and needs
2. Cycles: find closed loops via DFS
3. Rank: score by quality and trust
4. Propose: present top chains to participants
Graph Construction
Participants describe:
- Offerings: What they can provide (surplus capacity)
- Needs: What they would value receiving
Each offering-to-need match becomes a directed edge in the graph:
- Node: Participant
- Edge: Potential exchange (from provider to recipient)
- Weight: Match quality score across multiple dimensions (semantic fit, surplus urgency, timing, geographic and sector alignment, relationship diversity)
Offering-to-need matching is powered by AI capability translation, which interprets natural-language descriptions into comparable terms. This enables cross-industry matching — 'pitch deck design' and 'presentation help' are recognised as related capabilities without manual categorisation.
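A minimal sketch of the edge model: each edge carries a composite weight over the scoring dimensions listed above. The equal-weight average and all names are assumptions, not the actual `src/matching/graph.ts` or `src/matching/scorer.ts` code:

```typescript
// Illustrative graph-edge model; field names are assumptions.
interface Edge {
  from: string;   // provider participant id
  to: string;     // recipient participant id
  weight: number; // composite match quality, 0..1
}

interface ScoreDimensions {
  semanticFit: number;
  surplusUrgency: number;
  timing: number;
  geoSectorAlignment: number;
  relationshipDiversity: number;
}

// Composite weight as an (assumed) equal-weight average of the dimensions.
function compositeWeight(d: ScoreDimensions): number {
  const vals = Object.values(d);
  return vals.reduce((a, b) => a + b, 0) / vals.length;
}

const edge: Edge = {
  from: "participant-abc",
  to: "participant-xyz",
  weight: compositeWeight({
    semanticFit: 0.9, surplusUrgency: 0.6, timing: 0.8,
    geoSectorAlignment: 0.7, relationshipDiversity: 0.5,
  }),
};
```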
Cycle Detection
The core algorithm finds cycles — closed loops where value can flow:
A → B → C → D → A
Implementation uses a DFS-based approach with pruning:
- Minimum cycle length: 2 (direct swaps)
- Maximum cycle length: 6 (configurable)
- Edge weight threshold filters weak matches
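The pruning rules above can be sketched as a small DFS enumerator. This is an illustration of the approach, not `src/matching/cycles.ts` itself; the canonical-start trick for deduplication is an assumption:

```typescript
// Minimal DFS cycle enumeration with the pruning rules above
// (min length 2, max length 6, edge weight threshold).
type Graph = Map<string, { to: string; weight: number }[]>;

function findCycles(g: Graph, minLen = 2, maxLen = 6, threshold = 0.3): string[][] {
  const cycles: string[][] = [];
  for (const start of g.keys()) {
    const dfs = (node: string, path: string[]): void => {
      for (const { to, weight } of g.get(node) ?? []) {
        if (weight < threshold) continue; // prune weak matches
        if (to < start) continue;         // canonical start avoids duplicates
        if (to === start && path.length >= minLen) {
          cycles.push([...path]);         // closed loop found
        } else if (!path.includes(to) && path.length < maxLen) {
          dfs(to, [...path, to]);
        }
      }
    };
    dfs(start, [start]);
  }
  return cycles;
}

// A → B → C → A, plus a weak B → A edge that the threshold prunes.
const g: Graph = new Map([
  ["A", [{ to: "B", weight: 0.8 }]],
  ["B", [{ to: "C", weight: 0.7 }, { to: "A", weight: 0.1 }]],
  ["C", [{ to: "A", weight: 0.9 }]],
]);
const found = findCycles(g); // [["A", "B", "C"]]
```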
Constraint Filtering
Before scoring, edges are filtered by hard constraints:
- Geographic exclusions (participant in a region the recipient won't accept)
- Provider experience requirements (minimum completed exchanges)
- Trust threshold (provider's trust score below recipient's minimum)
Filtered edges are eliminated before cycle detection, reducing the search space and ensuring all discovered chains are feasible.
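The hard constraints can be expressed as a single predicate over candidate edges. Field names below are assumptions for illustration, not the repo's actual schema:

```typescript
// Hard-constraint predicate applied before cycle detection — a sketch.
interface CandidateEdge {
  providerRegion: string;
  providerCompletedExchanges: number;
  providerTrustScore: number;
  recipientExcludedRegions: string[];
  recipientMinExperience: number;
  recipientMinTrust: number;
}

function passesHardConstraints(e: CandidateEdge): boolean {
  return (
    !e.recipientExcludedRegions.includes(e.providerRegion) && // geography
    e.providerCompletedExchanges >= e.recipientMinExperience && // experience
    e.providerTrustScore >= e.recipientMinTrust // trust threshold
  );
}

const edges: CandidateEdge[] = [
  { providerRegion: "EU", providerCompletedExchanges: 4, providerTrustScore: 0.8,
    recipientExcludedRegions: [], recipientMinExperience: 3, recipientMinTrust: 0.5 },
  { providerRegion: "US", providerCompletedExchanges: 1, providerTrustScore: 0.9,
    recipientExcludedRegions: ["US"], recipientMinExperience: 0, recipientMinTrust: 0.5 },
];
const feasible = edges.filter(passesHardConstraints);
```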
Chain Ranking
Found cycles are ranked by:
- Average edge weight (composite match quality across all scoring dimensions)
- Participant trust scores
- Surplus urgency (time-sensitive surplus prioritised)
- Relationship diversity (new partner connections preferred)
- Chain length (shorter preferred for coordination simplicity)
- Timing feasibility
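One way to combine the ranking factors above into a single score, with shorter chains penalised less. The weights are illustrative assumptions, not the repo's actual tuning in src/matching/ranking.ts:

```typescript
// Sketch of chain ranking: combine the factors above into one score.
interface Chain {
  avgEdgeWeight: number; // 0..1 composite match quality
  avgTrust: number;      // 0..1 average participant trust
  urgency: number;       // 0..1, time-sensitive surplus scores higher
  diversity: number;     // 0..1, share of new-partner connections
  length: number;        // number of participants (2..6)
}

function chainScore(c: Chain): number {
  const lengthPenalty = (c.length - 2) / 4; // 0 for a direct swap, 1 at max
  return (
    0.35 * c.avgEdgeWeight +
    0.25 * c.avgTrust +
    0.15 * c.urgency +
    0.15 * c.diversity -
    0.10 * lengthPenalty
  );
}

const chains: Chain[] = [
  { avgEdgeWeight: 0.9, avgTrust: 0.7, urgency: 0.5, diversity: 0.8, length: 3 },
  { avgEdgeWeight: 0.6, avgTrust: 0.9, urgency: 0.2, diversity: 0.3, length: 5 },
];
const ranked = [...chains].sort((a, b) => chainScore(b) - chainScore(a));
```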
Key files:
- src/matching/graph.ts — Graph construction
- src/matching/scorer.ts — Multi-dimensional scoring
- src/matching/cycles.ts — Cycle detection
- src/matching/ranking.ts — Chain ranking
- src/capability/ — Capability translation
Network Health and Transparency
Concentration limits
The algorithm includes a dual defence against participant concentration. First, diminishing returns in scoring — the more chains a participant is already in, the less their inclusion improves a new chain's score. Second, a governance-set hard cap on chain participation as a safety net. Both thresholds are set by the participant advisory body based on network conditions.
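The dual defence can be sketched in a few lines; the cap value and the 1/(1+n) shape are assumptions for illustration, not governance-set numbers:

```typescript
// Diminishing-returns defence against concentration, plus a hard cap.
const HARD_CAP = 10; // governance-set maximum concurrent chains (assumed value)

function inclusionBonus(baseBonus: number, currentChains: number): number {
  if (currentChains >= HARD_CAP) return Number.NEGATIVE_INFINITY; // excluded
  return baseBonus / (1 + currentChains); // halves at 1 chain, thirds at 2…
}
```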
Algorithm transparency
Participants see their own scores, match factors, and the reasons behind matching decisions. They do not see other participants' profiles or network data (consistent with anti-harvesting principles). Algorithm changes require advisory body approval, with a public changelog tracking all modifications.
Trust System
Three-layer trust model: verifiable identity, network position, mutual satisfaction.
Newcomer (default entry) → Probationary (early track record established) → Established (proven track record) → Anchor (high-trust, high-activity)
Three-Layer Model
Layer 1: Verifiable Identity
- Business registration verification
- Professional body membership
- Domain ownership
- Establishes baseline accountability
Layer 2: Network Position
- Exchange partner count
- Repeat partner rate
- Chain participation history
- Network position decays with a 180-day half-life — ongoing activity required
- Harder to fake than ratings
Layer 3: Mutual Satisfaction
- Satisfaction signals from both sides of each exchange
- Accumulated as track record
- Simple categories: satisfied / partially satisfied / not satisfied
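A minimal sketch of how the three layers might combine, with the stated 180-day half-life applied to the network-position layer. The layer weights are assumptions; the actual calculation is in src/trust/calculator.ts:

```typescript
// Sketch: weighted combination of the three trust layers, with decay.
const HALF_LIFE_DAYS = 180;

// Exponential decay: value halves every HALF_LIFE_DAYS of inactivity.
function decayed(value: number, daysSinceActivity: number): number {
  return value * Math.pow(0.5, daysSinceActivity / HALF_LIFE_DAYS);
}

interface TrustInputs {
  identityVerified: boolean;    // layer 1
  networkPosition: number;      // layer 2, 0..1 before decay
  daysSinceLastExchange: number;
  satisfactionRate: number;     // layer 3, share of "satisfied" signals
}

function trustScore(t: TrustInputs): number {
  const identity = t.identityVerified ? 1 : 0;
  const position = decayed(t.networkPosition, t.daysSinceLastExchange);
  return 0.2 * identity + 0.4 * position + 0.4 * t.satisfactionRate;
}

// 180 idle days halve the network-position contribution.
const active = trustScore({ identityVerified: true, networkPosition: 0.8,
                            daysSinceLastExchange: 0, satisfactionRate: 0.9 });
const idle = trustScore({ identityVerified: true, networkPosition: 0.8,
                          daysSinceLastExchange: 180, satisfactionRate: 0.9 });
```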
Trust Tiers
| Tier | Capabilities | Requirements |
|---|---|---|
| Newcomer | Bilateral exchanges only; single concurrent exchange | Verified identity |
| Probationary | Chain participation; multiple concurrent exchanges | 3 bilateral exchanges with different partners, or vouching from an Established or Anchor member |
| Established | Full exchange participation; can vouch for new members | 5+ completed exchanges, positive signals |
| Anchor | Large, complex chains; network stabilising role | 20+ exchanges, high satisfaction, network centrality |
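The tier thresholds can be expressed as a classifier sketch. The satisfaction cut-offs (0.7, 0.9) and the centrality floor are assumed values; the actual logic lives in src/trust/tiers.ts and may differ:

```typescript
// Illustrative tier classifier based on the table above.
type Tier = "Newcomer" | "Probationary" | "Established" | "Anchor";

interface TrackRecord {
  completedExchanges: number;
  distinctPartners: number;
  satisfactionRate: number;      // share of "satisfied" signals, 0..1
  vouchedByEstablished: boolean; // vouching from Established/Anchor member
  networkCentrality: number;     // 0..1
}

function classify(r: TrackRecord): Tier {
  if (r.completedExchanges >= 20 && r.satisfactionRate >= 0.9 &&
      r.networkCentrality >= 0.5) return "Anchor";
  if (r.completedExchanges >= 5 && r.satisfactionRate >= 0.7) return "Established";
  if ((r.completedExchanges >= 3 && r.distinctPartners >= 3) ||
      r.vouchedByEstablished) return "Probationary";
  return "Newcomer";
}
```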
Key files:
- src/trust/calculator.ts — Trust score calculation
- src/trust/tiers.ts — Tier classification
- schemas/trust-profile.schema.json — Trust data schema
Participant Types & Roadmap
Human participants now, with a clear path to delegated and autonomous agent integration.
Current: Human Participants
Phase 1 focuses on human-operated businesses:
- Organisations (LLP, Ltd, sole trader)
- Individuals operating professionally
- Human-initiated offerings and needs
- Human confirmation of exchanges
Future: Agent Integration
The protocol is designed to support AI agent participation:
Phase 2: Delegated Agents
- Agent acts on behalf of known participant
- Human retains authority over commitments
- Agent handles discovery and negotiation
Phase 3: Autonomous Agents
- Agents as first-class participants
- Autonomous offering of agent-native capabilities
- Trust profiles for agents (different signals)
Agent Boundary
SEP is not trying to be an agent protocol. It's an exchange protocol that agents can participate in.
The boundary:
- SEP handles: Matching, trust, exchange orchestration
- External agents handle: Capability execution, decision-making, negotiation style
This means SEP can integrate with MCP, A2A, or other agent protocols rather than competing with them.
Key Design Decisions
Six foundational decisions with rationale and trade-offs.
Subjective Value Over Shared Currency
Decision: Each participant maintains their own sense of balance. No shared ledger, no network-wide currency.
Rationale: The surplus frame means baseline is "better than nothing." Eliminates valuation disputes while respecting contextual differences.
Trade-off: Harder to track network health metrics. Corporate accounting may need workarounds.
B2B Focus Over Consumer
Decision: Design primarily for business-to-business exchanges, particularly professional services.
Rationale: Businesses have predictable surplus, professional accountability, higher stakes per exchange. Historical evidence shows B2B systems survive where consumer systems fail.
Trade-off: Smaller initial addressable market.
Trust Through Track Record, Not Ratings
Decision: Trust is calculated from exchange history and network position, not user ratings.
Rationale: Ratings are easily gamed. Network position (partner count, repeat rate, chain participation) is harder to fake because it requires actual exchanges.
Trade-off: New participants face cold-start problem. The Newcomer tier provides a default entry path (bilateral-only exchanges, identity-verified) without requiring introductions. Vouching from Established or Anchor members accelerates progression but is not required.
Cycles Over Direct Matching
Decision: Prioritise multi-party chains over direct swaps.
Rationale: Direct matches are rare. The algorithm's value is finding non-obvious paths across the network.
Trade-off: More complex coordination. Chains fail if any participant defaults.
Algorithm Transparency
Decision: Every participant sees their own scores, match factors, and the reasons behind matching decisions. Algorithm changes require advisory body approval with a public changelog.
Rationale: When gaming the algorithm means being more trustworthy, fulfilling commitments, and building genuine relationships, those are aligned incentives — not a problem. Withholding scores that determine participant opportunities is hard to defend ethically.
Trade-off: Sophisticated actors may optimise for visible metrics. But the metrics reflect genuine participation — you can't improve your score without real exchanges with real partners over time.
Commitment-Based Accountability
Decision: The system tracks whether participants do what they said they would do, not whether they give as much as they receive. Receiving more than giving is not a problem — it's the system working.
Rationale: The surplus frame means the baseline is zero. Anything received is better than nothing. Balance tracking reintroduces currency-like dynamics that the protocol exists to avoid.
Trade-off: No network-wide balance metrics. Corporate accounting may need to assign values independently for compliance purposes.
Schema Overview
JSON schemas for all protocol data structures, with validation.
The protocol defines JSON schemas for all data structures:
| Schema | Purpose |
|---|---|
| participant.schema.json | Business identity and profile |
| capability-offering.schema.json | What a participant can provide |
| need.schema.json | What a participant wants to receive |
| exchange-chain.schema.json | A complete chain with edges and status |
| trust-profile.schema.json | Trust data for a participant |
| protocol-messages.schema.json | Messages between protocol components |
Example: Capability Offering
```json
{
  "id": "offering-001",
  "participant_id": "participant-abc",
  "capability": {
    "category": "professional_services",
    "description": "Contract review for standard commercial agreements",
    "constraints": {
      "max_hours": 8,
      "turnaround_days": 5
    }
  },
  "availability": {
    "valid_from": "2026-03-01",
    "valid_until": "2026-06-30"
  }
}
```
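A minimal structural check against an offering like the example above. This is a simplified sketch of what src/validation/ might do; the repo validates against the full JSON schemas, and the helper name here is hypothetical:

```typescript
// Hypothetical required-fields check — a stand-in for full schema validation.
function hasRequiredFields(obj: unknown, fields: string[]): boolean {
  return typeof obj === "object" && obj !== null &&
    fields.every((f) => f in (obj as Record<string, unknown>));
}

const offering = {
  id: "offering-001",
  participant_id: "participant-abc",
  capability: {
    category: "professional_services",
    description: "Contract review for standard commercial agreements",
    constraints: { max_hours: 8, turnaround_days: 5 },
  },
  availability: { valid_from: "2026-03-01", valid_until: "2026-06-30" },
};

const valid = hasRequiredFields(offering,
  ["id", "participant_id", "capability", "availability"]);
```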
Start Here
Repository structure, demo commands, and key entry points.
Repository Structure
```
surplus-exchange-protocol/
├── src/
│   ├── matching/     ← Start here for algorithm
│   ├── trust/        ← Start here for trust system
│   ├── capability/   ← Capability translation
│   ├── protocol/     ← Exchange lifecycle
│   ├── examples/     ← Runnable demo scripts
│   └── validation/   ← Schema validation
├── schemas/          ← JSON schemas
├── examples/         ← Example data files
└── docs/
    ├── design/       ← Design decisions
    └── specs/        ← Detailed specifications
```
Running the Demos
```bash
# Install dependencies
npm install

# Run matching algorithm demo
npm run match

# Run trust calculation demo
npm run trust

# Run chain tracing demo
npm run trace

# Run capability translation demo (offline, no API key needed)
npm run capability

# Interactive capability translation with AI
npm run capability:live
```
Key Entry Points
| If you want to understand... | Start with... |
|---|---|
| How matching works | src/matching/cycles.ts |
| How trust is calculated | src/trust/calculator.ts |
| How capability translation works | src/capability/index.ts |
| The data model | schemas/*.schema.json |
| Design rationale | docs/design/decisions.md |
| What's still open | docs/design/open-questions.md |
What's Working vs What's Open
Working:
- Matching algorithm (multi-dimensional scoring, constraint filtering, ranking)
- Trust calculation (4-tier model with exposure limits)
- Capability translation (offline and live modes)
- Schema validation
- Example data and demos
Open:
- Network deployment (no live infrastructure)
- Agent integration (human-only currently)
- Physical goods handling (schema support only)