Frequently Asked Questions
Everything you need to know about ByteBell's MCP-powered cross-repository intelligence.
What is ByteBell?
ByteBell is an MCP server (Model Context Protocol) that gives your AI coding tools — Cursor, Claude Code, Windsurf, VS Code Copilot — instant cross-repository intelligence.
Instead of your AI assistant re-reading files every session and burning through tokens, ByteBell pre-indexes your entire codebase into a persistent knowledge graph. When your AI tool needs context, it pulls it from ByteBell's MCP server in milliseconds — roughly 89% fewer input tokens, 85% faster responses, ~70% fewer tool calls.
Think of it as an always-on memory layer for your AI copilot that understands how all your repositories connect.
How does the ByteBell MCP work?
ByteBell works in three steps:
1. Index: Connect your GitHub/GitLab repos. ByteBell builds a persistent knowledge graph — mapping dependencies, API contracts, service relationships, and code patterns across all your repositories.
2. Connect: Add ByteBell as an MCP server in your AI tool (Cursor, Claude Code, Windsurf, or any MCP-compatible client). One config line, zero plugins to install.
3. Query: Your AI tool automatically pulls cross-repo context from ByteBell when it needs it. Instead of 84 tool calls reading files one by one, your AI gets the full picture in ~15 calls — 14 of which are free MCP lookups that cost you $0 in tokens.
The knowledge graph stays in sync with your codebase via webhooks, so context is always current.
How is ByteBell different from GitHub Copilot, Cursor, or Claude Code on their own?
ByteBell doesn't replace your AI coding tool — it supercharges it via MCP.
Without ByteBell, tools like Cursor or Claude Code re-read your files every session, have no memory between conversations, and can only see one repo at a time. This means:
- 84+ tool calls per complex query (each one costs tokens)
- 38,900+ input tokens burned just re-reading context
- No understanding of cross-repo dependencies
- Slow responses as the AI rebuilds context from scratch
With ByteBell MCP connected:
- ~15 total tool calls (14 free via MCP)
- 34,500 tokens served at $0 from pre-indexed context
- Full cross-repository dependency awareness
- 85% faster responses with persistent context
You keep using Cursor, Claude Code, or Windsurf exactly as you do now. ByteBell just makes them dramatically smarter and cheaper.
What is MCP and why does ByteBell use it?
MCP (Model Context Protocol) is an open standard created by Anthropic that lets AI tools connect to external data sources. Think of it like a USB port for AI — any MCP-compatible tool can plug into any MCP server.
ByteBell is built as an MCP server because:
- Zero vendor lock-in: Works with Cursor, Claude Code, Windsurf, VS Code Copilot, or any future MCP-compatible tool
- No plugins to install: One config line and your AI tool gains cross-repo intelligence
- Industry standard: MCP is backed by Anthropic and adopted by the entire AI coding ecosystem
- Token-free context: MCP tool calls from ByteBell don't consume your AI provider's input tokens — the context is served directly from our pre-indexed graph
You don't need to learn a new tool. You keep using whatever AI coding assistant you already love — ByteBell just makes it smarter via MCP.
Which AI tools does ByteBell work with?
ByteBell works with any MCP-compatible AI coding tool, including:
- Cursor — Add ByteBell as an MCP server in settings
- Claude Code (CLI) — Connect via MCP config
- Windsurf — Native MCP support
- VS Code + GitHub Copilot — Via MCP extension
- Any future MCP client — The protocol is an open standard
Setup takes under 2 minutes. Add one config block with your ByteBell API key, and your AI tool instantly gains access to your entire cross-repo knowledge graph.
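The exact settings screen varies by client, but most MCP clients (Cursor, Claude Code, Windsurf) read a JSON config with an mcpServers map. A sketch of what a ByteBell entry could look like follows; the server name, URL, and header shown here are illustrative placeholders, not ByteBell's documented values:

```json
{
  "mcpServers": {
    "bytebell": {
      "url": "https://mcp.bytebell.example/v1",
      "headers": {
        "Authorization": "Bearer YOUR_BYTEBELL_API_KEY"
      }
    }
  }
}
```

Restart your AI tool after saving the config so it picks up the new server.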
Does ByteBell work with monorepos or only multi-repo setups?
Yes. ByteBell works with both monorepos and multi-repo architectures.
For monorepos: Even in a single large repository, ByteBell adds massive value. A monorepo with 500K+ files is too large for any AI tool to read in a single session. Your AI assistant wastes tokens re-reading the same files repeatedly, has no memory between sessions, and can't map internal package dependencies. ByteBell pre-indexes the entire monorepo into a knowledge graph, so your AI tool gets instant context on any part of the codebase without burning through tokens.
For multi-repo setups: This is where ByteBell's cross-repository intelligence really shines. It maps dependencies, API contracts, and service relationships across all your repos — something no single-repo AI tool can do.
For hybrid setups: Many teams have a monorepo for core services plus satellite repos for tooling, infrastructure, or SDKs. ByteBell handles all of these as one unified knowledge graph.
The bottom line: if your codebase is too large for an AI to hold in context (and at 500K+ files, it definitely is), ByteBell makes your AI copilot dramatically faster and cheaper regardless of repo structure.
How does ByteBell save money on AI token costs?
Without ByteBell, every time your AI coding tool needs context, it reads files directly — each file read is a tool call that consumes input tokens. For a complex cross-repo query, this means 84+ tool calls and 38,900+ tokens burned just to understand the question.
With ByteBell's MCP:
- 14 out of 15 tool calls are MCP lookups that pull pre-indexed context at $0 token cost
- Only ~1 tool call actually goes to the AI provider
- Total token consumption drops from 38,900+ to ~4,400 — roughly an 89% reduction in input tokens
For a team of 10 developers making 20+ complex queries per day, this translates to thousands of dollars saved monthly in AI API costs alone — before you even count the productivity gains from faster responses.
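As a back-of-the-envelope check on that claim, here is a small Python sketch of the savings calculation. The per-million-token price is an assumed example rate for a premium model, not a ByteBell or provider quote:

```python
# Back-of-the-envelope token-cost savings, using the figures above.
# PRICE_PER_M is an assumed example rate, not a quoted price.

TOKENS_WITHOUT = 38_900   # input tokens per complex query, no ByteBell
TOKENS_WITH = 4_400       # input tokens per complex query, with ByteBell MCP
PRICE_PER_M = 15.0        # assumed $ per million input tokens

def monthly_savings(devs: int, queries_per_day: int, workdays: int = 22) -> float:
    """Estimated monthly $ saved on input tokens for a team."""
    queries = devs * queries_per_day * workdays
    saved_tokens = (TOKENS_WITHOUT - TOKENS_WITH) * queries
    return saved_tokens * PRICE_PER_M / 1_000_000

# 10 developers, 20 complex queries each per day:
print(f"${monthly_savings(10, 20):,.0f}/month saved")  # → $2,277/month saved
```

At cheaper model rates the dollar figure shrinks proportionally, but the token reduction itself is unchanged.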
What languages and frameworks does ByteBell support?
ByteBell is language-agnostic. The knowledge graph captures relationships at the architectural level — API contracts, imports, service calls, dependencies — regardless of implementation language.
Supported languages include Python, JavaScript, TypeScript, Go, Rust, Java, Solidity, C++, and more. It also understands framework-specific patterns in React, Django, Rails, Spring, Express, and others.
This means you can ask questions like "How does our Python API service communicate with the Go microservice?" and get answers that span both codebases with exact file and line citations.
How much does ByteBell cost?
ByteBell pricing scales with your codebase:
Growth Plan: $1,200/month
- Up to 25 repositories
- MCP integration for Cursor, Claude Code, Windsurf
- Cross-repository dependency mapping
- Impact analysis with citations
- Additional repos: +$50/repo/month
Scale Plan: $5,000/month (Most Popular)
- Up to 100 repositories
- Advanced knowledge graph with full dependency visualization
- Priority indexing and faster sync
- Dedicated support
- Additional repos: +$30/repo/month
Enterprise Plan: $10,000/month
- Unlimited repositories
- On-premises or VPC deployment
- SSO, RBAC, audit logging
- Custom integrations
- Dedicated account manager
ROI: For a team of 10 developers, ByteBell saves ~50 hours/week in developer time and thousands in AI API token costs — paying for itself many times over.
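The per-repo overage math above can be sketched as a tiny helper. Plan names and rates mirror the table; treat it as illustrative, since actual billing terms come from ByteBell:

```python
# Monthly cost estimate from the plan table above (illustrative only).

PLANS = {
    # name: (base $/month, included repos, $/month per additional repo)
    "growth": (1_200, 25, 50),
    "scale": (5_000, 100, 30),
    "enterprise": (10_000, None, 0),  # unlimited repositories
}

def monthly_cost(plan: str, repos: int) -> int:
    """Base price plus per-repo overage beyond the plan's included count."""
    base, included, per_extra = PLANS[plan]
    if included is None:
        return base  # Enterprise: no overage
    return base + max(0, repos - included) * per_extra

print(monthly_cost("growth", 30))   # 30 repos on Growth: 1200 + 5 * 50 = 1450
print(monthly_cost("scale", 120))   # 120 repos on Scale: 5000 + 20 * 30 = 5600
```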
What ROI can we expect from ByteBell?
ByteBell delivers ROI across three dimensions:
1. Direct AI cost savings (~89% fewer input tokens): Per complex query, input tokens drop from ~38,900 to ~4,400. For teams spending $500+/month on AI API tokens, ByteBell pays for itself in token savings alone.
2. Developer time savings:
- 85% faster AI responses — no more waiting for the AI to re-read your codebase
- ~70% fewer tool calls per query — less waiting, more building
- 5+ hours saved per developer per week on context-building and dependency analysis
3. Quality improvements:
- AI responses grounded in your actual code, not hallucinated patterns
- Cross-repo awareness prevents production breaks from missed dependencies
- New developers productive in days instead of months
Typical ROI: For a 10-developer team, ByteBell saves ~50 hours/week in developer time and thousands in AI API costs — against a $1,200–$10,000/month investment.
Is my code safe with ByteBell?
Yes. ByteBell is designed for enterprise security requirements:
- Your code stays yours: ByteBell indexes metadata and relationships — it doesn't store your raw source code
- Encrypted everything: All data encrypted in transit and at rest
- Permission inheritance: ByteBell respects your existing GitHub/GitLab permissions — developers only see repos they already have access to
- Deployment options: Cloud-hosted, your VPC, or fully on-premises
- Audit logging: Full audit trail of all queries and access
- SOC 2 aligned: Enterprise-grade security practices
ByteBell never sends your code to third-party AI providers. The knowledge graph lives in your controlled environment.
How does ByteBell stay in sync with my codebase?
ByteBell's knowledge graph updates automatically via webhooks. When you push code, merge a PR, or update documentation, ByteBell re-indexes the affected parts of the graph in near real-time.
This means:
- No manual re-indexing needed
- Your AI tool always gets current context, not stale data
- Branch-aware — you can query against specific branches
- Version tracking ties answers to specific commits and releases
Compare this to AI tools without ByteBell, which have zero memory between sessions and must re-read everything from scratch each time.
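The sync flow described above can be sketched as a push-webhook receiver. This is a hypothetical illustration, not ByteBell's actual service: the payload fields follow GitHub's push event shape, and reindex_paths is a stand-in for whatever incremental indexer runs behind it:

```python
# Hypothetical sketch: re-index only the files a push actually touched.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def touched_files(event: dict) -> set[str]:
    """Collect every path added, modified, or removed across a push's commits."""
    touched: set[str] = set()
    for commit in event.get("commits", []):
        for key in ("added", "modified", "removed"):
            touched.update(commit.get(key, []))
    return touched

def reindex_paths(repo: str, branch: str, paths: set[str]) -> None:
    # Stand-in for the incremental indexer: update only the graph nodes
    # whose source files changed, leaving the rest of the graph intact.
    print(f"re-indexing {len(paths)} file(s) in {repo}@{branch}")

class PushWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        event = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        branch = event.get("ref", "").removeprefix("refs/heads/")
        reindex_paths(event["repository"]["full_name"], branch, touched_files(event))
        self.send_response(204)  # acknowledged, nothing to return
        self.end_headers()

# To run the receiver (blocks forever):
#     HTTPServer(("", 8080), PushWebhook).serve_forever()
```

Because only the touched paths are re-indexed, a push to one service updates the graph in near real-time without re-crawling every repository.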
Who is ByteBell built for?
ByteBell is built for engineering teams where the codebase is too large for AI tools to handle efficiently:
- Microservices teams (10-100+ repos): Need cross-repo dependency awareness that single-repo AI tools can't provide
- Monorepo teams (500K+ files): Codebase too large for AI to read in a session — need pre-indexed context
- Platform engineering teams: Maintaining shared infrastructure used by dozens of downstream services
- Fast-growing startups (5-50 engineers): Scaling from monolith to microservices while maintaining velocity
- Web3/blockchain teams: Multiple protocol implementations, cross-chain dependencies, rapid ecosystem changes
If your developers are spending more time understanding code than writing it, ByteBell is for you.
How does ByteBell help onboard new developers?
Without ByteBell, onboarding to a complex codebase takes 3-6 months. New developers are afraid to make changes because they can't see what will break.
With ByteBell connected to their AI tool via MCP, a new developer can immediately ask:
- "How does authentication flow through our services?"
- "Which repos will break if I change this shared library?"
- "What's the data flow from user signup to billing?"
They get accurate, cited answers from your actual codebase — not hallucinated guesses. First meaningful cross-repo PR in under a week instead of months.
Does ByteBell prevent knowledge loss when engineers leave?
Yes. ByteBell's knowledge graph captures the architectural understanding that normally exists only in engineers' heads — dependency relationships, design decisions linked to code and PRs, and cross-repo context.
When an engineer leaves, their knowledge stays queryable. Future team members can ask "Why was this service split into three repos?" and get answers grounded in the actual code history, not tribal memory.
How do I get started with ByteBell?
Option 1: Try it now (no setup):
- Connect to our community deployments — pre-loaded with open-source multi-repo architectures like Ethereum core protocol repos
- Experience full cross-repo MCP functionality immediately
Option 2: 30-minute demo:
- We connect ByteBell to your actual repositories
- See cross-repo intelligence on your real codebase
- Email: saurav@bytebell.ai
Option 3: Self-serve pilot:
- Connect your first 5 repositories
- 14-day free trial, no credit card required
- 5-minute setup for the hosted version
Does ByteBell hallucinate or make up answers?
ByteBell is designed to eliminate AI hallucination for codebase queries. Every answer is grounded in your actual indexed code — specific files, line numbers, and dependency paths.
How it works:
- Multi-agent verification cross-checks dependency claims against real code
- Every citation points to an actual file and line in your repos
- If ByteBell can't verify a relationship with real source code, it says so rather than guessing
This is especially critical for cross-repo work, where a hallucinated dependency could lead to a production incident. ByteBell maintains <4% hallucination rates — compared to ~15-30% for general-purpose AI tools answering codebase questions without indexed context.