
Troubleshooting

This page covers common problems and their fixes, organized by symptom. If your issue is not listed here, see Getting Help at the bottom.


MCP Server Issues

“Tool not found” in Claude Code

The server is not registered in your MCP configuration.

Fix: Verify the server entry exists:

claude mcp list

You should see codescout listed with 28 tools. If it is missing, register it:

claude mcp add --global codescout -- codescout start --project .

See Installation for the full setup.

Server starts but no tools appear

The project path is not set or points to a directory that does not exist.

Fix: Check the --project argument in your MCP configuration. The path must be an absolute path to an existing directory, or . to use the current working directory. Verify it resolves correctly:

codescout start --project /path/to/your/project

If you omit --project, codescout tries to auto-detect from the current working directory. This works when Claude Code launches the server from within a project, but can fail if the working directory is unexpected.
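A quick shell check confirms the configured path resolves to a directory before you blame the server (a sketch; check_project_dir is just an illustrative helper, substitute your real path for the placeholder):

```shell
# Illustrative helper: report whether a configured project path is a directory
check_project_dir() {
  if [ -d "$1" ]; then
    echo "path OK: $1"
  else
    echo "not a directory: $1"
  fi
}

check_project_dir /path/to/your/project
```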

“Connection refused” or server won’t start

The binary is not installed, not on PATH, or (in HTTP mode) the port is already in use.

Fix:

# Check the binary is installed
which codescout
codescout --version

# If not found, install it
cargo install codescout

# For HTTP mode, check port conflicts
lsof -i :8080

Server crashes on startup

This usually means a required shared library is missing (common with the local-embed feature, which needs ONNX Runtime as a native system library).

Fix: Check the error output on stderr. If it mentions libonnxruntime, install ONNX Runtime on your system, or switch to Ollama (recommended) for free, local embeddings without native dependencies:

# Rebuild using only the remote backend (supports Ollama, OpenAI, etc.)
git clone https://github.com/mareurs/codescout.git
cd codescout
cargo install --path . --no-default-features --features remote-embed

Then start Ollama and configure codescout to use it — see Ollama setup.
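Before pointing codescout at Ollama, you can confirm the daemon is actually reachable. This sketch probes Ollama's /api/version endpoint on its default port (assumes curl is installed; ollama_up is just an illustrative helper):

```shell
# Probe Ollama's default port; /api/version responds when the daemon is up
ollama_up() {
  if curl -fsS http://localhost:11434/api/version >/dev/null 2>&1; then
    echo "ollama is running"
  else
    echo "ollama is not reachable"
  fi
}

ollama_up
```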


LSP and Symbol Tools

Symbol tools return empty results

The language server for that file’s language is not installed or not on PATH.

Fix: Install the required language server and verify it is accessible:

# Rust
which rust-analyzer

# Python
which pyright-langserver

# TypeScript/JavaScript
which typescript-language-server

# Go
which gopls

# Java
which jdtls

# Kotlin
which kotlin-language-server

# C/C++
which clangd

# C#
which OmniSharp

# Ruby
which solargraph

See the Language Support page for the full list.
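To check every supported server in one pass rather than one `which` at a time, a small loop over the list above works (the found/missing labels are just illustrative output):

```shell
# Report which of the supported language servers are on PATH
check_language_servers() {
  for srv in rust-analyzer pyright-langserver typescript-language-server \
             gopls jdtls kotlin-language-server clangd OmniSharp solargraph; do
    if command -v "$srv" >/dev/null 2>&1; then
      echo "found:   $srv"
    else
      echo "missing: $srv"
    fi
  done
}

check_language_servers
```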

Also: The language server may still be initializing. This is especially common with jdtls (Java) and kotlin-language-server, which can take 10-30 seconds on first startup while they index the project. Wait for initialization to finish and retry the tool call.

“No tree-sitter grammar for ‘X’”

A tree-sitter grammar was requested for a language that does not have one bundled.

Fix: Use LSP-based tools instead. symbols provides similar information (file structure, symbol names and kinds) and works for all 9 LSP languages.

If the language is not supported at all, only file operations and semantic search (after indexing) are available.

references returns nothing

Two common causes:

  1. Language server not fully indexed. Some LSP servers need to scan the entire project before they can answer reference queries. This is especially true for Java (jdtls), which builds a workspace model at startup. Wait for initialization to complete and retry.

  2. Symbol has no references. The symbol genuinely is not referenced anywhere in the project. This is correct behavior for unused code.

Symbol tools work for some files but not others

The file’s language may not be recognized, or the language server may have crashed on that specific file.

Fix: Check the server logs (stderr) for error messages from the language server. Restart the MCP server to reset all language server processes. If the problem persists with a specific file, the file may contain syntax errors that the language server cannot parse.


Semantic Search and Indexing

semantic_search returns no results

The embedding index has not been built for this project.

Fix: Build the index, then verify:

{ "tool": "index", "arguments": { "action": "build" } }
{ "tool": "workspace", "arguments": { "action": "status" } }

workspace(action: status) shows the number of indexed files and chunks. If both are zero, the index build failed – check server logs for errors.

“Connection refused” when indexing

The default embedding backend is Ollama, which must be running locally.

Fix:

# Start Ollama
ollama serve

# Verify it's running
curl http://localhost:11434/v1/embeddings \
  -d '{"model": "mxbai-embed-large", "input": "test"}'

If you do not want to run Ollama, switch to a different backend. See Embedding Backends.

“Model not found” when indexing

The configured embedding model has not been pulled into Ollama.

Fix:

ollama pull mxbai-embed-large

Or if you configured a different model in project.toml:

ollama pull <your-model-name>

Results seem wrong or irrelevant after changing the model

The index was built with a different embedding model. Vectors from different models are incompatible – mixing them produces meaningless similarity scores.

Fix: Rebuild the index from scratch:

{ "tool": "index", "arguments": { "action": "build", "force": true } }

Then verify the models match:

{ "tool": "workspace", "arguments": { "action": "status" } }

The response includes configured_model and indexed_with_model. They must be the same.

Indexing is very slow

Embedding large codebases with Ollama on CPU can take minutes or longer.

Fix (pick one):

  • Use a faster backend. openai:text-embedding-3-small is significantly faster than local Ollama for large projects.
  • Reduce scope. Add build artifacts, vendored code, and generated files to ignored_paths in project.toml so they are skipped during indexing.
  • Use GPU. If Ollama has GPU access, embedding is much faster. Check ollama ps to verify the model is loaded on GPU.
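For the second option, an ignored_paths entry might look like the following (a sketch; verify the exact section name and glob syntax against Project Configuration):

```toml
[project]
ignored_paths = [
  "target/",        # build artifacts
  "node_modules/",  # vendored dependencies
  "dist/",          # generated output
]
```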

“No embedding backend compiled in”

The binary was built with --no-default-features and no embedding feature was enabled.

Fix: Reinstall with an embedding backend:

# Remote backend — supports Ollama (recommended), OpenAI, and compatible servers
cargo install codescout --features remote-embed

# Local CPU backend — requires building from source (not available via crates.io)
git clone https://github.com/mareurs/codescout.git
cd codescout
cargo install --path . --features local-embed

Tip: For a free, local setup without building from source, use Ollama with the default remote-embed binary. See Embedding Backends.


Configuration

Changes to project.toml not taking effect

Configuration is loaded when a project is activated. Editing the file after activation does not automatically reload it.

Fix: Call workspace(action: activate) again to reload the configuration:

{ "tool": "workspace", "arguments": { "action": "activate", "path": "/path/to/project" } }

Or restart the MCP server.

“No active project” errors

The server was started without a --project flag and could not auto-detect a project from the working directory.

Fix: Set the project explicitly:

{ "tool": "workspace", "arguments": { "action": "activate", "path": "/path/to/project" } }

Or restart the server with --project:

codescout start --project /path/to/project

File Operations

For a full explanation of the permission model, see Security & Permissions.

“Permission denied” or “Access denied” reading a file

The file is in the built-in deny list or matches a pattern in denied_read_patterns.

The built-in deny list blocks access to sensitive locations regardless of configuration:

~/.ssh
~/.aws
~/.gnupg
~/.config/gcloud
~/.config/gh
~/.docker/config.json
~/.netrc
~/.npmrc
~/.kube/config

On Linux, /etc/shadow and /etc/gshadow are also blocked. On macOS, /etc/master.passwd is blocked.

Fix: This is intentional security behavior. If you genuinely need access to a blocked path, check whether it is in the built-in list (cannot be overridden) or in denied_read_patterns in project.toml (can be removed). See Project Configuration for details.

“Access denied” writing a file outside the project

File write tools are restricted to the project root by default.

Fix: Add the target directory to extra_write_roots in project.toml:

[security]
extra_write_roots = ["/path/to/other/directory"]

Shell commands return “shell execution is disabled”

Shell execution is gated by two settings, both of which must allow it.

Fix: Set both fields in project.toml:

[security]
shell_enabled = true
shell_command_mode = "warn"   # or "unrestricted"

shell_enabled is the master switch (default: false). shell_command_mode controls whether a warning is appended to shell output (default: "warn"). Even with shell_enabled = true, setting shell_command_mode = "disabled" blocks all shell calls.


Git Tools

Git tools return errors

The most common cause: the project root is not a git repository, i.e. it does not contain a .git directory.

Fix: Verify with ls -la /path/to/project/.git. If the project is not a git repo, git tools will not work.
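A more robust check than listing .git directly is to ask git itself, which also handles worktrees and submodules where .git is a file rather than a directory (check_git_repo is just an illustrative helper):

```shell
# Ask git whether a path is inside a repository
check_git_repo() {
  if git -C "$1" rev-parse --git-dir >/dev/null 2>&1; then
    echo "git repository: $1"
  else
    echo "not a git repository: $1"
  fi
}

check_git_repo /path/to/project
```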


Performance

Slow responses on first tool call for a language

The first symbol tool call for a given language starts the language server process. Startup time varies:

Server                      Typical startup
rust-analyzer               2-5 seconds
pyright-langserver          1-3 seconds
typescript-language-server  1-2 seconds
gopls                       1-3 seconds
clangd                      1-2 seconds
jdtls                       10-30 seconds
kotlin-language-server      5-15 seconds

Fix: This is expected. Subsequent calls are fast because the server stays running. If startup time is a problem for Java or Kotlin, use grep or semantic_search for initial exploration — they have no startup delay.

Large project causes tool timeouts

The default tool timeout is 60 seconds. Operations on very large projects (indexing, initial LSP workspace scan) can exceed this.

Fix: Increase the timeout in project.toml:

[project]
tool_timeout_secs = 120

High memory usage

Language servers can use significant memory, especially jdtls and rust-analyzer on large projects. Running multiple language servers simultaneously compounds this.

Fix: codescout starts language servers on demand, so only languages you actively use consume memory. When the MCP server exits (or receives SIGINT/SIGTERM), it gracefully shuts down all language servers via the LSP shutdown protocol. As a safety net, the LspClient Drop implementation also sends SIGTERM to child processes, ensuring cleanup even on abrupt exits.

If you have multiple Claude Code sessions, each spawns its own codescout process with its own language servers. Close unused sessions to reclaim memory.
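To see how many codescout processes are running and how much resident memory each uses, a ps one-liner works on both Linux and macOS (list_codescout is just an illustrative helper):

```shell
# List codescout processes with PID, resident memory (KB), and command name;
# the [c] bracket keeps grep from matching its own process line
list_codescout() {
  ps axo pid,rss,comm | grep '[c]odescout' || echo "no codescout processes running"
}

list_codescout
```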


Getting Help

If none of the above resolves your issue:

  1. Check server logs. codescout logs to stderr. In stdio mode, Claude Code captures this; look in Claude Code’s MCP server logs. In HTTP mode, stderr goes to the terminal where you started the server.

  2. Enable debug logging. Set RUST_LOG=debug for verbose output:

    RUST_LOG=debug codescout start --project /path/to/project
    

    This shows every tool call, LSP message, and embedding operation.

  3. Check the configuration. Use the workspace tool to see the active configuration as the server sees it:

    { "tool": "workspace", "arguments": { "action": "status" } }
    
  4. File an issue. Open a GitHub issue with:

    • The error message (exact text)
    • The tool call that triggered it
    • Your project.toml (redact any secrets)
    • Output from RUST_LOG=debug if available