Find your files with natural language and ask questions.
A smart file indexer with AI search (RAG engine), automatic OCR, and MCP interface.
Features:
- Indexes plaintext, documents, PDFs, images
- Processes images using automatic OCR and entity extraction
- Search and query files using AI (OpenAI, Ollama, LM Studio)
- Includes an MCP server for automation through your IDE or AI extension
Usage:
- Selects and tracks files using patterns like `~/Documents/*.pdf`
- Changes across files are tracked and committed to a local database
Search and Query (RAG¹) quality features:
- Files are split using semantic chunking with context headers
- The RAG engine uses reranking and expansion of retrieved chunks

¹ Retrieval-Augmented Generation
Collaborators welcome
You are invited to contribute to this open source project!
Feel free to file issues and submit pull requests anytime.
Just getting started?
Install Archive Agent on Linux
Want to know the nitty-gritty details?
How Archive Agent works
Looking for the CLI command reference?
Run Archive Agent
Looking for the MCP tool reference?
MCP Tools
Want to upgrade for the latest features?
Update Archive Agent
Screenshot of the command-line interface (CLI):
- Archive Agent
- Structure
- Supported OS
- Install Archive Agent
- AI provider setup
- How Archive Agent works
- Run Archive Agent
- Quickstart
- Show list of commands
- Create or switch profile
- Open current profile config in nano
- Add included patterns
- Add excluded patterns
- Remove included / excluded patterns
- List included / excluded patterns
- Resolve patterns and track files
- List tracked files
- List changed files
- Commit changed files to database
- Combined track and commit
- Search your files
- Query your files
- Launch Archive Agent GUI
- Start MCP Server
- MCP Tools
- Update Archive Agent
- Archive Agent settings
- Qdrant database
- Developer's guide
- Known issues
- Licensed under GNU GPL v3.0
- GUI sneak peek
Archive Agent has been tested with these configurations:
- Ubuntu 24.04 (PC x64)
If you've successfully installed and tested Archive Agent with a different setup, please let me know and I'll add it here!
Please install these requirements before proceeding:
- Docker (for running the Qdrant server)
- Python >= 3.10 (core runtime; usually already installed)
This installation method should work on any Linux distribution derived from Ubuntu (e.g. Linux Mint).
To install Archive Agent in the current directory of your choice, run this once:
git clone https://github.com/shredEngineer/Archive-Agent
cd Archive-Agent
chmod +x install.sh
./install.sh
The `install.sh` script will execute the following steps, in order:
- Download and install `uv` (used for Python environment management)
- Install the custom Python environment
- Install the `spaCy` tokenizer model (used for chunking)
- Install `pandoc` (used for document parsing)
- Download and install the Qdrant Docker image with persistent storage and auto-restart
- Install a global `archive-agent` command for the current user

Archive Agent is now installed!
Please complete the AI provider setup next.
(Afterward, you'll be ready to Run Archive Agent!)
Archive Agent lets you choose between different AI providers:
- Remote APIs (higher performance and costs, less privacy):
  - OpenAI: Requires an OpenAI API key.
- Local APIs (lower performance and costs, best privacy):
  - Ollama: Requires Ollama running locally.
  - LM Studio: Requires LM Studio running locally.

Good to know: You will be prompted to choose an AI provider at startup; see: Run Archive Agent.

Note: You can customize the specific models used by the AI provider in the Archive Agent settings. However, you cannot change the AI provider of an existing profile, as the embeddings would be incompatible; to choose a different AI provider, create a new profile instead.
If the OpenAI provider is selected, Archive Agent requires an OpenAI API key.
To export your OpenAI API key, replace `sk-...` with your actual key and run this once:
echo "export OPENAI_API_KEY='sk-...'" >> ~/.bashrc && source ~/.bashrc
This will persist the export for the current user.

Good to know: OpenAI won't use your data for training.
If the Ollama provider is selected, Archive Agent requires Ollama running at `http://localhost:11434`.
With the default Archive Agent settings, these Ollama models are expected to be installed:
ollama pull llama3.1:8b # for chunk/rerank/query
ollama pull llava:7b-v1.6 # for vision
ollama pull nomic-embed-text:v1.5 # for embed

Good to know: Ollama also works without a GPU. At least 32 GiB RAM is recommended for smooth performance.

If the LM Studio provider is selected, Archive Agent requires LM Studio running at `http://localhost:1234`.
With the default Archive Agent settings, these LM Studio models are expected to be installed:
meta-llama-3.1-8b-instruct # for chunk/rerank/query
llava-v1.5-7b # for vision
text-embedding-nomic-embed-text-v1.5 # for embed

Good to know: LM Studio also works without a GPU. At least 32 GiB RAM is recommended for smooth performance.

Overview of Archive Agent processing and control:
graph LR
%% Ingestion Pipeline
subgraph Ingestion
A[Track and Commit Files] --> B[Parse & OCR]
B --> C[Semantic Chunking]
C --> D[Embed Chunks]
D --> E[Store Vectors in Qdrant]
end
%% Query Pipeline
subgraph Query
F[Ask Question] --> G[Embed Question]
G --> H[Retrieve Nearest Chunks]
E --> H
H --> I[Rerank by Relevance]
I --> J[Expand Context]
J --> K[Generate Answer]
K --> L[Show Reply]
end
Archive Agent currently supports these file types:
- Text:
  - Plaintext: `.txt`, `.md`
  - Documents:
    - ASCII documents: `.html`, `.htm` (images not supported)
    - Binary documents: `.odt`, `.docx` (including images)
  - PDF documents: `.pdf` (including images; also see OCR strategies)
- Images: `.jpg`, `.jpeg`, `.png`, `.gif`, `.webp`, `.bmp`
Ultimately, Archive Agent decodes everything to text like this:
- Plaintext files are decoded to UTF-8.
- Documents are converted to plaintext, images are extracted.
- PDF documents are decoded according to the OCR strategy.
- Images are decoded to text using AI vision.
  - The vision model will reject unintelligible images.
  - Entity extraction extracts structured information from images.
  - Structured information is formatted as an image description.

See Archive Agent settings: `image_entity_extract`

Note: Unsupported files are tracked but not processed.
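For illustration only, formatting structured entities as an image description could look like this minimal Python sketch; the function name and entity fields are assumptions, not Archive Agent's actual schema:

```python
# Hypothetical sketch: turn structured image entities into a plain-text
# description suitable for chunking and embedding. Field names are invented.
def format_image_description(entities: dict) -> str:
    parts = []
    if entities.get("people"):
        parts.append("People: " + ", ".join(entities["people"]))
    if entities.get("objects"):
        parts.append("Objects: " + ", ".join(entities["objects"]))
    if entities.get("visible_text"):
        parts.append("Visible text: " + entities["visible_text"])
    return "\n".join(parts) if parts else "No recognizable content."
```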
For PDF documents, Archive Agent supports different OCR strategies:
- `strict` OCR strategy (recommended):
  - The PDF OCR text layer is ignored.
  - PDF pages are treated as images.
  - Expensive and slow, but more accurate.
- `relaxed` OCR strategy:
  - The PDF OCR text layer is extracted.
  - PDF foreground images are decoded, but background images are ignored.
  - Cheap and fast, but less accurate.
- `auto` OCR strategy:
  - Attempts to select the best OCR strategy for each page, based on the number of characters extracted from the PDF OCR text layer, if any (see the sketch below).
  - Decides based on `ocr_auto_threshold`, the minimum number of characters for the `auto` OCR strategy to resolve to `relaxed` instead of `strict`.
  - Trade-off between cost, speed, and accuracy.

See Archive Agent settings: `ocr_strategy`, `ocr_auto_threshold`

Note: The `auto` OCR strategy is still experimental.

Note: PDF documents often contain small or scattered images related to page style and layout, which cause overhead while contributing little information or even cluttering the result.

Good to know: You will be prompted to choose an OCR strategy at startup (see Run Archive Agent).
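For illustration, the per-page decision of the `auto` strategy could look roughly like this minimal Python sketch; the function name and arguments are hypothetical, not taken from the Archive Agent codebase:

```python
# Hypothetical sketch of the `auto` OCR strategy decision per PDF page.
# `text_layer` is the text extracted from the page's OCR text layer (if any).
def resolve_auto_ocr_strategy(text_layer: str, ocr_auto_threshold: int) -> str:
    if len(text_layer.strip()) >= ocr_auto_threshold:
        return "relaxed"  # enough characters: trust the existing text layer
    return "strict"       # sparse or missing text layer: treat the page as an image
```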
Archive Agent processes decoded text like this:
- Decoded text is sanitized and split into sentences.
- Sentences are grouped into reasonably-sized blocks.
- Each block is split into smaller chunks using an AI model.
- Block boundaries are handled gracefully (last chunk carries over).
- Each chunk is prefixed with a context header (improves search).
- Each chunk is turned into a vector using AI embeddings.
- Each vector is turned into a point with file metadata.
- Each point is stored in the Qdrant database.
See Archive Agent settings: `chunk_lines_block`

Good to know: This smart chunking improves the accuracy and effectiveness of the retrieval.
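As a rough illustration of the block-and-chunk idea (not the actual implementation, which delegates the fine-grained splitting to an AI model), a minimal Python sketch could look like this:

```python
# Illustrative sketch: group sentences into blocks, then emit one chunk per
# block with a context header prefixed. In Archive Agent, an AI model splits
# each block further; here each block simply becomes one chunk.
def build_chunks(sentences: list[str], chunk_lines_block: int, context_header: str) -> list[str]:
    chunks = []
    for start in range(0, len(sentences), chunk_lines_block):
        block = sentences[start:start + chunk_lines_block]
        chunks.append(context_header + "\n" + " ".join(block))
    return chunks
```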
To ensure that every chunk can be traced back to its origin, Archive Agent maps the text contents of each chunk to the corresponding line numbers or page numbers of the source file.
- Line-based files (e.g., `.txt`) use the range of line numbers as reference.
- Page-based files (e.g., `.pdf`) use the range of page numbers as reference.

Note: References are only approximate due to paragraph/sentence splitting/joining in the chunking process.
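As a rough sketch of this mapping (with invented names, not the actual implementation), the approximate line range of a chunk could be derived like this:

```python
# Hypothetical sketch: approximate a chunk's source line range by locating
# its sentences in the decoded source lines (1-based line numbers).
def approximate_line_range(chunk_sentences: list[str], source_lines: list[str]) -> tuple[int, int]:
    hits = [
        line_number
        for line_number, line in enumerate(source_lines, start=1)
        if any(sentence[:40] in line for sentence in chunk_sentences)
    ]
    if not hits:
        return (1, len(source_lines))  # fall back to the whole file
    return (min(hits), max(hits))
```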
Archive Agent retrieves chunks related to your question like this:
- The question is turned into a vector using AI embeddings.
- Points with similar vectors are retrieved from the Qdrant database.
- Only chunks of points with sufficient score are kept.
See Archive Agent settings: `retrieve_score_min`, `retrieve_chunks_max`
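A minimal Python sketch of this score filtering (illustrative only; `scored_points` and the function name are assumptions, not Archive Agent's actual code):

```python
# Hypothetical sketch: keep at most `retrieve_chunks_max` chunks whose
# similarity score is at least `retrieve_score_min`.
def filter_retrieved(scored_points: list[tuple[float, str]],
                     retrieve_score_min: float,
                     retrieve_chunks_max: int) -> list[str]:
    kept = [(score, chunk) for score, chunk in scored_points if score >= retrieve_score_min]
    kept.sort(key=lambda item: item[0], reverse=True)
    return [chunk for _, chunk in kept[:retrieve_chunks_max]]
```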
Archive Agent filters the retrieved chunks like this:
- The retrieved chunks are reranked by relevance to your question.
- Only the top relevant chunks are kept (the other chunks are discarded).
- Each selected chunk is expanded to get a larger context from the relevant documents.

See Archive Agent settings: `rerank_chunks_max`, `expand_chunks_radius`
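The reranking and expansion step could be sketched like this in Python (illustrative only; the relevance scores and the flat `document_chunks` list are assumptions):

```python
# Hypothetical sketch: keep the top-ranked chunks, then expand each one with
# its neighboring chunks from the same document.
def rerank_and_expand(retrieved: list[int], document_chunks: list[str], relevance: dict[int, float],
                      rerank_chunks_max: int, expand_chunks_radius: int) -> list[str]:
    # `retrieved` holds chunk indices into `document_chunks`;
    # `relevance` maps each index to a reranker score.
    top = sorted(retrieved, key=lambda i: relevance[i], reverse=True)[:rerank_chunks_max]
    expanded = []
    for index in top:
        lo = max(0, index - expand_chunks_radius)
        hi = min(len(document_chunks), index + expand_chunks_radius + 1)
        expanded.append("\n".join(document_chunks[lo:hi]))
    return expanded
```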
Archive Agent answers your question using the reranked and expanded chunks like this:
- The LLM receives the chunks as context to the question.
- The LLM's answer is returned as structured output and formatted.
Good to know: Archive Agent uses an answer template that aims to be universally helpful.
Archive Agent uses patterns to select your files:
- Patterns can be actual file paths.
- Patterns can be paths containing wildcards that resolve to actual file paths.
- Patterns must be specified as (or resolve to) absolute paths, e.g. `/home/user/Documents/*.txt` (or `~/Documents/*.txt`).
- Use the wildcard `*` to match any file in the given directory.
- Use the wildcard `**` to match any files and zero or more directories, subdirectories, and symbolic links to directories.
There are included patterns and excluded patterns:
- The set of resolved excluded files is removed from the set of resolved included files.
- Only the remaining set of files (included but not excluded) is tracked by Archive Agent.
- Hidden files are always ignored!
This approach gives you the best control over the specific files or file types to track.
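A minimal Python sketch of this selection logic (function and variable names are illustrative, not Archive Agent's actual code):

```python
import glob
from pathlib import Path

# Hypothetical sketch: resolve included/excluded patterns, keep the difference,
# and always drop hidden files (any dot-prefixed path component).
def resolve_tracked_files(included_patterns: list[str], excluded_patterns: list[str]) -> list[Path]:
    def resolve(patterns: list[str]) -> set[Path]:
        files: set[Path] = set()
        for pattern in patterns:
            expanded = str(Path(pattern).expanduser())  # "~" -> absolute path
            files |= {Path(p) for p in glob.glob(expanded, recursive=True) if Path(p).is_file()}
        return files

    kept = resolve(included_patterns) - resolve(excluded_patterns)
    return sorted(f for f in kept if not any(part.startswith(".") for part in f.parts))
```

For example, `resolve_tracked_files(["~/Documents/**"], ["~/Documents/*.tmp"])` would keep everything under `~/Documents/` except `.tmp` files and hidden files.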
Good to know: At startup, you will be prompted to choose the following:
- Profile name
- AI provider (see AI Provider Setup)
- OCR strategy (see OCR strategies)
For example, to track your documents and images, run this:
archive-agent include "~/Documents/**" "~/Images/**"
archive-agent update
To start the GUI, run this:
archive-agent
Or, to ask questions from the command line:
archive-agent query "Which files mention donuts?"
To show the list of supported commands, run this:
archive-agent
To switch to a new or existing profile, run this:
archive-agent switch "My Other Profile"
Note: Always use quotes for the profile name argument, or skip it to get an interactive prompt.

Good to know: Profiles are useful to manage independent Qdrant collections (see Qdrant database) and Archive Agent settings.
To open the current profile's config (JSON) in the `nano` editor, run this:
archive-agent config
See Archive Agent settings for details.
To add one or more included patterns, run this:
archive-agent include "~/Documents/*.txt"
Note: Always use quotes for the pattern argument (to prevent your shell's wildcard expansion), or skip it to get an interactive prompt.
To add one or more excluded patterns, run this:
archive-agent exclude "~/Documents/*.txt"
Note: Always use quotes for the pattern argument (to prevent your shell's wildcard expansion), or skip it to get an interactive prompt.
To remove one or more previously included / excluded patterns, run this:
archive-agent remove "~/Documents/*.txt"
Note: Always use quotes for the pattern argument (to prevent your shell's wildcard expansion), or skip it to get an interactive prompt.
To show the list of included / excluded patterns, run this:
archive-agent patterns
To resolve all patterns and track changes to your files, run this:
archive-agent track
To show the list of tracked files, run this:
archive-agent list
Note: Don't forget to `track` your files first.
To show the list of changed files, run this:
archive-agent diff
Note: Don't forget to `track` your files first.
To sync changes to your files with the Qdrant database, run this:
archive-agent commit
To see additional information on chunking and embedding, pass the `--verbose` option.
To bypass the AI cache (vision, chunking, embedding) for this commit, pass the `--nocache` option.
Good to know: Changes are triggered by:
- File added
- File removed
- File changed:
  - Different file size
  - Different modification date
Note: Don't forget to `track` your files first.

To `track` and then `commit` in one go, run this:
archive-agent update
To see additional information on chunking and embedding, pass the `--verbose` option.
To bypass the AI cache (vision, chunking, embedding) for this commit, pass the `--nocache` option.
archive-agent search "Which files mention donuts?"
Lists files relevant to the question.
Note: Always use quotes for the question argument, or skip it to get an interactive prompt.
To see additional information on embedding, retrieval, reranking, and querying, pass the `--verbose` option.
To bypass the AI cache (embedding, reranking) for this search, pass the `--nocache` option.
archive-agent query "Which files mention donuts?"
Answers your question using RAG.
Note: Always use quotes for the question argument, or skip it to get an interactive prompt.
To see additional information on embedding, retrieval, reranking, and querying, pass the `--verbose` option.
To bypass the AI cache (embedding, reranking) for this query, pass the `--nocache` option.
To launch the Archive Agent GUI in your browser, run this:
archive-agent gui
Note: Press `CTRL+C` in the console to close the GUI server.
To start the Archive Agent MCP server, run this:
archive-agent mcp
Note: Press `CTRL+C` in the console to close the MCP server.

Good to know: Use these MCP configurations to let your IDE or AI extension automate Archive Agent:
- `.vscode/mcp.json` for GitHub Copilot agent mode (VS Code)
- `.roo/mcp.json` for Roo Code (VS Code extension)
Archive Agent exposes these tools via MCP:
| MCP tool | Equivalent CLI command(s) | Argument(s) | Description |
|---|---|---|---|
| `get_patterns` | `patterns` | None | Get the list of included / excluded patterns. |
| `get_files_tracked` | `track` and then `list` | None | Get the list of tracked files. |
| `get_files_changed` | `track` and then `diff` | None | Get the list of changed files. |
| `get_search_result` | `search` | `question` | Get the list of files relevant to the question. |
| `get_answer_rag` | `query` | `question` | Get answer to question using RAG. |

Note: These commands are read-only, preventing the AI from changing your Qdrant database.

Good to know: Just type `#get_answer_rag` (e.g.) in your IDE or AI extension to call the tool directly.
This step is not needed right away if you just installed Archive Agent. However, to get the latest features, you should update your installation regularly.
To update your Archive Agent installation, run this in the installation directory:
./update.sh
Note: If updating doesn't work, try removing the installation directory and then Install Archive Agent again. Your config and data are safely stored in another place; see Archive Agent settings and Qdrant database for details.

Good to know: To also update the Qdrant Docker image, run this:
sudo ./manage-qdrant.sh update
Archive Agent settings are organized as profile folders in `~/.archive-agent-settings/`.
E.g., the `default` profile is located in `~/.archive-agent-settings/default/`.
The currently used profile is stored in `~/.archive-agent-settings/profile.json`.

Note: To delete a profile, simply delete the profile folder. This will not delete the Qdrant collection (see Qdrant database).

The profile configuration is contained in the profile folder as `config.json`.

Good to know: Use the `config` CLI command to open the current profile's config (JSON) in the `nano` editor (see Open current profile config in nano).

Good to know: Use the `switch` CLI command to switch to a new or existing profile (see Create or switch profile).
| Key | Description |
|---|---|
| `config_version` | Config version |
| `mcp_server_port` | MCP server port (default `8008`) |
| `ocr_strategy` | OCR strategy in `DecoderSettings.py` |
| `ocr_auto_threshold` | Minimum number of characters for the `auto` OCR strategy to resolve to `relaxed` instead of `strict` |
| `image_entity_extract` | Image handling: `true` uses entity extraction, `false` uses OCR |
| `chunk_lines_block` | Number of lines per block for chunking |
| `qdrant_server_url` | URL of the Qdrant server |
| `qdrant_collection` | Name of the Qdrant collection |
| `retrieve_score_min` | Minimum similarity score of retrieved chunks (`0`...`1`) |
| `retrieve_chunks_max` | Maximum number of retrieved chunks |
| `rerank_chunks_max` | Number of top chunks to keep after reranking |
| `expand_chunks_radius` | Number of preceding and following chunks to prepend and append to each reranked chunk |
| `ai_provider` | AI provider in `ai_provider_registry.py` |
| `ai_server_url` | AI server URL |
| `ai_model_chunk` | AI model used for chunking |
| `ai_model_embed` | AI model used for embedding |
| `ai_model_rerank` | AI model used for reranking |
| `ai_model_query` | AI model used for queries |
| `ai_model_vision` | AI model used for vision (`""` disables vision) |
| `ai_vector_size` | Vector size of embeddings (used for the Qdrant collection) |
| `ai_temperature_query` | Temperature of the query model |
The profile watchlist is contained in the profile folder as `watchlist.json`.
The watchlist is managed by these commands only:
- `include` / `exclude` / `remove`
- `track` / `commit` / `update`
Each profile folder also contains an `ai_cache` folder.
The AI cache ensures that, in a given profile:
- The same image is only OCR-ed once.
- The same text is only chunked once.
- The same text is only embedded once.
- The same combination of chunks is only reranked once.
This way, Archive Agent can quickly resume where it left off if a commit was interrupted.
To bypass the AI cache for a single commit, pass the `--nocache` option to the `commit` or `update` command (see Commit changed files to database and Combined track and commit).

Good to know: Queries are never cached, so you always get a fresh answer.

Note: To clear the entire AI cache, simply delete the profile's cache folder.

Technical Note: Archive Agent keys the cache using a composite hash made from the text/image bytes and the AI model names for chunking, embedding, reranking, and vision. Cache keys are deterministic and change whenever you change the chunking, embedding, or vision AI model names. Since cache entries are retained forever, switching back to a prior combination of AI model names will again access the "old" keys.
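For illustration, a composite cache key of that shape could be computed roughly as follows (a minimal sketch; the exact hashing scheme is an assumption, not the actual implementation):

```python
import hashlib

# Hypothetical sketch: deterministic composite cache key derived from the
# text/image bytes and the AI model names involved in processing.
def cache_key(content: bytes, model_chunk: str, model_embed: str,
              model_rerank: str, model_vision: str) -> str:
    hasher = hashlib.sha256()
    hasher.update(content)
    for model_name in (model_chunk, model_embed, model_rerank, model_vision):
        hasher.update(model_name.encode("utf-8"))
    return hasher.hexdigest()
```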
The Qdrant database is stored in `~/.archive-agent-qdrant-storage/`.

Note: This folder is created by the Qdrant Docker image running as root.

Good to know: Visit your Qdrant dashboard to manage collections and snapshots.
Archive Agent was written from scratch for educational purposes (on either end of the software).
Good to know: Tracking the `test_data/` folder gets you started with some test data.
To get started, check out these epic modules:
- Files are processed in `archive_agent/data/FileData.py`
- The app context is initialized in `archive_agent/core/ContextManager.py`
- The default config is defined in `archive_agent/config/ConfigManager.py`
- The CLI commands are defined in `archive_agent/__main__.py`
- The commit logic is implemented in `archive_agent/core/CommitManager.py`
- The CLI verbosity is handled in `archive_agent/util/CliManager.py`
- The GUI is implemented in `archive_agent/core/GuiManager.py`
- The AI API prompts for chunking, embedding, vision, and querying are defined in `archive_agent/ai/AiManager.py`
- The AI provider registry is located in `archive_agent/ai_provider/ai_provider_registry.py`
If you miss something or spot bad patterns, feel free to contribute and refactor!
To run unit tests, check types, and check style, run this:
./audit.sh
- While `track` initially reports a file as added, subsequent `track` calls report it as changed.
- Removing and restoring a tracked file in the tracking phase is currently not handled properly:
  - Removing a tracked file sets `{size=0, mtime=0, diff=removed}`.
  - Restoring a tracked file sets `{size=X, mtime=Y, diff=added}`.
  - Because `size` and `mtime` were cleared, we lost the information needed to detect a restored file.
- AI vision is employed on empty images as well, even though they could easily be detected locally and skipped.
- PDF vector images may not convert as expected, due to missing tests. (Using the `strict` OCR strategy would certainly help in the meantime.)
- Binary document page numbers (e.g., `.docx`) are not supported yet.
- References are only approximate due to paragraph/sentence splitting/joining in the chunking process.
- The AI cache does not handle `AiResult` schema migration yet. (If you encounter errors, passing the `--nocache` flag or deleting all AI cache folders would certainly help in the meantime.)
- Rejected images (e.g., due to an OpenAI content filter policy violation) from PDF pages in `strict` OCR mode are currently left empty instead of falling back to text extracted from the PDF OCR layer (if any).
- The spaCy model `en_core_web_md` used for sentence splitting is only suitable for English source text. Multilingual support is missing at the moment.
- HTML document images are not supported.
Copyright Β© 2025 Dr.-Ing. Paul Wilhelm <[email protected]>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
See LICENSE for details.