A Model Context Protocol (MCP) server that provides automated search monitoring functionality with x402 payment integration. Monitor search queries, detect new content using AI-powered duplicate detection, and automatically process payments for task creation.
The system consists of multiple components working together:
- `src/api/task_manager_api.py` - FastAPI service with x402 payment middleware
- `src/api/task_manager_client.py` - HTTP client with x402 payment support
- `src/mcp/searcher_mcp.py` - MCP server exposing tools to Claude Desktop
- `src/core/search_engine.py` - Web search execution and AI comparison
- `src/core/task_manager.py` - Database operations and task management
- `src/core/models.py` - SQLAlchemy database models
- x402 Protocol - Automatic payments for task creation ($1.00 per task)
- Payment Address - `0x671cE47E4F38051ba3A990Ba306E2885C2Fe4102`
- Network - base-sepolia
- Free Endpoints - All operations except task creation
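A quick sanity check on the configured payment address can catch copy-paste errors before any payment is attempted. This is a minimal sketch (not part of the project's code) that validates the basic hex shape of an address; it does not verify the EIP-55 checksum:

```python
import re

# "0x" followed by exactly 40 hexadecimal characters.
# This checks only the shape of the string, not the EIP-55 checksum.
ADDRESS_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")

def looks_like_eth_address(address: str) -> bool:
    """Return True if the string has the shape of an Ethereum address."""
    return bool(ADDRESS_RE.match(address))

print(looks_like_eth_address("0x671cE47E4F38051ba3A990Ba306E2885C2Fe4102"))  # True
print(looks_like_eth_address("not-an-address"))                              # False
```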
```bash
cd /path/to/MCP_Server
uv sync
```
Copy the example environment file and configure it:

```bash
cp .env.example .env
```

Edit `.env` with your configuration:
```bash
# x402 Payment Configuration
PAY_TO_ADDRESS=0x671cE47E4F38051ba3A990Ba306E2885C2Fe4102
PRIVATE_KEY=your_ethereum_private_key_here

# API Configuration
TASK_MANAGER_API_URL=http://localhost:8000

# OpenAI Configuration
OPENAI_API_KEY=your_openai_api_key_here

# Database Configuration (optional)
DATABASE_URL=sqlite:///timelooker.db
```
- AWS CLI configured with appropriate permissions
- AWS CDK installed: `npm install -g aws-cdk`
- Python dependencies: `uv sync`
```bash
# Deploy AWS infrastructure (RDS, Lambda roles, SES, etc.)
python scripts/deploy_infrastructure.py
```
This creates:
- PostgreSQL RDS database (db.t3.micro)
- S3 bucket for email templates
- AWS Secrets Manager for API keys
- IAM roles for Lambda execution
- SES email identity
After deployment, update the secrets in AWS Secrets Manager:
```bash
# Update OpenAI API key
aws secretsmanager update-secret \
  --secret-id "timelooker/openai/api-key" \
  --secret-string '{"api_key":"your_openai_key_here"}'

# Update X402 private key
aws secretsmanager update-secret \
  --secret-id "timelooker/x402/private-key" \
  --secret-string '{"private_key":"your_private_key_here"}'
```
Go to AWS Console > SES > Verified identities and verify your sender email address.
The deployment script creates `.env.aws` with the infrastructure details. Update it with your values:

```bash
cp .env.aws .env
# Edit .env to add your private key and sender email
```
Verify that the system can retrieve secrets from AWS:
```bash
# Test secrets retrieval
python scripts/test_secrets.py
```
This will show whether the system can automatically retrieve database credentials, API keys, and other secrets from AWS Secrets Manager.
```bash
# Basic initialization
python scripts/init_db.py

# Initialize with sample data
python scripts/init_db.py --sample

# Check schema version
python scripts/init_db.py --version

# Validate database integrity
python scripts/init_db.py --validate

# Run database migrations
python scripts/init_db.py --migrate

# Reset database (drop and recreate)
python scripts/init_db.py --reset

# Reset and create sample data
python scripts/init_db.py --reset --sample
```
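The flags above compose (e.g. `--reset --sample`). A command-line interface with this shape can be sketched with `argparse`; the handlers are placeholders, not the real `init_db.py` logic:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Mirror the command-line interface shown above."""
    parser = argparse.ArgumentParser(
        description="Initialize and maintain the task database")
    parser.add_argument("--sample", action="store_true", help="create sample data")
    parser.add_argument("--version", action="store_true", help="print schema version")
    parser.add_argument("--validate", action="store_true", help="validate database integrity")
    parser.add_argument("--migrate", action="store_true", help="run pending migrations")
    parser.add_argument("--reset", action="store_true", help="drop and recreate all tables")
    return parser

# Flags are independent booleans, so combinations like --reset --sample just work.
args = build_parser().parse_args(["--reset", "--sample"])
print(args.reset, args.sample, args.migrate)  # True True False
```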
```bash
python run_api_server.py
# or
uv run run_api_server.py
```
Edit your Claude Desktop configuration file:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
Choose the appropriate configuration based on your deployment mode:
```bash
# Local mode
cp claude_desktop_config_local.json ~/Library/Application\ Support/Claude/claude_desktop_config.json

# Cloud mode
cp claude_desktop_config_cloud.json ~/Library/Application\ Support/Claude/claude_desktop_config.json
# Then edit the file to replace YOUR_ACCOUNT and endpoint values with actual values from deployment
```
You can also manually copy the contents from `claude_config_example.json` and add your values.

Important: Replace `/absolute/path/to/MCP_Server` with your actual absolute path!
Completely restart Claude Desktop after editing the configuration.
The MCP server provides 6 powerful tools for Claude Desktop:
`create_search_task` (Requires Payment) - Create automated monitoring tasks for any search query

- Payment: $1.00 per task creation
- Parameters: query, frequency (minimum 1 minute), email, runtime, sender email
- Cloud Mode: Automatically deploys Lambda function + EventBridge schedule
- Local Mode: Creates task in database for manual execution
- Example: "Create a search task to monitor AI Ethics job postings every hour for the next 3 days"
Run a search for an existing task and get new results
- Parameters: task_id
- Example: "Execute search for task 1"
View all your active monitoring tasks
- Example: "Show me all my search tasks"
Get detailed status and execution history for a task
- Parameters: task_id, number of recent executions to show
- Example: "What's the status of task 2?"
Deactivate a monitoring task
- Parameters: task_id
- Example: "Delete task 3"
Preview search results without creating a task
- Parameters: query, max_results
- Example: "Preview search results for 'Python developer remote jobs'"
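Internally, the parameters of `create_search_task` map naturally onto a small validated record (enforcing the 1-minute minimum frequency noted above). This dataclass is illustrative only, not the project's actual model:

```python
from dataclasses import dataclass

MIN_FREQUENCY_MINUTES = 1  # create_search_task enforces a 1-minute minimum

@dataclass
class SearchTaskRequest:
    query: str
    frequency_minutes: int
    email: str
    runtime_minutes: int
    sender_email: str

    def __post_init__(self) -> None:
        # Validate at construction time so bad requests never reach the database.
        if not self.query.strip():
            raise ValueError("query must not be empty")
        if self.frequency_minutes < MIN_FREQUENCY_MINUTES:
            raise ValueError(
                f"frequency must be at least {MIN_FREQUENCY_MINUTES} minute(s)")
        if self.runtime_minutes < self.frequency_minutes:
            raise ValueError("runtime must be at least one frequency interval")

# An hourly task monitored for 3 days:
req = SearchTaskRequest("AI Ethics job postings", 60, "me@example.com", 3 * 24 * 60, "bot@example.com")
print(req.frequency_minutes)  # 60
```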
Once configured, you can use natural language in Claude Desktop:
- "Create a search task to monitor new iPhone releases every 2 hours for the next week"
- "Set up monitoring for 'remote Python jobs' checking every 5 minutes for 1 hour"
- "Monitor AI safety research papers, check daily for 2 weeks"
- "Show me all my active search tasks"
- "What's the status of my first search task?"
- "Execute a search for task 2"
- "Delete the iPhone monitoring task"
- "Preview search results for 'Machine Learning conferences 2025'"
- "Show me what results I'd get for monitoring crypto news"
- Uses OpenAI to identify genuinely new items vs. formatting variations
- Significantly reduces false notifications
- Considers content similarity, company, and location
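The actual comparison is delegated to OpenAI, but the idea can be illustrated with a purely local fallback: normalize each result before comparing, so trivial formatting variations don't register as new items. A sketch, not the project's implementation:

```python
def normalize(item: dict) -> tuple:
    """Collapse case and whitespace so formatting variations compare equal."""
    def clean(value: str) -> str:
        return " ".join(value.lower().split())
    return (clean(item.get("title", "")),
            clean(item.get("company", "")),
            clean(item.get("location", "")))

def find_new_items(previous: list[dict], current: list[dict]) -> list[dict]:
    """Return items in `current` that were not seen in `previous`."""
    seen = {normalize(item) for item in previous}
    return [item for item in current if normalize(item) not in seen]

old = [{"title": "ML Engineer", "company": "Acme", "location": "Remote"}]
new = [{"title": "ML  engineer", "company": "ACME", "location": "remote"},  # duplicate
       {"title": "AI Researcher", "company": "Beta", "location": "NYC"}]
print(find_new_items(old, new))  # only the AI Researcher item
```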
- Automatic payment processing for task creation
- Free access to all other operations
- Ethereum-based payments on base-sepolia network
Rich Search Results
- Finds items from multiple sources across the web
- Extracts titles, descriptions, URLs, locations, and more
- Structured data format for easy processing
- Any search query (jobs, products, news, research, etc.)
- Configurable frequency (1 minute to days)
- Customizable runtime periods (1 minute to weeks)
- Automatic notifications when new items are found
- Structured email format with all item details
- Configurable sender/recipient
- Complete execution history for each task
- Performance metrics and error tracking
- Easy status monitoring through Claude Desktop
- Automated schema migrations and version tracking
- Database integrity validation and orphan detection
- Consistent session management with automatic cleanup
- Comprehensive CLI tools for database operations
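Schema version tracking typically boils down to a single-row metadata table plus SQLite's built-in integrity check. A minimal stdlib sketch of the idea (the real project uses SQLAlchemy and its own migration scripts):

```python
import sqlite3

def get_schema_version(conn: sqlite3.Connection) -> int:
    """Read the stored schema version, creating the table on first use."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER NOT NULL)")
    row = conn.execute("SELECT version FROM schema_version").fetchone()
    return row[0] if row else 0

def set_schema_version(conn: sqlite3.Connection, version: int) -> None:
    """Record the current schema version (single-row table)."""
    conn.execute("DELETE FROM schema_version")
    conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
    conn.commit()

def validate(conn: sqlite3.Connection) -> bool:
    """Use SQLite's built-in integrity check."""
    return conn.execute("PRAGMA integrity_check").fetchone()[0] == "ok"

conn = sqlite3.connect(":memory:")
print(get_schema_version(conn))  # 0 (fresh database)
set_schema_version(conn, 3)
print(get_schema_version(conn), validate(conn))  # 3 True
```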
Deployment Options
- Setup: Use `.env` with local database (SQLite)
- Task Execution: Manual via MCP tools or scheduled scripts
- API Server: Run locally with `python run_api_server.py`
- MCP Server: Run locally with `python run_mcp_server.py`
- Database: SQLite file
- Payments: Still processed via x402
- Setup: Use `.env` with `DEPLOY_TO_CLOUD=true`
- Task Execution: Automatic via Lambda functions + EventBridge
- Infrastructure: RDS PostgreSQL, Lambda functions, SES, S3
- Scaling: Serverless, automatically handles multiple tasks
- Cost: ~$15-30/month for typical usage
- Benefits:
  - No manual task execution needed
  - Automatic cleanup after runtime expires
  - Professional email templates via SES
  - Scalable and fault-tolerant
- API Server: Local development with cloud database
- Task Creation: Creates Lambda functions for execution
- Best for: Development while using production infrastructure
```bash
# Check every 5 minutes, run eligible tasks
*/5 * * * * cd /path/to/MCP_Server && python scripts/run_scheduled_tasks.py
```

```bash
# Run continuously, checking every 60 seconds
python scripts/run_scheduled_tasks.py --daemon --interval 60
```
The FastAPI server provides these endpoints:
- `POST /tasks/` - Create task (requires payment)
- `GET /tasks/` - List active tasks
- `GET /tasks/{task_id}` - Get task details
- `DELETE /tasks/{task_id}` - Deactivate task
- `POST /executions/` - Create execution record
- `PUT /executions/{execution_id}` - Update execution
- `GET /tasks/{task_id}/should-run` - Check if task should run
- `POST /tasks/{task_id}/results` - Save search results
- `GET /tasks/{task_id}/results` - Get previous results
- `POST /tasks/{task_id}/notify` - Send email notification
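A thin client mostly needs to build these paths correctly. A sketch of URL construction for the endpoints above; the class and method names are illustrative, not the project's `task_manager_client.py` API:

```python
class EndpointBuilder:
    """Build URLs for the task manager API endpoints listed above."""

    def __init__(self, base_url: str = "http://localhost:8000"):
        self.base_url = base_url.rstrip("/")  # tolerate a trailing slash

    def tasks(self) -> str:
        return f"{self.base_url}/tasks/"

    def task(self, task_id: int) -> str:
        return f"{self.base_url}/tasks/{task_id}"

    def should_run(self, task_id: int) -> str:
        return f"{self.base_url}/tasks/{task_id}/should-run"

    def results(self, task_id: int) -> str:
        return f"{self.base_url}/tasks/{task_id}/results"

    def notify(self, task_id: int) -> str:
        return f"{self.base_url}/tasks/{task_id}/notify"

api = EndpointBuilder("http://localhost:8000/")
print(api.should_run(7))  # http://localhost:8000/tasks/7/should-run
```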
- Check Claude Desktop logs: `~/Library/Logs/Claude/mcp*.log`
- Verify the absolute path in your configuration
- Ensure all dependencies are installed
- Restart Claude Desktop completely
- Verify `PRIVATE_KEY` is set in `.env`
- Ensure you have funds on the base-sepolia network
- Check that the private key is valid (without the 0x prefix)
- Verify `OPENAI_API_KEY` or `ANTHROPIC_API_KEY` is set and valid
- Check internet connection
- Look at API server logs for detailed error messages
- Check if `timelooker.db` exists and is writable
- Run `python scripts/init_db.py --validate` to check database integrity
- Run `python scripts/init_db.py --reset` to reinitialize if corrupted
- Use `python scripts/init_db.py --version` to check schema version
- Verify SQLAlchemy connection string in environment variables
- Real-time Testing: 1-5 minutes (short duration)
- Breaking News: 5-30 minutes
- Job Postings: 1-6 hours
- Product Releases: 6-24 hours
- Research Papers: 1-7 days
Note: Very frequent checks are great for testing, but be mindful of OpenAI API costs for long-running tasks.
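The cost caveat is easy to quantify: the number of search executions (and hence AI comparison calls) is just runtime divided by frequency. A small, purely illustrative helper:

```python
def estimated_executions(frequency_minutes: int, runtime_minutes: int) -> int:
    """Roughly how many times a task will run over its lifetime."""
    if frequency_minutes <= 0:
        raise ValueError("frequency must be positive")
    return runtime_minutes // frequency_minutes

# A 1-week task checked every 2 hours:
print(estimated_executions(120, 7 * 24 * 60))  # 84
# A 1-hour test task checked every 5 minutes:
print(estimated_executions(5, 60))             # 12
```

Eighty-plus executions per week per task adds up quickly if each one makes an OpenAI call, which is why longer frequencies are recommended for long-running monitors.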
```
MCP_Server/
├── src/
│   ├── api/              # HTTP API layer
│   ├── core/             # Core business logic
│   └── mcp/              # MCP server interface
├── tests/                # All test files
├── scripts/              # Utility scripts
├── run_mcp_server.py     # MCP server entry point
└── run_api_server.py     # API server entry point
```
- Testing: `tests/test_search_quality.py`, `tests/quick_quality_test.py`, `tests/monitor_query_test.py`
- Lambda: `scripts/lambda_function.py`, `tests/test_lambda.py`
- Automation: `scripts/run_scheduled_tasks.py`
- Entry Points: `run_mcp_server.py`, `run_api_server.py`
The project includes a comprehensive test suite covering search quality, payment integration, and API functionality.
```bash
# Run all tests
python tests/run_all_tests.py

# Interactive test selection
python tests/run_quality_tests.py
```
Validate search result quality and duplicate detection:
```bash
python tests/quick_quality_test.py
python tests/test_search_quality.py
```
Test payment flows with mocked x402 client:
```bash
python tests/test_x402_integration.py
```
Tests include:
- TaskManager initialization with payment config
- Create task with payment flow (mocked)
- Free endpoints work without payment
- Missing private key handling
- Payment failure scenarios
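Mocking the payment client follows the standard `unittest.mock` pattern. A self-contained illustration of the approach; `create_task` and the payment-client interface here are stand-ins, not the project's real code:

```python
from unittest.mock import MagicMock

def create_task(payment_client, query: str) -> dict:
    """Stand-in for task creation: charge $1.00, then record the task."""
    receipt = payment_client.pay(amount_usd=1.00)
    if not receipt.get("settled"):
        raise RuntimeError("payment failed")
    return {"query": query, "payment_id": receipt["id"]}

# Happy path: the mock stands in for the real x402 client.
mock_client = MagicMock()
mock_client.pay.return_value = {"settled": True, "id": "pay_123"}
task = create_task(mock_client, "remote Python jobs")
mock_client.pay.assert_called_once_with(amount_usd=1.00)
print(task["payment_id"])  # pay_123

# Failure path: a declined payment should raise.
mock_client.pay.return_value = {"settled": False}
try:
    create_task(mock_client, "remote Python jobs")
except RuntimeError as exc:
    print(exc)  # payment failed
```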
Test FastAPI endpoints and database integration:
```bash
# Start API server first
python run_api_server.py

# In another terminal, run tests
python tests/test_api_integration.py
```
Tests include:
- Health endpoint functionality
- Free endpoints (GET requests)
- Payment-required endpoints (POST /tasks/)
- Database model operations
Test AWS Lambda compatibility:
```bash
python tests/test_lambda.py
```
For API integration tests, ensure you have:

- API server running on localhost:8000
- Valid environment variables in `.env`
- Database initialized with `python scripts/init_db.py`
```bash
# Required for payment tests
PRIVATE_KEY=your_test_private_key_here
PAY_TO_ADDRESS=0x671cE47E4F38051ba3A990Ba306E2885C2Fe4102
X402_NETWORK=base-sepolia

# Required for search tests
OPENAI_API_KEY=your_openai_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here

# Optional for testing
DATABASE_URL=sqlite:///test.db
LOG_LEVEL=INFO
```
The test suite validates:
- ✅ Search result quality and consistency
- ✅ AI-powered duplicate detection accuracy
- ✅ Payment integration flows
- ✅ API endpoint functionality
- ✅ Database operations
- ✅ Error handling and edge cases
This system provides powerful search monitoring capabilities with seamless payment integration through the x402 protocol!