ThreatHunt Analyst-Assist Agent - Integration Guide
Quick Reference
Files Created
Backend (10 files)
- backend/app/agents/core.py - ThreatHuntAgent class
- backend/app/agents/providers.py - LLM provider interface
- backend/app/agents/config.py - Agent configuration
- backend/app/agents/__init__.py - Module initialization
- backend/app/api/routes/agent.py - /api/agent/* endpoints
- backend/app/api/__init__.py - API module init
- backend/app/main.py - FastAPI application
- backend/app/__init__.py - App module init
- backend/requirements.txt - Python dependencies
- backend/run.py - Development server entry point
Frontend (9 files)
- frontend/src/components/AgentPanel.tsx - React chat component
- frontend/src/components/AgentPanel.css - Component styles
- frontend/src/utils/agentApi.ts - API communication
- frontend/src/App.tsx - Main application with agent
- frontend/src/App.css - Application styles
- frontend/src/index.tsx - React entry point
- frontend/public/index.html - HTML template
- frontend/package.json - npm dependencies
- frontend/tsconfig.json - TypeScript config
Docker & Config (5 files)
- Dockerfile.backend - Backend container
- Dockerfile.frontend - Frontend container
- docker-compose.yml - Full stack orchestration
- .env.example - Configuration template
- .gitignore - Version control exclusions
Documentation (3 files)
- AGENT_IMPLEMENTATION.md - Detailed technical guide
- IMPLEMENTATION_SUMMARY.md - High-level overview
- INTEGRATION_GUIDE.md - This file
Provider Configuration Quick Start
Option 1: Online (OpenAI) - Easiest
cp .env.example .env
# Edit .env:
THREAT_HUNT_AGENT_PROVIDER=online
THREAT_HUNT_ONLINE_API_KEY=sk-your-openai-key
THREAT_HUNT_ONLINE_MODEL=gpt-3.5-turbo
docker-compose up -d
# Access at http://localhost:3000
Option 2: Local Model (Ollama) - Best for Privacy
# Install Ollama and pull model
ollama pull mistral # or llama2, neural-chat, etc.
cp .env.example .env
# Edit .env:
THREAT_HUNT_AGENT_PROVIDER=local
THREAT_HUNT_LOCAL_MODEL_PATH=/path/to/model
# Update docker-compose.yml so the backend can reach Ollama on the host.
# Add to the backend service:
#   extra_hosts:
#     - "host.docker.internal:host-gateway"
# And set in .env:
#   THREAT_HUNT_AGENT_PROVIDER=local
#   THREAT_HUNT_LOCAL_MODEL_PATH=~/.ollama/models/
docker-compose up -d
Option 3: Internal Service - Enterprise
cp .env.example .env
# Edit .env:
THREAT_HUNT_AGENT_PROVIDER=networked
THREAT_HUNT_NETWORKED_ENDPOINT=http://your-inference-service:5000
THREAT_HUNT_NETWORKED_KEY=your-api-key
docker-compose up -d
Installation Steps
Prerequisites
- Docker & Docker Compose (recommended)
- OR Python 3.11 + Node.js 18 (local development)
Method 1: Docker (Recommended)
cd /path/to/ThreatHunt
# 1. Configure provider
cp .env.example .env
# Edit .env and set your LLM provider
# 2. Build and start
docker-compose up -d
# 3. Verify
curl http://localhost:8000/api/agent/health
curl http://localhost:3000
# 4. Access UI
open http://localhost:3000
Method 2: Local Development
Backend:
cd backend
# Create virtual environment
python -m venv venv
source venv/bin/activate # or venv\Scripts\activate on Windows
# Install dependencies
pip install -r requirements.txt
# Set provider (choose one)
export THREAT_HUNT_ONLINE_API_KEY=sk-your-key
# OR
export THREAT_HUNT_LOCAL_MODEL_PATH=/path/to/model
# OR
export THREAT_HUNT_NETWORKED_ENDPOINT=http://service:5000
# Run server
python run.py
# API at http://localhost:8000/docs
Frontend (new terminal):
cd frontend
# Install dependencies
npm install
# Start dev server
REACT_APP_API_URL=http://localhost:8000 npm start
# App at http://localhost:3000
Testing the Agent
1. Check Agent Health
curl http://localhost:8000/api/agent/health
# Expected response (if configured):
{
  "status": "healthy",
  "provider": "OnlineProvider",
  "max_tokens": 1024,
  "reasoning_enabled": true
}
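If you want to script this check rather than eyeball the JSON, a minimal sketch using only Python's standard library (it assumes the response shape shown above):

```python
import json
import urllib.request

def is_healthy(payload: dict) -> bool:
    """Interpret a /api/agent/health response body."""
    return payload.get("status") == "healthy"

def check_agent_health(base_url: str = "http://localhost:8000") -> bool:
    """Fetch the health endpoint and interpret the result."""
    with urllib.request.urlopen(f"{base_url}/api/agent/health", timeout=5) as resp:
        return is_healthy(json.load(resp))
```

Useful as a readiness probe in CI or a deploy script: exit non-zero when `check_agent_health()` returns False.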
2. Test API Directly
curl -X POST http://localhost:8000/api/agent/assist \
-H "Content-Type: application/json" \
  -d '{
    "query": "What file modifications are suspicious?",
    "dataset_name": "FileList",
    "artifact_type": "FileList",
    "host_identifier": "DESKTOP-TEST",
    "data_summary": "System file listing from scan"
  }'
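The same request can be issued from Python. This is an illustrative sketch, not part of the shipped code: the field names come from the API reference below, while the helper functions themselves are assumptions.

```python
import json
import urllib.request

def build_assist_payload(query: str, **context) -> dict:
    """Assemble a request body for POST /api/agent/assist.

    Optional context keys (from the API reference): dataset_name,
    artifact_type, host_identifier, data_summary, conversation_history.
    """
    allowed = {"dataset_name", "artifact_type", "host_identifier",
               "data_summary", "conversation_history"}
    unknown = set(context) - allowed
    if unknown:
        raise ValueError(f"Unknown fields: {sorted(unknown)}")
    return {"query": query, **context}

def ask_agent(payload: dict, base_url: str = "http://localhost:8000") -> dict:
    """POST the payload and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{base_url}/api/agent/assist",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

Validating field names client-side catches typos before they silently vanish into a 400 response.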
3. Test UI
- Open http://localhost:3000
- See sample data table
- Click "Ask" button at bottom right
- Type a question in the agent panel
- Verify response appears with suggestions
Deployment Checklist
- Configure LLM provider (env vars)
- Test agent health endpoint
- Test API with sample request
- Test frontend UI
- Configure CORS if frontend on different domain
- Add authentication (JWT/OAuth) for production
- Set up logging/monitoring
- Create backups of configuration
- Document provider credentials management
- Set up auto-scaling (if needed)
Monitoring & Troubleshooting
Check Logs
# Backend logs
docker-compose logs -f backend
# Frontend logs
docker-compose logs -f frontend
# Specific error
docker-compose logs backend | grep -i error
Common Issues
503 - Agent Unavailable
Cause: No LLM provider configured
Fix: Set THREAT_HUNT_ONLINE_API_KEY or other provider env var
CORS Error in Browser Console
Cause: Frontend and backend on different origins
Fix: Update REACT_APP_API_URL or add frontend domain to CORS
Slow Responses
Cause: LLM provider latency (especially online)
Options:
1. Use local provider instead
2. Reduce THREAT_HUNT_AGENT_MAX_TOKENS
3. Check network connectivity
Provider Not Found
Cause: Model path or endpoint doesn't exist
Fix: Verify the path/endpoint in .env, then confirm the provider loads:
docker-compose exec backend python -c "from app.agents import get_provider; get_provider()"
API Reference
POST /api/agent/assist
Request guidance on artifact data.
Request Body:
{
  query: string;                  // Analyst question
  dataset_name?: string;          // CSV dataset name
  artifact_type?: string;         // Artifact type
  host_identifier?: string;       // Host/IP identifier
  data_summary?: string;          // Context description
  conversation_history?: Array<{  // Previous messages
    role: string;
    content: string;
  }>;
}
Response:
{
  guidance: string;           // Advisory text
  confidence: number;         // 0.0 to 1.0
  suggested_pivots: string[]; // Analysis directions
  suggested_filters: string[]; // Data filters
  caveats?: string;           // Limitations
  reasoning?: string;         // Explanation
}
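On the client side, this response can be parsed into a typed object. A minimal sketch mirroring the schema above; the dataclass name and the range check on confidence are illustrative, not part of the shipped code:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AssistResponse:
    """Typed view of a /api/agent/assist response (illustrative)."""
    guidance: str
    confidence: float
    suggested_pivots: List[str] = field(default_factory=list)
    suggested_filters: List[str] = field(default_factory=list)
    caveats: Optional[str] = None
    reasoning: Optional[str] = None

    def __post_init__(self):
        # The schema documents confidence as 0.0 to 1.0; reject anything else.
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError("confidence must be between 0.0 and 1.0")

    @classmethod
    def from_json(cls, data: dict) -> "AssistResponse":
        return cls(**data)
```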
Status Codes:
- 200 - Success
- 400 - Bad request
- 503 - Service unavailable
GET /api/agent/health
Check agent availability and configuration.
Response:
{
  status: "healthy" | "unavailable" | "error";
  provider?: string;          // Provider class name
  max_tokens?: number;        // Max response length
  reasoning_enabled?: boolean;
  configured_providers?: {    // If unavailable
    local: boolean;
    networked: boolean;
    online: boolean;
  };
}
Security Notes
For Production
- Authentication: Add JWT token validation to endpoints

  from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials

  security = HTTPBearer()

  @router.post("/assist")
  async def assist(
      request: AssistRequest,
      credentials: HTTPAuthorizationCredentials = Depends(security),
  ):
      # Verify token before handling the request
      ...

- Rate Limiting: Install and use slowapi

  from slowapi import Limiter
  from slowapi.util import get_remote_address

  limiter = Limiter(key_func=get_remote_address)

  @limiter.limit("10/minute")
  async def assist(request: AssistRequest):
      ...

- HTTPS: Use a reverse proxy (nginx) with TLS

- Data Filtering: Filter sensitive data before sending it to the LLM

  # Remove IPs, usernames, hashes
  filtered = filter_sensitive(request.data_summary)

- Audit Logging: Log all agent requests

  logger.info(f"Agent: user={user_id} query={query} host={host}")
Configuration Reference
Agent Settings:
THREAT_HUNT_AGENT_PROVIDER # auto, local, networked, online
THREAT_HUNT_AGENT_MAX_TOKENS # Default: 1024
THREAT_HUNT_AGENT_REASONING # Default: true
THREAT_HUNT_AGENT_HISTORY_LENGTH # Default: 10
THREAT_HUNT_AGENT_FILTER_SENSITIVE # Default: true
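The variable names and defaults above can be read into a settings object along these lines. This is a sketch only; the shipped loader lives in backend/app/agents/config.py and may differ in shape:

```python
import os
from dataclasses import dataclass

def _env_bool(name: str, default: bool) -> bool:
    """Parse a boolean env var, accepting 1/true/yes (case-insensitive)."""
    return os.environ.get(name, str(default)).strip().lower() in ("1", "true", "yes")

@dataclass(frozen=True)
class AgentSettings:
    provider: str
    max_tokens: int
    reasoning: bool
    history_length: int
    filter_sensitive: bool

def load_agent_settings() -> AgentSettings:
    """Read agent settings from the environment, using the documented defaults."""
    env = os.environ
    return AgentSettings(
        provider=env.get("THREAT_HUNT_AGENT_PROVIDER", "auto"),
        max_tokens=int(env.get("THREAT_HUNT_AGENT_MAX_TOKENS", "1024")),
        reasoning=_env_bool("THREAT_HUNT_AGENT_REASONING", True),
        history_length=int(env.get("THREAT_HUNT_AGENT_HISTORY_LENGTH", "10")),
        filter_sensitive=_env_bool("THREAT_HUNT_AGENT_FILTER_SENSITIVE", True),
    )
```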
Provider: Local:
THREAT_HUNT_LOCAL_MODEL_PATH # Path to .gguf or other model
Provider: Networked:
THREAT_HUNT_NETWORKED_ENDPOINT # http://service:5000
THREAT_HUNT_NETWORKED_KEY # API key for service
Provider: Online:
THREAT_HUNT_ONLINE_API_KEY # Provider API key
THREAT_HUNT_ONLINE_PROVIDER # openai, anthropic, google, etc
THREAT_HUNT_ONLINE_MODEL # Model name (gpt-3.5-turbo, etc)
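When THREAT_HUNT_AGENT_PROVIDER is "auto", the backend falls back to whichever provider is configured. A sketch of that resolution; the local -> networked -> online precedence shown here is an assumption, not necessarily what backend/app/agents/config.py implements:

```python
import os
from typing import Optional

def resolve_provider(env: Optional[dict] = None) -> Optional[str]:
    """Pick a provider name ("local", "networked", or "online") from env vars.

    An explicit THREAT_HUNT_AGENT_PROVIDER wins; "auto" falls back to the
    first provider with credentials configured (precedence is assumed).
    """
    env = dict(os.environ) if env is None else env
    choice = env.get("THREAT_HUNT_AGENT_PROVIDER", "auto")
    if choice != "auto":
        return choice
    if env.get("THREAT_HUNT_LOCAL_MODEL_PATH"):
        return "local"
    if env.get("THREAT_HUNT_NETWORKED_ENDPOINT"):
        return "networked"
    if env.get("THREAT_HUNT_ONLINE_API_KEY"):
        return "online"
    return None  # nothing configured; health endpoint would report "unavailable"
```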
Architecture Decisions
Why Pluggable Providers?
- Deployment flexibility (cloud, on-prem, hybrid)
- Privacy control (local vs online)
- Cost optimization
- Vendor lock-in prevention
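The pluggable design boils down to a small common interface that every provider implements. The sketch below is illustrative only; the real interface lives in backend/app/agents/providers.py, and EchoProvider exists here purely to demonstrate the contract:

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Common interface for local, networked, and online providers (sketch)."""

    @abstractmethod
    def generate(self, prompt: str, max_tokens: int = 1024) -> str:
        """Return the model's completion for the prompt."""

class EchoProvider(LLMProvider):
    """Trivial stand-in: returns a truncated copy of the prompt."""

    def generate(self, prompt: str, max_tokens: int = 1024) -> str:
        return prompt[:max_tokens]
```

Because callers only depend on LLMProvider, swapping OpenAI for Ollama (or an internal service) is a configuration change, not a code change.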
Why Conversation History?
- Better context for follow-up questions
- Maintains thread of investigation
- Reduces redundant explanations
Why Read-Only?
- Safety: Agent cannot accidentally modify data
- Compliance: Adheres to governance requirements
- Trust: Humans retain control
Why Config-Based?
- No code changes for provider switching
- Easy environment-specific configuration
- CI/CD friendly
Next Steps
- Configure Provider: Set env vars for your chosen LLM
- Deploy: Use docker-compose or local development
- Test: Verify health endpoint and sample request
- Integrate: Add to your threat hunting workflow
- Monitor: Track agent usage and quality
- Iterate: Gather analyst feedback and improve
Support & Troubleshooting
See AGENT_IMPLEMENTATION.md for detailed troubleshooting.
Key support files:
- Backend logs: docker-compose logs backend
- Frontend console: Browser DevTools
- Health check: curl http://localhost:8000/api/agent/health
- API docs: http://localhost:8000/docs (when running)
References
- Governance: See goose-core/governance/AGENT_POLICY.md
- Intent: See THREATHUNT_INTENT.md
- Technical: See AGENT_IMPLEMENTATION.md
- FastAPI: https://fastapi.tiangolo.com
- React: https://react.dev
- Docker: https://docs.docker.com