Mirror of https://github.com/mblanke/ThreatHunt.git (synced 2026-03-01 05:50:21 -05:00)
Analyst-Assist Agent Implementation Summary
Completed Implementation
I've successfully implemented a full analyst-assist agent for ThreatHunt following all governance principles from goose-core.
What Was Built
Backend (Python/FastAPI)
✅ Agent Module (backend/app/agents/)
- core.py: ThreatHuntAgent class with guidance logic
- providers.py: Pluggable LLM provider interface (local, networked, online)
- config.py: Environment-based configuration management
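The pluggable interface in providers.py might look roughly like the sketch below; the class and method names here are hypothetical illustrations, not the actual module contents:

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Hypothetical base class for pluggable LLM backends."""

    @abstractmethod
    def is_available(self) -> bool:
        """Return True if this backend can currently serve requests."""

    @abstractmethod
    def generate(self, prompt: str, max_tokens: int = 1024) -> str:
        """Return advisory guidance text for the given prompt."""


class EchoProvider(LLMProvider):
    """Trivial stand-in provider, used here only to exercise the interface."""

    def is_available(self) -> bool:
        return True

    def generate(self, prompt: str, max_tokens: int = 1024) -> str:
        # Echoes the (truncated) prompt back with an advisory tag.
        return f"[advisory] {prompt[:max_tokens]}"
```

New backends (local GGUF models, internal gateways, hosted APIs) would subclass the base without touching agent code.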
✅ API Endpoint (backend/app/api/routes/agent.py)
- POST /api/agent/assist: Request guidance with context
- GET /api/agent/health: Check agent availability
✅ Application Structure
- main.py: FastAPI application with CORS
- requirements.txt: Dependencies (FastAPI, Uvicorn, Pydantic)
- run.py: Entry point for local development
Frontend (React/TypeScript)
✅ Agent Chat Component (frontend/src/components/AgentPanel.tsx)
- Chat-style interface for analyst questions
- Context display (dataset, host, artifact)
- Rich response formatting with pivots, filters, caveats
- Conversation history support
- Responsive design
✅ API Integration (frontend/src/utils/agentApi.ts)
- Type-safe request/response definitions
- Health check functionality
- Error handling
✅ Main Application
- App.tsx: Dashboard with agent panel in sidebar
- App.css: Responsive layout (desktop/mobile)
- index.tsx, index.html: React setup
✅ Configuration
- package.json: Dependencies (React 18, TypeScript)
- tsconfig.json: TypeScript configuration
Docker & Deployment
✅ Containerization
- Dockerfile.backend: Python 3.11 FastAPI container
- Dockerfile.frontend: Node 18 React production build
- docker-compose.yml: Full stack with networking
- .env.example: Configuration template
LLM Provider Architecture
Three Pluggable Providers
1. Local Provider
THREAT_HUNT_AGENT_PROVIDER=local
THREAT_HUNT_LOCAL_MODEL_PATH=/path/to/model.gguf
- On-device or on-prem models
- GGML, Ollama, vLLM, etc.
2. Networked Provider
THREAT_HUNT_AGENT_PROVIDER=networked
THREAT_HUNT_NETWORKED_ENDPOINT=http://service:5000
THREAT_HUNT_NETWORKED_KEY=api-key
- Shared internal inference services
- Enterprise inference gateways
3. Online Provider
THREAT_HUNT_AGENT_PROVIDER=online
THREAT_HUNT_ONLINE_API_KEY=sk-key
THREAT_HUNT_ONLINE_PROVIDER=openai
THREAT_HUNT_ONLINE_MODEL=gpt-3.5-turbo
- OpenAI, Anthropic, Google, etc.
Auto Selection
THREAT_HUNT_AGENT_PROVIDER=auto
- Tries: local → networked → online
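The auto fallback order can be sketched as follows; `pick_provider` and its `available` readiness map are hypothetical helpers illustrating the documented local → networked → online order:

```python
import os


def pick_provider(available: dict) -> str:
    """Resolve the provider named by THREAT_HUNT_AGENT_PROVIDER.

    With 'auto', fall back local -> networked -> online, returning the
    first backend whose readiness check passed. `available` maps a
    provider name to the result of that check.
    """
    choice = os.environ.get("THREAT_HUNT_AGENT_PROVIDER", "auto")
    if choice != "auto":
        # Explicit selection bypasses the fallback chain entirely.
        return choice
    for name in ("local", "networked", "online"):
        if available.get(name):
            return name
    raise RuntimeError("no LLM provider available")
```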
Governance Compliance
✅ Strict Policy Adherence
- No autonomous execution (agents advise only)
- No tool execution (read-only guidance)
- No database/schema changes
- No alert escalation
- Transparent reasoning with caveats
- Analyst retains all authority
✅ Follows AGENT_POLICY.md
- Agents guide, explain, suggest
- Agents do NOT execute, escalate, or modify data
- All output is advisory and attributable
✅ Follows THREATHUNT_INTENT.md
- Helps interpret artifact data
- Suggests analytical pivots and filters
- Highlights anomalies
- Assists in hypothesis formation
- Does NOT perform analysis independently
API Specifications
Request
POST /api/agent/assist
{
"query": "What patterns suggest suspicious activity?",
"dataset_name": "FileList-2025-12-26",
"artifact_type": "FileList",
"host_identifier": "DESKTOP-ABC123",
"data_summary": "File listing from system scan",
"conversation_history": []
}
Response
{
"guidance": "Based on the files listed, several patterns stand out...",
"confidence": 0.8,
"suggested_pivots": [
"Analyze temporal patterns",
"Cross-reference with IOCs",
"Check for known malware signatures"
],
"suggested_filters": [
"Filter by modification time > 2025-12-20",
"Sort by file size (largest first)",
"Filter by file extension: .exe, .dll, .ps1"
],
"caveats": "Guidance based on available data context. Verify with additional sources.",
"reasoning": "Analysis generated based on artifact data patterns."
}
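The request and response payloads above map naturally onto Pydantic models. A sketch with field names taken from the JSON examples; optionality, defaults, and the confidence bounds are assumptions, not the actual schema definitions:

```python
from typing import Optional
from pydantic import BaseModel, Field


class AssistRequest(BaseModel):
    query: str
    dataset_name: Optional[str] = None
    artifact_type: Optional[str] = None
    host_identifier: Optional[str] = None
    data_summary: Optional[str] = None
    conversation_history: list = Field(default_factory=list)


class AssistResponse(BaseModel):
    guidance: str
    confidence: float = Field(ge=0.0, le=1.0)  # assumed 0..1 range
    suggested_pivots: list = Field(default_factory=list)
    suggested_filters: list = Field(default_factory=list)
    caveats: str = ""
    reasoning: str = ""
```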
Frontend Features
✅ Chat Interface
- Analyst asks questions
- Agent provides guidance
- Message history with timestamps
✅ Context Awareness
- Displays current dataset, host, artifact
- Context automatically included in requests
- Conversation history for continuity
✅ Response Formatting
- Main guidance text
- Clickable suggested pivots
- Suggested data filters (code format)
- Confidence scores
- Caveats section
- Reasoning explanation
- Loading and error states
✅ Responsive Design
- Desktop: side-by-side layout
- Tablet: adjusted spacing
- Mobile: stacked layout
Quick Start
Development
Backend:
cd backend
pip install -r requirements.txt
python run.py
# API at http://localhost:8000
# Docs at http://localhost:8000/docs
Frontend:
cd frontend
npm install
npm start
# App at http://localhost:3000
Docker Deployment
# Copy and edit environment
cp .env.example .env
# Start full stack
docker-compose up -d
# Check health
curl http://localhost:8000/api/agent/health
curl http://localhost:3000
# View logs
docker-compose logs -f backend
docker-compose logs -f frontend
Environment Configuration
# Provider (auto, local, networked, online)
THREAT_HUNT_AGENT_PROVIDER=auto
# Local provider
THREAT_HUNT_LOCAL_MODEL_PATH=/models/model.gguf
# Networked provider
THREAT_HUNT_NETWORKED_ENDPOINT=http://inference:5000
THREAT_HUNT_NETWORKED_KEY=api-key
# Online provider (example: OpenAI)
THREAT_HUNT_ONLINE_API_KEY=sk-your-key
THREAT_HUNT_ONLINE_PROVIDER=openai
THREAT_HUNT_ONLINE_MODEL=gpt-3.5-turbo
# Agent behavior
THREAT_HUNT_AGENT_MAX_TOKENS=1024
THREAT_HUNT_AGENT_REASONING=true
THREAT_HUNT_AGENT_HISTORY_LENGTH=10
THREAT_HUNT_AGENT_FILTER_SENSITIVE=true
# Frontend
REACT_APP_API_URL=http://localhost:8000
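A sketch of how config.py might resolve these variables into a settings object; the `AgentSettings` dataclass is an illustrative shape, with defaults mirroring the values shown above:

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentSettings:
    """Agent behavior settings read from the environment (hypothetical shape)."""

    provider: str = "auto"
    max_tokens: int = 1024
    reasoning: bool = True
    history_length: int = 10
    filter_sensitive: bool = True

    @classmethod
    def from_env(cls) -> "AgentSettings":
        env = os.environ.get
        return cls(
            provider=env("THREAT_HUNT_AGENT_PROVIDER", "auto"),
            max_tokens=int(env("THREAT_HUNT_AGENT_MAX_TOKENS", "1024")),
            reasoning=env("THREAT_HUNT_AGENT_REASONING", "true").lower() == "true",
            history_length=int(env("THREAT_HUNT_AGENT_HISTORY_LENGTH", "10")),
            filter_sensitive=env("THREAT_HUNT_AGENT_FILTER_SENSITIVE", "true").lower() == "true",
        )
```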
File Structure
ThreatHunt/
├── backend/
│ ├── app/
│ │ ├── agents/
│ │ │ ├── __init__.py
│ │ │ ├── core.py
│ │ │ ├── providers.py
│ │ │ └── config.py
│ │ ├── api/routes/
│ │ │ ├── __init__.py
│ │ │ └── agent.py
│ │ ├── __init__.py
│ │ └── main.py
│ ├── requirements.txt
│ └── run.py
├── frontend/
│ ├── src/
│ │ ├── components/
│ │ │ ├── AgentPanel.tsx
│ │ │ └── AgentPanel.css
│ │ ├── utils/
│ │ │ └── agentApi.ts
│ │ ├── App.tsx
│ │ ├── App.css
│ │ └── index.tsx
│ ├── public/
│ │ └── index.html
│ ├── package.json
│ └── tsconfig.json
├── Dockerfile.backend
├── Dockerfile.frontend
├── docker-compose.yml
├── .env.example
├── .gitignore
├── AGENT_IMPLEMENTATION.md
├── README.md
├── ROADMAP.md
└── THREATHUNT_INTENT.md
Key Design Decisions
- Pluggable Providers: Support multiple LLM backends without changing application code
- Auto-Detection: Smart provider selection for deployment flexibility
- Context-Aware: Agent requests include dataset, host, and artifact context
- Read-Only: Hard constraints prevent agent from executing, modifying, or escalating
- Advisory UI: Frontend emphasizes guidance-only nature with caveats and disclaimers
- Conversation History: Maintains context across multiple analyst queries
- Error Handling: Graceful degradation if LLM provider unavailable
- Containerized: Full Docker support for easy deployment and scaling
Next Steps / Future Enhancements
- Integration Testing: Add pytest/vitest test suites
- Authentication: Add JWT/OAuth to API endpoints
- Rate Limiting: Implement request throttling
- Structured Output: Use LLM JSON mode or function calling
- Data Filtering: Auto-filter sensitive data before LLM
- Caching: Cache common agent responses
- Feedback Loop: Capture guidance quality feedback from analysts
- Audit Trail: Comprehensive logging and compliance reporting
- Fine-tuning: Custom models for cybersecurity domain
- Performance: Optimize latency and throughput
Governance References
This implementation fully complies with:
- ✅ goose-core/governance/AGENT_POLICY.md
- ✅ goose-core/governance/AI_RULES.md
- ✅ goose-core/governance/SCOPE.md
- ✅ goose-core/governance/ALERT_POLICY.md
- ✅ goose-core/contracts/finding.json
- ✅ ThreatHunt/THREATHUNT_INTENT.md
Core Principle: Agents assist analysts, never act autonomously.