Analytics & Insights
Monitor mock simulation usage, performance, and patterns with comprehensive analytics dashboards. Track request patterns, consumer behavior, error rates, latency distribution, and AI-powered insights.
Prerequisites
- An existing mock simulation with traffic
- Active request history (data to analyze)
- Basic understanding of HTTP metrics
Analytics Dashboard Overview
The analytics dashboard provides multiple views of simulation traffic:
Key Metrics
- Total Requests - Number of requests in the selected time period
- Error Rate - Percentage of failed requests
- Success Rate - Percentage of successful requests
- Average Latency - Mean response time
- P95 Latency - 95th percentile response time
- Unique Consumers - Number of different clients
- Top Routes - Most frequently accessed endpoints
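Conceptually, these metrics are simple aggregations over raw request records. A minimal sketch in TypeScript (the `RequestRecord` shape and field names are illustrative assumptions, not the product's actual schema):

```typescript
// Illustrative request record; field names are assumptions, not the real schema.
interface RequestRecord {
  route: string;      // e.g. "POST /api/payments"
  consumer: string;   // anonymized consumer fingerprint
  status: number;     // HTTP status code
  latencyMs: number;  // response time in milliseconds
}

interface Summary {
  totalRequests: number;
  successRate: number;   // fraction of non-error responses
  errorRate: number;     // fraction of 4xx/5xx responses
  avgLatencyMs: number;
  p95LatencyMs: number;
  uniqueConsumers: number;
}

function summarize(records: RequestRecord[]): Summary {
  const total = records.length;
  const errors = records.filter(r => r.status >= 400).length;
  const latencies = records.map(r => r.latencyMs).sort((a, b) => a - b);
  // Nearest-rank P95: the smallest latency >= 95% of all observations.
  const p95Index = Math.ceil(0.95 * total) - 1;
  return {
    totalRequests: total,
    successRate: (total - errors) / total,
    errorRate: errors / total,
    avgLatencyMs: latencies.reduce((s, v) => s + v, 0) / total,
    p95LatencyMs: latencies[p95Index],
    uniqueConsumers: new Set(records.map(r => r.consumer)).size,
  };
}
```

Success rate and error rate always sum to 100%, so the dashboard only needs to compute one of them.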
Time Ranges
Select analysis period:
- 1 Day (1d) - Hourly buckets
- 7 Days (7d) - Hourly buckets
- 30 Days (30d) - Daily buckets
- 90 Days (90d) - Daily buckets
Accessing Analytics
Simulation Analytics
- Open mock simulation
- Click Analytics tab
- Select time range
- View overview and metrics
Company-Wide Analytics
View aggregated metrics across all simulations:
- Go to dashboard
- Click Analytics Overview or Company Analytics
- See cross-simulation statistics
- Drill down to specific simulations
Analytics Views
Overview
Summary of key metrics:
Simulation: Payment Service (24h)
├─ Total Requests: 2,545
├─ Success Rate: 96.2%
├─ Error Rate: 3.8%
├─ Avg Latency: 125ms
├─ P95 Latency: 380ms
├─ Unique Consumers: 18
└─ Top Route: POST /api/payments (485 requests)
Click any metric to drill down for details.
Route Heatmap
Visualize request distribution across routes and time:
Heatmap Display:
- Y-axis: Routes (methods + paths)
- X-axis: Time buckets (hours or days)
- Color intensity: Request volume
Using Heatmap:
- View which routes are busiest
- Identify peak usage times
- Spot unused routes (no color)
- Find sudden spikes (bright spots)
- Click cell to drill into route traffic
Example:
GET /api/users ████████████ (high traffic)
POST /api/users ████████ (medium traffic)
PUT /api/users/:id ███ (low traffic)
DELETE /api/users █ (very low traffic)
9am 10am 11am 12pm 1pm 2pm 3pm
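Under the hood, a heatmap like this is just a request count per (route, time-bucket) cell. A minimal bucketing sketch, assuming ISO 8601 timestamps (the actual aggregation pipeline may differ):

```typescript
// Count requests per (route, hour) cell; timestamps are assumed ISO 8601.
function heatmapCells(
  requests: { route: string; time: string }[]
): Map<string, number> {
  const cells = new Map<string, number>();
  for (const r of requests) {
    // Truncate the timestamp to the hour to form the time bucket.
    const hour = r.time.slice(0, 13); // "2026-01-15T09"
    const key = `${r.route} @ ${hour}`;
    cells.set(key, (cells.get(key) ?? 0) + 1);
  }
  return cells;
}
```

For 30- and 90-day ranges the same idea applies with daily buckets (truncate to the date instead of the hour).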
Consumer Tracking
Track unique consumers by fingerprint:
Consumer Fingerprint = IP address + User-Agent hash
View consumers:
- Click Consumers in analytics
- See table with:
- Consumer fingerprint (anonymized)
- Request count
- First seen
- Last seen
- Top routes
- HTTP methods used
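The fingerprint formula above can be sketched as a hash over the IP address and User-Agent. The exact scheme (hash algorithm, salting, truncation) is an assumption here, not the product's documented implementation:

```typescript
import { createHash } from "node:crypto";

// Anonymized consumer fingerprint: SHA-256 over IP + User-Agent,
// truncated for display. Algorithm and truncation length are illustrative.
function consumerFingerprint(ip: string, userAgent: string): string {
  return createHash("sha256")
    .update(`${ip}|${userAgent}`)
    .digest("hex")
    .slice(0, 16);
}
```

Because the hash is deterministic, the same client produces the same fingerprint across requests, while the raw IP and User-Agent are never displayed.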
Use For:
- Identify test clients vs. users
- Track integration partners
- Monitor malicious clients
- Quota tracking per consumer
Error Patterns
Analyze errors by route and status code:
Error Analysis:
- Click Errors view
- See errors grouped by:
- Route (method + path)
- HTTP status code (400, 401, 403, 404, 500, etc.)
- Error count
Example:
POST /api/auth/login
├─ 401 Unauthorized: 23 errors
│ └─ Invalid credentials (from logs)
├─ 429 Too Many Requests: 5 errors
│ └─ Rate limit exceeded
└─ 500 Internal Server Error: 2 errors
└─ Database connection timeout
GET /api/users/:id
├─ 404 Not Found: 15 errors
│ └─ User ID not found
└─ 403 Forbidden: 3 errors
└─ Permission denied
Insights:
- Most common errors
- Error rate per route
- Error distribution
- Trends over time
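The grouping shown above amounts to counting failed requests (status >= 400) per (route, status) pair. A minimal sketch, with the record shape assumed:

```typescript
// Group error responses (status >= 400) by route, then by status code.
function errorPatterns(
  requests: { route: string; status: number }[]
): Map<string, Map<number, number>> {
  const byRoute = new Map<string, Map<number, number>>();
  for (const r of requests) {
    if (r.status < 400) continue; // only count failed requests
    const statuses = byRoute.get(r.route) ?? new Map<number, number>();
    statuses.set(r.status, (statuses.get(r.status) ?? 0) + 1);
    byRoute.set(r.route, statuses);
  }
  return byRoute;
}
```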
Latency Distribution
Analyze response time performance:
Percentiles:
Route: GET /api/data
├─ Min: 15ms
├─ P50: 45ms
├─ P95: 180ms
├─ P99: 420ms
└─ Max: 2450ms
Analysis:
- View latency stats per route
- Identify slow routes
- Compare P50 vs P95 (variability)
- Spot outliers (max)
- Track trends over time
Use For:
- Performance tuning
- SLA compliance checking
- Diagnosing client timeout issues
- Load balancing decisions
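Percentiles like the ones in the figure can be computed with a nearest-rank method over sorted samples. A minimal sketch (real dashboards often use streaming estimators instead, which is an implementation detail not covered here):

```typescript
// Nearest-rank percentile: the smallest sample >= p% of all observations.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Latency stats for one route, matching the fields in the figure above.
function latencyStats(samples: number[]) {
  return {
    min: Math.min(...samples),
    p50: percentile(samples, 50),
    p95: percentile(samples, 95),
    p99: percentile(samples, 99),
    max: Math.max(...samples),
  };
}
```

A large gap between P50 and P95 signals high variability; a large gap between P99 and max signals isolated outliers.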
Usage Trends
Time-series data for charting:
Trend Data:
Time Range: Last 7 Days (hourly buckets)
[
{ time: "2026-01-15T09:00:00Z", requests: 142, errors: 5, latency: 120 },
{ time: "2026-01-15T10:00:00Z", requests: 185, errors: 7, latency: 135 },
{ time: "2026-01-15T11:00:00Z", requests: 223, errors: 12, latency: 150 },
...
]
Chart Types:
- Line Chart - Request volume over time
- Area Chart - Success vs. error stacked
- Bar Chart - Requests per hour
- Combination - Multiple metrics
Analyze:
- Peak usage times
- Low traffic periods
- Error spikes
- Latency trends
- Day-over-day patterns
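The trend array shown above can be produced by folding raw requests into hourly buckets. A sketch under the same assumed record shape as earlier examples:

```typescript
interface TrendPoint {
  time: string;     // start of the hour, ISO 8601 (assumed format)
  requests: number;
  errors: number;
  latency: number;  // average latency in the bucket, ms
}

// Aggregate raw requests into hourly trend points like the sample data above.
function hourlyTrends(
  requests: { time: string; status: number; latencyMs: number }[]
): TrendPoint[] {
  const buckets = new Map<
    string,
    { requests: number; errors: number; totalLatency: number }
  >();
  for (const r of requests) {
    const hour = r.time.slice(0, 13) + ":00:00Z"; // truncate to the hour
    const b = buckets.get(hour) ?? { requests: 0, errors: 0, totalLatency: 0 };
    b.requests += 1;
    if (r.status >= 400) b.errors += 1;
    b.totalLatency += r.latencyMs;
    buckets.set(hour, b);
  }
  return [...buckets.entries()]
    .sort(([a], [b]) => a.localeCompare(b))
    .map(([time, b]) => ({
      time,
      requests: b.requests,
      errors: b.errors,
      latency: Math.round(b.totalLatency / b.requests),
    }));
}
```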
Cost Attribution
Estimate costs per route:
Cost Calculation: $0.001 per request
Route: POST /api/users (500 requests)
Cost: $0.50
Route: GET /api/data (2000 requests)
Cost: $2.00
Total Simulation Cost: $10.45
(Stub implementation for future billing integration)
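The per-route arithmetic is straightforward: requests times the flat rate. A sketch using the $0.001-per-request rate stated above (like the feature itself, this is a stub, not real billing logic):

```typescript
const COST_PER_REQUEST = 0.001; // flat rate from the docs; a billing stub

// Cost per route plus the simulation total, from per-route request counts.
function costAttribution(requestCounts: Record<string, number>) {
  const perRoute: Record<string, number> = {};
  let total = 0;
  for (const [route, count] of Object.entries(requestCounts)) {
    const cost = count * COST_PER_REQUEST;
    perRoute[route] = cost;
    total += cost;
  }
  return { perRoute, total };
}
```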
AI-Powered Insights
Automatic analysis and recommendations:
Insight Categories
Unused Routes
Finding: GET /api/old-endpoint has 0 requests (30 days)
Impact: Dead code in mock config
Action: Consider removing if no longer needed
High Error Rate
Finding: POST /api/upload has 15% error rate
Impact: Likely indicates validation issues
Action: Review error logs and fix validation schema
Slow Responses
Finding: PUT /api/process has P95 latency of 2.5s
Impact: May cause timeouts in tests
Action: Reduce simulated delay or review complex processing
Missing Error Coverage
Finding: GET /api/data only returns 200 responses
Impact: No error scenarios mocked
Action: Add 400, 401, 404, 500 response variants
Viewing Insights
- Open simulation analytics
- Click Insights tab
- See list of insights:
- Title
- Severity (critical, warning, info)
- Description
- Recommended action
- Status (active, dismissed, resolved)
Refreshing Insights
Generate fresh insights:
- Click Refresh Insights
- System re-analyzes simulation
- Previous active insights are resolved
- New insights generated
- View updated insight list
Managing Insights
Dismiss Insight
Acknowledge but don't resolve:
- Click insight
- Click Dismiss
- Insight marked "dismissed"
- Hidden from active list
Resolve Insight
Take action on insight:
- Make recommended change (e.g., add response)
- Click insight
- Click Mark as Resolved
- Insight removed from active list
View Resolved
See completed insights:
- Click Show Resolved
- View archived insights
- See action taken and when
Analytics Workflow
Monitoring Simulation Health
- Check Overview - See current metrics
- Review Errors - Find problem patterns
- Analyze Latency - Identify performance issues
- Read Insights - See AI recommendations
- Take Action - Fix issues
- Verify Trends - Confirm improvement
Identifying Issues
High Error Rate:
- Go to Error Patterns view
- Find routes with errors
- Check error codes (401 = auth, 400 = validation, 500 = server)
- Review validation schema or response logic
- Add test cases for error scenarios
Slow Routes:
- Go to Latency Distribution
- Sort by P95 latency
- Identify slowest routes
- Reduce simulated delay (latencyMs)
- Or review response processing logic
- Verify new latency in next analytics refresh
Unused Routes:
- Check heatmap for no-color rows
- Identify routes with zero requests
- Confirm they're not needed
- Update documentation
- Consider removing if truly unused
Analytics Use Cases
Development
Monitor your mock simulation while developing:
- Verify routes are being called
- Check response times are reasonable
- Catch errors in validation logic
- Track coverage of scenarios
Testing
Analyze test traffic patterns:
- Ensure all critical paths exercised
- Check error scenarios tested
- Verify performance expectations met
- Identify untested code paths
Integration
Monitor consumer behavior:
- Track different client implementations
- Identify problematic patterns
- Monitor error patterns from consumers
- Ensure compatibility
Onboarding
Track new consumer adoption:
- Monitor connection growth
- Track request volume growth
- Identify early errors
- Support integration issues
Incident Response
Analyze traffic during issues:
- When did problems start?
- Which routes affected?
- Error rate spike?
- Latency degradation?
- Consumer impact?
Analytics Best Practices
Regular Monitoring
- Check analytics daily during active development
- Weekly review of trends and insights
- Monthly assessment of health metrics
- Quarterly planning based on usage patterns
Performance Tuning
- Monitor latency trends
- Identify slow routes
- Reduce simulated delays
- Verify improvement in next refresh
- Repeat for next slowest route
Error Reduction
- Check error patterns regularly
- Focus on high-error routes
- Review validation rules
- Add error response variants
- Track error rate improvement
Capacity Planning
Use analytics to anticipate needs:
- Growing request volume
- New consumer onboarding
- Seasonal traffic patterns
- Resource scaling decisions
Troubleshooting
No Analytics Data
- Ensure simulation has traffic
- Wait 5-10 minutes for data to aggregate
- Check time range covers traffic period
- Verify simulation is active
Metrics Look Wrong
- Refresh the analytics page
- Check time range selection
- Verify you're viewing correct simulation
- Compare against request logs
Insights Not Generating
- Ensure simulation has sufficient traffic
- Click Refresh Insights to trigger analysis
- Wait for re-analysis to complete
- Check for system errors
API Reference
Analytics Endpoints
| Method | Path | Description |
|---|---|---|
| GET | /instances/:id/analytics/overview | Get summary metrics |
| GET | /instances/:id/analytics/heatmap | Get route heatmap |
| GET | /instances/:id/analytics/consumers | Get consumer fingerprints |
| GET | /instances/:id/analytics/errors | Get error patterns |
| GET | /instances/:id/analytics/latency | Get latency distribution |
| GET | /instances/:id/analytics/trends | Get time-series trends |
| GET | /instances/:id/analytics/cost | Get cost attribution |
| GET | /instances/:id/insights | List insights |
| POST | /instances/:id/insights/refresh | Refresh insights |
| PATCH | /instances/:id/insights/:insightId | Update insight status |
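These endpoints can be called like any REST API. A hypothetical sketch using `fetch`; the base URL, bearer-token auth, and `range` query parameter are assumptions, not confirmed details of the API:

```typescript
const BASE_URL = "https://api.example.com"; // hypothetical base URL

// Build an analytics URL for one simulation; the "range" query
// parameter name is an assumption.
function analyticsUrl(
  base: string,
  instanceId: string,
  view: string,
  range: string
): string {
  return `${base}/instances/${instanceId}/analytics/${view}?range=${range}`;
}

// Fetch the summary metrics for one simulation over a given range.
async function getOverview(instanceId: string, range: "1d" | "7d" | "30d" | "90d") {
  const res = await fetch(analyticsUrl(BASE_URL, instanceId, "overview", range), {
    headers: { Authorization: "Bearer <token>" }, // auth scheme assumed
  });
  if (!res.ok) throw new Error(`Analytics request failed: ${res.status}`);
  return res.json();
}

// Mark an insight as resolved via the PATCH endpoint; the request
// body shape is assumed from the status values listed above.
async function resolveInsight(instanceId: string, insightId: string) {
  const res = await fetch(
    `${BASE_URL}/instances/${instanceId}/insights/${insightId}`,
    {
      method: "PATCH",
      headers: {
        Authorization: "Bearer <token>",
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ status: "resolved" }),
    }
  );
  if (!res.ok) throw new Error(`Insight update failed: ${res.status}`);
  return res.json();
}
```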
Cross-Simulation Analytics
| Method | Path | Description |
|---|---|---|
| GET | /analytics/overview | Cross-simulation summary |
| GET | /analytics/cost | Cross-simulation costs |
Next Steps
- Health Scores - Comprehensive simulation quality assessment
- Request Validation - Reduce error rate
- Response Pipelines - Optimize latency
- Dashboards - Create custom monitoring dashboards