Compare commits

2 Commits

| Author | SHA1 | Date |
|---|---|---|
| | f945ccef05 | |
| | 3703e40442 | |
.cursor/rules/caching-patterns.mdc (new file, 64 lines)
@@ -0,0 +1,64 @@
---
description: Caching system patterns and best practices
---

# Caching System Patterns

## Cache Key Generation
- Use SHA256 hashing for cache keys to ensure uniform distribution
- Include a service prefix (e.g., "steam/", "epic/") based on User-Agent detection
- Never include query parameters in cache keys; strip them before hashing
- Cache keys must be deterministic: the same URL always produces the same key
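The rules above can be sketched as a small helper; the function name and prefix scheme here are illustrative, not the project's actual API:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"net/url"
)

// cacheKey builds a deterministic key: a service prefix plus the SHA256 hex
// digest of the URL path. Query parameters are stripped (only u.Path is
// hashed), so signed CDN URLs for the same file map to the same key.
func cacheKey(service, rawURL string) (string, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", fmt.Errorf("parse url %q: %w", rawURL, err)
	}
	// Hash only the path: query parameters never reach the key.
	sum := sha256.Sum256([]byte(u.Path))
	return service + "/" + hex.EncodeToString(sum[:]), nil
}

func main() {
	k, _ := cacheKey("steam", "http://cdn.example/depot/1/chunk/ab?sig=x")
	fmt.Println(k)
}
```

Because only the path is hashed, two requests for the same chunk with different signatures coalesce onto one cache entry.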
## Cache File Format
The cache uses a custom format with:
- Magic number: "SC2C" (SteamCache2 Cache)
- Content hash: SHA256 of the response body
- Response size: total HTTP response size
- Raw HTTP response: the complete response as received from upstream
- Header line format: "SC2C <hash> <size>\n"
- Integrity verification on read operations
- Automatic corruption detection and cleanup
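The header-line framing described above ("SC2C &lt;hash&gt; &lt;size&gt;\n") can be sketched with a pair of helpers; these are illustrative, not the project's actual serialization code:

```go
package main

import (
	"fmt"
	"strings"
)

const magic = "SC2C" // SteamCache2 Cache magic number

// encodeHeader emits the "SC2C <hash> <size>\n" header line.
func encodeHeader(hash string, size int64) string {
	return fmt.Sprintf("%s %s %d\n", magic, hash, size)
}

// parseHeader validates the magic number and extracts hash and size,
// returning an error for corrupt headers so the caller can clean up.
func parseHeader(line string) (hash string, size int64, err error) {
	fields := strings.Fields(strings.TrimSuffix(line, "\n"))
	if len(fields) != 3 || fields[0] != magic {
		return "", 0, fmt.Errorf("corrupt cache header: %q", line)
	}
	hash = fields[1]
	if _, err = fmt.Sscanf(fields[2], "%d", &size); err != nil {
		return "", 0, fmt.Errorf("bad size in header: %w", err)
	}
	return hash, size, nil
}

func main() {
	fmt.Print(encodeHeader("abc123", 4096))
}
```

A reader would verify the parsed hash against the SHA256 of the body before serving, deleting the file on mismatch.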
## Garbage Collection Algorithms
Available algorithms and their use cases:
- **LRU**: Best for general gaming patterns; keeps recently accessed content
- **LFU**: Good for gaming cafes with popular games
- **FIFO**: Predictable behavior; good for testing
- **Largest**: Evicts the largest files first, maximizing the number of cached files
- **Smallest**: Evicts the smallest files first, favoring overall hit rate
- **Hybrid**: Combines access time and file size for balanced performance

## Cache Validation
- Always verify Content-Length matches the received data
- Use SHA256 hashing for content integrity
- Don't cache chunked transfer encoding (no Content-Length)
- Reject files with an invalid or missing Content-Length
## Request Coalescing
- Multiple clients requesting the same file should share a single download
- Use channels and mutexes to coordinate concurrent requests
- Buffer response data for coalesced clients
- Clean up coalesced request structures after completion
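A minimal sketch of the coordination pattern: the first caller for a key performs the fetch, later callers block on a shared channel and reuse the result. The real implementation streams and buffers data for coalesced clients; the type and method names here are illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

// inflight tracks one shared download: done closes when data is ready.
type inflight struct {
	done chan struct{}
	data []byte
}

type Coalescer struct {
	mu    sync.Mutex
	calls map[string]*inflight
}

func NewCoalescer() *Coalescer {
	return &Coalescer{calls: make(map[string]*inflight)}
}

// Get returns the data for key, fetching at most once per concurrent burst.
func (c *Coalescer) Get(key string, fetch func() []byte) []byte {
	c.mu.Lock()
	if call, ok := c.calls[key]; ok {
		c.mu.Unlock()
		<-call.done // another client is already downloading; wait for it
		return call.data
	}
	call := &inflight{done: make(chan struct{})}
	c.calls[key] = call
	c.mu.Unlock()

	call.data = fetch()
	close(call.done)

	// Clean up the coalesced request structure after completion.
	c.mu.Lock()
	delete(c.calls, key)
	c.mu.Unlock()
	return call.data
}

func main() {
	c := NewCoalescer()
	fmt.Println(string(c.Get("steam/abc", func() []byte { return []byte("chunk") })))
}
```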
## Range Request Support
- Always cache the full file, regardless of Range headers
- Support serving partial content from cached full files
- Parse Range headers correctly (bytes=start-end, bytes=start-, bytes=-suffix)
- Return appropriate HTTP status codes (206 for partial content, 416 for invalid ranges)
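The three single-range forms listed above can be parsed as follows; this is a sketch (real code must also consider multi-range requests, and map the returned error to 416):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseRange resolves one Range header form against a known file size,
// returning inclusive byte offsets to serve.
func parseRange(header string, size int64) (start, end int64, err error) {
	spec, ok := strings.CutPrefix(header, "bytes=")
	if !ok {
		return 0, 0, fmt.Errorf("unsupported range unit: %q", header)
	}
	lo, hi, _ := strings.Cut(spec, "-")
	switch {
	case lo == "" && hi != "": // bytes=-suffix: the last N bytes
		n, err := strconv.ParseInt(hi, 10, 64)
		if err != nil || n <= 0 {
			return 0, 0, fmt.Errorf("invalid suffix range: %q", header)
		}
		if n > size {
			n = size
		}
		return size - n, size - 1, nil
	case lo != "" && hi == "": // bytes=start-: from start to EOF
		s, err := strconv.ParseInt(lo, 10, 64)
		if err != nil || s >= size {
			return 0, 0, fmt.Errorf("invalid range start: %q", header)
		}
		return s, size - 1, nil
	case lo != "" && hi != "": // bytes=start-end
		s, err1 := strconv.ParseInt(lo, 10, 64)
		e, err2 := strconv.ParseInt(hi, 10, 64)
		if err1 != nil || err2 != nil || s > e || s >= size {
			return 0, 0, fmt.Errorf("invalid range: %q", header)
		}
		if e >= size {
			e = size - 1
		}
		return s, e, nil
	}
	return 0, 0, fmt.Errorf("empty range: %q", header)
}

func main() {
	s, e, _ := parseRange("bytes=0-1023", 1<<20)
	fmt.Println(s, e)
}
```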
## Service Detection
- Use regex patterns to match User-Agent strings
- Support multiple services (Steam, Epic Games, etc.)
- Include the service prefix in cache keys for isolation
- Default to the Steam service configuration

## Memory vs Disk Caching
- Memory cache: fast access, limited size; use LRU or LFU
- Disk cache: slower access, large size; use Hybrid or Largest
- Tiered caching: memory as L1, disk as L2
- Dynamic memory management with configurable thresholds
- Cache promotion: move frequently accessed files from disk to memory
- Sharded storage: use directory sharding for Steam keys to reduce inode pressure
- Memory-mapped files: use mmap for large disk operations
- Batched operations: group operations for better performance
.cursor/rules/configuration-patterns.mdc (new file, 65 lines)
@@ -0,0 +1,65 @@
---
description: Configuration management patterns
---

# Configuration Management Patterns

## YAML Configuration
- Use YAML format for human-readable configuration
- Provide sensible defaults for all configuration options
- Validate configuration on startup
- Generate a default configuration file on first run

## Configuration Structure
- Group related settings in nested structures
- Use descriptive field names with YAML tags
- Provide default values in struct tags where possible
- Use appropriate data types (strings for sizes, ints for limits)
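A sketch of the nested-structure convention above, with YAML tags as the config package would use them; all field names and defaults here are illustrative, not the project's actual schema:

```go
package main

import "fmt"

// Config groups related settings in nested structures with yaml tags.
type Config struct {
	Server struct {
		Listen   string `yaml:"listen"`
		Upstream string `yaml:"upstream"`
	} `yaml:"server"`
	Cache struct {
		MemorySize string `yaml:"memory_size"` // human-readable, e.g. "512MB"
		DiskSize   string `yaml:"disk_size"`   // "0" disables the layer
		GC         string `yaml:"gc"`          // algorithm name, e.g. "lru"
	} `yaml:"cache"`
	MaxConcurrent int `yaml:"max_concurrent"`
}

// Defaults returns conservative settings of the kind a generated
// first-run config file might contain.
func Defaults() Config {
	var c Config
	c.Server.Listen = ":8080"
	c.Cache.MemorySize = "512MB"
	c.Cache.DiskSize = "10GB"
	c.Cache.GC = "lru"
	c.MaxConcurrent = 64
	return c
}

func main() {
	fmt.Printf("%+v\n", Defaults())
}
```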
## Size Configuration
- Use human-readable size strings (e.g., "1GB", "512MB")
- Parse sizes using `github.com/docker/go-units`
- Support "0" to disable a cache layer
- Validate that size limits are reasonable
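The project parses sizes with `github.com/docker/go-units`; as a dependency-free illustration of the same idea, here is a stdlib-only sketch using binary multipliers (go-units itself distinguishes decimal and binary forms):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSize converts strings like "1GB" or "512MB" to bytes; "0" yields 0,
// which the caller treats as "layer disabled".
func parseSize(s string) (int64, error) {
	mult := map[string]int64{"": 1, "KB": 1 << 10, "MB": 1 << 20, "GB": 1 << 30, "TB": 1 << 40}
	s = strings.ToUpper(strings.TrimSpace(s))
	num := strings.TrimRight(s, "KMGTB") // peel the unit suffix off the right
	unit := s[len(num):]
	m, ok := mult[unit]
	if !ok {
		return 0, fmt.Errorf("unknown size unit %q", unit)
	}
	n, err := strconv.ParseInt(num, 10, 64)
	if err != nil || n < 0 {
		return 0, fmt.Errorf("invalid size %q", s)
	}
	return n * m, nil
}

func main() {
	v, _ := parseSize("1GB")
	fmt.Println(v)
}
```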
## Garbage Collection Configuration
- Support multiple GC algorithms per cache layer
- Provide algorithm-specific configuration options
- Allow different algorithms for memory vs disk caches
- Document algorithm characteristics and use cases

## Server Configuration
- Configure the listen address and port
- Set concurrency limits (global and per-client)
- Configure the upstream server URL
- Support both absolute and relative upstream URLs

## Runtime Configuration
- Allow command-line overrides for critical settings
- Support specifying the configuration file path
- Provide help and version information
- Validate configuration before starting services

## Default Configuration
- Generate appropriate defaults for different use cases
- Consider system resources when setting defaults
- Provide conservative defaults for home users
- Document configuration options in comments

## Configuration Validation
- Validate that required fields are present
- Check that size limits are reasonable
- Verify that file paths are accessible
- Test upstream server connectivity

## Configuration Updates
- Support configuration reloading (if needed)
- Handle configuration changes gracefully
- Log configuration changes
- Maintain backward compatibility

## Environment-Specific Configuration
- Support different configurations for development and production
- Allow environment variable overrides
- Provide configuration templates for common scenarios
- Document configuration best practices
.cursor/rules/development-workflow.mdc (new file, 77 lines)
@@ -0,0 +1,77 @@
---
description: Development workflow and best practices
---

# Development Workflow for SteamCache2

## Build System
- Use the provided [Makefile](mdc:Makefile) for all build operations
- Prefer `make` commands over direct `go` commands
- Use `make test` to run all tests before committing
- Use `make run-debug` for development with debug logging

## Code Organization
- Keep related functionality in the same package
- Use clear package boundaries and interfaces
- Minimize dependencies between packages
- Follow the existing project structure

## Git Workflow
- Use descriptive commit messages
- Keep commits focused and atomic
- Test changes thoroughly before committing
- Use meaningful branch names

## Code Review
- Review code for correctness and performance
- Check for proper error handling
- Verify test coverage for new functionality
- Ensure code follows project conventions

## Documentation
- Update README.md for user-facing changes
- Add comments for complex algorithms
- Document configuration options
- Keep API documentation current

## Testing Strategy
- Write tests for new functionality
- Maintain high test coverage
- Test edge cases and error conditions
- Run integration tests before major releases

## Performance Testing
- Test with realistic data sizes
- Measure the performance impact of changes
- Profile the application under load
- Monitor memory usage and leaks

## Configuration Management
- Test configuration changes thoroughly
- Validate configuration on startup
- Provide sensible defaults
- Document configuration options

## Error Handling
- Implement proper error handling
- Use structured logging for errors
- Provide meaningful error messages
- Handle edge cases gracefully

## Security Considerations
- Validate all inputs
- Implement proper rate limiting
- Log security-relevant events
- Follow security best practices

## Release Process
- Test thoroughly before releasing
- Update version information
- Create release notes
- Tag releases appropriately

## Maintenance
- Monitor application performance
- Update dependencies regularly
- Fix bugs promptly
- Refactor code when needed
.cursor/rules/golang-conventions.mdc (new file, 62 lines)
@@ -0,0 +1,62 @@
---
globs: *.go
---

# Go Language Conventions for SteamCache2

## Code Style
- Use `gofmt` and `goimports` for formatting
- Follow standard Go naming conventions (camelCase for unexported, PascalCase for exported identifiers)
- Use meaningful variable names that reflect their purpose
- Prefer explicit error handling over panic (except in constructors where configuration is invalid)

## Package Organization
- Keep packages focused and cohesive
- Use internal packages for implementation details that shouldn't be exported
- Group related functionality together (e.g., all VFS implementations in `vfs/`)
- Use interface implementation verification: `var _ Interface = (*Implementation)(nil)`
- Create type aliases for backward compatibility when refactoring
- Use separate packages for different concerns (e.g., `vfserror`, `types`, `locks`)

## Error Handling
- Always handle errors explicitly; never ignore them with `_`
- Use `fmt.Errorf` with the `%w` verb for error wrapping
- Log errors with context using structured logging (zerolog)
- Return meaningful error messages that help with debugging
- Create custom error types for domain-specific errors (see `vfs/vfserror/`)
- Use `errors.New()` for simple error constants
- Include relevant context in error messages (file paths, operation names)
## Testing
- All tests should run with a timeout (as per user rules)
- Use table-driven tests for multiple test cases
- Use `t.Helper()` in test helper functions
- Test both success and failure cases
- Use `t.TempDir()` for temporary files in tests

## Concurrency
- Use `sync.RWMutex` for read-heavy operations
- Prefer channels over shared memory when possible
- Use `context.Context` for cancellation and timeouts
- Be explicit about goroutine lifecycle management
- Use sharded locks for high-concurrency scenarios (see `vfs/locks/sharding.go`)
- Use `atomic.Value` for lock-free data structure updates
- Use `sync.Map` for concurrent map operations when appropriate
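A minimal sharded-lock sketch in the spirit of `vfs/locks/sharding.go` (which this does not reproduce): keys hash onto one of N mutexes, so unrelated keys rarely contend on the same lock:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

type ShardedLocks struct {
	shards []sync.RWMutex
}

func NewShardedLocks(n int) *ShardedLocks {
	return &ShardedLocks{shards: make([]sync.RWMutex, n)}
}

// shard deterministically maps a key to one mutex via an FNV-1a hash.
func (s *ShardedLocks) shard(key string) *sync.RWMutex {
	h := fnv.New32a()
	h.Write([]byte(key))
	return &s.shards[h.Sum32()%uint32(len(s.shards))]
}

func (s *ShardedLocks) Lock(key string)    { s.shard(key).Lock() }
func (s *ShardedLocks) Unlock(key string)  { s.shard(key).Unlock() }
func (s *ShardedLocks) RLock(key string)   { s.shard(key).RLock() }
func (s *ShardedLocks) RUnlock(key string) { s.shard(key).RUnlock() }

func main() {
	locks := NewShardedLocks(16)
	locks.Lock("steam/abc")
	// ... critical section for this key ...
	locks.Unlock("steam/abc")
	fmt.Println("done")
}
```

With a power-of-two shard count, the modulo could be replaced by a mask; the point is that lock contention scales with shard collisions rather than total keys.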
## Performance
- Use `io.ReadAll` sparingly; prefer streaming for large data
- Use connection pooling for HTTP clients
- Implement proper resource cleanup (defer statements)
- Use buffered channels when appropriate

## Logging
- Use structured logging with zerolog
- Include relevant context in log messages (keys, URLs, client IPs)
- Use appropriate log levels (Debug, Info, Warn, Error)
- Avoid logging sensitive information

## Memory Management
- Be mindful of memory usage in caching scenarios
- Use appropriate data structures for the use case
- Implement proper cleanup for long-running services
- Monitor memory usage in production
.cursor/rules/http-proxy-patterns.mdc (new file, 59 lines)
@@ -0,0 +1,59 @@
---
description: HTTP proxy and server patterns
---

# HTTP Proxy and Server Patterns

## Request Handling
- Only support GET requests (Steam doesn't use other methods)
- Reject non-GET requests with 405 Method Not Allowed
- Handle health checks at the "/" endpoint
- Support the LanCache heartbeat at "/lancache-heartbeat"

## Upstream Communication
- Use an optimized HTTP transport with connection pooling
- Set appropriate timeouts (10s dial, 15s header, 60s total)
- Enable HTTP/2 and keep-alives for better performance
- Use large buffers (64KB) for better throughput
## Response Streaming
- Stream responses directly to clients for better performance
- Support both full-file and range-request streaming
- Preserve original HTTP headers (excluding hop-by-hop headers)
- Add cache-specific headers (X-LanCache-Status, X-LanCache-Processed-By)

## Error Handling
- Implement retry logic with exponential backoff
- Handle upstream server errors gracefully
- Return appropriate HTTP status codes
- Log errors with sufficient context for debugging

## Concurrency Control
- Use semaphores to limit concurrent requests globally
- Implement per-client rate limiting
- Clean up old client limiters to prevent memory leaks
- Use proper synchronization for shared data structures

## Header Management
- Copy relevant headers from upstream responses
- Exclude hop-by-hop headers (Connection, Keep-Alive, etc.)
- Add cache status headers for monitoring
- Preserve Content-Type and Content-Length headers

## Client IP Detection
- Check the X-Forwarded-For header first (for proxy setups)
- Fall back to the X-Real-IP header
- Use RemoteAddr as the final fallback
- Handle comma-separated IP lists in X-Forwarded-For
## Performance Optimizations
- Set keep-alive headers for better connection reuse
- Use appropriate server timeouts
- Implement request coalescing for duplicate requests
- Use buffered I/O for better performance

## Security Considerations
- Validate request URLs and paths
- Implement rate limiting to prevent abuse
- Log suspicious activity
- Handle malformed requests gracefully
.cursor/rules/logging-monitoring-patterns.mdc (new file, 87 lines)
@@ -0,0 +1,87 @@
---
description: Logging and monitoring patterns for SteamCache2
---

# Logging and Monitoring Patterns

## Structured Logging with Zerolog
- Use zerolog for all logging operations
- Include structured fields for better querying and analysis
- Use appropriate log levels: Debug, Info, Warn, Error
- Include timestamps and context in all log messages
- Configure the log format (JSON for production, console for development)

## Log Context and Fields
- Always include relevant context in log messages
- Use consistent field names: `client_ip`, `cache_key`, `url`, `service`
- Include operation duration with `Dur()` for performance monitoring
- Log cache hit/miss status for analytics
- Include file sizes and operation counts for monitoring
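The project logs with zerolog; the same structured-field discipline can be shown with the standard library's `log/slog` (field names match the conventions above, everything else is illustrative):

```go
package main

import (
	"bytes"
	"fmt"
	"log/slog"
	"strings"
	"time"
)

// logCacheHit emits one structured record with the consistent field
// names recommended above, including a duration for monitoring.
func logCacheHit(logger *slog.Logger, clientIP, key string, d time.Duration) {
	logger.Info("cache hit",
		slog.String("client_ip", clientIP),
		slog.String("cache_key", key),
		slog.Duration("duration", d),
	)
}

// captureJSON renders one record through the JSON handler, as a
// production configuration would.
func captureJSON() string {
	var buf bytes.Buffer
	logger := slog.New(slog.NewJSONHandler(&buf, nil))
	logCacheHit(logger, "10.0.0.5", "steam/abc123", 12*time.Millisecond)
	return buf.String()
}

func hasField(line, field string) bool { return strings.Contains(line, field) }

func main() {
	fmt.Print(captureJSON())
}
```

In zerolog the equivalent call chain would use `.Str()` and `.Dur()` event methods; the resulting JSON fields are what monitoring tools query.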
## Performance Monitoring
- Log request processing times with a `duration` field
- Monitor cache hit/miss ratios
- Track memory and disk usage
- Log garbage collection events and statistics
- Monitor concurrent request counts and limits

## Error Logging
- Log errors with full context and stack traces
- Include relevant request information in error logs
- Use structured error logging with the `Err()` field
- Log configuration errors with file paths
- Include upstream server errors with status codes

## Cache Operation Logging
- Log cache hits with the key and response time
- Log cache misses with the reason and upstream response time
- Log cache corruption detection and cleanup
- Log garbage collection operations and evicted items
- Log cache promotion events (disk to memory)

## Service Detection Logging
- Log service detection results (Steam, Epic, etc.)
- Log User-Agent patterns and matches
- Log service configuration changes
- Log cache key generation for different services

## HTTP Request Logging
- Log incoming requests with method, URL, and client IP
- Log response status codes and sizes
- Log upstream server communication
- Log rate-limiting events and client limits
- Log health check and heartbeat requests

## Configuration Logging
- Log configuration loading and validation
- Log default configuration generation
- Log configuration changes and overrides
- Log startup parameters and settings

## Security Event Logging
- Log suspicious request patterns
- Log rate-limiting violations
- Log authentication failures (if applicable)
- Log configuration security issues
- Log potential security threats

## System Health Logging
- Log memory usage and fragmentation
- Log disk usage and capacity
- Log connection pool statistics
- Log goroutine counts and lifecycle
- Log system resource utilization

## Log Rotation and Management
- Implement log rotation for long-running services
- Use appropriate log retention policies
- Monitor log file sizes and disk usage
- Configure log levels for different environments
- Use structured logging for log analysis tools

## Monitoring Integration
- Design logs for easy parsing by monitoring tools
- Include metrics that can be scraped by Prometheus
- Use consistent field naming for dashboard creation
- Log events that can trigger alerts
- Include correlation IDs for request tracing
.cursor/rules/performance-optimization.mdc (new file, 71 lines)
@@ -0,0 +1,71 @@
---
description: Performance optimization guidelines
---

# Performance Optimization Guidelines

## Memory Management
- Use appropriate data structures for the use case
- Implement proper cleanup for long-running services
- Monitor memory usage and implement limits
- Use memory pools for frequently allocated objects

## I/O Optimization
- Use buffered I/O for better performance
- Implement connection pooling for HTTP clients
- Use appropriate buffer sizes (64KB for HTTP)
- Minimize system calls and context switches

## Concurrency Patterns
- Use worker pools for CPU-intensive tasks
- Implement proper backpressure with semaphores
- Use channels for coordination between goroutines
- Avoid excessive goroutine creation
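The semaphore-backpressure pattern named above is commonly a buffered channel in Go: at most `limit` goroutines hold a slot at once. A generic sketch:

```go
package main

import (
	"fmt"
	"sync"
)

// Semaphore is a counting semaphore backed by a buffered channel.
type Semaphore chan struct{}

func NewSemaphore(limit int) Semaphore { return make(Semaphore, limit) }

func (s Semaphore) Acquire() { s <- struct{}{} } // blocks when full
func (s Semaphore) Release() { <-s }

func main() {
	sem := NewSemaphore(4)
	var wg sync.WaitGroup
	var mu sync.Mutex
	inFlight, peak := 0, 0
	for i := 0; i < 32; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem.Acquire()
			defer sem.Release()
			mu.Lock()
			inFlight++
			if inFlight > peak {
				peak = inFlight
			}
			mu.Unlock()
			// ... do rate-limited work here ...
			mu.Lock()
			inFlight--
			mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println("peak concurrency:", peak) // never exceeds 4
}
```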
## Caching Strategies
- Use tiered caching (memory + disk) for optimal performance
- Implement intelligent cache eviction policies
- Use cache warming for predictable access patterns
- Monitor cache hit ratios and adjust strategies

## Network Optimization
- Use HTTP/2 when available
- Enable connection keep-alives
- Use appropriate timeouts for different operations
- Implement request coalescing for duplicate requests

## Data Structures
- Choose data structures appropriate to the access patterns
- Use sync.RWMutex for read-heavy operations
- Consider lock-free data structures where appropriate
- Minimize memory allocations in hot paths

## Algorithm Selection
- Choose GC algorithms based on access patterns
- Use LRU for general gaming workloads
- Use LFU for gaming cafes with popular content
- Use Hybrid algorithms for mixed workloads

## Monitoring and Profiling
- Implement performance metrics collection
- Use structured logging for performance analysis
- Monitor key performance indicators
- Profile the application under realistic loads

## Resource Management
- Implement proper resource cleanup
- Use context.Context for cancellation
- Set appropriate limits on resource usage
- Monitor resource consumption over time

## Scalability Considerations
- Design for horizontal scaling where possible
- Use sharding for large datasets
- Implement proper load balancing
- Consider distributed caching for large deployments

## Bottleneck Identification
- Profile the application to identify bottlenecks
- Focus optimization efforts on the most critical paths
- Use appropriate tools for performance analysis
- Test performance under realistic conditions
.cursor/rules/project-structure.mdc (new file, 57 lines)
@@ -0,0 +1,57 @@
---
alwaysApply: true
---

# SteamCache2 Project Structure Guide

This is a high-performance Steam download cache written in Go. The main entry point is [main.go](mdc:main.go), which delegates to the command structure in [cmd/](mdc:cmd/).

## Core Architecture

- **Main Entry**: [main.go](mdc:main.go) - Simple entry point that calls `cmd.Execute()`
- **Command Layer**: [cmd/root.go](mdc:cmd/root.go) - CLI interface using Cobra; handles configuration loading and service startup
- **Core Service**: [steamcache/steamcache.go](mdc:steamcache/steamcache.go) - Main HTTP proxy and caching logic
- **Configuration**: [config/config.go](mdc:config/config.go) - YAML-based configuration management
- **Virtual File System**: [vfs/](mdc:vfs/) - Abstracted storage layer supporting memory and disk caches

## Key Components

### VFS (Virtual File System)
- [vfs/vfs.go](mdc:vfs/vfs.go) - Core VFS interface
- [vfs/memory/](mdc:vfs/memory/) - In-memory cache implementation
- [vfs/disk/](mdc:vfs/disk/) - Disk-based cache implementation
- [vfs/cache/](mdc:vfs/cache/) - Cache coordination layer
- [vfs/gc/](mdc:vfs/gc/) - Garbage collection algorithms (LRU, LFU, FIFO, etc.)

### Service Management
- Service detection via User-Agent patterns
- Support for multiple gaming services (Steam, Epic, etc.)
- SHA256-based cache key generation with service prefixes

### Advanced Features
- [vfs/adaptive/](mdc:vfs/adaptive/) - Adaptive caching strategies
- [vfs/predictive/](mdc:vfs/predictive/) - Predictive cache warming
- Request coalescing for concurrent downloads
- Range request support for partial content

## Development Workflow

Use the [Makefile](mdc:Makefile) for development:
- `make` - Run tests and build
- `make test` - Run all tests
- `make run` - Run the application
- `make run-debug` - Run with debug logging

## Testing

- Unit tests: [steamcache/steamcache_test.go](mdc:steamcache/steamcache_test.go)
- Integration tests: [steamcache/integration_test.go](mdc:steamcache/integration_test.go)
- Test cache data: [steamcache/test_cache/](mdc:steamcache/test_cache/)

## Configuration

Default configuration is generated in [config.yaml](mdc:config.yaml) on first run. The application supports:
- Memory and disk cache sizing
- Garbage collection algorithm selection
- Concurrency limits
- Upstream server configuration
.cursor/rules/security-validation-patterns.mdc (new file, 89 lines)
@@ -0,0 +1,89 @@
---
description: Security and validation patterns for SteamCache2
---

# Security and Validation Patterns

## Input Validation
- Validate all HTTP request parameters and headers
- Sanitize file paths and cache keys to prevent directory traversal
- Validate URL paths before processing
- Check Content-Length headers for reasonable values
- Reject malformed or suspicious requests
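One common directory-traversal defence is to root the path before normalizing it, so `../` segments can never climb above the cache root. An illustrative sketch (not the project's actual code):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// safeRelPath normalizes a URL path into a relative path that is
// guaranteed to stay under the cache root: prefixing "/" before
// path.Clean forces every ../ sequence to resolve against "/",
// so the result cannot escape upward.
func safeRelPath(urlPath string) (string, error) {
	cleaned := path.Clean("/" + urlPath)
	if cleaned == "/" {
		return "", fmt.Errorf("empty path: %q", urlPath)
	}
	return strings.TrimPrefix(cleaned, "/"), nil
}

func main() {
	p, _ := safeRelPath("depot/1/../2/chunk")
	fmt.Println(p)
}
```

The result is then joined to the cache root directory; a hash-based key scheme sidesteps the problem entirely, since the hex digest contains no path separators.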
## Cache Key Security
- Use SHA256 hashing for all cache keys to prevent collisions
- Never include user input directly in cache keys
- Strip query parameters from URLs before hashing
- Use service prefixes to isolate different services
- Validate cache key format and length

## Content Integrity
- Always verify Content-Length matches the received data
- Use SHA256 hashing for content integrity verification
- Don't cache chunked transfer encoding (no Content-Length)
- Reject files with an invalid or missing Content-Length
- Implement cache file format validation with magic numbers

## Rate Limiting and DoS Protection
- Implement global concurrency limits with semaphores
- Use per-client rate limiting to prevent abuse
- Clean up old client limiters to prevent memory leaks
- Set appropriate timeouts for all operations
- Monitor and log suspicious activity

## HTTP Security
- Only support GET requests (Steam doesn't use other methods)
- Validate the HTTP method and reject unsupported methods
- Handle malformed HTTP requests gracefully
- Implement proper error responses with appropriate status codes
- Filter hop-by-hop headers

## Client IP Detection
- Check the X-Forwarded-For header for proxy setups
- Fall back to the X-Real-IP header
- Use RemoteAddr as the final fallback
- Handle comma-separated IP lists in X-Forwarded-For
- Log client IPs for monitoring and debugging

## Service Detection Security
- Use regex patterns for User-Agent matching
- Validate service configurations before use
- Support multiple services with proper isolation
- Default to the Steam service configuration
- Log service detection for monitoring

## Error Handling Security
- Don't expose internal system information in error messages
- Log detailed errors for debugging but return generic messages to clients
- Handle errors gracefully without crashing
- Implement proper cleanup on errors
- Use structured logging for security events

## Configuration Security
- Validate configuration values on startup
- Use sensible defaults for security-sensitive settings
- Validate file paths and permissions
- Check upstream server connectivity
- Log configuration changes

## Memory and Resource Security
- Implement memory limits to prevent OOM attacks
- Use proper resource cleanup and garbage collection
- Monitor memory usage and implement alerts
- Use bounded data structures where possible
- Implement proper connection limits

## Logging Security
- Don't log sensitive information (passwords, tokens)
- Use structured logging for security events
- Include relevant context (IPs, URLs, timestamps)
- Implement log rotation and retention policies
- Monitor logs for security issues

## Network Security
- Use HTTPS for upstream connections when possible
- Implement proper TLS configuration
- Use connection pooling with appropriate limits
- Set reasonable timeouts for network operations
- Monitor network traffic for anomalies
.cursor/rules/steamcache2-overview.mdc (new file, 48 lines)
@@ -0,0 +1,48 @@
---
alwaysApply: true
---

# SteamCache2 Overview

SteamCache2 is a high-performance HTTP proxy cache specifically designed for Steam game downloads. It reduces bandwidth usage and speeds up downloads by caching game files locally.

## Key Features
- **Tiered Caching**: Memory + disk cache with intelligent promotion
- **Service Detection**: Automatically detects Steam clients via User-Agent
- **Request Coalescing**: Multiple clients share downloads of the same file
- **Range Support**: Serves partial content from cached full files
- **Garbage Collection**: Multiple algorithms (LRU, LFU, FIFO, Hybrid, etc.)
- **Adaptive Caching**: Learns from access patterns for better performance

## Architecture
- **HTTP Proxy**: Intercepts Steam requests and serves from cache when possible
- **VFS Layer**: Abstracted storage supporting memory and disk caches
- **Service Manager**: Handles multiple gaming services (Steam, Epic, etc.)
- **GC System**: Intelligent cache eviction with configurable algorithms

## Development
- **Language**: Go 1.23+
- **Build**: Use `make` commands (see [Makefile](mdc:Makefile))
- **Testing**: Comprehensive unit and integration tests
- **Configuration**: YAML-based with automatic generation

## Performance
- **Concurrency**: Configurable request limits and rate limiting
- **Memory**: Dynamic memory management with configurable thresholds
- **Network**: Optimized HTTP transport with connection pooling
- **Storage**: Efficient cache file format with integrity verification

## Use Cases
- **Gaming Cafes**: Reduce bandwidth costs and improve download speeds
- **LAN Events**: Share game downloads across multiple clients
- **Home Networks**: Speed up game updates for multiple gamers
- **Development**: Test game downloads without hitting Steam servers

## Configuration
Default configuration is generated on first run. Key settings:
- Cache sizes (memory/disk)
- Garbage collection algorithms
- Concurrency limits
- Upstream server configuration

See [config.yaml](mdc:config.yaml) for configuration options and [README.md](mdc:README.md) for detailed setup instructions.
.cursor/rules/testing-guidelines.mdc (new file, 78 lines)
@@ -0,0 +1,78 @@
---
globs: *_test.go
---

# Testing Guidelines for SteamCache2

## Test Structure
- Use table-driven tests for multiple test cases
- Group related tests in the same test function when appropriate
- Use descriptive test names that explain what is being tested
- Include both positive and negative test cases
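The table-driven shape looks like this, shown against a tiny hypothetical helper; in a real `*_test.go` file, each case would run under `t.Run(tc.name, ...)` with `*testing.T` instead of a returned error:

```go
package main

import "fmt"

// isCacheable is a stand-in helper: only GET responses with a known
// Content-Length are cached (per the caching rules elsewhere in this repo).
func isCacheable(method string, contentLength int64) bool {
	return method == "GET" && contentLength > 0
}

// runCases walks a table of named cases, covering both positive and
// negative expectations.
func runCases() error {
	cases := []struct {
		name          string
		method        string
		contentLength int64
		want          bool
	}{
		{"get with length", "GET", 100, true},
		{"chunked (no length)", "GET", 0, false},
		{"post rejected", "POST", 100, false},
	}
	for _, tc := range cases {
		if got := isCacheable(tc.method, tc.contentLength); got != tc.want {
			return fmt.Errorf("%s: got %v, want %v", tc.name, got, tc.want)
		}
	}
	return nil
}

func main() {
	if err := runCases(); err != nil {
		fmt.Println("FAIL:", err)
		return
	}
	fmt.Println("ok")
}
```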
## Test Data Management
|
||||
- Use `t.TempDir()` for temporary files and directories
|
||||
- Clean up resources in defer statements
|
||||
- Use unique temporary directories for each test to avoid conflicts
|
||||
- Don't rely on external services in unit tests

## Integration Testing

- Mark integration tests with `testing.Short()` checks
- Use real Steam URLs for integration tests when appropriate
- Test both cache hits and cache misses
- Verify response integrity between direct and cached responses
- Test against actual Steam servers for real-world validation
- Use `httptest.NewServer` for local testing scenarios
- Compare direct vs cached responses byte-for-byte

## Mocking and Stubbing

- Use `httptest.NewServer` for HTTP server mocking
- Create mock responses that match real Steam responses
- Test error conditions and edge cases
- Use `httptest.NewRecorder` for response testing

## Performance Testing

- Test with realistic data sizes
- Measure cache hit/miss ratios
- Test concurrent request handling
- Verify memory usage doesn't grow unbounded

## Cache Testing

- Test cache key generation and uniqueness
- Verify cache file format serialization/deserialization
- Test garbage collection algorithms
- Test cache eviction policies
- Test cache corruption scenarios and recovery
- Verify cache file format integrity (magic numbers, hashes)
- Test range request handling from cached files
- Test request coalescing behavior

## Service Detection Testing

- Test User-Agent pattern matching
- Test service configuration management
- Test cache key generation for different services
- Test service expandability (adding new services)

## Error Handling Testing

- Test network failures and timeouts
- Test malformed requests and responses
- Test cache corruption scenarios
- Test resource exhaustion conditions

## Test Timeouts

- All tests should run with appropriate timeouts
- Use `context.WithTimeout` for long-running operations
- Set reasonable timeouts for network operations
- Fail fast on obvious errors

## Test Coverage

- Aim for high test coverage on critical paths
- Test edge cases and error conditions
- Test concurrent access patterns
- Test resource cleanup and memory management

## Test Documentation

- Document complex test scenarios
- Explain the purpose of integration tests
- Include comments for non-obvious test logic
- Document expected behavior and assumptions
72
.cursor/rules/vfs-patterns.mdc
Normal file
@@ -0,0 +1,72 @@
---
description: VFS (Virtual File System) patterns and architecture
---

# VFS (Virtual File System) Patterns

## Core VFS Interface

- Implement the `vfs.VFS` interface for all storage backends
- Use interface implementation verification: `var _ vfs.VFS = (*Implementation)(nil)`
- Support both memory and disk-based storage with the same interface
- Provide size and capacity information for monitoring
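A self-contained sketch of the interface-verification idiom. The `VFS` interface here is a trimmed-down stand-in (the real one lives in the `vfs` package and has more methods), and `MemFS` is illustrative:

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
	"sync"
)

// VFS is a minimal stand-in for the real vfs.VFS interface.
type VFS interface {
	Write(key string, data []byte) error
	Read(key string) ([]byte, error)
	Size() int64
}

type MemFS struct {
	mu    sync.Mutex
	files map[string]*bytes.Buffer
	size  int64
}

// Compile-time check: the build fails if *MemFS stops satisfying VFS.
var _ VFS = (*MemFS)(nil)

func NewMemFS() *MemFS { return &MemFS{files: make(map[string]*bytes.Buffer)} }

func (m *MemFS) Write(key string, data []byte) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.files[key] = bytes.NewBuffer(append([]byte(nil), data...))
	m.size += int64(len(data)) // simplified: ignores overwrites
	return nil
}

func (m *MemFS) Read(key string) ([]byte, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	buf, ok := m.files[key]
	if !ok {
		return nil, errors.New("not found")
	}
	return buf.Bytes(), nil
}

func (m *MemFS) Size() int64 {
	m.mu.Lock()
	defer m.mu.Unlock()
	return m.size
}

func main() {
	fs := NewMemFS()
	fs.Write("k", []byte("v"))
	data, _ := fs.Read("k")
	fmt.Println(string(data), fs.Size())
}
```

The blank-identifier assignment costs nothing at runtime but turns an interface drift into a compile error instead of a failure at the first call site.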

## Tiered Cache Architecture

- Use `vfs/cache/cache.go` for two-tier caching (memory + disk)
- Implement lock-free tier switching with `atomic.Value`
- Prefer disk tier for persistence, memory tier for speed
- Support cache promotion from disk to memory
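The lock-free tier switch can be sketched with `atomic.Value`: readers load the current preference without taking a lock, and a writer swaps it atomically. Names here are illustrative, not the actual `vfs/cache` types:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// tier names whichever backing store a lookup should try first.
type tier struct{ name string }

// Cache keeps the preferred tier in an atomic.Value so hot-path readers
// never contend on a mutex just to decide where to look first.
type Cache struct {
	preferred atomic.Value // always holds *tier
}

func New() *Cache {
	c := &Cache{}
	c.preferred.Store(&tier{name: "memory"})
	return c
}

// Preferred returns the current first-lookup tier, lock-free.
func (c *Cache) Preferred() *tier {
	return c.preferred.Load().(*tier)
}

// Demote switches lookups to the disk tier, e.g. under memory pressure.
func (c *Cache) Demote() {
	c.preferred.Store(&tier{name: "disk"})
}

func main() {
	c := New()
	fmt.Println(c.Preferred().name)
	c.Demote()
	fmt.Println(c.Preferred().name)
}
```

Note that `atomic.Value` requires every stored value to have the same concrete type, which is why the whole `*tier` pointer is swapped rather than mutated in place.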

## Sharded File Systems

- Use sharded directory structures for Steam cache keys
- Implement 2-level sharding: `steam/XX/YY/hash` for optimal performance
- Use `vfs/locks/sharding.go` for sharded locking
- Reduce inode pressure with directory sharding

## Memory Management

- Use `bytes.Buffer` for in-memory file storage
- Implement batched time updates for performance
- Use LRU lists for eviction tracking
- Monitor memory fragmentation and usage

## Disk Storage

- Use memory-mapped files (`mmap`) for large file operations
- Implement efficient file path sharding
- Use batched operations for better I/O performance
- Support concurrent access with proper locking

## Garbage Collection Integration

- Wrap VFS implementations with `vfs/gc/gc.go`
- Support multiple GC algorithms (LRU, LFU, FIFO, etc.)
- Implement async GC with configurable thresholds
- Use eviction functions from `vfs/eviction/eviction.go`

## Performance Optimizations

- Use sharded locks to reduce contention
- Implement batched time updates (100ms intervals)
- Use atomic operations for lock-free updates
- Monitor and log performance metrics
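The sharded-lock idea can be sketched as follows; the shard count and the FNV hash are assumptions for illustration, not the exact choices in `vfs/locks/sharding.go`:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

// ShardedMutex spreads lock traffic across a fixed pool of mutexes, so
// operations on unrelated keys rarely contend on the same lock.
type ShardedMutex struct {
	shards [64]sync.Mutex
}

// shardFor hashes the key to pick a stable shard.
func (s *ShardedMutex) shardFor(key string) *sync.Mutex {
	h := fnv.New32a()
	h.Write([]byte(key))
	return &s.shards[h.Sum32()%uint32(len(s.shards))]
}

func (s *ShardedMutex) Lock(key string)   { s.shardFor(key).Lock() }
func (s *ShardedMutex) Unlock(key string) { s.shardFor(key).Unlock() }

func main() {
	var s ShardedMutex
	s.Lock("steam/ab/cd/abcdef")
	fmt.Println("locked one shard; other keys remain lockable")
	s.Unlock("steam/ab/cd/abcdef")
}
```

The trade-off: two different keys can still hash to the same shard, so correctness must never depend on distinct keys getting distinct locks, only on the same key always getting the same one.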

## Error Handling

- Use custom VFS errors from `vfs/vfserror/vfserror.go`
- Handle capacity exceeded scenarios gracefully
- Implement proper cleanup on errors
- Log VFS operations with context

## File Information Management

- Use `vfs/types/types.go` for file metadata
- Track access times, sizes, and other statistics
- Implement efficient file info storage and retrieval
- Support batched metadata updates

## Adaptive and Predictive Features

- Integrate with `vfs/adaptive/adaptive.go` for learning patterns
- Use `vfs/predictive/predictive.go` for cache warming
- Implement intelligent cache promotion strategies
- Monitor access patterns for optimization

## Testing VFS Implementations

- Test with realistic file sizes and access patterns
- Verify concurrent access scenarios
- Test garbage collection behavior
- Validate sharding and path generation
- Test error conditions and edge cases
120
steamcache/errors/errors.go
Normal file
@@ -0,0 +1,120 @@
// steamcache/errors/errors.go
package errors

import (
	"errors"
	"fmt"
	"net/http"
)

// Common SteamCache errors
var (
	ErrInvalidURL           = errors.New("steamcache: invalid URL")
	ErrUnsupportedService   = errors.New("steamcache: unsupported service")
	ErrUpstreamUnavailable  = errors.New("steamcache: upstream server unavailable")
	ErrCacheCorrupted       = errors.New("steamcache: cache file corrupted")
	ErrInvalidContentLength = errors.New("steamcache: invalid content length")
	ErrRequestTimeout       = errors.New("steamcache: request timeout")
	ErrRateLimitExceeded    = errors.New("steamcache: rate limit exceeded")
	ErrInvalidUserAgent     = errors.New("steamcache: invalid user agent")
)

// SteamCacheError represents a SteamCache-specific error with context
type SteamCacheError struct {
	Op         string      // Operation that failed
	URL        string      // URL that caused the error
	ClientIP   string      // Client IP address
	StatusCode int         // HTTP status code if applicable
	Err        error       // Underlying error
	Context    interface{} // Additional context
}

// Error implements the error interface
func (e *SteamCacheError) Error() string {
	if e.URL != "" && e.ClientIP != "" {
		return fmt.Sprintf("steamcache: %s failed for URL %q from client %s: %v", e.Op, e.URL, e.ClientIP, e.Err)
	}
	if e.URL != "" {
		return fmt.Sprintf("steamcache: %s failed for URL %q: %v", e.Op, e.URL, e.Err)
	}
	return fmt.Sprintf("steamcache: %s failed: %v", e.Op, e.Err)
}

// Unwrap returns the underlying error
func (e *SteamCacheError) Unwrap() error {
	return e.Err
}

// NewSteamCacheError creates a new SteamCache error with context
func NewSteamCacheError(op, url, clientIP string, err error) *SteamCacheError {
	return &SteamCacheError{
		Op:       op,
		URL:      url,
		ClientIP: clientIP,
		Err:      err,
	}
}

// NewSteamCacheErrorWithStatus creates a new SteamCache error with HTTP status
func NewSteamCacheErrorWithStatus(op, url, clientIP string, statusCode int, err error) *SteamCacheError {
	return &SteamCacheError{
		Op:         op,
		URL:        url,
		ClientIP:   clientIP,
		StatusCode: statusCode,
		Err:        err,
	}
}

// NewSteamCacheErrorWithContext creates a new SteamCache error with additional context
func NewSteamCacheErrorWithContext(op, url, clientIP string, context interface{}, err error) *SteamCacheError {
	return &SteamCacheError{
		Op:       op,
		URL:      url,
		ClientIP: clientIP,
		Context:  context,
		Err:      err,
	}
}

// IsRetryableError determines if an error is retryable
func IsRetryableError(err error) bool {
	if err == nil {
		return false
	}

	// Check for specific retryable errors
	if errors.Is(err, ErrUpstreamUnavailable) ||
		errors.Is(err, ErrRequestTimeout) {
		return true
	}

	// Check for HTTP status codes that are retryable
	if steamErr, ok := err.(*SteamCacheError); ok {
		switch steamErr.StatusCode {
		case http.StatusServiceUnavailable,
			http.StatusGatewayTimeout,
			http.StatusTooManyRequests,
			http.StatusInternalServerError:
			return true
		}
	}

	return false
}

// IsClientError determines if an error is a client error (4xx)
func IsClientError(err error) bool {
	if steamErr, ok := err.(*SteamCacheError); ok {
		return steamErr.StatusCode >= 400 && steamErr.StatusCode < 500
	}
	return false
}

// IsServerError determines if an error is a server error (5xx)
func IsServerError(err error) bool {
	if steamErr, ok := err.(*SteamCacheError); ok {
		return steamErr.StatusCode >= 500
	}
	return false
}
213
steamcache/metrics/metrics.go
Normal file
@@ -0,0 +1,213 @@
// steamcache/metrics/metrics.go
package metrics

import (
	"sync"
	"sync/atomic"
	"time"
)

// Metrics tracks various performance and operational metrics
type Metrics struct {
	// Request metrics
	TotalRequests  int64
	CacheHits      int64
	CacheMisses    int64
	CacheCoalesced int64
	Errors         int64
	RateLimited    int64

	// Performance metrics
	TotalResponseTime int64 // in nanoseconds
	TotalBytesServed  int64
	TotalBytesCached  int64

	// Cache metrics
	MemoryCacheSize int64
	DiskCacheSize   int64
	MemoryCacheHits int64
	DiskCacheHits   int64

	// Service metrics
	ServiceRequests map[string]int64
	serviceMutex    sync.RWMutex

	// Time tracking
	StartTime     time.Time
	LastResetTime time.Time
}

// NewMetrics creates a new metrics instance
func NewMetrics() *Metrics {
	now := time.Now()
	return &Metrics{
		ServiceRequests: make(map[string]int64),
		StartTime:       now,
		LastResetTime:   now,
	}
}

// IncrementTotalRequests increments the total request counter
func (m *Metrics) IncrementTotalRequests() {
	atomic.AddInt64(&m.TotalRequests, 1)
}

// IncrementCacheHits increments the cache hit counter
func (m *Metrics) IncrementCacheHits() {
	atomic.AddInt64(&m.CacheHits, 1)
}

// IncrementCacheMisses increments the cache miss counter
func (m *Metrics) IncrementCacheMisses() {
	atomic.AddInt64(&m.CacheMisses, 1)
}

// IncrementCacheCoalesced increments the coalesced request counter
func (m *Metrics) IncrementCacheCoalesced() {
	atomic.AddInt64(&m.CacheCoalesced, 1)
}

// IncrementErrors increments the error counter
func (m *Metrics) IncrementErrors() {
	atomic.AddInt64(&m.Errors, 1)
}

// IncrementRateLimited increments the rate limited counter
func (m *Metrics) IncrementRateLimited() {
	atomic.AddInt64(&m.RateLimited, 1)
}

// AddResponseTime adds response time to the total
func (m *Metrics) AddResponseTime(duration time.Duration) {
	atomic.AddInt64(&m.TotalResponseTime, int64(duration))
}

// AddBytesServed adds bytes served to the total
func (m *Metrics) AddBytesServed(bytes int64) {
	atomic.AddInt64(&m.TotalBytesServed, bytes)
}

// AddBytesCached adds bytes cached to the total
func (m *Metrics) AddBytesCached(bytes int64) {
	atomic.AddInt64(&m.TotalBytesCached, bytes)
}

// SetMemoryCacheSize sets the current memory cache size
func (m *Metrics) SetMemoryCacheSize(size int64) {
	atomic.StoreInt64(&m.MemoryCacheSize, size)
}

// SetDiskCacheSize sets the current disk cache size
func (m *Metrics) SetDiskCacheSize(size int64) {
	atomic.StoreInt64(&m.DiskCacheSize, size)
}

// IncrementMemoryCacheHits increments memory cache hits
func (m *Metrics) IncrementMemoryCacheHits() {
	atomic.AddInt64(&m.MemoryCacheHits, 1)
}

// IncrementDiskCacheHits increments disk cache hits
func (m *Metrics) IncrementDiskCacheHits() {
	atomic.AddInt64(&m.DiskCacheHits, 1)
}

// IncrementServiceRequests increments requests for a specific service
func (m *Metrics) IncrementServiceRequests(service string) {
	m.serviceMutex.Lock()
	defer m.serviceMutex.Unlock()
	m.ServiceRequests[service]++
}

// GetServiceRequests returns the number of requests for a service
func (m *Metrics) GetServiceRequests(service string) int64 {
	m.serviceMutex.RLock()
	defer m.serviceMutex.RUnlock()
	return m.ServiceRequests[service]
}

// GetStats returns a snapshot of current metrics
func (m *Metrics) GetStats() *Stats {
	totalRequests := atomic.LoadInt64(&m.TotalRequests)
	cacheHits := atomic.LoadInt64(&m.CacheHits)
	cacheMisses := atomic.LoadInt64(&m.CacheMisses)

	var hitRate float64
	if totalRequests > 0 {
		hitRate = float64(cacheHits) / float64(totalRequests)
	}

	var avgResponseTime time.Duration
	if totalRequests > 0 {
		avgResponseTime = time.Duration(atomic.LoadInt64(&m.TotalResponseTime) / totalRequests)
	}

	m.serviceMutex.RLock()
	serviceRequests := make(map[string]int64)
	for k, v := range m.ServiceRequests {
		serviceRequests[k] = v
	}
	m.serviceMutex.RUnlock()

	return &Stats{
		TotalRequests:    totalRequests,
		CacheHits:        cacheHits,
		CacheMisses:      cacheMisses,
		CacheCoalesced:   atomic.LoadInt64(&m.CacheCoalesced),
		Errors:           atomic.LoadInt64(&m.Errors),
		RateLimited:      atomic.LoadInt64(&m.RateLimited),
		HitRate:          hitRate,
		AvgResponseTime:  avgResponseTime,
		TotalBytesServed: atomic.LoadInt64(&m.TotalBytesServed),
		TotalBytesCached: atomic.LoadInt64(&m.TotalBytesCached),
		MemoryCacheSize:  atomic.LoadInt64(&m.MemoryCacheSize),
		DiskCacheSize:    atomic.LoadInt64(&m.DiskCacheSize),
		MemoryCacheHits:  atomic.LoadInt64(&m.MemoryCacheHits),
		DiskCacheHits:    atomic.LoadInt64(&m.DiskCacheHits),
		ServiceRequests:  serviceRequests,
		Uptime:           time.Since(m.StartTime),
		LastResetTime:    m.LastResetTime,
	}
}

// Reset resets all metrics to zero
func (m *Metrics) Reset() {
	atomic.StoreInt64(&m.TotalRequests, 0)
	atomic.StoreInt64(&m.CacheHits, 0)
	atomic.StoreInt64(&m.CacheMisses, 0)
	atomic.StoreInt64(&m.CacheCoalesced, 0)
	atomic.StoreInt64(&m.Errors, 0)
	atomic.StoreInt64(&m.RateLimited, 0)
	atomic.StoreInt64(&m.TotalResponseTime, 0)
	atomic.StoreInt64(&m.TotalBytesServed, 0)
	atomic.StoreInt64(&m.TotalBytesCached, 0)
	atomic.StoreInt64(&m.MemoryCacheHits, 0)
	atomic.StoreInt64(&m.DiskCacheHits, 0)

	m.serviceMutex.Lock()
	m.ServiceRequests = make(map[string]int64)
	m.serviceMutex.Unlock()

	m.LastResetTime = time.Now()
}

// Stats represents a snapshot of metrics
type Stats struct {
	TotalRequests    int64
	CacheHits        int64
	CacheMisses      int64
	CacheCoalesced   int64
	Errors           int64
	RateLimited      int64
	HitRate          float64
	AvgResponseTime  time.Duration
	TotalBytesServed int64
	TotalBytesCached int64
	MemoryCacheSize  int64
	DiskCacheSize    int64
	MemoryCacheHits  int64
	DiskCacheHits    int64
	ServiceRequests  map[string]int64
	Uptime           time.Duration
	LastResetTime    time.Time
}
@@ -13,7 +13,9 @@ import (
 	"net/url"
 	"os"
 	"regexp"
+	"s1d3sw1ped/steamcache2/steamcache/errors"
 	"s1d3sw1ped/steamcache2/steamcache/logger"
+	"s1d3sw1ped/steamcache2/steamcache/metrics"
 	"s1d3sw1ped/steamcache2/vfs"
 	"s1d3sw1ped/steamcache2/vfs/adaptive"
 	"s1d3sw1ped/steamcache2/vfs/cache"
@@ -360,13 +362,14 @@ func (sc *SteamCache) streamCachedResponse(w http.ResponseWriter, r *http.Reques
 	w.Write(rangeData)
 
 	logger.Logger.Info().
-		Str("key", cacheKey).
+		Str("cache_key", cacheKey).
 		Str("url", r.URL.String()).
 		Str("host", r.Host).
 		Str("client_ip", clientIP).
-		Str("status", "HIT").
+		Str("cache_status", "HIT").
 		Str("range", fmt.Sprintf("%d-%d/%d", start, end, totalSize)).
-		Dur("zduration", time.Since(tstart)).
+		Int64("range_size", end-start+1).
+		Dur("response_time", time.Since(tstart)).
 		Msg("cache request")
 
 	return
@@ -394,12 +397,13 @@ func (sc *SteamCache) streamCachedResponse(w http.ResponseWriter, r *http.Reques
 	w.Write(bodyData)
 
 	logger.Logger.Info().
-		Str("key", cacheKey).
+		Str("cache_key", cacheKey).
 		Str("url", r.URL.String()).
 		Str("host", r.Host).
 		Str("client_ip", clientIP).
-		Str("status", "HIT").
-		Dur("zduration", time.Since(tstart)).
+		Str("cache_status", "HIT").
+		Int64("file_size", int64(len(bodyData))).
+		Dur("response_time", time.Since(tstart)).
 		Msg("cache request")
 }
 
@@ -495,14 +499,19 @@ func parseRangeHeader(rangeHeader string, totalSize int64) (start, end, total in
 }
 
 // generateURLHash creates a SHA256 hash of the entire URL path for cache key
-func generateURLHash(urlPath string) string {
+func generateURLHash(urlPath string) (string, error) {
 	// Validate input to prevent cache key pollution
 	if urlPath == "" {
-		return ""
+		return "", errors.NewSteamCacheError("generateURLHash", urlPath, "", errors.ErrInvalidURL)
 	}
 
+	// Additional validation for suspicious patterns
+	if strings.Contains(urlPath, "..") || strings.Contains(urlPath, "//") {
+		return "", errors.NewSteamCacheError("generateURLHash", urlPath, "", errors.ErrInvalidURL)
+	}
+
 	hash := sha256.Sum256([]byte(urlPath))
-	return hex.EncodeToString(hash[:])
+	return hex.EncodeToString(hash[:]), nil
 }
 
 // calculateSHA256 calculates SHA256 hash of the given data
@@ -512,6 +521,35 @@ func calculateSHA256(data []byte) string {
 	return hex.EncodeToString(hasher.Sum(nil))
 }
 
+// validateURLPath validates URL path for security concerns
+func validateURLPath(urlPath string) error {
+	if urlPath == "" {
+		return errors.NewSteamCacheError("validateURLPath", urlPath, "", errors.ErrInvalidURL)
+	}
+
+	// Check for directory traversal attempts
+	if strings.Contains(urlPath, "..") {
+		return errors.NewSteamCacheError("validateURLPath", urlPath, "", errors.ErrInvalidURL)
+	}
+
+	// Check for double slashes (potential path manipulation)
+	if strings.Contains(urlPath, "//") {
+		return errors.NewSteamCacheError("validateURLPath", urlPath, "", errors.ErrInvalidURL)
+	}
+
+	// Check for suspicious characters
+	if strings.ContainsAny(urlPath, "<>\"'&") {
+		return errors.NewSteamCacheError("validateURLPath", urlPath, "", errors.ErrInvalidURL)
+	}
+
+	// Check for reasonable length (prevent DoS)
+	if len(urlPath) > 2048 {
+		return errors.NewSteamCacheError("validateURLPath", urlPath, "", errors.ErrInvalidURL)
+	}
+
+	return nil
+}
+
 // verifyCompleteFile verifies that we received the complete file by checking Content-Length
 // Returns true if the file is complete, false if it's incomplete (allowing retry)
 func (sc *SteamCache) verifyCompleteFile(bodyData []byte, resp *http.Response, urlPath string, cacheKey string) bool {
@@ -571,9 +609,20 @@ func (sc *SteamCache) detectService(r *http.Request) (*ServiceConfig, bool) {
 // The prefix indicates which service the request came from (detected via User-Agent)
 // Input: /depot/1684171/chunk/0016cfc5019b8baa6026aa1cce93e685d6e06c6e, "steam"
 // Output: steam/a1b2c3d4e5f678901234567890123456789012345678901234567890
-func generateServiceCacheKey(urlPath string, servicePrefix string) string {
+func generateServiceCacheKey(urlPath string, servicePrefix string) (string, error) {
+	// Validate service prefix
+	if servicePrefix == "" {
+		return "", errors.NewSteamCacheError("generateServiceCacheKey", urlPath, "", errors.ErrUnsupportedService)
+	}
+
+	// Generate hash for URL path
+	hash, err := generateURLHash(urlPath)
+	if err != nil {
+		return "", err
+	}
+
 	// Create a SHA256 hash of the entire path for all service client requests
-	return servicePrefix + "/" + generateURLHash(urlPath)
+	return servicePrefix + "/" + hash, nil
 }
 
 var hopByHopHeaders = map[string]struct{}{
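The cache key scheme in the hunk above can be sketched standalone: key = `<service prefix>/` + hex(SHA256(URL path without query string)). `serviceCacheKey` is a simplified stand-in without the validation the real function performs:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// serviceCacheKey mirrors the scheme above: prefix the hex-encoded
// SHA256 of the query-stripped URL path with the detected service.
func serviceCacheKey(prefix, urlPath string) string {
	sum := sha256.Sum256([]byte(urlPath))
	return prefix + "/" + hex.EncodeToString(sum[:])
}

func main() {
	key := serviceCacheKey("steam", "/depot/1684171/chunk/0016cfc5019b8baa6026aa1cce93e685d6e06c6e")
	fmt.Println(len(key))  // "steam/" (6) + 64 hex chars = 70
	fmt.Println(key[:6])   // steam/
}
```

This also explains the length-70 assertion in the tests further down: the 6-byte `steam/` prefix plus a 64-character SHA256 hex digest.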
@@ -788,6 +837,9 @@ type SteamCache struct {
 	// Dynamic memory management
 	memoryMonitor   *memory.MemoryMonitor
 	dynamicCacheMgr *memory.MemoryMonitor
+
+	// Metrics
+	metrics *metrics.Metrics
 }
 
 func New(address string, memorySize string, diskSize string, diskPath, upstream, memoryGC, diskGC string, maxConcurrentRequests int64, maxRequestsPerClient int64) *SteamCache {
@@ -930,6 +982,9 @@ func New(address string, memorySize string, diskSize string, diskPath, upstream,
 		// Initialize dynamic memory management
 		memoryMonitor:   memory.NewMemoryMonitor(uint64(memorysize), 10*time.Second, 0.1), // 10% threshold
 		dynamicCacheMgr: nil, // Will be set after cache creation
+
+		// Initialize metrics
+		metrics: metrics.NewMetrics(),
 	}
 
 	// Initialize dynamic cache manager if we have memory cache
@@ -1000,21 +1055,44 @@ func (sc *SteamCache) Shutdown() {
 	sc.wg.Wait()
 }
 
+// GetMetrics returns current metrics
+func (sc *SteamCache) GetMetrics() *metrics.Stats {
+	// Update cache sizes
+	if sc.memory != nil {
+		sc.metrics.SetMemoryCacheSize(sc.memory.Size())
+	}
+	if sc.disk != nil {
+		sc.metrics.SetDiskCacheSize(sc.disk.Size())
+	}
+
+	return sc.metrics.GetStats()
+}
+
+// ResetMetrics resets all metrics to zero
+func (sc *SteamCache) ResetMetrics() {
+	sc.metrics.Reset()
+}
+
 func (sc *SteamCache) ServeHTTP(w http.ResponseWriter, r *http.Request) {
+	clientIP := getClientIP(r)
+
 	// Set keep-alive headers for better performance
 	w.Header().Set("Connection", "keep-alive")
 	w.Header().Set("Keep-Alive", "timeout=300, max=1000")
 
 	// Apply global concurrency limit first
 	if err := sc.requestSemaphore.Acquire(context.Background(), 1); err != nil {
-		logger.Logger.Warn().Str("client_ip", getClientIP(r)).Msg("Server at capacity, rejecting request")
+		sc.metrics.IncrementRateLimited()
+		logger.Logger.Warn().Str("client_ip", clientIP).Msg("Server at capacity, rejecting request")
 		http.Error(w, "Server busy, please try again later", http.StatusServiceUnavailable)
 		return
 	}
 	defer sc.requestSemaphore.Release(1)
 
+	// Track total requests
+	sc.metrics.IncrementTotalRequests()
+
 	// Apply per-client rate limiting
-	clientIP := getClientIP(r)
 	clientLimiter := sc.getOrCreateClientLimiter(clientIP)
 
 	if err := clientLimiter.semaphore.Acquire(context.Background(), 1); err != nil {
@@ -1054,19 +1132,56 @@ func (sc *SteamCache) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 		return
 	}
 
+	if r.URL.String() == "/metrics" {
+		// Return metrics in a simple text format
+		stats := sc.GetMetrics()
+		w.Header().Set("Content-Type", "text/plain")
+		w.WriteHeader(http.StatusOK)
+		fmt.Fprintf(w, "# SteamCache2 Metrics\n")
+		fmt.Fprintf(w, "total_requests %d\n", stats.TotalRequests)
+		fmt.Fprintf(w, "cache_hits %d\n", stats.CacheHits)
+		fmt.Fprintf(w, "cache_misses %d\n", stats.CacheMisses)
+		fmt.Fprintf(w, "cache_coalesced %d\n", stats.CacheCoalesced)
+		fmt.Fprintf(w, "errors %d\n", stats.Errors)
+		fmt.Fprintf(w, "rate_limited %d\n", stats.RateLimited)
+		fmt.Fprintf(w, "hit_rate %.4f\n", stats.HitRate)
+		fmt.Fprintf(w, "avg_response_time_ms %.2f\n", float64(stats.AvgResponseTime.Nanoseconds())/1e6)
+		fmt.Fprintf(w, "total_bytes_served %d\n", stats.TotalBytesServed)
+		fmt.Fprintf(w, "total_bytes_cached %d\n", stats.TotalBytesCached)
+		fmt.Fprintf(w, "memory_cache_size %d\n", stats.MemoryCacheSize)
+		fmt.Fprintf(w, "disk_cache_size %d\n", stats.DiskCacheSize)
+		fmt.Fprintf(w, "uptime_seconds %.2f\n", stats.Uptime.Seconds())
+		return
+	}
+
 	// Check if this is a request from a supported service
 	if service, isSupported := sc.detectService(r); isSupported {
 		// trim the query parameters from the URL path
 		// this is necessary because the cache key should not include query parameters
 		urlPath, _, _ := strings.Cut(r.URL.String(), "?")
 
+		// Validate URL path for security
+		if err := validateURLPath(urlPath); err != nil {
+			logger.Logger.Warn().
+				Err(err).
+				Str("url", urlPath).
+				Str("client_ip", clientIP).
+				Msg("Invalid URL path detected")
+			http.Error(w, "Invalid URL", http.StatusBadRequest)
+			return
+		}
+
 		tstart := time.Now()
 
 		// Generate service cache key: {service}/{hash} (prefix indicates service via User-Agent)
-		cacheKey := generateServiceCacheKey(urlPath, service.Prefix)
-
-		if cacheKey == "" {
-			logger.Logger.Warn().Str("url", urlPath).Msg("Invalid URL")
+		cacheKey, err := generateServiceCacheKey(urlPath, service.Prefix)
+		if err != nil {
+			logger.Logger.Warn().
+				Err(err).
+				Str("url", urlPath).
+				Str("service", service.Name).
+				Str("client_ip", clientIP).
+				Msg("Failed to generate cache key")
 			http.Error(w, "Invalid URL", http.StatusBadRequest)
 			return
 		}
@@ -1110,6 +1225,12 @@ func (sc *SteamCache) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 		// Cache validation passed - record access for adaptive/predictive analysis
 		sc.recordCacheAccess(cacheKey, int64(len(cachedData)))
 
+		// Track cache hit metrics
+		sc.metrics.IncrementCacheHits()
+		sc.metrics.AddResponseTime(time.Since(tstart))
+		sc.metrics.AddBytesServed(int64(len(cachedData)))
+		sc.metrics.IncrementServiceRequests(service.Name)
+
 		logger.Logger.Debug().
 			Str("key", cacheKey).
 			Str("url", urlPath).
@@ -1175,13 +1296,21 @@ func (sc *SteamCache) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 			w.WriteHeader(coalescedReq.statusCode)
 			w.Write(responseData)
 
+			// Track coalesced cache hit metrics
+			sc.metrics.IncrementCacheCoalesced()
+			sc.metrics.AddResponseTime(time.Since(tstart))
+			sc.metrics.AddBytesServed(int64(len(responseData)))
+			sc.metrics.IncrementServiceRequests(service.Name)
+
 			logger.Logger.Info().
-				Str("key", cacheKey).
+				Str("cache_key", cacheKey).
 				Str("url", urlPath).
 				Str("host", r.Host).
 				Str("client_ip", clientIP).
-				Str("status", "HIT-COALESCED").
-				Dur("zduration", time.Since(tstart)).
+				Str("cache_status", "HIT-COALESCED").
 				Int("waiting_clients", coalescedReq.waitingCount).
+				Int64("file_size", int64(len(responseData))).
+				Dur("response_time", time.Since(tstart)).
 				Msg("cache request")
 
 			return
@@ -1359,6 +1488,12 @@ func (sc *SteamCache) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 		w.WriteHeader(resp.StatusCode)
 		w.Write(bodyData)
 
+		// Track cache miss metrics
+		sc.metrics.IncrementCacheMisses()
+		sc.metrics.AddResponseTime(time.Since(tstart))
+		sc.metrics.AddBytesServed(int64(len(bodyData)))
+		sc.metrics.IncrementServiceRequests(service.Name)
+
 		// Cache the file if validation passed
 		if validationPassed {
 			// Verify we received the complete file by checking Content-Length
@@ -1399,6 +1534,8 @@ func (sc *SteamCache) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 					Msg("Cache write failed or incomplete - removing corrupted entry")
 				sc.vfs.Delete(cachePath)
 			} else {
+				// Track successful cache write
+				sc.metrics.AddBytesCached(int64(len(cacheData)))
 				logger.Logger.Debug().
 					Str("key", cacheKey).
 					Str("url", urlPath).
@@ -1456,12 +1593,14 @@ func (sc *SteamCache) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 	}
 
 	logger.Logger.Info().
-		Str("key", cacheKey).
+		Str("cache_key", cacheKey).
 		Str("url", urlPath).
 		Str("host", r.Host).
 		Str("client_ip", clientIP).
-		Str("status", "MISS").
-		Dur("zduration", time.Since(tstart)).
+		Str("service", service.Name).
+		Str("cache_status", "MISS").
+		Int64("file_size", int64(len(bodyData))).
+		Dur("response_time", time.Since(tstart)).
 		Msg("cache request")
 
 	return
@@ -3,6 +3,8 @@ package steamcache
|
||||
|
||||
import (
|
||||
"io"
|
||||
"s1d3sw1ped/steamcache2/steamcache/errors"
|
||||
"s1d3sw1ped/steamcache2/vfs/vfserror"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
@@ -164,10 +166,13 @@ func TestURLHashing(t *testing.T) {
|
||||
|
||||
for _, tc := range testCases {
|
||||
t.Run(tc.desc, func(t *testing.T) {
|
||||
result := generateServiceCacheKey(tc.input, "steam")
|
||||
result, err := generateServiceCacheKey(tc.input, "steam")
|
||||
|
||||
if tc.shouldCache {
|
||||
// Should return a cache key with "steam/" prefix
|
||||
if err != nil {
|
||||
t.Errorf("generateServiceCacheKey(%s, \"steam\") returned error: %v", tc.input, err)
|
||||
}
|
||||
if !strings.HasPrefix(result, "steam/") {
|
||||
t.Errorf("generateServiceCacheKey(%s, \"steam\") = %s, expected steam/ prefix", tc.input, result)
|
||||
}
|
||||
@@ -176,9 +181,9 @@ func TestURLHashing(t *testing.T) {
 				t.Errorf("generateServiceCacheKey(%s, \"steam\") length = %d, expected 70", tc.input, len(result))
 			}
 		} else {
-			// Should return empty string for non-Steam URLs
-			if result != "" {
-				t.Errorf("generateServiceCacheKey(%s, \"steam\") = %s, expected empty string", tc.input, result)
+			// Should return error for invalid URLs
+			if err == nil {
+				t.Errorf("generateServiceCacheKey(%s, \"steam\") should have returned error", tc.input)
 			}
 		}
 	})
@@ -322,8 +327,14 @@ func TestServiceManagerExpandability(t *testing.T) {
 	}

 	// Test cache key generation for different services
-	steamKey := generateServiceCacheKey("/depot/123/chunk/abc", "steam")
-	epicKey := generateServiceCacheKey("/epic/123/chunk/abc", "epic")
+	steamKey, err := generateServiceCacheKey("/depot/123/chunk/abc", "steam")
+	if err != nil {
+		t.Errorf("Failed to generate Steam cache key: %v", err)
+	}
+	epicKey, err := generateServiceCacheKey("/epic/123/chunk/abc", "epic")
+	if err != nil {
+		t.Errorf("Failed to generate Epic cache key: %v", err)
+	}

 	if !strings.HasPrefix(steamKey, "steam/") {
 		t.Errorf("Steam cache key should start with 'steam/', got: %s", steamKey)
@@ -367,4 +378,139 @@ func TestSteamKeySharding(t *testing.T) {
 	// and be readable, whereas without sharding it might not work correctly
 }

+// TestURLValidation tests the URL validation function
+func TestURLValidation(t *testing.T) {
+	testCases := []struct {
+		urlPath     string
+		shouldPass  bool
+		description string
+	}{
+		{
+			urlPath:     "/depot/123/chunk/abc",
+			shouldPass:  true,
+			description: "valid Steam URL",
+		},
+		{
+			urlPath:     "/appinfo/456",
+			shouldPass:  true,
+			description: "valid app info URL",
+		},
+		{
+			urlPath:     "",
+			shouldPass:  false,
+			description: "empty URL",
+		},
+		{
+			urlPath:     "/depot/../etc/passwd",
+			shouldPass:  false,
+			description: "directory traversal attempt",
+		},
+		{
+			urlPath:     "/depot//123/chunk/abc",
+			shouldPass:  false,
+			description: "double slash",
+		},
+		{
+			urlPath:     "/depot/123/chunk/abc<script>",
+			shouldPass:  false,
+			description: "suspicious characters",
+		},
+		{
+			urlPath:     strings.Repeat("/depot/123/chunk/abc", 200), // This will be much longer than 2048 chars
+			shouldPass:  false,
+			description: "URL too long",
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.description, func(t *testing.T) {
+			err := validateURLPath(tc.urlPath)
+			if tc.shouldPass && err != nil {
+				t.Errorf("validateURLPath(%q) should pass but got error: %v", tc.urlPath, err)
+			}
+			if !tc.shouldPass && err == nil {
+				t.Errorf("validateURLPath(%q) should fail but passed", tc.urlPath)
+			}
+		})
+	}
+}
+
+// TestErrorTypes tests the custom error types
+func TestErrorTypes(t *testing.T) {
+	// Test VFS error
+	vfsErr := vfserror.NewVFSError("test", "key1", vfserror.ErrNotFound)
+	if vfsErr.Error() == "" {
+		t.Error("VFS error should have a message")
+	}
+	if vfsErr.Unwrap() != vfserror.ErrNotFound {
+		t.Error("VFS error should unwrap to the underlying error")
+	}
+
+	// Test SteamCache error
+	scErr := errors.NewSteamCacheError("test", "/test/url", "127.0.0.1", errors.ErrInvalidURL)
+	if scErr.Error() == "" {
+		t.Error("SteamCache error should have a message")
+	}
+	if scErr.Unwrap() != errors.ErrInvalidURL {
+		t.Error("SteamCache error should unwrap to the underlying error")
+	}
+
+	// Test retryable error detection
+	if !errors.IsRetryableError(errors.ErrUpstreamUnavailable) {
+		t.Error("Upstream unavailable should be retryable")
+	}
+	if errors.IsRetryableError(errors.ErrInvalidURL) {
+		t.Error("Invalid URL should not be retryable")
+	}
+}
+
+// TestMetrics tests the metrics functionality
+func TestMetrics(t *testing.T) {
+	td := t.TempDir()
+	sc := New("localhost:8080", "1G", "1G", td, "", "lru", "lru", 200, 5)
+
+	// Test initial metrics
+	stats := sc.GetMetrics()
+	if stats.TotalRequests != 0 {
+		t.Error("Initial total requests should be 0")
+	}
+	if stats.CacheHits != 0 {
+		t.Error("Initial cache hits should be 0")
+	}
+
+	// Test metrics increment
+	sc.metrics.IncrementTotalRequests()
+	sc.metrics.IncrementCacheHits()
+	sc.metrics.IncrementCacheMisses()
+	sc.metrics.AddBytesServed(1024)
+	sc.metrics.IncrementServiceRequests("steam")
+
+	stats = sc.GetMetrics()
+	if stats.TotalRequests != 1 {
+		t.Error("Total requests should be 1")
+	}
+	if stats.CacheHits != 1 {
+		t.Error("Cache hits should be 1")
+	}
+	if stats.CacheMisses != 1 {
+		t.Error("Cache misses should be 1")
+	}
+	if stats.TotalBytesServed != 1024 {
+		t.Error("Total bytes served should be 1024")
+	}
+	if stats.ServiceRequests["steam"] != 1 {
+		t.Error("Steam service requests should be 1")
+	}
+
+	// Test metrics reset
+	sc.ResetMetrics()
+	stats = sc.GetMetrics()
+	if stats.TotalRequests != 0 {
+		t.Error("After reset, total requests should be 0")
+	}
+	if stats.CacheHits != 0 {
+		t.Error("After reset, cache hits should be 0")
+	}
+}
+
+// Removed old TestKeyGeneration - replaced with TestURLHashing that uses SHA256

@@ -1,7 +1,10 @@
 // vfs/vfserror/vfserror.go
 package vfserror

-import "errors"
+import (
+	"errors"
+	"fmt"
+)

 // Common VFS errors
 var (
@@ -9,4 +12,47 @@ var (
 	ErrInvalidKey    = errors.New("vfs: invalid key")
 	ErrAlreadyExists = errors.New("vfs: key already exists")
+	ErrCapacityExceeded = errors.New("vfs: capacity exceeded")
+	ErrCorruptedFile    = errors.New("vfs: corrupted file")
+	ErrInvalidSize      = errors.New("vfs: invalid size")
+	ErrOperationTimeout = errors.New("vfs: operation timeout")
 )

+// VFSError represents a VFS-specific error with context
+type VFSError struct {
+	Op   string // Operation that failed
+	Key  string // Key that caused the error
+	Err  error  // Underlying error
+	Size int64  // Size information if relevant
+}
+
+// Error implements the error interface
+func (e *VFSError) Error() string {
+	if e.Key != "" {
+		return fmt.Sprintf("vfs: %s failed for key %q: %v", e.Op, e.Key, e.Err)
+	}
+	return fmt.Sprintf("vfs: %s failed: %v", e.Op, e.Err)
+}
+
+// Unwrap returns the underlying error
+func (e *VFSError) Unwrap() error {
+	return e.Err
+}
+
+// NewVFSError creates a new VFS error with context
+func NewVFSError(op, key string, err error) *VFSError {
+	return &VFSError{
+		Op:  op,
+		Key: key,
+		Err: err,
+	}
+}
+
+// NewVFSErrorWithSize creates a new VFS error with size context
+func NewVFSErrorWithSize(op, key string, size int64, err error) *VFSError {
+	return &VFSError{
+		Op:   op,
+		Key:  key,
+		Size: size,
+		Err:  err,
+	}
+}
