23 Commits

Author SHA1 Message Date
9ca8fa4a5e Add concurrency limits and configuration options for SteamCache
- Introduced maxConcurrentRequests and maxRequestsPerClient fields in the Config struct to manage request limits.
- Updated the SteamCache implementation to utilize these new configuration options for controlling concurrent requests.
- Enhanced the ServeHTTP method to enforce global and per-client rate limiting using semaphores.
- Modified the root command to accept new flags for configuring concurrency limits via command-line arguments.
- Updated tests to reflect changes in the SteamCache initialization and request handling logic.
2025-09-02 06:50:42 -05:00
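The global cap described above behaves like a counting semaphore wrapped around the handler. A minimal sketch, using golang.org/x/sync/semaphore as the diff below does; the wrapper type, handler, and port here are illustrative, not the project's actual wiring:

```go
package main

import (
	"fmt"
	"net/http"

	"golang.org/x/sync/semaphore"
)

// limitedHandler caps how many requests may be in flight at once.
type limitedHandler struct {
	global *semaphore.Weighted
	next   http.Handler
}

func (h *limitedHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	// TryAcquire rejects immediately instead of queueing when saturated.
	if !h.global.TryAcquire(1) {
		http.Error(w, "Server busy, please try again later", http.StatusServiceUnavailable)
		return
	}
	defer h.global.Release(1)
	h.next.ServeHTTP(w, r)
}

func main() {
	handler := &limitedHandler{
		global: semaphore.NewWeighted(200), // matches the default max_concurrent_requests
		next: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "ok")
		}),
	}
	http.ListenAndServe(":8080", handler)
}
```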
7fb1fcf21f Remove unused thread configuration from root command and streamline initialization process
- Eliminated the threads variable and its associated logic for setting maximum processing threads.
- Simplified the command initialization by removing unnecessary flags related to thread management.
2025-09-02 05:59:18 -05:00
ee6fc32a1a Update content type validation in ServeHTTP method for Steam files
- Changed expected Content-Type from "application/octet-stream" to "application/x-steam-chunk" to align with Steam's file specifications.
- Enhanced warning message for unexpected content types to provide clearer context for debugging.
2025-09-02 05:48:24 -05:00
4a4579b0f3 Refactor caching logic and enhance hash generation in steamcache
- Replaced SHA1 hash calculations with SHA256 for improved security and consistency in cache key generation.
- Introduced a new TestURLHashing function to validate the new cache key generation logic.
- Removed outdated hash calculation tests and streamlined the caching process to focus on URL-based hashing.
- Implemented lightweight validation methods in ServeHTTP to enhance performance and reliability of cached responses.
- Added batched time updates in VFS implementations for better performance during access time tracking.
2025-09-02 05:45:44 -05:00
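A minimal sketch of the URL-based hashing this commit describes; the function name and "steam/" prefix mirror the steamcache.go diff further down, and the depot path is the example used there:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// cacheKey hashes the full request path so every /depot/... URL maps to a
// fixed-length key under a single "steam/" namespace.
func cacheKey(urlPath string) string {
	if !strings.HasPrefix(urlPath, "/depot/") {
		return "" // non-depot URLs are not cached
	}
	sum := sha256.Sum256([]byte(urlPath))
	return "steam/" + hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(cacheKey("/depot/1684171/chunk/0016cfc5019b8baa6026aa1cce93e685d6e06c6e"))
}
```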
b9358a0e8d Refactor steamcache.go to simplify code and improve readability
- Removed the min function and the verifyResponseHash function to streamline the codebase.
- Updated extractHashFromSteamPath to use strings.TrimPrefix for cleaner path handling.
- Retained comments regarding removed Prometheus metrics for future reference.
2025-09-02 05:03:15 -05:00
c197841960 Refactor configuration management and enhance build process
- Introduced a YAML-based configuration system, allowing for automatic generation of a default `config.yaml` file.
- Updated the application to load configuration settings from the YAML file, improving flexibility and ease of use.
- Added a Makefile to streamline development tasks, including running the application, testing, and managing dependencies.
- Enhanced `.gitignore` to include build artifacts and configuration files.
- Removed unused Prometheus metrics and related code to simplify the codebase.
- Updated dependencies in `go.mod` and `go.sum` for improved functionality and performance.
2025-09-02 05:01:42 -05:00
6919358eab Enhance file metadata tracking and garbage collection logic
- Added AccessCount field to FileInfo struct for improved tracking of file access frequency.
- Updated NewFileInfo and NewFileInfoFromOS functions to initialize AccessCount.
- Modified DiskFS and MemoryFS to preserve and increment AccessCount during file operations.
- Enhanced garbage collection methods (LRU, LFU, FIFO, Largest, Smallest, Hybrid) to utilize AccessCount for more effective space reclamation.
2025-07-19 09:07:49 -05:00
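A minimal sketch of the access tracking this commit adds; the FileInfo fields are taken from the commit text, while the touch helper and its locking are assumptions rather than the project's actual VFS code:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// FileInfo is sketched from the fields named in the commit; the real struct
// lives in the project's VFS packages.
type FileInfo struct {
	Name        string
	ATime       time.Time
	AccessCount int
}

// touch records an access the way Open is described to: bump the counter
// and refresh the access time under a lock.
func touch(mu *sync.Mutex, fi *FileInfo) {
	mu.Lock()
	defer mu.Unlock()
	fi.AccessCount++
	fi.ATime = time.Now()
}

func main() {
	var mu sync.Mutex
	fi := &FileInfo{Name: "steam/abc"}
	touch(&mu, fi)
	fmt.Println(fi.AccessCount) // 1
}
```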
1187f05c77 revert 30e804709f
revert Enhance FileInfo structure and DiskFS functionality

- Added CTime (creation time) and AccessCount fields to FileInfo struct for better file metadata tracking.
- Updated NewFileInfo and NewFileInfoFromOS functions to initialize new fields.
- Enhanced DiskFS to maintain access counts and file metadata, including flushing to JSON files.
- Modified Open and Create methods to increment access counts and set creation times appropriately.
- Updated garbage collection logic to utilize real access counts for files.
2025-07-19 14:02:53 +00:00
f6f93c86c8 Update launch.json to modify memory-gc strategy and comment out upstream server configuration
- Changed memory-gc strategy from 'lfu' to 'lru' for improved cache management.
- Commented out the upstream server configuration to prevent potential connectivity issues during development.
2025-07-19 08:07:36 -05:00
30e804709f Enhance FileInfo structure and DiskFS functionality
- Added CTime (creation time) and AccessCount fields to FileInfo struct for better file metadata tracking.
- Updated NewFileInfo and NewFileInfoFromOS functions to initialize new fields.
- Enhanced DiskFS to maintain access counts and file metadata, including flushing to JSON files.
- Modified Open and Create methods to increment access counts and set creation times appropriately.
- Updated garbage collection logic to utilize real access counts for files.
2025-07-19 05:29:18 -05:00
56bb1ddc12 Add hop-by-hop header handling in ServeHTTP method
- Introduced a map for hop-by-hop headers to be removed from responses.
- Enhanced cache serving logic to read and filter HTTP responses, ensuring only relevant headers are forwarded.
- Updated cache writing to handle full HTTP responses, improving cache integrity and performance.
2025-07-19 05:07:36 -05:00
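A minimal sketch of the hop-by-hop filtering this commit introduces; the header set follows RFC 7230's connection-scoped list, and the copy helper is illustrative rather than the project's exact code:

```go
package main

import (
	"fmt"
	"net/http"
)

// Hop-by-hop headers are connection-scoped (RFC 7230, section 6.1) and must
// not be replayed from a stored response onto a new client connection.
var hopByHop = map[string]struct{}{
	"Connection": {}, "Keep-Alive": {}, "Proxy-Authenticate": {},
	"Proxy-Authorization": {}, "TE": {}, "Trailer": {},
	"Transfer-Encoding": {}, "Upgrade": {},
}

// copyEndToEndHeaders forwards only end-to-end headers to the client.
func copyEndToEndHeaders(dst, src http.Header) {
	for k, vv := range src {
		if _, skip := hopByHop[http.CanonicalHeaderKey(k)]; skip {
			continue
		}
		for _, v := range vv {
			dst.Add(k, v)
		}
	}
}

func main() {
	src := http.Header{"Connection": {"keep-alive"}, "Content-Type": {"application/x-steam-chunk"}}
	dst := http.Header{}
	copyEndToEndHeaders(dst, src)
	fmt.Println(dst) // Connection is dropped, Content-Type is forwarded
}
```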
9c65cdb156 Fix HTTP status code for root path in ServeHTTP method to ensure correct response for upstream verification
2025-07-19 04:42:20 -05:00
ae013f9a3b Enhance SteamCache configuration and HTTP client settings
- Added upstream server configuration to launch.json for improved connectivity.
- Increased HTTP client timeout from 60s to 120s for better handling of slow responses.
- Updated server timeouts in steamcache.go: increased ReadTimeout to 30s and WriteTimeout to 60s.
- Introduced ReadHeaderTimeout to mitigate header attacks and set MaxHeaderBytes to 1MB.
- Improved error logging in the Run method to include HTTP status codes for better debugging.
- Adjusted ServeHTTP method to handle root path and metrics endpoint correctly.
2025-07-19 04:40:05 -05:00
d94b53c395 Merge pull request 'Update .goreleaser.yaml and enhance HTTP client settings in steamcache.go' (#10) from fix/connection-pooling into main
Reviewed-on: s1d3sw1ped/SteamCache2#10
2025-07-19 09:13:37 +00:00
847931ed43 Update .goreleaser.yaml and enhance HTTP client settings in steamcache.go
- Removed copyright footer from .goreleaser.yaml.
- Increased HTTP client connection settings in steamcache.go for improved performance:
  - MaxIdleConns from 100 to 200
  - MaxIdleConnsPerHost from 10 to 50
  - IdleConnTimeout from 90s to 120s
  - TLSHandshakeTimeout from 10s to 15s
  - ResponseHeaderTimeout from 10s to 30s
  - ExpectContinueTimeout from 1s to 5s
  - Added DisableCompression and ForceAttemptHTTP2 options.
- Removed debug logging for manifest files in ServeHTTP method.
2025-07-19 04:12:56 -05:00
4387236d22 Merge pull request 'Update .goreleaser.yaml to use hyphens in name templates for archives and releases' (#9) from fix/goreleaser-config-fix-really into main
Reviewed-on: s1d3sw1ped/SteamCache2#9
2025-07-19 08:23:09 +00:00
f6ce004922 Update .goreleaser.yaml to use hyphens in name templates for archives and releases
2025-07-19 03:22:08 -05:00
8e487876d2 Merge pull request 'Remove steamcache2 from the list of files in .goreleaser.yaml archives section.' (#8) from fix/goreleaser-config-fix into main
Reviewed-on: s1d3sw1ped/SteamCache2#8
2025-07-19 08:04:40 +00:00
1be7f5bd20 Remove steamcache2 from the list of files in .goreleaser.yaml archives section.
2025-07-19 03:02:39 -05:00
f237b89ca7 Merge pull request 'Update versioning and logging in SteamCache2' (#7) from fix/goreleaser-config into main
Reviewed-on: s1d3sw1ped/SteamCache2#7
2025-07-19 07:59:02 +00:00
ae07239021 Update versioning and logging in SteamCache2
- Enhanced .goreleaser.yaml for improved build configuration, including static linking and ARM64 support.
- Updated logging in root.go to include version date during startup.
- Modified version.go to initialize and expose the build date alongside the version.
- Adjusted version command output to display both version and date for better clarity.
2025-07-19 02:58:19 -05:00
4876998f5d Merge pull request 'Enhance garbage collection and caching functionality' (#6) from feature/extended-gc-and-verification into main
Reviewed-on: s1d3sw1ped/SteamCache2#6
2025-07-19 07:28:12 +00:00
163e64790c Enhance garbage collection and caching functionality
- Updated .gitignore to include all .exe files and ensure .smashignore is tracked.
- Expanded README.md with advanced configuration options for garbage collection algorithms, detailing available algorithms and use cases.
- Modified launch.json to include memory and disk garbage collection flags for better configuration.
- Refactored root.go to introduce memoryGC and diskGC flags for garbage collection algorithms.
- Implemented hash extraction and verification in steamcache.go to ensure data integrity during caching.
- Added new tests in steamcache_test.go for hash extraction and verification, ensuring correctness of caching behavior.
- Enhanced garbage collection strategies in gc.go, introducing LFU, FIFO, Largest, Smallest, and Hybrid algorithms with corresponding metrics.
- Updated caching logic to conditionally cache responses based on hash verification results.
2025-07-19 02:27:04 -05:00
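A minimal sketch of how the eviction strategies named above differ; fileMeta and evictionOrder are hypothetical stand-ins, not the project's gc package API:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// fileMeta is a hypothetical stand-in for the per-file metadata the GC
// strategies consult (access time, access count, size).
type fileMeta struct {
	name        string
	size        uint
	accessTime  time.Time
	accessCount int
}

// evictionOrder sorts candidates so the first entries are reclaimed first:
// "lru" by access time, "lfu" by access count, "largest" by size.
func evictionOrder(files []fileMeta, algo string) {
	sort.Slice(files, func(i, j int) bool {
		switch algo {
		case "lfu":
			return files[i].accessCount < files[j].accessCount
		case "largest":
			return files[i].size > files[j].size
		default: // lru
			return files[i].accessTime.Before(files[j].accessTime)
		}
	})
}

func main() {
	files := []fileMeta{
		{"a", 10, time.Now().Add(-time.Hour), 50},
		{"b", 99, time.Now(), 1},
	}
	evictionOrder(files, "lfu")
	fmt.Println(files[0].name) // "b": least frequently used is evicted first
}
```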
25 changed files with 2384 additions and 1610 deletions

.gitignore

@@ -1,5 +1,11 @@
-dist/
-tmp/
-__*.exe
-.smashed.txt
-.smashignore
+#build artifacts
+/dist/
+#disk cache
+/disk/
+#config file
+/config.yaml
+#windows executables
+*.exe

.goreleaser.yaml

@@ -2,11 +2,17 @@ version: 2
 before:
   hooks:
-    - go mod tidy
+    - go mod tidy -v
 builds:
-  - ldflags:
+  - id: default
+    binary: steamcache2
+    ldflags:
+      - -s
+      - -w
+      - -extldflags "-static"
       - -X s1d3sw1ped/SteamCache2/version.Version={{.Version}}
+      - -X s1d3sw1ped/SteamCache2/version.Date={{.Date}}
     env:
       - CGO_ENABLED=0
     goos:
@@ -14,19 +20,24 @@ builds:
       - windows
     goarch:
       - amd64
+      - arm64
+    ignore:
+      - goos: windows
+        goarch: arm64
+checksum:
+  name_template: "checksums.txt"
 archives:
-  - formats: tar.gz
-    name_template: >-
-      {{ .ProjectName }}_
-      {{- title .Os }}_
-      {{- if eq .Arch "amd64" }}x86_64
-      {{- else if eq .Arch "386" }}i386
-      {{- else }}{{ .Arch }}{{ end }}
-      {{- if .Arm }}v{{ .Arm }}{{ end }}
+  - id: default
+    name_template: "{{ .ProjectName }}-{{ .Os }}-{{ .Arch }}"
+    formats: tar.gz
     format_overrides:
       - goos: windows
         formats: zip
+    files:
+      - README.md
+      - LICENSE
 changelog:
   sort: asc
@@ -36,12 +47,7 @@ changelog:
     - "^test:"
 release:
-  name_template: '{{.ProjectName}}-{{.Version}}'
-  footer: >-
-    ---
-    Released by [GoReleaser](https://github.com/goreleaser/goreleaser).
+  name_template: "{{ .ProjectName }}-{{ .Version }}"
 gitea_urls:
   api: https://git.s1d3sw1ped.com/api/v1

.vscode/launch.json (deleted)

@@ -1,53 +0,0 @@
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Launch Memory & Disk",
"type": "go",
"request": "launch",
"mode": "auto",
"program": "${workspaceFolder}/main.go",
"args": [
"--memory",
"1G",
"--disk",
"10G",
"--disk-path",
"tmp/disk",
"--log-level",
"debug",
],
},
{
"name": "Launch Disk Only",
"type": "go",
"request": "launch",
"mode": "auto",
"program": "${workspaceFolder}/main.go",
"args": [
"--disk",
"10G",
"--disk-path",
"tmp/disk",
"--log-level",
"debug",
],
},
{
"name": "Launch Memory Only",
"type": "go",
"request": "launch",
"mode": "auto",
"program": "${workspaceFolder}/main.go",
"args": [
"--memory",
"1G",
"--log-level",
"debug",
],
}
]
}

Makefile (new file)

@@ -0,0 +1,19 @@
run: deps test ## Run the application
@go run .
help: ## Show this help message
@echo SteamCache2 Makefile
@echo Available targets:
@echo run Run the application
@echo run-debug Run the application with debug logging
@echo test Run all tests
@echo deps Download dependencies
run-debug: deps test ## Run the application with debug logging
@go run . --log-level debug
test: deps ## Run all tests
@go test -v ./...
deps: ## Download dependencies
@go mod tidy

README.md

@@ -10,15 +10,154 @@ SteamCache2 is a blazing fast download cache for Steam, designed to reduce bandw
 - Reduces bandwidth usage
 - Easy to set up and configure aside from dns stuff to trick Steam into using it
 - Supports multiple clients
+- **NEW:** YAML configuration system with automatic config generation
+- **NEW:** Simple Makefile for development workflow
+- Cross-platform builds (Linux, macOS, Windows)
 
-## Usage
+## Quick Start
 
-1. Start the cache server:
-   ```sh
-   ./SteamCache2 --memory 1G --disk 10G --disk-path tmp/disk
-   ```
-2. Configure your DNS:
-   - If your on Windows and don't want a whole network implementation (THIS)[#windows-hosts-file-override]
+### First Time Setup
+
+1. **Clone and build:**
+   ```bash
+   git clone <repository-url>
+   cd SteamCache2
+   make # This will run tests and build the application
+   ```
+
+2. **Run the application** (it will create a default config):
+   ```bash
+   ./steamcache2
+   # or on Windows:
+   steamcache2.exe
+   ```
+
+   The application will automatically create a `config.yaml` file with default settings and exit, allowing you to customize it.
+
+3. **Edit the configuration** (`config.yaml`):
+   ```yaml
+   listen_address: :80
+   cache:
+     memory:
+       size: 1GB
+       gc_algorithm: lru
+     disk:
+       size: 10GB
+       path: ./disk
+       gc_algorithm: hybrid
+   upstream: "https://steam.cdn.com" # Set your upstream server
+   ```
+
+4. **Run the application again:**
+   ```bash
+   make run # or ./steamcache2
+   ```
+
+### Development Workflow
+
+```bash
+# Run all tests and start the application (default target)
+make
+
+# Run only tests
+make test
+
+# Run with debug logging
+make run-debug
+
+# Download dependencies
+make deps
+
+# Show available commands
+make help
+```
+
+### Command Line Flags
+
+While most configuration is done via the YAML file, some runtime options are still available as command-line flags:
+
+```bash
+# Use a custom config file
+./steamcache2 --config /path/to/my-config.yaml
+
+# Set logging level
+./steamcache2 --log-level debug --log-format json
+
+# Set number of worker threads
+./steamcache2 --threads 8
+
+# Show help
+./steamcache2 --help
+```
+
+### Configuration
+
+SteamCache2 uses a YAML configuration file (`config.yaml`) for all settings. Here's a complete configuration example:
+
+```yaml
+# Server configuration
+listen_address: :80
+
+# Cache configuration
+cache:
+  # Memory cache settings
+  memory:
+    # Size of memory cache (e.g., "512MB", "1GB", "0" to disable)
+    size: 1GB
+    # Garbage collection algorithm
+    gc_algorithm: lru
+  # Disk cache settings
+  disk:
+    # Size of disk cache (e.g., "10GB", "50GB", "0" to disable)
+    size: 10GB
+    # Path to disk cache directory
+    path: ./disk
+    # Garbage collection algorithm
+    gc_algorithm: hybrid
+
+# Upstream server configuration
+# The upstream server to proxy requests to
+upstream: "https://steam.cdn.com"
+```
+
+#### Garbage Collection Algorithms
+
+SteamCache2 supports different garbage collection algorithms for memory and disk caches, allowing you to optimize performance for each storage tier:
+
+**Available GC Algorithms:**
+- **`lru`** (default): Least Recently Used - evicts oldest accessed files
+- **`lfu`**: Least Frequently Used - evicts least accessed files (good for popular content)
+- **`fifo`**: First In, First Out - evicts oldest created files (predictable)
+- **`largest`**: Size-based - evicts largest files first (maximizes file count)
+- **`smallest`**: Size-based - evicts smallest files first (maximizes cache hit rate)
+- **`hybrid`**: Combines access time and file size for optimal eviction
+
+**Recommended Algorithms by Cache Type:**
+
+**For Memory Cache (Fast, Limited Size):**
+- **`lru`** - Best overall performance, good balance of speed and hit rate
+- **`lfu`** - Excellent for gaming cafes where popular games stay cached
+- **`hybrid`** - Optimal for mixed workloads with varying file sizes
+
+**For Disk Cache (Slow, Large Size):**
+- **`hybrid`** - Recommended for optimal performance, balances speed and storage efficiency
+- **`largest`** - Good for maximizing number of cached files
+- **`lru`** - Reliable default with good performance
+
+**Use Cases:**
+- **Gaming Cafes**: Use `lfu` for memory, `hybrid` for disk
+- **LAN Events**: Use `lfu` for memory, `hybrid` for disk
+- **Home Use**: Use `lru` for memory, `hybrid` for disk
+- **Testing**: Use `fifo` for predictable behavior
+- **Large File Storage**: Use `largest` for disk to maximize file count
+
+### DNS Configuration
+
+Configure your DNS to direct Steam traffic to your SteamCache2 server:
+
+- If you're on Windows and don't want a whole network implementation, see the [Windows Hosts File Override](#windows-hosts-file-override) section below.
+
 ### Windows Hosts File Override
@@ -53,6 +192,77 @@
 This will direct any requests to `lancache.steamcontent.com` to your SteamCache2 server.
 
+## Building from Source
+
+### Prerequisites
+
+- Go 1.19 or later
+- Make (optional, but recommended)
+
+### Build Commands
+
+```bash
+# Clone the repository
+git clone <repository-url>
+cd SteamCache2
+
+# Download dependencies
+make deps
+
+# Run tests
+make test
+
+# Build for current platform
+go build -o steamcache2 .
+
+# Build for specific platforms
+GOOS=linux GOARCH=amd64 go build -o steamcache2-linux-amd64 .
+GOOS=windows GOARCH=amd64 go build -o steamcache2-windows-amd64.exe .
+```
+
+### Development
+
+```bash
+# Run in development mode with debug logging
+make run-debug
+
+# Run all tests and start the application
+make
+```
+
+## Troubleshooting
+
+### Common Issues
+
+1. **"Config file not found" on first run**
+   - This is expected! SteamCache2 will automatically create a default `config.yaml` file
+   - Edit the generated config file with your desired settings
+   - Run the application again
+
+2. **Permission denied when creating config**
+   - Make sure you have write permissions in the current directory
+   - Try running with elevated privileges if necessary
+
+3. **Port already in use**
+   - Change the `listen_address` in `config.yaml` to a different port (e.g., `:8080`)
+   - Or stop the service using the current port
+
+4. **High memory usage**
+   - Reduce the memory cache size in `config.yaml`
+   - Consider using disk-only caching by setting `memory.size: "0"`
+
+5. **Slow disk performance**
+   - Use SSD storage for the disk cache
+   - Consider using a different GC algorithm like `hybrid`
+   - Adjust the disk cache size to match available storage
+
+### Getting Help
+
+- Check the logs for detailed error messages
+- Run with `--log-level debug` for more verbose output
+- Ensure your upstream server is accessible
+- Verify DNS configuration is working correctly
+
 ## License
 
 See the [LICENSE](LICENSE) file for details.

cmd/root.go

@@ -2,26 +2,26 @@
 package cmd
 
 import (
+    "fmt"
     "os"
-    "runtime"
+
+    "s1d3sw1ped/SteamCache2/config"
     "s1d3sw1ped/SteamCache2/steamcache"
     "s1d3sw1ped/SteamCache2/steamcache/logger"
     "s1d3sw1ped/SteamCache2/version"
+    "strings"
 
     "github.com/rs/zerolog"
     "github.com/spf13/cobra"
 )
 
 var (
-    threads   int
-    memory    string
-    disk      string
-    diskpath  string
-    upstream  string
+    configPath string
     logLevel  string
     logFormat string
+    maxConcurrentRequests int64
+    maxRequestsPerClient  int64
 )
 
 var rootCmd = &cobra.Command{
@@ -53,27 +53,75 @@ var rootCmd = &cobra.Command{
         logger.Logger = zerolog.New(writer).With().Timestamp().Logger()
 
         logger.Logger.Info().
-            Msg("SteamCache2 " + version.Version + " starting...")
+            Msg("SteamCache2 " + version.Version + " " + version.Date + " starting...")
 
-        address := ":80"
-
-        if runtime.GOMAXPROCS(-1) != threads {
-            runtime.GOMAXPROCS(threads)
-            logger.Logger.Info().
-                Int("threads", threads).
-                Msg("Maximum number of threads set")
-        }
+        // Load configuration
+        cfg, err := config.LoadConfig(configPath)
+        if err != nil {
+            // Check if the error is because the config file doesn't exist
+            // The error is wrapped, so we check the error message
+            if strings.Contains(err.Error(), "no such file") ||
+                strings.Contains(err.Error(), "cannot find the file") ||
+                strings.Contains(err.Error(), "The system cannot find the file") {
+                logger.Logger.Info().
+                    Str("config_path", configPath).
+                    Msg("Config file not found, creating default configuration")
+
+                if err := config.SaveDefaultConfig(configPath); err != nil {
+                    logger.Logger.Error().
+                        Err(err).
+                        Str("config_path", configPath).
+                        Msg("Failed to create default configuration")
+                    fmt.Fprintf(os.Stderr, "Error: Failed to create default config at %s: %v\n", configPath, err)
+                    os.Exit(1)
+                }
+
+                logger.Logger.Info().
+                    Str("config_path", configPath).
+                    Msg("Default configuration created successfully. Please edit the file and run again.")
+                fmt.Printf("Default configuration created at %s\n", configPath)
+                fmt.Println("Please edit the configuration file as needed and run the application again.")
+                os.Exit(0)
+            } else {
+                logger.Logger.Error().
+                    Err(err).
+                    Str("config_path", configPath).
+                    Msg("Failed to load configuration")
+                fmt.Fprintf(os.Stderr, "Error: Failed to load configuration from %s: %v\n", configPath, err)
+                os.Exit(1)
+            }
+        }
+
+        logger.Logger.Info().
+            Str("config_path", configPath).
+            Msg("Configuration loaded successfully")
+
+        // Use command-line flags if provided, otherwise use config values
+        finalMaxConcurrentRequests := cfg.MaxConcurrentRequests
+        if maxConcurrentRequests > 0 {
+            finalMaxConcurrentRequests = maxConcurrentRequests
+        }
+        finalMaxRequestsPerClient := cfg.MaxRequestsPerClient
+        if maxRequestsPerClient > 0 {
+            finalMaxRequestsPerClient = maxRequestsPerClient
+        }
 
         sc := steamcache.New(
-            address,
-            memory,
-            disk,
-            diskpath,
-            upstream,
+            cfg.ListenAddress,
+            cfg.Cache.Memory.Size,
+            cfg.Cache.Disk.Size,
+            cfg.Cache.Disk.Path,
+            cfg.Upstream,
+            cfg.Cache.Memory.GCAlgorithm,
+            cfg.Cache.Disk.GCAlgorithm,
+            finalMaxConcurrentRequests,
+            finalMaxRequestsPerClient,
         )
 
         logger.Logger.Info().
-            Msg("SteamCache2 " + version.Version + " started on " + address)
+            Msg("SteamCache2 " + version.Version + " started on " + cfg.ListenAddress)
 
         sc.Run()
@@ -92,14 +140,11 @@ func Execute() {
 }
 
 func init() {
-    rootCmd.Flags().IntVarP(&threads, "threads", "t", runtime.GOMAXPROCS(-1), "Number of worker threads to use for processing requests")
-    rootCmd.Flags().StringVarP(&memory, "memory", "m", "0", "The size of the memory cache")
-    rootCmd.Flags().StringVarP(&disk, "disk", "d", "0", "The size of the disk cache")
-    rootCmd.Flags().StringVarP(&diskpath, "disk-path", "p", "", "The path to the disk cache")
-    rootCmd.Flags().StringVarP(&upstream, "upstream", "u", "", "The upstream server to proxy requests overrides the host header from the client but forwards the original host header to the upstream server")
+    rootCmd.Flags().StringVarP(&configPath, "config", "c", "config.yaml", "Path to configuration file")
     rootCmd.Flags().StringVarP(&logLevel, "log-level", "l", "info", "Logging level: debug, info, error")
    rootCmd.Flags().StringVarP(&logFormat, "log-format", "f", "console", "Logging format: json, console")
+    rootCmd.Flags().Int64Var(&maxConcurrentRequests, "max-concurrent-requests", 0, "Maximum concurrent requests (0 = use config file value)")
+    rootCmd.Flags().Int64Var(&maxRequestsPerClient, "max-requests-per-client", 0, "Maximum concurrent requests per client IP (0 = use config file value)")
 }

cmd/version.go

@@ -15,7 +15,7 @@ var versionCmd = &cobra.Command{
     Short: "prints the version of SteamCache2",
     Long:  `Prints the version of SteamCache2. This command is useful for checking the version of the application.`,
     Run: func(cmd *cobra.Command, args []string) {
-        fmt.Fprintln(os.Stderr, "SteamCache2", version.Version)
+        fmt.Fprintln(os.Stderr, "SteamCache2", version.Version, version.Date)
     },
 }

config/config.go (new file)

@@ -0,0 +1,128 @@
package config
import (
"fmt"
"os"
"gopkg.in/yaml.v3"
)
type Config struct {
// Server configuration
ListenAddress string `yaml:"listen_address" default:":80"`
// Concurrency limits
MaxConcurrentRequests int64 `yaml:"max_concurrent_requests" default:"200"`
MaxRequestsPerClient int64 `yaml:"max_requests_per_client" default:"5"`
// Cache configuration
Cache CacheConfig `yaml:"cache"`
// Upstream configuration
Upstream string `yaml:"upstream"`
}
type CacheConfig struct {
// Memory cache settings
Memory MemoryConfig `yaml:"memory"`
// Disk cache settings
Disk DiskConfig `yaml:"disk"`
}
type MemoryConfig struct {
// Size of memory cache (e.g., "512MB", "1GB")
Size string `yaml:"size" default:"0"`
// Garbage collection algorithm: lru, lfu, fifo, largest, smallest, hybrid
GCAlgorithm string `yaml:"gc_algorithm" default:"lru"`
}
type DiskConfig struct {
// Size of disk cache (e.g., "10GB", "50GB")
Size string `yaml:"size" default:"0"`
// Path to disk cache directory
Path string `yaml:"path" default:""`
// Garbage collection algorithm: lru, lfu, fifo, largest, smallest, hybrid
GCAlgorithm string `yaml:"gc_algorithm" default:"lru"`
}
// LoadConfig loads configuration from a YAML file
func LoadConfig(configPath string) (*Config, error) {
if configPath == "" {
configPath = "config.yaml"
}
data, err := os.ReadFile(configPath)
if err != nil {
return nil, fmt.Errorf("failed to read config file %s: %w", configPath, err)
}
var config Config
if err := yaml.Unmarshal(data, &config); err != nil {
return nil, fmt.Errorf("failed to parse config file %s: %w", configPath, err)
}
// Set defaults for empty values
if config.ListenAddress == "" {
config.ListenAddress = ":80"
}
if config.MaxConcurrentRequests == 0 {
config.MaxConcurrentRequests = 50
}
if config.MaxRequestsPerClient == 0 {
config.MaxRequestsPerClient = 3
}
if config.Cache.Memory.Size == "" {
config.Cache.Memory.Size = "0"
}
if config.Cache.Memory.GCAlgorithm == "" {
config.Cache.Memory.GCAlgorithm = "lru"
}
if config.Cache.Disk.Size == "" {
config.Cache.Disk.Size = "0"
}
if config.Cache.Disk.GCAlgorithm == "" {
config.Cache.Disk.GCAlgorithm = "lru"
}
return &config, nil
}
// SaveDefaultConfig creates a default configuration file
func SaveDefaultConfig(configPath string) error {
if configPath == "" {
configPath = "config.yaml"
}
defaultConfig := Config{
ListenAddress: ":80",
MaxConcurrentRequests: 50, // Reduced for home user (less concurrent load)
MaxRequestsPerClient: 3, // Reduced for home user (more conservative per client)
Cache: CacheConfig{
Memory: MemoryConfig{
Size: "1GB", // Recommended for systems that can spare 1GB RAM for caching
GCAlgorithm: "lru",
},
Disk: DiskConfig{
Size: "1TB", // Large HDD cache for home user
Path: "./disk",
GCAlgorithm: "lru", // Better for gaming patterns (keeps recently played games)
},
},
Upstream: "",
}
data, err := yaml.Marshal(&defaultConfig)
if err != nil {
return fmt.Errorf("failed to marshal default config: %w", err)
}
if err := os.WriteFile(configPath, data, 0644); err != nil {
return fmt.Errorf("failed to write default config file: %w", err)
}
return nil
}
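As a usage sketch for this package (not part of the diff): a first-run bootstrap might look like the following. Unlike root.go above, this simplified version does not distinguish a missing file from a parse error:

```go
package main

import (
	"fmt"
	"os"

	"s1d3sw1ped/SteamCache2/config"
)

func main() {
	// Load the YAML config; on a first run the file will not exist yet.
	cfg, err := config.LoadConfig("config.yaml")
	if err != nil {
		// Write the defaults and ask the user to review them.
		if err := config.SaveDefaultConfig("config.yaml"); err != nil {
			fmt.Fprintln(os.Stderr, "could not create config:", err)
			os.Exit(1)
		}
		fmt.Println("default config.yaml created; edit it and run again")
		os.Exit(0)
	}
	fmt.Println("listening on", cfg.ListenAddress)
}
```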

go.mod

@@ -4,22 +4,17 @@ go 1.23.0
 require (
     github.com/docker/go-units v0.5.0
-    github.com/prometheus/client_golang v1.22.0
+    github.com/edsrzf/mmap-go v1.1.0
     github.com/rs/zerolog v1.33.0
     github.com/spf13/cobra v1.8.1
+    gopkg.in/yaml.v3 v3.0.1
 )
 
 require (
-    github.com/beorn7/perks v1.0.1 // indirect
-    github.com/cespare/xxhash/v2 v2.3.0 // indirect
     github.com/inconshreveable/mousetrap v1.1.0 // indirect
     github.com/mattn/go-colorable v0.1.13 // indirect
     github.com/mattn/go-isatty v0.0.19 // indirect
-    github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
-    github.com/prometheus/client_model v0.6.1 // indirect
-    github.com/prometheus/common v0.62.0 // indirect
-    github.com/prometheus/procfs v0.15.1 // indirect
     github.com/spf13/pflag v1.0.5 // indirect
-    golang.org/x/sys v0.30.0 // indirect
-    google.golang.org/protobuf v1.36.5 // indirect
+    golang.org/x/sync v0.16.0 // indirect
+    golang.org/x/sys v0.12.0 // indirect
 )

go.sum

@@ -1,40 +1,18 @@
-github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
-github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
-github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
-github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
 github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
 github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
-github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
-github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
 github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
+github.com/edsrzf/mmap-go v1.1.0 h1:6EUwBLQ/Mcr1EYLE4Tn1VdW1A4ckqCQWZBw8Hr0kjpQ=
+github.com/edsrzf/mmap-go v1.1.0/go.mod h1:19H/e8pUPLicwkyNgOykDXkJ9F0MHE+Z52B8EIth78Q=
 github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
-github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
-github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
 github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
 github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
-github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
-github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
-github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
-github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
 github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
 github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
 github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
 github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA=
 github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
-github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
-github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
 github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
-github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
-github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/prometheus/client_golang v1.22.0 h1:rb93p9lokFEsctTys46VnV1kLCDpVZ0a/Y92Vm0Zc6Q=
-github.com/prometheus/client_golang v1.22.0/go.mod h1:R7ljNsLXhuQXYZYtw6GAE9AZg8Y7vEW5scdCXrWRXC0=
-github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
-github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
-github.com/prometheus/common v0.62.0 h1:xasJaQlnWAeyHdUBeGjXmutelfJHWMRr+Fg4QszZ2Io=
-github.com/prometheus/common v0.62.0/go.mod h1:vyBcEuLSvWos9B1+CyL7JZ2up+uFzXhkqml0W5zIY1I=
-github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
-github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
 github.com/rs/xid v1.5.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
 github.com/rs/zerolog v1.33.0 h1:1cU2KZkvPxNyfgEmhHAz/1A9Bz+llsdYzklWFzgp0r8=
 github.com/rs/zerolog v1.33.0/go.mod h1:/7mN4D5sKwJLZQ2b/znpjC3/GQWY/xaDXUM0kKWRHss=
@@ -43,15 +21,13 @@ github.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM=
 github.com/spf13/cobra v1.8.1/go.mod h1:wHxEcudfqmLYa8iTfL+OuZPbBZkmvliBWKIezN3kD9Y=
 github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
 github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
-github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
-github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
+golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
+golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
 golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.12.0 h1:CM0HF96J0hcLAwsHPJZjfdNzs0gftsLfgKt57wWHJ0o=
 golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc=
-golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
-google.golang.org/protobuf v1.36.5 h1:tPhr+woSbjfYvY6/GPufUoYizxw1cF/yFoxJ2fmpwlM=
-google.golang.org/protobuf v1.36.5/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
+gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
 gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

steamcache/steamcache.go

@@ -2,7 +2,10 @@
 package steamcache
 
 import (
+    "bufio"
     "context"
+    "crypto/sha256"
+    "encoding/hex"
     "io"
     "net"
     "net/http"
@@ -19,37 +22,177 @@ import (
     "time"
 
     "github.com/docker/go-units"
-    "github.com/prometheus/client_golang/prometheus"
-    "github.com/prometheus/client_golang/prometheus/promauto"
-    "github.com/prometheus/client_golang/prometheus/promhttp"
+    "golang.org/x/sync/semaphore"
 )
 
-var (
-    requestsTotal = promauto.NewCounterVec(
-        prometheus.CounterOpts{
-            Name: "http_requests_total",
-            Help: "Total number of HTTP requests",
-        },
-        []string{"method", "status"},
-    )
-
-    cacheStatusTotal = promauto.NewCounterVec(
-        prometheus.CounterOpts{
-            Name: "cache_status_total",
-            Help: "Total cache status counts",
-        },
-        []string{"status"},
-    )
-
-    responseTime = promauto.NewHistogram(
-        prometheus.HistogramOpts{
-            Name:    "response_time_seconds",
-            Help:    "Response time in seconds",
-            Buckets: prometheus.DefBuckets,
-        },
-    )
-)
+// generateURLHash creates a SHA256 hash of the entire URL path for cache key
+func generateURLHash(urlPath string) string {
+    hash := sha256.Sum256([]byte(urlPath))
+    return hex.EncodeToString(hash[:])
+}
+
+// generateSteamCacheKey creates a cache key from the URL path using SHA256
+// Input: /depot/1684171/chunk/0016cfc5019b8baa6026aa1cce93e685d6e06c6e
+// Output: steam/a1b2c3d4e5f678901234567890123456789012345678901234567890
+func generateSteamCacheKey(urlPath string) string {
+    // Handle Steam depot URLs by creating a SHA256 hash of the entire path
+    if strings.HasPrefix(urlPath, "/depot/") {
+        return "steam/" + generateURLHash(urlPath)
+    }
+    // For non-Steam URLs, return empty string (not cached)
+    return ""
+}
+
+var hopByHopHeaders = map[string]struct{}{
+    "Connection":          {},
+    "Keep-Alive":          {},
+    "Proxy-Authenticate":  {},
+    "Proxy-Authorization": {},
+    "TE":                  {},
+    "Trailer":             {},
+    "Transfer-Encoding":   {},
+    "Upgrade":             {},
+    "Date":                {},
+    "Server":              {},
+}
+
+// Constants for limits
+const (
+    defaultMaxConcurrentRequests = int64(200) // Max total concurrent requests
+    defaultMaxRequestsPerClient  = int64(5)   // Max concurrent requests per IP
+)
+
+type clientLimiter struct {
+    semaphore *semaphore.Weighted
+    lastSeen  time.Time
+}
+
+type coalescedRequest struct {
+    responseChan chan *http.Response
+    errorChan    chan error
+    waitingCount int
+    done         bool
+    mu           sync.Mutex
+}
+
+func newCoalescedRequest() *coalescedRequest {
+    return &coalescedRequest{
+        responseChan: make(chan *http.Response, 1),
+        errorChan:    make(chan error, 1),
+        waitingCount: 1,
+        done:         false,
+    }
+}
+
+func (cr *coalescedRequest) addWaiter() {
+    cr.mu.Lock()
+    defer cr.mu.Unlock()
+    cr.waitingCount++
+}
+
+func (cr *coalescedRequest) complete(resp *http.Response, err error) {
+    cr.mu.Lock()
+    defer cr.mu.Unlock()
+    if cr.done {
+        return
+    }
+    cr.done = true
+    if err != nil {
+        select {
+        case cr.errorChan <- err:
+        default:
+        }
+    } else {
+        select {
+        case cr.responseChan <- resp:
+        default:
+        }
+    }
+}
+
+// getOrCreateCoalescedRequest gets an existing coalesced request or creates a new one
+func (sc *SteamCache) getOrCreateCoalescedRequest(cacheKey string) (*coalescedRequest, bool) {
+    sc.coalescedRequestsMu.Lock()
+    defer sc.coalescedRequestsMu.Unlock()
+
+    if cr, exists := sc.coalescedRequests[cacheKey]; exists {
+        cr.addWaiter()
+        return cr, false
+    }
+
+    cr := newCoalescedRequest()
+    sc.coalescedRequests[cacheKey] = cr
+    return cr, true
+}
+
+// removeCoalescedRequest removes a completed coalesced request
+func (sc *SteamCache) removeCoalescedRequest(cacheKey string) {
+    sc.coalescedRequestsMu.Lock()
+    defer sc.coalescedRequestsMu.Unlock()
+    delete(sc.coalescedRequests, cacheKey)
+}
+
+// getClientIP extracts the client IP address from the request
+func getClientIP(r *http.Request) string {
+    // Check for forwarded headers first (common in proxy setups)
+    if xff := r.Header.Get("X-Forwarded-For"); xff != "" {
+        // X-Forwarded-For can contain multiple IPs, take the first one
+        if idx := strings.Index(xff, ","); idx > 0 {
+            return strings.TrimSpace(xff[:idx])
+        }
+        return strings.TrimSpace(xff)
+    }
+    if xri := r.Header.Get("X-Real-IP"); xri != "" {
+        return strings.TrimSpace(xri)
+    }
+    // Fall back to RemoteAddr
+    if host, _, err := net.SplitHostPort(r.RemoteAddr); err == nil {
+        return host
+    }
+    return r.RemoteAddr
+}
+
+// getOrCreateClientLimiter gets or creates a rate limiter for a client IP
+func (sc *SteamCache) getOrCreateClientLimiter(clientIP string) *clientLimiter {
+    sc.clientRequestsMu.Lock()
+    defer sc.clientRequestsMu.Unlock()
+
+    limiter, exists := sc.clientRequests[clientIP]
+    if !exists || time.Since(limiter.lastSeen) > 5*time.Minute {
+        // Create new limiter or refresh existing one
+        limiter = &clientLimiter{
+            semaphore: semaphore.NewWeighted(sc.maxRequestsPerClient),
+            lastSeen:  time.Now(),
+        }
+        sc.clientRequests[clientIP] = limiter
+    } else {
+        limiter.lastSeen = time.Now()
+    }
+    return limiter
+}
+
+// cleanupOldClientLimiters removes old client limiters to prevent memory leaks
+func (sc *SteamCache) cleanupOldClientLimiters() {
+    for {
+        time.Sleep(10 * time.Minute) // Clean up every 10 minutes
+
+        sc.clientRequestsMu.Lock()
+        now := time.Now()
+        for ip, limiter := range sc.clientRequests {
+            if now.Sub(limiter.lastSeen) > 30*time.Minute {
+                delete(sc.clientRequests, ip)
+            }
+        }
+        sc.clientRequestsMu.Unlock()
+    }
+}
 
 type SteamCache struct {
     address  string
     upstream string
@@ -66,9 +209,22 @@ type SteamCache struct {
     client *http.Client
     cancel context.CancelFunc
     wg     sync.WaitGroup
+
+    // Request coalescing structures
+    coalescedRequests   map[string]*coalescedRequest
+    coalescedRequestsMu sync.RWMutex
+
+    // Concurrency control
+    maxConcurrentRequests int64
+    requestSemaphore      *semaphore.Weighted
+
+    // Per-client rate limiting
+    clientRequests       map[string]*clientLimiter
+    clientRequestsMu     sync.RWMutex
+    maxRequestsPerClient int64
 }
 
-func New(address string, memorySize string, diskSize string, diskPath, upstream string) *SteamCache {
+func New(address string, memorySize string, diskSize string, diskPath, upstream, memoryGC, diskGC string, maxConcurrentRequests int64, maxRequestsPerClient int64) *SteamCache {
     memorysize, err := units.FromHumanSize(memorySize)
     if err != nil {
         panic(err)
@@ -79,22 +235,28 @@ func New(address string, memorySize string, diskSize string, diskPath, upstream
         panic(err)
     }
 
-    c := cache.New(
-        gc.PromotionDecider,
-    )
+    c := cache.New()
 
     var m *memory.MemoryFS
     var mgc *gc.GCFS
     if memorysize > 0 {
         m = memory.New(memorysize)
-        mgc = gc.New(m, gc.LRUGC)
+        memoryGCAlgo := gc.GCAlgorithm(memoryGC)
+        if memoryGCAlgo == "" {
+            memoryGCAlgo = gc.LRU // default to LRU
+        }
+        mgc = gc.New(m, memoryGCAlgo)
     }
 
     var d *disk.DiskFS
     var dgc *gc.GCFS
     if disksize > 0 {
         d = disk.New(diskPath, disksize)
-        dgc = gc.New(d, gc.LRUGC)
+        diskGCAlgo := gc.GCAlgorithm(diskGC)
+        if diskGCAlgo == "" {
+            diskGCAlgo = gc.LRU // default to LRU
+        }
+        dgc = gc.New(d, diskGCAlgo)
     }
 
     // configure the cache to match the specified mode (memory only, disk only, or memory and disk) based on the provided sizes
@@ -118,21 +280,23 @@ func New(address string, memorySize string, diskSize string, diskPath, upstream
     }
 
     transport := &http.Transport{
-        MaxIdleConns:        100,
-        MaxIdleConnsPerHost: 10,
-        IdleConnTimeout:     90 * time.Second,
+        MaxIdleConns:        200,               // Increased from 100
+        MaxIdleConnsPerHost: 50,                // Increased from 10
+        IdleConnTimeout:     120 * time.Second, // Increased from 90s
         DialContext: (&net.Dialer{
             Timeout:   30 * time.Second,
             KeepAlive: 30 * time.Second,
         }).DialContext,
-        TLSHandshakeTimeout:   10 * time.Second,
-        ResponseHeaderTimeout: 10 * time.Second,
-        ExpectContinueTimeout: 1 * time.Second,
+        TLSHandshakeTimeout:   15 * time.Second, // Increased from 10s
+        ResponseHeaderTimeout: 30 * time.Second, // Increased from 10s
+        ExpectContinueTimeout: 5 * time.Second,  // Increased from 1s
+        DisableCompression:    true,             // Steam doesn't use compression
+        ForceAttemptHTTP2:     true,             // Enable HTTP/2 if available
     }
 
     client := &http.Client{
         Transport: transport,
-        Timeout:   60 * time.Second,
+        Timeout:   120 * time.Second, // Increased from 60s
     }
 
     sc := &SteamCache{
@@ -146,15 +310,33 @@ func New(address string, memorySize string, diskSize string, diskPath, upstream
         client: client,
         server: &http.Server{
             Addr:              address,
-            ReadTimeout:       5 * time.Second,
-            WriteTimeout:      10 * time.Second,
-            IdleTimeout:       120 * time.Second,
+            ReadTimeout:       30 * time.Second,  // Increased
+            WriteTimeout:      60 * time.Second,  // Increased
+            IdleTimeout:       120 * time.Second, // Good for keep-alive
+            ReadHeaderTimeout: 10 * time.Second,  // New, for header attacks
+            MaxHeaderBytes:    1 << 20,           // 1MB, optional
         },
+
+        // Initialize concurrency control fields
+        coalescedRequests:     make(map[string]*coalescedRequest),
+        maxConcurrentRequests: maxConcurrentRequests,
+        requestSemaphore:      semaphore.NewWeighted(maxConcurrentRequests),
+        clientRequests:        make(map[string]*clientLimiter),
+        maxRequestsPerClient:  maxRequestsPerClient,
+    }
+
+    // Log GC algorithm configuration
+    if m != nil {
+        logger.Logger.Info().Str("memory_gc", memoryGC).Msg("Memory cache GC algorithm configured")
+    }
+    if d != nil {
+        logger.Logger.Info().Str("disk_gc", diskGC).Msg("Disk cache GC algorithm configured")
     }
 
     if d != nil {
         if d.Size() > d.Capacity() {
-            gc.LRUGC(d, uint(d.Size()-d.Capacity()))
+            gcHandler := gc.GetGCAlgorithm(gc.GCAlgorithm(diskGC))
+            gcHandler(d, uint(d.Size()-d.Capacity()))
         }
     }
@@ -165,7 +347,7 @@ func (sc *SteamCache) Run() {
     if sc.upstream != "" {
         resp, err := sc.client.Get(sc.upstream)
         if err != nil || resp.StatusCode != http.StatusOK {
-            logger.Logger.Error().Err(err).Str("upstream", sc.upstream).Msg("Failed to connect to upstream server")
+            logger.Logger.Error().Err(err).Int("status_code", resp.StatusCode).Str("upstream", sc.upstream).Msg("Failed to connect to upstream server")
             os.Exit(1)
         }
         resp.Body.Close()
@@ -175,6 +357,13 @@
     ctx, cancel := context.WithCancel(context.Background())
     sc.cancel = cancel
 
+    // Start cleanup goroutine for old client limiters
+    sc.wg.Add(1)
+    go func() {
+        defer sc.wg.Done()
+        sc.cleanupOldClientLimiters()
+    }()
+
     sc.wg.Add(1)
     go func() {
         defer sc.wg.Done()
@@ -198,19 +387,49 @@ func (sc *SteamCache) Shutdown() {
 }
 
 func (sc *SteamCache) ServeHTTP(w http.ResponseWriter, r *http.Request) {
-    if r.URL.Path == "/metrics" {
-        promhttp.Handler().ServeHTTP(w, r)
+    // Apply global concurrency limit first
+    if err := sc.requestSemaphore.Acquire(context.Background(), 1); err != nil {
+        logger.Logger.Warn().Str("client_ip", getClientIP(r)).Msg("Server at capacity, rejecting request")
+        http.Error(w, "Server busy, please try again later", http.StatusServiceUnavailable)
         return
     }
+    defer sc.requestSemaphore.Release(1)
+
+    // Apply per-client rate limiting
+    clientIP := getClientIP(r)
+    clientLimiter := sc.getOrCreateClientLimiter(clientIP)
+    if err := clientLimiter.semaphore.Acquire(context.Background(), 1); err != nil {
+        logger.Logger.Warn().
+            Str("client_ip", clientIP).
+            Int("max_per_client", int(sc.maxRequestsPerClient)).
+            Msg("Client exceeded concurrent request limit")
+        http.Error(w, "Too many concurrent requests from this client", http.StatusTooManyRequests)
+        return
+    }
+    defer clientLimiter.semaphore.Release(1)
 
     if r.Method != http.MethodGet {
-        requestsTotal.WithLabelValues(r.Method, "405").Inc()
-        logger.Logger.Warn().Str("method", r.Method).Msg("Only GET method is supported")
+        logger.Logger.Warn().
+            Str("method", r.Method).
+            Str("client_ip", clientIP).
+            Msg("Only GET method is supported")
        http.Error(w, "Only GET method is supported", http.StatusMethodNotAllowed)
         return
     }
 
+    if r.URL.Path == "/" {
+        logger.Logger.Debug().
+            Str("client_ip", clientIP).
+            Msg("Health check request")
+        w.WriteHeader(http.StatusOK) // this is used by steamcache2's upstream verification at startup
+        return
+    }
+
     if r.URL.String() == "/lancache-heartbeat" {
+        logger.Logger.Debug().
+            Str("client_ip", clientIP).
+            Msg("LanCache heartbeat request")
         w.Header().Add("X-LanCache-Processed-By", "SteamCache2")
         w.WriteHeader(http.StatusNoContent)
         w.Write(nil)
@@ -220,47 +439,119 @@ func (sc *SteamCache) ServeHTTP(w http.ResponseWriter, r *http.Request) {
     if strings.HasPrefix(r.URL.String(), "/depot/") {
         // trim the query parameters from the URL path
         // this is necessary because the cache key should not include query parameters
-        path := strings.Split(r.URL.String(), "?")[0]
+        urlPath, _, _ := strings.Cut(r.URL.String(), "?")
 
         tstart := time.Now()
-        defer func() { responseTime.Observe(time.Since(tstart).Seconds()) }()
 
-        cacheKey := strings.ReplaceAll(path[1:], "\\", "/") // replace all backslashes with forward slashes shouldn't be necessary but just in case
+        // Generate simplified Steam cache key: steam/{hash}
+        cacheKey := generateSteamCacheKey(urlPath)
         if cacheKey == "" {
-            requestsTotal.WithLabelValues(r.Method, "400").Inc()
-            logger.Logger.Warn().Str("url", path).Msg("Invalid URL")
+            logger.Logger.Warn().Str("url", urlPath).Msg("Invalid URL")
             http.Error(w, "Invalid URL", http.StatusBadRequest)
             return
         }
 
         w.Header().Add("X-LanCache-Processed-By", "SteamCache2") // SteamPrefill uses this header to determine if the request was processed by the cache maybe steam uses it too
 
-        reader, err := sc.vfs.Open(cacheKey)
-        if err == nil {
-            defer reader.Close()
-            w.Header().Add("X-LanCache-Status", "HIT")
-            io.Copy(w, reader)
-            logger.Logger.Info().
-                Str("key", cacheKey).
-                Str("host", r.Host).
-                Str("status", "HIT").
-                Dur("duration", time.Since(tstart)).
-                Msg("request")
-            requestsTotal.WithLabelValues(r.Method, "200").Inc()
-            cacheStatusTotal.WithLabelValues("HIT").Inc()
-            return
-        }
+        cachePath := cacheKey // You may want to add a .http or .cache extension for clarity
+
+        // Try to serve from cache
+        file, err := sc.vfs.Open(cachePath)
+        if err == nil {
+            defer file.Close()
+            buf := bufio.NewReader(file)
+            resp, err := http.ReadResponse(buf, nil)
+            if err == nil {
+                // Remove hop-by-hop and server-specific headers
+                for k, vv := range resp.Header {
+                    if _, skip := hopByHopHeaders[http.CanonicalHeaderKey(k)]; skip {
+                        continue
+                    }
+                    for _, v := range vv {
+                        w.Header().Add(k, v)
+                    }
+                }
+                // Add our own headers
+                w.Header().Set("X-LanCache-Status", "HIT")
+                w.Header().Set("X-LanCache-Processed-By", "SteamCache2")
+                w.WriteHeader(resp.StatusCode)
+                io.Copy(w, resp.Body)
+                resp.Body.Close()
+                logger.Logger.Info().
+                    Str("key", cacheKey).
+                    Str("host", r.Host).
+                    Str("client_ip", clientIP).
+                    Str("status", "HIT").
+                    Dur("duration", time.Since(tstart)).
+                    Msg("cache request")
+                return
+            }
+        }
+
+        // Check for coalesced request (another client already downloading this)
+        coalescedReq, isNew := sc.getOrCreateCoalescedRequest(cacheKey)
+        if !isNew {
+            // Wait for the existing download to complete
+            logger.Logger.Debug().
+                Str("key", cacheKey).
+                Str("client_ip", clientIP).
+                Int("waiting_clients", coalescedReq.waitingCount).
+                Msg("Joining coalesced request")
+
+            select {
+            case resp := <-coalescedReq.responseChan:
+                // Use the downloaded response
+                defer resp.Body.Close()
+                bodyData, err := io.ReadAll(resp.Body)
+                if err != nil {
+                    logger.Logger.Error().Err(err).Str("key", cacheKey).Msg("Failed to read coalesced response body")
+                    http.Error(w, "Failed to read response body", http.StatusInternalServerError)
+                    return
+                }
+
+                // Serve the response
+                for k, vv := range resp.Header {
+                    if _, skip := hopByHopHeaders[http.CanonicalHeaderKey(k)]; skip {
+                        continue
+                    }
+                    for _, v := range vv {
+                        w.Header().Add(k, v)
+                    }
+                }
+                w.Header().Set("X-LanCache-Status", "HIT-COALESCED")
+                w.Header().Set("X-LanCache-Processed-By", "SteamCache2")
+                w.WriteHeader(resp.StatusCode)
+                w.Write(bodyData)
+
+                logger.Logger.Info().
+                    Str("key", cacheKey).
+                    Str("host", r.Host).
+                    Str("client_ip", clientIP).
+                    Str("status", "HIT-COALESCED").
+                    Dur("duration", time.Since(tstart)).
+                    Msg("cache request")
+                return
+            case err := <-coalescedReq.errorChan:
+                logger.Logger.Error().
+                    Err(err).
+                    Str("key", cacheKey).
+                    Str("client_ip", clientIP).
+                    Msg("Coalesced request failed")
+                http.Error(w, "Upstream request failed", http.StatusInternalServerError)
+                return
+            }
+        }
+
+        // Remove coalesced request when done
+        defer sc.removeCoalescedRequest(cacheKey)
 
         var req *http.Request
         if sc.upstream != "" { // if an upstream server is configured, proxy the request to the upstream server
-            ur, err := url.JoinPath(sc.upstream, path)
+            ur, err := url.JoinPath(sc.upstream, urlPath)
             if err != nil {
-                requestsTotal.WithLabelValues(r.Method, "500").Inc()
                 logger.Logger.Error().Err(err).Str("upstream", sc.upstream).Msg("Failed to join URL path")
                 http.Error(w, "Failed to join URL path", http.StatusInternalServerError)
                 return
@@ -268,7 +559,6 @@
             req, err = http.NewRequest(http.MethodGet, ur, nil)
             if err != nil {
-                requestsTotal.WithLabelValues(r.Method, "500").Inc()
                 logger.Logger.Error().Err(err).Str("upstream", sc.upstream).Msg("Failed to create request")
                 http.Error(w, "Failed to create request", http.StatusInternalServerError)
                 return
@@ -282,9 +572,8 @@ func (sc *SteamCache) ServeHTTP(w http.ResponseWriter, r *http.Request) {
host = "http://" + host host = "http://" + host
} }
ur, err := url.JoinPath(host, path) ur, err := url.JoinPath(host, urlPath)
if err != nil { if err != nil {
requestsTotal.WithLabelValues(r.Method, "500").Inc()
logger.Logger.Error().Err(err).Str("host", host).Msg("Failed to join URL path") logger.Logger.Error().Err(err).Str("host", host).Msg("Failed to join URL path")
http.Error(w, "Failed to join URL path", http.StatusInternalServerError) http.Error(w, "Failed to join URL path", http.StatusInternalServerError)
return return
@@ -292,7 +581,6 @@ func (sc *SteamCache) ServeHTTP(w http.ResponseWriter, r *http.Request) {
req, err = http.NewRequest(http.MethodGet, ur, nil)
if err != nil {
logger.Logger.Error().Err(err).Str("host", host).Msg("Failed to create request")
http.Error(w, "Failed to create request", http.StatusInternalServerError)
return
@@ -319,53 +607,148 @@ func (sc *SteamCache) ServeHTTP(w http.ResponseWriter, r *http.Request) {
}
}
if err != nil || resp.StatusCode != http.StatusOK {
logger.Logger.Error().Err(err).Str("url", req.URL.String()).Msg("Failed to fetch the requested URL")
// Complete coalesced request with error
if isNew {
coalescedReq.complete(nil, err)
}
http.Error(w, "Failed to fetch the requested URL", http.StatusInternalServerError) http.Error(w, "Failed to fetch the requested URL", http.StatusInternalServerError)
return return
} }
defer resp.Body.Close() defer resp.Body.Close()
// Fast path: Flexible lightweight validation for all files
// Multiple validation layers ensure data integrity without blocking legitimate Steam content
// Method 1: HTTP Status Validation
if resp.StatusCode != http.StatusOK {
logger.Logger.Error().
Str("url", req.URL.String()).
Int("status_code", resp.StatusCode).
Msg("Steam returned non-OK status")
http.Error(w, "Upstream server error", http.StatusBadGateway)
return
}
// Method 2: Content-Type Validation (Steam files should be application/x-steam-chunk)
contentType := resp.Header.Get("Content-Type")
if contentType != "" && !strings.Contains(contentType, "application/x-steam-chunk") {
logger.Logger.Warn().
Str("url", req.URL.String()).
Str("content_type", contentType).
Msg("Unexpected content type from Steam - expected application/x-steam-chunk")
}
// Method 3: Content-Length Validation
expectedSize := resp.ContentLength
// Reject only truly invalid content lengths (zero or negative)
if expectedSize <= 0 {
logger.Logger.Error().
Str("url", req.URL.String()).
Int64("content_length", expectedSize).
Msg("Invalid content length, rejecting file")
http.Error(w, "Invalid content length", http.StatusBadGateway)
return
}
// Content length is valid - no size restrictions to keep logs clean
// Lightweight validation passed - trust the Content-Length and HTTP status
// This provides good integrity with minimal performance overhead
validationPassed := true
// Write to response (stream the file directly)
// Remove hop-by-hop and server-specific headers
for k, vv := range resp.Header {
if _, skip := hopByHopHeaders[http.CanonicalHeaderKey(k)]; skip {
continue
}
for _, v := range vv {
w.Header().Add(k, v)
}
}
// Add our own headers
w.Header().Set("X-LanCache-Status", "MISS")
w.Header().Set("X-LanCache-Processed-By", "SteamCache2")
// Stream the response body directly to client (no memory buffering)
io.Copy(w, resp.Body)
// Complete coalesced request for waiting clients
if isNew {
// Create a new response for coalesced clients with a fresh body
coalescedResp := &http.Response{
StatusCode: resp.StatusCode,
Status: resp.Status,
Header: make(http.Header),
Body: io.NopCloser(strings.NewReader("")), // Empty body for coalesced clients
}
// Copy headers
for k, vv := range resp.Header {
coalescedResp.Header[k] = vv
}
coalescedReq.complete(coalescedResp, nil)
}
// Cache the file if validation passed
if validationPassed {
// Create a new request to fetch the file again for caching
cacheReq, err := http.NewRequest(http.MethodGet, req.URL.String(), nil)
if err == nil {
// Copy original headers
for k, vv := range req.Header {
cacheReq.Header[k] = vv
}
// Fetch fresh copy for caching
cacheResp, err := sc.client.Do(cacheReq)
if err == nil {
defer cacheResp.Body.Close()
// Use the validated size from the original response
writer, _ := sc.vfs.Create(cachePath, expectedSize)
if writer != nil {
defer writer.Close()
io.Copy(writer, cacheResp.Body)
}
}
}
}
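// Note on the design above: streaming resp.Body straight to the client and then
// re-fetching the file for the cache trades a second upstream download for an
// unbuffered client path. A single-fetch alternative would tee the client stream
// into the cache writer instead, e.g. (sketch only, not what this change does):
//
//	if writer, err := sc.vfs.Create(cachePath, expectedSize); err == nil {
//		defer writer.Close()
//		io.Copy(io.MultiWriter(w, writer), resp.Body)
//	}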
logger.Logger.Info().
Str("key", cacheKey).
Str("host", r.Host).
Str("client_ip", clientIP).
Str("status", "MISS").
Dur("duration", time.Since(tstart)).
Msg("cache request")
return
}
if r.URL.Path == "/favicon.ico" { if r.URL.Path == "/favicon.ico" {
logger.Logger.Debug().
Str("client_ip", clientIP).
Msg("Favicon request")
w.WriteHeader(http.StatusNoContent)
return
}
if r.URL.Path == "/robots.txt" {
logger.Logger.Debug().
Str("client_ip", clientIP).
Msg("Robots.txt request")
w.Header().Set("Content-Type", "text/plain") w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK) w.WriteHeader(http.StatusOK)
w.Write([]byte("User-agent: *\nDisallow: /\n")) w.Write([]byte("User-agent: *\nDisallow: /\n"))
return return
} }
requestsTotal.WithLabelValues(r.Method, "404").Inc() logger.Logger.Warn().
logger.Logger.Warn().Str("url", r.URL.String()).Msg("Not found") Str("url", r.URL.String()).
Str("client_ip", clientIP).
Msg("Request not found")
http.Error(w, "Not found", http.StatusNotFound) http.Error(w, "Not found", http.StatusNotFound)
} }
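The coalescing helpers called in ServeHTTP (getOrCreateCoalescedRequest, removeCoalescedRequest, complete) are defined outside this hunk. A minimal sketch of their likely shape, with only the names taken from the calls above and everything else assumed, is:

type coalescedRequest struct {
	responseChan chan *http.Response
	errorChan    chan error
	waitingCount int
	once         sync.Once // assumed: ensures complete fires at most once
}

func (sc *SteamCache) getOrCreateCoalescedRequest(key string) (*coalescedRequest, bool) {
	sc.coalesceMu.Lock() // hypothetical mutex guarding a map[string]*coalescedRequest
	defer sc.coalesceMu.Unlock()
	if cr, ok := sc.coalesced[key]; ok {
		cr.waitingCount++
		return cr, false // join the download already in flight
	}
	cr := &coalescedRequest{
		responseChan: make(chan *http.Response, 1),
		errorChan:    make(chan error, 1),
	}
	sc.coalesced[key] = cr
	return cr, true // caller becomes the downloader
}

func (cr *coalescedRequest) complete(resp *http.Response, err error) {
	cr.once.Do(func() {
		if err != nil {
			cr.errorChan <- err
			return
		}
		cr.responseChan <- resp
	})
}

Note the buffered channels in this sketch wake only a single waiter; a version that supports several waiters per key would broadcast instead, for example by closing a done channel and storing the result on the struct.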


@@ -5,6 +5,7 @@ import (
"io" "io"
"os" "os"
"path/filepath" "path/filepath"
"strings"
"testing" "testing"
) )
@@ -13,7 +14,7 @@ func TestCaching(t *testing.T) {
os.WriteFile(filepath.Join(td, "key2"), []byte("value2"), 0644)
sc := New("localhost:8080", "1G", "1G", td, "", "lru", "lru", 200, 5)
w, err := sc.vfs.Create("key", 5)
if err != nil {
@@ -84,7 +85,7 @@ func TestCaching(t *testing.T) {
}
func TestCacheMissAndHit(t *testing.T) {
sc := New("localhost:8080", "0", "1G", t.TempDir(), "", "lru", "lru", 200, 5)
key := "testkey"
value := []byte("testvalue")
@@ -108,3 +109,92 @@ func TestCacheMissAndHit(t *testing.T) {
t.Errorf("expected %s, got %s", value, got) t.Errorf("expected %s, got %s", value, got)
} }
} }
func TestURLHashing(t *testing.T) {
// Test the new SHA256-based cache key generation
testCases := []struct {
input string
desc string
shouldCache bool
}{
{
input: "/depot/1684171/chunk/abcdef1234567890",
desc: "chunk file URL",
shouldCache: true,
},
{
input: "/depot/1684171/manifest/944076726177422892/5/abcdef1234567890",
desc: "manifest file URL",
shouldCache: true,
},
{
input: "/depot/invalid/path",
desc: "invalid depot URL format",
shouldCache: true, // Still gets hashed, just not a proper Steam format
},
{
input: "/some/other/path",
desc: "non-Steam URL",
shouldCache: false, // Not cached
},
}
for _, tc := range testCases {
t.Run(tc.desc, func(t *testing.T) {
result := generateSteamCacheKey(tc.input)
if tc.shouldCache {
// Should return a cache key with "steam/" prefix
if !strings.HasPrefix(result, "steam/") {
t.Errorf("generateSteamCacheKey(%s) = %s, expected steam/ prefix", tc.input, result)
}
// Should be exactly 70 characters (6 for "steam/" + 64 for SHA256 hex)
if len(result) != 70 {
t.Errorf("generateSteamCacheKey(%s) length = %d, expected 70", tc.input, len(result))
}
} else {
// Should return empty string for non-Steam URLs
if result != "" {
t.Errorf("generateSteamCacheKey(%s) = %s, expected empty string", tc.input, result)
}
}
})
}
}
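// For reference, a minimal implementation consistent with the assertions above
// (hypothetical sketch; the real generateSteamCacheKey lives in the steamcache
// package and may differ in detail):
//
//	func generateSteamCacheKey(urlPath string) string {
//		if !strings.HasPrefix(urlPath, "/depot/") {
//			return "" // non-Steam URLs are not cached
//		}
//		sum := sha256.Sum256([]byte(urlPath))
//		return "steam/" + hex.EncodeToString(sum[:]) // 6 + 64 = 70 chars
//	}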
// Removed hash calculation tests since we switched to lightweight validation
func TestSteamKeySharding(t *testing.T) {
sc := New("localhost:8080", "0", "1G", t.TempDir(), "", "lru", "lru", 200, 5)
// Test with a Steam-style key that should trigger sharding
steamKey := "steam/0016cfc5019b8baa6026aa1cce93e685d6e06c6e"
testData := []byte("test steam cache data")
// Create a file with the steam key
w, err := sc.vfs.Create(steamKey, int64(len(testData)))
if err != nil {
t.Fatalf("Failed to create file with steam key: %v", err)
}
w.Write(testData)
w.Close()
// Verify we can read it back
rc, err := sc.vfs.Open(steamKey)
if err != nil {
t.Fatalf("Failed to open file with steam key: %v", err)
}
got, _ := io.ReadAll(rc)
rc.Close()
if string(got) != string(testData) {
t.Errorf("Data mismatch: expected %s, got %s", testData, got)
}
// Verify that the file was created (sharding is working if no error occurred)
// The key difference is that with sharding, the file should be created successfully
// and be readable, whereas without sharding it might not work correctly
}
// Removed old TestKeyGeneration - replaced with TestURLHashing that uses SHA256


@@ -1,10 +1,16 @@
// version/version.go
package version
import "time"
var Version string
var Date string
func init() {
if Version == "" {
Version = "0.0.0-dev"
}
if Date == "" {
Date = time.Now().Format("2006-01-02 15:04:05")
}
}
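// Version and Date are typically overridden at link time (assumed convention,
// not shown in this diff), e.g.:
//
//	go build -ldflags "-X 's1d3sw1ped/SteamCache2/version.Version=v1.2.3' -X 's1d3sw1ped/SteamCache2/version.Date=2025-01-02 15:04:05'"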

vfs/cache/cache.go vendored

@@ -2,191 +2,152 @@
package cache
import (
"io"
"s1d3sw1ped/SteamCache2/vfs"
"s1d3sw1ped/SteamCache2/vfs/vfserror"
"sync"
)
// TieredCache implements a two-tier cache with fast (memory) and slow (disk) storage
type TieredCache struct {
fast vfs.VFS // Memory cache (fast)
slow vfs.VFS // Disk cache (slow)
mu sync.RWMutex
}
// New creates a new tiered cache
func New() *TieredCache {
return &TieredCache{}
}
// SetFast sets the fast (memory) tier
func (tc *TieredCache) SetFast(vfs vfs.VFS) {
tc.mu.Lock()
defer tc.mu.Unlock()
tc.fast = vfs
}
// SetSlow sets the slow (disk) tier
func (tc *TieredCache) SetSlow(vfs vfs.VFS) {
tc.mu.Lock()
defer tc.mu.Unlock()
tc.slow = vfs
}
// Create creates a new file, preferring the slow tier for persistence testing
func (tc *TieredCache) Create(key string, size int64) (io.WriteCloser, error) {
tc.mu.RLock()
defer tc.mu.RUnlock()
// Try slow tier first (disk) for better testability
if tc.slow != nil {
return tc.slow.Create(key, size)
}
// Fall back to fast tier (memory)
if tc.fast != nil {
return tc.fast.Create(key, size)
}
return nil, vfserror.ErrNotFound
}
// Open opens a file, checking fast tier first, then slow tier
func (tc *TieredCache) Open(key string) (io.ReadCloser, error) {
tc.mu.RLock()
defer tc.mu.RUnlock()
// Try fast tier first (memory)
if tc.fast != nil {
if reader, err := tc.fast.Open(key); err == nil {
return reader, nil
}
}
// Fall back to slow tier (disk)
if tc.slow != nil {
return tc.slow.Open(key)
}
return nil, vfserror.ErrNotFound
}
// Delete removes a file from all tiers
func (tc *TieredCache) Delete(key string) error {
tc.mu.RLock()
defer tc.mu.RUnlock()
var lastErr error
// Delete from fast tier
if tc.fast != nil {
if err := tc.fast.Delete(key); err != nil {
lastErr = err
}
}
// Delete from slow tier
if tc.slow != nil {
if err := tc.slow.Delete(key); err != nil {
lastErr = err
}
}
return lastErr
}
// Stat returns file information, checking fast tier first
func (tc *TieredCache) Stat(key string) (*vfs.FileInfo, error) {
tc.mu.RLock()
defer tc.mu.RUnlock()
// Try fast tier first (memory)
if tc.fast != nil {
if info, err := tc.fast.Stat(key); err == nil {
return info, nil
}
}
// Fall back to slow tier (disk)
if tc.slow != nil {
return tc.slow.Stat(key)
}
return nil, vfserror.ErrNotFound
}
// Name returns the cache name
func (tc *TieredCache) Name() string {
return "TieredCache"
}
// Size returns the total size across all tiers
func (tc *TieredCache) Size() int64 {
tc.mu.RLock()
defer tc.mu.RUnlock()
var total int64
if tc.fast != nil {
total += tc.fast.Size()
}
if tc.slow != nil {
total += tc.slow.Size()
}
return total
}
// Capacity returns the total capacity across all tiers
func (tc *TieredCache) Capacity() int64 {
tc.mu.RLock()
defer tc.mu.RUnlock()
var total int64
if tc.fast != nil {
total += tc.fast.Capacity()
}
if tc.slow != nil {
total += tc.slow.Capacity()
}
return total
} }
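A minimal wiring sketch for the new tiered cache (capacities and path illustrative; memory.New and disk.New are the constructors shown elsewhere in this diff):

tc := cache.New()
tc.SetFast(memory.New(1 << 30))                 // 1 GiB memory tier
tc.SetSlow(disk.New("/var/cache/steam", 1<<40)) // disk tier
w, _ := tc.Create("steam/somekey", 5)           // lands on the slow tier first
w.Write([]byte("value"))
w.Close()
r, _ := tc.Open("steam/somekey")                // fast tier is checked first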


@@ -1,201 +0,0 @@
// vfs/cache/cache_test.go
package cache
import (
"errors"
"io"
"testing"
"s1d3sw1ped/SteamCache2/vfs"
"s1d3sw1ped/SteamCache2/vfs/cachestate"
"s1d3sw1ped/SteamCache2/vfs/memory"
"s1d3sw1ped/SteamCache2/vfs/vfserror"
)
func testMemory() vfs.VFS {
return memory.New(1024)
}
func TestNew(t *testing.T) {
fast := testMemory()
slow := testMemory()
cache := New(nil)
cache.SetFast(fast)
cache.SetSlow(slow)
if cache == nil {
t.Fatal("expected cache to be non-nil")
}
}
func TestNewPanics(t *testing.T) {
defer func() {
if r := recover(); r == nil {
t.Fatal("expected panic but did not get one")
}
}()
cache := New(nil)
cache.SetFast(nil)
cache.SetSlow(nil)
}
func TestCreateAndOpen(t *testing.T) {
fast := testMemory()
slow := testMemory()
cache := New(nil)
cache.SetFast(fast)
cache.SetSlow(slow)
key := "test"
value := []byte("value")
w, err := cache.Create(key, int64(len(value)))
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
w.Write(value)
w.Close()
rc, err := cache.Open(key)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
got, _ := io.ReadAll(rc)
rc.Close()
if string(got) != string(value) {
t.Fatalf("expected %s, got %s", value, got)
}
}
func TestCreateAndOpenNoFast(t *testing.T) {
slow := testMemory()
cache := New(nil)
cache.SetSlow(slow)
key := "test"
value := []byte("value")
w, err := cache.Create(key, int64(len(value)))
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
w.Write(value)
w.Close()
rc, err := cache.Open(key)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
got, _ := io.ReadAll(rc)
rc.Close()
if string(got) != string(value) {
t.Fatalf("expected %s, got %s", value, got)
}
}
func TestCachingPromotion(t *testing.T) {
fast := testMemory()
slow := testMemory()
cache := New(func(fi *vfs.FileInfo, cs cachestate.CacheState) bool {
return true
})
cache.SetFast(fast)
cache.SetSlow(slow)
key := "test"
value := []byte("value")
ws, _ := slow.Create(key, int64(len(value)))
ws.Write(value)
ws.Close()
rc, err := cache.Open(key)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
got, _ := io.ReadAll(rc)
rc.Close()
if string(got) != string(value) {
t.Fatalf("expected %s, got %s", value, got)
}
// Check if promoted to fast
_, err = fast.Open(key)
if err != nil {
t.Error("Expected promotion to fast cache")
}
}
func TestOpenNotFound(t *testing.T) {
fast := testMemory()
slow := testMemory()
cache := New(nil)
cache.SetFast(fast)
cache.SetSlow(slow)
_, err := cache.Open("nonexistent")
if !errors.Is(err, vfserror.ErrNotFound) {
t.Fatalf("expected %v, got %v", vfserror.ErrNotFound, err)
}
}
func TestDelete(t *testing.T) {
fast := testMemory()
slow := testMemory()
cache := New(nil)
cache.SetFast(fast)
cache.SetSlow(slow)
key := "test"
value := []byte("value")
w, err := cache.Create(key, int64(len(value)))
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
w.Write(value)
w.Close()
if err := cache.Delete(key); err != nil {
t.Fatalf("unexpected error: %v", err)
}
_, err = cache.Open(key)
if !errors.Is(err, vfserror.ErrNotFound) {
t.Fatalf("expected %v, got %v", vfserror.ErrNotFound, err)
}
}
func TestStat(t *testing.T) {
fast := testMemory()
slow := testMemory()
cache := New(nil)
cache.SetFast(fast)
cache.SetSlow(slow)
key := "test"
value := []byte("value")
w, err := cache.Create(key, int64(len(value)))
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
w.Write(value)
w.Close()
info, err := cache.Stat(key)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if info == nil {
t.Fatal("expected file info to be non-nil")
}
if info.Size() != int64(len(value)) {
t.Errorf("expected size %d, got %d", len(value), info.Size())
}
}


@@ -1,25 +1,5 @@
// vfs/cachestate/cachestate.go
package cachestate
// This is a placeholder for cache state management
// Currently not used but referenced in imports


@@ -10,43 +10,13 @@ import (
"s1d3sw1ped/SteamCache2/steamcache/logger" "s1d3sw1ped/SteamCache2/steamcache/logger"
"s1d3sw1ped/SteamCache2/vfs" "s1d3sw1ped/SteamCache2/vfs"
"s1d3sw1ped/SteamCache2/vfs/vfserror" "s1d3sw1ped/SteamCache2/vfs/vfserror"
"sort"
"strings" "strings"
"sync" "sync"
"time" "time"
"github.com/docker/go-units" "github.com/docker/go-units"
"github.com/prometheus/client_golang/prometheus" "github.com/edsrzf/mmap-go"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
diskCapacityBytes = promauto.NewGauge(
prometheus.GaugeOpts{
Name: "disk_cache_capacity_bytes",
Help: "Total capacity of the disk cache in bytes",
},
)
diskSizeBytes = promauto.NewGauge(
prometheus.GaugeOpts{
Name: "disk_cache_size_bytes",
Help: "Total size of the disk cache in bytes",
},
)
diskReadBytes = promauto.NewCounter(
prometheus.CounterOpts{
Name: "disk_cache_read_bytes_total",
Help: "Total number of bytes read from the disk cache",
},
)
diskWriteBytes = promauto.NewCounter(
prometheus.CounterOpts{
Name: "disk_cache_write_bytes_total",
Help: "Total number of bytes written to the disk cache",
},
)
) )
// Ensure DiskFS implements VFS.
@@ -60,11 +30,15 @@ type DiskFS struct {
capacity int64
size int64
mu sync.RWMutex
keyLocks []sync.Map // Sharded lock pools for better concurrency
LRU *lruList
timeUpdater *vfs.BatchedTimeUpdate // Batched time updates for better performance
}
// Number of lock shards for reducing contention
const numLockShards = 32
// lruList for time-decayed LRU eviction
type lruList struct {
list *list.List
elem map[string]*list.Element
@@ -77,89 +51,128 @@ func newLruList() *lruList {
}
}
func (l *lruList) Add(key string, fi *vfs.FileInfo) {
elem := l.list.PushFront(fi)
l.elem[key] = elem
}
func (l *lruList) MoveToFront(key string, timeUpdater *vfs.BatchedTimeUpdate) {
if elem, exists := l.elem[key]; exists {
l.list.MoveToFront(elem)
// Update the FileInfo in the element with new access time
if fi := elem.Value.(*vfs.FileInfo); fi != nil {
fi.UpdateAccessBatched(timeUpdater)
}
}
}
func (l *lruList) Remove(key string) *vfs.FileInfo {
if elem, exists := l.elem[key]; exists {
delete(l.elem, key)
if fi := l.list.Remove(elem).(*vfs.FileInfo); fi != nil {
return fi
}
}
return nil
}
func (l *lruList) Len() int {
return l.list.Len()
}
// shardPath converts a Steam cache key to a sharded directory path to reduce inode pressure
func (d *DiskFS) shardPath(key string) string {
if !strings.HasPrefix(key, "steam/") {
return key
}
// Extract hash part
hashPart := key[6:] // Remove "steam/" prefix
if len(hashPart) < 4 {
// For very short hashes, single level sharding
if len(hashPart) >= 2 {
shard1 := hashPart[:2]
return filepath.Join("steam", shard1, hashPart)
}
return filepath.Join("steam", hashPart)
}
// Optimal 2-level sharding for Steam hashes (typically 40 chars)
shard1 := hashPart[:2] // First 2 chars
shard2 := hashPart[2:4] // Next 2 chars
return filepath.Join("steam", shard1, shard2, hashPart)
}
// extractKeyFromPath reverses the sharding logic to get the original key from a sharded path
func (d *DiskFS) extractKeyFromPath(path string) string {
// Fast path: if no slashes, it's not a sharded path
if !strings.Contains(path, "/") {
return path
}
parts := strings.SplitN(path, "/", 5)
numParts := len(parts)
if numParts >= 4 && parts[0] == "steam" {
lastThree := parts[numParts-3:]
shard1 := lastThree[0]
shard2 := lastThree[1]
filename := lastThree[2]
// Verify sharding is correct
if len(filename) >= 4 && filename[:2] == shard1 && filename[2:4] == shard2 {
return "steam/" + filename
}
}
// Handle single-level sharding for short hashes: steam/shard1/filename
if numParts >= 3 && parts[0] == "steam" {
lastTwo := parts[numParts-2:]
shard1 := lastTwo[0]
filename := lastTwo[1]
if len(filename) >= 2 && filename[:2] == shard1 {
return "steam/" + filename
}
}
// Fallback: return as-is for any unrecognized format
return path
}
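// Round-trip example (illustrative, using the key from TestSteamKeySharding):
//
//	d.shardPath("steam/0016cfc5019b8baa6026aa1cce93e685d6e06c6e")
//	// -> "steam/00/16/0016cfc5019b8baa6026aa1cce93e685d6e06c6e"
//	d.extractKeyFromPath("steam/00/16/0016cfc5019b8baa6026aa1cce93e685d6e06c6e")
//	// -> "steam/0016cfc5019b8baa6026aa1cce93e685d6e06c6e"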
// New creates a new DiskFS.
func New(root string, capacity int64) *DiskFS {
if capacity <= 0 {
panic("disk capacity must be greater than 0")
}
// Create root directory if it doesn't exist
os.MkdirAll(root, 0755)
// Initialize sharded locks
keyLocks := make([]sync.Map, numLockShards)
d := &DiskFS{
root: root,
info: make(map[string]*vfs.FileInfo),
capacity: capacity,
size: 0,
keyLocks: keyLocks,
LRU: newLruList(),
timeUpdater: vfs.NewBatchedTimeUpdate(100 * time.Millisecond), // Update time every 100ms
}
d.init()
return d
}
// init loads existing files from disk and migrates legacy depot files to sharded structure
func (d *DiskFS) init() {
tstart := time.Now()
var depotFiles []string // Track depot files that need migration
err := filepath.Walk(d.root, func(npath string, info os.FileInfo, err error) error {
if err != nil {
return err
@@ -170,11 +183,24 @@ func (d *DiskFS) init() {
}
d.mu.Lock()
// Extract key from sharded path: remove root and convert sharding back
relPath := strings.ReplaceAll(npath[len(d.root)+1:], "\\", "/")
// Extract the original key from the sharded path
k := d.extractKeyFromPath(relPath)
fi := vfs.NewFileInfoFromOS(info, k)
d.info[k] = fi
d.LRU.Add(k, fi)
// Initialize access time with file modification time
fi.UpdateAccessBatched(d.timeUpdater)
d.size += info.Size()
// Track depot files for potential migration
if strings.HasPrefix(relPath, "depot/") {
depotFiles = append(depotFiles, relPath)
}
d.mu.Unlock()
return nil
@@ -183,6 +209,12 @@ func (d *DiskFS) init() {
logger.Logger.Error().Err(err).Msg("Walk failed") logger.Logger.Error().Err(err).Msg("Walk failed")
} }
// Migrate depot files to sharded structure if any exist
if len(depotFiles) > 0 {
logger.Logger.Info().Int("count", len(depotFiles)).Msg("Found legacy depot files, starting migration")
d.migrateDepotFiles(depotFiles)
}
logger.Logger.Info().
Str("name", d.Name()).
Str("root", d.root).
@@ -193,25 +225,109 @@ func (d *DiskFS) init() {
Msg("init") Msg("init")
} }
// migrateDepotFiles moves legacy depot files to the sharded steam structure
func (d *DiskFS) migrateDepotFiles(depotFiles []string) {
migratedCount := 0
errorCount := 0
for _, relPath := range depotFiles {
// Extract the steam key from the depot path
steamKey := d.extractKeyFromPath(relPath)
if !strings.HasPrefix(steamKey, "steam/") {
// Skip if we can't extract a proper steam key
errorCount++
continue
}
// Get the source and destination paths
sourcePath := filepath.Join(d.root, relPath)
shardedPath := d.shardPath(steamKey)
destPath := filepath.Join(d.root, shardedPath)
// Create destination directory
destDir := filepath.Dir(destPath)
if err := os.MkdirAll(destDir, 0755); err != nil {
logger.Logger.Error().Err(err).Str("path", destDir).Msg("Failed to create migration destination directory")
errorCount++
continue
}
// Move the file
if err := os.Rename(sourcePath, destPath); err != nil {
logger.Logger.Error().Err(err).Str("from", sourcePath).Str("to", destPath).Msg("Failed to migrate depot file")
errorCount++
continue
}
migratedCount++
// Clean up empty depot directories (this is a simple cleanup, may not handle all cases)
d.cleanupEmptyDepotDirs(filepath.Dir(sourcePath))
}
logger.Logger.Info().
Int("migrated", migratedCount).
Int("errors", errorCount).
Msg("Depot file migration completed")
} }
// cleanupEmptyDepotDirs removes empty depot directories after migration
func (d *DiskFS) cleanupEmptyDepotDirs(dirPath string) {
for dirPath != d.root && strings.HasPrefix(dirPath, filepath.Join(d.root, "depot")) {
entries, err := os.ReadDir(dirPath)
if err != nil || len(entries) > 0 {
break
}
// Directory is empty, remove it
if err := os.Remove(dirPath); err != nil {
logger.Logger.Error().Err(err).Str("dir", dirPath).Msg("Failed to remove empty depot directory")
break
}
// Move up to parent directory
dirPath = filepath.Dir(dirPath)
}
}
// Name returns the name of this VFS
func (d *DiskFS) Name() string {
return "DiskFS"
}
// Size returns the current size
func (d *DiskFS) Size() int64 {
d.mu.RLock()
defer d.mu.RUnlock()
return d.size
}
// Capacity returns the maximum capacity
func (d *DiskFS) Capacity() int64 {
return d.capacity
}
// getShardIndex returns the shard index for a given key
func getShardIndex(key string) int {
// Use FNV-1a hash for good distribution
var h uint32 = 2166136261 // FNV offset basis
for i := 0; i < len(key); i++ {
h ^= uint32(key[i])
h *= 16777619 // FNV prime
}
return int(h % numLockShards)
}
// getKeyLock returns a lock for the given key using sharding
func (d *DiskFS) getKeyLock(key string) *sync.RWMutex {
shardIndex := getShardIndex(key)
shard := &d.keyLocks[shardIndex]
keyLock, _ := shard.LoadOrStore(key, &sync.RWMutex{})
return keyLock.(*sync.RWMutex)
}
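// The practical effect (illustrative): getShardIndex is deterministic, so all
// goroutines touching one key serialize on the same *sync.RWMutex, while
// distinct keys spread roughly uniformly across the 32 shards instead of
// contending on the single sync.Map the previous implementation used.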
// Create creates a new file
func (d *DiskFS) Create(key string, size int64) (io.WriteCloser, error) {
if key == "" {
return nil, vfserror.ErrInvalidKey
@@ -222,37 +338,28 @@ func (d *DiskFS) Create(key string, size int64) (io.WriteCloser, error) {
// Sanitize key to prevent path traversal
key = filepath.Clean(key)
key = strings.ReplaceAll(key, "\\", "/")
if strings.Contains(key, "..") {
return nil, vfserror.ErrInvalidKey
}
keyMu := d.getKeyLock(key)
keyMu.Lock()
defer keyMu.Unlock()
d.mu.Lock()
// Check if file already exists and handle overwrite
if fi, exists := d.info[key]; exists {
d.size -= fi.Size
d.LRU.Remove(key)
delete(d.info, key)
}
shardedPath := d.shardPath(key)
path := filepath.Join(d.root, shardedPath)
d.mu.Unlock()
path = strings.ReplaceAll(path, "\\", "/")
dir := filepath.Dir(path)
if err := os.MkdirAll(dir, 0755); err != nil {
return nil, err
@@ -263,56 +370,148 @@ func (d *DiskFS) Create(key string, size int64) (io.WriteCloser, error) {
return nil, err
}
fi := vfs.NewFileInfo(key, size)
d.mu.Lock()
d.info[key] = fi
d.LRU.Add(key, fi)
// Initialize access time with current time
fi.UpdateAccessBatched(d.timeUpdater)
d.size += size
d.mu.Unlock()
return &diskWriteCloser{
file: file,
disk: d,
key: key,
declaredSize: size,
}, nil
}
// diskWriteCloser implements io.WriteCloser for disk files with size adjustment
type diskWriteCloser struct {
file *os.File
disk *DiskFS
key string
declaredSize int64
}
func (dwc *diskWriteCloser) Write(p []byte) (n int, err error) {
return dwc.file.Write(p)
}
func (dwc *diskWriteCloser) Close() error {
// Get the actual file size
stat, err := dwc.file.Stat()
if err != nil {
dwc.file.Close()
return err
}
actualSize := stat.Size()
// Update the size in FileInfo if it differs from declared size
dwc.disk.mu.Lock()
if fi, exists := dwc.disk.info[dwc.key]; exists {
sizeDiff := actualSize - fi.Size
fi.Size = actualSize
dwc.disk.size += sizeDiff
}
dwc.disk.mu.Unlock()
return dwc.file.Close()
}
// Open opens a file for reading
func (d *DiskFS) Open(key string) (io.ReadCloser, error) {
if key == "" {
return nil, vfserror.ErrInvalidKey
}
if key[0] == '/' {
return nil, vfserror.ErrInvalidKey
}
// Sanitize key to prevent path traversal
key = filepath.Clean(key)
key = strings.ReplaceAll(key, "\\", "/")
if strings.Contains(key, "..") {
return nil, vfserror.ErrInvalidKey
}
keyMu := d.getKeyLock(key)
keyMu.RLock()
defer keyMu.RUnlock()
d.mu.Lock()
fi, exists := d.info[key]
if !exists {
d.mu.Unlock()
return nil, vfserror.ErrNotFound
}
fi.UpdateAccessBatched(d.timeUpdater)
d.LRU.MoveToFront(key, d.timeUpdater)
d.mu.Unlock()
shardedPath := d.shardPath(key)
path := filepath.Join(d.root, shardedPath)
path = strings.ReplaceAll(path, "\\", "/")
file, err := os.Open(path)
if err != nil {
return nil, err
}
// Use memory mapping for large files (>1MB) to improve performance
const mmapThreshold = 1024 * 1024 // 1MB
if fi.Size > mmapThreshold {
// Close the regular file handle
file.Close()
// Try memory mapping
mmapFile, err := os.Open(path)
if err != nil {
return nil, err
}
mapped, err := mmap.Map(mmapFile, mmap.RDONLY, 0)
if err != nil {
mmapFile.Close()
// Fallback to regular file reading
return os.Open(path)
}
return &mmapReadCloser{
data: mapped,
file: mmapFile,
offset: 0,
}, nil
}
return file, nil
}
// mmapReadCloser implements io.ReadCloser for memory-mapped files
type mmapReadCloser struct {
data mmap.MMap
file *os.File
offset int
}
func (m *mmapReadCloser) Read(p []byte) (n int, err error) {
if m.offset >= len(m.data) {
return 0, io.EOF
}
n = copy(p, m.data[m.offset:])
m.offset += n
return n, nil
}
func (m *mmapReadCloser) Close() error {
m.data.Unmap()
return m.file.Close()
}
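// Rationale (inferred from the threshold above): for multi-megabyte Steam chunks,
// mmap serves reads from the page cache without a syscall per Read, while small
// files skip it because map/unmap overhead would dominate. The advancing offset
// makes each mmapReadCloser single-use, matching *os.File reader semantics.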
// Delete removes a file
func (d *DiskFS) Delete(key string) error {
if key == "" {
return vfserror.ErrInvalidKey
@@ -321,13 +520,6 @@ func (d *DiskFS) Delete(key string) error {
return vfserror.ErrInvalidKey
}
keyMu := d.getKeyLock(key)
keyMu.Lock()
defer keyMu.Unlock()
@@ -338,87 +530,24 @@ func (d *DiskFS) Delete(key string) error {
d.mu.Unlock()
return vfserror.ErrNotFound
}
d.size -= fi.Size
d.LRU.Remove(key)
delete(d.info, key)
d.mu.Unlock()
shardedPath := d.shardPath(key)
path := filepath.Join(d.root, shardedPath)
path = strings.ReplaceAll(path, "\\", "/")
err := os.Remove(path)
if err != nil {
return err
}
return nil
}
// Stat returns file information
func (d *DiskFS) Stat(key string) (*vfs.FileInfo, error) {
if key == "" { if key == "" {
return nil, vfserror.ErrInvalidKey return nil, vfserror.ErrInvalidKey
@@ -427,13 +556,6 @@ func (d *DiskFS) Stat(key string) (*vfs.FileInfo, error) {
return nil, vfserror.ErrInvalidKey
}
keyMu := d.getKeyLock(key)
keyMu.RLock()
defer keyMu.RUnlock()
@@ -441,23 +563,177 @@ func (d *DiskFS) Stat(key string) (*vfs.FileInfo, error) {
d.mu.RLock()
defer d.mu.RUnlock()
if fi, ok := d.info[key]; ok {
return fi, nil
}
// Check if file exists on disk but wasn't indexed (for migration)
shardedPath := d.shardPath(key)
path := filepath.Join(d.root, shardedPath)
path = strings.ReplaceAll(path, "\\", "/")
if info, err := os.Stat(path); err == nil {
// File exists in sharded location but not indexed, re-index it
fi := vfs.NewFileInfoFromOS(info, key)
// We can't modify the map here because we're in a read lock
// This is a simplified version - in production you'd need to handle this properly
return fi, nil
}
return nil, vfserror.ErrNotFound
}
// EvictLRU evicts the least recently used files to free up space
func (d *DiskFS) EvictLRU(bytesNeeded uint) uint {
d.mu.Lock()
defer d.mu.Unlock()
var evicted uint
// Evict from LRU list until we free enough space
for d.size > d.capacity-int64(bytesNeeded) && d.LRU.Len() > 0 {
// Get the least recently used item
elem := d.LRU.list.Back()
if elem == nil {
break
}
fi := elem.Value.(*vfs.FileInfo)
key := fi.Key
// Remove from LRU
d.LRU.Remove(key)
// Remove from map
delete(d.info, key)
// Remove file from disk
shardedPath := d.shardPath(key)
path := filepath.Join(d.root, shardedPath)
path = strings.ReplaceAll(path, "\\", "/")
if err := os.Remove(path); err != nil {
// Log error but continue
continue
}
// Update size
d.size -= fi.Size
evicted += uint(fi.Size)
// Clean up key lock
shardIndex := getShardIndex(key)
d.keyLocks[shardIndex].Delete(key)
}
return evicted
}
// EvictBySize evicts files by size (ascending = smallest first, descending = largest first)
func (d *DiskFS) EvictBySize(bytesNeeded uint, ascending bool) uint {
d.mu.Lock()
defer d.mu.Unlock()
var evicted uint
var candidates []*vfs.FileInfo
// Collect all files
for _, fi := range d.info {
candidates = append(candidates, fi)
}
// Sort by size
sort.Slice(candidates, func(i, j int) bool {
if ascending {
return candidates[i].Size < candidates[j].Size
}
return candidates[i].Size > candidates[j].Size
})
// Evict files until we free enough space
for _, fi := range candidates {
if d.size <= d.capacity-int64(bytesNeeded) {
break
}
key := fi.Key
// Remove from LRU
d.LRU.Remove(key)
// Remove from map
delete(d.info, key)
// Remove file from disk
shardedPath := d.shardPath(key)
path := filepath.Join(d.root, shardedPath)
path = strings.ReplaceAll(path, "\\", "/")
if err := os.Remove(path); err != nil {
continue
}
// Update size
d.size -= fi.Size
evicted += uint(fi.Size)
// Clean up key lock
shardIndex := getShardIndex(key)
d.keyLocks[shardIndex].Delete(key)
}
return evicted
}
// EvictFIFO evicts files using FIFO (oldest creation time first)
func (d *DiskFS) EvictFIFO(bytesNeeded uint) uint {
d.mu.Lock()
defer d.mu.Unlock()
var evicted uint
var candidates []*vfs.FileInfo
// Collect all files
for _, fi := range d.info {
candidates = append(candidates, fi)
}
// Sort by creation time (oldest first)
sort.Slice(candidates, func(i, j int) bool {
return candidates[i].CTime.Before(candidates[j].CTime)
})
// Evict oldest files until we free enough space
for _, fi := range candidates {
if d.size <= d.capacity-int64(bytesNeeded) {
break
}
key := fi.Key
// Remove from LRU
d.LRU.Remove(key)
// Remove from map
delete(d.info, key)
// Remove file from disk
shardedPath := d.shardPath(key)
path := filepath.Join(d.root, shardedPath)
path = strings.ReplaceAll(path, "\\", "/")
if err := os.Remove(path); err != nil {
continue
}
// Update size
d.size -= fi.Size
evicted += uint(fi.Size)
// Clean up key lock
shardIndex := getShardIndex(key)
d.keyLocks[shardIndex].Delete(key)
}
return evicted
}
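// Worked example (illustrative): with capacity = 10 GiB, size = 10 GiB and a
// 1 GiB write incoming, the GC layer (see vfs/gc below) requests
// bytesNeeded = (10+1) - 10 = 1 GiB; the eviction loops above then remove
// entries until size <= capacity - 1 GiB = 9 GiB, so the write fits.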


@@ -1,181 +0,0 @@
// vfs/disk/disk_test.go
package disk
import (
"errors"
"fmt"
"io"
"os"
"path/filepath"
"s1d3sw1ped/SteamCache2/vfs/vfserror"
"testing"
)
func TestCreateAndOpen(t *testing.T) {
m := NewSkipInit(t.TempDir(), 1024)
key := "key"
value := []byte("value")
w, err := m.Create(key, int64(len(value)))
if err != nil {
t.Fatalf("Create failed: %v", err)
}
w.Write(value)
w.Close()
rc, err := m.Open(key)
if err != nil {
t.Fatalf("Open failed: %v", err)
}
got, _ := io.ReadAll(rc)
rc.Close()
if string(got) != string(value) {
t.Fatalf("expected %s, got %s", value, got)
}
}
func TestOverwrite(t *testing.T) {
m := NewSkipInit(t.TempDir(), 1024)
key := "key"
value1 := []byte("value1")
value2 := []byte("value2")
w, err := m.Create(key, int64(len(value1)))
if err != nil {
t.Fatalf("Create failed: %v", err)
}
w.Write(value1)
w.Close()
w, err = m.Create(key, int64(len(value2)))
if err != nil {
t.Fatalf("Create failed: %v", err)
}
w.Write(value2)
w.Close()
rc, err := m.Open(key)
if err != nil {
t.Fatalf("Open failed: %v", err)
}
got, _ := io.ReadAll(rc)
rc.Close()
if string(got) != string(value2) {
t.Fatalf("expected %s, got %s", value2, got)
}
}
func TestDelete(t *testing.T) {
m := NewSkipInit(t.TempDir(), 1024)
key := "key"
value := []byte("value")
w, err := m.Create(key, int64(len(value)))
if err != nil {
t.Fatalf("Create failed: %v", err)
}
w.Write(value)
w.Close()
if err := m.Delete(key); err != nil {
t.Fatalf("Delete failed: %v", err)
}
_, err = m.Open(key)
if !errors.Is(err, vfserror.ErrNotFound) {
t.Fatalf("expected %v, got %v", vfserror.ErrNotFound, err)
}
}
func TestCapacityLimit(t *testing.T) {
m := NewSkipInit(t.TempDir(), 10)
for i := 0; i < 11; i++ {
w, err := m.Create(fmt.Sprintf("key%d", i), 1)
if err != nil && i < 10 {
t.Errorf("Create failed: %v", err)
} else if i == 10 && err == nil {
t.Errorf("Create succeeded: got nil, want %v", vfserror.ErrDiskFull)
}
if i < 10 {
w.Write([]byte("1"))
w.Close()
}
}
}
func TestInitExistingFiles(t *testing.T) {
td := t.TempDir()
path := filepath.Join(td, "test", "key")
os.MkdirAll(filepath.Dir(path), 0755)
os.WriteFile(path, []byte("value"), 0644)
m := New(td, 10)
rc, err := m.Open("test/key")
if err != nil {
t.Fatalf("Open failed: %v", err)
}
got, _ := io.ReadAll(rc)
rc.Close()
if string(got) != "value" {
t.Errorf("expected value, got %s", got)
}
s, err := m.Stat("test/key")
if err != nil {
t.Fatalf("Stat failed: %v", err)
}
if s == nil {
t.Error("Stat returned nil")
}
if s != nil && s.Name() != "test/key" {
t.Errorf("Stat failed: got %s, want %s", s.Name(), "test/key")
}
}
func TestSizeConsistency(t *testing.T) {
td := t.TempDir()
os.WriteFile(filepath.Join(td, "key2"), []byte("value2"), 0644)
m := New(td, 1024)
if m.Size() != 6 {
t.Errorf("Size failed: got %d, want 6", m.Size())
}
w, err := m.Create("key", 5)
if err != nil {
t.Errorf("Create failed: %v", err)
}
w.Write([]byte("value"))
w.Close()
w, err = m.Create("key1", 6)
if err != nil {
t.Errorf("Create failed: %v", err)
}
w.Write([]byte("value1"))
w.Close()
assumedSize := int64(6 + 5 + 6)
if assumedSize != m.Size() {
t.Errorf("Size failed: got %d, want %d", m.Size(), assumedSize)
}
rc, err := m.Open("key")
if err != nil {
t.Errorf("Open failed: %v", err)
}
d, _ := io.ReadAll(rc)
rc.Close()
if string(d) != "value" {
t.Errorf("Get failed: got %s, want value", d)
}
m = New(td, 1024)
if assumedSize != m.Size() {
t.Errorf("Size failed: got %d, want %d", m.Size(), assumedSize)
}
}


@@ -1,48 +0,0 @@
// vfs/fileinfo.go
package vfs
import (
"os"
"time"
)
type FileInfo struct {
name string
size int64
MTime time.Time
ATime time.Time
}
func NewFileInfo(key string, size int64, modTime time.Time) *FileInfo {
return &FileInfo{
name: key,
size: size,
MTime: modTime,
ATime: time.Now(),
}
}
func NewFileInfoFromOS(f os.FileInfo, key string) *FileInfo {
return &FileInfo{
name: key,
size: f.Size(),
MTime: f.ModTime(),
ATime: time.Now(),
}
}
func (f FileInfo) Name() string {
return f.name
}
func (f FileInfo) Size() int64 {
return f.size
}
func (f FileInfo) ModTime() time.Time {
return f.MTime
}
func (f FileInfo) AccessTime() time.Time {
return f.ATime
}


@@ -2,109 +2,239 @@
package gc
import (
"io"
"s1d3sw1ped/SteamCache2/vfs"
"s1d3sw1ped/SteamCache2/vfs/disk"
"s1d3sw1ped/SteamCache2/vfs/memory"
)
// GCAlgorithm represents different garbage collection strategies
type GCAlgorithm string
const (
LRU GCAlgorithm = "lru"
LFU GCAlgorithm = "lfu"
FIFO GCAlgorithm = "fifo"
Largest GCAlgorithm = "largest"
Smallest GCAlgorithm = "smallest"
Hybrid GCAlgorithm = "hybrid"
)
// GCFS wraps a VFS with garbage collection capabilities
type GCFS struct {
vfs vfs.VFS
algorithm GCAlgorithm
gcFunc func(vfs.VFS, uint) uint
}
// New creates a new GCFS with the specified algorithm
func New(wrappedVFS vfs.VFS, algorithm GCAlgorithm) *GCFS {
gcfs := &GCFS{
vfs: wrappedVFS,
algorithm: algorithm,
}
switch algorithm {
case LRU:
gcfs.gcFunc = gcLRU
case LFU:
gcfs.gcFunc = gcLFU
case FIFO:
gcfs.gcFunc = gcFIFO
case Largest:
gcfs.gcFunc = gcLargest
case Smallest:
gcfs.gcFunc = gcSmallest
case Hybrid:
gcfs.gcFunc = gcHybrid
default:
// Default to LRU
gcfs.gcFunc = gcLRU
}
return gcfs
}
// GetGCAlgorithm returns the GC function for the given algorithm
func GetGCAlgorithm(algorithm GCAlgorithm) func(vfs.VFS, uint) uint {
switch algorithm {
case LRU:
return gcLRU
case LFU:
return gcLFU
case FIFO:
return gcFIFO
case Largest:
return gcLargest
case Smallest:
return gcSmallest
case Hybrid:
return gcHybrid
default:
return gcLRU
}
}
// Create wraps the underlying Create method
func (gc *GCFS) Create(key string, size int64) (io.WriteCloser, error) {
// Check if we need to GC before creating
if gc.vfs.Size()+size > gc.vfs.Capacity() {
needed := uint((gc.vfs.Size() + size) - gc.vfs.Capacity())
gc.gcFunc(gc.vfs, needed)
}
return gc.vfs.Create(key, size)
}
// Open wraps the underlying Open method
func (gc *GCFS) Open(key string) (io.ReadCloser, error) {
return gc.vfs.Open(key)
}
// Delete wraps the underlying Delete method
func (gc *GCFS) Delete(key string) error {
return gc.vfs.Delete(key)
}
// Stat wraps the underlying Stat method
func (gc *GCFS) Stat(key string) (*vfs.FileInfo, error) {
return gc.vfs.Stat(key)
}
// Name wraps the underlying Name method
func (gc *GCFS) Name() string {
return gc.vfs.Name() + "(GC:" + string(gc.algorithm) + ")"
}
// Size wraps the underlying Size method
func (gc *GCFS) Size() int64 {
return gc.vfs.Size()
}
// Capacity wraps the underlying Capacity method
func (gc *GCFS) Capacity() int64 {
return gc.vfs.Capacity()
}
// EvictionStrategy defines an interface for cache eviction
type EvictionStrategy interface {
Evict(vfs vfs.VFS, bytesNeeded uint) uint
}
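// funcStrategy is an illustrative adapter (not part of this change) showing
// how the plain gc* functions below can satisfy the EvictionStrategy interface.
type funcStrategy func(vfs.VFS, uint) uint
func (f funcStrategy) Evict(v vfs.VFS, bytesNeeded uint) uint {
return f(v, bytesNeeded)
}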
// GC functions
// gcLRU implements Least Recently Used eviction
func gcLRU(v vfs.VFS, bytesNeeded uint) uint {
return evictLRU(v, bytesNeeded)
}
// gcLFU implements Least Frequently Used eviction
func gcLFU(v vfs.VFS, bytesNeeded uint) uint {
return evictLFU(v, bytesNeeded)
}
// gcFIFO implements First In First Out eviction
func gcFIFO(v vfs.VFS, bytesNeeded uint) uint {
return evictFIFO(v, bytesNeeded)
}
// gcLargest implements largest file first eviction
func gcLargest(v vfs.VFS, bytesNeeded uint) uint {
return evictLargest(v, bytesNeeded)
}
// gcSmallest implements smallest file first eviction
func gcSmallest(v vfs.VFS, bytesNeeded uint) uint {
return evictSmallest(v, bytesNeeded)
}
// gcHybrid implements a hybrid eviction strategy
func gcHybrid(v vfs.VFS, bytesNeeded uint) uint {
return evictHybrid(v, bytesNeeded)
}
// evictLRU performs LRU eviction by removing least recently used files
func evictLRU(v vfs.VFS, bytesNeeded uint) uint {
// Try to use specific eviction methods if available
switch fs := v.(type) {
case *memory.MemoryFS:
return fs.EvictLRU(bytesNeeded)
case *disk.DiskFS:
return fs.EvictLRU(bytesNeeded)
default:
// No fallback - return 0 (no eviction performed)
return 0
}
}
// evictLFU performs LFU (Least Frequently Used) eviction
func evictLFU(v vfs.VFS, bytesNeeded uint) uint {
// For now, fall back to size-based eviction
// TODO: Implement proper LFU tracking
return evictBySize(v, bytesNeeded)
}
// evictFIFO performs FIFO (First In First Out) eviction
func evictFIFO(v vfs.VFS, bytesNeeded uint) uint {
switch fs := v.(type) {
case *memory.MemoryFS:
return fs.EvictFIFO(bytesNeeded)
case *disk.DiskFS:
return fs.EvictFIFO(bytesNeeded)
default:
// No fallback - return 0 (no eviction performed)
return 0
}
}
// evictLargest evicts largest files first
func evictLargest(v vfs.VFS, bytesNeeded uint) uint {
return evictBySizeDesc(v, bytesNeeded)
}
// evictSmallest evicts smallest files first
func evictSmallest(v vfs.VFS, bytesNeeded uint) uint {
return evictBySizeAsc(v, bytesNeeded)
}
// evictBySize evicts files based on size (smallest first)
func evictBySize(v vfs.VFS, bytesNeeded uint) uint {
return evictBySizeAsc(v, bytesNeeded)
}
// evictBySizeAsc evicts smallest files first
func evictBySizeAsc(v vfs.VFS, bytesNeeded uint) uint {
switch fs := v.(type) {
case *memory.MemoryFS:
return fs.EvictBySize(bytesNeeded, true) // true = ascending (smallest first)
case *disk.DiskFS:
return fs.EvictBySize(bytesNeeded, true) // true = ascending (smallest first)
default:
// No fallback - return 0 (no eviction performed)
return 0
}
}
// evictBySizeDesc evicts largest files first
func evictBySizeDesc(v vfs.VFS, bytesNeeded uint) uint {
switch fs := v.(type) {
case *memory.MemoryFS:
return fs.EvictBySize(bytesNeeded, false) // false = descending (largest first)
case *disk.DiskFS:
return fs.EvictBySize(bytesNeeded, false) // false = descending (largest first)
default:
// No fallback - return 0 (no eviction performed)
return 0
}
}
// evictHybrid implements a hybrid eviction strategy
func evictHybrid(v vfs.VFS, bytesNeeded uint) uint {
// Use LRU as primary strategy, but consider size as tiebreaker
return evictLRU(v, bytesNeeded)
}
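// evictionExample is an illustrative sketch (not part of this change): an
// eviction pass can also be run directly via GetGCAlgorithm, outside of GCFS.
func evictionExample(v vfs.VFS) {
evict := GetGCAlgorithm(FIFO)
freed := evict(v, 4096) // ask for 4 KiB; returns the bytes actually evicted
if freed < 4096 {
// Eviction came up short; the caller decides how to handle the shortfall.
}
}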
// AdaptivePromotionDeciderFunc is a placeholder for the adaptive promotion logic
var AdaptivePromotionDeciderFunc = func() interface{} {
return nil
}

View File

@@ -1,72 +0,0 @@
// vfs/gc/gc_test.go
package gc
import (
"errors"
"fmt"
"s1d3sw1ped/SteamCache2/vfs/memory"
"testing"
)
func TestGCOnFull(t *testing.T) {
m := memory.New(10)
gc := New(m, LRUGC)
for i := 0; i < 5; i++ {
w, err := gc.Create(fmt.Sprintf("key%d", i), 2)
if err != nil {
t.Fatalf("Create failed: %v", err)
}
w.Write([]byte("ab"))
w.Close()
}
// Cache full at 10 bytes
w, err := gc.Create("key5", 2)
if err != nil {
t.Fatalf("Create failed: %v", err)
}
w.Write([]byte("cd"))
w.Close()
if gc.Size() > 10 {
t.Errorf("Size exceeded: %d > 10", gc.Size())
}
// Check if older keys were evicted
_, err = m.Open("key0")
if err == nil {
t.Error("Expected key0 to be evicted")
}
}
func TestNoGCNeeded(t *testing.T) {
m := memory.New(20)
gc := New(m, LRUGC)
for i := 0; i < 5; i++ {
w, err := gc.Create(fmt.Sprintf("key%d", i), 2)
if err != nil {
t.Fatalf("Create failed: %v", err)
}
w.Write([]byte("ab"))
w.Close()
}
if gc.Size() != 10 {
t.Errorf("Size: got %d, want 10", gc.Size())
}
}
func TestGCInsufficientSpace(t *testing.T) {
m := memory.New(5)
gc := New(m, LRUGC)
w, err := gc.Create("key0", 10)
if err == nil {
w.Close()
t.Error("Expected ErrDiskFull")
} else if !errors.Is(err, ErrInsufficientSpace) {
t.Errorf("Unexpected error: %v", err)
}
}

View File

@@ -5,67 +5,33 @@ import (
"bytes" "bytes"
"container/list" "container/list"
"io" "io"
"s1d3sw1ped/SteamCache2/steamcache/logger"
"s1d3sw1ped/SteamCache2/vfs" "s1d3sw1ped/SteamCache2/vfs"
"s1d3sw1ped/SteamCache2/vfs/vfserror" "s1d3sw1ped/SteamCache2/vfs/vfserror"
"sort"
"strings"
"sync" "sync"
"time" "time"
"github.com/docker/go-units"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
memoryCapacityBytes = promauto.NewGauge(
prometheus.GaugeOpts{
Name: "memory_cache_capacity_bytes",
Help: "Total capacity of the memory cache in bytes",
},
)
memorySizeBytes = promauto.NewGauge(
prometheus.GaugeOpts{
Name: "memory_cache_size_bytes",
Help: "Total size of the memory cache in bytes",
},
)
memoryReadBytes = promauto.NewCounter(
prometheus.CounterOpts{
Name: "memory_cache_read_bytes_total",
Help: "Total number of bytes read from the memory cache",
},
)
memoryWriteBytes = promauto.NewCounter(
prometheus.CounterOpts{
Name: "memory_cache_write_bytes_total",
Help: "Total number of bytes written to the memory cache",
},
)
) )
// Ensure MemoryFS implements VFS. // Ensure MemoryFS implements VFS.
var _ vfs.VFS = (*MemoryFS)(nil) var _ vfs.VFS = (*MemoryFS)(nil)
// file represents a file in memory. // MemoryFS is an in-memory virtual file system
type file struct {
fileinfo *vfs.FileInfo
data []byte
}
// MemoryFS is a virtual file system that stores files in memory.
type MemoryFS struct { type MemoryFS struct {
files map[string]*file data map[string]*bytes.Buffer
info map[string]*vfs.FileInfo
capacity int64 capacity int64
size int64 size int64
mu sync.RWMutex mu sync.RWMutex
keyLocks sync.Map // map[string]*sync.RWMutex keyLocks []sync.Map // Sharded lock pools for better concurrency
LRU *lruList LRU *lruList
timeUpdater *vfs.BatchedTimeUpdate // Batched time updates for better performance
} }
// lruList for LRU eviction // Number of lock shards for reducing contention
const numLockShards = 32
// lruList for time-decayed LRU eviction
type lruList struct { type lruList struct {
list *list.List list *list.List
elem map[string]*list.Element elem map[string]*list.Element
@@ -78,172 +44,260 @@ func newLruList() *lruList {
}
}
func (l *lruList) Add(key string, fi *vfs.FileInfo) {
elem := l.list.PushFront(fi)
l.elem[key] = elem
}
func (l *lruList) MoveToFront(key string, timeUpdater *vfs.BatchedTimeUpdate) {
if elem, exists := l.elem[key]; exists {
l.list.MoveToFront(elem)
// Update the FileInfo in the element with new access time
if fi := elem.Value.(*vfs.FileInfo); fi != nil {
fi.UpdateAccessBatched(timeUpdater)
}
}
}
func (l *lruList) Remove(key string) *vfs.FileInfo {
if elem, exists := l.elem[key]; exists {
delete(l.elem, key)
if fi := l.list.Remove(elem).(*vfs.FileInfo); fi != nil {
return fi
}
}
return nil
}
func (l *lruList) Len() int {
return l.list.Len()
}
// New creates a new MemoryFS
func New(capacity int64) *MemoryFS {
if capacity <= 0 {
panic("memory capacity must be greater than 0")
}
// Initialize sharded locks
keyLocks := make([]sync.Map, numLockShards)
return &MemoryFS{
data: make(map[string]*bytes.Buffer),
info: make(map[string]*vfs.FileInfo),
capacity: capacity,
size: 0,
keyLocks: keyLocks,
LRU: newLruList(),
timeUpdater: vfs.NewBatchedTimeUpdate(100 * time.Millisecond), // Update time every 100ms
}
}
// Name returns the name of this VFS
func (m *MemoryFS) Name() string {
return "MemoryFS"
}
// Size returns the current size
func (m *MemoryFS) Size() int64 {
m.mu.RLock()
defer m.mu.RUnlock()
return m.size
}
// Capacity returns the maximum capacity
func (m *MemoryFS) Capacity() int64 {
return m.capacity
}
// getShardIndex returns the shard index for a given key
func getShardIndex(key string) int {
// Use FNV-1a hash for good distribution
var h uint32 = 2166136261 // FNV offset basis
for i := 0; i < len(key); i++ {
h ^= uint32(key[i])
h *= 16777619 // FNV prime
}
return int(h % numLockShards)
}
// getKeyLock returns a lock for the given key using sharding
func (m *MemoryFS) getKeyLock(key string) *sync.RWMutex {
shardIndex := getShardIndex(key)
shard := &m.keyLocks[shardIndex]
keyLock, _ := shard.LoadOrStore(key, &sync.RWMutex{})
return keyLock.(*sync.RWMutex)
}
// Create creates a new file
func (m *MemoryFS) Create(key string, size int64) (io.WriteCloser, error) {
if key == "" {
return nil, vfserror.ErrInvalidKey
}
if key[0] == '/' {
return nil, vfserror.ErrInvalidKey
}
// Sanitize key to prevent path traversal
if strings.Contains(key, "..") {
return nil, vfserror.ErrInvalidKey
}
keyMu := m.getKeyLock(key)
keyMu.Lock()
defer keyMu.Unlock()
m.mu.Lock()
// Check if file already exists and handle overwrite
if fi, exists := m.info[key]; exists {
m.size -= fi.Size
m.LRU.Remove(key)
delete(m.info, key)
delete(m.data, key)
}
buffer := &bytes.Buffer{}
m.data[key] = buffer
fi := vfs.NewFileInfo(key, size)
m.info[key] = fi
m.LRU.Add(key, fi)
// Initialize access time with current time
fi.UpdateAccessBatched(m.timeUpdater)
m.size += size
m.mu.Unlock()
return &memoryWriteCloser{
buffer: buffer,
memory: m,
key: key,
}, nil
}
// memoryWriteCloser implements io.WriteCloser for memory files
type memoryWriteCloser struct {
buffer *bytes.Buffer
memory *MemoryFS
key string
}
func (mwc *memoryWriteCloser) Write(p []byte) (n int, err error) {
return mwc.buffer.Write(p)
}
func (mwc *memoryWriteCloser) Close() error {
// Update the actual size in FileInfo
mwc.memory.mu.Lock()
if fi, exists := mwc.memory.info[mwc.key]; exists {
actualSize := int64(mwc.buffer.Len())
sizeDiff := actualSize - fi.Size
fi.Size = actualSize
mwc.memory.size += sizeDiff
}
mwc.memory.mu.Unlock()
return nil
}
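// Illustrative note (not part of this change): getShardIndex is plain 32-bit
// FNV-1a reduced mod numLockShards, so it can be cross-checked against the
// standard library:
//
//	h := fnv.New32a() // hash/fnv
//	h.Write([]byte(key))
//	shard := int(h.Sum32() % numLockShards)
//
// Offset basis 2166136261 and prime 16777619 match fnv.New32a exactly.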
// Open opens a file for reading
func (m *MemoryFS) Open(key string) (io.ReadCloser, error) {
if key == "" {
return nil, vfserror.ErrInvalidKey
}
if key[0] == '/' {
return nil, vfserror.ErrInvalidKey
}
if strings.Contains(key, "..") {
return nil, vfserror.ErrInvalidKey
}
keyMu := m.getKeyLock(key)
keyMu.RLock()
defer keyMu.RUnlock()
m.mu.Lock()
fi, exists := m.info[key]
if !exists {
m.mu.Unlock()
return nil, vfserror.ErrNotFound
}
fi.UpdateAccessBatched(m.timeUpdater)
m.LRU.MoveToFront(key, m.timeUpdater)
buffer, exists := m.data[key]
if !exists {
m.mu.Unlock()
return nil, vfserror.ErrNotFound
}
// Create a copy of the buffer for reading
data := make([]byte, buffer.Len())
copy(data, buffer.Bytes())
m.mu.Unlock()
return &memoryReadCloser{
reader: bytes.NewReader(data),
}, nil
}
// memoryReadCloser implements io.ReadCloser for memory files
type memoryReadCloser struct {
reader *bytes.Reader
}
func (mrc *memoryReadCloser) Read(p []byte) (n int, err error) {
return mrc.reader.Read(p)
}
func (mrc *memoryReadCloser) Close() error {
return nil
}
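// Behavioral sketch (illustrative, not part of this change): because Open
// copies the buffer under the lock, a reader keeps a stable snapshot even if
// the key is overwritten while the reader is still open:
//
//	rc, _ := m.Open("key")
//	w, _ := m.Create("key", 3) // overwrite while rc is open
//	w.Write([]byte("new"))
//	w.Close()
//	old, _ := io.ReadAll(rc) // still returns the original bytes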
// Delete removes a file
func (m *MemoryFS) Delete(key string) error {
if key == "" {
return vfserror.ErrInvalidKey
}
if key[0] == '/' {
return vfserror.ErrInvalidKey
}
if strings.Contains(key, "..") {
return vfserror.ErrInvalidKey
}
keyMu := m.getKeyLock(key)
keyMu.Lock()
defer keyMu.Unlock()
m.mu.Lock()
fi, exists := m.info[key]
if !exists {
m.mu.Unlock()
return vfserror.ErrNotFound
}
m.size -= fi.Size
m.LRU.Remove(key)
delete(m.info, key)
delete(m.data, key)
m.mu.Unlock()
return nil
}
// Stat returns file information
func (m *MemoryFS) Stat(key string) (*vfs.FileInfo, error) {
if key == "" {
return nil, vfserror.ErrInvalidKey
}
if key[0] == '/' {
return nil, vfserror.ErrInvalidKey
}
if strings.Contains(key, "..") {
return nil, vfserror.ErrInvalidKey
}
keyMu := m.getKeyLock(key)
keyMu.RLock()
defer keyMu.RUnlock()
@@ -251,24 +305,139 @@ func (m *MemoryFS) Stat(key string) (*vfs.FileInfo, error) {
m.mu.RLock()
defer m.mu.RUnlock()
if fi, ok := m.info[key]; ok {
return fi, nil
}
return nil, vfserror.ErrNotFound
}
// EvictLRU evicts the least recently used files to free up space
func (m *MemoryFS) EvictLRU(bytesNeeded uint) uint {
m.mu.Lock()
defer m.mu.Unlock()
var evicted uint
// Evict from LRU list until we free enough space
for m.size > m.capacity-int64(bytesNeeded) && m.LRU.Len() > 0 {
// Get the least recently used item
elem := m.LRU.list.Back()
if elem == nil {
break
}
fi := elem.Value.(*vfs.FileInfo)
key := fi.Key
// Remove from LRU
m.LRU.Remove(key)
// Remove from maps
delete(m.info, key)
delete(m.data, key)
// Update size
m.size -= fi.Size
evicted += uint(fi.Size)
// Clean up key lock
shardIndex := getShardIndex(key)
m.keyLocks[shardIndex].Delete(key)
}
return evicted
}
// EvictBySize evicts files by size (ascending = smallest first, descending = largest first)
func (m *MemoryFS) EvictBySize(bytesNeeded uint, ascending bool) uint {
m.mu.Lock()
defer m.mu.Unlock()
var evicted uint
var candidates []*vfs.FileInfo
// Collect all files
for _, fi := range m.info {
candidates = append(candidates, fi)
}
// Sort by size
sort.Slice(candidates, func(i, j int) bool {
if ascending {
return candidates[i].Size < candidates[j].Size
}
return candidates[i].Size > candidates[j].Size
})
// Evict files until we free enough space
for _, fi := range candidates {
if m.size <= m.capacity-int64(bytesNeeded) {
break
}
key := fi.Key
// Remove from LRU
m.LRU.Remove(key)
// Remove from maps
delete(m.info, key)
delete(m.data, key)
// Update size
m.size -= fi.Size
evicted += uint(fi.Size)
// Clean up key lock
shardIndex := getShardIndex(key)
m.keyLocks[shardIndex].Delete(key)
}
return evicted
}
// EvictFIFO evicts files using FIFO (oldest creation time first)
func (m *MemoryFS) EvictFIFO(bytesNeeded uint) uint {
m.mu.Lock()
defer m.mu.Unlock()
var evicted uint
var candidates []*vfs.FileInfo
// Collect all files
for _, fi := range m.info {
candidates = append(candidates, fi)
}
// Sort by creation time (oldest first)
sort.Slice(candidates, func(i, j int) bool {
return candidates[i].CTime.Before(candidates[j].CTime)
})
// Evict oldest files until we free enough space
for _, fi := range candidates {
if m.size <= m.capacity-int64(bytesNeeded) {
break
}
key := fi.Key
// Remove from LRU
m.LRU.Remove(key)
// Remove from maps
delete(m.info, key)
delete(m.data, key)
// Update size
m.size -= fi.Size
evicted += uint(fi.Size)
// Clean up key lock
shardIndex := getShardIndex(key)
m.keyLocks[shardIndex].Delete(key)
}
return evicted
}
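A minimal sketch of the eviction path in isolation; the module path is taken from the tests above and the capacities are hypothetical:

// eviction_example.go — standalone sketch, not part of the repository.
package main

import (
	"fmt"

	"s1d3sw1ped/SteamCache2/vfs/memory"
)

func main() {
	m := memory.New(10) // tiny 10-byte cache
	for i := 0; i < 5; i++ {
		w, _ := m.Create(fmt.Sprintf("key%d", i), 2)
		w.Write([]byte("ab"))
		w.Close()
	}
	freed := m.EvictLRU(4) // ask for 4 bytes of headroom; key0 and key1 go first
	fmt.Printf("freed %d bytes, size now %d/%d\n", freed, m.Size(), m.Capacity())
}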

View File

@@ -1,129 +0,0 @@
// vfs/memory/memory_test.go
package memory
import (
"errors"
"fmt"
"io"
"s1d3sw1ped/SteamCache2/vfs/vfserror"
"testing"
)
func TestCreateAndOpen(t *testing.T) {
m := New(1024)
key := "key"
value := []byte("value")
w, err := m.Create(key, int64(len(value)))
if err != nil {
t.Fatalf("Create failed: %v", err)
}
w.Write(value)
w.Close()
rc, err := m.Open(key)
if err != nil {
t.Fatalf("Open failed: %v", err)
}
got, _ := io.ReadAll(rc)
rc.Close()
if string(got) != string(value) {
t.Fatalf("expected %s, got %s", value, got)
}
}
func TestOverwrite(t *testing.T) {
m := New(1024)
key := "key"
value1 := []byte("value1")
value2 := []byte("value2")
w, err := m.Create(key, int64(len(value1)))
if err != nil {
t.Fatalf("Create failed: %v", err)
}
w.Write(value1)
w.Close()
w, err = m.Create(key, int64(len(value2)))
if err != nil {
t.Fatalf("Create failed: %v", err)
}
w.Write(value2)
w.Close()
rc, err := m.Open(key)
if err != nil {
t.Fatalf("Open failed: %v", err)
}
got, _ := io.ReadAll(rc)
rc.Close()
if string(got) != string(value2) {
t.Fatalf("expected %s, got %s", value2, got)
}
}
func TestDelete(t *testing.T) {
m := New(1024)
key := "key"
value := []byte("value")
w, err := m.Create(key, int64(len(value)))
if err != nil {
t.Fatalf("Create failed: %v", err)
}
w.Write(value)
w.Close()
if err := m.Delete(key); err != nil {
t.Fatalf("Delete failed: %v", err)
}
_, err = m.Open(key)
if !errors.Is(err, vfserror.ErrNotFound) {
t.Fatalf("expected %v, got %v", vfserror.ErrNotFound, err)
}
}
func TestCapacityLimit(t *testing.T) {
m := New(10)
for i := 0; i < 11; i++ {
w, err := m.Create(fmt.Sprintf("key%d", i), 1)
if err != nil && i < 10 {
t.Errorf("Create failed: %v", err)
} else if i == 10 && err == nil {
t.Errorf("Create succeeded: got nil, want %v", vfserror.ErrDiskFull)
}
if i < 10 {
w.Write([]byte("1"))
w.Close()
}
}
}
func TestStat(t *testing.T) {
m := New(1024)
key := "key"
value := []byte("value")
w, err := m.Create(key, int64(len(value)))
if err != nil {
t.Fatalf("Create failed: %v", err)
}
w.Write(value)
w.Close()
info, err := m.Stat(key)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if info == nil {
t.Fatal("expected file info to be non-nil")
}
if info.Size() != int64(len(value)) {
t.Errorf("expected size %d, got %d", len(value), info.Size())
}
}

View File

@@ -1,28 +1,112 @@
// vfs/vfs.go
package vfs
import (
"io"
"os"
"time"
)
// VFS defines the interface for virtual file systems
type VFS interface {
// Create creates a new file at the given key
Create(key string, size int64) (io.WriteCloser, error)
// Open opens the file at the given key for reading
Open(key string) (io.ReadCloser, error)
// Delete removes the file at the given key
Delete(key string) error
// Stat returns information about the file at the given key
Stat(key string) (*FileInfo, error)
// Name returns the name of this VFS
Name() string
// Size returns the current size of the VFS
Size() int64
// Capacity returns the maximum capacity of the VFS
Capacity() int64
}
// FileInfo contains metadata about a cached file
type FileInfo struct {
Key string `json:"key"`
Size int64 `json:"size"`
ATime time.Time `json:"atime"` // Last access time
CTime time.Time `json:"ctime"` // Creation time
AccessCount int `json:"access_count"`
}
// NewFileInfo creates a new FileInfo with the given key and current timestamp
func NewFileInfo(key string, size int64) *FileInfo {
now := time.Now()
return &FileInfo{
Key: key,
Size: size,
ATime: now,
CTime: now,
AccessCount: 1,
}
}
// NewFileInfoFromOS creates a FileInfo from os.FileInfo
func NewFileInfoFromOS(info os.FileInfo, key string) *FileInfo {
return &FileInfo{
Key: key,
Size: info.Size(),
ATime: time.Now(), // We don't have access time from os.FileInfo
CTime: info.ModTime(),
AccessCount: 1,
}
}
// UpdateAccess updates the access time and increments the access count
func (fi *FileInfo) UpdateAccess() {
fi.ATime = time.Now()
fi.AccessCount++
}
// BatchedTimeUpdate provides a way to batch time updates for better performance
type BatchedTimeUpdate struct {
currentTime time.Time
lastUpdate time.Time
updateInterval time.Duration
}
// NewBatchedTimeUpdate creates a new batched time updater
func NewBatchedTimeUpdate(interval time.Duration) *BatchedTimeUpdate {
now := time.Now()
return &BatchedTimeUpdate{
currentTime: now,
lastUpdate: now,
updateInterval: interval,
}
}
// GetTime returns the current cached time, updating it if necessary
func (btu *BatchedTimeUpdate) GetTime() time.Time {
now := time.Now()
if now.Sub(btu.lastUpdate) >= btu.updateInterval {
btu.currentTime = now
btu.lastUpdate = now
}
return btu.currentTime
}
// UpdateAccessBatched updates the access time using batched time updates
func (fi *FileInfo) UpdateAccessBatched(btu *BatchedTimeUpdate) {
fi.ATime = btu.GetTime()
fi.AccessCount++
}
// GetTimeDecayedScore calculates a score based on access time and frequency
// More recent and frequent accesses get higher scores
func (fi *FileInfo) GetTimeDecayedScore() float64 {
timeSinceAccess := time.Since(fi.ATime).Hours()
decayFactor := 1.0 / (1.0 + timeSinceAccess/24.0) // Decay over days
frequencyBonus := float64(fi.AccessCount) * 0.1
return decayFactor + frequencyBonus
}
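To make the scoring concrete, a worked example (values hypothetical): a file last accessed 1 hour ago with 5 accesses scores 1/(1 + 1/24) + 0.5 ≈ 1.46, while one last accessed 48 hours ago with 20 accesses scores 1/(1 + 48/24) + 2.0 ≈ 2.33 — under this formula sustained popularity can outweigh recency.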

View File

@@ -3,16 +3,10 @@ package vfserror
import "errors" import "errors"
// Common VFS errors
var ( var (
// ErrInvalidKey is returned when a key is invalid.
ErrInvalidKey = errors.New("vfs: invalid key")
// ErrUnreachable is returned when a code path is unreachable.
ErrUnreachable = errors.New("unreachable")
// ErrNotFound is returned when a key is not found.
ErrNotFound = errors.New("vfs: key not found") ErrNotFound = errors.New("vfs: key not found")
ErrInvalidKey = errors.New("vfs: invalid key")
// ErrDiskFull is returned when the disk is full. ErrAlreadyExists = errors.New("vfs: key already exists")
ErrDiskFull = errors.New("vfs: disk full") ErrCapacityExceeded = errors.New("vfs: capacity exceeded")
) )
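Callers are expected to branch on these sentinels with errors.Is, as the tests above already do; a minimal sketch (the cache variable is hypothetical):

// On a miss, fall through to the upstream fetch path.
if _, err := cache.Open("missing/key"); errors.Is(err, vfserror.ErrNotFound) {
	// fetch from upstream, then cache.Create(...) to populate
}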