21 Commits

Author SHA1 Message Date
a3defe5cf6 Add tests for main package, manager, and various components
- Introduced unit tests for the main package to ensure compilation.
- Added tests for the manager, including validation of upload sessions and handling of Blender binary paths.
- Implemented tests for job token generation and validation, ensuring security and integrity.
- Created tests for configuration management and database schema to verify functionality.
- Added tests for logger and runner components to enhance overall test coverage and reliability.
2026-03-14 22:20:03 -05:00
16d6a95058 Refactor runner and installation scripts for improved functionality
- Removed the `--disable-hiprt` flag from the runner command, simplifying the rendering options for users.
- Updated the `jiggablend-runner` script and README to reflect the removal of the HIPRT control flag, enhancing clarity in usage instructions.
- Enhanced the installation script to provide clearer examples for running the jiggablend manager and runner, improving user experience during setup.
- Implemented a more robust GPU backend detection mechanism, allowing for better compatibility with various hardware configurations.
2026-03-14 21:08:06 -05:00
28cb50492c Update jiggablend-runner script to accept additional runner flags
All checks were successful
Release Tag / release (push) Successful in 17s
- Modified the jiggablend-runner script to allow passing extra flags during execution, enhancing flexibility for users.
- Updated usage instructions to reflect the new syntax, providing examples for better clarity on how to utilize the runner with various options.
2026-03-13 21:18:04 -05:00
dc525fbaa4 Add hardware compatibility flags for CPU rendering and HIPRT control
- Introduced `--force-cpu-rendering` and `--disable-hiprt` flags to the runner command, allowing users to enforce CPU rendering and disable HIPRT acceleration.
- Updated the runner initialization and context structures to accommodate the new flags, enhancing flexibility in rendering configurations.
- Modified the rendering logic to respect these flags, improving compatibility and user control over rendering behavior in Blender.
2026-03-13 21:15:44 -05:00
5303f01f7c Implement GPU backend detection for Blender compatibility
- Added functionality to detect GPU backends (HIP and NVIDIA) during runner registration, enhancing compatibility for Blender versions below 4.x.
- Introduced a new method, DetectAndStoreGPUBackends, to download the latest Blender and run a detection script, storing the results for future rendering decisions.
- Updated rendering logic to force CPU rendering when HIP is detected on systems with Blender < 4.x, ensuring stability and compatibility.
- Enhanced the Context structure to include flags for GPU detection status, improving error handling and rendering decisions based on GPU availability.
2026-03-13 18:32:05 -05:00
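The detection described in this commit could be sketched roughly as follows. This is a hypothetical illustration, not the project's actual `DetectAndStoreGPUBackends` code: the function name, the idea of scanning a PCI device listing, and the vendor substrings are all assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

// detectGPUBackends sketches the kind of check a runner might perform:
// scan a PCI device listing (e.g. `lspci` output) for vendor strings and
// report which Blender GPU backends look plausible. Heuristics are illustrative.
func detectGPUBackends(lspciOutput string) (hasHIP, hasCUDA bool) {
	lower := strings.ToLower(lspciOutput)
	// AMD GPUs usually surface as "AMD" or "Advanced Micro Devices" -> HIP backend.
	hasHIP = strings.Contains(lower, "amd") || strings.Contains(lower, "advanced micro devices")
	// NVIDIA GPUs -> CUDA/OptiX backend.
	hasCUDA = strings.Contains(lower, "nvidia")
	return hasHIP, hasCUDA
}

func main() {
	hip, cuda := detectGPUBackends("01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 21")
	fmt.Println(hip, cuda) // true false
}
```

A real implementation would combine several signals (driver presence, Blender's own device query); substring matching on `lspci` alone is only a first approximation.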
bc39fd438b Add installation script for jiggablend binary
- Introduced a new installer.sh script to automate the installation of the latest jiggablend binary for Linux AMD64.
- The script fetches the latest release information, downloads the binary and its checksums, verifies the checksum, and installs the binary and wrapper scripts for the manager and runner.
- Added wrapper scripts for both the manager and runner with test setup instructions, enhancing user experience for initial setup.
2026-03-13 10:26:21 -05:00
4c7f168bce Enhance GPU error detection in RenderProcessor
- Updated gpuErrorSubstrings to include case-insensitive matching for GPU backend errors, improving error detection reliability.
- Modified checkGPUErrorLine to convert log lines to lowercase before checking for error indicators, ensuring consistent matching across different log formats.
2026-03-13 10:26:13 -05:00
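The case-insensitive matching this commit describes amounts to lowercasing each log line before a substring scan. A minimal sketch (the substring list here is an example, not the project's actual `gpuErrorSubstrings`):

```go
package main

import (
	"fmt"
	"strings"
)

// Example error indicators; the real list in RenderProcessor may differ.
var gpuErrorSubstrings = []string{
	"hip error",
	"cuda error",
	"out of memory",
}

// checkGPUErrorLine lowercases the line once so matching is consistent
// regardless of how the GPU backend capitalizes its messages.
func checkGPUErrorLine(line string) bool {
	lower := strings.ToLower(line)
	for _, s := range gpuErrorSubstrings {
		if strings.Contains(lower, s) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(checkGPUErrorLine("HIP Error: out of memory")) // true
}
```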
6833bb4013 Add GPU error handling and lockout mechanism in Runner
- Introduced gpuLockedOut state in Runner to manage GPU rendering based on detected errors.
- Implemented SetGPULockedOut and IsGPULockedOut methods for controlling GPU usage.
- Enhanced Context to include GPULockedOut and OnGPUError for better error handling.
- Updated RenderProcessor to check for GPU errors in logs and trigger lockout as needed.
- Modified rendering logic to force CPU rendering when GPU lockout is active, improving stability during errors.
2026-03-13 10:01:39 -05:00
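The lockout state described above can be sketched as a mutex-guarded boolean on the runner; the method names mirror the commit message, but this struct is illustrative, not the project's actual `Runner`:

```go
package main

import (
	"fmt"
	"sync"
)

// Runner holds a gpuLockedOut flag guarded by a mutex so the render loop
// and the log-scanning goroutine can safely read/write it concurrently.
type Runner struct {
	mu           sync.Mutex
	gpuLockedOut bool
}

func (r *Runner) SetGPULockedOut(v bool) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.gpuLockedOut = v
}

func (r *Runner) IsGPULockedOut() bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	return r.gpuLockedOut
}

func main() {
	r := &Runner{}
	r.SetGPULockedOut(true) // e.g. after a GPU error is seen in the render log
	if r.IsGPULockedOut() {
		fmt.Println("forcing CPU rendering")
	}
}
```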
f9111ebac4 Remove xvfb-run dependency from rendering process
All checks were successful
Release Tag / release (push) Successful in 16s
- Eliminated the use of xvfb-run for headless rendering in the RenderProcessor, simplifying the command execution for Blender.
- Updated the CheckRequiredTools function to remove the check for xvfb-run, reflecting the change in rendering requirements.
2026-03-12 20:55:45 -05:00
34445dc5cd Update README to clarify Blender requirements for the runner
All checks were successful
Release Tag / release (push) Successful in 1m21s
- Revised the section on the runner to specify that it can run Blender without needing a pre-installed version, as it retrieves the required Blender version from the manager.
2026-03-12 19:59:11 -05:00
63b8ff34c1 Update README to specify fixed API key for testing
2026-03-12 19:47:05 -05:00

2deb47e5ad Refactor web build process and update documentation
- Removed Node.js build artifacts from .gitignore and adjusted Makefile to reflect changes in web UI build process, now using server-rendered Go templates instead of React.
- Updated README to clarify the new web UI architecture and output formats, emphasizing the removal of the Node.js build step.
- Added a command to set the number of frames per render task in manager configuration, enhancing user control over rendering settings.
- Improved Gitea workflow by removing unnecessary npm install step, streamlining the CI process.
2026-03-12 19:44:40 -05:00
d3c5ee0dba Add pagination support for file loading in JobDetails component
All checks were successful
Release Tag / release (push) Successful in 20s
- Introduced a helper function to load all files associated with a job using pagination, improving performance by fetching files in batches.
- Updated the loadDetails function to utilize the new pagination method for retrieving all files instead of just the first page.
- Adjusted file handling logic to ensure proper updates when new files are added, maintaining consistency with the paginated approach.
2026-01-03 10:58:36 -06:00
bb57ce8659 Update task status handling to reset runner_id on job cancellation and failure
All checks were successful
Release Tag / release (push) Successful in 20s
- Modified SQL queries in multiple functions to set runner_id to NULL when updating task statuses for cancelled jobs and failed tasks.
- Ensured that tasks are properly marked as failed with the correct error messages and updated completion timestamps.
- Improved handling of task statuses to prevent potential issues with task assignment and execution.
2026-01-03 09:01:08 -06:00
1a8836e6aa Merge pull request 'Refactor job status handling to prevent race conditions' (#4) from fix-race into master
All checks were successful
Release Tag / release (push) Successful in 18s
Reviewed-on: #4
2026-01-02 18:25:17 -06:00
b51b96a618 Refactor job status handling to prevent race conditions
All checks were successful
PR Check / check-and-test (pull_request) Successful in 26s
- Removed redundant error handling in handleListJobTasks.
- Introduced per-job mutexes in Manager to serialize updateJobStatusFromTasks calls, ensuring thread safety during concurrent task completions.
- Added methods to manage job status update mutexes, including creation and cleanup after job completion or failure.
- Improved error handling in handleGetJobStatusForRunner by consolidating error checks.
2026-01-02 18:22:55 -06:00
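The per-job mutex idea above — one lock per job ID serializing `updateJobStatusFromTasks`, with cleanup after the job finishes — can be sketched like this. Names are illustrative, not the project's actual `Manager` methods:

```go
package main

import (
	"fmt"
	"sync"
)

// Manager keeps one mutex per active job; the outer mutex only guards the map.
type Manager struct {
	mu       sync.Mutex
	jobLocks map[int64]*sync.Mutex
}

// jobStatusLock returns the mutex for a job, creating it on first use.
func (m *Manager) jobStatusLock(jobID int64) *sync.Mutex {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.jobLocks == nil {
		m.jobLocks = make(map[int64]*sync.Mutex)
	}
	l, ok := m.jobLocks[jobID]
	if !ok {
		l = &sync.Mutex{}
		m.jobLocks[jobID] = l
	}
	return l
}

// releaseJobStatusLock drops the entry once the job is complete or failed,
// so the map does not grow without bound.
func (m *Manager) releaseJobStatusLock(jobID int64) {
	m.mu.Lock()
	defer m.mu.Unlock()
	delete(m.jobLocks, jobID)
}

func main() {
	m := &Manager{}
	l := m.jobStatusLock(42)
	l.Lock()
	// ... recompute job status from task states here, race-free ...
	l.Unlock()
	m.releaseJobStatusLock(42)
	fmt.Println("job 42 status update serialized")
}
```

Two task-completion callbacks for the same job now contend on one mutex instead of interleaving their read-then-write of the job status.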
8e561922c9 Merge pull request 'Implement file deletion after successful uploads in runner and encoding processes' (#3) from fix-uploads into master
Reviewed-on: #3
2026-01-02 17:51:19 -06:00
1c4bd78f56 Add FFmpeg setup step to Gitea workflow for enhanced media processing
All checks were successful
PR Check / check-and-test (pull_request) Successful in 1m6s
- Included a new step in the test-pr.yaml workflow to set up FFmpeg, improving the project's media handling capabilities.
- This addition complements the existing build steps for Go and frontend assets, ensuring a more comprehensive build environment.
2026-01-02 17:48:46 -06:00
3f2982ddb3 Update Gitea workflow to include frontend build step and adjust Go build command
Some checks failed
PR Check / check-and-test (pull_request) Failing after 42s
- Added a step to install and build the frontend using npm in the test-pr.yaml workflow.
- Modified the Go build command to compile all packages instead of specifying the output binary location.
- This change improves the build process by integrating frontend assets with the backend build.
2026-01-02 17:46:03 -06:00
0b852c5087 Update Gitea workflow to specify output binary location for jiggablend build
Some checks failed
PR Check / check-and-test (pull_request) Failing after 8s
- Changed the build command in the test-pr.yaml workflow to output the jiggablend binary to the bin directory.
- This modification enhances the organization of build artifacts and aligns with project structure.
2026-01-02 17:40:34 -06:00
5e56c7f0e8 Implement file deletion after successful uploads in runner and encoding processes
Some checks failed
PR Check / check-and-test (pull_request) Failing after 9s
- Added logic to delete files after successful uploads in both runner and encode tasks to prevent duplicate uploads.
- Included logging for any errors encountered during file deletion to ensure visibility of issues.
2026-01-02 17:34:41 -06:00
128 changed files with 7059 additions and 13143 deletions


@@ -10,6 +10,7 @@ jobs:
       - uses: actions/setup-go@main
         with:
           go-version-file: 'go.mod'
+      - uses: FedericoCarboni/setup-ffmpeg@v3
       - run: go mod tidy
       - run: go build ./...
       - run: go test -race -v -shuffle=on ./...

.gitignore

@@ -43,16 +43,6 @@ runner-secrets-*.json
 jiggablend-storage/
 jiggablend-workspaces/
-
-# Node.js
-web/node_modules/
-web/dist/
-web/.vite/
-npm-debug.log*
-yarn-debug.log*
-yarn-error.log*
-pnpm-debug.log*
-lerna-debug.log*
 
 # IDE
 .vscode/
 .idea/


@@ -3,7 +3,6 @@ version: 2
 before:
   hooks:
     - go mod tidy -v
-    - sh -c "cd web && npm install && npm run build"
 
 builds:
   - id: default


@@ -5,11 +5,8 @@ build:
 	@echo "Building with GoReleaser..."
 	goreleaser build --clean --snapshot --single-target
 	@mkdir -p bin
-	@find dist -name jiggablend -type f -exec cp {} bin/jiggablend \;
-
-# Build web UI
-build-web: clean-web
-	cd web && npm install && npm run build
+	@find dist -name jiggablend -type f -exec cp {} bin/jiggablend.new \;
+	@mv -f bin/jiggablend.new bin/jiggablend
 
 # Cleanup manager logs
 cleanup-manager:
@@ -30,7 +27,19 @@ cleanup: cleanup-manager cleanup-runner
 run: cleanup build init-test
 	@echo "Starting manager and runner in parallel..."
 	@echo "Press Ctrl+C to stop both..."
-	@trap 'kill $$MANAGER_PID $$RUNNER_PID 2>/dev/null; exit' INT TERM; \
+	@MANAGER_PID=""; RUNNER_PID=""; INTERRUPTED=0; \
+	cleanup() { \
+		exit_code=$$?; \
+		trap - INT TERM EXIT; \
+		if [ -n "$$RUNNER_PID" ]; then kill -TERM "$$RUNNER_PID" 2>/dev/null || true; fi; \
+		if [ -n "$$MANAGER_PID" ]; then kill -TERM "$$MANAGER_PID" 2>/dev/null || true; fi; \
+		if [ -n "$$MANAGER_PID$$RUNNER_PID" ]; then wait $$MANAGER_PID $$RUNNER_PID 2>/dev/null || true; fi; \
+		if [ "$$INTERRUPTED" -eq 1 ]; then exit 0; fi; \
+		exit $$exit_code; \
+	}; \
+	on_interrupt() { INTERRUPTED=1; cleanup; }; \
+	trap on_interrupt INT TERM; \
+	trap cleanup EXIT; \
 	bin/jiggablend manager -l manager.log & \
 	MANAGER_PID=$$!; \
 	sleep 2; \
@@ -63,7 +72,7 @@ clean-bin:
 
 # Clean web build artifacts
 clean-web:
-	rm -rf web/dist/
+	@echo "No generated web artifacts to clean."
 
 # Run tests
 test:
@@ -75,7 +84,7 @@ help:
 	@echo ""
 	@echo "Build targets:"
 	@echo " build - Build jiggablend binary with embedded web UI"
-	@echo " build-web - Build web UI only"
+	@echo " build-web - Validate web UI assets (no build required)"
 	@echo ""
 	@echo "Run targets:"
 	@echo " run - Run manager and runner in parallel (for testing)"
@@ -90,7 +99,7 @@ help:
 	@echo ""
 	@echo "Other targets:"
 	@echo " clean-bin - Clean build artifacts"
-	@echo " clean-web - Clean web build artifacts"
+	@echo " clean-web - Clean generated web artifacts (currently none)"
 	@echo " test - Run Go tests"
 	@echo " help - Show this help"
 	@echo ""


@@ -12,20 +12,20 @@ Both manager and runner are part of a single binary (`jiggablend`) with subcomma
 
 ## Features
 
 - **Authentication**: OAuth (Google and Discord) and local authentication with user management
-- **Web UI**: Modern React-based interface for job submission and monitoring
+- **Web UI**: Server-rendered Go templates with HTMX fragments for job submission and monitoring
 - **Distributed Rendering**: Scale across multiple runners with automatic job distribution
-- **Real-time Updates**: WebSocket-based progress tracking and job status updates
+- **Real-time Updates**: Polling-based UI updates with lightweight HTMX refreshes
-- **Video Encoding**: Automatic video encoding from EXR/PNG sequences with multiple codec support:
-  - H.264 (MP4) - SDR and HDR support
-  - AV1 (MP4) - With alpha channel support
-  - VP9 (WebM) - With alpha channel and HDR support
+- **Video Encoding**: Automatic video encoding from EXR sequences only. EXR→video always uses HDR (HLG, 10-bit); no option to disable. Codecs:
+  - H.264 (MP4) - HDR (HLG)
+  - AV1 (MP4) - Alpha channel support, HDR
+  - VP9 (WebM) - Alpha channel and HDR
-- **Output Formats**: PNG, JPEG, EXR, and video formats (MP4, WebM)
+- **Output Formats**: EXR frame sequence only, or EXR + video (H.264, AV1, VP9). Blender always renders EXR.
 - **Blender Version Management**: Support for multiple Blender versions with automatic detection
 - **Metadata Extraction**: Automatic extraction of scene metadata from Blender files
 - **Admin Panel**: User and runner management interface
 - **Runner Management**: API key-based authentication for runners with health monitoring
-- **HDR Support**: Preserve HDR range in video encoding with HLG transfer function
+- **HDR**: EXR→video is always encoded as HDR (HLG, 10-bit). There is no option to turn it off; for SDR-only output, download the EXR frames and encode locally.
-- **Alpha Channel**: Preserve alpha channel in video encoding (AV1 and VP9)
+- **Alpha**: Alpha is always preserved in EXR frames. In video, alpha is preserved when present in the EXR for AV1 and VP9; H.264 MP4 does not support alpha.
 
 ## Prerequisites
@@ -37,8 +37,8 @@ Both manager and runner are part of a single binary (`jiggablend`) with subcomma
 
 ### Runner
 
 - Linux amd64
-- Blender installed (can use bundled versions from storage)
 - FFmpeg installed (required for video encoding)
+- Able to run Blender (the runner gets the job's required Blender version from the manager; it does not need Blender pre-installed)
 
 ## Installation
@@ -68,7 +68,7 @@ make init-test
 
 This will:
 - Enable local authentication
-- Set a fixed API key for testing
+- Set a fixed API key for testing: `jk_r0_test_key_123456789012345678901234567890`
 - Create a test admin user (test@example.com / testpassword)
 
 #### Manual Configuration
@@ -154,10 +154,22 @@ bin/jiggablend runner --api-key <your-api-key>
 
 # With custom options
 bin/jiggablend runner --manager http://localhost:8080 --name my-runner --api-key <key> --log-file runner.log
 
+# Hardware compatibility flag (force CPU)
+bin/jiggablend runner --api-key <key> --force-cpu-rendering
+
 # Using environment variables
 JIGGABLEND_MANAGER=http://localhost:8080 JIGGABLEND_API_KEY=<key> bin/jiggablend runner
 ```
 
+### Render Chunk Size Note
+
+For one heavy production scene/profile, chunked rendering (`frames 800-804` in one Blender process) was much slower than one-frame tasks:
+
+- Chunked task (`800-804`): `27m49s` end-to-end (`Task assigned` -> last `Saved`)
+- Single-frame tasks (`800`, `801`, `802`, `803`, `804`): `15m04s` wall clock total
+
+In that test, any chunk size greater than `1` caused a major slowdown after the first frame. Fresh installs should already have it set to `1`, but if you see similar performance degradation, try forcing one frame per task (hard reset Blender each frame): `jiggablend manager config set frames-per-render-task 1`. If `1` is worse on your scene/hardware, benchmark and use a higher chunk size instead.
+
 ### Running Both (for Testing)
 
 ```bash
@@ -217,9 +229,9 @@ jiggablend/
 │   ├── executils/ # Execution utilities
 │   ├── scripts/   # Python scripts for Blender
 │   └── types/     # Shared types and models
-├── web/           # React web UI
-│   ├── src/       # Source files
-│   └── dist/      # Built files (embedded in binary)
+├── web/           # Embedded templates + static assets
+│   ├── templates/ # Go HTML templates and partials
+│   └── static/    # CSS/JS assets
 ├── go.mod
 └── Makefile
 ```
@@ -266,29 +278,25 @@ jiggablend/
 - `GET /api/admin/stats` - System statistics
 
 ### WebSocket
-- `WS /api/ws` - WebSocket connection for real-time updates
-  - Subscribe to job channels: `job:{jobId}`
-  - Receive job status updates, progress, and logs
+- `WS /api/jobs/ws` - Optional API channel for advanced clients
+- The default web UI uses polling + HTMX for status updates and task views.
 
 ## Output Formats
 
-The system supports the following output formats:
+The system supports the following output formats. Blender always renders EXR (linear); the chosen format is the deliverable (frames only or frames + video).
 
-### Image Formats
-- **PNG** - Standard PNG output
-- **JPEG** - JPEG output
-- **EXR** - OpenEXR format (HDR)
-
-### Video Formats
-- **EXR_264_MP4** - H.264 encoded MP4 from EXR sequence (SDR or HDR)
-- **EXR_AV1_MP4** - AV1 encoded MP4 from EXR sequence (with alpha channel support)
-- **EXR_VP9_WEBM** - VP9 encoded WebM from EXR sequence (with alpha channel and HDR support)
+### Deliverable Formats
+- **EXR** - EXR frame sequence only (no video)
+- **EXR_264_MP4** - EXR frames + H.264 MP4 (always HDR, HLG)
+- **EXR_AV1_MP4** - EXR frames + AV1 MP4 (alpha support, always HDR)
+- **EXR_VP9_WEBM** - EXR frames + VP9 WebM (alpha and HDR)
+
+Video encoding (EXR→video) is always HDR (HLG, 10-bit); there is no option to output SDR video. For SDR-only, download the EXR frames and encode locally.
 
 Video encoding features:
 - 2-pass encoding for optimal quality
-- HDR preservation using HLG transfer function
+- EXR→video only (no PNG source); always HLG (HDR), 10-bit, full range
 - Alpha channel preservation (AV1 and VP9 only)
-- Automatic detection of source format (EXR or PNG)
 - Software encoding (libx264, libaom-av1, libvpx-vp9)
 
 ## Storage Structure
@@ -320,16 +328,8 @@ go test ./... -timeout 30s
 
 ### Web UI Development
 
-The web UI is built with React and Vite. To develop the UI:
-
-```bash
-cd web
-npm install
-npm run dev  # Development server
-npm run build  # Build for production
-```
-
-The built files are embedded in the Go binary using `embed.FS`.
+The web UI is server-rendered from embedded templates and static assets in `web/templates` and `web/static`.
+No Node/Vite build step is required.
 
 ## License


@@ -4,6 +4,7 @@ import (
 	"fmt"
 	"net/http"
 	"os/exec"
+	"path/filepath"
 	"strings"
 
 	"jiggablend/internal/auth"
@@ -151,7 +152,15 @@ func runManager(cmd *cobra.Command, args []string) {
 }
 
 func checkBlenderAvailable() error {
-	cmd := exec.Command("blender", "--version")
+	blenderPath, err := exec.LookPath("blender")
+	if err != nil {
+		return fmt.Errorf("failed to locate blender in PATH: %w", err)
+	}
+	blenderPath, err = filepath.Abs(blenderPath)
+	if err != nil {
+		return fmt.Errorf("failed to resolve blender path %q: %w", blenderPath, err)
+	}
+	cmd := exec.Command(blenderPath, "--version")
 	output, err := cmd.CombinedOutput()
 	if err != nil {
 		return fmt.Errorf("failed to run 'blender --version': %w (output: %s)", err, string(output))


@@ -8,6 +8,7 @@ import (
 	"encoding/hex"
 	"fmt"
 	"os"
+	"strconv"
 	"strings"
 
 	"jiggablend/internal/config"
@@ -381,6 +382,25 @@ var setGoogleOAuthCmd = &cobra.Command{
 
 var setDiscordOAuthRedirectURL string
 
+var setFramesPerRenderTaskCmd = &cobra.Command{
+	Use:   "frames-per-render-task <n>",
+	Short: "Set number of frames per render task (min 1)",
+	Long:  `Set how many frames to batch into each render task. Job frame range is divided into chunks of this size. Default is 10.`,
+	Args:  cobra.ExactArgs(1),
+	Run: func(cmd *cobra.Command, args []string) {
+		n, err := strconv.Atoi(args[0])
+		if err != nil || n < 1 {
+			exitWithError("frames-per-render-task must be a positive integer")
+		}
+		withConfig(func(cfg *config.Config, db *database.DB) {
+			if err := cfg.SetInt(config.KeyFramesPerRenderTask, n); err != nil {
+				exitWithError("Failed to set frames_per_render_task: %v", err)
+			}
+			fmt.Printf("Frames per render task set to %d\n", n)
+		})
+	},
+}
+
 var setDiscordOAuthCmd = &cobra.Command{
 	Use:   "discord-oauth <client-id> <client-secret>",
 	Short: "Set Discord OAuth credentials",
@@ -558,6 +578,7 @@ func init() {
 	configCmd.AddCommand(setCmd)
 	setCmd.AddCommand(setFixedAPIKeyCmd)
 	setCmd.AddCommand(setAllowedOriginsCmd)
+	setCmd.AddCommand(setFramesPerRenderTaskCmd)
 	setCmd.AddCommand(setGoogleOAuthCmd)
 	setCmd.AddCommand(setDiscordOAuthCmd)


@@ -0,0 +1,23 @@
+package cmd
+
+import (
+	"strings"
+	"testing"
+)
+
+func TestGenerateAPIKey_Format(t *testing.T) {
+	key, prefix, hash, err := generateAPIKey()
+	if err != nil {
+		t.Fatalf("generateAPIKey failed: %v", err)
+	}
+	if !strings.HasPrefix(prefix, "jk_r") {
+		t.Fatalf("unexpected prefix: %q", prefix)
+	}
+	if !strings.HasPrefix(key, prefix+"_") {
+		t.Fatalf("key does not include prefix: %q", key)
+	}
+	if len(hash) != 64 {
+		t.Fatalf("expected sha256 hex hash length, got %d", len(hash))
+	}
+}


@@ -0,0 +1,16 @@
+package cmd
+
+import "testing"
+
+func TestRootCommand_HasKeySubcommands(t *testing.T) {
+	names := map[string]bool{}
+	for _, c := range rootCmd.Commands() {
+		names[c.Name()] = true
+	}
+	for _, required := range []string{"manager", "runner", "version"} {
+		if !names[required] {
+			t.Fatalf("expected subcommand %q to be registered", required)
+		}
+	}
+}


@@ -37,6 +37,7 @@ func init() {
 	runnerCmd.Flags().String("log-level", "info", "Log level (debug, info, warn, error)")
 	runnerCmd.Flags().BoolP("verbose", "v", false, "Enable verbose logging (same as --log-level=debug)")
 	runnerCmd.Flags().Duration("poll-interval", 5*time.Second, "Job polling interval")
+	runnerCmd.Flags().Bool("force-cpu-rendering", false, "Force CPU rendering for all jobs (disables GPU rendering)")
 
 	// Bind flags to viper with JIGGABLEND_ prefix
 	runnerViper.SetEnvPrefix("JIGGABLEND")
@@ -51,6 +52,7 @@
 	runnerViper.BindPFlag("log_level", runnerCmd.Flags().Lookup("log-level"))
 	runnerViper.BindPFlag("verbose", runnerCmd.Flags().Lookup("verbose"))
 	runnerViper.BindPFlag("poll_interval", runnerCmd.Flags().Lookup("poll-interval"))
+	runnerViper.BindPFlag("force_cpu_rendering", runnerCmd.Flags().Lookup("force-cpu-rendering"))
 }
 
 func runRunner(cmd *cobra.Command, args []string) {
@@ -63,6 +65,7 @@ func runRunner(cmd *cobra.Command, args []string) {
 	logLevel := runnerViper.GetString("log_level")
 	verbose := runnerViper.GetBool("verbose")
 	pollInterval := runnerViper.GetDuration("poll_interval")
+	forceCPURendering := runnerViper.GetBool("force_cpu_rendering")
 
 	var r *runner.Runner
@@ -118,7 +121,7 @@ func runRunner(cmd *cobra.Command, args []string) {
 	}
 
 	// Create runner
-	r = runner.New(managerURL, name, hostname)
+	r = runner.New(managerURL, name, hostname, forceCPURendering)
 
 	// Check for required tools early to fail fast
 	if err := r.CheckRequiredTools(); err != nil {
@@ -161,6 +164,9 @@ func runRunner(cmd *cobra.Command, args []string) {
 		runnerID, err = r.Register(apiKey)
 		if err == nil {
 			logger.Infof("Registered runner with ID: %d", runnerID)
+			// Detect GPU vendors/backends from host hardware so we only force CPU for Blender < 4.x when using AMD.
+			logger.Info("Detecting GPU backends (AMD/NVIDIA/Intel) from host hardware for Blender < 4.x policy...")
+			r.DetectAndStoreGPUBackends()
 			break
 		}


@@ -0,0 +1,17 @@
+package cmd
+
+import (
+	"encoding/hex"
+	"testing"
+)
+
+func TestGenerateShortID_IsHex8Bytes(t *testing.T) {
+	id := generateShortID()
+	if len(id) != 8 {
+		t.Fatalf("expected 8 hex chars, got %q", id)
+	}
+	if _, err := hex.DecodeString(id); err != nil {
+		t.Fatalf("id should be hex: %v", err)
+	}
+}


@@ -0,0 +1,13 @@
+package cmd
+
+import "testing"
+
+func TestVersionCommand_Metadata(t *testing.T) {
+	if versionCmd.Use != "version" {
+		t.Fatalf("unexpected command use: %q", versionCmd.Use)
+	}
+	if versionCmd.Run == nil {
+		t.Fatal("version command run function should be set")
+	}
+}


@@ -0,0 +1,8 @@
+package main
+
+import "testing"
+
+func TestMainPackage_Builds(t *testing.T) {
+	// Smoke test placeholder to keep package main under test compilation.
+}

Binary file not shown (deleted image, 24 MiB).

installer.sh (new file, 113 lines)

@@ -0,0 +1,113 @@
+#!/bin/bash
+set -euo pipefail
+
+# Simple script to install the latest jiggablend binary for Linux AMD64
+# and create wrapper scripts for manager and runner using test setup
+# Dependencies: curl, jq, tar, sha256sum, sudo (for installation to /usr/local/bin)
+
+REPO="s1d3sw1ped/jiggablend"
+API_URL="https://git.s1d3sw1ped.com/api/v1/repos/${REPO}/releases/latest"
+ASSET_NAME="jiggablend-linux-amd64.tar.gz"
+
+echo "Fetching latest release information..."
+RELEASE_JSON=$(curl -s "$API_URL")
+TAG=$(echo "$RELEASE_JSON" | jq -r '.tag_name')
+echo "Latest version: $TAG"
+
+ASSET_URL=$(echo "$RELEASE_JSON" | jq -r ".assets[] | select(.name == \"$ASSET_NAME\") | .browser_download_url")
+if [ -z "$ASSET_URL" ]; then
+    echo "Error: Asset $ASSET_NAME not found in latest release."
+    exit 1
+fi
+
+CHECKSUM_URL=$(echo "$RELEASE_JSON" | jq -r '.assets[] | select(.name == "checksums.txt") | .browser_download_url')
+if [ -z "$CHECKSUM_URL" ]; then
+    echo "Error: checksums.txt not found in latest release."
+    exit 1
+fi
+
+echo "Downloading $ASSET_NAME..."
+curl -L -o "$ASSET_NAME" "$ASSET_URL"
+
+echo "Downloading checksums.txt..."
+curl -L -o "checksums.txt" "$CHECKSUM_URL"
+
+echo "Verifying checksum..."
+if ! sha256sum --ignore-missing --quiet -c checksums.txt; then
+    echo "Error: Checksum verification failed."
+    rm -f "$ASSET_NAME" checksums.txt
+    exit 1
+fi
+
+echo "Extracting..."
+tar -xzf "$ASSET_NAME"
+
+echo "Installing binary to /usr/local/bin (requires sudo)..."
+sudo install -m 0755 jiggablend /usr/local/bin/
+
+echo "Creating manager wrapper script..."
+cat << 'EOF' > jiggablend-manager.sh
+#!/bin/bash
+set -euo pipefail
+
+# Wrapper to run jiggablend manager with test setup
+# Run this in a directory where you want the db, storage, and logs
+
+mkdir -p logs
+rm -f logs/manager.log
+
+# Initialize test configuration
+jiggablend manager config enable localauth
+jiggablend manager config set fixed-apikey jk_r0_test_key_123456789012345678901234567890 -f -y
+jiggablend manager config add user test@example.com testpassword --admin -f -y
+
+# Run manager
+jiggablend manager -l logs/manager.log
+EOF
+chmod +x jiggablend-manager.sh
+sudo install -m 0755 jiggablend-manager.sh /usr/local/bin/jiggablend-manager
+rm -f jiggablend-manager.sh
+
+echo "Creating runner wrapper script..."
+cat << 'EOF' > jiggablend-runner.sh
+#!/bin/bash
+set -euo pipefail
+
+# Wrapper to run jiggablend runner with test setup
+# Usage: jiggablend-runner [MANAGER_URL] [RUNNER_FLAGS...]
+# Default MANAGER_URL: http://localhost:8080
+# Run this in a directory where you want the logs
+
+MANAGER_URL="http://localhost:8080"
+if [[ $# -gt 0 && "$1" != -* ]]; then
+    MANAGER_URL="$1"
+    shift
+fi
+EXTRA_ARGS=("$@")
+
+mkdir -p logs
+rm -f logs/runner.log
+
+# Run runner
+jiggablend runner -l logs/runner.log --api-key=jk_r0_test_key_123456789012345678901234567890 --manager "$MANAGER_URL" "${EXTRA_ARGS[@]}"
+EOF
+chmod +x jiggablend-runner.sh
+sudo install -m 0755 jiggablend-runner.sh /usr/local/bin/jiggablend-runner
+rm -f jiggablend-runner.sh
+
+echo "Cleaning up..."
+rm -f "$ASSET_NAME" checksums.txt jiggablend
+
+echo "Installation complete!"
+echo "Binary: jiggablend"
+echo "Wrappers: jiggablend-manager, jiggablend-runner"
+echo "Run 'jiggablend-manager' to start the manager with test config."
+echo "Run 'jiggablend-runner [url] [runner flags...]' to start the runner."
+echo "Example: jiggablend-runner http://your-manager:8080 --force-cpu-rendering"
+echo "Note: Depending on whether you're running the manager or runner, additional dependencies like Blender, ImageMagick, or FFmpeg may be required. See the project README for details."


@@ -668,24 +668,42 @@ func (a *Auth) IsProductionModeFromConfig() bool {
 	return a.cfg.IsProductionMode()
 }
 
+func (a *Auth) writeUnauthorized(w http.ResponseWriter, r *http.Request) {
+	// Keep API behavior unchanged for programmatic clients.
+	if strings.HasPrefix(r.URL.Path, "/api/") {
+		w.Header().Set("Content-Type", "application/json")
+		w.WriteHeader(http.StatusUnauthorized)
+		_ = json.NewEncoder(w).Encode(map[string]string{"error": "Unauthorized"})
+		return
+	}
+
+	// For HTMX UI fragment requests, trigger a full-page redirect to login.
+	if strings.EqualFold(r.Header.Get("HX-Request"), "true") {
+		w.Header().Set("HX-Redirect", "/login")
+		w.Header().Set("Content-Type", "application/json")
+		w.WriteHeader(http.StatusUnauthorized)
+		_ = json.NewEncoder(w).Encode(map[string]string{"error": "Unauthorized"})
+		return
+	}
+
+	// For normal browser page requests, redirect to the login page.
+	http.Redirect(w, r, "/login", http.StatusFound)
+}
+
 // Middleware creates an authentication middleware
 func (a *Auth) Middleware(next http.HandlerFunc) http.HandlerFunc {
 	return func(w http.ResponseWriter, r *http.Request) {
 		cookie, err := r.Cookie("session_id")
 		if err != nil {
 			log.Printf("Authentication failed: missing session cookie for %s %s", r.Method, r.URL.Path)
-			w.Header().Set("Content-Type", "application/json")
-			w.WriteHeader(http.StatusUnauthorized)
-			json.NewEncoder(w).Encode(map[string]string{"error": "Unauthorized"})
+			a.writeUnauthorized(w, r)
 			return
 		}
 		session, ok := a.GetSession(cookie.Value)
 		if !ok {
 			log.Printf("Authentication failed: invalid session cookie for %s %s", r.Method, r.URL.Path)
-			w.Header().Set("Content-Type", "application/json")
-			w.WriteHeader(http.StatusUnauthorized)
-			json.NewEncoder(w).Encode(map[string]string{"error": "Unauthorized"})
+			a.writeUnauthorized(w, r)
 			return
 		}
@@ -717,18 +735,14 @@ func (a *Auth) AdminMiddleware(next http.HandlerFunc) http.HandlerFunc {
 		cookie, err := r.Cookie("session_id")
 		if err != nil {
 			log.Printf("Admin authentication failed: missing session cookie for %s %s", r.Method, r.URL.Path)
-			w.Header().Set("Content-Type", "application/json")
-			w.WriteHeader(http.StatusUnauthorized)
-			json.NewEncoder(w).Encode(map[string]string{"error": "Unauthorized"})
+			a.writeUnauthorized(w, r)
 			return
 		}
 		session, ok := a.GetSession(cookie.Value)
 		if !ok {
 			log.Printf("Admin authentication failed: invalid session cookie for %s %s", r.Method, r.URL.Path)
-			w.Header().Set("Content-Type", "application/json")
-			w.WriteHeader(http.StatusUnauthorized)
-			json.NewEncoder(w).Encode(map[string]string{"error": "Unauthorized"})
+			a.writeUnauthorized(w, r)
 			return
 		}


@@ -0,0 +1,56 @@
package auth
import (
"context"
"net/http"
"net/http/httptest"
"os"
"testing"
)
func TestContextHelpers(t *testing.T) {
ctx := context.Background()
ctx = context.WithValue(ctx, contextKeyUserID, int64(123))
ctx = context.WithValue(ctx, contextKeyIsAdmin, true)
id, ok := GetUserID(ctx)
if !ok || id != 123 {
t.Fatalf("GetUserID() = (%d,%v), want (123,true)", id, ok)
}
if !IsAdmin(ctx) {
t.Fatal("expected IsAdmin to be true")
}
}
func TestIsProductionMode_UsesEnv(t *testing.T) {
t.Setenv("PRODUCTION", "true")
if !IsProductionMode() {
t.Fatal("expected production mode true when env is set")
}
}
func TestWriteUnauthorized_BehaviorByRequestType(t *testing.T) {
a := &Auth{}
reqAPI := httptest.NewRequest(http.MethodGet, "/api/jobs", nil)
rrAPI := httptest.NewRecorder()
a.writeUnauthorized(rrAPI, reqAPI)
if rrAPI.Code != http.StatusUnauthorized {
t.Fatalf("api code = %d", rrAPI.Code)
}
reqPage := httptest.NewRequest(http.MethodGet, "/dashboard", nil)
rrPage := httptest.NewRecorder()
a.writeUnauthorized(rrPage, reqPage)
if rrPage.Code != http.StatusFound {
t.Fatalf("page code = %d", rrPage.Code)
}
}
func TestIsProductionMode_DefaultFalse(t *testing.T) {
_ = os.Unsetenv("PRODUCTION")
if IsProductionMode() {
t.Fatal("expected false when PRODUCTION is unset")
}
}


@@ -0,0 +1,84 @@
package auth
import (
"crypto/hmac"
"crypto/sha256"
"encoding/base64"
"encoding/json"
"strings"
"testing"
"time"
)
func TestGenerateAndValidateJobToken_RoundTrip(t *testing.T) {
token, err := GenerateJobToken(10, 20, 30)
if err != nil {
t.Fatalf("GenerateJobToken failed: %v", err)
}
claims, err := ValidateJobToken(token)
if err != nil {
t.Fatalf("ValidateJobToken failed: %v", err)
}
if claims.JobID != 10 || claims.RunnerID != 20 || claims.TaskID != 30 {
t.Fatalf("unexpected claims: %+v", claims)
}
}
func TestValidateJobToken_RejectsTampering(t *testing.T) {
token, err := GenerateJobToken(1, 2, 3)
if err != nil {
t.Fatalf("GenerateJobToken failed: %v", err)
}
parts := strings.Split(token, ".")
if len(parts) != 2 {
t.Fatalf("unexpected token format: %q", token)
}
rawClaims, err := base64.RawURLEncoding.DecodeString(parts[0])
if err != nil {
t.Fatalf("decode claims failed: %v", err)
}
var claims JobTokenClaims
if err := json.Unmarshal(rawClaims, &claims); err != nil {
t.Fatalf("unmarshal claims failed: %v", err)
}
claims.JobID = 999
tamperedClaims, _ := json.Marshal(claims)
tampered := base64.RawURLEncoding.EncodeToString(tamperedClaims) + "." + parts[1]
if _, err := ValidateJobToken(tampered); err == nil {
t.Fatal("expected signature validation error for tampered token")
}
}
func TestValidateJobToken_RejectsExpired(t *testing.T) {
expiredClaims := JobTokenClaims{
JobID: 1,
RunnerID: 2,
TaskID: 3,
Exp: time.Now().Add(-time.Minute).Unix(),
}
claimsJSON, _ := json.Marshal(expiredClaims)
sigToken, err := GenerateJobToken(1, 2, 3)
if err != nil {
t.Fatalf("GenerateJobToken failed: %v", err)
}
parts := strings.Split(sigToken, ".")
if len(parts) != 2 {
t.Fatalf("unexpected token format: %q", sigToken)
}
// Re-sign expired payload with package secret.
h := signClaimsForTest(claimsJSON)
expiredToken := base64.RawURLEncoding.EncodeToString(claimsJSON) + "." + base64.RawURLEncoding.EncodeToString(h)
if _, err := ValidateJobToken(expiredToken); err == nil {
t.Fatal("expected token expiration error")
}
}
func signClaimsForTest(claims []byte) []byte {
h := hmac.New(sha256.New, jobTokenSecret)
_, _ = h.Write(claims)
return h.Sum(nil)
}


@@ -0,0 +1,32 @@
package auth
import (
"strings"
"testing"
)
func TestGenerateSecret_Length(t *testing.T) {
secret, err := generateSecret(8)
if err != nil {
t.Fatalf("generateSecret failed: %v", err)
}
// hex encoding doubles length
if len(secret) != 16 {
t.Fatalf("unexpected secret length: %d", len(secret))
}
}
func TestGenerateAPIKey_Format(t *testing.T) {
s := &Secrets{}
key, err := s.generateAPIKey()
if err != nil {
t.Fatalf("generateAPIKey failed: %v", err)
}
if !strings.HasPrefix(key, "jk_r") {
t.Fatalf("unexpected key prefix: %q", key)
}
if !strings.Contains(key, "_") {
t.Fatalf("unexpected key format: %q", key)
}
}


@@ -22,6 +22,15 @@ const (
 	KeyRegistrationEnabled = "registration_enabled"
 	KeyProductionMode      = "production_mode"
 	KeyAllowedOrigins      = "allowed_origins"
+	KeyFramesPerRenderTask = "frames_per_render_task"
+
+	// Operational limits (seconds / bytes / counts)
+	KeyRenderTimeoutSecs   = "render_timeout_seconds"
+	KeyEncodeTimeoutSecs   = "encode_timeout_seconds"
+	KeyMaxUploadBytes      = "max_upload_bytes"
+	KeySessionCookieMaxAge = "session_cookie_max_age"
+	KeyAPIRateLimit        = "api_rate_limit"
+	KeyAuthRateLimit       = "auth_rate_limit"
 )
 
 // Config manages application configuration stored in the database
@@ -301,3 +310,43 @@ func (c *Config) AllowedOrigins() string {
 	return c.GetWithDefault(KeyAllowedOrigins, "")
 }
+
+// GetFramesPerRenderTask returns how many frames to include per render task (min 1, default 1).
+func (c *Config) GetFramesPerRenderTask() int {
+	n := c.GetIntWithDefault(KeyFramesPerRenderTask, 1)
+	if n < 1 {
+		return 1
+	}
+	return n
+}
+
+// RenderTimeoutSeconds returns the per-frame render timeout in seconds (default 3600 = 1 hour).
+func (c *Config) RenderTimeoutSeconds() int {
+	return c.GetIntWithDefault(KeyRenderTimeoutSecs, 3600)
+}
+
+// EncodeTimeoutSeconds returns the video encode timeout in seconds (default 86400 = 24 hours).
+func (c *Config) EncodeTimeoutSeconds() int {
+	return c.GetIntWithDefault(KeyEncodeTimeoutSecs, 86400)
+}
+
+// MaxUploadBytes returns the maximum upload size in bytes (default 50 GB).
+func (c *Config) MaxUploadBytes() int64 {
+	v := c.GetIntWithDefault(KeyMaxUploadBytes, 50<<30)
+	return int64(v)
+}
+
+// SessionCookieMaxAgeSec returns the session cookie max-age in seconds (default 86400 = 24 hours).
+func (c *Config) SessionCookieMaxAgeSec() int {
+	return c.GetIntWithDefault(KeySessionCookieMaxAge, 86400)
+}
+
+// APIRateLimitPerMinute returns the API rate limit (requests per minute per IP, default 100).
+func (c *Config) APIRateLimitPerMinute() int {
+	return c.GetIntWithDefault(KeyAPIRateLimit, 100)
+}
+
+// AuthRateLimitPerMinute returns the auth rate limit (requests per minute per IP, default 10).
+func (c *Config) AuthRateLimitPerMinute() int {
+	return c.GetIntWithDefault(KeyAuthRateLimit, 10)
+}


@@ -0,0 +1,66 @@
package config
import (
"path/filepath"
"testing"
"jiggablend/internal/database"
)
func newTestConfig(t *testing.T) *Config {
t.Helper()
db, err := database.NewDB(filepath.Join(t.TempDir(), "cfg.db"))
if err != nil {
t.Fatalf("NewDB failed: %v", err)
}
t.Cleanup(func() { _ = db.Close() })
return NewConfig(db)
}
func TestSetGetExistsDelete(t *testing.T) {
cfg := newTestConfig(t)
if err := cfg.Set("alpha", "1"); err != nil {
t.Fatalf("Set failed: %v", err)
}
v, err := cfg.Get("alpha")
if err != nil {
t.Fatalf("Get failed: %v", err)
}
if v != "1" {
t.Fatalf("unexpected value: %q", v)
}
exists, err := cfg.Exists("alpha")
if err != nil {
t.Fatalf("Exists failed: %v", err)
}
if !exists {
t.Fatal("expected key to exist")
}
if err := cfg.Delete("alpha"); err != nil {
t.Fatalf("Delete failed: %v", err)
}
exists, err = cfg.Exists("alpha")
if err != nil {
t.Fatalf("Exists after delete failed: %v", err)
}
if exists {
t.Fatal("expected key to be deleted")
}
}
func TestGetIntWithDefault_AndMinimumFrameTask(t *testing.T) {
cfg := newTestConfig(t)
if got := cfg.GetIntWithDefault("missing", 17); got != 17 {
t.Fatalf("expected default value, got %d", got)
}
if err := cfg.SetInt(KeyFramesPerRenderTask, 0); err != nil {
t.Fatalf("SetInt failed: %v", err)
}
if got := cfg.GetFramesPerRenderTask(); got != 1 {
t.Fatalf("expected clamped value 1, got %d", got)
}
}


@@ -0,0 +1,31 @@
-- SQLite does not support DROP COLUMN directly; recreate table without frame_end
CREATE TABLE tasks_new (
id INTEGER PRIMARY KEY AUTOINCREMENT,
job_id INTEGER NOT NULL,
runner_id INTEGER,
frame INTEGER NOT NULL,
status TEXT NOT NULL DEFAULT 'pending',
output_path TEXT,
task_type TEXT NOT NULL DEFAULT 'render',
current_step TEXT,
retry_count INTEGER NOT NULL DEFAULT 0,
max_retries INTEGER NOT NULL DEFAULT 3,
runner_failure_count INTEGER NOT NULL DEFAULT 0,
timeout_seconds INTEGER,
condition TEXT,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
started_at TIMESTAMP,
completed_at TIMESTAMP,
error_message TEXT,
FOREIGN KEY (job_id) REFERENCES jobs(id),
FOREIGN KEY (runner_id) REFERENCES runners(id)
);
INSERT INTO tasks_new (id, job_id, runner_id, frame, status, output_path, task_type, current_step, retry_count, max_retries, runner_failure_count, timeout_seconds, condition, created_at, started_at, completed_at, error_message)
SELECT id, job_id, runner_id, frame, status, output_path, task_type, current_step, retry_count, max_retries, runner_failure_count, timeout_seconds, condition, created_at, started_at, completed_at, error_message FROM tasks;
DROP TABLE tasks;
ALTER TABLE tasks_new RENAME TO tasks;
CREATE INDEX idx_tasks_job_id ON tasks(job_id);
CREATE INDEX idx_tasks_runner_id ON tasks(runner_id);
CREATE INDEX idx_tasks_status ON tasks(status);
CREATE INDEX idx_tasks_job_status ON tasks(job_id, status);
CREATE INDEX idx_tasks_started_at ON tasks(started_at);


@@ -0,0 +1,2 @@
-- Add frame_end to tasks for range-based render tasks (NULL = single frame, same as frame)
ALTER TABLE tasks ADD COLUMN frame_end INTEGER;


@@ -0,0 +1,24 @@
-- SQLite does not support DROP COLUMN directly; recreate the table without last_used_at.
CREATE TABLE runner_api_keys_backup (
id INTEGER PRIMARY KEY AUTOINCREMENT,
key_prefix TEXT NOT NULL,
key_hash TEXT NOT NULL,
name TEXT NOT NULL,
description TEXT,
scope TEXT NOT NULL DEFAULT 'user',
is_active INTEGER NOT NULL DEFAULT 1,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
created_by INTEGER,
FOREIGN KEY (created_by) REFERENCES users(id),
UNIQUE(key_prefix)
);
INSERT INTO runner_api_keys_backup SELECT id, key_prefix, key_hash, name, description, scope, is_active, created_at, created_by FROM runner_api_keys;
DROP TABLE runner_api_keys;
ALTER TABLE runner_api_keys_backup RENAME TO runner_api_keys;
CREATE INDEX idx_runner_api_keys_prefix ON runner_api_keys(key_prefix);
CREATE INDEX idx_runner_api_keys_active ON runner_api_keys(is_active);
CREATE INDEX idx_runner_api_keys_created_by ON runner_api_keys(created_by);


@@ -0,0 +1 @@
ALTER TABLE runner_api_keys ADD COLUMN last_used_at TIMESTAMP;


@@ -0,0 +1,58 @@
package database
import (
"database/sql"
"path/filepath"
"testing"
)
func TestNewDB_RunsMigrationsAndSupportsQueries(t *testing.T) {
dbPath := filepath.Join(t.TempDir(), "test.db")
db, err := NewDB(dbPath)
if err != nil {
t.Fatalf("NewDB failed: %v", err)
}
defer db.Close()
if err := db.Ping(); err != nil {
t.Fatalf("Ping failed: %v", err)
}
var exists bool
err = db.With(func(conn *sql.DB) error {
return conn.QueryRow("SELECT EXISTS(SELECT 1 FROM sqlite_master WHERE type='table' AND name='settings')").Scan(&exists)
})
if err != nil {
t.Fatalf("query failed: %v", err)
}
if !exists {
t.Fatal("expected settings table after migrations")
}
}
func TestWithTx_RollbackOnError(t *testing.T) {
dbPath := filepath.Join(t.TempDir(), "tx.db")
db, err := NewDB(dbPath)
if err != nil {
t.Fatalf("NewDB failed: %v", err)
}
defer db.Close()
_ = db.WithTx(func(tx *sql.Tx) error {
if _, err := tx.Exec("INSERT INTO settings (key, value) VALUES (?, ?)", "rollback_key", "x"); err != nil {
return err
}
return sql.ErrTxDone
})
var count int
if err := db.With(func(conn *sql.DB) error {
return conn.QueryRow("SELECT COUNT(*) FROM settings WHERE key = ?", "rollback_key").Scan(&count)
}); err != nil {
t.Fatalf("count query failed: %v", err)
}
if count != 0 {
t.Fatalf("expected rollback, found %d rows", count)
}
}


@@ -0,0 +1,35 @@
package logger
import (
"path/filepath"
"testing"
)
func TestParseLevel(t *testing.T) {
if ParseLevel("debug") != LevelDebug {
t.Fatal("debug should map to LevelDebug")
}
if ParseLevel("warning") != LevelWarn {
t.Fatal("warning should map to LevelWarn")
}
if ParseLevel("unknown") != LevelInfo {
t.Fatal("unknown should default to LevelInfo")
}
}
func TestSetAndGetLevel(t *testing.T) {
SetLevel(LevelError)
if GetLevel() != LevelError {
t.Fatalf("GetLevel() = %v, want %v", GetLevel(), LevelError)
}
}
func TestNewWithFile_CreatesFile(t *testing.T) {
logPath := filepath.Join(t.TempDir(), "runner.log")
l, err := NewWithFile(logPath)
if err != nil {
t.Fatalf("NewWithFile failed: %v", err)
}
defer l.Close()
}


@@ -121,37 +121,6 @@ func (s *Manager) handleDeleteRunnerAPIKey(w http.ResponseWriter, r *http.Reques
 	s.respondJSON(w, http.StatusOK, map[string]string{"message": "API key deleted"})
 }
 
-// handleVerifyRunner manually verifies a runner
-func (s *Manager) handleVerifyRunner(w http.ResponseWriter, r *http.Request) {
-	runnerID, err := parseID(r, "id")
-	if err != nil {
-		s.respondError(w, http.StatusBadRequest, err.Error())
-		return
-	}
-
-	// Check if runner exists
-	var exists bool
-	err = s.db.With(func(conn *sql.DB) error {
-		return conn.QueryRow("SELECT EXISTS(SELECT 1 FROM runners WHERE id = ?)", runnerID).Scan(&exists)
-	})
-	if err != nil || !exists {
-		s.respondError(w, http.StatusNotFound, "Runner not found")
-		return
-	}
-
-	// Mark runner as verified
-	err = s.db.With(func(conn *sql.DB) error {
-		_, err := conn.Exec("UPDATE runners SET verified = 1 WHERE id = ?", runnerID)
-		return err
-	})
-	if err != nil {
-		s.respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to verify runner: %v", err))
-		return
-	}
-
-	s.respondJSON(w, http.StatusOK, map[string]string{"message": "Runner verified"})
-}
-
 // handleDeleteRunner removes a runner
 func (s *Manager) handleDeleteRunner(w http.ResponseWriter, r *http.Request) {
 	runnerID, err := parseID(r, "id")
@@ -415,6 +384,12 @@ func (s *Manager) handleSetRegistrationEnabled(w http.ResponseWriter, r *http.Re
 // handleSetUserAdminStatus sets a user's admin status (admin only)
 func (s *Manager) handleSetUserAdminStatus(w http.ResponseWriter, r *http.Request) {
+	currentUserID, err := getUserID(r)
+	if err != nil {
+		s.respondError(w, http.StatusUnauthorized, err.Error())
+		return
+	}
+
 	targetUserID, err := parseID(r, "id")
 	if err != nil {
 		s.respondError(w, http.StatusBadRequest, err.Error())
@@ -429,6 +404,12 @@ func (s *Manager) handleSetUserAdminStatus(w http.ResponseWriter, r *http.Reques
 		return
 	}
 
+	// Prevent admins from revoking their own admin status.
+	if targetUserID == currentUserID && !req.IsAdmin {
+		s.respondError(w, http.StatusBadRequest, "You cannot revoke your own admin status")
+		return
+	}
+
 	if err := s.auth.SetUserAdminStatus(targetUserID, req.IsAdmin); err != nil {
 		s.respondError(w, http.StatusBadRequest, err.Error())
 		return

@@ -0,0 +1,35 @@
package api
import (
"bytes"
"net/http"
"net/http/httptest"
"testing"
)
func TestHandleGenerateRunnerAPIKey_UnauthorizedWithoutContext(t *testing.T) {
s := &Manager{}
req := httptest.NewRequest(http.MethodPost, "/api/admin/runner-api-keys", bytes.NewBufferString(`{"name":"k"}`))
rr := httptest.NewRecorder()
s.handleGenerateRunnerAPIKey(rr, req)
if rr.Code != http.StatusUnauthorized {
t.Fatalf("status = %d, want %d", rr.Code, http.StatusUnauthorized)
}
}
func TestHandleGenerateRunnerAPIKey_RejectsBadJSON(t *testing.T) {
s := &Manager{}
req := httptest.NewRequest(http.MethodPost, "/api/admin/runner-api-keys", bytes.NewBufferString(`{`))
rr := httptest.NewRecorder()
s.handleGenerateRunnerAPIKey(rr, req)
// No auth context means unauthorized happens first; this still validates safe
// failure handling for malformed requests in this handler path.
if rr.Code != http.StatusUnauthorized {
t.Fatalf("status = %d, want %d", rr.Code, http.StatusUnauthorized)
}
}


@@ -3,7 +3,6 @@ package api
 import (
 	"archive/tar"
 	"compress/bzip2"
-	"compress/gzip"
 	"fmt"
 	"io"
 	"log"
@@ -16,6 +15,8 @@ import (
 	"strings"
 	"sync"
 	"time"
+
+	"jiggablend/pkg/blendfile"
 )
 
 const (
@@ -331,8 +332,9 @@ func (s *Manager) GetBlenderArchivePath(version *BlenderVersion) (string, error)
 	// Need to download and decompress
 	log.Printf("Downloading Blender %s from %s", version.Full, version.URL)
 
+	// 60-minute timeout for large Blender tarballs; stream to disk via io.Copy below
 	client := &http.Client{
-		Timeout: 0, // No timeout for large downloads
+		Timeout: 60 * time.Minute,
 	}
 	resp, err := client.Get(version.URL)
 	if err != nil {
@@ -438,144 +440,16 @@ func (s *Manager) cleanupExtractedBlenderFolders(blenderDir string, version *Ble
 	}
 }
 
-// ParseBlenderVersionFromFile parses the Blender version that a .blend file was saved with
-// This reads the file header to determine the version
+// ParseBlenderVersionFromFile parses the Blender version that a .blend file was saved with.
+// Delegates to the shared pkg/blendfile implementation.
 func ParseBlenderVersionFromFile(blendPath string) (major, minor int, err error) {
-	file, err := os.Open(blendPath)
-	if err != nil {
-		return 0, 0, fmt.Errorf("failed to open blend file: %w", err)
-	}
-	defer file.Close()
-	return ParseBlenderVersionFromReader(file)
+	return blendfile.ParseVersionFromFile(blendPath)
 }
 
-// ParseBlenderVersionFromReader parses the Blender version from a reader
-// Useful for reading from uploaded files without saving to disk first
+// ParseBlenderVersionFromReader parses the Blender version from a reader.
+// Delegates to the shared pkg/blendfile implementation.
 func ParseBlenderVersionFromReader(r io.ReadSeeker) (major, minor int, err error) {
-	// Read the first 12 bytes of the blend file header
-	// Format: BLENDER-v<major><minor><patch> or BLENDER_v<major><minor><patch>
-	// The header is: "BLENDER" (7 bytes) + pointer size (1 byte: '-' for 64-bit, '_' for 32-bit)
-	// + endianness (1 byte: 'v' for little-endian, 'V' for big-endian)
-	// + version (3 bytes: e.g., "402" for 4.02)
-	header := make([]byte, 12)
-	n, err := r.Read(header)
-	if err != nil || n < 12 {
-		return 0, 0, fmt.Errorf("failed to read blend file header: %w", err)
-	}
-
-	// Check for BLENDER magic
-	if string(header[:7]) != "BLENDER" {
-		// Might be compressed - try to decompress
-		r.Seek(0, 0)
-		return parseCompressedBlendVersion(r)
-	}
-
-	// Parse version from bytes 9-11 (3 digits)
-	versionStr := string(header[9:12])
-	var vMajor, vMinor int
-	// Version format changed in Blender 3.0
-	// Pre-3.0: "279" = 2.79, "280" = 2.80
-	// 3.0+: "300" = 3.0, "402" = 4.02, "410" = 4.10
-	if len(versionStr) == 3 {
-		// First digit is major version
-		fmt.Sscanf(string(versionStr[0]), "%d", &vMajor)
-		// Next two digits are minor version
-		fmt.Sscanf(versionStr[1:3], "%d", &vMinor)
-	}
-	return vMajor, vMinor, nil
-}
-
-// parseCompressedBlendVersion handles gzip and zstd compressed blend files
-func parseCompressedBlendVersion(r io.ReadSeeker) (major, minor int, err error) {
-	// Check for compression magic bytes
-	magic := make([]byte, 4)
-	if _, err := r.Read(magic); err != nil {
-		return 0, 0, err
-	}
-	r.Seek(0, 0)
-
-	if magic[0] == 0x1f && magic[1] == 0x8b {
-		// gzip compressed
-		gzReader, err := gzip.NewReader(r)
-		if err != nil {
-			return 0, 0, fmt.Errorf("failed to create gzip reader: %w", err)
-		}
-		defer gzReader.Close()
-
-		header := make([]byte, 12)
-		n, err := gzReader.Read(header)
-		if err != nil || n < 12 {
-			return 0, 0, fmt.Errorf("failed to read compressed blend header: %w", err)
-		}
-		if string(header[:7]) != "BLENDER" {
-			return 0, 0, fmt.Errorf("invalid blend file format")
-		}
-		versionStr := string(header[9:12])
-		var vMajor, vMinor int
-		if len(versionStr) == 3 {
-			fmt.Sscanf(string(versionStr[0]), "%d", &vMajor)
-			fmt.Sscanf(versionStr[1:3], "%d", &vMinor)
-		}
-		return vMajor, vMinor, nil
-	}
-
-	// Check for zstd magic (Blender 3.0+): 0x28 0xB5 0x2F 0xFD
-	if magic[0] == 0x28 && magic[1] == 0xb5 && magic[2] == 0x2f && magic[3] == 0xfd {
-		return parseZstdBlendVersion(r)
-	}
-
-	return 0, 0, fmt.Errorf("unknown blend file format")
-}
-
-// parseZstdBlendVersion handles zstd-compressed blend files (Blender 3.0+)
-// Uses zstd command line tool since Go doesn't have native zstd support
-func parseZstdBlendVersion(r io.ReadSeeker) (major, minor int, err error) {
-	r.Seek(0, 0)
-
-	// We need to decompress just enough to read the header
-	// Use zstd command to decompress from stdin
-	cmd := exec.Command("zstd", "-d", "-c")
-	cmd.Stdin = r
-	stdout, err := cmd.StdoutPipe()
-	if err != nil {
-		return 0, 0, fmt.Errorf("failed to create zstd stdout pipe: %w", err)
-	}
-	if err := cmd.Start(); err != nil {
-		return 0, 0, fmt.Errorf("failed to start zstd decompression: %w", err)
-	}
-
-	// Read just the header (12 bytes)
-	header := make([]byte, 12)
-	n, readErr := io.ReadFull(stdout, header)
-
-	// Kill the process early - we only need the header
-	cmd.Process.Kill()
-	cmd.Wait()
-
-	if readErr != nil || n < 12 {
-		return 0, 0, fmt.Errorf("failed to read zstd compressed blend header: %v", readErr)
-	}
-	if string(header[:7]) != "BLENDER" {
-		return 0, 0, fmt.Errorf("invalid blend file format in zstd archive")
-	}
-	versionStr := string(header[9:12])
-	var vMajor, vMinor int
-	if len(versionStr) == 3 {
-		fmt.Sscanf(string(versionStr[0]), "%d", &vMajor)
-		fmt.Sscanf(versionStr[1:3], "%d", &vMinor)
-	}
-	return vMajor, vMinor, nil
+	return blendfile.ParseVersionFromReader(r)
 }
 
 // handleGetBlenderVersions returns available Blender versions
@@ -712,7 +586,7 @@ func (s *Manager) handleDownloadBlender(w http.ResponseWriter, r *http.Request)
 	tarFilename = strings.TrimSuffix(tarFilename, ".bz2")
 
 	w.Header().Set("Content-Type", "application/x-tar")
-	w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=%s", tarFilename))
+	w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=%q", tarFilename))
 	w.Header().Set("Content-Length", fmt.Sprintf("%d", stat.Size()))
 	w.Header().Set("X-Blender-Version", blenderVersion.Full)


@@ -0,0 +1,35 @@
package api
import (
"fmt"
"os/exec"
"path/filepath"
"strings"
)
// resolveBlenderBinaryPath resolves a Blender executable to an absolute path.
func resolveBlenderBinaryPath(blenderBinary string) (string, error) {
if blenderBinary == "" {
return "", fmt.Errorf("blender binary path is empty")
}
// Already contains a path component; normalize it.
if strings.Contains(blenderBinary, string(filepath.Separator)) {
absPath, err := filepath.Abs(blenderBinary)
if err != nil {
return "", fmt.Errorf("failed to resolve blender binary path %q: %w", blenderBinary, err)
}
return absPath, nil
}
// Bare executable name, resolve via PATH.
resolvedPath, err := exec.LookPath(blenderBinary)
if err != nil {
return "", fmt.Errorf("failed to locate blender binary %q in PATH: %w", blenderBinary, err)
}
absPath, err := filepath.Abs(resolvedPath)
if err != nil {
return "", fmt.Errorf("failed to resolve blender binary path %q: %w", resolvedPath, err)
}
return absPath, nil
}


@@ -0,0 +1,23 @@
package api
import (
"path/filepath"
"testing"
)
func TestResolveBlenderBinaryPath_WithPathComponent(t *testing.T) {
got, err := resolveBlenderBinaryPath("./blender")
if err != nil {
t.Fatalf("resolveBlenderBinaryPath failed: %v", err)
}
if !filepath.IsAbs(got) {
t.Fatalf("expected absolute path, got %q", got)
}
}
func TestResolveBlenderBinaryPath_Empty(t *testing.T) {
if _, err := resolveBlenderBinaryPath(""); err == nil {
t.Fatal("expected error for empty path")
}
}


@@ -0,0 +1,27 @@
package api
import (
"testing"
"time"
)
func TestGetLatestBlenderForMajorMinor_UsesCachedVersions(t *testing.T) {
blenderVersionCache.mu.Lock()
blenderVersionCache.versions = []BlenderVersion{
{Major: 4, Minor: 2, Patch: 1, Full: "4.2.1"},
{Major: 4, Minor: 2, Patch: 3, Full: "4.2.3"},
{Major: 4, Minor: 1, Patch: 9, Full: "4.1.9"},
}
blenderVersionCache.fetchedAt = time.Now()
blenderVersionCache.mu.Unlock()
m := &Manager{}
v, err := m.GetLatestBlenderForMajorMinor(4, 2)
if err != nil {
t.Fatalf("GetLatestBlenderForMajorMinor failed: %v", err)
}
if v.Full != "4.2.3" {
t.Fatalf("expected highest patch, got %+v", *v)
}
}

File diff suppressed because it is too large.


@@ -0,0 +1,145 @@
package api
import (
"archive/tar"
"bytes"
"net/http/httptest"
"os"
"path/filepath"
"testing"
"time"
)
func TestGenerateAndCheckETag(t *testing.T) {
etag := generateETag(map[string]interface{}{"a": 1})
if etag == "" {
t.Fatal("expected non-empty etag")
}
req := httptest.NewRequest("GET", "/x", nil)
req.Header.Set("If-None-Match", etag)
if !checkETag(req, etag) {
t.Fatal("expected etag match")
}
}
func TestUploadSessionPhase(t *testing.T) {
if got := uploadSessionPhase("uploading"); got != "upload" {
t.Fatalf("unexpected phase: %q", got)
}
if got := uploadSessionPhase("select_blend"); got != "action_required" {
t.Fatalf("unexpected phase: %q", got)
}
if got := uploadSessionPhase("something_else"); got != "processing" {
t.Fatalf("unexpected fallback phase: %q", got)
}
}
func TestParseTarHeader_AndTruncateString(t *testing.T) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
_ = tw.WriteHeader(&tar.Header{Name: "a.txt", Mode: 0644, Size: 3, Typeflag: tar.TypeReg})
_, _ = tw.Write([]byte("abc"))
_ = tw.Close()
raw := buf.Bytes()
if len(raw) < 512 {
t.Fatal("tar buffer unexpectedly small")
}
var h tar.Header
if err := parseTarHeader(raw[:512], &h); err != nil {
t.Fatalf("parseTarHeader failed: %v", err)
}
if h.Name != "a.txt" {
t.Fatalf("unexpected parsed name: %q", h.Name)
}
if got := truncateString("abcdef", 5); got != "ab..." {
t.Fatalf("truncateString = %q, want %q", got, "ab...")
}
}
func TestValidateUploadSessionForJobCreation_MissingSession(t *testing.T) {
s := &Manager{
uploadSessions: map[string]*UploadSession{},
}
_, _, err := s.validateUploadSessionForJobCreation("missing", 1)
if err == nil {
t.Fatal("expected validation error for missing session")
}
sessionErr, ok := err.(*uploadSessionValidationError)
if !ok || sessionErr.Code != uploadSessionExpiredCode {
t.Fatalf("expected %s validation error, got %#v", uploadSessionExpiredCode, err)
}
}
func TestValidateUploadSessionForJobCreation_ContextMissing(t *testing.T) {
tmpDir := t.TempDir()
s := &Manager{
uploadSessions: map[string]*UploadSession{
"s1": {
SessionID: "s1",
UserID: 9,
TempDir: tmpDir,
Status: "completed",
CreatedAt: time.Now(),
},
},
}
if _, _, err := s.validateUploadSessionForJobCreation("s1", 9); err == nil {
t.Fatal("expected error when context.tar is missing")
}
}
func TestValidateUploadSessionForJobCreation_NotReady(t *testing.T) {
tmpDir := t.TempDir()
s := &Manager{
uploadSessions: map[string]*UploadSession{
"s1": {
SessionID: "s1",
UserID: 9,
TempDir: tmpDir,
Status: "processing",
CreatedAt: time.Now(),
},
},
}
_, _, err := s.validateUploadSessionForJobCreation("s1", 9)
if err == nil {
t.Fatal("expected error for session that is not completed")
}
sessionErr, ok := err.(*uploadSessionValidationError)
if !ok || sessionErr.Code != uploadSessionNotReadyCode {
t.Fatalf("expected %s validation error, got %#v", uploadSessionNotReadyCode, err)
}
}
func TestValidateUploadSessionForJobCreation_Success(t *testing.T) {
tmpDir := t.TempDir()
contextPath := filepath.Join(tmpDir, "context.tar")
if err := os.WriteFile(contextPath, []byte("tar-bytes"), 0644); err != nil {
t.Fatalf("write context.tar: %v", err)
}
s := &Manager{
uploadSessions: map[string]*UploadSession{
"s1": {
SessionID: "s1",
UserID: 9,
TempDir: tmpDir,
Status: "completed",
CreatedAt: time.Now(),
},
},
}
session, gotPath, err := s.validateUploadSessionForJobCreation("s1", 9)
if err != nil {
t.Fatalf("expected valid session, got error: %v", err)
}
if session == nil || gotPath != contextPath {
t.Fatalf("unexpected result: session=%v path=%q", session, gotPath)
}
}
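The tests above rely on `validateUploadSessionForJobCreation` returning a typed error whose `Code` field distinguishes "expired" from "not ready". A self-contained sketch of that error shape, with field names taken from the tests and illustrative code constants (the actual string values are assumptions), is:

```go
package main

import "fmt"

// uploadSessionValidationError carries a machine-readable code alongside
// the human-readable message, so handlers can map it to an HTTP error code.
// Field names follow the tests above; the constant values are illustrative.
type uploadSessionValidationError struct {
	Code    string
	Message string
}

func (e *uploadSessionValidationError) Error() string {
	return fmt.Sprintf("%s: %s", e.Code, e.Message)
}

const (
	uploadSessionExpiredCode  = "UPLOAD_SESSION_EXPIRED"
	uploadSessionNotReadyCode = "UPLOAD_SESSION_NOT_READY"
)

func main() {
	var err error = &uploadSessionValidationError{
		Code:    uploadSessionExpiredCode,
		Message: "Upload session expired.",
	}
	// Callers recover the code with a type assertion, exactly as the tests do.
	if sessionErr, ok := err.(*uploadSessionValidationError); ok {
		fmt.Println(sessionErr.Code)
	}
}
```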

View File

@@ -30,27 +30,22 @@ import (
 	"github.com/gorilla/websocket"
 )
 
-// Configuration constants
+// Configuration constants (non-configurable infrastructure values)
 const (
 	// WebSocket timeouts
 	WSReadDeadline  = 90 * time.Second
 	WSPingInterval  = 30 * time.Second
 	WSWriteDeadline = 10 * time.Second
 
-	// Task timeouts
-	RenderTimeout      = 60 * 60      // 1 hour for frame rendering
-	VideoEncodeTimeout = 60 * 60 * 24 // 24 hours for encoding
-
-	// Limits
-	MaxUploadSize = 50 << 30 // 50 GB
-
+	// Infrastructure timers
 	RunnerHeartbeatTimeout   = 90 * time.Second
 	TaskDistributionInterval = 10 * time.Second
 	ProgressUpdateThrottle   = 2 * time.Second
-
-	// Cookie settings
-	SessionCookieMaxAge = 86400 // 24 hours
 )
+
+// Operational limits are loaded from database config at Manager initialization.
+// Defaults are defined in internal/config/config.go convenience methods.
 
 // Manager represents the manager server
 type Manager struct {
 	db *database.DB
@@ -59,6 +54,7 @@ type Manager struct {
 	secrets *authpkg.Secrets
 	storage *storage.Storage
 	router  *chi.Mux
+	ui      *uiRenderer
 
 	// WebSocket connections
 	wsUpgrader websocket.Upgrader
@@ -89,6 +85,9 @@ type Manager struct {
 	// Throttling for task status updates (per task)
 	taskUpdateTimes   map[int64]time.Time // key: taskID
 	taskUpdateTimesMu sync.RWMutex
+	// Per-job mutexes to serialize updateJobStatusFromTasks calls and prevent race conditions
+	jobStatusUpdateMu   map[int64]*sync.Mutex // key: jobID
+	jobStatusUpdateMuMu sync.RWMutex
 
 	// Client WebSocket connections (new unified WebSocket)
 	// Key is "userID:connID" to support multiple tabs per user
@@ -105,6 +104,12 @@ type Manager struct {
 	// Server start time for health checks
 	startTime time.Time
+
+	// Configurable operational values loaded from config
+	renderTimeout       int   // seconds
+	videoEncodeTimeout  int   // seconds
+	maxUploadSize       int64 // bytes
+	sessionCookieMaxAge int   // seconds
 }
 
 // ClientConnection represents a client WebSocket connection with subscriptions
@@ -122,10 +127,24 @@ type ClientConnection struct {
 type UploadSession struct {
 	SessionID string
 	UserID    int64
+	TempDir   string
 	Progress  float64
 	Status    string // "uploading", "processing", "extracting_metadata", "creating_context", "completed", "error"
+	Phase     string // "upload", "processing", "ready", "error", "action_required"
 	Message   string
 	CreatedAt time.Time
+
+	// Result fields set when Status is "completed" (for async processing)
+	ResultContextArchive    string
+	ResultMetadata          interface{} // *types.BlendMetadata when set
+	ResultMainBlendFile     string
+	ResultFileName          string
+	ResultFileSize          int64
+	ResultZipExtracted      bool
+	ResultExtractedFilesCnt int
+	ResultMetadataExtracted bool
+	ResultMetadataError     string   // set when Status is "completed" but metadata extraction failed
+	ErrorMessage            string   // set when Status is "error"
+	ResultBlendFiles        []string // set when Status is "select_blend" (relative paths for user to pick)
 }
 
 // NewManager creates a new manager server
@@ -134,6 +153,10 @@ func NewManager(db *database.DB, cfg *config.Config, auth *authpkg.Auth, storage
 	if err != nil {
 		return nil, fmt.Errorf("failed to initialize secrets: %w", err)
 	}
+	ui, err := newUIRenderer()
+	if err != nil {
+		return nil, fmt.Errorf("failed to initialize UI renderer: %w", err)
+	}
 
 	s := &Manager{
 		db: db,
@@ -142,7 +165,13 @@ func NewManager(db *database.DB, cfg *config.Config, auth *authpkg.Auth, storage
 		secrets:   secrets,
 		storage:   storage,
 		router:    chi.NewRouter(),
+		ui:        ui,
 		startTime: time.Now(),
+
+		renderTimeout:       cfg.RenderTimeoutSeconds(),
+		videoEncodeTimeout:  cfg.EncodeTimeoutSeconds(),
+		maxUploadSize:       cfg.MaxUploadBytes(),
+		sessionCookieMaxAge: cfg.SessionCookieMaxAgeSec(),
 		wsUpgrader: websocket.Upgrader{
 			CheckOrigin:     checkWebSocketOrigin,
 			ReadBufferSize:  1024,
@@ -162,8 +191,14 @@ func NewManager(db *database.DB, cfg *config.Config, auth *authpkg.Auth, storage
 		runnerJobConns:          make(map[string]*websocket.Conn),
 		runnerJobConnsWriteMu:   make(map[string]*sync.Mutex),
 		runnerJobConnsWriteMuMu: sync.RWMutex{}, // Initialize the new field
+
+		// Per-job mutexes for serializing status updates
+		jobStatusUpdateMu: make(map[int64]*sync.Mutex),
 	}
 
+	// Initialize rate limiters from config
+	apiRateLimiter = NewRateLimiter(cfg.APIRateLimitPerMinute(), time.Minute)
+	authRateLimiter = NewRateLimiter(cfg.AuthRateLimitPerMinute(), time.Minute)
+
 	// Check for required external tools
 	if err := s.checkRequiredTools(); err != nil {
 		return nil, err
@@ -242,6 +277,7 @@ type RateLimiter struct {
 	mu       sync.RWMutex
 	limit    int           // max requests
 	window   time.Duration // time window
+	stopChan chan struct{}
 }
 
 // NewRateLimiter creates a new rate limiter
@@ -250,12 +286,17 @@ func NewRateLimiter(limit int, window time.Duration) *RateLimiter {
 		requests: make(map[string][]time.Time),
 		limit:    limit,
 		window:   window,
+		stopChan: make(chan struct{}),
 	}
 
-	// Start cleanup goroutine
 	go rl.cleanup()
 	return rl
 }
 
+// Stop shuts down the cleanup goroutine.
+func (rl *RateLimiter) Stop() {
+	close(rl.stopChan)
+}
+
 // Allow checks if a request from the given IP is allowed
 func (rl *RateLimiter) Allow(ip string) bool {
 	rl.mu.Lock()
@@ -288,32 +329,37 @@ func (rl *RateLimiter) Allow(ip string) bool {
 // cleanup periodically removes old entries
 func (rl *RateLimiter) cleanup() {
 	ticker := time.NewTicker(5 * time.Minute)
-	for range ticker.C {
-		rl.mu.Lock()
-		cutoff := time.Now().Add(-rl.window)
-		for ip, reqs := range rl.requests {
-			validReqs := make([]time.Time, 0, len(reqs))
-			for _, t := range reqs {
-				if t.After(cutoff) {
-					validReqs = append(validReqs, t)
+	defer ticker.Stop()
+
+	for {
+		select {
+		case <-ticker.C:
+			rl.mu.Lock()
+			cutoff := time.Now().Add(-rl.window)
+			for ip, reqs := range rl.requests {
+				validReqs := make([]time.Time, 0, len(reqs))
+				for _, t := range reqs {
+					if t.After(cutoff) {
+						validReqs = append(validReqs, t)
+					}
+				}
+				if len(validReqs) == 0 {
+					delete(rl.requests, ip)
+				} else {
+					rl.requests[ip] = validReqs
 				}
 			}
-			if len(validReqs) == 0 {
-				delete(rl.requests, ip)
-			} else {
-				rl.requests[ip] = validReqs
-			}
+			rl.mu.Unlock()
+		case <-rl.stopChan:
+			return
 		}
-		rl.mu.Unlock()
 	}
 }
 
-// Global rate limiters for different endpoint types
+// Rate limiters, initialized per Manager instance in NewManager.
 var (
-	// General API rate limiter: 100 requests per minute per IP
-	apiRateLimiter = NewRateLimiter(100, time.Minute)
-	// Auth rate limiter: 10 requests per minute per IP (stricter for login attempts)
-	authRateLimiter = NewRateLimiter(10, time.Minute)
+	apiRateLimiter  *RateLimiter
+	authRateLimiter *RateLimiter
 )
 
 // rateLimitMiddleware applies rate limiting based on client IP
@@ -445,6 +491,7 @@ func (w *gzipResponseWriter) WriteHeader(statusCode int) {
 func (s *Manager) setupRoutes() {
 	// Health check endpoint (unauthenticated)
 	s.router.Get("/api/health", s.handleHealthCheck)
+	s.setupUIRoutes()
 
 	// Public routes (with stricter rate limiting for auth endpoints)
 	s.router.Route("/api/auth", func(r chi.Router) {
@@ -472,6 +519,7 @@ func (s *Manager) setupRoutes() {
 		})
 		r.Post("/", s.handleCreateJob)
 		r.Post("/upload", s.handleUploadFileForJobCreation) // Upload before job creation
+		r.Get("/upload/status", s.handleUploadStatus)       // Poll upload processing status (session_id query param)
 		r.Get("/", s.handleListJobs)
 		r.Get("/summary", s.handleListJobsSummary)
 		r.Post("/batch", s.handleBatchGetJobs)
@@ -482,6 +530,7 @@ func (s *Manager) setupRoutes() {
 		r.Get("/{id}/files", s.handleListJobFiles)
 		r.Get("/{id}/files/count", s.handleGetJobFilesCount)
 		r.Get("/{id}/context", s.handleListContextArchive)
+		r.Get("/{id}/files/exr-zip", s.handleDownloadEXRZip)
 		r.Get("/{id}/files/{fileId}/download", s.handleDownloadJobFile)
 		r.Get("/{id}/files/{fileId}/preview-exr", s.handlePreviewEXR)
 		r.Get("/{id}/video", s.handleStreamVideo)
@@ -517,7 +566,6 @@ func (s *Manager) setupRoutes() {
 			r.Delete("/{id}", s.handleDeleteRunnerAPIKey)
 		})
 		r.Get("/", s.handleListRunnersAdmin)
-		r.Post("/{id}/verify", s.handleVerifyRunner)
 		r.Delete("/{id}", s.handleDeleteRunner)
 	})
 	r.Route("/users", func(r chi.Router) {
@@ -550,6 +598,7 @@ func (s *Manager) setupRoutes() {
 			return http.HandlerFunc(s.runnerAuthMiddleware(next.ServeHTTP))
 		})
 		r.Get("/blender/download", s.handleDownloadBlender)
+		r.Get("/jobs/{jobId}/status", s.handleGetJobStatusForRunner)
 		r.Get("/jobs/{jobId}/files", s.handleGetJobFilesForRunner)
 		r.Get("/jobs/{jobId}/metadata", s.handleGetJobMetadataForRunner)
 		r.Get("/files/{jobId}/{fileName}", s.handleDownloadFileForRunner)
@@ -559,8 +608,8 @@ func (s *Manager) setupRoutes() {
 	// Blender versions API (public, for job submission page)
 	s.router.Get("/api/blender/versions", s.handleGetBlenderVersions)
 
-	// Serve static files (embedded React app with SPA fallback)
-	s.router.Handle("/*", web.SPAHandler())
+	// Static assets for server-rendered UI.
+	s.router.Handle("/assets/*", web.StaticHandler())
 }
 
 // ServeHTTP implements http.Handler
@@ -581,18 +630,24 @@ func (s *Manager) respondError(w http.ResponseWriter, status int, message string
 	s.respondJSON(w, status, map[string]string{"error": message})
 }
 
+func (s *Manager) respondErrorWithCode(w http.ResponseWriter, status int, code, message string) {
+	s.respondJSON(w, status, map[string]string{
+		"error": message,
+		"code":  code,
+	})
+}
+
 // createSessionCookie creates a secure session cookie with appropriate flags for the environment
-func createSessionCookie(sessionID string) *http.Cookie {
+func (s *Manager) createSessionCookie(sessionID string) *http.Cookie {
 	cookie := &http.Cookie{
 		Name:     "session_id",
 		Value:    sessionID,
 		Path:     "/",
-		MaxAge:   SessionCookieMaxAge,
+		MaxAge:   s.sessionCookieMaxAge,
 		HttpOnly: true,
 		SameSite: http.SameSiteLaxMode,
 	}
-	// In production mode, set Secure flag to require HTTPS
 	if authpkg.IsProductionMode() {
 		cookie.Secure = true
 	}
@@ -684,7 +739,7 @@ func (s *Manager) handleGoogleCallback(w http.ResponseWriter, r *http.Request) {
 	}
 
 	sessionID := s.auth.CreateSession(session)
-	http.SetCookie(w, createSessionCookie(sessionID))
+	http.SetCookie(w, s.createSessionCookie(sessionID))
 
 	http.Redirect(w, r, "/", http.StatusFound)
 }
@@ -717,7 +772,7 @@ func (s *Manager) handleDiscordCallback(w http.ResponseWriter, r *http.Request)
 	}
 
 	sessionID := s.auth.CreateSession(session)
-	http.SetCookie(w, createSessionCookie(sessionID))
+	http.SetCookie(w, s.createSessionCookie(sessionID))
 
 	http.Redirect(w, r, "/", http.StatusFound)
 }
@@ -810,7 +865,7 @@ func (s *Manager) handleLocalRegister(w http.ResponseWriter, r *http.Request) {
 	}
 
 	sessionID := s.auth.CreateSession(session)
-	http.SetCookie(w, createSessionCookie(sessionID))
+	http.SetCookie(w, s.createSessionCookie(sessionID))
 
 	s.respondJSON(w, http.StatusCreated, map[string]interface{}{
 		"message": "Registration successful",
@@ -847,7 +902,7 @@ func (s *Manager) handleLocalLogin(w http.ResponseWriter, r *http.Request) {
 	}
 
 	sessionID := s.auth.CreateSession(session)
-	http.SetCookie(w, createSessionCookie(sessionID))
+	http.SetCookie(w, s.createSessionCookie(sessionID))
 
 	s.respondJSON(w, http.StatusOK, map[string]interface{}{
 		"message": "Login successful",

View File

@@ -0,0 +1,50 @@
package api
import (
"encoding/json"
"net/http"
"net/http/httptest"
"testing"
)
func TestCheckWebSocketOrigin_DevelopmentAllowsOrigin(t *testing.T) {
t.Setenv("PRODUCTION", "false")
req := httptest.NewRequest("GET", "http://localhost/ws", nil)
req.Host = "localhost:8080"
req.Header.Set("Origin", "http://example.com")
if !checkWebSocketOrigin(req) {
t.Fatal("expected development mode to allow origin")
}
}
func TestCheckWebSocketOrigin_ProductionSameHostAllowed(t *testing.T) {
t.Setenv("PRODUCTION", "true")
t.Setenv("ALLOWED_ORIGINS", "")
req := httptest.NewRequest("GET", "http://localhost/ws", nil)
req.Host = "localhost:8080"
req.Header.Set("Origin", "http://localhost:8080")
if !checkWebSocketOrigin(req) {
t.Fatal("expected same-host origin to be allowed")
}
}
func TestRespondErrorWithCode_IncludesCodeField(t *testing.T) {
s := &Manager{}
rr := httptest.NewRecorder()
s.respondErrorWithCode(rr, http.StatusBadRequest, "UPLOAD_SESSION_EXPIRED", "Upload session expired.")
if rr.Code != http.StatusBadRequest {
t.Fatalf("status = %d, want %d", rr.Code, http.StatusBadRequest)
}
var payload map[string]string
if err := json.Unmarshal(rr.Body.Bytes(), &payload); err != nil {
t.Fatalf("failed to decode response: %v", err)
}
if payload["code"] != "UPLOAD_SESSION_EXPIRED" {
t.Fatalf("unexpected code: %q", payload["code"])
}
if payload["error"] == "" {
t.Fatal("expected non-empty error message")
}
}

View File

@@ -19,6 +19,9 @@ import (
 	"jiggablend/pkg/types"
 )
 
+var runMetadataCommand = executils.RunCommand
+var resolveMetadataBlenderPath = resolveBlenderBinaryPath
+
 // handleGetJobMetadata retrieves metadata for a job
 func (s *Manager) handleGetJobMetadata(w http.ResponseWriter, r *http.Request) {
 	userID, err := getUserID(r)
@@ -141,16 +144,24 @@ func (s *Manager) extractMetadataFromContext(jobID int64) (*types.BlendMetadata,
 		return nil, fmt.Errorf("failed to create extraction script: %w", err)
 	}
 
-	// Make blend file path relative to tmpDir to avoid path resolution issues
-	blendFileRel, err := filepath.Rel(tmpDir, blendFile)
+	// Use absolute paths to avoid path normalization issues with relative traversal.
+	blendFileAbs, err := filepath.Abs(blendFile)
 	if err != nil {
-		return nil, fmt.Errorf("failed to get relative path for blend file: %w", err)
+		return nil, fmt.Errorf("failed to get absolute path for blend file: %w", err)
+	}
+	scriptPathAbs, err := filepath.Abs(scriptPath)
+	if err != nil {
+		return nil, fmt.Errorf("failed to get absolute path for extraction script: %w", err)
 	}
 
 	// Execute Blender with Python script using executils
-	result, err := executils.RunCommand(
-		"blender",
-		[]string{"-b", blendFileRel, "--python", "extract_metadata.py"},
+	blenderBinary, err := resolveMetadataBlenderPath("blender")
+	if err != nil {
+		return nil, err
+	}
+	result, err := runMetadataCommand(
+		blenderBinary,
+		[]string{"-b", blendFileAbs, "--python", scriptPathAbs},
 		tmpDir,
 		nil, // inherit environment
 		jobID,
@@ -225,8 +236,17 @@ func (s *Manager) extractTar(tarPath, destDir string) error {
 			return fmt.Errorf("failed to read tar header: %w", err)
 		}
 
-		// Sanitize path to prevent directory traversal
-		target := filepath.Join(destDir, header.Name)
+		// Sanitize path to prevent directory traversal. TAR stores "/" separators, so normalize first.
+		normalizedHeaderPath := filepath.FromSlash(header.Name)
+		cleanHeaderPath := filepath.Clean(normalizedHeaderPath)
+		if cleanHeaderPath == "." {
+			continue
+		}
+		if filepath.IsAbs(cleanHeaderPath) || strings.HasPrefix(cleanHeaderPath, ".."+string(os.PathSeparator)) || cleanHeaderPath == ".." {
+			log.Printf("ERROR: Invalid file path in TAR - header: %s", header.Name)
+			return fmt.Errorf("invalid file path in archive: %s", header.Name)
+		}
+		target := filepath.Join(destDir, cleanHeaderPath)
 
 		// Ensure target is within destDir
 		cleanTarget := filepath.Clean(target)
@@ -237,14 +257,14 @@ func (s *Manager) extractTar(tarPath, destDir string) error {
 		}
 
 		// Create parent directories
-		if err := os.MkdirAll(filepath.Dir(target), 0755); err != nil {
+		if err := os.MkdirAll(filepath.Dir(cleanTarget), 0755); err != nil {
 			return fmt.Errorf("failed to create directory: %w", err)
 		}
 
 		// Write file
 		switch header.Typeflag {
 		case tar.TypeReg:
-			outFile, err := os.Create(target)
+			outFile, err := os.Create(cleanTarget)
 			if err != nil {
 				return fmt.Errorf("failed to create file: %w", err)
 			}

View File

@@ -0,0 +1,98 @@
package api
import (
"archive/tar"
"bytes"
"os"
"path/filepath"
"testing"
"jiggablend/internal/storage"
"jiggablend/pkg/executils"
)
func TestExtractTar_ExtractsRegularFile(t *testing.T) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
_ = tw.WriteHeader(&tar.Header{Name: "ctx/scene.blend", Mode: 0644, Size: 4, Typeflag: tar.TypeReg})
_, _ = tw.Write([]byte("data"))
_ = tw.Close()
tarPath := filepath.Join(t.TempDir(), "ctx.tar")
if err := os.WriteFile(tarPath, buf.Bytes(), 0644); err != nil {
t.Fatalf("write tar: %v", err)
}
dest := t.TempDir()
m := &Manager{}
if err := m.extractTar(tarPath, dest); err != nil {
t.Fatalf("extractTar failed: %v", err)
}
if _, err := os.Stat(filepath.Join(dest, "ctx", "scene.blend")); err != nil {
t.Fatalf("expected extracted file: %v", err)
}
}
func TestExtractTar_RejectsTraversal(t *testing.T) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
_ = tw.WriteHeader(&tar.Header{Name: "../evil.txt", Mode: 0644, Size: 1, Typeflag: tar.TypeReg})
_, _ = tw.Write([]byte("x"))
_ = tw.Close()
tarPath := filepath.Join(t.TempDir(), "bad.tar")
if err := os.WriteFile(tarPath, buf.Bytes(), 0644); err != nil {
t.Fatalf("write tar: %v", err)
}
m := &Manager{}
if err := m.extractTar(tarPath, t.TempDir()); err == nil {
t.Fatal("expected path traversal error")
}
}
func TestExtractMetadataFromContext_UsesCommandSeam(t *testing.T) {
base := t.TempDir()
st, err := storage.NewStorage(base)
if err != nil {
t.Fatalf("new storage: %v", err)
}
jobID := int64(42)
jobDir := st.JobPath(jobID)
if err := os.MkdirAll(jobDir, 0755); err != nil {
t.Fatalf("mkdir job dir: %v", err)
}
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
_ = tw.WriteHeader(&tar.Header{Name: "scene.blend", Mode: 0644, Size: 4, Typeflag: tar.TypeReg})
_, _ = tw.Write([]byte("fake"))
_ = tw.Close()
if err := os.WriteFile(filepath.Join(jobDir, "context.tar"), buf.Bytes(), 0644); err != nil {
t.Fatalf("write context tar: %v", err)
}
origResolve := resolveMetadataBlenderPath
origRun := runMetadataCommand
resolveMetadataBlenderPath = func(_ string) (string, error) { return "/usr/bin/blender", nil }
runMetadataCommand = func(_ string, _ []string, _ string, _ []string, _ int64, _ *executils.ProcessTracker) (*executils.CommandResult, error) {
return &executils.CommandResult{
Stdout: `noise
{"frame_start":1,"frame_end":3,"has_negative_frames":false,"render_settings":{"resolution_x":1920,"resolution_y":1080,"frame_rate":24,"output_format":"PNG","engine":"CYCLES"},"scene_info":{"camera_count":1,"object_count":2,"material_count":3}}
done`,
}, nil
}
defer func() {
resolveMetadataBlenderPath = origResolve
runMetadataCommand = origRun
}()
m := &Manager{storage: st}
meta, err := m.extractMetadataFromContext(jobID)
if err != nil {
t.Fatalf("extractMetadataFromContext failed: %v", err)
}
if meta.FrameStart != 1 || meta.FrameEnd != 3 {
t.Fatalf("unexpected metadata: %+v", *meta)
}
}
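The stubbed command result above wraps the metadata JSON in `noise`/`done` lines, which mirrors real Blender runs: startup banners and the script's own chatter surround the one line the extractor cares about. A sketch of one way to fish the JSON object out of such mixed stdout (the project's actual parsing may differ; `blendMetadata` here is a trimmed stand-in for `types.BlendMetadata`):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// blendMetadata is a trimmed, illustrative stand-in for the real
// types.BlendMetadata struct.
type blendMetadata struct {
	FrameStart int `json:"frame_start"`
	FrameEnd   int `json:"frame_end"`
}

// parseMetadataFromStdout scans Blender's stdout line by line and returns
// the first line that decodes as the expected JSON object, ignoring the
// surrounding log noise. Sketch only.
func parseMetadataFromStdout(stdout string) (*blendMetadata, error) {
	for _, line := range strings.Split(stdout, "\n") {
		line = strings.TrimSpace(line)
		if !strings.HasPrefix(line, "{") {
			continue
		}
		var meta blendMetadata
		if err := json.Unmarshal([]byte(line), &meta); err == nil {
			return &meta, nil
		}
	}
	return nil, fmt.Errorf("no metadata JSON found in blender output")
}

func main() {
	out := "noise\n{\"frame_start\":1,\"frame_end\":3}\ndone"
	meta, err := parseMetadataFromStdout(out)
	if err != nil {
		panic(err)
	}
	fmt.Println(meta.FrameStart, meta.FrameEnd) // 1 3
}
```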

View File

@@ -0,0 +1,109 @@
package api
import (
"fmt"
"html/template"
"log"
"net/http"
"strings"
"time"
authpkg "jiggablend/internal/auth"
"jiggablend/web"
)
type uiRenderer struct {
templates *template.Template
}
type pageData struct {
Title string
CurrentPath string
ContentTemplate string
PageScript string
User *authpkg.Session
Error string
Notice string
Data interface{}
}
func newUIRenderer() (*uiRenderer, error) {
tpl, err := template.New("base").Funcs(template.FuncMap{
"formatTime": func(t time.Time) string {
if t.IsZero() {
return "-"
}
return t.Local().Format("2006-01-02 15:04:05")
},
"statusClass": func(status string) string {
switch status {
case "completed":
return "status-completed"
case "running":
return "status-running"
case "failed":
return "status-failed"
case "cancelled":
return "status-cancelled"
case "online":
return "status-online"
case "offline":
return "status-offline"
case "busy":
return "status-busy"
default:
return "status-pending"
}
},
"progressInt": func(v float64) int {
if v < 0 {
return 0
}
if v > 100 {
return 100
}
return int(v)
},
"derefInt": func(v *int) string {
if v == nil {
return ""
}
return fmt.Sprintf("%d", *v)
},
"derefString": func(v *string) string {
if v == nil {
return ""
}
return *v
},
"hasSuffixFold": func(value, suffix string) bool {
return strings.HasSuffix(strings.ToLower(value), strings.ToLower(suffix))
},
}).ParseFS(
web.GetTemplateFS(),
"templates/*.html",
"templates/partials/*.html",
)
if err != nil {
return nil, fmt.Errorf("parse templates: %w", err)
}
return &uiRenderer{templates: tpl}, nil
}
func (r *uiRenderer) render(w http.ResponseWriter, data pageData) {
w.Header().Set("Content-Type", "text/html; charset=utf-8")
if err := r.templates.ExecuteTemplate(w, "base", data); err != nil {
log.Printf("Template render error: %v", err)
http.Error(w, "template render error", http.StatusInternalServerError)
return
}
}
func (r *uiRenderer) renderTemplate(w http.ResponseWriter, templateName string, data interface{}) {
w.Header().Set("Content-Type", "text/html; charset=utf-8")
if err := r.templates.ExecuteTemplate(w, templateName, data); err != nil {
log.Printf("Template render error for %s: %v", templateName, err)
http.Error(w, "template render error", http.StatusInternalServerError)
return
}
}
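The renderer above follows the standard `html/template` pattern: register helper functions with `Funcs` before parsing so the templates can call them by name. A self-contained sketch of that pattern, using an inline template instead of the embedded `templates/*.html` files (the inline template body and data struct are illustrative):

```go
package main

import (
	"bytes"
	"fmt"
	"html/template"
	"time"
)

// renderExample registers two of the helpers from the renderer above
// (formatTime and progressInt) and executes a small inline template.
// The real renderer parses templates from the embedded FS instead.
func renderExample() (string, error) {
	tpl := template.Must(template.New("page").Funcs(template.FuncMap{
		"formatTime": func(t time.Time) string {
			if t.IsZero() {
				return "-"
			}
			return t.Format("2006-01-02 15:04:05")
		},
		"progressInt": func(v float64) int {
			if v < 0 {
				return 0
			}
			if v > 100 {
				return 100
			}
			return int(v)
		},
	}).Parse(`progress={{progressInt .Progress}} created={{formatTime .Created}}`))

	var buf bytes.Buffer
	data := struct {
		Progress float64
		Created  time.Time
	}{Progress: 142.5} // Created stays the zero time

	if err := tpl.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := renderExample()
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // progress=100 created=-
}
```

Note that `Funcs` must be called before `Parse`, otherwise parsing fails on the unknown function names.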

View File

@@ -0,0 +1,13 @@
package api
import "testing"
func TestNewUIRendererParsesTemplates(t *testing.T) {
renderer, err := newUIRenderer()
if err != nil {
t.Fatalf("newUIRenderer returned error: %v", err)
}
if renderer == nil || renderer.templates == nil {
t.Fatalf("renderer/templates should not be nil")
}
}

View File

@@ -275,7 +275,8 @@ type NextJobTaskInfo struct {
TaskID int64 `json:"task_id"` TaskID int64 `json:"task_id"`
JobID int64 `json:"job_id"` JobID int64 `json:"job_id"`
JobName string `json:"job_name"` JobName string `json:"job_name"`
Frame int `json:"frame"` Frame int `json:"frame"` // frame start (inclusive)
FrameEnd int `json:"frame_end"` // frame end (inclusive); same as Frame for single-frame
TaskType string `json:"task_type"` TaskType string `json:"task_type"`
Metadata *types.BlendMetadata `json:"metadata,omitempty"` Metadata *types.BlendMetadata `json:"metadata,omitempty"`
} }
@@ -376,6 +377,7 @@ func (s *Manager) handleNextJob(w http.ResponseWriter, r *http.Request) {
TaskID int64 TaskID int64
JobID int64 JobID int64
Frame int Frame int
FrameEnd sql.NullInt64
TaskType string TaskType string
JobName string JobName string
JobUserID int64 JobUserID int64
@@ -385,12 +387,12 @@ func (s *Manager) handleNextJob(w http.ResponseWriter, r *http.Request) {
err = s.db.With(func(conn *sql.DB) error { err = s.db.With(func(conn *sql.DB) error {
rows, err := conn.Query( rows, err := conn.Query(
`SELECT t.id, t.job_id, t.frame, t.task_type, `SELECT t.id, t.job_id, t.frame, t.frame_end, t.task_type,
j.name as job_name, j.user_id, j.blend_metadata, j.name as job_name, j.user_id, j.blend_metadata,
t.condition t.condition
FROM tasks t FROM tasks t
JOIN jobs j ON t.job_id = j.id JOIN jobs j ON t.job_id = j.id
WHERE t.status = ? AND j.status != ? WHERE t.status = ? AND t.runner_id IS NULL AND j.status != ?
ORDER BY t.created_at ASC ORDER BY t.created_at ASC
LIMIT 50`, LIMIT 50`,
types.TaskStatusPending, types.JobStatusCancelled, types.TaskStatusPending, types.JobStatusCancelled,
@@ -403,7 +405,7 @@ func (s *Manager) handleNextJob(w http.ResponseWriter, r *http.Request) {
for rows.Next() { for rows.Next() {
var task taskCandidate var task taskCandidate
var condition sql.NullString var condition sql.NullString
err := rows.Scan(&task.TaskID, &task.JobID, &task.Frame, &task.TaskType, err := rows.Scan(&task.TaskID, &task.JobID, &task.Frame, &task.FrameEnd, &task.TaskType,
&task.JobName, &task.JobUserID, &task.BlendMetadata, &condition) &task.JobName, &task.JobUserID, &task.BlendMetadata, &condition)
if err != nil { if err != nil {
continue continue
@@ -549,6 +551,11 @@ func (s *Manager) handleNextJob(w http.ResponseWriter, r *http.Request) {
// Update job status // Update job status
s.updateJobStatusFromTasks(selectedTask.JobID) s.updateJobStatusFromTasks(selectedTask.JobID)
// Frame end for response: use task range or single frame (NULL frame_end)
frameEnd := selectedTask.Frame
if selectedTask.FrameEnd.Valid {
frameEnd = int(selectedTask.FrameEnd.Int64)
}
// Build response // Build response
response := NextJobResponse{ response := NextJobResponse{
JobToken: jobToken, JobToken: jobToken,
@@ -558,6 +565,7 @@ func (s *Manager) handleNextJob(w http.ResponseWriter, r *http.Request) {
JobID: selectedTask.JobID, JobID: selectedTask.JobID,
JobName: selectedTask.JobName, JobName: selectedTask.JobName,
Frame: selectedTask.Frame, Frame: selectedTask.Frame,
FrameEnd: frameEnd,
TaskType: selectedTask.TaskType, TaskType: selectedTask.TaskType,
Metadata: metadata, Metadata: metadata,
}, },
@@ -757,7 +765,7 @@ func (s *Manager) handleDownloadJobContext(w http.ResponseWriter, r *http.Reques
// Set appropriate headers for tar file // Set appropriate headers for tar file
w.Header().Set("Content-Type", "application/x-tar") w.Header().Set("Content-Type", "application/x-tar")
w.Header().Set("Content-Disposition", "attachment; filename=context.tar") w.Header().Set("Content-Disposition", "attachment; filename=\"context.tar\"")
// Stream the file to the response // Stream the file to the response
io.Copy(w, file) io.Copy(w, file)
@@ -813,7 +821,7 @@ func (s *Manager) handleDownloadJobContextWithToken(w http.ResponseWriter, r *ht
// Set appropriate headers for tar file // Set appropriate headers for tar file
w.Header().Set("Content-Type", "application/x-tar") w.Header().Set("Content-Type", "application/x-tar")
w.Header().Set("Content-Disposition", "attachment; filename=context.tar") w.Header().Set("Content-Disposition", "attachment; filename=\"context.tar\"")
// Stream the file to the response // Stream the file to the response
io.Copy(w, file) io.Copy(w, file)
@@ -828,7 +836,7 @@ func (s *Manager) handleUploadFileFromRunner(w http.ResponseWriter, r *http.Requ
return return
} }
err = r.ParseMultipartForm(MaxUploadSize) // 50 GB (for large output files) err = r.ParseMultipartForm(s.maxUploadSize)
if err != nil { if err != nil {
s.respondError(w, http.StatusBadRequest, fmt.Sprintf("Failed to parse multipart form: %v", err)) s.respondError(w, http.StatusBadRequest, fmt.Sprintf("Failed to parse multipart form: %v", err))
return return
@@ -936,7 +944,7 @@ func (s *Manager) handleUploadFileWithToken(w http.ResponseWriter, r *http.Reque
return return
} }
err = r.ParseMultipartForm(MaxUploadSize) // 50 GB (for large output files) err = r.ParseMultipartForm(s.maxUploadSize)
if err != nil { if err != nil {
s.respondError(w, http.StatusBadRequest, fmt.Sprintf("Failed to parse multipart form: %v", err)) s.respondError(w, http.StatusBadRequest, fmt.Sprintf("Failed to parse multipart form: %v", err))
return return
@@ -1019,6 +1027,10 @@ func (s *Manager) handleGetJobStatusForRunner(w http.ResponseWriter, r *http.Req
&job.CreatedAt, &startedAt, &completedAt, &errorMessage,
)
})
+if err == sql.ErrNoRows {
+s.respondError(w, http.StatusNotFound, "Job not found")
+return
+}
if err != nil {
s.respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to query job: %v", err))
return
@@ -1037,15 +1049,6 @@ func (s *Manager) handleGetJobStatusForRunner(w http.ResponseWriter, r *http.Req
job.OutputFormat = &outputFormat.String
}
-if err == sql.ErrNoRows {
-s.respondError(w, http.StatusNotFound, "Job not found")
-return
-}
-if err != nil {
-s.respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to query job: %v", err))
-return
-}
if startedAt.Valid {
job.StartedAt = &startedAt.Time
}
@@ -1225,7 +1228,7 @@ func (s *Manager) handleDownloadFileForRunner(w http.ResponseWriter, r *http.Req
// Set headers
w.Header().Set("Content-Type", contentType)
-w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=%s", decodedFileName))
+w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=%q", decodedFileName))
// Stream file
io.Copy(w, file)
@@ -1368,7 +1371,7 @@ func (s *Manager) handleRunnerJobWebSocket(w http.ResponseWriter, r *http.Reques
log.Printf("Job WebSocket disconnected unexpectedly for task %d, marking as failed", taskID)
s.db.With(func(conn *sql.DB) error {
_, err := conn.Exec(
-`UPDATE tasks SET status = ?, error_message = ?, completed_at = ? WHERE id = ?`,
+`UPDATE tasks SET status = ?, runner_id = NULL, error_message = ?, completed_at = ? WHERE id = ?`,
types.TaskStatusFailed, "WebSocket connection lost", time.Now(), taskID,
)
return err
@@ -1473,70 +1476,49 @@ func (s *Manager) handleRunnerJobWebSocket(w http.ResponseWriter, r *http.Reques
}
}
case "runner_heartbeat":
-// Lookup runner ID from job's assigned_runner_id
-var assignedRunnerID sql.NullInt64
-err := s.db.With(func(db *sql.DB) error {
-return db.QueryRow(
-"SELECT assigned_runner_id FROM jobs WHERE id = ?",
-jobID,
-).Scan(&assignedRunnerID)
-})
-if err != nil {
-log.Printf("Failed to lookup runner for job %d heartbeat: %v", jobID, err)
-// Send error response
-response := map[string]interface{}{
-"type": "error",
-"message": "Failed to process heartbeat",
-}
-s.sendWebSocketMessage(conn, response)
-continue
-}
-if !assignedRunnerID.Valid {
-log.Printf("Job %d has no assigned runner, skipping heartbeat update", jobID)
-// Send acknowledgment but no database update
-response := map[string]interface{}{
-"type": "heartbeat_ack",
-"timestamp": time.Now().Unix(),
-"message": "No assigned runner for this job",
-}
-s.sendWebSocketMessage(conn, response)
-continue
-}
-runnerID := assignedRunnerID.Int64
-// Update runner heartbeat
-err = s.db.With(func(db *sql.DB) error {
-_, err := db.Exec(
-"UPDATE runners SET last_heartbeat = ?, status = ? WHERE id = ?",
-time.Now(), types.RunnerStatusOnline, runnerID,
-)
-return err
-})
-if err != nil {
-log.Printf("Failed to update runner %d heartbeat for job %d: %v", runnerID, jobID, err)
-// Send error response
-response := map[string]interface{}{
-"type": "error",
-"message": "Failed to update heartbeat",
-}
-s.sendWebSocketMessage(conn, response)
-continue
-}
-// Send acknowledgment
-response := map[string]interface{}{
-"type": "heartbeat_ack",
-"timestamp": time.Now().Unix(),
-}
-s.sendWebSocketMessage(conn, response)
+s.handleWSRunnerHeartbeat(conn, jobID)
continue
}
}
}
+// handleWSRunnerHeartbeat processes a runner heartbeat received over a job WebSocket.
+func (s *Manager) handleWSRunnerHeartbeat(conn *websocket.Conn, jobID int64) {
+var assignedRunnerID sql.NullInt64
+err := s.db.With(func(db *sql.DB) error {
+return db.QueryRow(
+"SELECT assigned_runner_id FROM jobs WHERE id = ?", jobID,
+).Scan(&assignedRunnerID)
+})
+if err != nil {
+log.Printf("Failed to lookup runner for job %d heartbeat: %v", jobID, err)
+s.sendWebSocketMessage(conn, map[string]interface{}{"type": "error", "message": "Failed to process heartbeat"})
+return
+}
+if !assignedRunnerID.Valid {
+log.Printf("Job %d has no assigned runner, skipping heartbeat update", jobID)
+s.sendWebSocketMessage(conn, map[string]interface{}{"type": "heartbeat_ack", "timestamp": time.Now().Unix(), "message": "No assigned runner for this job"})
+return
+}
+runnerID := assignedRunnerID.Int64
+err = s.db.With(func(db *sql.DB) error {
+_, err := db.Exec(
+"UPDATE runners SET last_heartbeat = ?, status = ? WHERE id = ?",
+time.Now(), types.RunnerStatusOnline, runnerID,
+)
+return err
+})
+if err != nil {
+log.Printf("Failed to update runner %d heartbeat for job %d: %v", runnerID, jobID, err)
+s.sendWebSocketMessage(conn, map[string]interface{}{"type": "error", "message": "Failed to update heartbeat"})
+return
+}
+s.sendWebSocketMessage(conn, map[string]interface{}{"type": "heartbeat_ack", "timestamp": time.Now().Unix()})
+}
// handleWebSocketLog handles log entries from WebSocket
func (s *Manager) handleWebSocketLog(runnerID int64, logEntry WSLogEntry) {
// Store log in database
@@ -1683,11 +1665,10 @@ func (s *Manager) handleWebSocketTaskComplete(runnerID int64, taskUpdate WSTaskU
} else {
// No retries remaining - mark as failed
err = s.db.WithTx(func(tx *sql.Tx) error {
-_, err := tx.Exec(`UPDATE tasks SET status = ? WHERE id = ?`, types.TaskStatusFailed, taskUpdate.TaskID)
-if err != nil {
-return err
-}
-_, err = tx.Exec(`UPDATE tasks SET completed_at = ? WHERE id = ?`, now, taskUpdate.TaskID)
+_, err := tx.Exec(
+`UPDATE tasks SET status = ?, runner_id = NULL, completed_at = ? WHERE id = ?`,
+types.TaskStatusFailed, now, taskUpdate.TaskID,
+)
if err != nil {
return err
}
@@ -1854,7 +1835,7 @@ func (s *Manager) cancelActiveTasksForJob(jobID int64) error {
// Tasks don't have a cancelled status - mark them as failed instead
err := s.db.With(func(conn *sql.DB) error {
_, err := conn.Exec(
-`UPDATE tasks SET status = ?, error_message = ? WHERE job_id = ? AND status IN (?, ?)`,
+`UPDATE tasks SET status = ?, runner_id = NULL, error_message = ? WHERE job_id = ? AND status IN (?, ?)`,
types.TaskStatusFailed, "Job cancelled", jobID, types.TaskStatusPending, types.TaskStatusRunning,
)
if err != nil {
@@ -1920,227 +1901,252 @@ func (s *Manager) evaluateTaskCondition(taskID int64, jobID int64, conditionJSON
}
}
-// updateJobStatusFromTasks updates job status and progress based on task states
-func (s *Manager) updateJobStatusFromTasks(jobID int64) {
-now := time.Now()
-// All jobs now use parallel runners (one task per frame), so we always use task-based progress
-// Get current job status to detect changes
-var currentStatus string
-err := s.db.With(func(conn *sql.DB) error {
-return conn.QueryRow(`SELECT status FROM jobs WHERE id = ?`, jobID).Scan(&currentStatus)
-})
-if err != nil {
-log.Printf("Failed to get current job status for job %d: %v", jobID, err)
-return
-}
-// Count total tasks and completed tasks
-var totalTasks, completedTasks int
-err = s.db.With(func(conn *sql.DB) error {
-err := conn.QueryRow(
-`SELECT COUNT(*) FROM tasks WHERE job_id = ? AND status IN (?, ?, ?, ?)`,
-jobID, types.TaskStatusPending, types.TaskStatusRunning, types.TaskStatusCompleted, types.TaskStatusFailed,
-).Scan(&totalTasks)
-if err != nil {
-return err
-}
-return conn.QueryRow(
-`SELECT COUNT(*) FROM tasks WHERE job_id = ? AND status = ?`,
-jobID, types.TaskStatusCompleted,
-).Scan(&completedTasks)
-})
-if err != nil {
-log.Printf("Failed to count completed tasks for job %d: %v", jobID, err)
-return
-}
-// Calculate progress
-var progress float64
-if totalTasks == 0 {
-// All tasks cancelled or no tasks, set progress to 0
-progress = 0.0
-} else {
-// Standard task-based progress
-progress = float64(completedTasks) / float64(totalTasks) * 100.0
-}
-var jobStatus string
-// Check if all non-cancelled tasks are completed
-var pendingOrRunningTasks int
-err = s.db.With(func(conn *sql.DB) error {
-return conn.QueryRow(
-`SELECT COUNT(*) FROM tasks
-WHERE job_id = ? AND status IN (?, ?)`,
-jobID, types.TaskStatusPending, types.TaskStatusRunning,
-).Scan(&pendingOrRunningTasks)
-})
-if err != nil {
-log.Printf("Failed to count pending/running tasks for job %d: %v", jobID, err)
-return
-}
-if pendingOrRunningTasks == 0 && totalTasks > 0 {
-// All tasks are either completed or failed/cancelled
-// Check if any tasks failed
-var failedTasks int
-s.db.With(func(conn *sql.DB) error {
-conn.QueryRow(
-`SELECT COUNT(*) FROM tasks WHERE job_id = ? AND status = ?`,
-jobID, types.TaskStatusFailed,
-).Scan(&failedTasks)
-return nil
-})
-if failedTasks > 0 {
-// Some tasks failed - check if job has retries left
-var retryCount, maxRetries int
-err := s.db.With(func(conn *sql.DB) error {
-return conn.QueryRow(
-`SELECT retry_count, max_retries FROM jobs WHERE id = ?`,
-jobID,
-).Scan(&retryCount, &maxRetries)
-})
-if err != nil {
-log.Printf("Failed to get retry info for job %d: %v", jobID, err)
-// Fall back to marking job as failed
-jobStatus = string(types.JobStatusFailed)
-} else if retryCount < maxRetries {
-// Job has retries left - reset failed tasks and redistribute
-if err := s.resetFailedTasksAndRedistribute(jobID); err != nil {
-log.Printf("Failed to reset failed tasks for job %d: %v", jobID, err)
-// If reset fails, mark job as failed
-jobStatus = string(types.JobStatusFailed)
-} else {
-// Tasks reset successfully - job remains in running/pending state
-// Don't update job status, just update progress
-jobStatus = currentStatus // Keep current status
-// Recalculate progress after reset (failed tasks are now pending again)
-var newTotalTasks, newCompletedTasks int
-s.db.With(func(conn *sql.DB) error {
-conn.QueryRow(
-`SELECT COUNT(*) FROM tasks WHERE job_id = ? AND status IN (?, ?, ?, ?)`,
-jobID, types.TaskStatusPending, types.TaskStatusRunning, types.TaskStatusCompleted, types.TaskStatusFailed,
-).Scan(&newTotalTasks)
-conn.QueryRow(
-`SELECT COUNT(*) FROM tasks WHERE job_id = ? AND status = ?`,
-jobID, types.TaskStatusCompleted,
-).Scan(&newCompletedTasks)
-return nil
-})
-if newTotalTasks > 0 {
-progress = float64(newCompletedTasks) / float64(newTotalTasks) * 100.0
-}
-// Update progress only
-err := s.db.With(func(conn *sql.DB) error {
-_, err := conn.Exec(
-`UPDATE jobs SET progress = ? WHERE id = ?`,
-progress, jobID,
-)
-return err
-})
-if err != nil {
-log.Printf("Failed to update job %d progress: %v", jobID, err)
-} else {
-// Broadcast job update via WebSocket
-s.broadcastJobUpdate(jobID, "job_update", map[string]interface{}{
-"status": jobStatus,
-"progress": progress,
-})
-}
-return // Exit early since we've handled the retry
-}
-} else {
-// No retries left - mark job as failed and cancel active tasks
-jobStatus = string(types.JobStatusFailed)
-if err := s.cancelActiveTasksForJob(jobID); err != nil {
-log.Printf("Failed to cancel active tasks for job %d: %v", jobID, err)
-}
-}
-} else {
-// All tasks completed successfully
-jobStatus = string(types.JobStatusCompleted)
-progress = 100.0 // Ensure progress is 100% when all tasks complete
-}
-// Update job status (if we didn't return early from retry logic)
-if jobStatus != "" {
-err := s.db.With(func(conn *sql.DB) error {
-_, err := conn.Exec(
-`UPDATE jobs SET status = ?, progress = ?, completed_at = ? WHERE id = ?`,
-jobStatus, progress, now, jobID,
-)
-return err
-})
-if err != nil {
-log.Printf("Failed to update job %d status to %s: %v", jobID, jobStatus, err)
-} else {
-// Only log if status actually changed
-if currentStatus != jobStatus {
-log.Printf("Updated job %d status from %s to %s (progress: %.1f%%, completed tasks: %d/%d)", jobID, currentStatus, jobStatus, progress, completedTasks, totalTasks)
-}
-// Broadcast job update via WebSocket
-s.broadcastJobUpdate(jobID, "job_update", map[string]interface{}{
-"status": jobStatus,
-"progress": progress,
-"completed_at": now,
-})
-}
-}
-// Encode tasks are now created immediately when the job is created
-// with a condition that prevents assignment until all render tasks are completed.
-// No need to create them here anymore.
-} else {
-// Job has pending or running tasks - determine if it's running or still pending
-var runningTasks int
-s.db.With(func(conn *sql.DB) error {
-conn.QueryRow(
-`SELECT COUNT(*) FROM tasks WHERE job_id = ? AND status = ?`,
-jobID, types.TaskStatusRunning,
-).Scan(&runningTasks)
-return nil
-})
-if runningTasks > 0 {
-// Has running tasks - job is running
-jobStatus = string(types.JobStatusRunning)
-var startedAt sql.NullTime
-s.db.With(func(conn *sql.DB) error {
-conn.QueryRow(`SELECT started_at FROM jobs WHERE id = ?`, jobID).Scan(&startedAt)
-if !startedAt.Valid {
-conn.Exec(`UPDATE jobs SET started_at = ? WHERE id = ?`, now, jobID)
-}
-return nil
-})
-} else {
-// All tasks are pending - job is pending
-jobStatus = string(types.JobStatusPending)
-}
-err := s.db.With(func(conn *sql.DB) error {
-_, err := conn.Exec(
-`UPDATE jobs SET status = ?, progress = ? WHERE id = ?`,
-jobStatus, progress, jobID,
-)
-return err
-})
-if err != nil {
-log.Printf("Failed to update job %d status to %s: %v", jobID, jobStatus, err)
-} else {
-// Only log if status actually changed
-if currentStatus != jobStatus {
-log.Printf("Updated job %d status from %s to %s (progress: %.1f%%, completed: %d/%d, pending: %d, running: %d)", jobID, currentStatus, jobStatus, progress, completedTasks, totalTasks, pendingOrRunningTasks-runningTasks, runningTasks)
-}
-// Broadcast job update during execution (not just on completion)
-s.broadcastJobUpdate(jobID, "job_update", map[string]interface{}{
-"status": jobStatus,
-"progress": progress,
-})
-}
-}
-}
+// getJobStatusUpdateMutex returns the mutex for a specific jobID, creating it if needed.
+// This ensures serialized execution of updateJobStatusFromTasks per job to prevent race conditions.
+func (s *Manager) getJobStatusUpdateMutex(jobID int64) *sync.Mutex {
+s.jobStatusUpdateMuMu.Lock()
+defer s.jobStatusUpdateMuMu.Unlock()
+mu, exists := s.jobStatusUpdateMu[jobID]
+if !exists {
+mu = &sync.Mutex{}
+s.jobStatusUpdateMu[jobID] = mu
+}
+return mu
+}
+// cleanupJobStatusUpdateMutex removes the mutex for a jobID after it's no longer needed.
+// Should only be called when the job is in a final state (completed/failed) and no more updates are expected.
+func (s *Manager) cleanupJobStatusUpdateMutex(jobID int64) {
+s.jobStatusUpdateMuMu.Lock()
+defer s.jobStatusUpdateMuMu.Unlock()
+delete(s.jobStatusUpdateMu, jobID)
+}
+// updateJobStatusFromTasks updates job status and progress based on task states
+// This function is serialized per jobID to prevent race conditions when multiple tasks
+// complete concurrently and trigger status updates simultaneously.
+func (s *Manager) updateJobStatusFromTasks(jobID int64) {
+mu := s.getJobStatusUpdateMutex(jobID)
+mu.Lock()
+defer mu.Unlock()
+currentStatus, err := s.getJobStatus(jobID)
+if err != nil {
+log.Printf("Failed to get current job status for job %d: %v", jobID, err)
+return
+}
+if currentStatus == string(types.JobStatusCancelled) {
+return
+}
+counts, err := s.getJobTaskCounts(jobID)
+if err != nil {
+log.Printf("Failed to count tasks for job %d: %v", jobID, err)
+return
+}
+progress := counts.progress()
+if counts.pendingOrRunning == 0 && counts.total > 0 {
+s.handleAllTasksFinished(jobID, currentStatus, counts, progress)
+} else {
+s.handleTasksInProgress(jobID, currentStatus, counts, progress)
+}
+}
+// jobTaskCounts holds task state counts for a job.
+type jobTaskCounts struct {
+total int
+completed int
+pendingOrRunning int
+failed int
+running int
+}
+func (c *jobTaskCounts) progress() float64 {
+if c.total == 0 {
+return 0.0
+}
+return float64(c.completed) / float64(c.total) * 100.0
+}
+func (s *Manager) getJobStatus(jobID int64) (string, error) {
+var status string
+err := s.db.With(func(conn *sql.DB) error {
+return conn.QueryRow(`SELECT status FROM jobs WHERE id = ?`, jobID).Scan(&status)
+})
+return status, err
+}
+func (s *Manager) getJobTaskCounts(jobID int64) (*jobTaskCounts, error) {
+c := &jobTaskCounts{}
+err := s.db.With(func(conn *sql.DB) error {
+if err := conn.QueryRow(
+`SELECT COUNT(*) FROM tasks WHERE job_id = ? AND status IN (?, ?, ?, ?)`,
+jobID, types.TaskStatusPending, types.TaskStatusRunning, types.TaskStatusCompleted, types.TaskStatusFailed,
+).Scan(&c.total); err != nil {
+return err
+}
+if err := conn.QueryRow(
+`SELECT COUNT(*) FROM tasks WHERE job_id = ? AND status = ?`,
+jobID, types.TaskStatusCompleted,
+).Scan(&c.completed); err != nil {
+return err
+}
+if err := conn.QueryRow(
+`SELECT COUNT(*) FROM tasks WHERE job_id = ? AND status IN (?, ?)`,
+jobID, types.TaskStatusPending, types.TaskStatusRunning,
+).Scan(&c.pendingOrRunning); err != nil {
+return err
+}
+if err := conn.QueryRow(
+`SELECT COUNT(*) FROM tasks WHERE job_id = ? AND status = ?`,
+jobID, types.TaskStatusFailed,
+).Scan(&c.failed); err != nil {
+return err
+}
+if err := conn.QueryRow(
+`SELECT COUNT(*) FROM tasks WHERE job_id = ? AND status = ?`,
+jobID, types.TaskStatusRunning,
+).Scan(&c.running); err != nil {
+return err
+}
+return nil
+})
+return c, err
+}
+// handleAllTasksFinished handles the case where no pending/running tasks remain.
+func (s *Manager) handleAllTasksFinished(jobID int64, currentStatus string, counts *jobTaskCounts, progress float64) {
+now := time.Now()
+var jobStatus string
+if counts.failed > 0 {
+jobStatus = s.handleFailedTasks(jobID, currentStatus, &progress)
+if jobStatus == "" {
+return // retry handled; early exit
+}
+} else {
+jobStatus = string(types.JobStatusCompleted)
+progress = 100.0
+}
+s.setJobFinalStatus(jobID, currentStatus, jobStatus, progress, now, counts)
+}
+// handleFailedTasks decides whether to retry or mark the job failed.
+// Returns "" if a retry was triggered (caller should return early),
+// or the final status string.
+func (s *Manager) handleFailedTasks(jobID int64, currentStatus string, progress *float64) string {
+var retryCount, maxRetries int
+err := s.db.With(func(conn *sql.DB) error {
+return conn.QueryRow(
+`SELECT retry_count, max_retries FROM jobs WHERE id = ?`, jobID,
+).Scan(&retryCount, &maxRetries)
+})
+if err != nil {
+log.Printf("Failed to get retry info for job %d: %v", jobID, err)
+return string(types.JobStatusFailed)
+}
+if retryCount < maxRetries {
+if err := s.resetFailedTasksAndRedistribute(jobID); err != nil {
+log.Printf("Failed to reset failed tasks for job %d: %v", jobID, err)
+return string(types.JobStatusFailed)
+}
+// Recalculate progress after reset
+counts, err := s.getJobTaskCounts(jobID)
+if err == nil && counts.total > 0 {
+*progress = counts.progress()
+}
+err = s.db.With(func(conn *sql.DB) error {
+_, err := conn.Exec(`UPDATE jobs SET progress = ? WHERE id = ?`, *progress, jobID)
+return err
+})
+if err != nil {
+log.Printf("Failed to update job %d progress: %v", jobID, err)
+} else {
+s.broadcastJobUpdate(jobID, "job_update", map[string]interface{}{
+"status": currentStatus,
+"progress": *progress,
+})
+}
+return "" // retry handled
+}
+// No retries left
+if err := s.cancelActiveTasksForJob(jobID); err != nil {
+log.Printf("Failed to cancel active tasks for job %d: %v", jobID, err)
+}
+return string(types.JobStatusFailed)
+}
+// setJobFinalStatus persists the terminal job status and broadcasts the update.
+func (s *Manager) setJobFinalStatus(jobID int64, currentStatus, jobStatus string, progress float64, now time.Time, counts *jobTaskCounts) {
+err := s.db.With(func(conn *sql.DB) error {
+_, err := conn.Exec(
+`UPDATE jobs SET status = ?, progress = ?, completed_at = ? WHERE id = ?`,
+jobStatus, progress, now, jobID,
+)
+return err
+})
+if err != nil {
+log.Printf("Failed to update job %d status to %s: %v", jobID, jobStatus, err)
+return
+}
+if currentStatus != jobStatus {
+log.Printf("Updated job %d status from %s to %s (progress: %.1f%%, completed tasks: %d/%d)", jobID, currentStatus, jobStatus, progress, counts.completed, counts.total)
+}
+s.broadcastJobUpdate(jobID, "job_update", map[string]interface{}{
+"status": jobStatus,
+"progress": progress,
+"completed_at": now,
+})
+if jobStatus == string(types.JobStatusCompleted) || jobStatus == string(types.JobStatusFailed) {
+s.cleanupJobStatusUpdateMutex(jobID)
+}
+}
+// handleTasksInProgress handles the case where tasks are still pending or running.
+func (s *Manager) handleTasksInProgress(jobID int64, currentStatus string, counts *jobTaskCounts, progress float64) {
+now := time.Now()
+var jobStatus string
+if counts.running > 0 {
+jobStatus = string(types.JobStatusRunning)
+s.db.With(func(conn *sql.DB) error {
+var startedAt sql.NullTime
+conn.QueryRow(`SELECT started_at FROM jobs WHERE id = ?`, jobID).Scan(&startedAt)
+if !startedAt.Valid {
+conn.Exec(`UPDATE jobs SET started_at = ? WHERE id = ?`, now, jobID)
+}
+return nil
+})
+} else {
+jobStatus = string(types.JobStatusPending)
+}
+err := s.db.With(func(conn *sql.DB) error {
+_, err := conn.Exec(
+`UPDATE jobs SET status = ?, progress = ? WHERE id = ?`,
+jobStatus, progress, jobID,
+)
+return err
+})
+if err != nil {
+log.Printf("Failed to update job %d status to %s: %v", jobID, jobStatus, err)
+return
+}
+if currentStatus != jobStatus {
+pending := counts.pendingOrRunning - counts.running
+log.Printf("Updated job %d status from %s to %s (progress: %.1f%%, completed: %d/%d, pending: %d, running: %d)", jobID, currentStatus, jobStatus, progress, counts.completed, counts.total, pending, counts.running)
+}
+s.broadcastJobUpdate(jobID, "job_update", map[string]interface{}{
+"status": jobStatus,
+"progress": progress,
+})
+}
// broadcastLogToFrontend broadcasts log to connected frontend clients


@@ -0,0 +1,21 @@
package api
import "testing"
func TestParseBlenderFrame(t *testing.T) {
frame, ok := parseBlenderFrame("Info Fra:2470 Mem:12.00M")
if !ok || frame != 2470 {
t.Fatalf("parseBlenderFrame() = (%d,%v), want (2470,true)", frame, ok)
}
if _, ok := parseBlenderFrame("no frame here"); ok {
t.Fatal("expected parse to fail for non-frame text")
}
}
func TestJobTaskCounts_Progress(t *testing.T) {
c := &jobTaskCounts{total: 10, completed: 4}
if got := c.progress(); got != 40 {
t.Fatalf("progress() = %v, want 40", got)
}
}

internal/manager/ui.go Normal file

@@ -0,0 +1,556 @@
package api
import (
"database/sql"
"fmt"
"net/http"
"strconv"
"strings"
"time"
authpkg "jiggablend/internal/auth"
"github.com/go-chi/chi/v5"
)
type uiJobSummary struct {
ID int64
Name string
Status string
Progress float64
FrameStart *int
FrameEnd *int
OutputFormat *string
CreatedAt time.Time
}
type uiTaskSummary struct {
ID int64
TaskType string
Status string
Frame int
FrameEnd *int
CurrentStep string
RetryCount int
Error string
StartedAt *time.Time
CompletedAt *time.Time
}
type uiFileSummary struct {
ID int64
FileName string
FileType string
FileSize int64
CreatedAt time.Time
}
func (s *Manager) setupUIRoutes() {
s.router.Get("/", s.handleUIRoot)
s.router.Get("/login", s.handleUILoginPage)
s.router.Post("/logout", s.handleUILogout)
s.router.Group(func(r chi.Router) {
r.Use(func(next http.Handler) http.Handler {
return http.HandlerFunc(s.auth.Middleware(next.ServeHTTP))
})
r.Get("/jobs", s.handleUIJobsPage)
r.Get("/jobs/new", s.handleUINewJobPage)
r.Get("/jobs/{id}", s.handleUIJobDetailPage)
r.Get("/ui/fragments/jobs", s.handleUIJobsFragment)
r.Get("/ui/fragments/jobs/{id}/tasks", s.handleUIJobTasksFragment)
r.Get("/ui/fragments/jobs/{id}/files", s.handleUIJobFilesFragment)
})
s.router.Group(func(r chi.Router) {
r.Use(func(next http.Handler) http.Handler {
return http.HandlerFunc(s.auth.AdminMiddleware(next.ServeHTTP))
})
r.Get("/admin", s.handleUIAdminPage)
r.Get("/ui/fragments/admin/runners", s.handleUIAdminRunnersFragment)
r.Get("/ui/fragments/admin/users", s.handleUIAdminUsersFragment)
r.Get("/ui/fragments/admin/apikeys", s.handleUIAdminAPIKeysFragment)
})
}
func (s *Manager) sessionFromRequest(r *http.Request) (*authpkg.Session, bool) {
cookie, err := r.Cookie("session_id")
if err != nil {
return nil, false
}
return s.auth.GetSession(cookie.Value)
}
func (s *Manager) handleUIRoot(w http.ResponseWriter, r *http.Request) {
if _, ok := s.sessionFromRequest(r); ok {
http.Redirect(w, r, "/jobs", http.StatusFound)
return
}
http.Redirect(w, r, "/login", http.StatusFound)
}
func (s *Manager) handleUILoginPage(w http.ResponseWriter, r *http.Request) {
if _, ok := s.sessionFromRequest(r); ok {
http.Redirect(w, r, "/jobs", http.StatusFound)
return
}
s.ui.render(w, pageData{
Title: "Login",
CurrentPath: "/login",
ContentTemplate: "page_login",
PageScript: "/assets/login.js",
Data: map[string]interface{}{
"google_enabled": s.auth.IsGoogleOAuthConfigured(),
"discord_enabled": s.auth.IsDiscordOAuthConfigured(),
"local_enabled": s.auth.IsLocalLoginEnabled(),
"error": r.URL.Query().Get("error"),
},
})
}
func (s *Manager) handleUILogout(w http.ResponseWriter, r *http.Request) {
cookie, err := r.Cookie("session_id")
if err == nil {
s.auth.DeleteSession(cookie.Value)
}
expired := &http.Cookie{
Name: "session_id",
Value: "",
Path: "/",
MaxAge: -1,
HttpOnly: true,
SameSite: http.SameSiteLaxMode,
}
if s.cfg.IsProductionMode() {
expired.Secure = true
}
http.SetCookie(w, expired)
http.Redirect(w, r, "/login", http.StatusFound)
}
func (s *Manager) handleUIJobsPage(w http.ResponseWriter, r *http.Request) {
user, _ := s.sessionFromRequest(r)
s.ui.render(w, pageData{
Title: "Jobs",
CurrentPath: "/jobs",
ContentTemplate: "page_jobs",
PageScript: "/assets/jobs.js",
User: user,
})
}
func (s *Manager) handleUINewJobPage(w http.ResponseWriter, r *http.Request) {
user, _ := s.sessionFromRequest(r)
s.ui.render(w, pageData{
Title: "New Job",
CurrentPath: "/jobs/new",
ContentTemplate: "page_jobs_new",
PageScript: "/assets/job_new.js",
User: user,
})
}
func (s *Manager) handleUIJobDetailPage(w http.ResponseWriter, r *http.Request) {
userID, err := getUserID(r)
if err != nil {
http.Redirect(w, r, "/login", http.StatusFound)
return
}
isAdmin := authpkg.IsAdmin(r.Context())
jobID, err := parseID(r, "id")
if err != nil {
http.NotFound(w, r)
return
}
job, err := s.getUIJob(jobID, userID, isAdmin)
if err != nil {
http.NotFound(w, r)
return
}
user, _ := s.sessionFromRequest(r)
s.ui.render(w, pageData{
Title: fmt.Sprintf("Job %d", jobID),
CurrentPath: "/jobs",
ContentTemplate: "page_job_show",
PageScript: "/assets/job_show.js",
User: user,
Data: map[string]interface{}{"job": job},
})
}
func (s *Manager) handleUIAdminPage(w http.ResponseWriter, r *http.Request) {
user, _ := s.sessionFromRequest(r)
regEnabled, _ := s.auth.IsRegistrationEnabled()
s.ui.render(w, pageData{
Title: "Admin",
CurrentPath: "/admin",
ContentTemplate: "page_admin",
PageScript: "/assets/admin.js",
User: user,
Data: map[string]interface{}{
"registration_enabled": regEnabled,
},
})
}
func (s *Manager) handleUIJobsFragment(w http.ResponseWriter, r *http.Request) {
userID, err := getUserID(r)
if err != nil {
http.Error(w, "unauthorized", http.StatusUnauthorized)
return
}
jobs, err := s.listUIJobSummaries(userID, 50, 0)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
s.ui.renderTemplate(w, "partial_jobs_table", map[string]interface{}{"jobs": jobs})
}
func (s *Manager) handleUIJobTasksFragment(w http.ResponseWriter, r *http.Request) {
userID, err := getUserID(r)
if err != nil {
http.Error(w, "unauthorized", http.StatusUnauthorized)
return
}
isAdmin := authpkg.IsAdmin(r.Context())
jobID, err := parseID(r, "id")
if err != nil {
http.Error(w, "invalid job id", http.StatusBadRequest)
return
}
if _, err := s.getUIJob(jobID, userID, isAdmin); err != nil {
http.Error(w, "job not found", http.StatusNotFound)
return
}
tasks, err := s.listUITasks(jobID)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
s.ui.renderTemplate(w, "partial_job_tasks", map[string]interface{}{
"job_id": jobID,
"tasks": tasks,
})
}
func (s *Manager) handleUIJobFilesFragment(w http.ResponseWriter, r *http.Request) {
userID, err := getUserID(r)
if err != nil {
http.Error(w, "unauthorized", http.StatusUnauthorized)
return
}
isAdmin := authpkg.IsAdmin(r.Context())
jobID, err := parseID(r, "id")
if err != nil {
http.Error(w, "invalid job id", http.StatusBadRequest)
return
}
if _, err := s.getUIJob(jobID, userID, isAdmin); err != nil {
http.Error(w, "job not found", http.StatusNotFound)
return
}
files, err := s.listUIFiles(jobID, 100)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
outputFiles := make([]uiFileSummary, 0, len(files))
adminInputFiles := make([]uiFileSummary, 0)
for _, file := range files {
if strings.EqualFold(file.FileType, "output") {
outputFiles = append(outputFiles, file)
continue
}
if isAdmin {
adminInputFiles = append(adminInputFiles, file)
}
}
s.ui.renderTemplate(w, "partial_job_files", map[string]interface{}{
"job_id": jobID,
"files": outputFiles,
"is_admin": isAdmin,
"admin_input_files": adminInputFiles,
})
}
func (s *Manager) handleUIAdminRunnersFragment(w http.ResponseWriter, r *http.Request) {
var rows *sql.Rows
err := s.db.With(func(conn *sql.DB) error {
var qErr error
rows, qErr = conn.Query(`SELECT id, name, hostname, status, last_heartbeat, priority, created_at FROM runners ORDER BY created_at DESC`)
return qErr
})
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer rows.Close()
type runner struct {
ID int64
Name string
Hostname string
Status string
LastHeartbeat time.Time
Priority int
CreatedAt time.Time
}
all := make([]runner, 0)
for rows.Next() {
var item runner
if scanErr := rows.Scan(&item.ID, &item.Name, &item.Hostname, &item.Status, &item.LastHeartbeat, &item.Priority, &item.CreatedAt); scanErr != nil {
http.Error(w, scanErr.Error(), http.StatusInternalServerError)
return
}
all = append(all, item)
}
s.ui.renderTemplate(w, "partial_admin_runners", map[string]interface{}{"runners": all})
}
func (s *Manager) handleUIAdminUsersFragment(w http.ResponseWriter, r *http.Request) {
currentUserID, _ := getUserID(r)
firstUserID, _ := s.auth.GetFirstUserID()
var rows *sql.Rows
err := s.db.With(func(conn *sql.DB) error {
var qErr error
rows, qErr = conn.Query(`SELECT id, email, name, oauth_provider, is_admin, created_at FROM users ORDER BY created_at DESC`)
return qErr
})
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer rows.Close()
type user struct {
ID int64
Email string
Name string
OAuthProvider string
IsAdmin bool
IsFirstUser bool
CreatedAt time.Time
}
all := make([]user, 0)
for rows.Next() {
var item user
if scanErr := rows.Scan(&item.ID, &item.Email, &item.Name, &item.OAuthProvider, &item.IsAdmin, &item.CreatedAt); scanErr != nil {
http.Error(w, scanErr.Error(), http.StatusInternalServerError)
return
}
item.IsFirstUser = item.ID == firstUserID
all = append(all, item)
}
s.ui.renderTemplate(w, "partial_admin_users", map[string]interface{}{
"users": all,
"current_user_id": currentUserID,
})
}
func (s *Manager) handleUIAdminAPIKeysFragment(w http.ResponseWriter, r *http.Request) {
keys, err := s.secrets.ListRunnerAPIKeys()
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
type item struct {
ID int64
Name string
Scope string
Key string
IsActive bool
CreatedAt time.Time
}
out := make([]item, 0, len(keys))
for _, key := range keys {
out = append(out, item{
ID: key.ID,
Name: key.Name,
Scope: key.Scope,
Key: key.Key,
IsActive: key.IsActive,
CreatedAt: key.CreatedAt,
})
}
s.ui.renderTemplate(w, "partial_admin_apikeys", map[string]interface{}{"keys": out})
}
func (s *Manager) listUIJobSummaries(userID int64, limit int, offset int) ([]uiJobSummary, error) {
var rows *sql.Rows
err := s.db.With(func(conn *sql.DB) error {
var qErr error
rows, qErr = conn.Query(
`SELECT id, name, status, progress, frame_start, frame_end, output_format, created_at
FROM jobs WHERE user_id = ? ORDER BY created_at DESC LIMIT ? OFFSET ?`,
userID, limit, offset,
)
return qErr
})
if err != nil {
return nil, err
}
defer rows.Close()
out := make([]uiJobSummary, 0)
for rows.Next() {
var item uiJobSummary
var frameStart, frameEnd sql.NullInt64
var outputFormat sql.NullString
if scanErr := rows.Scan(&item.ID, &item.Name, &item.Status, &item.Progress, &frameStart, &frameEnd, &outputFormat, &item.CreatedAt); scanErr != nil {
return nil, scanErr
}
if frameStart.Valid {
v := int(frameStart.Int64)
item.FrameStart = &v
}
if frameEnd.Valid {
v := int(frameEnd.Int64)
item.FrameEnd = &v
}
if outputFormat.Valid {
item.OutputFormat = &outputFormat.String
}
out = append(out, item)
}
return out, nil
}
func (s *Manager) getUIJob(jobID int64, userID int64, isAdmin bool) (uiJobSummary, error) {
var item uiJobSummary
var frameStart, frameEnd sql.NullInt64
var outputFormat sql.NullString
err := s.db.With(func(conn *sql.DB) error {
if isAdmin {
return conn.QueryRow(
`SELECT id, name, status, progress, frame_start, frame_end, output_format, created_at
FROM jobs WHERE id = ?`,
jobID,
).Scan(&item.ID, &item.Name, &item.Status, &item.Progress, &frameStart, &frameEnd, &outputFormat, &item.CreatedAt)
}
return conn.QueryRow(
`SELECT id, name, status, progress, frame_start, frame_end, output_format, created_at
FROM jobs WHERE id = ? AND user_id = ?`,
jobID, userID,
).Scan(&item.ID, &item.Name, &item.Status, &item.Progress, &frameStart, &frameEnd, &outputFormat, &item.CreatedAt)
})
if err != nil {
return uiJobSummary{}, err
}
if frameStart.Valid {
v := int(frameStart.Int64)
item.FrameStart = &v
}
if frameEnd.Valid {
v := int(frameEnd.Int64)
item.FrameEnd = &v
}
if outputFormat.Valid {
item.OutputFormat = &outputFormat.String
}
return item, nil
}
func (s *Manager) listUITasks(jobID int64) ([]uiTaskSummary, error) {
var rows *sql.Rows
err := s.db.With(func(conn *sql.DB) error {
var qErr error
rows, qErr = conn.Query(
`SELECT id, task_type, status, frame, frame_end, current_step, retry_count, error_message, started_at, completed_at
FROM tasks WHERE job_id = ? ORDER BY id ASC`,
jobID,
)
return qErr
})
if err != nil {
return nil, err
}
defer rows.Close()
out := make([]uiTaskSummary, 0)
for rows.Next() {
var item uiTaskSummary
var frameEnd sql.NullInt64
var currentStep sql.NullString
var errMsg sql.NullString
var startedAt, completedAt sql.NullTime
if scanErr := rows.Scan(
&item.ID, &item.TaskType, &item.Status, &item.Frame, &frameEnd,
&currentStep, &item.RetryCount, &errMsg, &startedAt, &completedAt,
); scanErr != nil {
return nil, scanErr
}
if frameEnd.Valid {
v := int(frameEnd.Int64)
item.FrameEnd = &v
}
if currentStep.Valid {
item.CurrentStep = currentStep.String
}
if errMsg.Valid {
item.Error = errMsg.String
}
if startedAt.Valid {
item.StartedAt = &startedAt.Time
}
if completedAt.Valid {
item.CompletedAt = &completedAt.Time
}
out = append(out, item)
}
return out, nil
}
func (s *Manager) listUIFiles(jobID int64, limit int) ([]uiFileSummary, error) {
var rows *sql.Rows
err := s.db.With(func(conn *sql.DB) error {
var qErr error
rows, qErr = conn.Query(
`SELECT id, file_name, file_type, file_size, created_at
FROM job_files WHERE job_id = ? ORDER BY created_at DESC LIMIT ?`,
jobID, limit,
)
return qErr
})
if err != nil {
return nil, err
}
defer rows.Close()
out := make([]uiFileSummary, 0)
for rows.Next() {
var item uiFileSummary
if scanErr := rows.Scan(&item.ID, &item.FileName, &item.FileType, &item.FileSize, &item.CreatedAt); scanErr != nil {
return nil, scanErr
}
out = append(out, item)
}
return out, nil
}
func parseBoolForm(r *http.Request, key string) bool {
v := strings.TrimSpace(strings.ToLower(r.FormValue(key)))
return v == "1" || v == "true" || v == "on" || v == "yes"
}
func parseIntQuery(r *http.Request, key string, fallback int) int {
raw := strings.TrimSpace(r.URL.Query().Get(key))
if raw == "" {
return fallback
}
v, err := strconv.Atoi(raw)
if err != nil || v < 0 {
return fallback
}
return v
}
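These two helpers are small enough to exercise in isolation; below is a minimal standalone sketch of the same parsing rules (the function names here are illustrative, not part of the package):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// boolFromForm mirrors parseBoolForm: accepts the usual HTML-form truthy spellings.
func boolFromForm(v string) bool {
	v = strings.TrimSpace(strings.ToLower(v))
	return v == "1" || v == "true" || v == "on" || v == "yes"
}

// intFromQuery mirrors parseIntQuery: empty, non-numeric, or negative values
// fall back to the provided default.
func intFromQuery(raw string, fallback int) int {
	raw = strings.TrimSpace(raw)
	if raw == "" {
		return fallback
	}
	v, err := strconv.Atoi(raw)
	if err != nil || v < 0 {
		return fallback
	}
	return v
}

func main() {
	fmt.Println(boolFromForm("On"))     // true
	fmt.Println(boolFromForm("no"))     // false
	fmt.Println(intFromQuery("42", 10)) // 42
	fmt.Println(intFromQuery("-1", 10)) // 10
}
```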

View File

@@ -0,0 +1,37 @@
package api
import (
"net/http/httptest"
"testing"
)
func TestParseBoolForm(t *testing.T) {
req := httptest.NewRequest("POST", "/?flag=true", nil)
req.ParseForm()
req.Form.Set("enabled", "true")
if !parseBoolForm(req, "enabled") {
t.Fatalf("expected true for enabled=true")
}
req.Form.Set("enabled", "no")
if parseBoolForm(req, "enabled") {
t.Fatalf("expected false for enabled=no")
}
}
func TestParseIntQuery(t *testing.T) {
req := httptest.NewRequest("GET", "/?limit=42", nil)
if got := parseIntQuery(req, "limit", 10); got != 42 {
t.Fatalf("expected 42, got %d", got)
}
req = httptest.NewRequest("GET", "/?limit=-1", nil)
if got := parseIntQuery(req, "limit", 10); got != 10 {
t.Fatalf("expected fallback 10, got %d", got)
}
req = httptest.NewRequest("GET", "/?limit=abc", nil)
if got := parseIntQuery(req, "limit", 10); got != 10 {
t.Fatalf("expected fallback 10, got %d", got)
}
}

View File

@@ -0,0 +1,44 @@
package api
import (
"net/http"
"net/http/httptest"
"strings"
"testing"
"time"
"github.com/gorilla/websocket"
)
func TestJobConnection_ConnectAndClose(t *testing.T) {
upgrader := websocket.Upgrader{}
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
conn, err := upgrader.Upgrade(w, r, nil)
if err != nil {
return
}
defer conn.Close()
var msg map[string]interface{}
if err := conn.ReadJSON(&msg); err != nil {
return
}
if msg["type"] == "auth" {
_ = conn.WriteJSON(map[string]string{"type": "auth_ok"})
}
// Keep open briefly so client can mark connected.
time.Sleep(100 * time.Millisecond)
}))
defer server.Close()
jc := NewJobConnection()
	managerURL := strings.Replace(server.URL, "http://", "ws://", 1)
if err := jc.Connect(managerURL, "/job/1", "token123"); err != nil {
t.Fatalf("Connect failed: %v", err)
}
if !jc.IsConnected() {
t.Fatal("expected connection to be marked connected")
}
jc.Close()
}

View File

@@ -196,7 +196,8 @@ type NextJobTaskInfo struct {
 	TaskID   int64  `json:"task_id"`
 	JobID    int64  `json:"job_id"`
 	JobName  string `json:"job_name"`
-	Frame    int    `json:"frame"`
+	Frame    int    `json:"frame"`     // frame start (inclusive)
+	FrameEnd int    `json:"frame_end"` // frame end (inclusive); same as Frame for single-frame
 	TaskType string `json:"task_type"`
 	Metadata *types.BlendMetadata `json:"metadata,omitempty"`
 }
@@ -240,8 +241,8 @@ func (m *ManagerClient) DownloadContext(contextPath, jobToken string) (io.ReadCl
 	}
 	if resp.StatusCode != http.StatusOK {
+		defer resp.Body.Close()
 		body, _ := io.ReadAll(resp.Body)
-		resp.Body.Close()
 		return nil, fmt.Errorf("context download failed with status %d: %s", resp.StatusCode, string(body))
 	}
@@ -315,6 +316,28 @@ func (m *ManagerClient) GetJobMetadata(jobID int64) (*types.BlendMetadata, error
	return &metadata, nil
}
// GetJobStatus retrieves the current status of a job.
func (m *ManagerClient) GetJobStatus(jobID int64) (types.JobStatus, error) {
path := fmt.Sprintf("/api/runner/jobs/%d/status?runner_id=%d", jobID, m.runnerID)
resp, err := m.Request("GET", path, nil)
if err != nil {
return "", err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
return "", fmt.Errorf("failed to get job status: %s", string(body))
}
var job types.Job
if err := json.NewDecoder(resp.Body).Decode(&job); err != nil {
return "", err
}
return job.Status, nil
}
// JobFile represents a file associated with a job.
type JobFile struct {
	ID int64 `json:"id"`
@@ -412,10 +435,39 @@ func (m *ManagerClient) DownloadBlender(version string) (io.ReadCloser, error) {
 	}
 	if resp.StatusCode != http.StatusOK {
+		defer resp.Body.Close()
 		body, _ := io.ReadAll(resp.Body)
-		resp.Body.Close()
 		return nil, fmt.Errorf("failed to download blender: status %d, body: %s", resp.StatusCode, string(body))
 	}
 	return resp.Body, nil
 }
// blenderVersionsResponse is the response from GET /api/blender/versions.
type blenderVersionsResponse struct {
Versions []struct {
Full string `json:"full"`
} `json:"versions"`
}
// GetLatestBlenderVersion returns the latest Blender version string (e.g. "4.2.3") from the manager.
// Uses the flat versions list which is newest-first.
func (m *ManagerClient) GetLatestBlenderVersion() (string, error) {
resp, err := m.Request("GET", "/api/blender/versions", nil)
if err != nil {
return "", fmt.Errorf("failed to fetch blender versions: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
return "", fmt.Errorf("blender versions returned status %d: %s", resp.StatusCode, string(body))
}
var out blenderVersionsResponse
if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
return "", fmt.Errorf("failed to decode blender versions: %w", err)
}
if len(out.Versions) == 0 {
return "", fmt.Errorf("no blender versions available")
}
return out.Versions[0].Full, nil
}

View File

@@ -0,0 +1,45 @@
package api
import (
"encoding/json"
"net/http"
"net/http/httptest"
"testing"
)
func TestNewManagerClient_TrimsTrailingSlash(t *testing.T) {
c := NewManagerClient("http://example.com/")
if c.GetBaseURL() != "http://example.com" {
t.Fatalf("unexpected base url: %q", c.GetBaseURL())
}
}
func TestDoRequest_SetsAuthorizationHeader(t *testing.T) {
var authHeader string
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
authHeader = r.Header.Get("Authorization")
_ = json.NewEncoder(w).Encode(map[string]bool{"ok": true})
}))
defer ts.Close()
c := NewManagerClient(ts.URL)
c.SetCredentials(1, "abc123")
resp, err := c.Request(http.MethodGet, "/x", nil)
if err != nil {
t.Fatalf("Request failed: %v", err)
}
defer resp.Body.Close()
if authHeader != "Bearer abc123" {
t.Fatalf("unexpected Authorization header: %q", authHeader)
}
}
func TestRequest_RequiresAuth(t *testing.T) {
c := NewManagerClient("http://example.com")
if _, err := c.Request(http.MethodGet, "/x", nil); err == nil {
t.Fatal("expected auth error when api key is missing")
}
}

View File

@@ -5,7 +5,9 @@ import (
 	"fmt"
 	"log"
 	"os"
+	"os/exec"
 	"path/filepath"
+	"strings"
 	"jiggablend/internal/runner/api"
 	"jiggablend/internal/runner/workspace"
@@ -43,8 +45,12 @@ func (m *Manager) GetBinaryPath(version string) (string, error) {
 	if binaryInfo, err := os.Stat(binaryPath); err == nil {
 		// Verify it's actually a file (not a directory)
 		if !binaryInfo.IsDir() {
-			log.Printf("Found existing Blender %s installation at %s", version, binaryPath)
-			return binaryPath, nil
+			absBinaryPath, err := ResolveBinaryPath(binaryPath)
+			if err != nil {
+				return "", err
+			}
+			log.Printf("Found existing Blender %s installation at %s", version, absBinaryPath)
+			return absBinaryPath, nil
 		}
 	}
 	// Version folder exists but binary is missing - might be incomplete installation
@@ -71,17 +77,86 @@ func (m *Manager) GetBinaryPath(version string) (string, error) {
 		return "", fmt.Errorf("blender binary not found after extraction")
 	}
-	log.Printf("Blender %s installed at %s", version, binaryPath)
-	return binaryPath, nil
+	absBinaryPath, err := ResolveBinaryPath(binaryPath)
+	if err != nil {
+		return "", err
+	}
+	log.Printf("Blender %s installed at %s", version, absBinaryPath)
+	return absBinaryPath, nil
 }

 // GetBinaryForJob returns the Blender binary path for a job.
 // Uses the version from metadata or falls back to system blender.
 func (m *Manager) GetBinaryForJob(version string) (string, error) {
 	if version == "" {
-		return "blender", nil // System blender
+		return ResolveBinaryPath("blender")
 	}
 	return m.GetBinaryPath(version)
 }
// ResolveBinaryPath resolves a Blender executable to an absolute path.
func ResolveBinaryPath(blenderBinary string) (string, error) {
if blenderBinary == "" {
return "", fmt.Errorf("blender binary path is empty")
}
if strings.Contains(blenderBinary, string(filepath.Separator)) {
absPath, err := filepath.Abs(blenderBinary)
if err != nil {
return "", fmt.Errorf("failed to resolve blender binary path %q: %w", blenderBinary, err)
}
return absPath, nil
}
resolvedPath, err := exec.LookPath(blenderBinary)
if err != nil {
return "", fmt.Errorf("failed to locate blender binary %q in PATH: %w", blenderBinary, err)
}
absPath, err := filepath.Abs(resolvedPath)
if err != nil {
return "", fmt.Errorf("failed to resolve blender binary path %q: %w", resolvedPath, err)
}
return absPath, nil
}
// TarballEnv returns a copy of baseEnv with LD_LIBRARY_PATH set so that a
// tarball Blender installation can find its bundled libs (e.g. lib/python3.x).
// If blenderBinary is the system "blender" or has no path component, baseEnv is
// returned unchanged.
func TarballEnv(blenderBinary string, baseEnv []string) []string {
if blenderBinary == "" || blenderBinary == "blender" {
return baseEnv
}
if !strings.Contains(blenderBinary, string(os.PathSeparator)) {
return baseEnv
}
blenderDir := filepath.Dir(blenderBinary)
libDir := filepath.Join(blenderDir, "lib")
ldLib := libDir
for _, e := range baseEnv {
if strings.HasPrefix(e, "LD_LIBRARY_PATH=") {
existing := strings.TrimPrefix(e, "LD_LIBRARY_PATH=")
if existing != "" {
ldLib = libDir + ":" + existing
}
break
}
}
out := make([]string, 0, len(baseEnv)+1)
done := false
for _, e := range baseEnv {
if strings.HasPrefix(e, "LD_LIBRARY_PATH=") {
out = append(out, "LD_LIBRARY_PATH="+ldLib)
done = true
continue
}
out = append(out, e)
}
if !done {
out = append(out, "LD_LIBRARY_PATH="+ldLib)
}
return out
}
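TarballEnv's LD_LIBRARY_PATH handling can be sketched on its own: prepend the tarball's lib directory to an existing entry, or append a fresh one. The helper name below is illustrative; the merge rule follows the function above:

```go
package main

import (
	"fmt"
	"strings"
)

// prependLDLibraryPath applies the same rule as TarballEnv: prepend libDir to
// an existing LD_LIBRARY_PATH entry, or append a new entry if none exists.
func prependLDLibraryPath(env []string, libDir string) []string {
	out := make([]string, 0, len(env)+1)
	found := false
	for _, e := range env {
		if strings.HasPrefix(e, "LD_LIBRARY_PATH=") {
			existing := strings.TrimPrefix(e, "LD_LIBRARY_PATH=")
			val := libDir
			if existing != "" {
				val = libDir + ":" + existing
			}
			out = append(out, "LD_LIBRARY_PATH="+val)
			found = true
			continue
		}
		out = append(out, e)
	}
	if !found {
		out = append(out, "LD_LIBRARY_PATH="+libDir)
	}
	return out
}

func main() {
	fmt.Println(prependLDLibraryPath([]string{"A=B", "LD_LIBRARY_PATH=/old"}, "/opt/blender/lib"))
	// [A=B LD_LIBRARY_PATH=/opt/blender/lib:/old]
}
```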

View File

@@ -0,0 +1,34 @@
package blender
import (
"os"
"path/filepath"
"strings"
"testing"
)
func TestResolveBinaryPath_AbsoluteLikePath(t *testing.T) {
got, err := ResolveBinaryPath("./blender")
if err != nil {
t.Fatalf("ResolveBinaryPath failed: %v", err)
}
if !filepath.IsAbs(got) {
t.Fatalf("expected absolute path, got %q", got)
}
}
func TestResolveBinaryPath_Empty(t *testing.T) {
if _, err := ResolveBinaryPath(""); err == nil {
t.Fatal("expected error for empty blender binary")
}
}
func TestTarballEnv_SetsAndExtendsLDLibraryPath(t *testing.T) {
bin := filepath.Join(string(os.PathSeparator), "tmp", "blender", "blender")
got := TarballEnv(bin, []string{"A=B", "LD_LIBRARY_PATH=/old"})
joined := strings.Join(got, "\n")
if !strings.Contains(joined, "LD_LIBRARY_PATH=/tmp/blender/lib:/old") {
t.Fatalf("expected LD_LIBRARY_PATH to include blender lib, got %v", got)
}
}

View File

@@ -0,0 +1,116 @@
// Package blender: host GPU backend detection for AMD/NVIDIA/Intel.
package blender
import (
"bufio"
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
)
// DetectGPUBackends detects whether AMD, NVIDIA, and/or Intel GPUs are available
// using host-level hardware probing only.
func DetectGPUBackends() (hasAMD, hasNVIDIA, hasIntel bool, ok bool) {
return detectGPUBackendsFromHost()
}
func detectGPUBackendsFromHost() (hasAMD, hasNVIDIA, hasIntel bool, ok bool) {
if amd, nvidia, intel, found := detectGPUBackendsFromDRM(); found {
return amd, nvidia, intel, true
}
if amd, nvidia, intel, found := detectGPUBackendsFromLSPCI(); found {
return amd, nvidia, intel, true
}
return false, false, false, false
}
func detectGPUBackendsFromDRM() (hasAMD, hasNVIDIA, hasIntel bool, ok bool) {
entries, err := os.ReadDir("/sys/class/drm")
if err != nil {
return false, false, false, false
}
for _, entry := range entries {
name := entry.Name()
if !isDRMCardNode(name) {
continue
}
vendorPath := filepath.Join("/sys/class/drm", name, "device", "vendor")
vendorRaw, err := os.ReadFile(vendorPath)
if err != nil {
continue
}
vendor := strings.TrimSpace(strings.ToLower(string(vendorRaw)))
switch vendor {
case "0x1002":
hasAMD = true
ok = true
case "0x10de":
hasNVIDIA = true
ok = true
case "0x8086":
hasIntel = true
ok = true
}
}
return hasAMD, hasNVIDIA, hasIntel, ok
}
func isDRMCardNode(name string) bool {
if !strings.HasPrefix(name, "card") {
return false
}
if strings.Contains(name, "-") {
// Connector entries like card0-DP-1 are not GPU device nodes.
return false
}
if len(name) <= len("card") {
return false
}
_, err := strconv.Atoi(strings.TrimPrefix(name, "card"))
return err == nil
}
func detectGPUBackendsFromLSPCI() (hasAMD, hasNVIDIA, hasIntel bool, ok bool) {
if _, err := exec.LookPath("lspci"); err != nil {
return false, false, false, false
}
out, err := exec.Command("lspci").CombinedOutput()
if err != nil {
return false, false, false, false
}
scanner := bufio.NewScanner(strings.NewReader(string(out)))
for scanner.Scan() {
line := strings.ToLower(strings.TrimSpace(scanner.Text()))
if !isGPUControllerLine(line) {
continue
}
if strings.Contains(line, "nvidia") {
hasNVIDIA = true
ok = true
}
if strings.Contains(line, "amd") || strings.Contains(line, "ati") || strings.Contains(line, "radeon") {
hasAMD = true
ok = true
}
if strings.Contains(line, "intel") {
hasIntel = true
ok = true
}
}
return hasAMD, hasNVIDIA, hasIntel, ok
}
func isGPUControllerLine(line string) bool {
return strings.Contains(line, "vga compatible controller") ||
strings.Contains(line, "3d controller") ||
strings.Contains(line, "display controller")
}
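The DRM probe keys off PCI vendor IDs read from /sys/class/drm/cardN/device/vendor. A standalone sketch of that vendor-to-backend mapping as used in detectGPUBackendsFromDRM (classifyVendor is an illustrative name; the IDs are the ones the code matches):

```go
package main

import "fmt"

// classifyVendor maps a PCI vendor ID string to a GPU backend name,
// mirroring the switch in detectGPUBackendsFromDRM. Unknown vendors return "".
func classifyVendor(vendorID string) string {
	switch vendorID {
	case "0x1002":
		return "amd"
	case "0x10de":
		return "nvidia"
	case "0x8086":
		return "intel"
	}
	return ""
}

func main() {
	for _, id := range []string{"0x1002", "0x10de", "0x8086", "0xdead"} {
		fmt.Printf("%s -> %q\n", id, classifyVendor(id))
	}
}
```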

View File

@@ -0,0 +1,32 @@
package blender
import "testing"
func TestIsDRMCardNode(t *testing.T) {
tests := map[string]bool{
"card0": true,
"card12": true,
"card": false,
"card0-DP-1": false,
"renderD128": false,
"foo": false,
}
for in, want := range tests {
if got := isDRMCardNode(in); got != want {
t.Fatalf("isDRMCardNode(%q) = %v, want %v", in, got, want)
}
}
}
func TestIsGPUControllerLine(t *testing.T) {
if !isGPUControllerLine("vga compatible controller: nvidia corp") {
t.Fatal("expected VGA controller line to match")
}
if !isGPUControllerLine("3d controller: amd") {
t.Fatal("expected 3d controller line to match")
}
if isGPUControllerLine("audio device: something") {
t.Fatal("audio line should not match")
}
}

View File

@@ -0,0 +1,34 @@
package blender
import (
"testing"
"jiggablend/pkg/types"
)
func TestFilterLog_FiltersNoise(t *testing.T) {
cases := []string{
"",
"--------------------------------------------------------------------",
"Failed to add relation foo",
"BKE_modifier_set_error",
"Depth Type Name",
}
for _, in := range cases {
filtered, level := FilterLog(in)
if !filtered {
t.Fatalf("expected filtered for %q", in)
}
if level != types.LogLevelInfo {
t.Fatalf("unexpected level for %q: %s", in, level)
}
}
}
func TestFilterLog_KeepsNormalLine(t *testing.T) {
filtered, _ := FilterLog("Rendering done.")
if filtered {
t.Fatal("normal line should not be filtered")
}
}

View File

@@ -1,143 +1,19 @@
 package blender

 import (
-	"compress/gzip"
 	"fmt"
-	"io"
-	"os"
-	"os/exec"
+	"jiggablend/pkg/blendfile"
 )
 // ParseVersionFromFile parses the Blender version that a .blend file was saved with.
 // Returns major and minor version numbers.
+// Delegates to the shared pkg/blendfile implementation.
 func ParseVersionFromFile(blendPath string) (major, minor int, err error) {
+	return blendfile.ParseVersionFromFile(blendPath)
-	file, err := os.Open(blendPath)
if err != nil {
return 0, 0, fmt.Errorf("failed to open blend file: %w", err)
}
defer file.Close()
// Read the first 12 bytes of the blend file header
// Format: BLENDER-v<major><minor><patch> or BLENDER_v<major><minor><patch>
// The header is: "BLENDER" (7 bytes) + pointer size (1 byte: '-' for 64-bit, '_' for 32-bit)
// + endianness (1 byte: 'v' for little-endian, 'V' for big-endian)
// + version (3 bytes: e.g., "402" for 4.02)
header := make([]byte, 12)
n, err := file.Read(header)
if err != nil || n < 12 {
return 0, 0, fmt.Errorf("failed to read blend file header: %w", err)
}
// Check for BLENDER magic
if string(header[:7]) != "BLENDER" {
// Might be compressed - try to decompress
file.Seek(0, 0)
return parseCompressedVersion(file)
}
// Parse version from bytes 9-11 (3 digits)
versionStr := string(header[9:12])
// Version format changed in Blender 3.0
// Pre-3.0: "279" = 2.79, "280" = 2.80
// 3.0+: "300" = 3.0, "402" = 4.02, "410" = 4.10
if len(versionStr) == 3 {
// First digit is major version
fmt.Sscanf(string(versionStr[0]), "%d", &major)
// Next two digits are minor version
fmt.Sscanf(versionStr[1:3], "%d", &minor)
}
return major, minor, nil
}
// parseCompressedVersion handles gzip and zstd compressed blend files.
func parseCompressedVersion(file *os.File) (major, minor int, err error) {
magic := make([]byte, 4)
if _, err := file.Read(magic); err != nil {
return 0, 0, err
}
file.Seek(0, 0)
if magic[0] == 0x1f && magic[1] == 0x8b {
// gzip compressed
gzReader, err := gzip.NewReader(file)
if err != nil {
return 0, 0, fmt.Errorf("failed to create gzip reader: %w", err)
}
defer gzReader.Close()
header := make([]byte, 12)
n, err := gzReader.Read(header)
if err != nil || n < 12 {
return 0, 0, fmt.Errorf("failed to read compressed blend header: %w", err)
}
if string(header[:7]) != "BLENDER" {
return 0, 0, fmt.Errorf("invalid blend file format")
}
versionStr := string(header[9:12])
if len(versionStr) == 3 {
fmt.Sscanf(string(versionStr[0]), "%d", &major)
fmt.Sscanf(versionStr[1:3], "%d", &minor)
}
return major, minor, nil
}
// Check for zstd magic (Blender 3.0+): 0x28 0xB5 0x2F 0xFD
if magic[0] == 0x28 && magic[1] == 0xb5 && magic[2] == 0x2f && magic[3] == 0xfd {
return parseZstdVersion(file)
}
return 0, 0, fmt.Errorf("unknown blend file format")
}
// parseZstdVersion handles zstd-compressed blend files (Blender 3.0+).
// Uses zstd command line tool since Go doesn't have native zstd support.
func parseZstdVersion(file *os.File) (major, minor int, err error) {
file.Seek(0, 0)
cmd := exec.Command("zstd", "-d", "-c")
cmd.Stdin = file
stdout, err := cmd.StdoutPipe()
if err != nil {
return 0, 0, fmt.Errorf("failed to create zstd stdout pipe: %w", err)
}
if err := cmd.Start(); err != nil {
return 0, 0, fmt.Errorf("failed to start zstd decompression: %w", err)
}
// Read just the header (12 bytes)
header := make([]byte, 12)
n, readErr := io.ReadFull(stdout, header)
// Kill the process early - we only need the header
cmd.Process.Kill()
cmd.Wait()
if readErr != nil || n < 12 {
return 0, 0, fmt.Errorf("failed to read zstd compressed blend header: %v", readErr)
}
if string(header[:7]) != "BLENDER" {
return 0, 0, fmt.Errorf("invalid blend file format in zstd archive")
}
versionStr := string(header[9:12])
if len(versionStr) == 3 {
fmt.Sscanf(string(versionStr[0]), "%d", &major)
fmt.Sscanf(versionStr[1:3], "%d", &minor)
}
return major, minor, nil
}

// VersionString returns a formatted version string like "4.2".
func VersionString(major, minor int) string {
	return fmt.Sprintf("%d.%d", major, minor)
}
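The header-version decoding that moved into pkg/blendfile reduces to interpreting the three version digits from the .blend header ("279" = 2.79, "402" = 4.2). A minimal sketch of that rule (parseHeaderVersion is an illustrative name, not the package's API):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseHeaderVersion decodes the 3-digit version field from a .blend header:
// the first digit is the major version, the remaining two are the minor
// version, e.g. "402" -> 4.2 and "279" -> 2.79.
func parseHeaderVersion(digits string) (major, minor int, err error) {
	if len(digits) != 3 {
		return 0, 0, fmt.Errorf("want 3 version digits, got %q", digits)
	}
	major, err = strconv.Atoi(digits[:1])
	if err != nil {
		return 0, 0, err
	}
	minor, err = strconv.Atoi(digits[1:])
	if err != nil {
		return 0, 0, err
	}
	return major, minor, nil
}

func main() {
	maj, min, _ := parseHeaderVersion("402")
	fmt.Println(maj, min) // 4 2
	maj, min, _ = parseHeaderVersion("279")
	fmt.Println(maj, min) // 2 79
}
```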

View File

@@ -0,0 +1,10 @@
package blender
import "testing"
func TestVersionString(t *testing.T) {
if got := VersionString(4, 2); got != "4.2" {
t.Fatalf("VersionString() = %q, want %q", got, "4.2")
}
}

View File

@@ -20,10 +20,8 @@ type EncodeConfig struct {
 	StartFrame   int     // Starting frame number
 	FrameRate    float64 // Frame rate
 	WorkDir      string  // Working directory
 	UseAlpha     bool    // Whether to preserve alpha channel
 	TwoPass      bool    // Whether to use 2-pass encoding
-	SourceFormat string  // Source format: "exr" or "png" (defaults to "exr")
-	PreserveHDR  bool    // Whether to preserve HDR range for EXR (uses HLG with bt709 primaries)
 }

 // Selector selects the software encoder.
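The EXR-only encode path pairs these config fields with a fixed zscale filter chain (linear float RGB -> sRGB -> HLG, then a 10-bit 4:2:0 pixel format). A standalone sketch of that chain assembly (buildVF is an illustrative name; the chain string is the one the encoder uses):

```go
package main

import "fmt"

// buildVF assembles the encoder's EXR filter chain: linear -> sRGB -> HLG via
// zscale, then a 10-bit 4:2:0 pixel format, with or without alpha.
func buildVF(useAlpha bool) string {
	vf := "format=gbrpf32le," +
		"zscale=transferin=8:transfer=13:primariesin=1:primaries=1:matrixin=0:matrix=1:rangein=full:range=full," +
		"zscale=transferin=13:transfer=18:primariesin=1:primaries=1:matrixin=1:matrix=1:rangein=full:range=full"
	if useAlpha {
		return vf + ",format=yuva420p10le"
	}
	return vf + ",format=yuv420p10le"
}

func main() {
	fmt.Println(buildVF(false))
}
```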

View File

@@ -1,5 +1,7 @@
package encoding

// Pipeline: Blender outputs only EXR (linear). Encode is EXR only: linear -> sRGB -> HLG (video), 10-bit, full range.

import (
	"fmt"
	"log"
@@ -56,97 +58,34 @@ func (e *SoftwareEncoder) Available() bool {
 }

 func (e *SoftwareEncoder) BuildCommand(config *EncodeConfig) *exec.Cmd {
-	// Use HDR pixel formats for EXR, SDR for PNG
-	var pixFmt string
-	var colorPrimaries, colorTrc, colorspace string
-	if config.SourceFormat == "png" {
-		// PNG: SDR format
-		pixFmt = "yuv420p"
-		if config.UseAlpha {
-			pixFmt = "yuva420p"
-		}
-		colorPrimaries = "bt709"
-		colorTrc = "bt709"
-		colorspace = "bt709"
-	} else {
-		// EXR: Use HDR encoding if PreserveHDR is true, otherwise SDR (like PNG)
-		if config.PreserveHDR {
-			// HDR: Use HLG transfer with bt709 primaries to preserve HDR range while matching PNG color
-			pixFmt = "yuv420p10le" // 10-bit to preserve HDR range
-			if config.UseAlpha {
-				pixFmt = "yuva420p10le"
-			}
-			colorPrimaries = "bt709"  // bt709 primaries to match PNG color appearance
-			colorTrc = "arib-std-b67" // HLG transfer function - preserves HDR range, works on SDR displays
-			colorspace = "bt709"      // bt709 colorspace to match PNG
-		} else {
-			// SDR: Treat as SDR (like PNG) - encode as bt709
-			pixFmt = "yuv420p"
-			if config.UseAlpha {
-				pixFmt = "yuva420p"
-			}
-			colorPrimaries = "bt709"
-			colorTrc = "bt709"
-			colorspace = "bt709"
-		}
-	}
+	// EXR only: HDR path (HLG, 10-bit, full range)
+	pixFmt := "yuv420p10le"
+	if config.UseAlpha {
+		pixFmt = "yuva420p10le"
+	}
+	colorPrimaries, colorTrc, colorspace, colorRange := "bt709", "arib-std-b67", "bt709", "pc"
 	var codecArgs []string
 	switch e.codec {
 	case "libaom-av1":
 		codecArgs = []string{"-crf", strconv.Itoa(CRFAV1), "-b:v", "0", "-tiles", "2x2", "-g", "240"}
 	case "libvpx-vp9":
-		// VP9 supports alpha and HDR, use good quality settings
 		codecArgs = []string{"-crf", strconv.Itoa(CRFVP9), "-b:v", "0", "-row-mt", "1", "-g", "240"}
 	default:
-		// H.264: Use High 10 profile for HDR EXR (10-bit), High profile for SDR
-		if config.SourceFormat != "png" && config.PreserveHDR {
-			codecArgs = []string{"-preset", "veryslow", "-crf", strconv.Itoa(CRFH264), "-profile:v", "high10", "-level", "5.2", "-tune", "film", "-keyint_min", "24", "-g", "240", "-bf", "2", "-refs", "4"}
-		} else {
-			codecArgs = []string{"-preset", "veryslow", "-crf", strconv.Itoa(CRFH264), "-profile:v", "high", "-level", "5.2", "-tune", "film", "-keyint_min", "24", "-g", "240", "-bf", "2", "-refs", "4"}
-		}
+		codecArgs = []string{"-preset", "veryslow", "-crf", strconv.Itoa(CRFH264), "-profile:v", "high10", "-level", "5.2", "-tune", "film", "-keyint_min", "24", "-g", "240", "-bf", "2", "-refs", "4"}
 	}
-	args := []string{
-		"-y",
-		"-f", "image2",
-		"-start_number", fmt.Sprintf("%d", config.StartFrame),
-		"-framerate", fmt.Sprintf("%.2f", config.FrameRate),
-		"-i", config.InputPattern,
-		"-c:v", e.codec,
-		"-pix_fmt", pixFmt,
-		"-r", fmt.Sprintf("%.2f", config.FrameRate),
-		"-color_primaries", colorPrimaries,
-		"-color_trc", colorTrc,
-		"-colorspace", colorspace,
-		"-color_range", "tv",
-	}
+	args := []string{"-y", "-f", "image2", "-start_number", fmt.Sprintf("%d", config.StartFrame), "-framerate", fmt.Sprintf("%.2f", config.FrameRate),
+		"-color_trc", "linear", "-color_primaries", "bt709"}
+	args = append(args, "-i", config.InputPattern, "-c:v", e.codec, "-pix_fmt", pixFmt, "-r", fmt.Sprintf("%.2f", config.FrameRate), "-color_primaries", colorPrimaries, "-color_trc", colorTrc, "-colorspace", colorspace, "-color_range", colorRange)
-	// Add video filter for EXR: convert linear RGB based on HDR setting
-	// PNG doesn't need any filter as it's already in sRGB
-	if config.SourceFormat != "png" {
-		var vf string
-		if config.PreserveHDR {
-			// HDR: Convert linear RGB -> sRGB -> HLG with bt709 primaries
-			// This preserves HDR range while matching PNG color appearance
-			vf = "format=gbrpf32le,zscale=transferin=8:transfer=13:primariesin=1:primaries=1:matrixin=0:matrix=1:rangein=full:range=full,zscale=transferin=13:transfer=18:primariesin=1:primaries=1:matrixin=1:matrix=1:rangein=full:range=full"
-			if config.UseAlpha {
-				vf += ",format=yuva420p10le"
-			} else {
-				vf += ",format=yuv420p10le"
-			}
-		} else {
-			// SDR: Convert linear RGB (EXR) to sRGB (bt709) - simple conversion like Krita does
-			// zscale: linear (8) -> sRGB (13) with bt709 primaries/matrix
-			vf = "format=gbrpf32le,zscale=transferin=8:transfer=13:primariesin=1:primaries=1:matrixin=0:matrix=1:rangein=full:range=full"
-			if config.UseAlpha {
-				vf += ",format=yuva420p"
-			} else {
-				vf += ",format=yuv420p"
-			}
-		}
-		args = append(args, "-vf", vf)
-	}
+	vf := "format=gbrpf32le,zscale=transferin=8:transfer=13:primariesin=1:primaries=1:matrixin=0:matrix=1:rangein=full:range=full,zscale=transferin=13:transfer=18:primariesin=1:primaries=1:matrixin=1:matrix=1:rangein=full:range=full"
+	if config.UseAlpha {
+		vf += ",format=yuva420p10le"
+	} else {
+		vf += ",format=yuv420p10le"
+	}
+	args = append(args, "-vf", vf)
 	args = append(args, codecArgs...)
if config.TwoPass { if config.TwoPass {
@@ -168,97 +107,33 @@ func (e *SoftwareEncoder) BuildCommand(config *EncodeConfig) *exec.Cmd {
 // BuildPass1Command builds the first pass command for 2-pass encoding.
 func (e *SoftwareEncoder) BuildPass1Command(config *EncodeConfig) *exec.Cmd {
-	// Use HDR pixel formats for EXR, SDR for PNG
-	var pixFmt string
-	var colorPrimaries, colorTrc, colorspace string
-	if config.SourceFormat == "png" {
-		// PNG: SDR format
-		pixFmt = "yuv420p"
-		if config.UseAlpha {
-			pixFmt = "yuva420p"
-		}
-		colorPrimaries = "bt709"
-		colorTrc = "bt709"
-		colorspace = "bt709"
-	} else {
-		// EXR: Use HDR encoding if PreserveHDR is true, otherwise SDR (like PNG)
-		if config.PreserveHDR {
-			// HDR: Use HLG transfer with bt709 primaries to preserve HDR range while matching PNG color
-			pixFmt = "yuv420p10le" // 10-bit to preserve HDR range
-			if config.UseAlpha {
-				pixFmt = "yuva420p10le"
-			}
-			colorPrimaries = "bt709"  // bt709 primaries to match PNG color appearance
-			colorTrc = "arib-std-b67" // HLG transfer function - preserves HDR range, works on SDR displays
-			colorspace = "bt709"      // bt709 colorspace to match PNG
-		} else {
-			// SDR: Treat as SDR (like PNG) - encode as bt709
-			pixFmt = "yuv420p"
-			if config.UseAlpha {
-				pixFmt = "yuva420p"
-			}
-			colorPrimaries = "bt709"
-			colorTrc = "bt709"
-			colorspace = "bt709"
-		}
-	}
+	pixFmt := "yuv420p10le"
+	if config.UseAlpha {
+		pixFmt = "yuva420p10le"
+	}
+	colorPrimaries, colorTrc, colorspace, colorRange := "bt709", "arib-std-b67", "bt709", "pc"
 	var codecArgs []string
 	switch e.codec {
 	case "libaom-av1":
 		codecArgs = []string{"-crf", strconv.Itoa(CRFAV1), "-b:v", "0", "-tiles", "2x2", "-g", "240"}
 	case "libvpx-vp9":
-		// VP9 supports alpha and HDR, use good quality settings
 		codecArgs = []string{"-crf", strconv.Itoa(CRFVP9), "-b:v", "0", "-row-mt", "1", "-g", "240"}
 	default:
-		// H.264: Use High 10 profile for HDR EXR (10-bit), High profile for SDR
-		if config.SourceFormat != "png" && config.PreserveHDR {
-			codecArgs = []string{"-preset", "veryslow", "-crf", strconv.Itoa(CRFH264), "-profile:v", "high10", "-level", "5.2", "-tune", "film", "-keyint_min", "24", "-g", "240", "-bf", "2", "-refs", "4"}
-		} else {
-			codecArgs = []string{"-preset", "veryslow", "-crf", strconv.Itoa(CRFH264), "-profile:v", "high", "-level", "5.2", "-tune", "film", "-keyint_min", "24", "-g", "240", "-bf", "2", "-refs", "4"}
-		}
+		codecArgs = []string{"-preset", "veryslow", "-crf", strconv.Itoa(CRFH264), "-profile:v", "high10", "-level", "5.2", "-tune", "film", "-keyint_min", "24", "-g", "240", "-bf", "2", "-refs", "4"}
 	}
-	args := []string{
-		"-y",
-		"-f", "image2",
-		"-start_number", fmt.Sprintf("%d", config.StartFrame),
-		"-framerate", fmt.Sprintf("%.2f", config.FrameRate),
-		"-i", config.InputPattern,
-		"-c:v", e.codec,
-		"-pix_fmt", pixFmt,
-		"-r", fmt.Sprintf("%.2f", config.FrameRate),
-		"-color_primaries", colorPrimaries,
-		"-color_trc", colorTrc,
-		"-colorspace", colorspace,
-		"-color_range", "tv",
-	}
+	args := []string{"-y", "-f", "image2", "-start_number", fmt.Sprintf("%d", config.StartFrame), "-framerate", fmt.Sprintf("%.2f", config.FrameRate),
+		"-color_trc", "linear", "-color_primaries", "bt709"}
+	args = append(args, "-i", config.InputPattern, "-c:v", e.codec, "-pix_fmt", pixFmt, "-r", fmt.Sprintf("%.2f", config.FrameRate), "-color_primaries", colorPrimaries, "-color_trc", colorTrc, "-colorspace", colorspace, "-color_range", colorRange)
-	// Add video filter for EXR: convert linear RGB based on HDR setting
+	vf := "format=gbrpf32le,zscale=transferin=8:transfer=13:primariesin=1:primaries=1:matrixin=0:matrix=1:rangein=full:range=full,zscale=transferin=13:transfer=18:primariesin=1:primaries=1:matrixin=1:matrix=1:rangein=full:range=full"
// PNG doesn't need any filter as it's already in sRGB if config.UseAlpha {
if config.SourceFormat != "png" { vf += ",format=yuva420p10le"
var vf string } else {
if config.PreserveHDR { vf += ",format=yuv420p10le"
// HDR: Convert linear RGB -> sRGB -> HLG with bt709 primaries
// This preserves HDR range while matching PNG color appearance
vf = "format=gbrpf32le,zscale=transferin=8:transfer=13:primariesin=1:primaries=1:matrixin=0:matrix=1:rangein=full:range=full,zscale=transferin=13:transfer=18:primariesin=1:primaries=1:matrixin=1:matrix=1:rangein=full:range=full"
if config.UseAlpha {
vf += ",format=yuva420p10le"
} else {
vf += ",format=yuv420p10le"
}
} else {
// SDR: Convert linear RGB (EXR) to sRGB (bt709) - simple conversion like Krita does
// zscale: linear (8) -> sRGB (13) with bt709 primaries/matrix
vf = "format=gbrpf32le,zscale=transferin=8:transfer=13:primariesin=1:primaries=1:matrixin=0:matrix=1:rangein=full:range=full"
if config.UseAlpha {
vf += ",format=yuva420p"
} else {
vf += ",format=yuv420p"
}
}
args = append(args, "-vf", vf)
} }
args = append(args, "-vf", vf)
args = append(args, codecArgs...) args = append(args, codecArgs...)
args = append(args, "-pass", "1", "-f", "null", "/dev/null") args = append(args, "-pass", "1", "-f", "null", "/dev/null")

View File

@@ -18,7 +18,6 @@ func TestSoftwareEncoder_BuildCommand_H264_EXR(t *testing.T) {
 		WorkDir:  "/tmp",
 		UseAlpha: false,
 		TwoPass:  true,
-		SourceFormat: "exr",
 	}
 	cmd := encoder.BuildCommand(config)
@@ -37,7 +36,7 @@ func TestSoftwareEncoder_BuildCommand_H264_EXR(t *testing.T) {
 	args := cmd.Args[1:] // Skip "ffmpeg"
 	argsStr := strings.Join(args, " ")
-	// Check required arguments
+	// EXR always uses HDR path: 10-bit, HLG, full range
 	checks := []struct {
 		name     string
 		expected string
@@ -46,18 +45,19 @@ func TestSoftwareEncoder_BuildCommand_H264_EXR(t *testing.T) {
 		{"image2 format", "-f image2"},
 		{"start number", "-start_number 1"},
 		{"framerate", "-framerate 24.00"},
+		{"input color tag", "-color_trc linear"},
 		{"input pattern", "-i frame_%04d.exr"},
 		{"codec", "-c:v libx264"},
-		{"pixel format", "-pix_fmt yuv420p"}, // EXR now treated as SDR (like PNG)
+		{"pixel format", "-pix_fmt yuv420p10le"},
 		{"frame rate", "-r 24.00"},
-		{"color primaries", "-color_primaries bt709"}, // EXR now uses bt709 (SDR)
-		{"color trc", "-color_trc bt709"}, // EXR now uses bt709 (SDR)
+		{"color primaries", "-color_primaries bt709"},
+		{"color trc", "-color_trc arib-std-b67"},
 		{"colorspace", "-colorspace bt709"},
-		{"color range", "-color_range tv"},
+		{"color range", "-color_range pc"},
 		{"video filter", "-vf"},
 		{"preset", "-preset veryslow"},
 		{"crf", "-crf 15"},
-		{"profile", "-profile:v high"}, // EXR now uses high profile (SDR)
+		{"profile", "-profile:v high10"},
 		{"pass 2", "-pass 2"},
 		{"output path", "output.mp4"},
 	}
@@ -68,40 +68,15 @@ func TestSoftwareEncoder_BuildCommand_H264_EXR(t *testing.T) {
 		}
 	}
-	// Verify filter is present for EXR (linear RGB to sRGB conversion, like Krita does)
+	// EXR: linear -> sRGB -> HLG filter
 	if !strings.Contains(argsStr, "format=gbrpf32le") {
 		t.Error("Expected format conversion filter for EXR source, but not found")
 	}
 	if !strings.Contains(argsStr, "zscale=transferin=8:transfer=13") {
 		t.Error("Expected linear to sRGB conversion for EXR source, but not found")
 	}
+	if !strings.Contains(argsStr, "transfer=18") {
+		t.Error("Expected sRGB to HLG conversion for EXR HDR, but not found")
+	}
-}
-
-func TestSoftwareEncoder_BuildCommand_H264_PNG(t *testing.T) {
-	encoder := &SoftwareEncoder{codec: "libx264"}
-	config := &EncodeConfig{
-		InputPattern: "frame_%04d.png",
-		OutputPath:   "output.mp4",
-		StartFrame:   1,
-		FrameRate:    24.0,
-		WorkDir:      "/tmp",
-		UseAlpha:     false,
-		TwoPass:      true,
-		SourceFormat: "png",
-	}
-	cmd := encoder.BuildCommand(config)
-	args := cmd.Args[1:]
-	argsStr := strings.Join(args, " ")
-	// PNG should NOT have video filter
-	if strings.Contains(argsStr, "-vf") {
-		t.Error("PNG source should not have video filter, but -vf was found")
-	}
-	// Should still have all other required args
-	if !strings.Contains(argsStr, "-c:v libx264") {
-		t.Error("Missing codec argument")
-	}
 }
@@ -113,18 +88,17 @@ func TestSoftwareEncoder_BuildCommand_AV1_WithAlpha(t *testing.T) {
 		StartFrame: 100,
 		FrameRate:  30.0,
 		WorkDir:    "/tmp",
 		UseAlpha:   true,
 		TwoPass:    true,
-		SourceFormat: "exr",
 	}
 	cmd := encoder.BuildCommand(config)
 	args := cmd.Args[1:]
 	argsStr := strings.Join(args, " ")
-	// Check alpha-specific settings
-	if !strings.Contains(argsStr, "-pix_fmt yuva420p") {
-		t.Error("Expected yuva420p pixel format for alpha, but not found")
-	}
+	// EXR with alpha: 10-bit HDR path
+	if !strings.Contains(argsStr, "-pix_fmt yuva420p10le") {
+		t.Error("Expected yuva420p10le pixel format for EXR alpha, but not found")
+	}
 	// Check AV1-specific arguments
@@ -142,9 +116,9 @@ func TestSoftwareEncoder_BuildCommand_AV1_WithAlpha(t *testing.T) {
 		}
 	}
-	// Check tonemap filter includes alpha format
-	if !strings.Contains(argsStr, "format=yuva420p") {
-		t.Error("Expected tonemap filter to output yuva420p for alpha, but not found")
-	}
+	// Check tonemap filter includes alpha format (10-bit for EXR)
+	if !strings.Contains(argsStr, "format=yuva420p10le") {
+		t.Error("Expected tonemap filter to output yuva420p10le for EXR alpha, but not found")
+	}
 }
@@ -156,9 +130,8 @@ func TestSoftwareEncoder_BuildCommand_VP9(t *testing.T) {
 		StartFrame: 1,
 		FrameRate:  24.0,
 		WorkDir:    "/tmp",
 		UseAlpha:   true,
 		TwoPass:    true,
-		SourceFormat: "exr",
 	}
 	cmd := encoder.BuildCommand(config)
@@ -191,7 +164,6 @@ func TestSoftwareEncoder_BuildPass1Command(t *testing.T) {
 		WorkDir:  "/tmp",
 		UseAlpha: false,
 		TwoPass:  true,
-		SourceFormat: "exr",
 	}
 	cmd := encoder.BuildPass1Command(config)
@@ -227,7 +199,6 @@ func TestSoftwareEncoder_BuildPass1Command_AV1(t *testing.T) {
 		WorkDir:  "/tmp",
 		UseAlpha: false,
 		TwoPass:  true,
-		SourceFormat: "exr",
 	}
 	cmd := encoder.BuildPass1Command(config)
@@ -273,7 +244,6 @@ func TestSoftwareEncoder_BuildPass1Command_VP9(t *testing.T) {
 		WorkDir:  "/tmp",
 		UseAlpha: false,
 		TwoPass:  true,
-		SourceFormat: "exr",
 	}
 	cmd := encoder.BuildPass1Command(config)
@@ -319,7 +289,6 @@ func TestSoftwareEncoder_BuildCommand_NoTwoPass(t *testing.T) {
 		WorkDir:  "/tmp",
 		UseAlpha: false,
 		TwoPass:  false,
-		SourceFormat: "exr",
 	}
 	cmd := encoder.BuildCommand(config)
@@ -432,28 +401,6 @@ func TestSoftwareEncoder_Available(t *testing.T) {
 	}
 }
-
-func TestEncodeConfig_DefaultSourceFormat(t *testing.T) {
-	config := &EncodeConfig{
-		InputPattern: "frame_%04d.exr",
-		OutputPath:   "output.mp4",
-		StartFrame:   1,
-		FrameRate:    24.0,
-		WorkDir:      "/tmp",
-		UseAlpha:     false,
-		TwoPass:      false,
-		// SourceFormat not set, should default to empty string (treated as exr)
-	}
-	encoder := &SoftwareEncoder{codec: "libx264"}
-	cmd := encoder.BuildCommand(config)
-	args := strings.Join(cmd.Args[1:], " ")
-	// Should still have tonemap filter when SourceFormat is empty (defaults to exr behavior)
-	if !strings.Contains(args, "-vf") {
-		t.Error("Empty SourceFormat should default to EXR behavior with tonemap filter")
-	}
-}
-
 func TestCommandOrder(t *testing.T) {
 	encoder := &SoftwareEncoder{codec: "libx264"}
 	config := &EncodeConfig{
@@ -464,7 +411,6 @@ func TestCommandOrder(t *testing.T) {
 		WorkDir:  "/tmp",
 		UseAlpha: false,
 		TwoPass:  true,
-		SourceFormat: "exr",
 	}
 	cmd := encoder.BuildCommand(config)
@@ -519,20 +465,18 @@ func TestCommand_ColorspaceMetadata(t *testing.T) {
 		WorkDir:  "/tmp",
 		UseAlpha: false,
 		TwoPass:  false,
-		SourceFormat: "exr",
-		PreserveHDR:  false, // SDR encoding
 	}
 	cmd := encoder.BuildCommand(config)
 	args := cmd.Args[1:]
 	argsStr := strings.Join(args, " ")
-	// Verify all SDR colorspace metadata is present for EXR (SDR encoding)
+	// EXR always uses HDR path: bt709 primaries, HLG, full range
 	colorspaceArgs := []string{
-		"-color_primaries bt709", // EXR uses bt709 (SDR)
-		"-color_trc bt709", // EXR uses bt709 (SDR)
+		"-color_primaries bt709",
+		"-color_trc arib-std-b67",
 		"-colorspace bt709",
-		"-color_range tv",
+		"-color_range pc",
 	}
 	for _, arg := range colorspaceArgs {
@@ -541,17 +485,11 @@ func TestCommand_ColorspaceMetadata(t *testing.T) {
 		}
 	}
-	// Verify SDR pixel format
-	if !strings.Contains(argsStr, "-pix_fmt yuv420p") {
-		t.Error("SDR encoding should use yuv420p pixel format")
-	}
-
-	// Verify H.264 high profile (not high10)
-	if !strings.Contains(argsStr, "-profile:v high") {
-		t.Error("SDR encoding should use high profile")
-	}
-	if strings.Contains(argsStr, "-profile:v high10") {
-		t.Error("SDR encoding should not use high10 profile")
-	}
+	if !strings.Contains(argsStr, "-pix_fmt yuv420p10le") {
+		t.Error("EXR encoding should use yuv420p10le pixel format")
+	}
+	if !strings.Contains(argsStr, "-profile:v high10") {
+		t.Error("EXR encoding should use high10 profile")
+	}
 }
@@ -565,20 +503,18 @@ func TestCommand_HDR_ColorspaceMetadata(t *testing.T) {
 		WorkDir:  "/tmp",
 		UseAlpha: false,
 		TwoPass:  false,
-		SourceFormat: "exr",
-		PreserveHDR:  true, // HDR encoding
 	}
 	cmd := encoder.BuildCommand(config)
 	args := cmd.Args[1:]
 	argsStr := strings.Join(args, " ")
-	// Verify all HDR colorspace metadata is present for EXR (HDR encoding)
+	// Verify all HDR colorspace metadata is present for EXR (full range to match zscale output)
 	colorspaceArgs := []string{
-		"-color_primaries bt709", // bt709 primaries to match PNG color appearance
-		"-color_trc arib-std-b67", // HLG transfer function for HDR/SDR compatibility
-		"-colorspace bt709", // bt709 colorspace to match PNG
-		"-color_range tv",
+		"-color_primaries bt709",
+		"-color_trc arib-std-b67",
+		"-colorspace bt709",
+		"-color_range pc",
 	}
 	for _, arg := range colorspaceArgs {
@@ -656,7 +592,6 @@ func TestIntegration_Encode_EXR_H264(t *testing.T) {
 		WorkDir:  tmpDir,
 		UseAlpha: false,
 		TwoPass:  false, // Use single pass for faster testing
-		SourceFormat: "exr",
 	}
 	// Build and run command
@@ -687,77 +622,6 @@ func TestIntegration_Encode_EXR_H264(t *testing.T) {
 	}
 }
-
-func TestIntegration_Encode_PNG_H264(t *testing.T) {
-	if testing.Short() {
-		t.Skip("Skipping integration test in short mode")
-	}
-	// Check if example file exists
-	exampleDir := filepath.Join("..", "..", "..", "examples")
-	pngFile := filepath.Join(exampleDir, "frame_0800.png")
-	if _, err := os.Stat(pngFile); os.IsNotExist(err) {
-		t.Skipf("Example file not found: %s", pngFile)
-	}
-	// Get absolute paths
-	workspaceRoot, err := filepath.Abs(filepath.Join("..", "..", ".."))
-	if err != nil {
-		t.Fatalf("Failed to get workspace root: %v", err)
-	}
-	exampleDirAbs, err := filepath.Abs(exampleDir)
-	if err != nil {
-		t.Fatalf("Failed to get example directory: %v", err)
-	}
-	tmpDir := filepath.Join(workspaceRoot, "tmp")
-	if err := os.MkdirAll(tmpDir, 0755); err != nil {
-		t.Fatalf("Failed to create tmp directory: %v", err)
-	}
-	encoder := &SoftwareEncoder{codec: "libx264"}
-	config := &EncodeConfig{
-		InputPattern: filepath.Join(exampleDirAbs, "frame_%04d.png"),
-		OutputPath:   filepath.Join(tmpDir, "test_png_h264.mp4"),
-		StartFrame:   800,
-		FrameRate:    24.0,
-		WorkDir:      tmpDir,
-		UseAlpha:     false,
-		TwoPass:      false, // Use single pass for faster testing
-		SourceFormat: "png",
-	}
-	// Build and run command
-	cmd := encoder.BuildCommand(config)
-	if cmd == nil {
-		t.Fatal("BuildCommand returned nil")
-	}
-	// Verify no video filter is used for PNG
-	argsStr := strings.Join(cmd.Args, " ")
-	if strings.Contains(argsStr, "-vf") {
-		t.Error("PNG encoding should not use video filter, but -vf was found in command")
-	}
-	// Run the command
-	cmdOutput, err := cmd.CombinedOutput()
-	if err != nil {
-		t.Errorf("FFmpeg command failed: %v\nCommand output: %s", err, string(cmdOutput))
-		return
-	}
-	// Verify output file was created
-	if _, err := os.Stat(config.OutputPath); os.IsNotExist(err) {
-		t.Errorf("Output file was not created: %s\nCommand output: %s", config.OutputPath, string(cmdOutput))
-	} else {
-		t.Logf("Successfully created output file: %s", config.OutputPath)
-		info, _ := os.Stat(config.OutputPath)
-		if info.Size() == 0 {
-			t.Error("Output file was created but is empty")
-		} else {
-			t.Logf("Output file size: %d bytes", info.Size())
-		}
-	}
-}
-
 func TestIntegration_Encode_EXR_VP9(t *testing.T) {
 	if testing.Short() {
 		t.Skip("Skipping integration test in short mode")
@@ -800,7 +664,6 @@ func TestIntegration_Encode_EXR_VP9(t *testing.T) {
 		WorkDir:  tmpDir,
 		UseAlpha: false,
 		TwoPass:  false, // Use single pass for faster testing
-		SourceFormat: "exr",
 	}
 	// Build and run command
@@ -873,7 +736,6 @@ func TestIntegration_Encode_EXR_AV1(t *testing.T) {
 		WorkDir:  tmpDir,
 		UseAlpha: false,
 		TwoPass:  false,
-		SourceFormat: "exr",
 	}
 	// Build and run command
@@ -940,7 +802,6 @@ func TestIntegration_Encode_EXR_VP9_WithAlpha(t *testing.T) {
 		WorkDir:  tmpDir,
 		UseAlpha: true, // Test with alpha
 		TwoPass:  false, // Use single pass for faster testing
-		SourceFormat: "exr",
 	}
 	// Build and run command

View File

@@ -4,6 +4,7 @@ package runner
 import (
 	"crypto/sha256"
 	"encoding/hex"
+	"errors"
 	"fmt"
 	"log"
 	"net"
@@ -39,10 +40,28 @@ type Runner struct {
 	fingerprint   string
 	fingerprintMu sync.RWMutex
+
+	// gpuLockedOut is set when logs indicate a GPU error (e.g. HIP "Illegal address");
+	// when true, the runner forces CPU rendering for all subsequent jobs.
+	gpuLockedOut   bool
+	gpuLockedOutMu sync.RWMutex
+
+	// hasAMD/hasNVIDIA/hasIntel are set at startup by hardware/Blender GPU backend detection.
+	// Used to force CPU only for Blender < 4.x when AMD is present (no official HIP support pre-4).
+	// gpuDetectionFailed is true when detection could not run; we then force CPU for all versions.
+	gpuBackendMu       sync.RWMutex
+	hasAMD             bool
+	hasNVIDIA          bool
+	hasIntel           bool
+	gpuBackendProbed   bool
+	gpuDetectionFailed bool
+
+	// forceCPURendering forces CPU rendering for all jobs regardless of metadata/backend detection.
+	forceCPURendering bool
 }

 // New creates a new runner.
-func New(managerURL, name, hostname string) *Runner {
+func New(managerURL, name, hostname string, forceCPURendering bool) *Runner {
 	manager := api.NewManagerClient(managerURL)
 	r := &Runner{
@@ -52,6 +71,8 @@ func New(managerURL, name, hostname string, forceCPURendering bool) *Runner {
 		processes:  executils.NewProcessTracker(),
 		stopChan:   make(chan struct{}),
 		processors: make(map[string]tasks.Processor),
+
+		forceCPURendering: forceCPURendering,
 	}

 	// Generate fingerprint
@@ -67,32 +88,28 @@ func (r *Runner) CheckRequiredTools() error {
 	}
 	log.Printf("Found zstd for compressed blend file support")

-	if err := exec.Command("xvfb-run", "--help").Run(); err != nil {
-		return fmt.Errorf("xvfb-run not found - required for headless Blender rendering. Install with: apt install xvfb")
-	}
-	log.Printf("Found xvfb-run for headless rendering without -b option")
 	return nil
 }

-var cachedCapabilities map[string]interface{} = nil
+var (
+	cachedCapabilities map[string]interface{}
+	capabilitiesOnce   sync.Once
+)

 // ProbeCapabilities detects hardware capabilities.
 func (r *Runner) ProbeCapabilities() map[string]interface{} {
-	if cachedCapabilities != nil {
-		return cachedCapabilities
-	}
-	caps := make(map[string]interface{})
-
-	// Check for ffmpeg and probe encoding capabilities
-	if err := exec.Command("ffmpeg", "-version").Run(); err == nil {
-		caps["ffmpeg"] = true
-	} else {
-		caps["ffmpeg"] = false
-	}
-
-	cachedCapabilities = caps
-	return caps
+	capabilitiesOnce.Do(func() {
+		caps := make(map[string]interface{})
+		if err := exec.Command("ffmpeg", "-version").Run(); err == nil {
+			caps["ffmpeg"] = true
+		} else {
+			caps["ffmpeg"] = false
+		}
+		cachedCapabilities = caps
+	})
+	return cachedCapabilities
 }

 // Register registers the runner with the manager.
@@ -122,6 +139,72 @@ func (r *Runner) Register(apiKey string) (int64, error) {
 	return id, nil
 }
+
+// DetectAndStoreGPUBackends runs host-level backend detection and stores AMD/NVIDIA/Intel results.
+// Call after Register. Used so we only force CPU for Blender < 4.x when AMD is present.
+func (r *Runner) DetectAndStoreGPUBackends() {
+	r.gpuBackendMu.Lock()
+	defer r.gpuBackendMu.Unlock()
+	if r.gpuBackendProbed {
+		return
+	}
+	hasAMD, hasNVIDIA, hasIntel, ok := blender.DetectGPUBackends()
+	if !ok {
+		log.Printf("GPU backend detection failed (host probe unavailable). All jobs will use CPU because backend availability is unknown.")
+		r.gpuBackendProbed = true
+		r.gpuDetectionFailed = true
+		return
+	}
+	detectedTypes := 0
+	if hasAMD {
+		detectedTypes++
+	}
+	if hasNVIDIA {
+		detectedTypes++
+	}
+	if hasIntel {
+		detectedTypes++
+	}
+	if detectedTypes > 1 {
+		log.Printf("mixed GPU vendors detected (AMD=%v NVIDIA=%v INTEL=%v): multi-vendor setups may not work reliably, but runner will continue with GPU enabled", hasAMD, hasNVIDIA, hasIntel)
+	}
+	r.hasAMD = hasAMD
+	r.hasNVIDIA = hasNVIDIA
+	r.hasIntel = hasIntel
+	r.gpuBackendProbed = true
+	r.gpuDetectionFailed = false
+	log.Printf("GPU backend detection: AMD=%v NVIDIA=%v INTEL=%v (Blender < 4.x will force CPU only when AMD is present)", hasAMD, hasNVIDIA, hasIntel)
+}
+
+// HasAMD returns whether the runner detected AMD devices. Used to force CPU for Blender < 4.x only when AMD is present.
+func (r *Runner) HasAMD() bool {
+	r.gpuBackendMu.RLock()
+	defer r.gpuBackendMu.RUnlock()
+	return r.hasAMD
+}
+
+// HasNVIDIA returns whether the runner detected NVIDIA GPUs.
+func (r *Runner) HasNVIDIA() bool {
+	r.gpuBackendMu.RLock()
+	defer r.gpuBackendMu.RUnlock()
+	return r.hasNVIDIA
+}
+
+// HasIntel returns whether the runner detected Intel GPUs (e.g. Arc).
+func (r *Runner) HasIntel() bool {
+	r.gpuBackendMu.RLock()
+	defer r.gpuBackendMu.RUnlock()
+	return r.hasIntel
+}
+
+// GPUDetectionFailed returns true when startup GPU backend detection could not run or failed. When true, all jobs use CPU because backend availability is unknown.
+func (r *Runner) GPUDetectionFailed() bool {
+	r.gpuBackendMu.RLock()
+	defer r.gpuBackendMu.RUnlock()
+	return r.gpuDetectionFailed
+}
+
 // Start starts the job polling loop.
 func (r *Runner) Start(pollInterval time.Duration) {
 	log.Printf("Starting job polling loop (interval: %v)", pollInterval)
@@ -182,6 +265,24 @@ func (r *Runner) Cleanup() {
 	}
 }
+
+func (r *Runner) withJobWorkspace(jobID int64, fn func(workDir string) error) error {
+	workDir, err := r.workspace.CreateJobDir(jobID)
+	if err != nil {
+		return fmt.Errorf("failed to create job workspace: %w", err)
+	}
+	defer func() {
+		if cleanupErr := r.workspace.CleanupJobDir(jobID); cleanupErr != nil {
+			log.Printf("Warning: failed to cleanup job workspace for job %d: %v", jobID, cleanupErr)
+		}
+		if cleanupErr := r.workspace.CleanupVideoDir(jobID); cleanupErr != nil {
+			log.Printf("Warning: failed to cleanup encode workspace for job %d: %v", jobID, cleanupErr)
+		}
+	}()
+	return fn(workDir)
+}
+
 // executeJob handles a job using per-job WebSocket connection.
 func (r *Runner) executeJob(job *api.NextJobResponse) (err error) {
 	// Recover from panics to prevent runner process crashes during task execution
@@ -192,72 +293,89 @@ func (r *Runner) executeJob(job *api.NextJobResponse) (err error) {
 		}
 	}()

-	// Connect to job WebSocket (no runnerID needed - authentication handles it)
-	jobConn := api.NewJobConnection()
-	if err := jobConn.Connect(r.manager.GetBaseURL(), job.JobPath, job.JobToken); err != nil {
-		return fmt.Errorf("failed to connect job WebSocket: %w", err)
-	}
-	defer jobConn.Close()
-
-	log.Printf("Job WebSocket authenticated for task %d", job.Task.TaskID)
-
-	// Create task context
-	workDir := r.workspace.JobDir(job.Task.JobID)
-	ctx := tasks.NewContext(
-		job.Task.TaskID,
-		job.Task.JobID,
-		job.Task.JobName,
-		job.Task.Frame,
-		job.Task.TaskType,
-		workDir,
-		job.JobToken,
-		job.Task.Metadata,
-		r.manager,
-		jobConn,
-		r.workspace,
-		r.blender,
-		r.encoder,
-		r.processes,
-	)
-
-	ctx.Info(fmt.Sprintf("Task assignment received (job: %d, type: %s)",
-		job.Task.JobID, job.Task.TaskType))
-
-	// Get processor for task type
-	processor, ok := r.processors[job.Task.TaskType]
-	if !ok {
-		return fmt.Errorf("unknown task type: %s", job.Task.TaskType)
-	}
-
-	// Process the task
-	var processErr error
-	switch job.Task.TaskType {
-	case "render": // this task has a upload outputs step because the frames are not uploaded by the render task directly we have to do it manually here TODO: maybe we should make it work like the encode task
-		// Download context
-		contextPath := job.JobPath + "/context.tar"
-		if err := r.downloadContext(job.Task.JobID, contextPath, job.JobToken); err != nil {
-			jobConn.Log(job.Task.TaskID, types.LogLevelError, fmt.Sprintf("Failed to download context: %v", err))
-			jobConn.Complete(job.Task.TaskID, false, fmt.Errorf("failed to download context: %v", err))
-			return fmt.Errorf("failed to download context: %w", err)
-		}
-		processErr = processor.Process(ctx)
-		if processErr == nil {
-			processErr = r.uploadOutputs(ctx, job)
-		}
-	case "encode": // this task doesn't have a upload outputs step because the video is already uploaded by the encode task
-		processErr = processor.Process(ctx)
-	default:
-		return fmt.Errorf("unknown task type: %s", job.Task.TaskType)
-	}
-
-	if processErr != nil {
-		ctx.Error(fmt.Sprintf("Task failed: %v", processErr))
-		ctx.Complete(false, processErr)
-		return processErr
-	}
-
-	ctx.Complete(true, nil)
-	return nil
+	return r.withJobWorkspace(job.Task.JobID, func(workDir string) error {
+		// Connect to job WebSocket (no runnerID needed - authentication handles it)
+		jobConn := api.NewJobConnection()
+		if err := jobConn.Connect(r.manager.GetBaseURL(), job.JobPath, job.JobToken); err != nil {
+			return fmt.Errorf("failed to connect job WebSocket: %w", err)
+		}
+		defer jobConn.Close()
+
+		log.Printf("Job WebSocket authenticated for task %d", job.Task.TaskID)
+
+		// Create task context (frame range: Frame = start, FrameEnd = end; 0 or missing = single frame)
+		frameEnd := job.Task.FrameEnd
+		if frameEnd < job.Task.Frame {
+			frameEnd = job.Task.Frame
+		}
+		ctx := tasks.NewContext(
+			job.Task.TaskID,
+			job.Task.JobID,
+			job.Task.JobName,
+			job.Task.Frame,
+			frameEnd,
+			job.Task.TaskType,
+			workDir,
+			job.JobToken,
+			job.Task.Metadata,
+			r.manager,
+			jobConn,
+			r.workspace,
+			r.blender,
+			r.encoder,
+			r.processes,
+			r.IsGPULockedOut(),
+			r.HasAMD(),
+			r.HasNVIDIA(),
+			r.HasIntel(),
+			r.GPUDetectionFailed(),
+			r.forceCPURendering,
+			func() { r.SetGPULockedOut(true) },
+		)
+
+		ctx.Info(fmt.Sprintf("Task assignment received (job: %d, type: %s)",
+			job.Task.JobID, job.Task.TaskType))
+
+		// Get processor for task type
+		processor, ok := r.processors[job.Task.TaskType]
+		if !ok {
+			return fmt.Errorf("unknown task type: %s", job.Task.TaskType)
+		}
+
+		// Process the task
+		var processErr error
+		switch job.Task.TaskType {
+		case "render": // this task has a upload outputs step because the frames are not uploaded by the render task directly we have to do it manually here TODO: maybe we should make it work like the encode task
+			// Download context
+			contextPath := job.JobPath + "/context.tar"
+			if err := r.downloadContext(job.Task.JobID, contextPath, job.JobToken); err != nil {
+				jobConn.Log(job.Task.TaskID, types.LogLevelError, fmt.Sprintf("Failed to download context: %v", err))
+				jobConn.Complete(job.Task.TaskID, false, fmt.Errorf("failed to download context: %v", err))
+				return fmt.Errorf("failed to download context: %w", err)
+			}
+			processErr = processor.Process(ctx)
+			if processErr == nil {
+				processErr = r.uploadOutputs(ctx, job)
+			}
+		case "encode": // this task doesn't have a upload outputs step because the video is already uploaded by the encode task
+			processErr = processor.Process(ctx)
+		default:
+			return fmt.Errorf("unknown task type: %s", job.Task.TaskType)
+		}
+
+		if processErr != nil {
+			if errors.Is(processErr, tasks.ErrJobCancelled) {
+				ctx.Warn("Stopping task early because the job was cancelled")
+				return nil
+			}
+			ctx.Error(fmt.Sprintf("Task failed: %v", processErr))
+			ctx.Complete(false, processErr)
+			return processErr
+		}
+
+		ctx.Complete(true, nil)
+		return nil
+	})
 }
 func (r *Runner) downloadContext(jobID int64, contextPath, jobToken string) error {
@@ -289,6 +407,10 @@ func (r *Runner) uploadOutputs(ctx *tasks.Context, job *api.NextJobResponse) error {
 				log.Printf("Failed to upload %s: %v", filePath, err)
 			} else {
 				ctx.OutputUploaded(entry.Name())
+				// Delete file after successful upload to prevent duplicate uploads
+				if err := os.Remove(filePath); err != nil {
+					log.Printf("Warning: Failed to delete file %s after upload: %v", filePath, err)
+				}
 			}
 		}
@@ -359,3 +481,21 @@ func (r *Runner) GetFingerprint() string {
 func (r *Runner) GetID() int64 {
 	return r.id
 }
+
+// SetGPULockedOut sets whether GPU use is locked out due to a detected GPU error.
+// When true, the runner will force CPU rendering for all jobs.
+func (r *Runner) SetGPULockedOut(locked bool) {
+	r.gpuLockedOutMu.Lock()
+	defer r.gpuLockedOutMu.Unlock()
+	r.gpuLockedOut = locked
+	if locked {
+		log.Printf("GPU lockout enabled: GPU rendering disabled for subsequent jobs (CPU only)")
+	}
+}
+
+// IsGPULockedOut returns whether GPU use is currently locked out.
+func (r *Runner) IsGPULockedOut() bool {
+	r.gpuLockedOutMu.RLock()
+	defer r.gpuLockedOutMu.RUnlock()
+	return r.gpuLockedOut
+}
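The GPU lockout added above is a plain `sync.RWMutex`-guarded boolean: the writer (triggered by a GPU error in the logs) takes the exclusive lock, while per-job reads take the shared lock. A minimal standalone sketch of that pattern (`gpuState` is a hypothetical stand-in type, not the runner's actual struct):

```go
package main

import (
	"fmt"
	"sync"
)

// gpuState mirrors the lockout flag above: SetLockedOut takes the exclusive
// lock for writes, IsLockedOut takes the shared lock so many concurrent jobs
// can read the flag cheaply.
type gpuState struct {
	mu     sync.RWMutex
	locked bool
}

func (g *gpuState) SetLockedOut(v bool) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.locked = v
}

func (g *gpuState) IsLockedOut() bool {
	g.mu.RLock()
	defer g.mu.RUnlock()
	return g.locked
}

func main() {
	var g gpuState
	fmt.Println(g.IsLockedOut()) // false
	g.SetLockedOut(true)         // e.g. after a HIP "Illegal address" in the render log
	fmt.Println(g.IsLockedOut()) // true
}
```

Note the flag is read once per job (it is passed into the task context at creation), so a lockout triggered mid-job only affects subsequent jobs, which matches the "forces CPU rendering for all subsequent jobs" comment in the diff.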

View File

@@ -0,0 +1,40 @@
+package runner
+
+import (
+	"encoding/hex"
+	"testing"
+)
+
+func TestNewRunner_InitializesFields(t *testing.T) {
+	r := New("http://localhost:8080", "runner-a", "host-a", false)
+	if r == nil {
+		t.Fatal("New should return a runner")
+	}
+	if r.name != "runner-a" || r.hostname != "host-a" {
+		t.Fatalf("unexpected runner identity: %q %q", r.name, r.hostname)
+	}
+}
+
+func TestRunner_GPUFlagsSetters(t *testing.T) {
+	r := New("http://localhost:8080", "runner-a", "host-a", false)
+	r.SetGPULockedOut(true)
+	if !r.IsGPULockedOut() {
+		t.Fatal("expected GPU lockout to be true")
+	}
+}
+
+func TestGenerateFingerprint_PopulatesValue(t *testing.T) {
+	r := New("http://localhost:8080", "runner-a", "host-a", false)
+	r.generateFingerprint()
+	fp := r.GetFingerprint()
+	if fp == "" {
+		t.Fatal("fingerprint should not be empty")
+	}
+	if len(fp) != 64 {
+		t.Fatalf("fingerprint should be sha256 hex, got %q", fp)
+	}
+	if _, err := hex.DecodeString(fp); err != nil {
+		t.Fatalf("fingerprint should be valid hex: %v", err)
+	}
+}

View File

@@ -12,6 +12,7 @@ import (
 	"regexp"
 	"sort"
 	"strings"
+	"sync"

 	"jiggablend/internal/runner/encoding"
 )
@@ -26,6 +27,10 @@ func NewEncodeProcessor() *EncodeProcessor {
 // Process executes an encode task.
 func (p *EncodeProcessor) Process(ctx *Context) error {
+	if err := ctx.CheckCancelled(); err != nil {
+		return err
+	}
 	ctx.Info(fmt.Sprintf("Starting encode task: job %d", ctx.JobID))
 	log.Printf("Processing encode task %d for job %d", ctx.TaskID, ctx.JobID)
@@ -64,23 +69,18 @@ func (p *EncodeProcessor) Process(ctx *Context) error {
 		ctx.Info(fmt.Sprintf("File: %s (type: %s, size: %d)", file.FileName, file.FileType, file.FileSize))
 	}
-	// Determine source format based on output format
-	sourceFormat := "exr"
+	// Encode from EXR frames only
 	fileExt := ".exr"
-	// Find and deduplicate frame files (EXR or PNG)
 	frameFileSet := make(map[string]bool)
 	var frameFilesList []string
 	for _, file := range files {
 		if file.FileType == "output" && strings.HasSuffix(strings.ToLower(file.FileName), fileExt) {
-			// Deduplicate by filename
 			if !frameFileSet[file.FileName] {
 				frameFileSet[file.FileName] = true
 				frameFilesList = append(frameFilesList, file.FileName)
 			}
 		}
 	}
 	if len(frameFilesList) == 0 {
 		// Log why no files matched (deduplicate for error reporting)
 		outputFileSet := make(map[string]bool)
@@ -103,37 +103,61 @@ func (p *EncodeProcessor) Process(ctx *Context) error {
} }
} }
} }
ctx.Error(fmt.Sprintf("no %s frame files found for encode: found %d total files, %d unique output files, %d unique %s files (with other types)", strings.ToUpper(fileExt[1:]), len(files), len(outputFiles), len(frameFilesOtherType), strings.ToUpper(fileExt[1:]))) ctx.Error(fmt.Sprintf("no EXR frame files found for encode: found %d total files, %d unique output files, %d unique EXR files (with other types)", len(files), len(outputFiles), len(frameFilesOtherType)))
if len(outputFiles) > 0 { if len(outputFiles) > 0 {
ctx.Error(fmt.Sprintf("Output files found: %v", outputFiles)) ctx.Error(fmt.Sprintf("Output files found: %v", outputFiles))
} }
if len(frameFilesOtherType) > 0 { if len(frameFilesOtherType) > 0 {
ctx.Error(fmt.Sprintf("%s files with wrong type: %v", strings.ToUpper(fileExt[1:]), frameFilesOtherType)) ctx.Error(fmt.Sprintf("EXR files with wrong type: %v", frameFilesOtherType))
} }
err := fmt.Errorf("no %s frame files found for encode", strings.ToUpper(fileExt[1:])) err := fmt.Errorf("no EXR frame files found for encode")
return err return err
} }
ctx.Info(fmt.Sprintf("Found %d %s frames for encode", len(frameFilesList), strings.ToUpper(fileExt[1:]))) ctx.Info(fmt.Sprintf("Found %d EXR frames for encode", len(frameFilesList)))
// Download frames // Download frames with bounded parallelism (8 concurrent downloads)
ctx.Info(fmt.Sprintf("Downloading %d %s frames for encode...", len(frameFilesList), strings.ToUpper(fileExt[1:]))) const downloadWorkers = 8
ctx.Info(fmt.Sprintf("Downloading %d EXR frames for encode...", len(frameFilesList)))
type result struct {
path string
err error
}
results := make([]result, len(frameFilesList))
var wg sync.WaitGroup
sem := make(chan struct{}, downloadWorkers)
for i, fileName := range frameFilesList {
wg.Add(1)
go func(i int, fileName string) {
defer wg.Done()
sem <- struct{}{}
defer func() { <-sem }()
framePath := filepath.Join(workDir, fileName)
err := ctx.Manager.DownloadFrame(ctx.JobID, fileName, framePath)
if err != nil {
ctx.Error(fmt.Sprintf("Failed to download EXR frame %s: %v", fileName, err))
log.Printf("Failed to download EXR frame for encode %s: %v", fileName, err)
results[i] = result{"", err}
return
}
results[i] = result{framePath, nil}
}(i, fileName)
}
wg.Wait()
var frameFiles []string var frameFiles []string
for i, fileName := range frameFilesList { for _, r := range results {
ctx.Info(fmt.Sprintf("Downloading frame %d/%d: %s", i+1, len(frameFilesList), fileName)) if r.err == nil && r.path != "" {
framePath := filepath.Join(workDir, fileName) frameFiles = append(frameFiles, r.path)
if err := ctx.Manager.DownloadFrame(ctx.JobID, fileName, framePath); err != nil {
ctx.Error(fmt.Sprintf("Failed to download %s frame %s: %v", strings.ToUpper(fileExt[1:]), fileName, err))
log.Printf("Failed to download %s frame for encode %s: %v", strings.ToUpper(fileExt[1:]), fileName, err)
continue
} }
ctx.Info(fmt.Sprintf("Successfully downloaded frame %d/%d: %s", i+1, len(frameFilesList), fileName)) }
frameFiles = append(frameFiles, framePath) if err := ctx.CheckCancelled(); err != nil {
return err
} }
if len(frameFiles) == 0 { if len(frameFiles) == 0 {
err := fmt.Errorf("failed to download any %s frames for encode", strings.ToUpper(fileExt[1:])) err := fmt.Errorf("failed to download any EXR frames for encode")
ctx.Error(err.Error()) ctx.Error(err.Error())
return err return err
} }
@@ -141,11 +165,9 @@ func (p *EncodeProcessor) Process(ctx *Context) error {
sort.Strings(frameFiles) sort.Strings(frameFiles)
ctx.Info(fmt.Sprintf("Downloaded %d frames", len(frameFiles))) ctx.Info(fmt.Sprintf("Downloaded %d frames", len(frameFiles)))
// Check if EXR files have alpha channel and HDR content (only for EXR source format) // Check if EXR files have alpha channel (for encode decision)
hasAlpha := false hasAlpha := false
hasHDR := false {
if sourceFormat == "exr" {
// Check first frame for alpha channel and HDR using ffprobe
firstFrame := frameFiles[0] firstFrame := frameFiles[0]
hasAlpha = detectAlphaChannel(ctx, firstFrame) hasAlpha = detectAlphaChannel(ctx, firstFrame)
if hasAlpha { if hasAlpha {
@@ -153,45 +175,28 @@ func (p *EncodeProcessor) Process(ctx *Context) error {
} else { } else {
ctx.Info("No alpha channel detected in EXR files") ctx.Info("No alpha channel detected in EXR files")
} }
hasHDR = detectHDR(ctx, firstFrame)
if hasHDR {
ctx.Info("Detected HDR content in EXR files")
} else {
ctx.Info("No HDR content detected in EXR files (SDR range)")
}
} }
// Generate video // Generate video
// Use alpha if: // Use alpha when source EXR has alpha and codec supports it (AV1 or VP9). H.264 does not support alpha.
// 1. User explicitly enabled it OR source has alpha channel AND useAlpha := hasAlpha && (outputFormat == "EXR_AV1_MP4" || outputFormat == "EXR_VP9_WEBM")
// 2. Codec supports alpha (AV1 or VP9) if hasAlpha && outputFormat == "EXR_264_MP4" {
preserveAlpha := ctx.ShouldPreserveAlpha() ctx.Warn("Alpha channel detected in EXR but H.264 does not support alpha. Use EXR_AV1_MP4 or EXR_VP9_WEBM to preserve alpha in video.")
useAlpha := (preserveAlpha || hasAlpha) && (outputFormat == "EXR_AV1_MP4" || outputFormat == "EXR_VP9_WEBM")
if (preserveAlpha || hasAlpha) && outputFormat == "EXR_264_MP4" {
ctx.Warn("Alpha channel requested/detected but H.264 does not support alpha. Consider using EXR_AV1_MP4 or EXR_VP9_WEBM to preserve alpha.")
}
if preserveAlpha && !hasAlpha {
ctx.Warn("Alpha preservation requested but no alpha channel detected in EXR files.")
} }
if useAlpha { if useAlpha {
if preserveAlpha && hasAlpha { ctx.Info("Alpha channel detected - encoding with alpha (AV1/VP9)")
ctx.Info("Alpha preservation enabled: Using alpha channel encoding")
} else if hasAlpha {
ctx.Info("Alpha channel detected - automatically enabling alpha encoding")
}
} }
var outputExt string var outputExt string
switch outputFormat { switch outputFormat {
case "EXR_VP9_WEBM": case "EXR_VP9_WEBM":
outputExt = "webm" outputExt = "webm"
ctx.Info("Encoding WebM video with VP9 codec (with alpha channel and HDR support)...") ctx.Info("Encoding WebM video with VP9 codec (alpha, HDR)...")
case "EXR_AV1_MP4": case "EXR_AV1_MP4":
outputExt = "mp4" outputExt = "mp4"
ctx.Info("Encoding MP4 video with AV1 codec (with alpha channel)...") ctx.Info("Encoding MP4 video with AV1 codec (alpha, HDR)...")
default: default:
outputExt = "mp4" outputExt = "mp4"
ctx.Info("Encoding MP4 video with H.264 codec...") ctx.Info("Encoding MP4 video with H.264 codec (HDR, HLG)...")
} }
outputVideo := filepath.Join(workDir, fmt.Sprintf("output_%d.%s", ctx.JobID, outputExt)) outputVideo := filepath.Join(workDir, fmt.Sprintf("output_%d.%s", ctx.JobID, outputExt))
@@ -231,11 +236,6 @@ func (p *EncodeProcessor) Process(ctx *Context) error {
// Pass 1 // Pass 1
ctx.Info("Pass 1/2: Analyzing content for optimal encode...") ctx.Info("Pass 1/2: Analyzing content for optimal encode...")
softEncoder := encoder.(*encoding.SoftwareEncoder) softEncoder := encoder.(*encoding.SoftwareEncoder)
// Use HDR if: user explicitly enabled it OR HDR content was detected
preserveHDR := (ctx.ShouldPreserveHDR() || hasHDR) && sourceFormat == "exr"
if hasHDR && !ctx.ShouldPreserveHDR() {
ctx.Info("HDR content detected - automatically enabling HDR preservation")
}
pass1Cmd := softEncoder.BuildPass1Command(&encoding.EncodeConfig{ pass1Cmd := softEncoder.BuildPass1Command(&encoding.EncodeConfig{
InputPattern: patternPath, InputPattern: patternPath,
OutputPath: outputVideo, OutputPath: outputVideo,
@@ -244,8 +244,6 @@ func (p *EncodeProcessor) Process(ctx *Context) error {
WorkDir: workDir, WorkDir: workDir,
UseAlpha: useAlpha, UseAlpha: useAlpha,
TwoPass: true, TwoPass: true,
SourceFormat: sourceFormat,
PreserveHDR: preserveHDR,
}) })
if err := pass1Cmd.Run(); err != nil { if err := pass1Cmd.Run(); err != nil {
ctx.Warn(fmt.Sprintf("Pass 1 completed (warnings expected): %v", err)) ctx.Warn(fmt.Sprintf("Pass 1 completed (warnings expected): %v", err))
@@ -254,15 +252,6 @@ func (p *EncodeProcessor) Process(ctx *Context) error {
// Pass 2 // Pass 2
ctx.Info("Pass 2/2: Encoding with optimal quality...") ctx.Info("Pass 2/2: Encoding with optimal quality...")
preserveHDR = (ctx.ShouldPreserveHDR() || hasHDR) && sourceFormat == "exr"
if preserveHDR {
if hasHDR && !ctx.ShouldPreserveHDR() {
ctx.Info("HDR preservation enabled (auto-detected): Using HLG transfer with bt709 primaries")
} else {
ctx.Info("HDR preservation enabled: Using HLG transfer with bt709 primaries")
}
}
config := &encoding.EncodeConfig{ config := &encoding.EncodeConfig{
InputPattern: patternPath, InputPattern: patternPath,
OutputPath: outputVideo, OutputPath: outputVideo,
@@ -271,8 +260,6 @@ func (p *EncodeProcessor) Process(ctx *Context) error {
WorkDir: workDir, WorkDir: workDir,
UseAlpha: useAlpha, UseAlpha: useAlpha,
TwoPass: true, // Software encoding always uses 2-pass for quality TwoPass: true, // Software encoding always uses 2-pass for quality
SourceFormat: sourceFormat,
PreserveHDR: preserveHDR,
} }
cmd := encoder.BuildCommand(config) cmd := encoder.BuildCommand(config)
@@ -294,6 +281,8 @@ func (p *EncodeProcessor) Process(ctx *Context) error {
if err := cmd.Start(); err != nil { if err := cmd.Start(); err != nil {
return fmt.Errorf("failed to start encode command: %w", err) return fmt.Errorf("failed to start encode command: %w", err)
} }
stopMonitor := ctx.StartCancellationMonitor(cmd, "encode")
defer stopMonitor()
ctx.Processes.Track(ctx.TaskID, cmd) ctx.Processes.Track(ctx.TaskID, cmd)
defer ctx.Processes.Untrack(ctx.TaskID) defer ctx.Processes.Untrack(ctx.TaskID)
@@ -309,6 +298,9 @@ func (p *EncodeProcessor) Process(ctx *Context) error {
ctx.Info(line) ctx.Info(line)
} }
} }
if err := scanner.Err(); err != nil {
log.Printf("Error reading encode stdout: %v", err)
}
}() }()
// Stream stderr // Stream stderr
@@ -322,6 +314,9 @@ func (p *EncodeProcessor) Process(ctx *Context) error {
ctx.Warn(line) ctx.Warn(line)
} }
} }
if err := scanner.Err(); err != nil {
log.Printf("Error reading encode stderr: %v", err)
}
}() }()
err = cmd.Wait() err = cmd.Wait()
@@ -329,6 +324,9 @@ func (p *EncodeProcessor) Process(ctx *Context) error {
<-stderrDone <-stderrDone
if err != nil { if err != nil {
if cancelled, checkErr := ctx.IsJobCancelled(); checkErr == nil && cancelled {
return ErrJobCancelled
}
var errMsg string var errMsg string
if exitErr, ok := err.(*exec.ExitError); ok { if exitErr, ok := err.(*exec.ExitError); ok {
if exitErr.ExitCode() == 137 { if exitErr.ExitCode() == 137 {
@@ -373,6 +371,12 @@ func (p *EncodeProcessor) Process(ctx *Context) error {
ctx.Info(fmt.Sprintf("Successfully uploaded %s: %s", strings.ToUpper(outputExt), filepath.Base(outputVideo))) ctx.Info(fmt.Sprintf("Successfully uploaded %s: %s", strings.ToUpper(outputExt), filepath.Base(outputVideo)))
// Delete file after successful upload to prevent duplicate uploads
if err := os.Remove(outputVideo); err != nil {
log.Printf("Warning: Failed to delete video file %s after upload: %v", outputVideo, err)
ctx.Warn(fmt.Sprintf("Warning: Failed to delete video file after upload: %v", err))
}
log.Printf("Successfully generated and uploaded %s for job %d: %s", strings.ToUpper(outputExt), ctx.JobID, filepath.Base(outputVideo)) log.Printf("Successfully generated and uploaded %s for job %d: %s", strings.ToUpper(outputExt), ctx.JobID, filepath.Base(outputVideo))
return nil return nil
} }
@@ -381,7 +385,7 @@ func (p *EncodeProcessor) Process(ctx *Context) error {
func detectAlphaChannel(ctx *Context, filePath string) bool { func detectAlphaChannel(ctx *Context, filePath string) bool {
// Use ffprobe to check pixel format and stream properties // Use ffprobe to check pixel format and stream properties
// EXR files with alpha will have formats like gbrapf32le (RGBA) vs gbrpf32le (RGB) // EXR files with alpha will have formats like gbrapf32le (RGBA) vs gbrpf32le (RGB)
cmd := exec.Command("ffprobe", cmd := execCommand("ffprobe",
"-v", "error", "-v", "error",
"-select_streams", "v:0", "-select_streams", "v:0",
"-show_entries", "stream=pix_fmt:stream=codec_name", "-show_entries", "stream=pix_fmt:stream=codec_name",
@@ -414,7 +418,7 @@ func detectAlphaChannel(ctx *Context, filePath string) bool {
// detectHDR checks if an EXR file contains HDR content using ffprobe // detectHDR checks if an EXR file contains HDR content using ffprobe
func detectHDR(ctx *Context, filePath string) bool { func detectHDR(ctx *Context, filePath string) bool {
// First, check if the pixel format supports HDR (32-bit float) // First, check if the pixel format supports HDR (32-bit float)
cmd := exec.Command("ffprobe", cmd := execCommand("ffprobe",
"-v", "error", "-v", "error",
"-select_streams", "v:0", "-select_streams", "v:0",
"-show_entries", "stream=pix_fmt", "-show_entries", "stream=pix_fmt",
@@ -442,7 +446,7 @@ func detectHDR(ctx *Context, filePath string) bool {
// For 32-bit float EXR, sample pixels to check if values exceed SDR range (> 1.0) // For 32-bit float EXR, sample pixels to check if values exceed SDR range (> 1.0)
// Use ffmpeg to extract pixel statistics - check max pixel values // Use ffmpeg to extract pixel statistics - check max pixel values
// This is more efficient than sampling individual pixels // This is more efficient than sampling individual pixels
cmd = exec.Command("ffmpeg", cmd = execCommand("ffmpeg",
"-v", "error", "-v", "error",
"-i", filePath, "-i", filePath,
"-vf", "signalstats", "-vf", "signalstats",
@@ -485,7 +489,7 @@ func detectHDRBySampling(ctx *Context, filePath string) bool {
} }
for _, region := range sampleRegions { for _, region := range sampleRegions {
cmd := exec.Command("ffmpeg", cmd := execCommand("ffmpeg",
"-v", "error", "-v", "error",
"-i", filePath, "-i", filePath,
"-vf", fmt.Sprintf("%s,scale=1:1", region), "-vf", fmt.Sprintf("%s,scale=1:1", region),


@@ -0,0 +1,120 @@
package tasks
import (
"encoding/binary"
"math"
"os"
"os/exec"
"strings"
"testing"
)
func TestFloat32FromBytes(t *testing.T) {
got := float32FromBytes([]byte{0x00, 0x00, 0x80, 0x3f}) // 1.0 little-endian
if got != 1.0 {
t.Fatalf("float32FromBytes() = %v, want 1.0", got)
}
}
func TestMax(t *testing.T) {
if got := max(1, 2); got != 2 {
t.Fatalf("max() = %v, want 2", got)
}
}
func TestExtractFrameNumber(t *testing.T) {
if got := extractFrameNumber("render_0042.png"); got != 42 {
t.Fatalf("extractFrameNumber() = %d, want 42", got)
}
}
func TestCheckFFmpegSizeError(t *testing.T) {
err := checkFFmpegSizeError("hardware does not support encoding at size ... constraints: width 128-4096 height 128-4096")
if err == nil {
t.Fatal("expected a size error")
}
}
func TestDetectAlphaChannel_UsesExecSeam(t *testing.T) {
orig := execCommand
execCommand = fakeExecCommand
defer func() { execCommand = orig }()
if !detectAlphaChannel(&Context{}, "/tmp/frame.exr") {
t.Fatal("expected alpha channel detection via mocked ffprobe output")
}
}
func TestDetectHDR_UsesExecSeam(t *testing.T) {
orig := execCommand
execCommand = fakeExecCommand
defer func() { execCommand = orig }()
if !detectHDR(&Context{}, "/tmp/frame.exr") {
t.Fatal("expected HDR detection via mocked ffmpeg sampling output")
}
}
func fakeExecCommand(command string, args ...string) *exec.Cmd {
cs := []string{"-test.run=TestExecHelperProcess", "--", command}
cs = append(cs, args...)
cmd := exec.Command(os.Args[0], cs...)
cmd.Env = append(os.Environ(), "GO_WANT_HELPER_PROCESS=1")
return cmd
}
func TestExecHelperProcess(t *testing.T) {
if os.Getenv("GO_WANT_HELPER_PROCESS") != "1" {
return
}
idx := 0
for i, a := range os.Args {
if a == "--" {
idx = i
break
}
}
if idx == 0 || idx+1 >= len(os.Args) {
os.Exit(2)
}
cmdName := os.Args[idx+1]
cmdArgs := os.Args[idx+2:]
switch cmdName {
case "ffprobe":
if containsArg(cmdArgs, "stream=pix_fmt:stream=codec_name") {
_, _ = os.Stdout.WriteString("pix_fmt=gbrapf32le\ncodec_name=exr\n")
os.Exit(0)
}
_, _ = os.Stdout.WriteString("gbrpf32le\n")
os.Exit(0)
case "ffmpeg":
if containsArg(cmdArgs, "signalstats") {
_, _ = os.Stderr.WriteString("signalstats failed")
os.Exit(1)
}
if containsArg(cmdArgs, "rawvideo") {
buf := make([]byte, 12)
binary.LittleEndian.PutUint32(buf[0:4], math.Float32bits(1.5))
binary.LittleEndian.PutUint32(buf[4:8], math.Float32bits(0.2))
binary.LittleEndian.PutUint32(buf[8:12], math.Float32bits(0.1))
_, _ = os.Stdout.Write(buf)
os.Exit(0)
}
os.Exit(0)
default:
os.Exit(0)
}
}
func containsArg(args []string, target string) bool {
for _, a := range args {
if strings.Contains(a, target) {
return true
}
}
return false
}


@@ -0,0 +1,7 @@
package tasks
import "os/exec"
// execCommand is a seam for process execution in tests.
var execCommand = exec.Command


@@ -2,12 +2,17 @@
 package tasks
 import (
+	"errors"
+	"fmt"
 	"jiggablend/internal/runner/api"
 	"jiggablend/internal/runner/blender"
 	"jiggablend/internal/runner/encoding"
 	"jiggablend/internal/runner/workspace"
 	"jiggablend/pkg/executils"
 	"jiggablend/pkg/types"
+	"os/exec"
+	"sync"
+	"time"
 )
 // Processor handles a specific task type.
@@ -20,7 +25,8 @@ type Context struct {
 	TaskID   int64
 	JobID    int64
 	JobName  string
-	Frame    int
+	Frame    int // frame start (inclusive); kept for backward compat
+	FrameEnd int // frame end (inclusive); same as Frame for single-frame
 	TaskType string
 	WorkDir  string
 	JobToken string
@@ -32,13 +38,32 @@ type Context struct {
 	Blender   *blender.Manager
 	Encoder   *encoding.Selector
 	Processes *executils.ProcessTracker
+	// GPULockedOut is set when the runner has detected a GPU error (e.g. HIP) and disables GPU for all jobs.
+	GPULockedOut bool
+	// HasAMD is true when the runner detected AMD devices at startup.
+	HasAMD bool
+	// HasNVIDIA is true when the runner detected NVIDIA GPUs at startup.
+	HasNVIDIA bool
+	// HasIntel is true when the runner detected Intel GPUs (e.g. Arc) at startup.
+	HasIntel bool
+	// GPUDetectionFailed is true when startup GPU backend detection could not run; we force CPU for all versions (backend availability unknown).
+	GPUDetectionFailed bool
+	// OnGPUError is called when a GPU error line is seen in render logs; typically sets runner GPU lockout.
+	OnGPUError func()
+	// ForceCPURendering is a runner-level override that forces CPU rendering for all jobs.
+	ForceCPURendering bool
 }
-// NewContext creates a new task context.
+// ErrJobCancelled indicates the manager-side job was cancelled during execution.
+var ErrJobCancelled = errors.New("job cancelled")
+// NewContext creates a new task context. frameEnd should be >= frame; if 0 or less than frame, it is treated as single-frame (frameEnd = frame).
+// gpuLockedOut is the runner's current GPU lockout state; gpuDetectionFailed means detection failed at startup (force CPU for all versions); onGPUError is called when a GPU error is detected in logs (may be nil).
 func NewContext(
 	taskID, jobID int64,
 	jobName string,
-	frame int,
+	frameStart, frameEnd int,
 	taskType string,
 	workDir string,
 	jobToken string,
@@ -49,22 +74,40 @@ func NewContext(
 	blenderMgr *blender.Manager,
 	encoder *encoding.Selector,
 	processes *executils.ProcessTracker,
+	gpuLockedOut bool,
+	hasAMD bool,
+	hasNVIDIA bool,
+	hasIntel bool,
+	gpuDetectionFailed bool,
+	forceCPURendering bool,
+	onGPUError func(),
 ) *Context {
+	if frameEnd < frameStart {
+		frameEnd = frameStart
+	}
 	return &Context{
-		TaskID:    taskID,
-		JobID:     jobID,
-		JobName:   jobName,
-		Frame:     frame,
-		TaskType:  taskType,
-		WorkDir:   workDir,
-		JobToken:  jobToken,
-		Metadata:  metadata,
-		Manager:   manager,
-		JobConn:   jobConn,
-		Workspace: ws,
-		Blender:   blenderMgr,
-		Encoder:   encoder,
-		Processes: processes,
+		TaskID:             taskID,
+		JobID:              jobID,
+		JobName:            jobName,
+		Frame:              frameStart,
+		FrameEnd:           frameEnd,
+		TaskType:           taskType,
+		WorkDir:            workDir,
+		JobToken:           jobToken,
+		Metadata:           metadata,
+		Manager:            manager,
+		JobConn:            jobConn,
+		Workspace:          ws,
+		Blender:            blenderMgr,
+		Encoder:            encoder,
+		Processes:          processes,
+		GPULockedOut:       gpuLockedOut,
+		HasAMD:             hasAMD,
+		HasNVIDIA:          hasNVIDIA,
+		HasIntel:           hasIntel,
+		GPUDetectionFailed: gpuDetectionFailed,
+		ForceCPURendering:  forceCPURendering,
+		OnGPUError:         onGPUError,
 	}
 }
@@ -145,12 +188,88 @@ func (c *Context) ShouldEnableExecution() bool {
 	return c.Metadata != nil && c.Metadata.EnableExecution != nil && *c.Metadata.EnableExecution
 }
-// ShouldPreserveHDR returns whether to preserve HDR range for EXR encoding.
-func (c *Context) ShouldPreserveHDR() bool {
-	return c.Metadata != nil && c.Metadata.PreserveHDR != nil && *c.Metadata.PreserveHDR
-}
+// ShouldForceCPU returns true if GPU should be disabled and CPU rendering forced
+// (runner GPU lockout, GPU detection failed at startup, or metadata force_cpu).
+func (c *Context) ShouldForceCPU() bool {
+	if c.ForceCPURendering {
+		return true
+	}
+	if c.GPULockedOut {
+		return true
+	}
+	// Detection failed at startup: backend availability unknown, so force CPU for all versions.
+	if c.GPUDetectionFailed {
+		return true
+	}
+	if c.Metadata != nil && c.Metadata.RenderSettings.EngineSettings != nil {
+		if v, ok := c.Metadata.RenderSettings.EngineSettings["force_cpu"]; ok {
+			if b, ok := v.(bool); ok && b {
+				return true
+			}
+		}
+	}
+	return false
+}
-// ShouldPreserveAlpha returns whether to preserve alpha channel for EXR encoding.
-func (c *Context) ShouldPreserveAlpha() bool {
-	return c.Metadata != nil && c.Metadata.PreserveAlpha != nil && *c.Metadata.PreserveAlpha
-}
+// IsJobCancelled checks whether the manager marked this job as cancelled.
+func (c *Context) IsJobCancelled() (bool, error) {
+	if c.Manager == nil {
+		return false, nil
+	}
+	status, err := c.Manager.GetJobStatus(c.JobID)
+	if err != nil {
+		return false, err
+	}
+	return status == types.JobStatusCancelled, nil
+}
+// CheckCancelled returns ErrJobCancelled if the job was cancelled.
+func (c *Context) CheckCancelled() error {
+	cancelled, err := c.IsJobCancelled()
+	if err != nil {
+		return fmt.Errorf("failed to check job status: %w", err)
+	}
+	if cancelled {
+		return ErrJobCancelled
+	}
+	return nil
+}
+// StartCancellationMonitor polls manager status and kills cmd if job is cancelled.
+// Caller must invoke returned stop function when cmd exits.
+func (c *Context) StartCancellationMonitor(cmd *exec.Cmd, taskLabel string) func() {
+	stop := make(chan struct{})
+	var once sync.Once
+	go func() {
+		ticker := time.NewTicker(2 * time.Second)
+		defer ticker.Stop()
+		for {
+			select {
+			case <-stop:
+				return
+			case <-ticker.C:
+				cancelled, err := c.IsJobCancelled()
+				if err != nil {
+					c.Warn(fmt.Sprintf("Could not check cancellation for %s task: %v", taskLabel, err))
+					continue
+				}
+				if !cancelled {
+					continue
+				}
+				c.Warn(fmt.Sprintf("Job %d was cancelled, stopping %s task early", c.JobID, taskLabel))
+				if cmd != nil && cmd.Process != nil {
+					_ = cmd.Process.Kill()
+				}
+				return
+			}
+		}
+	}()
+	return func() {
+		once.Do(func() {
+			close(stop)
+		})
+	}
+}


@@ -0,0 +1,42 @@
package tasks
import (
"errors"
"testing"
"jiggablend/pkg/types"
)
func TestNewContext_NormalizesFrameEnd(t *testing.T) {
ctx := NewContext(1, 2, "job", 10, 1, "render", "/tmp", "tok", nil, nil, nil, nil, nil, nil, nil, false, false, false, false, false, false, nil)
if ctx.FrameEnd != 10 {
t.Fatalf("expected FrameEnd to be normalized to Frame, got %d", ctx.FrameEnd)
}
}
func TestContext_GetOutputFormat_Default(t *testing.T) {
ctx := &Context{}
if got := ctx.GetOutputFormat(); got != "PNG" {
t.Fatalf("GetOutputFormat() = %q, want PNG", got)
}
}
func TestContext_ShouldForceCPU(t *testing.T) {
ctx := &Context{ForceCPURendering: true}
if !ctx.ShouldForceCPU() {
t.Fatal("expected force cpu when runner-level flag is set")
}
force := true
ctx = &Context{Metadata: &types.BlendMetadata{RenderSettings: types.RenderSettings{EngineSettings: map[string]interface{}{"force_cpu": force}}}}
if !ctx.ShouldForceCPU() {
t.Fatal("expected force cpu when metadata requests it")
}
}
func TestErrJobCancelled_IsSentinel(t *testing.T) {
if !errors.Is(ErrJobCancelled, ErrJobCancelled) {
t.Fatal("sentinel error should be self-identical")
}
}


@@ -25,11 +25,47 @@ func NewRenderProcessor() *RenderProcessor {
return &RenderProcessor{} return &RenderProcessor{}
} }
// gpuErrorSubstrings are log line substrings that indicate a GPU backend error (matched case-insensitively); any match triggers full GPU lockout.
var gpuErrorSubstrings = []string{
"illegal address in hip", // HIP (AMD) e.g. "Illegal address in HIP" or "Illegal address in hip"
"hiperror", // hipError* codes
"hip error",
"cuda error",
"cuerror",
"optix error",
"oneapi error",
"opencl error",
}
// checkGPUErrorLine checks a log line for GPU error indicators and triggers runner GPU lockout if found.
func (p *RenderProcessor) checkGPUErrorLine(ctx *Context, line string) {
lower := strings.ToLower(line)
for _, sub := range gpuErrorSubstrings {
if strings.Contains(lower, sub) {
if ctx.OnGPUError != nil {
ctx.OnGPUError()
}
ctx.Warn(fmt.Sprintf("GPU error detected in log (%q); GPU disabled for subsequent jobs", sub))
return
}
}
}
// Process executes a render task. // Process executes a render task.
func (p *RenderProcessor) Process(ctx *Context) error { func (p *RenderProcessor) Process(ctx *Context) error {
ctx.Info(fmt.Sprintf("Starting task: job %d, frame %d, format: %s", if err := ctx.CheckCancelled(); err != nil {
ctx.JobID, ctx.Frame, ctx.GetOutputFormat())) return err
log.Printf("Processing task %d: job %d, frame %d", ctx.TaskID, ctx.JobID, ctx.Frame) }
if ctx.FrameEnd > ctx.Frame {
ctx.Info(fmt.Sprintf("Starting task: job %d, frames %d-%d, format: %s",
ctx.JobID, ctx.Frame, ctx.FrameEnd, ctx.GetOutputFormat()))
log.Printf("Processing task %d: job %d, frames %d-%d", ctx.TaskID, ctx.JobID, ctx.Frame, ctx.FrameEnd)
} else {
ctx.Info(fmt.Sprintf("Starting task: job %d, frame %d, format: %s",
ctx.JobID, ctx.Frame, ctx.GetOutputFormat()))
log.Printf("Processing task %d: job %d, frame %d", ctx.TaskID, ctx.JobID, ctx.Frame)
}
// Find .blend file // Find .blend file
blendFile, err := workspace.FindFirstBlendFile(ctx.WorkDir) blendFile, err := workspace.FindFirstBlendFile(ctx.WorkDir)
@@ -52,6 +88,11 @@ func (p *RenderProcessor) Process(ctx *Context) error {
ctx.Info("No Blender version specified, using system blender") ctx.Info("No Blender version specified, using system blender")
} }
blenderBinary, err = blender.ResolveBinaryPath(blenderBinary)
if err != nil {
return fmt.Errorf("failed to resolve blender binary: %w", err)
}
// Create output directory // Create output directory
outputDir := filepath.Join(ctx.WorkDir, "output") outputDir := filepath.Join(ctx.WorkDir, "output")
if err := os.MkdirAll(outputDir, 0755); err != nil { if err := os.MkdirAll(outputDir, 0755); err != nil {
@@ -64,11 +105,17 @@ func (p *RenderProcessor) Process(ctx *Context) error {
return fmt.Errorf("failed to create Blender home directory: %w", err) return fmt.Errorf("failed to create Blender home directory: %w", err)
} }
// Determine render format // We always render EXR (linear) for VFX accuracy; job output_format is the deliverable (EXR sequence or video).
outputFormat := ctx.GetOutputFormat() renderFormat := "EXR"
renderFormat := outputFormat
if outputFormat == "EXR_264_MP4" || outputFormat == "EXR_AV1_MP4" || outputFormat == "EXR_VP9_WEBM" { if ctx.ShouldForceCPU() {
renderFormat = "EXR" // Use EXR for maximum quality if ctx.ForceCPURendering {
ctx.Info("Runner compatibility flag is enabled: forcing CPU rendering for this job")
} else if ctx.GPUDetectionFailed {
ctx.Info("GPU backend detection failed at startup—we could not determine available GPU backends, so rendering will use CPU to avoid compatibility issues")
} else {
ctx.Info("GPU lockout active: using CPU rendering only")
}
} }
// Create render script // Create render script
@@ -77,18 +124,30 @@ func (p *RenderProcessor) Process(ctx *Context) error {
} }
// Render // Render
ctx.Info(fmt.Sprintf("Starting Blender render for frame %d...", ctx.Frame)) if ctx.FrameEnd > ctx.Frame {
ctx.Info(fmt.Sprintf("Starting Blender render for frames %d-%d...", ctx.Frame, ctx.FrameEnd))
} else {
ctx.Info(fmt.Sprintf("Starting Blender render for frame %d...", ctx.Frame))
}
if err := p.runBlender(ctx, blenderBinary, blendFile, outputDir, renderFormat, blenderHome); err != nil { if err := p.runBlender(ctx, blenderBinary, blendFile, outputDir, renderFormat, blenderHome); err != nil {
if errors.Is(err, ErrJobCancelled) {
ctx.Warn("Render stopped because job was cancelled")
return err
}
ctx.Error(fmt.Sprintf("Blender render failed: %v", err)) ctx.Error(fmt.Sprintf("Blender render failed: %v", err))
return err return err
} }
// Verify output // Verify output (range or single frame)
if _, err := p.findOutputFile(ctx, outputDir, renderFormat); err != nil { if err := p.verifyOutputRange(ctx, outputDir, renderFormat); err != nil {
ctx.Error(fmt.Sprintf("Output verification failed: %v", err)) ctx.Error(fmt.Sprintf("Output verification failed: %v", err))
return err return err
} }
ctx.Info(fmt.Sprintf("Blender render completed for frame %d", ctx.Frame)) if ctx.FrameEnd > ctx.Frame {
ctx.Info(fmt.Sprintf("Blender render completed for frames %d-%d", ctx.Frame, ctx.FrameEnd))
} else {
ctx.Info(fmt.Sprintf("Blender render completed for frame %d", ctx.Frame))
}
return nil return nil
} }
@@ -116,22 +175,30 @@ func (p *RenderProcessor) createRenderScript(ctx *Context, renderFormat string)
 		return errors.New(errMsg)
 	}
-	// Write output format
-	outputFormat := ctx.GetOutputFormat()
-	ctx.Info(fmt.Sprintf("Writing output format '%s' to format file", outputFormat))
-	if err := os.WriteFile(formatFilePath, []byte(outputFormat), 0644); err != nil {
+	// Write EXR to format file so Blender script sets OPEN_EXR (job output_format is for downstream deliverable only).
+	ctx.Info("Writing output format 'EXR' to format file")
+	if err := os.WriteFile(formatFilePath, []byte("EXR"), 0644); err != nil {
 		errMsg := fmt.Sprintf("failed to create format file: %v", err)
 		ctx.Error(errMsg)
 		return errors.New(errMsg)
 	}
-	// Write render settings if available
+	// Write render settings: merge job metadata with runner force_cpu (GPU lockout)
+	var settingsMap map[string]interface{}
 	if ctx.Metadata != nil && ctx.Metadata.RenderSettings.EngineSettings != nil {
-		settingsJSON, err := json.Marshal(ctx.Metadata.RenderSettings)
+		raw, err := json.Marshal(ctx.Metadata.RenderSettings)
 		if err == nil {
-			if err := os.WriteFile(renderSettingsFilePath, settingsJSON, 0644); err != nil {
-				ctx.Warn(fmt.Sprintf("Failed to write render settings file: %v", err))
-			}
+			_ = json.Unmarshal(raw, &settingsMap)
 		}
 	}
+	if settingsMap == nil {
+		settingsMap = make(map[string]interface{})
+	}
+	settingsMap["force_cpu"] = ctx.ShouldForceCPU()
+	settingsJSON, err := json.Marshal(settingsMap)
+	if err == nil {
+		if err := os.WriteFile(renderSettingsFilePath, settingsJSON, 0644); err != nil {
+			ctx.Warn(fmt.Sprintf("Failed to write render settings file: %v", err))
+		}
+	}
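The settings merge above (marshal the typed struct, unmarshal into a generic map, overlay the runner-side flag) can be sketched in isolation. The `settings` struct and `mergeForceCPU` helper here are illustrative stand-ins, not types from the repo:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical stand-in for the job's RenderSettings; the real struct lives elsewhere.
type settings struct {
	Engine  string `json:"engine"`
	Samples int    `json:"samples"`
}

// mergeForceCPU round-trips the typed settings through JSON into a generic
// map, then overlays the runner-side force_cpu flag, as the diff does.
func mergeForceCPU(s settings, force bool) string {
	raw, _ := json.Marshal(s)
	var m map[string]interface{}
	_ = json.Unmarshal(raw, &m)
	m["force_cpu"] = force
	out, _ := json.Marshal(m)
	return string(out)
}

func main() {
	fmt.Println(mergeForceCPU(settings{Engine: "CYCLES", Samples: 128}, true))
	// {"engine":"CYCLES","force_cpu":true,"samples":128}
}
```

The map round-trip means job-supplied keys survive untouched while the runner can still inject its own flag without knowing the full settings schema.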
@@ -140,8 +207,16 @@ func (p *RenderProcessor) createRenderScript(ctx *Context, renderFormat string)
 func (p *RenderProcessor) runBlender(ctx *Context, blenderBinary, blendFile, outputDir, renderFormat, blenderHome string) error {
 	scriptPath := filepath.Join(ctx.WorkDir, "enable_gpu.py")
+	blendFileAbs, err := filepath.Abs(blendFile)
+	if err != nil {
+		return fmt.Errorf("failed to resolve blend file path: %w", err)
+	}
+	scriptPathAbs, err := filepath.Abs(scriptPath)
+	if err != nil {
+		return fmt.Errorf("failed to resolve blender script path: %w", err)
+	}
-	args := []string{"-b", blendFile, "--python", scriptPath}
+	args := []string{"-b", blendFileAbs, "--python", scriptPathAbs}
 	if ctx.ShouldEnableExecution() {
 		args = append(args, "--enable-autoexec")
 	}
@@ -151,17 +226,19 @@ func (p *RenderProcessor) runBlender(ctx *Context, blenderBinary, blendFile, out
 	outputAbsPattern, _ := filepath.Abs(outputPattern)
 	args = append(args, "-o", outputAbsPattern)
-	args = append(args, "-f", fmt.Sprintf("%d", ctx.Frame))
+	// Render single frame or range: -f N for one frame, -s start -e end -a for range
+	if ctx.FrameEnd > ctx.Frame {
+		args = append(args, "-s", fmt.Sprintf("%d", ctx.Frame), "-e", fmt.Sprintf("%d", ctx.FrameEnd), "-a")
+	} else {
+		args = append(args, "-f", fmt.Sprintf("%d", ctx.Frame))
+	}
-	// Wrap with xvfb-run
-	xvfbArgs := []string{"-a", "-s", "-screen 0 800x600x24", blenderBinary}
-	xvfbArgs = append(xvfbArgs, args...)
-	cmd := exec.Command("xvfb-run", xvfbArgs...)
+	cmd := execCommand(blenderBinary, args...)
 	cmd.Dir = ctx.WorkDir
-	// Set up environment with custom HOME directory
+	// Set up environment: LD_LIBRARY_PATH for tarball Blender, then custom HOME
 	env := os.Environ()
-	// Remove existing HOME if present and add our custom one
+	env = blender.TarballEnv(blenderBinary, env)
 	newEnv := make([]string, 0, len(env)+1)
 	for _, e := range env {
 		if !strings.HasPrefix(e, "HOME=") {
@@ -185,12 +262,14 @@ func (p *RenderProcessor) runBlender(ctx *Context, blenderBinary, blendFile, out
 	if err := cmd.Start(); err != nil {
 		return fmt.Errorf("failed to start blender: %w", err)
 	}
+	stopMonitor := ctx.StartCancellationMonitor(cmd, "render")
+	defer stopMonitor()
 	// Track process
 	ctx.Processes.Track(ctx.TaskID, cmd)
 	defer ctx.Processes.Untrack(ctx.TaskID)
-	// Stream stdout
+	// Stream stdout and watch for GPU error lines (lock out all GPU on any backend error)
 	stdoutDone := make(chan bool)
 	go func() {
 		defer close(stdoutDone)
@@ -198,15 +277,19 @@ func (p *RenderProcessor) runBlender(ctx *Context, blenderBinary, blendFile, out
 		for scanner.Scan() {
 			line := scanner.Text()
 			if line != "" {
+				p.checkGPUErrorLine(ctx, line)
 				shouldFilter, logLevel := blender.FilterLog(line)
 				if !shouldFilter {
 					ctx.Log(logLevel, line)
 				}
 			}
 		}
+		if err := scanner.Err(); err != nil {
+			log.Printf("Error reading stdout: %v", err)
+		}
 	}()
-	// Stream stderr
+	// Stream stderr and watch for GPU error lines
 	stderrDone := make(chan bool)
 	go func() {
 		defer close(stderrDone)
@@ -214,6 +297,7 @@ func (p *RenderProcessor) runBlender(ctx *Context, blenderBinary, blendFile, out
 		for scanner.Scan() {
 			line := scanner.Text()
 			if line != "" {
+				p.checkGPUErrorLine(ctx, line)
 				shouldFilter, logLevel := blender.FilterLog(line)
 				if !shouldFilter {
 					if logLevel == types.LogLevelInfo {
@@ -223,6 +307,9 @@ func (p *RenderProcessor) runBlender(ctx *Context, blenderBinary, blendFile, out
 				}
 			}
 		}
+		if err := scanner.Err(); err != nil {
+			log.Printf("Error reading stderr: %v", err)
+		}
 	}()
 	// Wait for completion
@@ -231,6 +318,9 @@ func (p *RenderProcessor) runBlender(ctx *Context, blenderBinary, blendFile, out
 	<-stderrDone
 	if err != nil {
+		if cancelled, checkErr := ctx.IsJobCancelled(); checkErr == nil && cancelled {
+			return ErrJobCancelled
+		}
 		if exitErr, ok := err.(*exec.ExitError); ok {
 			if exitErr.ExitCode() == 137 {
 				return errors.New("Blender was killed due to excessive memory usage (OOM)")
@@ -242,60 +332,64 @@ func (p *RenderProcessor) runBlender(ctx *Context, blenderBinary, blendFile, out
 	return nil
 }
-func (p *RenderProcessor) findOutputFile(ctx *Context, outputDir, renderFormat string) (string, error) {
+// verifyOutputRange checks that output files exist for the task's frame range (first and last at minimum).
+func (p *RenderProcessor) verifyOutputRange(ctx *Context, outputDir, renderFormat string) error {
 	entries, err := os.ReadDir(outputDir)
 	if err != nil {
-		return "", fmt.Errorf("failed to read output directory: %w", err)
+		return fmt.Errorf("failed to read output directory: %w", err)
 	}
 	ctx.Info("Checking output directory for files...")
-	// Try exact match first
-	expectedFile := filepath.Join(outputDir, fmt.Sprintf("frame_%04d.%s", ctx.Frame, strings.ToLower(renderFormat)))
-	if _, err := os.Stat(expectedFile); err == nil {
-		ctx.Info(fmt.Sprintf("Found output file: %s", filepath.Base(expectedFile)))
-		return expectedFile, nil
-	}
-	// Try without zero padding
-	altFile := filepath.Join(outputDir, fmt.Sprintf("frame_%d.%s", ctx.Frame, strings.ToLower(renderFormat)))
-	if _, err := os.Stat(altFile); err == nil {
-		ctx.Info(fmt.Sprintf("Found output file: %s", filepath.Base(altFile)))
-		return altFile, nil
-	}
-	// Try just frame number
-	altFile2 := filepath.Join(outputDir, fmt.Sprintf("%04d.%s", ctx.Frame, strings.ToLower(renderFormat)))
-	if _, err := os.Stat(altFile2); err == nil {
-		ctx.Info(fmt.Sprintf("Found output file: %s", filepath.Base(altFile2)))
-		return altFile2, nil
-	}
-	// Search through all files
-	for _, entry := range entries {
-		if !entry.IsDir() {
-			fileName := entry.Name()
-			if strings.Contains(fileName, "%04d") || strings.Contains(fileName, "%d") {
-				ctx.Warn(fmt.Sprintf("Skipping file with literal pattern: %s", fileName))
-				continue
-			}
-			frameStr := fmt.Sprintf("%d", ctx.Frame)
-			frameStrPadded := fmt.Sprintf("%04d", ctx.Frame)
-			if strings.Contains(fileName, frameStrPadded) ||
-				(strings.Contains(fileName, frameStr) && strings.HasSuffix(strings.ToLower(fileName), strings.ToLower(renderFormat))) {
-				outputFile := filepath.Join(outputDir, fileName)
-				ctx.Info(fmt.Sprintf("Found output file: %s", fileName))
-				return outputFile, nil
-			}
-		}
-	}
-	// Not found
-	fileList := []string{}
-	for _, entry := range entries {
-		if !entry.IsDir() {
-			fileList = append(fileList, entry.Name())
-		}
-	}
-	return "", fmt.Errorf("output file not found: %s\nFiles in output directory: %v", expectedFile, fileList)
+	ext := strings.ToLower(renderFormat)
+	// Check first and last frame in range (minimum required for range; single frame = one check)
+	framesToCheck := []int{ctx.Frame}
+	if ctx.FrameEnd > ctx.Frame {
+		framesToCheck = append(framesToCheck, ctx.FrameEnd)
+	}
+	for _, frame := range framesToCheck {
+		found := false
+		// Try frame_0001.ext, frame_1.ext, 0001.ext
+		for _, name := range []string{
+			fmt.Sprintf("frame_%04d.%s", frame, ext),
+			fmt.Sprintf("frame_%d.%s", frame, ext),
+			fmt.Sprintf("%04d.%s", frame, ext),
+		} {
+			if _, err := os.Stat(filepath.Join(outputDir, name)); err == nil {
+				found = true
+				ctx.Info(fmt.Sprintf("Found output file: %s", name))
+				break
+			}
+		}
+		if !found {
+			// Search entries for this frame number
+			frameStr := fmt.Sprintf("%d", frame)
+			frameStrPadded := fmt.Sprintf("%04d", frame)
+			for _, entry := range entries {
+				if entry.IsDir() {
+					continue
+				}
+				fileName := entry.Name()
+				if strings.Contains(fileName, "%04d") || strings.Contains(fileName, "%d") {
+					continue
+				}
+				if (strings.Contains(fileName, frameStrPadded) ||
+					strings.Contains(fileName, frameStr)) && strings.HasSuffix(strings.ToLower(fileName), ext) {
+					found = true
+					ctx.Info(fmt.Sprintf("Found output file: %s", fileName))
+					break
+				}
+			}
+		}
+		if !found {
+			fileList := []string{}
+			for _, e := range entries {
+				if !e.IsDir() {
+					fileList = append(fileList, e.Name())
+				}
+			}
+			return fmt.Errorf("output file for frame %d not found; files in output directory: %v", frame, fileList)
+		}
+	}
+	return nil
 }


@@ -0,0 +1,28 @@
package tasks
import "testing"
func TestCheckGPUErrorLine_TriggersCallback(t *testing.T) {
p := NewRenderProcessor()
triggered := false
ctx := &Context{
OnGPUError: func() { triggered = true },
}
p.checkGPUErrorLine(ctx, "Fatal: Illegal address in HIP kernel execution")
if !triggered {
t.Fatal("expected GPU error callback to be triggered")
}
}
func TestCheckGPUErrorLine_IgnoresNormalLine(t *testing.T) {
p := NewRenderProcessor()
triggered := false
ctx := &Context{
OnGPUError: func() { triggered = true },
}
p.checkGPUErrorLine(ctx, "Render completed successfully")
if triggered {
t.Fatal("did not expect GPU callback for normal line")
}
}


@@ -99,6 +99,11 @@ func ExtractTarStripPrefix(reader io.Reader, destDir string) error {
 		targetPath := filepath.Join(destDir, name)
+		// Sanitize path to prevent directory traversal
+		if !strings.HasPrefix(filepath.Clean(targetPath), filepath.Clean(destDir)+string(os.PathSeparator)) {
+			return fmt.Errorf("invalid file path in tar: %s", header.Name)
+		}
 		switch header.Typeflag {
 		case tar.TypeDir:
 			if err := os.MkdirAll(targetPath, os.FileMode(header.Mode)); err != nil {
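The traversal guard added above can be demonstrated in isolation: join the entry name onto the destination, clean it, and require the result to stay under the destination once a separator is appended. `withinDir` is a hypothetical helper for illustration:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// withinDir reproduces the guard from the hunk above: filepath.Join cleans
// the path, collapsing ".." segments, so an escaping entry loses the
// destDir prefix and is rejected.
func withinDir(destDir, name string) bool {
	target := filepath.Join(destDir, name)
	return strings.HasPrefix(filepath.Clean(target), filepath.Clean(destDir)+string(filepath.Separator))
}

func main() {
	fmt.Println(withinDir("/tmp/extract", "sub/a.txt"))        // true
	fmt.Println(withinDir("/tmp/extract", "../../etc/passwd")) // false
}
```

Appending the separator before the prefix check matters: without it, `/tmp/extract-evil` would pass a bare `HasPrefix` test against `/tmp/extract`.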


@@ -0,0 +1,125 @@
package workspace
import (
"archive/tar"
"bytes"
"os"
"path/filepath"
"testing"
)
func createTarBuffer(files map[string]string) *bytes.Buffer {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
for name, content := range files {
hdr := &tar.Header{
Name: name,
Mode: 0644,
Size: int64(len(content)),
}
tw.WriteHeader(hdr)
tw.Write([]byte(content))
}
tw.Close()
return &buf
}
func TestExtractTar(t *testing.T) {
destDir := t.TempDir()
buf := createTarBuffer(map[string]string{
"hello.txt": "world",
"sub/a.txt": "nested",
})
if err := ExtractTar(buf, destDir); err != nil {
t.Fatalf("ExtractTar: %v", err)
}
data, err := os.ReadFile(filepath.Join(destDir, "hello.txt"))
if err != nil {
t.Fatalf("read hello.txt: %v", err)
}
if string(data) != "world" {
t.Errorf("hello.txt = %q, want %q", data, "world")
}
data, err = os.ReadFile(filepath.Join(destDir, "sub", "a.txt"))
if err != nil {
t.Fatalf("read sub/a.txt: %v", err)
}
if string(data) != "nested" {
t.Errorf("sub/a.txt = %q, want %q", data, "nested")
}
}
func TestExtractTarStripPrefix(t *testing.T) {
destDir := t.TempDir()
buf := createTarBuffer(map[string]string{
"toplevel/": "",
"toplevel/foo.txt": "bar",
})
if err := ExtractTarStripPrefix(buf, destDir); err != nil {
t.Fatalf("ExtractTarStripPrefix: %v", err)
}
data, err := os.ReadFile(filepath.Join(destDir, "foo.txt"))
if err != nil {
t.Fatalf("read foo.txt: %v", err)
}
if string(data) != "bar" {
t.Errorf("foo.txt = %q, want %q", data, "bar")
}
}
func TestExtractTarStripPrefix_PathTraversal(t *testing.T) {
destDir := t.TempDir()
buf := createTarBuffer(map[string]string{
"prefix/../../../etc/passwd": "pwned",
})
err := ExtractTarStripPrefix(buf, destDir)
if err == nil {
t.Fatal("expected error for path traversal, got nil")
}
}
func TestExtractTar_PathTraversal(t *testing.T) {
destDir := t.TempDir()
buf := createTarBuffer(map[string]string{
"../../../etc/passwd": "pwned",
})
err := ExtractTar(buf, destDir)
if err == nil {
t.Fatal("expected error for path traversal, got nil")
}
}
func TestExtractTarFile(t *testing.T) {
destDir := t.TempDir()
tarPath := filepath.Join(t.TempDir(), "archive.tar")
buf := createTarBuffer(map[string]string{
"hello.txt": "world",
})
if err := os.WriteFile(tarPath, buf.Bytes(), 0644); err != nil {
t.Fatalf("write tar file: %v", err)
}
if err := ExtractTarFile(tarPath, destDir); err != nil {
t.Fatalf("ExtractTarFile: %v", err)
}
got, err := os.ReadFile(filepath.Join(destDir, "hello.txt"))
if err != nil {
t.Fatalf("read extracted file: %v", err)
}
if string(got) != "world" {
t.Fatalf("unexpected file content: %q", got)
}
}


@@ -0,0 +1,40 @@
package workspace
import (
"os"
"path/filepath"
"strings"
"testing"
)
func TestSanitizeName_ReplacesUnsafeChars(t *testing.T) {
got := sanitizeName("runner / with\\bad:chars")
if strings.ContainsAny(got, " /\\:") {
t.Fatalf("sanitizeName did not sanitize input: %q", got)
}
}
func TestFindBlendFiles_IgnoresBlendSaveFiles(t *testing.T) {
dir := t.TempDir()
if err := os.WriteFile(filepath.Join(dir, "scene.blend"), []byte("x"), 0644); err != nil {
t.Fatal(err)
}
if err := os.WriteFile(filepath.Join(dir, "scene.blend1"), []byte("x"), 0644); err != nil {
t.Fatal(err)
}
files, err := FindBlendFiles(dir)
if err != nil {
t.Fatalf("FindBlendFiles failed: %v", err)
}
if len(files) != 1 || files[0] != "scene.blend" {
t.Fatalf("unexpected files: %#v", files)
}
}
func TestFindFirstBlendFile_ReturnsErrorWhenMissing(t *testing.T) {
_, err := FindFirstBlendFile(t.TempDir())
if err == nil {
t.Fatal("expected error when no blend file exists")
}
}


@@ -82,6 +82,9 @@ func (s *Storage) JobPath(jobID int64) string {
 // SaveUpload saves an uploaded file
 func (s *Storage) SaveUpload(jobID int64, filename string, reader io.Reader) (string, error) {
+	// Sanitize filename to prevent path traversal
+	filename = filepath.Base(filename)
 	jobPath := s.JobPath(jobID)
 	if err := os.MkdirAll(jobPath, 0755); err != nil {
 		return "", fmt.Errorf("failed to create job directory: %w", err)
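The one-line sanitization above relies on `filepath.Base` keeping only the final path element, which discards any traversal segments. A minimal demonstration (`safeName` is an illustrative wrapper, not repo code):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// safeName applies the same sanitization as the hunk above: filepath.Base
// returns the last element of the path, so "../" segments are dropped.
func safeName(filename string) string {
	return filepath.Base(filename)
}

func main() {
	fmt.Println(safeName("../../etc/passwd")) // passwd
	fmt.Println(safeName("test.blend"))       // test.blend
}
```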
@@ -608,3 +611,129 @@ func (s *Storage) CreateJobContextFromDir(sourceDir string, jobID int64, exclude
 	return contextPath, nil
 }
// CreateContextArchiveFromDirToPath creates a context archive from files in sourceDir at destPath.
// This is used for pre-job upload sessions where the archive is staged before a job ID exists.
func (s *Storage) CreateContextArchiveFromDirToPath(sourceDir, destPath string, excludeFiles ...string) (string, error) {
excludeSet := make(map[string]bool)
for _, excludeFile := range excludeFiles {
excludePath := filepath.Clean(excludeFile)
excludeSet[excludePath] = true
excludeSet[filepath.ToSlash(excludePath)] = true
}
var filesToInclude []string
err := filepath.Walk(sourceDir, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if info.IsDir() {
return nil
}
if isBlenderSaveFile(info.Name()) {
return nil
}
relPath, err := filepath.Rel(sourceDir, path)
if err != nil {
return err
}
cleanRelPath := filepath.Clean(relPath)
if strings.HasPrefix(cleanRelPath, "..") {
return fmt.Errorf("invalid file path: %s", relPath)
}
if excludeSet[cleanRelPath] || excludeSet[filepath.ToSlash(cleanRelPath)] {
return nil
}
filesToInclude = append(filesToInclude, path)
return nil
})
if err != nil {
return "", fmt.Errorf("failed to walk source directory: %w", err)
}
if len(filesToInclude) == 0 {
return "", fmt.Errorf("no files found to include in context archive")
}
relPaths := make([]string, 0, len(filesToInclude))
for _, filePath := range filesToInclude {
relPath, err := filepath.Rel(sourceDir, filePath)
if err != nil {
return "", fmt.Errorf("failed to get relative path: %w", err)
}
relPaths = append(relPaths, relPath)
}
commonPrefix := findCommonPrefix(relPaths)
blendFilesAtRoot := 0
for _, relPath := range relPaths {
tarPath := filepath.ToSlash(relPath)
if commonPrefix != "" && strings.HasPrefix(tarPath, commonPrefix) {
tarPath = strings.TrimPrefix(tarPath, commonPrefix)
}
if strings.HasSuffix(strings.ToLower(tarPath), ".blend") && !strings.Contains(tarPath, "/") {
blendFilesAtRoot++
}
}
if blendFilesAtRoot == 0 {
return "", fmt.Errorf("no .blend file found at root level in context archive - .blend files must be at the root level of the uploaded archive, not in subdirectories")
}
if blendFilesAtRoot > 1 {
return "", fmt.Errorf("multiple .blend files found at root level in context archive (found %d, expected 1)", blendFilesAtRoot)
}
contextFile, err := os.Create(destPath)
if err != nil {
return "", fmt.Errorf("failed to create context file: %w", err)
}
defer contextFile.Close()
tarWriter := tar.NewWriter(contextFile)
defer tarWriter.Close()
copyBuf := make([]byte, 32*1024)
for i, filePath := range filesToInclude {
file, err := os.Open(filePath)
if err != nil {
return "", fmt.Errorf("failed to open file: %w", err)
}
info, err := file.Stat()
if err != nil {
file.Close()
return "", fmt.Errorf("failed to stat file: %w", err)
}
tarPath := filepath.ToSlash(relPaths[i])
if commonPrefix != "" && strings.HasPrefix(tarPath, commonPrefix) {
tarPath = strings.TrimPrefix(tarPath, commonPrefix)
}
header, err := tar.FileInfoHeader(info, "")
if err != nil {
file.Close()
return "", fmt.Errorf("failed to create tar header: %w", err)
}
header.Name = tarPath
if err := tarWriter.WriteHeader(header); err != nil {
file.Close()
return "", fmt.Errorf("failed to write tar header: %w", err)
}
if _, err := io.CopyBuffer(tarWriter, file, copyBuf); err != nil {
file.Close()
return "", fmt.Errorf("failed to write file to tar: %w", err)
}
file.Close()
}
if err := tarWriter.Close(); err != nil {
return "", fmt.Errorf("failed to close tar writer: %w", err)
}
if err := contextFile.Close(); err != nil {
return "", fmt.Errorf("failed to close context file: %w", err)
}
return destPath, nil
}
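The root-level `.blend` validation in `CreateContextArchiveFromDirToPath` counts blend files whose tar path contains no `/`. That check can be sketched separately; `rootBlendCount` is an illustrative helper, not a function from the repo:

```go
package main

import (
	"fmt"
	"strings"
)

// rootBlendCount mirrors the archive validation above: a .blend file counts
// as root-level only when its slash-normalized tar path has no "/".
func rootBlendCount(paths []string) int {
	n := 0
	for _, p := range paths {
		if strings.HasSuffix(strings.ToLower(p), ".blend") && !strings.Contains(p, "/") {
			n++
		}
	}
	return n
}

func main() {
	fmt.Println(rootBlendCount([]string{"scene.blend", "textures/wood.png"})) // 1
	fmt.Println(rootBlendCount([]string{"assets/scene.blend"}))               // 0
}
```

The archive builder rejects counts of 0 and of 2+, so every context archive carries exactly one unambiguous entry-point blend file.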


@@ -0,0 +1,95 @@
package storage
import (
"os"
"path/filepath"
"strings"
"testing"
)
func setupStorage(t *testing.T) *Storage {
t.Helper()
dir := t.TempDir()
s, err := NewStorage(dir)
if err != nil {
t.Fatalf("NewStorage: %v", err)
}
return s
}
func TestSaveUpload(t *testing.T) {
s := setupStorage(t)
path, err := s.SaveUpload(1, "test.blend", strings.NewReader("data"))
if err != nil {
t.Fatalf("SaveUpload: %v", err)
}
data, err := os.ReadFile(path)
if err != nil {
t.Fatalf("read saved file: %v", err)
}
if string(data) != "data" {
t.Errorf("got %q, want %q", data, "data")
}
}
func TestSaveUpload_PathTraversal(t *testing.T) {
s := setupStorage(t)
path, err := s.SaveUpload(1, "../../etc/passwd", strings.NewReader("evil"))
if err != nil {
t.Fatalf("SaveUpload: %v", err)
}
// filepath.Base strips traversal, so the file should be inside the job dir
if !strings.HasPrefix(path, s.JobPath(1)) {
t.Errorf("saved file %q escaped job directory %q", path, s.JobPath(1))
}
if filepath.Base(path) != "passwd" {
t.Errorf("expected basename 'passwd', got %q", filepath.Base(path))
}
}
func TestSaveOutput(t *testing.T) {
s := setupStorage(t)
path, err := s.SaveOutput(42, "output.png", strings.NewReader("img"))
if err != nil {
t.Fatalf("SaveOutput: %v", err)
}
data, err := os.ReadFile(path)
if err != nil {
t.Fatalf("read saved output: %v", err)
}
if string(data) != "img" {
t.Errorf("got %q, want %q", data, "img")
}
}
func TestGetFile(t *testing.T) {
s := setupStorage(t)
savedPath, err := s.SaveUpload(1, "readme.txt", strings.NewReader("hello"))
if err != nil {
t.Fatalf("SaveUpload: %v", err)
}
f, err := s.GetFile(savedPath)
if err != nil {
t.Fatalf("GetFile: %v", err)
}
defer f.Close()
buf := make([]byte, 64)
n, _ := f.Read(buf)
if string(buf[:n]) != "hello" {
t.Errorf("got %q, want %q", string(buf[:n]), "hello")
}
}
func TestJobPath(t *testing.T) {
s := setupStorage(t)
path := s.JobPath(99)
if !strings.Contains(path, "99") {
t.Errorf("JobPath(99) = %q, expected to contain '99'", path)
}
}

pkg/blendfile/version.go

@@ -0,0 +1,123 @@
package blendfile
import (
"compress/gzip"
"fmt"
"io"
"os"
"os/exec"
)
// ParseVersionFromReader parses the Blender version from a reader.
// Returns major and minor version numbers.
//
// Blend file header layout (12 bytes):
//
// "BLENDER" (7) + pointer-size (1: '-'=64, '_'=32) + endian (1: 'v'=LE, 'V'=BE)
// + version (3 digits, e.g. "402" = 4.02)
//
// Supports uncompressed, gzip-compressed, and zstd-compressed blend files.
func ParseVersionFromReader(r io.ReadSeeker) (major, minor int, err error) {
header := make([]byte, 12)
n, err := r.Read(header)
if err != nil || n < 12 {
return 0, 0, fmt.Errorf("failed to read blend file header: %w", err)
}
if string(header[:7]) != "BLENDER" {
r.Seek(0, 0)
return parseCompressedVersion(r)
}
return parseVersionDigits(header[9:12])
}
// ParseVersionFromFile opens a blend file and parses the Blender version.
func ParseVersionFromFile(blendPath string) (major, minor int, err error) {
file, err := os.Open(blendPath)
if err != nil {
return 0, 0, fmt.Errorf("failed to open blend file: %w", err)
}
defer file.Close()
return ParseVersionFromReader(file)
}
// VersionString returns a formatted version string like "4.2".
func VersionString(major, minor int) string {
return fmt.Sprintf("%d.%d", major, minor)
}
func parseVersionDigits(versionBytes []byte) (major, minor int, err error) {
if len(versionBytes) != 3 {
return 0, 0, fmt.Errorf("expected 3 version digits, got %d", len(versionBytes))
}
fmt.Sscanf(string(versionBytes[0]), "%d", &major)
fmt.Sscanf(string(versionBytes[1:3]), "%d", &minor)
return major, minor, nil
}
func parseCompressedVersion(r io.ReadSeeker) (major, minor int, err error) {
magic := make([]byte, 4)
if _, err := r.Read(magic); err != nil {
return 0, 0, err
}
r.Seek(0, 0)
// gzip: 0x1f 0x8b
if magic[0] == 0x1f && magic[1] == 0x8b {
gzReader, err := gzip.NewReader(r)
if err != nil {
return 0, 0, fmt.Errorf("failed to create gzip reader: %w", err)
}
defer gzReader.Close()
header := make([]byte, 12)
n, err := gzReader.Read(header)
if err != nil || n < 12 {
return 0, 0, fmt.Errorf("failed to read compressed blend header: %w", err)
}
if string(header[:7]) != "BLENDER" {
return 0, 0, fmt.Errorf("invalid blend file format")
}
return parseVersionDigits(header[9:12])
}
// zstd: 0x28 0xB5 0x2F 0xFD
if magic[0] == 0x28 && magic[1] == 0xb5 && magic[2] == 0x2f && magic[3] == 0xfd {
return parseZstdVersion(r)
}
return 0, 0, fmt.Errorf("unknown blend file format")
}
func parseZstdVersion(r io.ReadSeeker) (major, minor int, err error) {
r.Seek(0, 0)
cmd := exec.Command("zstd", "-d", "-c")
cmd.Stdin = r
stdout, err := cmd.StdoutPipe()
if err != nil {
return 0, 0, fmt.Errorf("failed to create zstd stdout pipe: %w", err)
}
if err := cmd.Start(); err != nil {
return 0, 0, fmt.Errorf("failed to start zstd decompression: %w", err)
}
header := make([]byte, 12)
n, readErr := io.ReadFull(stdout, header)
cmd.Process.Kill()
cmd.Wait()
if readErr != nil || n < 12 {
return 0, 0, fmt.Errorf("failed to read zstd compressed blend header: %v", readErr)
}
if string(header[:7]) != "BLENDER" {
return 0, 0, fmt.Errorf("invalid blend file format in zstd archive")
}
return parseVersionDigits(header[9:12])
}
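The three version digits described in the header comment ("402" = 4.02) can be decoded without `Sscanf`: byte 9 is the major version, bytes 10–11 the two-digit minor. A minimal sketch (`versionFromDigits` is illustrative, not repo code):

```go
package main

import "fmt"

// versionFromDigits decodes the 3-digit version field from a blend header,
// matching the parseVersionDigits behavior above: first digit is the major
// version, the remaining two digits form the minor ("402" -> 4.2, "279" -> 2.79).
func versionFromDigits(d string) (major, minor int) {
	major = int(d[0] - '0')
	minor = int(d[1]-'0')*10 + int(d[2]-'0')
	return
}

func main() {
	maj, min := versionFromDigits("402")
	fmt.Printf("%d.%d\n", maj, min) // 4.2
	maj, min = versionFromDigits("279")
	fmt.Printf("%d.%d\n", maj, min) // 2.79
}
```

Note the asymmetry this encoding creates: "402" yields minor 2, while "279" yields minor 79, which is why `VersionString` prints "4.2" rather than "4.02".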


@@ -0,0 +1,96 @@
package blendfile
import (
"bytes"
"compress/gzip"
"testing"
)
func makeBlendHeader(major, minor int) []byte {
header := make([]byte, 12)
copy(header[:7], "BLENDER")
header[7] = '-'
header[8] = 'v'
header[9] = byte('0' + major)
header[10] = byte('0' + minor/10)
header[11] = byte('0' + minor%10)
return header
}
func TestParseVersionFromReader_Uncompressed(t *testing.T) {
tests := []struct {
name string
major int
minor int
wantMajor int
wantMinor int
}{
{"Blender 4.02", 4, 2, 4, 2},
{"Blender 3.06", 3, 6, 3, 6},
{"Blender 2.79", 2, 79, 2, 79},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
header := makeBlendHeader(tt.major, tt.minor)
r := bytes.NewReader(header)
major, minor, err := ParseVersionFromReader(r)
if err != nil {
t.Fatalf("ParseVersionFromReader: %v", err)
}
if major != tt.wantMajor || minor != tt.wantMinor {
t.Errorf("got %d.%d, want %d.%d", major, minor, tt.wantMajor, tt.wantMinor)
}
})
}
}
func TestParseVersionFromReader_GzipCompressed(t *testing.T) {
header := makeBlendHeader(4, 2)
// Pad to ensure gzip has enough data for a full read
data := make([]byte, 128)
copy(data, header)
var buf bytes.Buffer
gz := gzip.NewWriter(&buf)
gz.Write(data)
gz.Close()
r := bytes.NewReader(buf.Bytes())
major, minor, err := ParseVersionFromReader(r)
if err != nil {
t.Fatalf("ParseVersionFromReader (gzip): %v", err)
}
if major != 4 || minor != 2 {
t.Errorf("got %d.%d, want 4.2", major, minor)
}
}
func TestParseVersionFromReader_InvalidMagic(t *testing.T) {
data := []byte("NOT_BLEND_DATA_HERE")
r := bytes.NewReader(data)
_, _, err := ParseVersionFromReader(r)
if err == nil {
t.Fatal("expected error for invalid magic, got nil")
}
}
func TestParseVersionFromReader_TooShort(t *testing.T) {
data := []byte("SHORT")
r := bytes.NewReader(data)
_, _, err := ParseVersionFromReader(r)
if err == nil {
t.Fatal("expected error for short data, got nil")
}
}
func TestVersionString(t *testing.T) {
got := VersionString(4, 2)
want := "4.2"
if got != want {
t.Errorf("VersionString(4, 2) = %q, want %q", got, want)
}
}


@@ -2,10 +2,13 @@ package executils
 import (
 	"bufio"
+	"context"
 	"errors"
 	"fmt"
+	"io"
 	"os"
 	"os/exec"
+	"strings"
 	"sync"
 	"time"
@@ -107,6 +110,78 @@ type CommandResult struct {
 	ExitCode int
 }
// RunCommandWithTimeout is like RunCommand but kills the process after timeout.
// A zero timeout means no timeout.
func RunCommandWithTimeout(
timeout time.Duration,
cmdPath string,
args []string,
dir string,
env []string,
taskID int64,
tracker *ProcessTracker,
) (*CommandResult, error) {
if timeout <= 0 {
return RunCommand(cmdPath, args, dir, env, taskID, tracker)
}
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel()
cmd := exec.CommandContext(ctx, cmdPath, args...)
cmd.Dir = dir
if env != nil {
cmd.Env = env
}
stdoutPipe, err := cmd.StdoutPipe()
if err != nil {
return nil, fmt.Errorf("failed to create stdout pipe: %w", err)
}
stderrPipe, err := cmd.StderrPipe()
if err != nil {
return nil, fmt.Errorf("failed to create stderr pipe: %w", err)
}
if err := cmd.Start(); err != nil {
return nil, fmt.Errorf("failed to start command: %w", err)
}
if tracker != nil {
tracker.Track(taskID, cmd)
defer tracker.Untrack(taskID)
}
var stdoutBuf, stderrBuf []byte
var stdoutErr, stderrErr error
var wg sync.WaitGroup
wg.Add(2)
go func() {
defer wg.Done()
stdoutBuf, stdoutErr = readAll(stdoutPipe)
}()
go func() {
defer wg.Done()
stderrBuf, stderrErr = readAll(stderrPipe)
}()
waitErr := cmd.Wait()
wg.Wait()
if stdoutErr != nil && !isBenignPipeReadError(stdoutErr) {
return nil, fmt.Errorf("failed to read stdout: %w", stdoutErr)
}
if stderrErr != nil && !isBenignPipeReadError(stderrErr) {
return nil, fmt.Errorf("failed to read stderr: %w", stderrErr)
}
result := &CommandResult{Stdout: string(stdoutBuf), Stderr: string(stderrBuf)}
if waitErr != nil {
if exitErr, ok := waitErr.(*exec.ExitError); ok {
result.ExitCode = exitErr.ExitCode()
} else {
result.ExitCode = -1
}
if ctx.Err() == context.DeadlineExceeded {
return result, fmt.Errorf("command timed out after %v: %w", timeout, waitErr)
}
return result, waitErr
}
result.ExitCode = 0
return result, nil
}
 // RunCommand executes a command and returns the output
 // If tracker is provided, the process will be registered for tracking
 // This is useful for commands where you need to capture output (like metadata extraction)
@@ -164,10 +239,10 @@ func RunCommand(
 	wg.Wait()
 	// Check for read errors
-	if stdoutErr != nil {
+	if stdoutErr != nil && !isBenignPipeReadError(stdoutErr) {
 		return nil, fmt.Errorf("failed to read stdout: %w", stdoutErr)
 	}
-	if stderrErr != nil {
+	if stderrErr != nil && !isBenignPipeReadError(stderrErr) {
 		return nil, fmt.Errorf("failed to read stderr: %w", stderrErr)
 	}
@@ -208,6 +283,18 @@ func readAll(r interface{ Read([]byte) (int, error) }) ([]byte, error) {
 	return buf, nil
 }
// isBenignPipeReadError treats EOF-like pipe close races as non-fatal.
func isBenignPipeReadError(err error) bool {
if err == nil {
return false
}
if errors.Is(err, io.EOF) || errors.Is(err, os.ErrClosed) || errors.Is(err, io.ErrClosedPipe) {
return true
}
// Some platforms return wrapped messages that don't map cleanly to sentinel errors.
return strings.Contains(strings.ToLower(err.Error()), "file already closed")
}
 // LogSender is a function type for sending logs
 type LogSender func(taskID int, level types.LogLevel, message string, stepName string)
@@ -274,6 +361,9 @@ func RunCommandWithStreaming(
 				}
 			}
 		}
+		if err := scanner.Err(); err != nil && !isBenignPipeReadError(err) {
+			logSender(taskID, types.LogLevelWarn, fmt.Sprintf("stdout read error: %v", err), stepName)
+		}
 	}()
 	go func() {
@@ -288,6 +378,9 @@ func RunCommandWithStreaming(
 				}
 			}
 		}
+		if err := scanner.Err(); err != nil && !isBenignPipeReadError(err) {
+			logSender(taskID, types.LogLevelWarn, fmt.Sprintf("stderr read error: %v", err), stepName)
+		}
 	}()
 	err = cmd.Wait()


@@ -0,0 +1,55 @@
package executils
import (
"errors"
"io"
"os"
"os/exec"
"testing"
"time"
)
func TestIsBenignPipeReadError(t *testing.T) {
tests := []struct {
name string
err error
want bool
}{
{name: "nil", err: nil, want: false},
{name: "eof", err: io.EOF, want: true},
{name: "closed", err: os.ErrClosed, want: true},
{name: "closed pipe", err: io.ErrClosedPipe, want: true},
{name: "wrapped closed", err: errors.New("read |0: file already closed"), want: true},
{name: "other", err: errors.New("permission denied"), want: false},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
got := isBenignPipeReadError(tc.err)
if got != tc.want {
t.Fatalf("got %v, want %v (err=%v)", got, tc.want, tc.err)
}
})
}
}
func TestProcessTracker_TrackUntrack(t *testing.T) {
pt := NewProcessTracker()
cmd := exec.Command("sh", "-c", "sleep 1")
pt.Track(1, cmd)
if count := pt.Count(); count != 1 {
t.Fatalf("Count() = %d, want 1", count)
}
pt.Untrack(1)
if count := pt.Count(); count != 0 {
t.Fatalf("Count() = %d, want 0", count)
}
}
func TestRunCommandWithTimeout_TimesOut(t *testing.T) {
pt := NewProcessTracker()
_, err := RunCommandWithTimeout(200*time.Millisecond, "sh", []string{"-c", "sleep 2"}, "", nil, 99, pt)
if err == nil {
t.Fatal("expected timeout error")
}
}


@@ -95,31 +95,20 @@ if current_device:
print(f"Blend file output format: {current_output_format}")
# Override output format if specified
# The format file always takes precedence (it's written specifically for this job)
# Render output is EXR only and must remain linear for the encode pipeline (linear -> sRGB -> HLG).
if output_format_override:
print(f"Overriding output format from '{current_output_format}' to '{output_format_override}'")
print(f"Overriding output format from '{current_output_format}' to OPEN_EXR (always EXR for pipeline)")
# Map common format names to Blender's format constants
# For video formats, we render as appropriate frame format first
format_to_use = output_format_override.upper()
if format_to_use in ['EXR_264_MP4', 'EXR_AV1_MP4', 'EXR_VP9_WEBM']:
format_to_use = 'EXR' # Render as EXR for EXR video formats
format_map = {
'PNG': 'PNG',
'JPEG': 'JPEG',
'JPG': 'JPEG',
'EXR': 'OPEN_EXR',
'OPEN_EXR': 'OPEN_EXR',
'TARGA': 'TARGA',
'TIFF': 'TIFF',
'BMP': 'BMP',
}
blender_format = format_map.get(format_to_use, format_to_use)
try:
scene.render.image_settings.file_format = blender_format
scene.render.image_settings.file_format = 'OPEN_EXR'
print(f"Successfully set output format to: {blender_format}")
# Lock output color space to linear (defense in depth; EXR is linear for encode pipeline)
if getattr(scene.render.image_settings, 'has_linear_colorspace', False) and hasattr(scene.render.image_settings, 'linear_colorspace_settings'):
try:
scene.render.image_settings.linear_colorspace_settings.name = 'Linear'
except Exception as ex:
print(f"Note: Could not set linear output: {ex}")
print("Successfully set output format to: OPEN_EXR")
except Exception as e:
print(f"Warning: Could not set output format to {blender_format}: {e}")
print(f"Warning: Could not set output format to OPEN_EXR: {e}")
print(f"Using blend file's format: {current_output_format}")
else:
print(f"Using blend file's output format: {current_output_format}")
@@ -220,9 +209,10 @@ if current_engine == 'CYCLES':
traceback.print_exc()
sys.exit(1)
# Check all devices and choose the best GPU type.
# Device type preference order (most performant first)
# Explicit fallback policy: NVIDIA -> Intel -> AMD -> CPU.
# (OPTIX/CUDA are NVIDIA, ONEAPI is Intel, HIP/OPENCL are AMD)
device_type_preference = ['OPTIX', 'CUDA', 'HIP', 'ONEAPI', 'METAL']
device_type_preference = ['OPTIX', 'CUDA', 'ONEAPI', 'HIP', 'OPENCL']
gpu_available = False
best_device_type = None
best_gpu_devices = []
@@ -354,16 +344,6 @@ if current_engine == 'CYCLES':
scene.cycles.use_optix_denoising = True
print(f" Enabled OptiX denoising (if OptiX available)")
print(f" CUDA ray tracing active")
elif best_device_type == 'METAL':
# MetalRT for Apple Silicon (if available)
if hasattr(scene.cycles, 'use_metalrt'):
scene.cycles.use_metalrt = True
print(f" Enabled MetalRT (Metal Ray Tracing) for faster rendering")
elif hasattr(cycles_prefs, 'use_metalrt'):
cycles_prefs.use_metalrt = True
print(f" Enabled MetalRT (Metal Ray Tracing) for faster rendering")
else:
print(f" MetalRT not available")
elif best_device_type == 'ONEAPI':
# Intel oneAPI - Embree might be available
if hasattr(scene.cycles, 'use_embree'):


@@ -0,0 +1,19 @@
package scripts
import (
"strings"
"testing"
)
func TestEmbeddedScripts_ArePresent(t *testing.T) {
if strings.TrimSpace(ExtractMetadata) == "" {
t.Fatal("ExtractMetadata script should not be empty")
}
if strings.TrimSpace(UnhideObjects) == "" {
t.Fatal("UnhideObjects script should not be empty")
}
if strings.TrimSpace(RenderBlenderTemplate) == "" {
t.Fatal("RenderBlenderTemplate should not be empty")
}
}


@@ -93,7 +93,8 @@ type Task struct {
ID int64 `json:"id"`
JobID int64 `json:"job_id"`
RunnerID *int64 `json:"runner_id,omitempty"`
Frame int `json:"frame"`
Frame int `json:"frame"` // frame start (inclusive) for render tasks
FrameEnd *int `json:"frame_end,omitempty"` // frame end (inclusive); nil = single frame
TaskType TaskType `json:"task_type"`
Status TaskStatus `json:"status"`
CurrentStep string `json:"current_step,omitempty"`
@@ -138,8 +139,6 @@ type CreateJobRequest struct {
UnhideObjects *bool `json:"unhide_objects,omitempty"` // Optional: Enable unhide tweaks for objects/collections
EnableExecution *bool `json:"enable_execution,omitempty"` // Optional: Enable auto-execution in Blender (adds --enable-autoexec flag, defaults to false)
BlenderVersion *string `json:"blender_version,omitempty"` // Optional: Override Blender version (e.g., "4.2" or "4.2.3")
PreserveHDR *bool `json:"preserve_hdr,omitempty"` // Optional: Preserve HDR range for EXR encoding (uses HLG with bt709 primaries)
PreserveAlpha *bool `json:"preserve_alpha,omitempty"` // Optional: Preserve alpha channel for EXR encoding (requires AV1 or VP9 codec)
}
// UpdateJobProgressRequest represents a request to update job progress
@@ -234,8 +233,6 @@ type BlendMetadata struct {
UnhideObjects *bool `json:"unhide_objects,omitempty"` // Enable unhide tweaks for objects/collections
EnableExecution *bool `json:"enable_execution,omitempty"` // Enable auto-execution in Blender (adds --enable-autoexec flag, defaults to false)
BlenderVersion string `json:"blender_version,omitempty"` // Detected or overridden Blender version (e.g., "4.2" or "4.2.3")
PreserveHDR *bool `json:"preserve_hdr,omitempty"` // Preserve HDR range for EXR encoding (uses HLG with bt709 primaries)
PreserveAlpha *bool `json:"preserve_alpha,omitempty"` // Preserve alpha channel for EXR encoding (requires AV1 or VP9 codec)
}
// MissingFilesInfo represents information about missing files/addons

pkg/types/types_test.go Normal file

@@ -0,0 +1,49 @@
package types
import (
"encoding/json"
"testing"
"time"
)
func TestJobJSON_RoundTrip(t *testing.T) {
now := time.Now().UTC().Truncate(time.Second)
frameStart, frameEnd := 1, 10
format := "PNG"
job := Job{
ID: 42,
UserID: 7,
JobType: JobTypeRender,
Name: "demo",
Status: JobStatusPending,
Progress: 12.5,
FrameStart: &frameStart,
FrameEnd: &frameEnd,
OutputFormat: &format,
BlendMetadata: &BlendMetadata{
FrameStart: 1,
FrameEnd: 10,
RenderSettings: RenderSettings{
ResolutionX: 1920,
ResolutionY: 1080,
FrameRate: 24.0,
},
},
CreatedAt: now,
}
raw, err := json.Marshal(job)
if err != nil {
t.Fatalf("marshal failed: %v", err)
}
var out Job
if err := json.Unmarshal(raw, &out); err != nil {
t.Fatalf("unmarshal failed: %v", err)
}
if out.ID != job.ID || out.JobType != JobTypeRender || out.BlendMetadata == nil {
t.Fatalf("unexpected roundtrip result: %+v", out)
}
}

version/version_test.go Normal file

@@ -0,0 +1,19 @@
package version
import (
"strings"
"testing"
)
func TestInitDefaults_AreSet(t *testing.T) {
if Version == "" {
t.Fatal("Version should be initialized")
}
if Date == "" {
t.Fatal("Date should be initialized")
}
if !strings.Contains(Version, ".") {
t.Fatalf("Version should look semantic-ish, got %q", Version)
}
}


@@ -1,269 +0,0 @@
const API_BASE = '/api';
let currentUser = null;
// Check authentication on load
async function init() {
await checkAuth();
setupEventListeners();
if (currentUser) {
showMainPage();
loadJobs();
loadRunners();
} else {
showLoginPage();
}
}
async function checkAuth() {
try {
const response = await fetch(`${API_BASE}/auth/me`);
if (response.ok) {
currentUser = await response.json();
return true;
}
} catch (error) {
console.error('Auth check failed:', error);
}
return false;
}
function showLoginPage() {
document.getElementById('login-page').classList.remove('hidden');
document.getElementById('main-page').classList.add('hidden');
}
function showMainPage() {
document.getElementById('login-page').classList.add('hidden');
document.getElementById('main-page').classList.remove('hidden');
if (currentUser) {
document.getElementById('user-name').textContent = currentUser.name || currentUser.email;
}
}
function setupEventListeners() {
// Navigation
document.querySelectorAll('.nav-btn').forEach(btn => {
btn.addEventListener('click', (e) => {
const page = e.target.dataset.page;
switchPage(page);
});
});
// Logout
document.getElementById('logout-btn').addEventListener('click', async () => {
await fetch(`${API_BASE}/auth/logout`, { method: 'POST' });
currentUser = null;
showLoginPage();
});
// Job form
document.getElementById('job-form').addEventListener('submit', async (e) => {
e.preventDefault();
await submitJob();
});
}
function switchPage(page) {
document.querySelectorAll('.content-page').forEach(p => p.classList.add('hidden'));
document.querySelectorAll('.nav-btn').forEach(b => b.classList.remove('active'));
document.getElementById(`${page}-page`).classList.remove('hidden');
document.querySelector(`[data-page="${page}"]`).classList.add('active');
if (page === 'jobs') {
loadJobs();
} else if (page === 'runners') {
loadRunners();
}
}
async function submitJob() {
const form = document.getElementById('job-form');
const formData = new FormData(form);
const jobData = {
name: document.getElementById('job-name').value,
frame_start: parseInt(document.getElementById('frame-start').value),
frame_end: parseInt(document.getElementById('frame-end').value),
output_format: document.getElementById('output-format').value,
};
try {
// Create job
const jobResponse = await fetch(`${API_BASE}/jobs`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(jobData),
});
if (!jobResponse.ok) {
throw new Error('Failed to create job');
}
const job = await jobResponse.json();
// Upload file
const fileInput = document.getElementById('blend-file');
if (fileInput.files.length > 0) {
const fileFormData = new FormData();
fileFormData.append('file', fileInput.files[0]);
const fileResponse = await fetch(`${API_BASE}/jobs/${job.id}/upload`, {
method: 'POST',
body: fileFormData,
});
if (!fileResponse.ok) {
throw new Error('Failed to upload file');
}
}
alert('Job submitted successfully!');
form.reset();
switchPage('jobs');
loadJobs();
} catch (error) {
alert('Failed to submit job: ' + error.message);
}
}
async function loadJobs() {
try {
const response = await fetch(`${API_BASE}/jobs`);
if (!response.ok) throw new Error('Failed to load jobs');
const jobs = await response.json();
displayJobs(jobs);
} catch (error) {
console.error('Failed to load jobs:', error);
}
}
function displayJobs(jobs) {
const container = document.getElementById('jobs-list');
if (jobs.length === 0) {
container.innerHTML = '<p>No jobs yet. Submit a job to get started!</p>';
return;
}
container.innerHTML = jobs.map(job => `
<div class="job-card">
<h3>${escapeHtml(job.name)}</h3>
<div class="job-meta">
<span>Frames: ${job.frame_start}-${job.frame_end}</span>
<span>Format: ${job.output_format}</span>
<span>Created: ${new Date(job.created_at).toLocaleString()}</span>
</div>
<div class="job-status ${job.status}">${job.status}</div>
<div class="progress-bar">
<div class="progress-fill" style="width: ${job.progress}%"></div>
</div>
<div class="job-actions">
<button onclick="viewJob(${job.id})" class="btn btn-primary">View Details</button>
${job.status === 'pending' || job.status === 'running' ?
`<button onclick="cancelJob(${job.id})" class="btn btn-secondary">Cancel</button>` : ''}
</div>
</div>
`).join('');
}
async function viewJob(jobId) {
try {
const response = await fetch(`${API_BASE}/jobs/${jobId}`);
if (!response.ok) throw new Error('Failed to load job');
const job = await response.json();
// Load files
const filesResponse = await fetch(`${API_BASE}/jobs/${jobId}/files`);
const files = filesResponse.ok ? await filesResponse.json() : [];
const outputFiles = files.filter(f => f.file_type === 'output');
if (outputFiles.length > 0) {
let message = 'Output files:\n';
outputFiles.forEach(file => {
message += `- ${file.file_name}\n`;
});
message += '\nWould you like to download them?';
if (confirm(message)) {
outputFiles.forEach(file => {
window.open(`${API_BASE}/jobs/${jobId}/files/${file.id}/download`, '_blank');
});
}
} else {
alert(`Job: ${job.name}\nStatus: ${job.status}\nProgress: ${job.progress.toFixed(1)}%`);
}
} catch (error) {
alert('Failed to load job details: ' + error.message);
}
}
async function cancelJob(jobId) {
if (!confirm('Are you sure you want to cancel this job?')) return;
try {
const response = await fetch(`${API_BASE}/jobs/${jobId}`, {
method: 'DELETE',
});
if (!response.ok) throw new Error('Failed to cancel job');
loadJobs();
} catch (error) {
alert('Failed to cancel job: ' + error.message);
}
}
async function loadRunners() {
try {
const response = await fetch(`${API_BASE}/runners`);
if (!response.ok) throw new Error('Failed to load runners');
const runners = await response.json();
displayRunners(runners);
} catch (error) {
console.error('Failed to load runners:', error);
}
}
function displayRunners(runners) {
const container = document.getElementById('runners-list');
if (runners.length === 0) {
container.innerHTML = '<p>No runners connected.</p>';
return;
}
container.innerHTML = runners.map(runner => {
const lastHeartbeat = new Date(runner.last_heartbeat);
const isOnline = (Date.now() - lastHeartbeat.getTime()) < 60000; // 1 minute
return `
<div class="runner-card">
<h3>${escapeHtml(runner.name)}</h3>
<div class="runner-info">
<span>Hostname: ${escapeHtml(runner.hostname)}</span>
<span>Last heartbeat: ${lastHeartbeat.toLocaleString()}</span>
</div>
<div class="runner-status ${isOnline ? 'online' : 'offline'}">
${isOnline ? 'Online' : 'Offline'}
</div>
</div>
`;
}).join('');
}
function escapeHtml(text) {
const div = document.createElement('div');
div.textContent = text;
return div.innerHTML;
}
// Auto-refresh jobs every 5 seconds
setInterval(() => {
if (currentUser && document.getElementById('jobs-page').classList.contains('hidden') === false) {
loadJobs();
}
}, 5000);
// Initialize on load
init();


@@ -4,42 +4,26 @@ import (
"embed"
"io/fs"
"net/http"
"strings"
)
//go:embed dist/*
//go:embed templates templates/partials static
var distFS embed.FS
var uiFS embed.FS
// GetFileSystem returns an http.FileSystem for the embedded web UI files
// GetStaticFileSystem returns an http.FileSystem for embedded UI assets.
func GetFileSystem() http.FileSystem {
func GetStaticFileSystem() http.FileSystem {
subFS, err := fs.Sub(distFS, "dist")
subFS, err := fs.Sub(uiFS, "static")
if err != nil {
panic(err)
}
return http.FS(subFS)
}
// SPAHandler returns an http.Handler that serves the embedded SPA
// StaticHandler serves /assets/* files from embedded static assets.
// It serves static files if they exist, otherwise falls back to index.html
func StaticHandler() http.Handler {
func SPAHandler() http.Handler {
return http.StripPrefix("/assets/", http.FileServer(GetStaticFileSystem()))
fsys := GetFileSystem()
fileServer := http.FileServer(fsys)
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
path := r.URL.Path
// Try to open the file
f, err := fsys.Open(strings.TrimPrefix(path, "/"))
if err != nil {
// File doesn't exist, serve index.html for SPA routing
r.URL.Path = "/"
fileServer.ServeHTTP(w, r)
return
}
f.Close()
// File exists, serve it
fileServer.ServeHTTP(w, r)
})
}
// GetTemplateFS returns the embedded template filesystem.
func GetTemplateFS() fs.FS {
return uiFS
}

web/embed_test.go Normal file

@@ -0,0 +1,30 @@
package web
import (
"net/http/httptest"
"testing"
)
func TestGetStaticFileSystem_NonNil(t *testing.T) {
fs := GetStaticFileSystem()
if fs == nil {
t.Fatal("static filesystem should not be nil")
}
}
func TestStaticHandler_ServesWithoutPanic(t *testing.T) {
h := StaticHandler()
rr := httptest.NewRecorder()
req := httptest.NewRequest("GET", "/assets/does-not-exist.txt", nil)
h.ServeHTTP(rr, req)
if rr.Code == 0 {
t.Fatal("handler should write a status code")
}
}
func TestGetTemplateFS_NonNil(t *testing.T) {
if GetTemplateFS() == nil {
t.Fatal("template fs should not be nil")
}
}


@@ -1,13 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/vite.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>JiggaBlend</title>
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.jsx"></script>
</body>
</html>

web/package-lock.json generated

File diff suppressed because it is too large


@@ -1,21 +0,0 @@
{
"name": "jiggablend-web",
"version": "1.0.0",
"type": "module",
"scripts": {
"dev": "vite",
"build": "vite build",
"preview": "vite preview"
},
"dependencies": {
"react": "^18.2.0",
"react-dom": "^18.2.0"
},
"devDependencies": {
"@vitejs/plugin-react": "^4.2.1",
"autoprefixer": "^10.4.16",
"postcss": "^8.4.32",
"tailwindcss": "^3.4.0",
"vite": "^7.2.4"
}
}


@@ -1,7 +0,0 @@
export default {
plugins: {
tailwindcss: {},
autoprefixer: {},
},
}


@@ -1,50 +0,0 @@
import { useState, useEffect, useMemo } from 'react';
import { useAuth } from './hooks/useAuth';
import Login from './components/Login';
import Layout from './components/Layout';
import JobList from './components/JobList';
import JobSubmission from './components/JobSubmission';
import AdminPanel from './components/AdminPanel';
import ErrorBoundary from './components/ErrorBoundary';
import LoadingSpinner from './components/LoadingSpinner';
import './styles/index.css';
function App() {
const { user, loading, refresh } = useAuth();
const [activeTab, setActiveTab] = useState('jobs');
// Memoize login component to ensure it's ready immediately
const loginComponent = useMemo(() => <Login key="login" />, []);
if (loading) {
return (
<div className="min-h-screen flex items-center justify-center bg-gray-900">
<LoadingSpinner size="md" />
</div>
);
}
if (!user) {
return loginComponent;
}
// Wrapper to change tabs - only check auth on mount, not on every navigation
const handleTabChange = (newTab) => {
setActiveTab(newTab);
};
return (
<Layout activeTab={activeTab} onTabChange={handleTabChange}>
<ErrorBoundary>
{activeTab === 'jobs' && <JobList />}
{activeTab === 'submit' && (
<JobSubmission onSuccess={() => handleTabChange('jobs')} />
)}
{activeTab === 'admin' && <AdminPanel />}
</ErrorBoundary>
</Layout>
);
}
export default App;


@@ -1,810 +0,0 @@
import { useState, useEffect, useRef } from 'react';
import { admin, jobs, normalizeArrayResponse } from '../utils/api';
import { wsManager } from '../utils/websocket';
import UserJobs from './UserJobs';
import PasswordChange from './PasswordChange';
import LoadingSpinner from './LoadingSpinner';
export default function AdminPanel() {
const [activeSection, setActiveSection] = useState('api-keys');
const [apiKeys, setApiKeys] = useState([]);
const [runners, setRunners] = useState([]);
const [users, setUsers] = useState([]);
const [loading, setLoading] = useState(false);
const [newAPIKeyName, setNewAPIKeyName] = useState('');
const [newAPIKeyDescription, setNewAPIKeyDescription] = useState('');
const [newAPIKeyScope, setNewAPIKeyScope] = useState('user'); // Default to user scope
const [newAPIKey, setNewAPIKey] = useState(null);
const [selectedUser, setSelectedUser] = useState(null);
const [registrationEnabled, setRegistrationEnabled] = useState(true);
const [passwordChangeUser, setPasswordChangeUser] = useState(null);
const listenerIdRef = useRef(null); // Listener ID for shared WebSocket
const subscribedChannelsRef = useRef(new Set()); // Track confirmed subscribed channels
const pendingSubscriptionsRef = useRef(new Set()); // Track pending subscriptions (waiting for confirmation)
// Connect to shared WebSocket on mount
useEffect(() => {
listenerIdRef.current = wsManager.subscribe('adminpanel', {
open: () => {
console.log('AdminPanel: Shared WebSocket connected');
// Subscribe to runners if already viewing runners section
if (activeSection === 'runners') {
subscribeToRunners();
}
},
message: (data) => {
// Handle subscription responses - update both local refs and wsManager
if (data.type === 'subscribed' && data.channel) {
pendingSubscriptionsRef.current.delete(data.channel);
subscribedChannelsRef.current.add(data.channel);
wsManager.confirmSubscription(data.channel);
console.log('Successfully subscribed to channel:', data.channel);
} else if (data.type === 'subscription_error' && data.channel) {
pendingSubscriptionsRef.current.delete(data.channel);
subscribedChannelsRef.current.delete(data.channel);
wsManager.failSubscription(data.channel);
console.error('Subscription failed for channel:', data.channel, data.error);
}
// Handle runners channel messages
if (data.channel === 'runners' && data.type === 'runner_status') {
// Update runner in list
setRunners(prev => {
const index = prev.findIndex(r => r.id === data.runner_id);
if (index >= 0 && data.data) {
const updated = [...prev];
updated[index] = { ...updated[index], ...data.data };
return updated;
}
return prev;
});
}
},
error: (error) => {
console.error('AdminPanel: Shared WebSocket error:', error);
},
close: (event) => {
console.log('AdminPanel: Shared WebSocket closed:', event);
subscribedChannelsRef.current.clear();
pendingSubscriptionsRef.current.clear();
}
});
// Ensure connection is established
wsManager.connect();
return () => {
// Unsubscribe from all channels before unmounting
unsubscribeFromRunners();
if (listenerIdRef.current) {
wsManager.unsubscribe(listenerIdRef.current);
listenerIdRef.current = null;
}
};
}, []);
const subscribeToRunners = () => {
const channel = 'runners';
// Don't subscribe if already subscribed or pending
if (subscribedChannelsRef.current.has(channel) || pendingSubscriptionsRef.current.has(channel)) {
return;
}
wsManager.subscribeToChannel(channel);
subscribedChannelsRef.current.add(channel);
pendingSubscriptionsRef.current.add(channel);
console.log('Subscribing to runners channel');
};
const unsubscribeFromRunners = () => {
const channel = 'runners';
if (!subscribedChannelsRef.current.has(channel)) {
return; // Not subscribed
}
wsManager.unsubscribeFromChannel(channel);
subscribedChannelsRef.current.delete(channel);
pendingSubscriptionsRef.current.delete(channel);
console.log('Unsubscribed from runners channel');
};
useEffect(() => {
if (activeSection === 'api-keys') {
loadAPIKeys();
unsubscribeFromRunners();
} else if (activeSection === 'runners') {
loadRunners();
subscribeToRunners();
} else if (activeSection === 'users') {
loadUsers();
unsubscribeFromRunners();
} else if (activeSection === 'settings') {
loadSettings();
unsubscribeFromRunners();
}
}, [activeSection]);
const loadAPIKeys = async () => {
setLoading(true);
try {
const data = await admin.listAPIKeys();
setApiKeys(normalizeArrayResponse(data));
} catch (error) {
console.error('Failed to load API keys:', error);
setApiKeys([]);
alert('Failed to load API keys');
} finally {
setLoading(false);
}
};
const loadRunners = async () => {
setLoading(true);
try {
const data = await admin.listRunners();
setRunners(normalizeArrayResponse(data));
} catch (error) {
console.error('Failed to load runners:', error);
setRunners([]);
alert('Failed to load runners');
} finally {
setLoading(false);
}
};
const loadUsers = async () => {
setLoading(true);
try {
const data = await admin.listUsers();
setUsers(normalizeArrayResponse(data));
} catch (error) {
console.error('Failed to load users:', error);
setUsers([]);
alert('Failed to load users');
} finally {
setLoading(false);
}
};
const loadSettings = async () => {
setLoading(true);
try {
const data = await admin.getRegistrationEnabled();
setRegistrationEnabled(data.enabled);
} catch (error) {
console.error('Failed to load settings:', error);
alert('Failed to load settings');
} finally {
setLoading(false);
}
};
const handleToggleRegistration = async () => {
const newValue = !registrationEnabled;
setLoading(true);
try {
await admin.setRegistrationEnabled(newValue);
setRegistrationEnabled(newValue);
alert(`Registration ${newValue ? 'enabled' : 'disabled'}`);
} catch (error) {
console.error('Failed to update registration setting:', error);
alert('Failed to update registration setting');
} finally {
setLoading(false);
}
};
const generateAPIKey = async () => {
if (!newAPIKeyName.trim()) {
alert('API key name is required');
return;
}
setLoading(true);
try {
const data = await admin.generateAPIKey(newAPIKeyName.trim(), newAPIKeyDescription.trim() || undefined, newAPIKeyScope);
setNewAPIKey(data);
setNewAPIKeyName('');
setNewAPIKeyDescription('');
setNewAPIKeyScope('user');
await loadAPIKeys();
} catch (error) {
console.error('Failed to generate API key:', error);
alert('Failed to generate API key');
} finally {
setLoading(false);
}
};
const [deletingKeyId, setDeletingKeyId] = useState(null);
const [deletingRunnerId, setDeletingRunnerId] = useState(null);
const revokeAPIKey = async (keyId) => {
if (!confirm('Are you sure you want to delete this API key? This action cannot be undone.')) {
return;
}
setDeletingKeyId(keyId);
try {
await admin.deleteAPIKey(keyId);
await loadAPIKeys();
} catch (error) {
console.error('Failed to delete API key:', error);
alert('Failed to delete API key');
} finally {
setDeletingKeyId(null);
}
};
const deleteRunner = async (runnerId) => {
if (!confirm('Are you sure you want to delete this runner?')) {
return;
}
setDeletingRunnerId(runnerId);
try {
await admin.deleteRunner(runnerId);
await loadRunners();
} catch (error) {
console.error('Failed to delete runner:', error);
alert('Failed to delete runner');
} finally {
setDeletingRunnerId(null);
}
};
const copyToClipboard = (text) => {
navigator.clipboard.writeText(text);
alert('Copied to clipboard!');
};
const isAPIKeyActive = (isActive) => {
return isActive;
};
return (
<div className="space-y-6">
<div className="flex space-x-4 border-b border-gray-700">
<button
onClick={() => {
setActiveSection('api-keys');
setSelectedUser(null);
}}
className={`py-2 px-4 border-b-2 font-medium ${
activeSection === 'api-keys'
? 'border-orange-500 text-orange-500'
: 'border-transparent text-gray-400 hover:text-gray-300'
}`}
>
API Keys
</button>
<button
onClick={() => {
setActiveSection('runners');
setSelectedUser(null);
}}
className={`py-2 px-4 border-b-2 font-medium ${
activeSection === 'runners'
? 'border-orange-500 text-orange-500'
: 'border-transparent text-gray-400 hover:text-gray-300'
}`}
>
Runner Management
</button>
<button
onClick={() => {
setActiveSection('users');
setSelectedUser(null);
}}
className={`py-2 px-4 border-b-2 font-medium ${
activeSection === 'users'
? 'border-orange-500 text-orange-500'
: 'border-transparent text-gray-400 hover:text-gray-300'
}`}
>
Users
</button>
<button
onClick={() => {
setActiveSection('settings');
setSelectedUser(null);
}}
className={`py-2 px-4 border-b-2 font-medium ${
activeSection === 'settings'
? 'border-orange-500 text-orange-500'
: 'border-transparent text-gray-400 hover:text-gray-300'
}`}
>
Settings
</button>
</div>
{activeSection === 'api-keys' && (
<div className="space-y-6">
<div className="bg-gray-800 rounded-lg shadow-md p-6 border border-gray-700">
<h2 className="text-xl font-semibold mb-4 text-gray-100">Generate API Key</h2>
<div className="space-y-4">
<div className="grid grid-cols-1 md:grid-cols-3 gap-4">
<div>
<label className="block text-sm font-medium text-gray-300 mb-2">
Name *
</label>
<input
type="text"
value={newAPIKeyName}
onChange={(e) => setNewAPIKeyName(e.target.value)}
placeholder="e.g., production-runner-01"
className="w-full px-3 py-2 bg-gray-900 border border-gray-600 rounded-lg text-gray-100 focus:ring-2 focus:ring-orange-500 focus:border-transparent"
required
/>
</div>
<div>
<label className="block text-sm font-medium text-gray-300 mb-2">
Description
</label>
<input
type="text"
value={newAPIKeyDescription}
onChange={(e) => setNewAPIKeyDescription(e.target.value)}
placeholder="Optional description"
className="w-full px-3 py-2 bg-gray-900 border border-gray-600 rounded-lg text-gray-100 focus:ring-2 focus:ring-orange-500 focus:border-transparent"
/>
</div>
<div>
<label className="block text-sm font-medium text-gray-300 mb-2">
Scope
</label>
<select
value={newAPIKeyScope}
onChange={(e) => setNewAPIKeyScope(e.target.value)}
className="w-full px-3 py-2 bg-gray-900 border border-gray-600 rounded-lg text-gray-100 focus:ring-2 focus:ring-orange-500 focus:border-transparent"
>
<option value="user">User - Only jobs from API key owner</option>
<option value="manager">Manager - All jobs from any user</option>
</select>
</div>
</div>
<div className="flex justify-end">
<button
onClick={generateAPIKey}
disabled={loading || !newAPIKeyName.trim()}
className="px-6 py-2 bg-orange-600 text-white rounded-lg hover:bg-orange-500 disabled:opacity-50 disabled:cursor-not-allowed transition-colors"
>
Generate API Key
</button>
</div>
</div>
{newAPIKey && (
<div className="mt-4 p-4 bg-green-400/20 border border-green-400/50 rounded-lg">
<p className="text-sm font-medium text-green-400 mb-2">New API Key Generated:</p>
<div className="space-y-2">
<div className="flex items-center gap-2">
<code className="flex-1 px-3 py-2 bg-gray-900 border border-green-400/50 rounded text-sm font-mono break-all text-gray-100">
{newAPIKey.key}
</code>
<button
onClick={() => copyToClipboard(newAPIKey.key)}
className="px-4 py-2 bg-green-600 text-white rounded hover:bg-green-500 transition-colors text-sm whitespace-nowrap"
>
Copy Key
</button>
</div>
<div className="text-xs text-green-400/80">
<p><strong>Name:</strong> {newAPIKey.name}</p>
{newAPIKey.description && <p><strong>Description:</strong> {newAPIKey.description}</p>}
</div>
<p className="text-xs text-green-400/80 mt-2">
Save this API key securely. It will not be shown again.
</p>
</div>
</div>
)}
</div>
<div className="bg-gray-800 rounded-lg shadow-md p-6 border border-gray-700">
<h2 className="text-xl font-semibold mb-4 text-gray-100">API Keys</h2>
{loading ? (
<LoadingSpinner size="sm" className="py-8" />
) : !apiKeys || apiKeys.length === 0 ? (
<p className="text-gray-400 text-center py-8">No API keys generated yet.</p>
) : (
<div className="overflow-x-auto">
<table className="min-w-full divide-y divide-gray-700">
<thead className="bg-gray-900">
<tr>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Name
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Scope
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Key Prefix
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Status
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Created At
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Actions
</th>
</tr>
</thead>
<tbody className="bg-gray-800 divide-y divide-gray-700">
{apiKeys.map((key) => {
return (
<tr key={key.id}>
<td className="px-6 py-4 whitespace-nowrap">
<div>
<div className="text-sm font-medium text-gray-100">{key.name}</div>
{key.description && (
<div className="text-sm text-gray-400">{key.description}</div>
)}
</div>
</td>
<td className="px-6 py-4 whitespace-nowrap">
<span className={`px-2 py-1 text-xs font-medium rounded-full ${
key.scope === 'manager'
? 'bg-purple-400/20 text-purple-400'
: 'bg-blue-400/20 text-blue-400'
}`}>
{key.scope === 'manager' ? 'Manager' : 'User'}
</span>
</td>
<td className="px-6 py-4 whitespace-nowrap">
<code className="text-sm font-mono text-gray-300">
{key.key_prefix}
</code>
</td>
<td className="px-6 py-4 whitespace-nowrap">
{!key.is_active ? (
<span className="px-2 py-1 text-xs font-medium rounded-full bg-gray-500/20 text-gray-400">
Revoked
</span>
) : (
<span className="px-2 py-1 text-xs font-medium rounded-full bg-green-400/20 text-green-400">
Active
</span>
)}
</td>
<td className="px-6 py-4 whitespace-nowrap text-sm text-gray-400">
{new Date(key.created_at).toLocaleString()}
</td>
<td className="px-6 py-4 whitespace-nowrap text-sm space-x-2">
<button
onClick={() => revokeAPIKey(key.id)}
disabled={deletingKeyId === key.id}
className="text-red-400 hover:text-red-300 font-medium disabled:opacity-50 disabled:cursor-not-allowed"
title="Delete API key"
>
{deletingKeyId === key.id ? 'Deleting...' : 'Delete'}
</button>
</td>
</tr>
);
})}
</tbody>
</table>
</div>
)}
</div>
</div>
)}
{activeSection === 'runners' && (
<div className="bg-gray-800 rounded-lg shadow-md p-6 border border-gray-700">
<h2 className="text-xl font-semibold mb-4 text-gray-100">Runner Management</h2>
{loading ? (
<LoadingSpinner size="sm" className="py-8" />
) : !runners || runners.length === 0 ? (
<p className="text-gray-400 text-center py-8">No runners registered.</p>
) : (
<div className="overflow-x-auto">
<table className="min-w-full divide-y divide-gray-700">
<thead className="bg-gray-900">
<tr>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Name
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Hostname
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Status
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
API Key
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Priority
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Capabilities
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Last Heartbeat
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Actions
</th>
</tr>
</thead>
<tbody className="bg-gray-800 divide-y divide-gray-700">
{runners.map((runner) => {
const isOnline = new Date(runner.last_heartbeat) > new Date(Date.now() - 60000);
return (
<tr key={runner.id}>
<td className="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-100">
{runner.name}
</td>
<td className="px-6 py-4 whitespace-nowrap text-sm text-gray-400">
{runner.hostname}
</td>
<td className="px-6 py-4 whitespace-nowrap">
<span
className={`px-2 py-1 text-xs font-medium rounded-full ${
isOnline
? 'bg-green-400/20 text-green-400'
: 'bg-gray-500/20 text-gray-400'
}`}
>
{isOnline ? 'Online' : 'Offline'}
</span>
</td>
<td className="px-6 py-4 whitespace-nowrap text-sm text-gray-400">
<code className="text-xs font-mono bg-gray-900 px-2 py-1 rounded">
jk_r{runner.id % 10}_...
</code>
</td>
<td className="px-6 py-4 whitespace-nowrap text-sm text-gray-400">
{runner.priority}
</td>
<td className="px-6 py-4 whitespace-nowrap text-sm text-gray-400">
{runner.capabilities ? (
(() => {
try {
const caps = JSON.parse(runner.capabilities);
const enabled = Object.entries(caps)
.filter(([_, v]) => v)
.map(([k, _]) => k)
.join(', ');
return enabled || 'None';
} catch {
return runner.capabilities;
}
})()
) : (
'None'
)}
</td>
<td className="px-6 py-4 whitespace-nowrap text-sm text-gray-400">
{new Date(runner.last_heartbeat).toLocaleString()}
</td>
<td className="px-6 py-4 whitespace-nowrap text-sm">
<button
onClick={() => deleteRunner(runner.id)}
disabled={deletingRunnerId === runner.id}
className="text-red-400 hover:text-red-300 font-medium disabled:opacity-50 disabled:cursor-not-allowed"
>
{deletingRunnerId === runner.id ? 'Deleting...' : 'Delete'}
</button>
</td>
</tr>
);
})}
</tbody>
</table>
</div>
)}
</div>
)}
{activeSection === 'change-password' && passwordChangeUser && (
<div className="bg-gray-800 rounded-lg shadow-md p-6 border border-gray-700">
<button
onClick={() => {
setPasswordChangeUser(null);
setActiveSection('users');
}}
className="text-gray-400 hover:text-gray-300 mb-4 flex items-center gap-2"
>
<svg className="w-5 h-5" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M15 19l-7-7 7-7" />
</svg>
Back to Users
</button>
<PasswordChange
targetUserId={passwordChangeUser.id}
targetUserName={passwordChangeUser.name || passwordChangeUser.email}
onSuccess={() => {
setPasswordChangeUser(null);
setActiveSection('users');
}}
/>
</div>
)}
{activeSection === 'users' && (
<div className="space-y-6">
{selectedUser ? (
<UserJobs
userId={selectedUser.id}
userName={selectedUser.name || selectedUser.email}
onBack={() => setSelectedUser(null)}
/>
) : (
<div className="bg-gray-800 rounded-lg shadow-md p-6 border border-gray-700">
<h2 className="text-xl font-semibold mb-4 text-gray-100">User Management</h2>
{loading ? (
<LoadingSpinner size="sm" className="py-8" />
) : !users || users.length === 0 ? (
<p className="text-gray-400 text-center py-8">No users found.</p>
) : (
<div className="overflow-x-auto">
<table className="min-w-full divide-y divide-gray-700">
<thead className="bg-gray-900">
<tr>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Email
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Name
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Provider
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Admin
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Jobs
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Created
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Actions
</th>
</tr>
</thead>
<tbody className="bg-gray-800 divide-y divide-gray-700">
{users.map((user) => (
<tr key={user.id}>
<td className="px-6 py-4 whitespace-nowrap text-sm text-gray-100">
{user.email}
</td>
<td className="px-6 py-4 whitespace-nowrap text-sm text-gray-300">
{user.name}
</td>
<td className="px-6 py-4 whitespace-nowrap text-sm text-gray-400">
{user.oauth_provider}
</td>
<td className="px-6 py-4 whitespace-nowrap">
<div className="flex items-center gap-2">
{user.is_admin ? (
<span className="px-2 py-1 text-xs font-medium rounded-full bg-orange-400/20 text-orange-400">
Admin
</span>
) : (
<span className="px-2 py-1 text-xs font-medium rounded-full bg-gray-500/20 text-gray-400">
User
</span>
)}
<button
onClick={async () => {
if (user.is_first_user && user.is_admin) {
alert('Cannot remove admin status from the first user');
return;
}
if (!confirm(`Are you sure you want to ${user.is_admin ? 'remove admin privileges from' : 'grant admin privileges to'} ${user.name || user.email}?`)) {
return;
}
try {
await admin.setUserAdminStatus(user.id, !user.is_admin);
await loadUsers();
alert(`Admin status ${user.is_admin ? 'removed' : 'granted'} successfully`);
} catch (error) {
console.error('Failed to update admin status:', error);
const errorMsg = error.message || 'Failed to update admin status';
if (errorMsg.includes('first user')) {
alert('Cannot remove admin status from the first user');
} else {
alert(errorMsg);
}
}
}}
disabled={user.is_first_user && user.is_admin}
className={`text-xs px-2 py-1 rounded ${
user.is_first_user && user.is_admin
? 'text-gray-500 bg-gray-500/10 cursor-not-allowed'
: user.is_admin
? 'text-red-400 hover:text-red-300 bg-red-400/10 hover:bg-red-400/20'
: 'text-green-400 hover:text-green-300 bg-green-400/10 hover:bg-green-400/20'
} transition-colors`}
title={user.is_first_user && user.is_admin ? 'First user must remain admin' : user.is_admin ? 'Remove admin privileges' : 'Grant admin privileges'}
>
{user.is_admin ? 'Remove Admin' : 'Make Admin'}
</button>
</div>
</td>
<td className="px-6 py-4 whitespace-nowrap text-sm text-gray-400">
{user.job_count || 0}
</td>
<td className="px-6 py-4 whitespace-nowrap text-sm text-gray-400">
{new Date(user.created_at).toLocaleString()}
</td>
<td className="px-6 py-4 whitespace-nowrap text-sm">
<div className="flex gap-3">
<button
onClick={() => setSelectedUser(user)}
className="text-orange-400 hover:text-orange-300 font-medium"
>
View Jobs
</button>
{user.oauth_provider === 'local' && (
<button
onClick={() => {
const userForPassword = { id: user.id, name: user.name || user.email };
setPasswordChangeUser(userForPassword);
setSelectedUser(null);
setActiveSection('change-password');
}}
className="text-blue-400 hover:text-blue-300 font-medium"
>
Change Password
</button>
)}
</div>
</td>
</tr>
))}
</tbody>
</table>
</div>
)}
</div>
)}
</div>
)}
{activeSection === 'settings' && (
<div className="space-y-6">
<PasswordChange />
<div className="bg-gray-800 rounded-lg shadow-md p-6 border border-gray-700">
<h2 className="text-xl font-semibold mb-6 text-gray-100">System Settings</h2>
<div className="space-y-6">
<div className="flex items-center justify-between p-4 bg-gray-900 rounded-lg border border-gray-700">
<div>
<h3 className="text-lg font-medium text-gray-100 mb-1">User Registration</h3>
<p className="text-sm text-gray-400">
{registrationEnabled
? 'New users can register via OAuth or local login'
: 'Registration is disabled. Only existing users can log in.'}
</p>
</div>
<div className="flex items-center gap-4">
<span className={`text-sm font-medium ${registrationEnabled ? 'text-green-400' : 'text-red-400'}`}>
{registrationEnabled ? 'Enabled' : 'Disabled'}
</span>
<button
onClick={handleToggleRegistration}
disabled={loading}
className={`px-6 py-2 rounded-lg font-medium transition-colors ${
registrationEnabled
? 'bg-red-600 hover:bg-red-500 text-white'
: 'bg-green-600 hover:bg-green-500 text-white'
} disabled:opacity-50 disabled:cursor-not-allowed`}
>
{registrationEnabled ? 'Disable' : 'Enable'}
</button>
</div>
</div>
</div>
</div>
</div>
)}
</div>
);
}


@@ -1,41 +0,0 @@
import React from 'react';
class ErrorBoundary extends React.Component {
constructor(props) {
super(props);
this.state = { hasError: false, error: null };
}
static getDerivedStateFromError(error) {
return { hasError: true, error };
}
componentDidCatch(error, errorInfo) {
console.error('ErrorBoundary caught an error:', error, errorInfo);
}
render() {
if (this.state.hasError) {
return (
<div className="p-6 bg-red-400/20 border border-red-400/50 rounded-lg text-red-400">
<h2 className="text-xl font-semibold mb-2">Something went wrong</h2>
<p className="mb-4">{this.state.error?.message || 'An unexpected error occurred'}</p>
<button
onClick={() => {
this.setState({ hasError: false, error: null });
window.location.reload();
}}
className="px-4 py-2 bg-red-600 text-white rounded-lg hover:bg-red-500 transition-colors"
>
Reload Page
</button>
</div>
);
}
return this.props.children;
}
}
export default ErrorBoundary;


@@ -1,26 +0,0 @@
import React from 'react';
/**
 * Shared ErrorMessage component for consistent error display.
 * React escapes text children by default, so rendering the message as a
 * plain child is XSS-safe without a manual sanitize/dangerouslySetInnerHTML round trip.
 */
export default function ErrorMessage({ error, className = '' }) {
if (!error) return null;
const message = typeof error === 'string' ? error : (error.message || 'An error occurred');
return (
<div className={`p-4 bg-red-400/20 border border-red-400/50 rounded-lg text-red-400 ${className}`}>
<p className="font-semibold">Error:</p>
<p>{message}</p>
</div>
);
}


@@ -1,191 +0,0 @@
import { useState } from 'react';
export default function FileExplorer({ files, onDownload, onPreview, onVideoPreview, isImageFile }) {
const [expandedPaths, setExpandedPaths] = useState(new Set()); // Root folder collapsed by default
// Build directory tree from file paths
const buildTree = (files) => {
const tree = {};
files.forEach(file => {
const path = file.file_name;
// Split into path segments; a single filename yields one root-level segment,
// which the loop below already handles correctly
const parts = path.split('/').filter(p => p);
let current = tree;
parts.forEach((part, index) => {
if (!current[part]) {
current[part] = {
name: part,
isFile: index === parts.length - 1,
file: index === parts.length - 1 ? file : null,
children: {},
path: parts.slice(0, index + 1).join('/')
};
}
current = current[part].children;
});
});
return tree;
};
const togglePath = (path) => {
const newExpanded = new Set(expandedPaths);
if (newExpanded.has(path)) {
newExpanded.delete(path);
} else {
newExpanded.add(path);
}
setExpandedPaths(newExpanded);
};
const renderTree = (node, level = 0, parentPath = '') => {
const items = Object.values(node).sort((a, b) => {
// Directories first, then files
if (a.isFile !== b.isFile) {
return a.isFile ? 1 : -1;
}
return a.name.localeCompare(b.name);
});
return items.map((item) => {
const fullPath = parentPath ? `${parentPath}/${item.name}` : item.name;
const isExpanded = expandedPaths.has(fullPath);
const indent = level * 20;
if (item.isFile) {
const file = item.file;
const isImage = isImageFile && isImageFile(file.file_name);
const lowerName = file.file_name.toLowerCase();
const isVideo = lowerName.endsWith('.mp4');
const sizeMB = (file.file_size / 1024 / 1024).toFixed(2);
const isArchive = lowerName.endsWith('.tar') || lowerName.endsWith('.zip');
return (
<div key={fullPath} className="flex items-center justify-between py-1.5 hover:bg-gray-800/50 rounded px-2" style={{ paddingLeft: `${indent + 8}px` }}>
<div className="flex items-center gap-2 flex-1 min-w-0">
<span className="text-gray-500 text-sm">{isArchive ? '📦' : isVideo ? '🎬' : '📄'}</span>
<span className="text-gray-200 text-sm truncate" title={item.name}>
{item.name}
</span>
<span className="text-gray-500 text-xs ml-2">{sizeMB} MB</span>
</div>
<div className="flex gap-2 ml-4 shrink-0">
{isVideo && onVideoPreview && (
<button
onClick={() => onVideoPreview(file)}
className="px-2 py-1 bg-purple-600 text-white rounded text-xs hover:bg-purple-500 transition-colors"
title="Play Video"
>
▶
</button>
)}
{isImage && onPreview && (
<button
onClick={() => onPreview(file)}
className="px-2 py-1 bg-blue-600 text-white rounded text-xs hover:bg-blue-500 transition-colors"
title="Preview"
>
👁
</button>
)}
{onDownload && file.id && (
<button
onClick={() => onDownload(file.id, file.file_name)}
className="px-2 py-1 bg-orange-600 text-white rounded text-xs hover:bg-orange-500 transition-colors"
title="Download"
>
⬇
</button>
)}
</div>
</div>
);
} else {
const hasChildren = Object.keys(item.children).length > 0;
return (
<div key={fullPath}>
<div
className="flex items-center gap-2 py-1.5 hover:bg-gray-800/50 rounded px-2 cursor-pointer select-none"
style={{ paddingLeft: `${indent + 8}px` }}
onClick={() => hasChildren && togglePath(fullPath)}
>
<span className="text-gray-400 text-xs w-4 flex items-center justify-center">
{hasChildren ? (isExpanded ? '▼' : '▶') : '•'}
</span>
<span className="text-gray-500 text-sm">
{hasChildren ? (isExpanded ? '📂' : '📁') : '📁'}
</span>
<span className="text-gray-300 text-sm font-medium">{item.name}</span>
{hasChildren && (
<span className="text-gray-500 text-xs ml-2">
({Object.keys(item.children).length})
</span>
)}
</div>
{hasChildren && isExpanded && (
<div className="ml-2">
{renderTree(item.children, level + 1, fullPath)}
</div>
)}
</div>
);
}
});
};
const tree = buildTree(files);
if (Object.keys(tree).length === 0) {
return (
<div className="text-gray-400 text-sm py-4 text-center">
No files
</div>
);
}
// Wrap tree in a root folder
const rootExpanded = expandedPaths.has('');
return (
<div className="bg-gray-900 rounded-lg border border-gray-700 p-3">
<div className="space-y-1">
<div>
<div
className="flex items-center gap-2 py-1.5 hover:bg-gray-800/50 rounded px-2 cursor-pointer select-none"
onClick={() => togglePath('')}
>
<span className="text-gray-400 text-xs w-4 flex items-center justify-center">
{rootExpanded ? '▼' : '▶'}
</span>
<span className="text-gray-500 text-sm">
{rootExpanded ? '📂' : '📁'}
</span>
<span className="text-gray-300 text-sm font-medium">Files</span>
<span className="text-gray-500 text-xs ml-2">
({Object.keys(tree).length})
</span>
</div>
{rootExpanded && (
<div className="ml-2">
{renderTree(tree)}
</div>
)}
</div>
</div>
</div>
);
}

File diff suppressed because it is too large


@@ -1,289 +0,0 @@
import { useState, useEffect, useRef } from 'react';
import { jobs, normalizeArrayResponse } from '../utils/api';
import { wsManager } from '../utils/websocket';
import JobDetails from './JobDetails';
import LoadingSpinner from './LoadingSpinner';
export default function JobList() {
const [jobList, setJobList] = useState([]);
const [loading, setLoading] = useState(true);
const [selectedJob, setSelectedJob] = useState(null);
const [pagination, setPagination] = useState({ total: 0, limit: 50, offset: 0 });
const [hasMore, setHasMore] = useState(true);
const listenerIdRef = useRef(null);
useEffect(() => {
loadJobs();
// Use shared WebSocket manager for real-time updates
listenerIdRef.current = wsManager.subscribe('joblist', {
open: () => {
console.log('JobList: Shared WebSocket connected');
// Load initial job list via HTTP to get current state
loadJobs();
},
message: (data) => {
console.log('JobList: Shared WebSocket message received:', data.type, data.channel, data);
// Handle jobs channel messages (always broadcasted)
if (data.channel === 'jobs') {
if (data.type === 'job_update' && data.data) {
console.log('JobList: Updating job:', data.job_id, data.data);
// Update job in list
setJobList(prev => {
const prevArray = Array.isArray(prev) ? prev : [];
const index = prevArray.findIndex(j => j.id === data.job_id);
if (index >= 0) {
const updated = [...prevArray];
updated[index] = { ...updated[index], ...data.data };
console.log('JobList: Updated job at index', index, updated[index]);
return updated;
}
// If job not in current page, reload to get updated list
if (data.data.status === 'completed' || data.data.status === 'failed') {
loadJobs();
}
return prevArray;
});
} else if (data.type === 'job_created' && data.data) {
console.log('JobList: New job created:', data.job_id, data.data);
// New job created - add to list
setJobList(prev => {
const prevArray = Array.isArray(prev) ? prev : [];
// Check if job already exists (avoid duplicates)
if (prevArray.findIndex(j => j.id === data.job_id) >= 0) {
return prevArray;
}
// Add new job at the beginning
return [data.data, ...prevArray];
});
}
} else if (data.type === 'connected') {
// Connection established
console.log('JobList: WebSocket connected');
}
},
error: (error) => {
console.error('JobList: Shared WebSocket error:', error);
},
close: (event) => {
console.log('JobList: Shared WebSocket closed:', event);
}
});
// Ensure connection is established
wsManager.connect();
return () => {
if (listenerIdRef.current) {
wsManager.unsubscribe(listenerIdRef.current);
listenerIdRef.current = null;
}
};
}, []);
const loadJobs = async (append = false) => {
try {
const offset = append ? pagination.offset + pagination.limit : 0;
const result = await jobs.listSummary({
limit: pagination.limit,
offset,
sort: 'created_at:desc'
});
// Handle both old format (array) and new format (object with data, total, etc.)
const jobsArray = normalizeArrayResponse(result);
const total = result.total !== undefined ? result.total : jobsArray.length;
if (append) {
setJobList(prev => {
const prevArray = Array.isArray(prev) ? prev : [];
return [...prevArray, ...jobsArray];
});
setPagination(prev => ({ ...prev, offset, total }));
} else {
setJobList(jobsArray);
setPagination({ total, limit: result.limit || pagination.limit, offset: result.offset || 0 });
}
setHasMore(offset + jobsArray.length < total);
} catch (error) {
console.error('Failed to load jobs:', error);
// Ensure jobList is always an array even on error
if (!append) {
setJobList([]);
}
} finally {
setLoading(false);
}
};
const loadMore = () => {
if (!loading && hasMore) {
loadJobs(true);
}
};
// Keep selectedJob in sync with the job list when it refreshes
useEffect(() => {
if (selectedJob && jobList.length > 0) {
const freshJob = jobList.find(j => j.id === selectedJob.id);
if (freshJob) {
// Update to the fresh object from the list to keep it in sync
setSelectedJob(freshJob);
} else {
// Job was deleted or no longer exists, clear selection
setSelectedJob(null);
}
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [jobList]); // Only depend on jobList, not selectedJob to avoid infinite loops
const handleCancel = async (jobId) => {
if (!confirm('Are you sure you want to cancel this job?')) return;
try {
await jobs.cancel(jobId);
loadJobs();
} catch (error) {
alert('Failed to cancel job: ' + error.message);
}
};
const handleDelete = async (jobId) => {
if (!confirm('Are you sure you want to permanently delete this job? This action cannot be undone.')) return;
try {
// Optimistically update the list
setJobList(prev => {
const prevArray = Array.isArray(prev) ? prev : [];
return prevArray.filter(j => j.id !== jobId);
});
if (selectedJob && selectedJob.id === jobId) {
setSelectedJob(null);
}
// Then actually delete
await jobs.delete(jobId);
// Reload to ensure consistency
loadJobs();
} catch (error) {
// On error, reload to restore correct state
loadJobs();
alert('Failed to delete job: ' + error.message);
}
};
const getStatusColor = (status) => {
const colors = {
pending: 'bg-yellow-400/20 text-yellow-400',
running: 'bg-orange-400/20 text-orange-400',
completed: 'bg-green-400/20 text-green-400',
failed: 'bg-red-400/20 text-red-400',
cancelled: 'bg-gray-500/20 text-gray-400',
};
return colors[status] || colors.pending;
};
if (loading && jobList.length === 0) {
return <LoadingSpinner size="md" className="h-64" />;
}
if (jobList.length === 0) {
return (
<div className="text-center py-12">
<p className="text-gray-400 text-lg">No jobs yet. Submit a job to get started!</p>
</div>
);
}
return (
<>
<div className="grid gap-6 md:grid-cols-2 lg:grid-cols-3">
{jobList.map((job) => (
<div
key={job.id}
className="bg-gray-800 rounded-lg shadow-md hover:shadow-lg transition-shadow p-6 border-l-4 border-orange-500 border border-gray-700"
>
<div className="flex justify-between items-start mb-4">
<h3 className="text-xl font-semibold text-gray-100">{job.name}</h3>
<span className={`px-3 py-1 rounded-full text-xs font-medium ${getStatusColor(job.status)}`}>
{job.status}
</span>
</div>
<div className="space-y-2 text-sm text-gray-400 mb-4">
{job.frame_start !== undefined && job.frame_end !== undefined && (
<p>Frames: {job.frame_start} - {job.frame_end}</p>
)}
{job.output_format && <p>Format: {job.output_format}</p>}
<p>Created: {new Date(job.created_at).toLocaleString()}</p>
</div>
<div className="mb-4">
<div className="flex justify-between text-xs text-gray-400 mb-1">
<span>Progress</span>
<span>{(job.progress ?? 0).toFixed(1)}%</span>
</div>
<div className="w-full bg-gray-700 rounded-full h-2">
<div
className="bg-orange-500 h-2 rounded-full transition-all duration-300"
style={{ width: `${job.progress ?? 0}%` }}
></div>
</div>
</div>
<div className="flex gap-2">
<button
onClick={() => {
// Fetch full job details when viewing
jobs.get(job.id).then(fullJob => {
setSelectedJob(fullJob);
}).catch(err => {
console.error('Failed to load job details:', err);
setSelectedJob(job); // Fallback to summary
});
}}
className="flex-1 px-4 py-2 bg-orange-600 text-white rounded-lg hover:bg-orange-500 transition-colors font-medium"
>
View Details
</button>
{(job.status === 'pending' || job.status === 'running') && (
<button
onClick={() => handleCancel(job.id)}
className="px-4 py-2 bg-gray-700 text-gray-200 rounded-lg hover:bg-gray-600 transition-colors font-medium"
>
Cancel
</button>
)}
{(job.status === 'completed' || job.status === 'failed' || job.status === 'cancelled') && (
<button
onClick={() => handleDelete(job.id)}
className="px-4 py-2 bg-red-600 text-white rounded-lg hover:bg-red-500 transition-colors font-medium"
title="Delete job"
>
Delete
</button>
)}
</div>
</div>
))}
</div>
{hasMore && (
<div className="flex justify-center mt-6">
<button
onClick={loadMore}
disabled={loading}
className="px-6 py-2 bg-gray-700 text-gray-200 rounded-lg hover:bg-gray-600 transition-colors font-medium disabled:opacity-50"
>
{loading ? 'Loading...' : 'Load More'}
</button>
</div>
)}
{selectedJob && (
<JobDetails
job={selectedJob}
onClose={() => setSelectedJob(null)}
onUpdate={loadJobs}
/>
)}
</>
);
}

File diff suppressed because it is too large


@@ -1,76 +0,0 @@
import { useAuth } from '../hooks/useAuth';
export default function Layout({ children, activeTab, onTabChange }) {
const { user, logout } = useAuth();
const isAdmin = user?.is_admin || false;
// Note: If user becomes null, App.jsx will handle showing Login component
// We don't need to redirect here as App.jsx already checks for !user
return (
<div className="min-h-screen bg-gray-900">
<header className="bg-gray-800 shadow-sm border-b border-gray-700">
<div className="max-w-7xl mx-auto px-4 sm:px-6 lg:px-8">
<div className="flex justify-between items-center h-16">
<h1 className="text-2xl font-bold text-transparent bg-clip-text bg-gradient-to-r from-orange-500 to-amber-500">
JiggaBlend
</h1>
<div className="flex items-center gap-4">
<span className="text-gray-300">{user?.name || user?.email}</span>
<button
onClick={logout}
className="px-4 py-2 text-sm font-medium text-gray-200 bg-gray-700 border border-gray-600 rounded-lg hover:bg-gray-600 transition-colors"
>
Logout
</button>
</div>
</div>
</div>
</header>
<nav className="bg-gray-800 border-b border-gray-700">
<div className="max-w-7xl mx-auto px-4 sm:px-6 lg:px-8">
<div className="flex space-x-8">
<button
onClick={() => onTabChange('jobs')}
className={`py-4 px-1 border-b-2 font-medium text-sm transition-colors ${
activeTab === 'jobs'
? 'border-orange-500 text-orange-500'
: 'border-transparent text-gray-400 hover:text-gray-300 hover:border-gray-600'
}`}
>
Jobs
</button>
<button
onClick={() => onTabChange('submit')}
className={`py-4 px-1 border-b-2 font-medium text-sm transition-colors ${
activeTab === 'submit'
? 'border-orange-500 text-orange-500'
: 'border-transparent text-gray-400 hover:text-gray-300 hover:border-gray-600'
}`}
>
Submit Job
</button>
{isAdmin && (
<button
onClick={() => onTabChange('admin')}
className={`py-4 px-1 border-b-2 font-medium text-sm transition-colors ${
activeTab === 'admin'
? 'border-orange-500 text-orange-500'
: 'border-transparent text-gray-400 hover:text-gray-300 hover:border-gray-600'
}`}
>
Admin
</button>
)}
</div>
</div>
</nav>
<main className="max-w-7xl mx-auto px-4 sm:px-6 lg:px-8 py-8">
{children}
</main>
</div>
);
}


@@ -1,19 +0,0 @@
import React from 'react';
/**
* Shared LoadingSpinner component with size variants
*/
export default function LoadingSpinner({ size = 'md', className = '', borderColor = 'border-orange-500' }) {
const sizeClasses = {
sm: 'h-8 w-8',
md: 'h-12 w-12',
lg: 'h-16 w-16',
};
return (
<div className={`flex justify-center items-center ${className}`}>
<div className={`animate-spin rounded-full border-b-2 ${borderColor} ${sizeClasses[size]}`}></div>
</div>
);
}


@@ -1,277 +0,0 @@
import { useState, useEffect } from 'react';
import { auth } from '../utils/api';
import ErrorMessage from './ErrorMessage';
export default function Login() {
const [providers, setProviders] = useState({
google: false,
discord: false,
local: false,
});
const [showRegister, setShowRegister] = useState(false);
const [email, setEmail] = useState('');
const [name, setName] = useState('');
const [username, setUsername] = useState('');
const [password, setPassword] = useState('');
const [confirmPassword, setConfirmPassword] = useState('');
const [error, setError] = useState('');
const [loading, setLoading] = useState(false);
useEffect(() => {
checkAuthProviders();
// Check for registration disabled error in URL
const urlParams = new URLSearchParams(window.location.search);
if (urlParams.get('error') === 'registration_disabled') {
setError('Registration is currently disabled. Please contact an administrator.');
}
}, []);
const checkAuthProviders = async () => {
try {
const result = await auth.getProviders();
setProviders({
google: result.google || false,
discord: result.discord || false,
local: result.local || false,
});
} catch (error) {
// If endpoint fails, assume no providers are available
console.error('Failed to check auth providers:', error);
setProviders({ google: false, discord: false, local: false });
}
};
const handleLocalLogin = async (e) => {
e.preventDefault();
setError('');
setLoading(true);
try {
await auth.localLogin(username, password);
// Reload page to trigger auth check in App component
window.location.reload();
} catch (err) {
setError(err.message || 'Login failed');
setLoading(false);
}
};
const handleLocalRegister = async (e) => {
e.preventDefault();
setError('');
if (password !== confirmPassword) {
setError('Passwords do not match');
return;
}
if (password.length < 8) {
setError('Password must be at least 8 characters long');
return;
}
setLoading(true);
try {
await auth.localRegister(email, name, password);
// Reload page to trigger auth check in App component
window.location.reload();
} catch (err) {
setError(err.message || 'Registration failed');
setLoading(false);
}
};
return (
<div className="min-h-screen flex items-center justify-center bg-gray-900">
<div className="bg-gray-800 rounded-2xl shadow-2xl p-8 w-full max-w-md border border-gray-700">
<div className="text-center mb-8">
<h1 className="text-4xl font-bold text-transparent bg-clip-text bg-gradient-to-r from-orange-500 to-amber-500 mb-2">
JiggaBlend
</h1>
<p className="text-gray-400 text-lg">Blender Render Farm</p>
</div>
<div className="space-y-4">
<ErrorMessage error={error} className="text-sm" />
{providers.local && (
<div className="pb-4 border-b border-gray-700">
<div className="flex gap-2 mb-4">
<button
type="button"
onClick={() => {
setShowRegister(false);
setError('');
}}
className={`flex-1 py-2 px-4 rounded-lg font-medium transition-colors ${
!showRegister
? 'bg-orange-600 text-white'
: 'bg-gray-700 text-gray-300 hover:bg-gray-600'
}`}
>
Login
</button>
<button
type="button"
onClick={() => {
setShowRegister(true);
setError('');
}}
className={`flex-1 py-2 px-4 rounded-lg font-medium transition-colors ${
showRegister
? 'bg-orange-600 text-white'
: 'bg-gray-700 text-gray-300 hover:bg-gray-600'
}`}
>
Register
</button>
</div>
{!showRegister ? (
<form onSubmit={handleLocalLogin} className="space-y-4">
<div>
<label htmlFor="username" className="block text-sm font-medium text-gray-300 mb-1">
Email
</label>
<input
id="username"
type="email"
value={username}
onChange={(e) => setUsername(e.target.value)}
required
className="w-full px-4 py-2 bg-gray-900 border border-gray-600 rounded-lg text-gray-100 focus:ring-2 focus:ring-orange-500 focus:border-transparent placeholder-gray-500"
placeholder="Enter your email"
/>
</div>
<div>
<label htmlFor="password" className="block text-sm font-medium text-gray-300 mb-1">
Password
</label>
<input
id="password"
type="password"
value={password}
onChange={(e) => setPassword(e.target.value)}
required
className="w-full px-4 py-2 bg-gray-900 border border-gray-600 rounded-lg text-gray-100 focus:ring-2 focus:ring-orange-500 focus:border-transparent placeholder-gray-500"
placeholder="Enter password"
/>
</div>
<button
type="submit"
disabled={loading}
className="w-full bg-orange-600 text-white font-semibold py-3 px-6 rounded-lg hover:bg-orange-500 transition-all duration-200 shadow-lg disabled:opacity-50 disabled:cursor-not-allowed"
>
{loading ? 'Logging in...' : 'Login'}
</button>
</form>
) : (
<form onSubmit={handleLocalRegister} className="space-y-4">
<div>
<label htmlFor="reg-email" className="block text-sm font-medium text-gray-300 mb-1">
Email
</label>
<input
id="reg-email"
type="email"
value={email}
onChange={(e) => setEmail(e.target.value)}
required
className="w-full px-4 py-2 bg-gray-900 border border-gray-600 rounded-lg text-gray-100 focus:ring-2 focus:ring-orange-500 focus:border-transparent placeholder-gray-500"
placeholder="Enter your email"
/>
</div>
<div>
<label htmlFor="reg-name" className="block text-sm font-medium text-gray-300 mb-1">
Name
</label>
<input
id="reg-name"
type="text"
value={name}
onChange={(e) => setName(e.target.value)}
required
className="w-full px-4 py-2 bg-gray-900 border border-gray-600 rounded-lg text-gray-100 focus:ring-2 focus:ring-orange-500 focus:border-transparent placeholder-gray-500"
placeholder="Enter your name"
/>
</div>
<div>
<label htmlFor="reg-password" className="block text-sm font-medium text-gray-300 mb-1">
Password
</label>
<input
id="reg-password"
type="password"
value={password}
onChange={(e) => setPassword(e.target.value)}
required
minLength={8}
className="w-full px-4 py-2 bg-gray-900 border border-gray-600 rounded-lg text-gray-100 focus:ring-2 focus:ring-orange-500 focus:border-transparent placeholder-gray-500"
placeholder="At least 8 characters"
/>
</div>
<div>
<label htmlFor="reg-confirm-password" className="block text-sm font-medium text-gray-300 mb-1">
Confirm Password
</label>
<input
id="reg-confirm-password"
type="password"
value={confirmPassword}
onChange={(e) => setConfirmPassword(e.target.value)}
required
minLength={8}
className="w-full px-4 py-2 bg-gray-900 border border-gray-600 rounded-lg text-gray-100 focus:ring-2 focus:ring-orange-500 focus:border-transparent placeholder-gray-500"
placeholder="Confirm password"
/>
</div>
<button
type="submit"
disabled={loading}
className="w-full bg-orange-600 text-white font-semibold py-3 px-6 rounded-lg hover:bg-orange-500 transition-all duration-200 shadow-lg disabled:opacity-50 disabled:cursor-not-allowed"
>
{loading ? 'Registering...' : 'Register'}
</button>
</form>
)}
</div>
)}
{providers.google && (
<a
href="/api/auth/google/login"
className="w-full flex items-center justify-center gap-3 bg-gray-700 border-2 border-gray-600 text-gray-200 font-semibold py-3 px-6 rounded-lg hover:bg-gray-600 hover:border-gray-500 transition-all duration-200 shadow-sm"
>
<svg className="w-5 h-5" viewBox="0 0 24 24">
<path fill="#4285F4" d="M22.56 12.25c0-.78-.07-1.53-.2-2.25H12v4.26h5.92c-.26 1.37-1.04 2.53-2.21 3.31v2.77h3.57c2.08-1.92 3.28-4.74 3.28-8.09z"/>
<path fill="#34A853" d="M12 23c2.97 0 5.46-.98 7.28-2.66l-3.57-2.77c-.98.66-2.23 1.06-3.71 1.06-2.86 0-5.29-1.93-6.16-4.53H2.18v2.84C3.99 20.53 7.7 23 12 23z"/>
<path fill="#FBBC05" d="M5.84 14.09c-.22-.66-.35-1.36-.35-2.09s.13-1.43.35-2.09V7.07H2.18C1.43 8.55 1 10.22 1 12s.43 3.45 1.18 4.93l2.85-2.22.81-.62z"/>
<path fill="#EA4335" d="M12 5.38c1.62 0 3.06.56 4.21 1.64l3.15-3.15C17.45 2.09 14.97 1 12 1 7.7 1 3.99 3.47 2.18 7.07l3.66 2.84c.87-2.6 3.3-4.53 6.16-4.53z"/>
</svg>
Continue with Google
</a>
)}
{providers.discord && (
<a
href="/api/auth/discord/login"
className="w-full flex items-center justify-center gap-3 bg-[#5865F2] text-white font-semibold py-3 px-6 rounded-lg hover:bg-[#4752C4] transition-all duration-200 shadow-lg"
>
<svg className="w-5 h-5" fill="currentColor" viewBox="0 0 24 24">
<path d="M20.317 4.37a19.791 19.791 0 0 0-4.885-1.515a.074.074 0 0 0-.079.037c-.21.375-.444.864-.608 1.25a18.27 18.27 0 0 0-5.487 0a12.64 12.64 0 0 0-.617-1.25a.077.077 0 0 0-.079-.037A19.736 19.736 0 0 0 3.677 4.37a.07.07 0 0 0-.032.027C.533 9.046-.32 13.58.099 18.057a.082.082 0 0 0 .031.057a19.9 19.9 0 0 0 5.993 3.03a.078.078 0 0 0 .084-.028a14.09 14.09 0 0 0 1.226-1.994a.076.076 0 0 0-.041-.106a13.107 13.107 0 0 1-1.872-.892a.077.077 0 0 1-.008-.128a10.2 10.2 0 0 0 .372-.292a.074.074 0 0 1 .077-.01c3.928 1.793 8.18 1.793 12.062 0a.074.074 0 0 1 .078.01c.12.098.246.198.373.292a.077.077 0 0 1-.006.127a12.299 12.299 0 0 1-1.873.892a.077.077 0 0 0-.041.107c.36.698.772 1.362 1.225 1.993a.076.076 0 0 0 .084.028a19.839 19.839 0 0 0 6.002-3.03a.077.077 0 0 0 .032-.054c.5-5.177-.838-9.674-3.549-13.66a.061.061 0 0 0-.031-.03zM8.02 15.33c-1.183 0-2.157-1.085-2.157-2.419c0-1.333.956-2.419 2.157-2.419c1.21 0 2.176 1.096 2.157 2.42c0 1.333-.956 2.418-2.157 2.418zm7.975 0c-1.183 0-2.157-1.085-2.157-2.419c0-1.333.955-2.419 2.157-2.419c1.21 0 2.176 1.096 2.157 2.42c0 1.333-.946 2.418-2.157 2.418z"/>
</svg>
Continue with Discord
</a>
)}
{!providers.google && !providers.discord && !providers.local && (
<div className="p-4 bg-yellow-400/20 border border-yellow-400/50 rounded-lg text-yellow-400 text-sm text-center">
No authentication methods are configured. Please contact an administrator.
</div>
)}
</div>
</div>
</div>
);
}


@@ -1,137 +0,0 @@
import { useState } from 'react';
import { auth } from '../utils/api';
import ErrorMessage from './ErrorMessage';
import { useAuth } from '../hooks/useAuth';
export default function PasswordChange({ targetUserId = null, targetUserName = null, onSuccess }) {
const { user } = useAuth();
const [oldPassword, setOldPassword] = useState('');
const [newPassword, setNewPassword] = useState('');
const [confirmPassword, setConfirmPassword] = useState('');
const [error, setError] = useState('');
const [success, setSuccess] = useState('');
const [loading, setLoading] = useState(false);
const isAdmin = user?.is_admin || false;
const isChangingOtherUser = targetUserId !== null && isAdmin;
const handleSubmit = async (e) => {
e.preventDefault();
setError('');
setSuccess('');
if (newPassword !== confirmPassword) {
setError('New passwords do not match');
return;
}
if (newPassword.length < 8) {
setError('Password must be at least 8 characters long');
return;
}
if (!isChangingOtherUser && !oldPassword) {
setError('Old password is required');
return;
}
setLoading(true);
try {
await auth.changePassword(
isChangingOtherUser ? null : oldPassword,
newPassword,
isChangingOtherUser ? targetUserId : null
);
setSuccess('Password changed successfully');
setOldPassword('');
setNewPassword('');
setConfirmPassword('');
if (onSuccess) {
setTimeout(() => {
onSuccess();
}, 1500);
}
} catch (err) {
setError(err.message || 'Failed to change password');
} finally {
setLoading(false);
}
};
return (
<div className="bg-gray-800 rounded-lg shadow-md p-6 border border-gray-700">
<h2 className="text-xl font-semibold mb-4 text-gray-100">
{isChangingOtherUser ? `Change Password for ${targetUserName || 'User'}` : 'Change Password'}
</h2>
<ErrorMessage error={error} className="mb-4 text-sm" />
{success && (
<div className="mb-4 p-3 bg-green-400/20 border border-green-400/50 rounded-lg text-green-400 text-sm">
{success}
</div>
)}
<form onSubmit={handleSubmit} className="space-y-4">
{!isChangingOtherUser && (
<div>
<label htmlFor="old-password" className="block text-sm font-medium text-gray-300 mb-1">
Current Password
</label>
<input
id="old-password"
type="password"
value={oldPassword}
onChange={(e) => setOldPassword(e.target.value)}
required
className="w-full px-4 py-2 bg-gray-900 border border-gray-600 rounded-lg text-gray-100 focus:ring-2 focus:ring-orange-500 focus:border-transparent"
placeholder="Enter current password"
/>
</div>
)}
<div>
<label htmlFor="new-password" className="block text-sm font-medium text-gray-300 mb-1">
New Password
</label>
<input
id="new-password"
type="password"
value={newPassword}
onChange={(e) => setNewPassword(e.target.value)}
required
minLength={8}
className="w-full px-4 py-2 bg-gray-900 border border-gray-600 rounded-lg text-gray-100 focus:ring-2 focus:ring-orange-500 focus:border-transparent"
placeholder="At least 8 characters"
/>
</div>
<div>
<label htmlFor="confirm-password" className="block text-sm font-medium text-gray-300 mb-1">
Confirm New Password
</label>
<input
id="confirm-password"
type="password"
value={confirmPassword}
onChange={(e) => setConfirmPassword(e.target.value)}
required
minLength={8}
className="w-full px-4 py-2 bg-gray-900 border border-gray-600 rounded-lg text-gray-100 focus:ring-2 focus:ring-orange-500 focus:border-transparent"
placeholder="Confirm new password"
/>
</div>
<button
type="submit"
disabled={loading}
className="w-full bg-orange-600 text-white font-semibold py-2 px-4 rounded-lg hover:bg-orange-500 transition-colors disabled:opacity-50 disabled:cursor-not-allowed"
>
{loading ? 'Changing Password...' : 'Change Password'}
</button>
</form>
</div>
);
}
