GPUNet leverages cutting-edge AI models like Real-ESRGAN, RIFE, and DAIN to upscale, interpolate, and enhance video across distributed GPU clusters at unprecedented speed.
Every component of GPUNet has been engineered to maximise throughput and minimise latency across distributed GPU workloads.
Bidirectional gRPC streaming pushes tasks directly to idle workers with sub-millisecond overhead. No polling, no wasted cycles.
Real-ESRGAN and custom ONNX models transform SD content to 4K with stunning clarity. Supports 2x and 4x upscaling modes.
Generate intermediate frames using RIFE, DAIN, or SVP methods. Boost video frame rates for smoother, more cinematic playback.
Automatic detection of VRAM capacity, compute capability, and optimal batch sizes. Every worker runs at peak efficiency — zero configuration.
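The detection logic itself isn't shown here; as an illustrative sketch (the function name, the 2 GB overhead figure, and the per-frame cost are assumptions, not GPUNet's actual API), a batch size could be derived from detected VRAM like this:

```rust
/// Illustrative heuristic (not GPUNet's actual code): derive a batch size
/// from detected VRAM, reserving headroom for model weights and runtime.
fn auto_batch_size(vram_gb: u32, gb_per_frame: u32) -> u32 {
    let usable = vram_gb.saturating_sub(2); // assume ~2 GB reserved overhead
    (usable / gb_per_frame.max(1)).max(1)   // always at least one frame
}

fn main() {
    // A 24 GB card at ~2 GB per frame batches (24 - 2) / 2 = 11 frames.
    println!("batch = {}", auto_batch_size(24, 2));
}
```

Clamping to a minimum of one frame keeps the same code path working on small GPUs and the CPU fallback.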
Raft-inspired cluster replication with leader election, log synchronisation, and vector clocks. No single point of failure.
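The core operation behind vector clocks is an element-wise maximum when two replicas reconcile state. A minimal sketch (the type alias and function are illustrative, not GPUNet's actual types):

```rust
use std::collections::HashMap;

/// A vector clock maps node IDs to logical timestamps.
type VectorClock = HashMap<String, u64>;

/// Merging takes the element-wise maximum, so the result dominates
/// both inputs — the standard reconciliation step for replicated logs.
fn merge(a: &VectorClock, b: &VectorClock) -> VectorClock {
    let mut out = a.clone();
    for (node, &t) in b {
        let entry = out.entry(node.clone()).or_insert(0);
        *entry = (*entry).max(t);
    }
    out
}

fn main() {
    let a = VectorClock::from([("n1".to_string(), 4), ("n2".to_string(), 1)]);
    let b = VectorClock::from([("n2".to_string(), 3), ("n3".to_string(), 7)]);
    let merged = merge(&a, &b);
    assert_eq!(merged["n2"], 3); // max(1, 3)
}
```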
Workers earn tokens for each processed frame with performance bonuses. Built-in incentive system for distributed contributor networks.
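The document doesn't specify the reward formula; one plausible shape is a flat per-frame base plus a bonus proportional to time saved against a reference. A hypothetical sketch (all names and numbers are assumptions):

```rust
/// Hypothetical reward formula: a flat per-frame base plus a bonus
/// proportional to how much faster than a reference time the frame ran.
fn frame_reward(base: u64, reference_ms: u64, actual_ms: u64) -> u64 {
    if reference_ms == 0 || actual_ms >= reference_ms {
        return base; // no bonus at or below reference speed
    }
    let bonus = base * (reference_ms - actual_ms) / reference_ms;
    base + bonus
}

fn main() {
    // Finishing in half the reference time earns a 50% bonus: 150 tokens.
    println!("{}", frame_reward(100, 200, 100));
}
```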
From pixelated archives to crystal-clear 4K — powered by distributed GPU computing.
A fully automated pipeline handles every stage of distributed video processing.
User uploads a video through the web portal, selects an upscaling model (e.g. Real-ESRGAN 4x), and optionally enables frame interpolation.
The master server uses FFmpeg to split the video into individual PNG frames. Each frame becomes an independent task, queued in Redis by priority.
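The exact FFmpeg invocation isn't shown in this document; a sketch of how the server might build it with `std::process::Command` (the output path and frame-number pattern are assumptions):

```rust
use std::process::Command;

/// Build the frame-extraction call: `-i` names the input, and the `%06d`
/// pattern makes FFmpeg emit frame_000001.png, frame_000002.png, …
fn split_command(input: &str, out_dir: &str) -> Command {
    let mut cmd = Command::new("ffmpeg");
    cmd.arg("-i").arg(input).arg(format!("{out_dir}/frame_%06d.png"));
    cmd
}

fn main() {
    let cmd = split_command("input.mp4", "frames");
    println!("{:?}", cmd); // inspect the command without executing it
}
```

Calling `.status()` or `.output()` on the returned `Command` would run the extraction; each resulting PNG maps to one Redis-queued task.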
The push-based distributor identifies idle GPU workers via their bidirectional gRPC stream and sends frame data directly — least-loaded worker first.
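The least-loaded-first selection above can be sketched as a filter over idle workers followed by a minimum by in-flight count (the struct fields are illustrative, not GPUNet's actual worker model):

```rust
/// Illustrative worker state as the distributor might track it.
struct Worker {
    id: u32,
    idle: bool,
    in_flight: u32,
}

/// Among idle workers, pick the one with the fewest frames in flight.
fn pick_worker(workers: &[Worker]) -> Option<u32> {
    workers
        .iter()
        .filter(|w| w.idle)
        .min_by_key(|w| w.in_flight)
        .map(|w| w.id)
}

fn main() {
    let pool = [
        Worker { id: 1, idle: false, in_flight: 0 },
        Worker { id: 2, idle: true, in_flight: 3 },
        Worker { id: 3, idle: true, in_flight: 1 },
    ];
    assert_eq!(pick_worker(&pool), Some(3)); // idle and least loaded
}
```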
Each worker runs the ONNX model on its GPU (CUDA, CoreML, or CPU fallback) with auto-tuned tile sizes and batch parameters. Post-processing filters are applied automatically.
Processed frames stream back to the server via chunked gRPC uploads. The server tracks progress in real time and awards tokens to the contributing worker.
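Chunking itself is just splitting the frame's bytes into fixed-size pieces before streaming; a sketch (the 64 KiB chunk size is an assumption, not a documented value):

```rust
/// Split a frame's bytes into fixed-size pieces, the way a chunked
/// upload would stream them over gRPC.
fn chunk_frame(frame: &[u8], chunk_size: usize) -> Vec<&[u8]> {
    frame.chunks(chunk_size.max(1)).collect()
}

fn main() {
    let frame = vec![0u8; 150_000];
    let chunks = chunk_frame(&frame, 64 * 1024);
    // Two full 64 KiB chunks plus a smaller trailing chunk.
    println!("{} chunks, last = {} bytes", chunks.len(), chunks.last().unwrap().len());
}
```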
Once all frames are collected, FFmpeg merges them into the final video. The user receives a notification and can download the enhanced result.
A layered architecture separating user interfaces, orchestration, storage, and compute.
Bidirectional gRPC streaming with hybrid task distribution
Measured with Real-ESRGAN 4x model, upscaling 1080p frames to 4K resolution.
| GPU | VRAM | Throughput | Time per Frame |
|---|---|---|---|
| NVIDIA RTX 4090 | 24 GB | — | — |
| NVIDIA RTX 4080 | 16 GB | — | — |
| NVIDIA RTX 3090 | 24 GB | — | — |
| NVIDIA RTX 4070 Ti | 12 GB | — | — |
| Apple M2 Pro | 16–32 GB (unified) | — | — |
| CPU (16 cores) | — | — | — |
GPUNet workers run natively across every major operating system and architecture.
Every connection is encrypted, every request is signed, every worker is verified.
Workers authenticate using elliptic-curve keypairs with challenge-response verification. No passwords, no shared secrets.
All gRPC and REST connections are encrypted end-to-end. Frame data in transit remains confidential across the network.
Every worker request is signed with a private key. The server verifies signatures before processing — tamper-proof communication.
Built-in rate limiting protects against abuse. Strict input validation on all endpoints prevents injection and overflow attacks.
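The document doesn't detail the rate-limiting scheme; a common design is a token bucket, refilled continuously and drained one token per request. A minimal sketch (the struct and parameters are assumptions, not GPUNet's actual implementation):

```rust
/// Minimal token-bucket sketch: up to `capacity` burst requests,
/// refilled at `refill_per_ms` tokens per millisecond.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_ms: f64,
    last_ms: u64,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_ms: f64, now_ms: u64) -> Self {
        Self { capacity, tokens: capacity, refill_per_ms, last_ms: now_ms }
    }

    /// Returns true if the request is allowed; callers supply the clock.
    fn allow(&mut self, now_ms: u64) -> bool {
        let elapsed = now_ms.saturating_sub(self.last_ms) as f64;
        self.tokens = (self.tokens + elapsed * self.refill_per_ms).min(self.capacity);
        self.last_ms = now_ms;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut bucket = TokenBucket::new(2.0, 0.001, 0); // 2-burst, 1 req/sec
    println!("{} {} {}", bucket.allow(0), bucket.allow(0), bucket.allow(0));
}
```

Taking the timestamp as a parameter rather than reading the system clock keeps the limiter deterministic and easy to test.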
GPUNet supports leading AI models for video enhancement, all running through an optimised ONNX Runtime.
A modern Rust-native stack optimised for performance, safety, and reliability.