Single-node scheduling for shared machines
Queue, run, and inspect GPU or CPU jobs on a shared workstation with a small CLI and a local daemon.
CLI-first
Queueing, control, and inspection from familiar commands.
Recoverable
Attach, follow logs, and redo failed work.
Explicit policy
Declare GPUs, VRAM limits, priorities, reservations, and dependencies.
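As a sketch of what declaring such a policy might look like at submission time: `--gpus 1` is the only flag shown on this page; the VRAM and priority flag names below are assumptions, so check `gbatch --help` for the real syntax.

```shell
# Request one GPU for a training run (flag shown on this page)
gbatch --gpus 1 python train.py

# Hypothetical flags for a VRAM limit and priority; these names are
# assumed for illustration, not taken from the gflow docs
gbatch --gpus 1 --vram 12G --priority high python train.py
```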
Why it exists
Ad hoc tmux sessions and manual GPU etiquette break down once a workstation is shared.
How it works
`gflowd up`: Start the local scheduler.
`gbatch --gpus 1 python train.py`: Submit a command or script with explicit resources.
`gqueue`: Inspect running, queued, or completed jobs.
`gjob log <job_id>`: Follow logs or attach when a run needs attention.
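Put together, the lifecycle above looks like the following shell session; the job id passed to `gjob log` is a placeholder you would read from the queue output.

```shell
gflowd up                          # start the local scheduler daemon
gbatch --gpus 1 python train.py    # queue a script with explicit resources
gqueue                             # inspect running, queued, and completed jobs
gjob log 42                        # follow logs for a job id reported by gqueue
```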
Capabilities
Submit, hold, release, cancel, update, and redo jobs with an inspectable state model.
Request GPUs directly, enable shared mode, and set VRAM limits.
Use dependencies, arrays, and parameter sweeps for multi-stage runs.
Read queue state through tables, trees, JSON, CSV, or YAML.
Run each job in its own tmux session for direct logs and recovery.
Expose scheduler actions through a local MCP server.
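A multi-stage chain (prep, then train, then report) could be expressed roughly as below. This is a sketch under two assumptions: that `gbatch` prints the new job id, and that a dependency flag exists; the `--after` flag name is hypothetical, so consult the gflow reference for the real dependency syntax.

```shell
prep=$(gbatch python prep.py)                              # assumes gbatch prints the job id
train=$(gbatch --gpus 1 --after "$prep" python train.py)   # --after is a hypothetical flag name
gbatch --after "$train" python report.py                   # runs only once training finishes
```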
Where it fits
Coordinate multiple researchers on one box with explicit scheduling rules.
Keep long-running experiments structured and restartable.
Chain prep, train, benchmark, and reporting jobs with dependencies.
Documentation paths
Installation: Install the binary, create config defaults, and start the daemon.
Quick Start: Start the daemon, submit a job, inspect the queue, and read logs.
Reference: Use the cheat sheet for `gflowd`, `gbatch`, `gqueue`, `gjob`, `gctl`, and more.
AI Integration: Run `gflow` as a local MCP server for agent tooling.
AI integration
Run `gflow mcp serve` to start a local stdio MCP server so agent CLIs can inspect queues and drive scheduler workflows. Read Agents, MCP, and Skills, and use the docs as an operator handbook.
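Agent CLIs that speak MCP over stdio typically register servers in a JSON config. A minimal sketch, assuming a common `mcpServers` schema; the file location and exact schema vary by client, so check your agent tool's documentation.

```json
{
  "mcpServers": {
    "gflow": {
      "command": "gflow",
      "args": ["mcp", "serve"]
    }
  }
}
```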