Task portability
Author once as a Task Package. Ship as OCI or zip. Run on your laptop or a pool with the same semantics.
Define task graphs once. Run them locally or on a fleet of runners with identical semantics. Logs, artifacts, retries and reruns all stay rooted in one coherent history.
curl -fsSL https://cuttlefish.sh/install.sh | sh
› cuttle run start --workflow hello-artifacts \
--inputs '{"who":"world"}' --wait
⧗ dispatched run 951fd838-…c216
✓ echo 0.8s message="hello world"
✓ write 0.6s hello.txt (12 B)
● SUCCEEDED in 1.7s · artifacts 1
// capabilities
Cuttlefish keeps workflow intent, runner supply, and run evidence in one operator-grade control room.
Author once as a Task Package. Ship as OCI or zip. Run on your laptop or a pool with the same semantics.
Every run is a timeline you can replay. Retries, backoff, and wait reasons stay indexed and queryable.
Typed outputs, downloadable artifacts, logs, and inputs stay rooted in one coherent run history.
Register agents, observe heartbeats, drain capacity, and see queue depth the moment it changes.
Drag-and-drop DAG editing with a YAML escape hatch. Jump straight to the first causal failure.
connect-go control plane, pluggable storage, OpenTelemetry traces, and mockable interfaces throughout.
// surface to depth
Describe a DAG in YAML or drag nodes in the studio.
Build a task package with typed inputs and outputs.
Dispatch locally or to a runner pool with the same semantics.
Inspect logs, artifacts, events, and rerun paths.
// task packages
Task packages are the portable unit: a name, version, typed I/O contract, and an implementation. Go, Python, Node, or a container all run through the same engine.
// examples/transform@0.2.1
// inputs: { message: string, locale?: string }
// outputs: { message: string, meta: { length: int } }
package transform

import (
	"strings"

	"github.com/cuttlefish/sdk/go/task"
)

type Input struct {
	Message string `json:"message"`
	Locale  string `json:"locale,omitempty"`
}
type Output struct {
	Message string `json:"message"`
	Meta    Meta   `json:"meta"`
}
type Meta struct {
	Length int `json:"length"`
}

func Run(ctx task.Context, in Input) (Output, error) {
	ctx.Log().Info("transforming", "locale", in.Locale)
	return Output{
		Message: strings.ToUpper(in.Message),
		Meta:    Meta{Length: len(in.Message)},
	}, nil
}
// surfaces
NODE · writeB
RUNNING
examples/write-file@0.1.0
runner-pod-01
attempt 2 / 5
23.840 INFO attempt 2 start
23.906 INFO writing s3://arts/..
24.020 OK uploaded 412 KiB
24.210 OK progress 62%
default
gpu
prod
edge
// fleet
Register a k8s pool, EC2 fleet, or laptop with a single token. Heartbeats, capacity, queue depth, and draining states stream back without glue.
// run explorer
Live DAG overlay, event history, per-attempt retry reasoning, artifacts, and structured outputs stay in one pane that remains fast at 200 nodes.
// how we compare
No external broker, no scheduler sidecar, no DSL that fights you at 2am.
// cli
The studio and the CLI call the same API. Every button you click emits a command you can script, review, or pipe into anything.
you@reef ~/hello-artifacts › cuttle run start -i message="hi"
↳ resolved workflow hello-artifacts@3 · 4 nodes
↳ queued on pool default · run 951fd838
› cuttle run tail 951fd838 --graph
● fetch ──▶ ● transform ──┬─▶ ● writeA ✓
                          └─▶ ● writeB ⟳ attempt 2/5
[12:04:23.840] writeA OK · 412KiB · s3://arts/..
[12:04:24.020] writeB retry after EOF · backoff 2s
[12:04:26.110] writeB OK · 611KiB
› run 951fd838 · SUCCEEDED in 00:04:52
you@reef ~/hello-artifacts ›
Menu bar app for macOS. Run pipelines on your own hardware and stream status back to Cuttlefish.
macOS 13+
// integrations
We replaced a cron + Airflow + six Lambda glue scripts with one Cuttlefish workflow. The event history alone saved us during an audit.
The local parity is the real magic. Same DAG runs on my laptop and in prod, same logs, same artifacts. Debugging became boring, in a good way.
// changelog · last 30 days
view all →
Content-addressed artifact cache, cross-run.
Schema-driven inspector supports oneOf + $ref.
Capacity-aware scheduler, drain & cordon.
Python SDK reaches parity with Go.
// tagline
Less YAML misery. More graph, run anywhere.