PORTABLE PIPELINE PLATFORM · v1α

Pipelines that move like the tide.

Define task graphs once. Run them locally or on a fleet of runners with identical semantics. Logs, artifacts, retries and reruns all stay rooted in one coherent history.

Open studio · View the CLI
connect-go · postgres · s3
curl -fsSL https://cuttlefish.sh/install.sh | sh
~/cuttlefish/ocean-lab
› cuttle run start --workflow hello-artifacts \
    --inputs '{"who":"world"}' --wait
⧗ dispatched run 951fd838-…c216
✓ echo        0.8s   message="hello world"
✓ write       0.6s   hello.txt (12 B)
● SUCCEEDED in 1.7s · artifacts 1
│ SONAR · LIVE DAG · 8 AGENTS · TIDE 04:12 · LAT 42.1N
10K+ runs / day per control plane
< 2 min blank slate to first run
200+ nodes per DAG, still 60fps
1:1 local <-> remote semantic parity

// capabilities

Calm deployments in deep water.

Cuttlefish keeps workflow intent, runner supply, and run evidence in one operator-grade control room.

Task portability

Author once as a Task Package. Ship as OCI or zip. Run on your laptop or a pool with the same semantics.

Event-history first

Every run is a timeline you can replay. Retries, backoff, and wait reasons stay indexed and queryable.

Artifacts as first class

Typed outputs, downloadable artifacts, logs, and inputs stay rooted in one coherent run history.

Remote runners, on tap

Register agents, observe heartbeats, drain capacity, and see queue depth the moment it changes.

n8n-grade editor

Drag-and-drop DAG editing with a YAML escape hatch. Jump straight to the first causal failure.

Built on Go

connect-go control plane, pluggable storage, OpenTelemetry traces, and mockable interfaces throughout.
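The "event-history first" idea above, a run as a replayable, queryable timeline of retries, backoff, and wait reasons, can be sketched in Go. This is an illustrative data model only; the field names and event kinds are assumptions, not the actual Cuttlefish schema.

```go
package main

import (
	"fmt"
	"time"
)

// Event is a hypothetical record in a run's timeline.
// Kinds and fields here are illustrative, not the real schema.
type Event struct {
	At      time.Duration // offset from run start
	Node    string
	Kind    string // "start", "retry", "wait", "ok", "fail"
	Attempt int
	Reason  string
}

// Retries filters a timeline down to the retry events for one node,
// the kind of query an indexed event history makes cheap.
func Retries(timeline []Event, node string) []Event {
	var out []Event
	for _, e := range timeline {
		if e.Node == node && e.Kind == "retry" {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	timeline := []Event{
		{At: 0, Node: "writeB", Kind: "start", Attempt: 1},
		{At: 800 * time.Millisecond, Node: "writeB", Kind: "retry", Attempt: 2, Reason: "EOF"},
		{At: 2 * time.Second, Node: "writeB", Kind: "ok", Attempt: 2},
	}
	for _, e := range Retries(timeline, "writeB") {
		fmt.Printf("attempt %d retried: %s\n", e.Attempt, e.Reason)
	}
}
```

Because events are plain records rather than log lines, "why did attempt 2 happen" is a filter, not a grep.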

// surface to depth

Four steps from author to audit.

cli + ui parity
01

Author

Describe a DAG in YAML or drag nodes in the studio.

02

Pack

Build a task package with typed inputs and outputs.

03

Run

Dispatch locally or to a runner pool with the same semantics.

04

Audit

Inspect logs, artifacts, events, and rerun paths.
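The Author step above starts from a YAML DAG. As a sketch only, using the node names from the hello-artifacts example elsewhere on this page, such a file might look like the following; the actual Cuttlefish schema and templating syntax are assumptions here.

```yaml
# Hypothetical workflow file; field names and templating are illustrative.
name: hello-artifacts
version: 3
inputs:
  who: { type: string }
nodes:
  - id: echo
    task: examples/echo@0.1.0
    with: { message: "hello {{ inputs.who }}" }
  - id: write
    task: examples/write-file@0.1.0
    needs: [echo]
    with: { content: "{{ nodes.echo.outputs.message }}" }
```

The same file drives both `cuttle workflow apply` and the studio canvas, which is what the "cli + ui parity" label refers to.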

// task packages

One package, every runtime.

Task packages are the portable unit: a name, version, typed I/O contract, and an implementation. Go, Python, Node, or a container all run through the same engine.

sdk.go · first-class
sdk.python · v0.6+
sdk.node · v0.6+
oci images · recommended
zip + runtime · dev only
content cache · sha256
tasks / examples/transform / task.go · SIGNED
// examples/transform@0.2.1
// inputs:  { message: string, locale?: string }
// outputs: { message: string, meta: { length: int } }

package transform

import (
  "strings"

  "github.com/cuttlefish/sdk/go/task"
)

type Input struct {
  Message string `json:"message"`
  Locale  string `json:"locale,omitempty"`
}

type Meta struct {
  Length int `json:"length"`
}

type Output struct {
  Message string `json:"message"`
  Meta    Meta   `json:"meta"`
}

func Run(ctx task.Context, in Input) (Output, error) {
  ctx.Log().Info("transforming", "locale", in.Locale)
  return Output{
    Message: strings.ToUpper(in.Message),
    Meta:    Meta{Length: len(in.Message)},
  }, nil
}
› cuttle task pack ./transform → ghcr.io/you/transform:0.2.1 · 412 KiB · built 1.8s

// surfaces

Six surfaces. One mental model.

studio · canvas · runs · agents · catalog · cli
studio.cuttlefish.dev › workflows / hello-artifacts / run 951fd838 · LIVE
Run History · Workflows · Canvas · Agents · Resources · Catalog

› Run · 951fd838

echo · transform · writeA · writeB · archive

NODE · writeB

RUNNING

examples/write-file@0.1.0
runner-pod-01
attempt 2 / 5

23.840 INFO attempt 2 start
23.906 INFO writing s3://arts/..
24.020 OK   uploaded 412 KiB
24.210 OK   progress 62%
› FLEET STATUS · 7 RUNNERS · 3 POOLS
us-w · runner-01 · default · 62%
us-e · runner-02 · gpu · 41%
eu · runner-03 · prod · 78%
ap · runner-04 · edge · 24%
us-w · runner-05 · prod · 52%
eu · runner-06 · default · 19%
ap · runner-07 · gpu · 83%
+ register

// fleet

Runners on tap, instrumented by default.

Register a k8s pool, EC2 fleet, or laptop with a single token. Heartbeats, capacity, queue depth, and draining states stream back without glue.

capacity-aware scheduling · queue depth · vCPU · mem · disk
graceful drain · finish in-flight, reject new
token-scoped enrollment · org / project / pool boundaries
observability out of the box · OpenTelemetry traces + metrics
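A minimal sketch of what capacity-aware placement with graceful drain can look like: pick the non-draining runner with the shallowest queue. This is an assumption about the policy for illustration, not the actual Cuttlefish scheduler.

```go
package main

import "fmt"

// Runner is an illustrative view of what a scheduler sees per heartbeat.
type Runner struct {
	Name       string
	QueueDepth int
	Draining   bool // draining runners finish in-flight work and reject new
}

// Pick returns the non-draining runner with the shallowest queue,
// or "" if every runner in the pool is draining.
func Pick(pool []Runner) string {
	best := ""
	bestDepth := 0
	for _, r := range pool {
		if r.Draining {
			continue
		}
		if best == "" || r.QueueDepth < bestDepth {
			best, bestDepth = r.Name, r.QueueDepth
		}
	}
	return best
}

func main() {
	pool := []Runner{
		{Name: "runner-01", QueueDepth: 5},
		{Name: "runner-02", QueueDepth: 2, Draining: true},
		{Name: "runner-03", QueueDepth: 3},
	}
	fmt.Println(Pick(pool)) // runner-02 is draining, so runner-03 wins
}
```

A real policy would also weigh vCPU, memory, and disk from the same heartbeat stream; queue depth alone keeps the sketch short.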

// run explorer

See the whole run, not just the last log line.

Live DAG overlay, event history, per-attempt retry reasoning, artifacts, and structured outputs stay in one pane that remains fast at 200 nodes.

rerun from node · diff inputs · pin outputs · first-failure banner · stream logs
runs / hello-artifacts / 951fd838 · RUNNING
echo · transform · writeA · writeB · writeC

// how we compare

The small orchestrator.

No external broker, no scheduler sidecar, no DSL that fights you at 2am.

                         Cuttlefish   Heavy orchestrator   CI-as-orchestrator
Single binary            yes          broker + scheduler   runners only
Typed I/O contracts      yes          partial              no
Rerun from any node      yes          yes                  no
Local parity             identical    partial              no
Schema-driven UI         generated    custom react         no
Audit & event history    yes          yes                  log tail
Time to first workflow   < 5 min      hours                minutes

// cli

One binary. Everything is addressable.

The studio and the CLI call the same API. Every button you click emits a command you can script, review, or pipe into anything.

cuttle task pack · build + push a task package
cuttle workflow apply · declarative workflow upsert
cuttle run start · trigger a run with inputs
cuttle run tail · live logs across the DAG
cuttle agent enroll · register a runner pool
cuttle catalog search · find tasks by signature
~ / hello-artifacts — zsh · LIVE

you@reef ~/hello-artifacts cuttle run start -i message="hi"

↳ resolved workflow hello-artifacts@3 · 4 nodes

↳ queued on pool default · run 951fd838

cuttle run tail 951fd838 --graph

  ● echo    ──▶ ● transform ──┬─▶ ● writeA  ✓
                              └─▶ ● writeB  ⟳ attempt 2/5

[12:04:23.840] writeA OK · 412KiB · s3://arts/..

[12:04:24.020] writeB retry after EOF · backoff 2s

[12:04:26.110] writeB OK · 611KiB

› run 951fd838 · SUCCEEDED in 00:04:52

you@reef ~/hello-artifacts

Desktop Agent

Menu bar app for macOS. Run pipelines on your own hardware and stream status back to Cuttlefish.

macOS 13+

// integrations

Slots into the reef you already have.

KUB · k8s pools
DOC · local + OCI
AWS · ec2 · s3 · eks
GIT · actions trigger
OTE · otel-native
PRO · prometheus
SLA · alerts
PAG · incidents
POS · state store
S3 · artifacts
VAU · secrets
TER · enrollment

We replaced a cron + Airflow + six Lambda glue scripts with one Cuttlefish workflow. The event history alone saved us during an audit.

Mira Oduya · Staff Platform Eng · Reefside Labs · −68% infra spend

The local parity is the real magic. Same DAG runs on my laptop and in prod, same logs, same artifacts. Debugging became boring, in a good way.

Tomas Halak · Data Eng Lead · Copperfish · 4.2× deploys / week

// changelog · last 30 days

view all →
v0.7.2 · Apr 18 · runtime
Content-addressed artifact cache, cross-run.

v0.7.1 · Apr 11 · ui
Schema-driven inspector supports oneOf + $ref.

v0.7.0 · Apr 02 · runner
Capacity-aware scheduler, drain & cordon.

v0.6.8 · Mar 24 · sdk
Python SDK reaches parity with Go.

// tagline

Less YAML misery. More graph, run anywhere.