MonkeyBuckets

File storage that fits real AI workflows.

Upload files directly to S3, serve public assets through the CDN, keep private objects behind signed access, and query metadata from the same bucket surface.

One org-aware path across SDK code, CLI jobs, dashboard tooling, and agent automation.

Bucket Surface

Start from the file operation you want. MonkeyBuckets handles the delivery path behind it.

typescript
const uploads = monkey.bucket('uploads')

await uploads.put('runs/run_001/input.json', file, {
	visibility: 'private',
	contentType: 'application/json',
})

const url = await uploads.getUrl('runs/run_001/input.json')

Direct-to-S3 uploads

File bytes skip the app server path, so upload traffic does not become an API bottleneck.

CDN for public files

Public assets get a stable delivery URL without turning the underlying bucket into a public object dump.

Signed access for private files

Private reads resolve through short-lived signed URLs scoped to the caller and the expiry you request.
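The split between the two delivery paths above can be sketched as a small resolver. This is illustrative only: the record shape, the private-storage hostname, and the query-string layout are assumptions, and the real SDK resolves these server-side (the CDN URL pattern matches the example shown further down this page):

```typescript
// Hypothetical file record shape, for illustration only.
interface FileRecord {
  org: string
  bucket: string
  filename: string
  visibility: 'public' | 'private'
}

// Public files map to a stable CDN URL; private files need a signed,
// short-lived URL minted per request (stubbed here, not a real signature).
function resolveUrl(record: FileRecord, expiresIn = 900): string {
  const base = `${record.org}/${record.bucket}/${record.filename}`
  if (record.visibility === 'public') {
    return `https://cdn.monkeyhub.io/${base}`
  }
  // In the real flow this token comes from getUrl(); the host and
  // parameter names below are placeholders for the sketch.
  const expires = Math.floor(Date.now() / 1000) + expiresIn
  return `https://storage.example.com/${base}?expires=${expires}&sig=<signed>`
}
```

The point of the split: the public branch is a pure function of the file path, so it can be cached forever, while the private branch depends on the caller and the clock.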

Why MonkeyBuckets

The file layer that behaves like product infrastructure, not a pile of storage exceptions.

MonkeyBuckets keeps the useful parts of S3 and CloudFront while removing the ceremony most teams rebuild around uploads, delivery, and file metadata.

Direct uploads, no byte proxy

bucket.put() starts the upload flow, then the file goes straight to S3 instead of pinning your API servers under file traffic.

Public and private delivery built in

Public files get stable CDN URLs. Private files stay behind short-lived signed access instead of a permanently exposed object path.

Metadata travels with the file

Visibility, content type, extension, and custom metadata are queryable alongside the object reference so file workflows stay inspectable.

Same surface for apps, ops, and agents

SDK code, CLI scripts, dashboard tooling, and MCP traffic all hit the same bucket model instead of parallel upload systems.

How It Works

Direct upload in, correct access path out.

The core model is simple: MonkeyBuckets keeps file transfer on the storage plane, then resolves each object through the public or private delivery path that matches its visibility.

1. Start the upload from the bucket surface

Your app calls bucket.put() with a filename, visibility, content type, and optional metadata. MonkeyHub validates auth and prepares the upload.

2. Send the bytes directly to S3

The SDK uploads to a pre-signed S3 URL, which keeps file transfer out of the API request path and avoids turning uploads into app-server bottlenecks.

3. Read through the right delivery path

Public files resolve to permanent CDN URLs. Private files resolve through getUrl() to short-lived signed access for exactly one object.
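The three steps above can be sketched as a minimal handshake, with the MonkeyHub control-plane call stubbed out as a local function. The endpoint shape, hostnames, and presign fields are assumptions for the sketch, not the real wire protocol:

```typescript
interface PresignResponse {
  uploadUrl: string // where the bytes go (S3, not the API server)
  filename: string  // canonical object key recorded by MonkeyHub
}

// Step 1: the control plane validates auth and prepares the upload.
// Stubbed locally; in reality this is an authenticated MonkeyHub call.
function preparePut(bucket: string, filename: string): PresignResponse {
  return {
    uploadUrl: `https://s3.example.com/${bucket}/${filename}?X-Amz-Signature=<sig>`,
    filename,
  }
}

// Step 2: bytes go straight to the presigned URL, never through the API path.
async function uploadBytes(presign: PresignResponse, bytes: string): Promise<number> {
  // A real client would `fetch(presign.uploadUrl, { method: 'PUT', body: bytes })`.
  return bytes.length // simulate a successful transfer; return bytes sent
}

// Step 3 is a separate read: public files get CDN URLs, private ones getUrl().
async function putSketch(bucket: string, filename: string, bytes: string) {
  const presign = preparePut(bucket, filename)
  const sent = await uploadBytes(presign, bytes)
  return { filename: presign.filename, bytesSent: sent }
}
```

The design choice to notice: the API call in step 1 is small and fast regardless of file size, because the payload itself only ever touches the storage plane.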

Access Model

One bucket. Two delivery paths. No public S3 free-for-all.

  • Public files resolve to CDN URLs while the bucket itself stays private behind origin controls.
  • Private files never need a permanent public path; access is granted through short-lived object URLs.
  • Metadata lives beside the file reference so query(), dashboard views, and automations stay aligned.

Public

brand/logo.png

visibility: public

https://cdn.monkeyhub.io/org_abc/uploads/brand/logo.png

Private

runs/run_001/input.json

visibility: private

signed URL via getUrl(..., { expiresIn: 900 })
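The expiresIn: 900 value above is in seconds (15 minutes). A consumer can sanity-check expiry from the URL itself; a hedged sketch, assuming the signed URL carries a Unix-seconds expires query parameter (the real parameter name may differ):

```typescript
// Returns true while a signed URL's `expires` timestamp is in the future.
// Assumes an `expires=<unix seconds>` query parameter; illustrative only.
function isStillValid(signedUrl: string, nowSeconds = Math.floor(Date.now() / 1000)): boolean {
  const expires = new URL(signedUrl).searchParams.get('expires')
  return expires !== null && Number(expires) > nowSeconds
}
```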

Queryable file metadata

uploads.query({ ext: 'png', limit: 20 })

Use the file record for listings, dashboard views, and automations without crawling the object store directly.
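The query({ ext, limit }) semantics above can be mirrored over an in-memory list of file records; a sketch of the filter behavior, where the record shape is an assumption rather than the SDK's exact type:

```typescript
interface FileMeta {
  filename: string
  ext: string
  visibility: 'public' | 'private'
  metadata?: Record<string, string>
}

// Mimics query({ ext, limit }): filter by extension, then cap the result count.
function queryFiles(files: FileMeta[], opts: { ext?: string; limit?: number }): FileMeta[] {
  const matched = opts.ext ? files.filter((f) => f.ext === opts.ext) : files
  return typeof opts.limit === 'number' ? matched.slice(0, opts.limit) : matched
}
```

Because the metadata lives in a file record rather than only on the S3 object, a listing like this never has to enumerate the object store itself.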

Real Surface Area

The same bucket workflow in app code or automation scripts.

These examples match the implemented SDK and CLI surface. No pseudo-API, no marketing-only shorthand.

TypeScript SDK

Put files, fetch URLs, and query metadata through one bucket object.

typescript
import { Monkey } from '@monkeyhub/sdk'

const monkey = new Monkey(process.env.MONKEY_KEY!)
const uploads = monkey.bucket('uploads')

const record = await uploads.put(
	'runs/run_001/input.json',
	JSON.stringify({ prompt: 'Summarize the latest support backlog' }),
	{
		visibility: 'private',
		contentType: 'application/json',
		metadata: {
			kind: 'prompt',
			runId: 'run_001',
		},
	},
)

const signedUrl = await uploads.getUrl(record.filename, { expiresIn: 900 })

const jsonFiles = await uploads.query({
	ext: 'json',
	limit: 20,
})

Monkey CLI

The same upload and query flow for scripts, CI jobs, and one-off operational work.

bash
monkey bucket put uploads ./fixtures/logo.png \
	--input '{"visibility":"public","contentType":"image/png","metadata":{"kind":"brand"}}'

monkey bucket url uploads logo.png

monkey bucket query uploads \
	--input '{"ext":"png","limit":20}'

Compare

More operational control than generic file helpers, less storage plumbing than rolling your own.

MonkeyBuckets is for teams that want S3-grade storage behavior without turning uploads, CDN delivery, and signed access into yet another internal platform project.

Upload path

  • Roll Your Own: Design the signed upload flow, validate payload shape, and keep metadata state in sync yourself.
  • Generic BaaS: Files work, but the storage model and the app-specific wrapper usually end up split across multiple surfaces.
  • MonkeyBuckets: Use bucket.put() and let MonkeyHub handle the direct-upload handshake and metadata record.

Public delivery

  • Roll Your Own: Wire CloudFront, origin access, URL conventions, and cache-safe object exposure yourself.
  • Generic BaaS: Basic object delivery exists, but product-level visibility and URL ergonomics still leak into app code.
  • MonkeyBuckets: Public files get stable CDN URLs without opening the bucket itself.

Private access

  • Roll Your Own: Build your own signed URL service and permission checks around each object request.
  • Generic BaaS: Usually mixed into custom auth rules or another API layer you still have to maintain.
  • MonkeyBuckets: getUrl() returns short-lived object access only when the caller is allowed to read it.

Operational surface

  • Roll Your Own: Separate upload flows for the app, internal scripts, and agent tooling.
  • Generic BaaS: Usually one path for users and another for internal automation.
  • MonkeyBuckets: The same bucket surface shows up in SDK, CLI, dashboard, and MCP.

MonkeyBuckets on the MonkeyHub data plane

Start with files. Keep the rest of the platform available when the workflow grows.

Sign up if you want the shortest path to a real org, API keys, and a working bucket surface. Or go straight to the docs if you want to inspect the upload flow in detail first.