Documentation

The Official turd.ai Docs

Everything you need to build with the #2 AI platform. From first flush to full-scale deployment, we've got your backend covered.

v2.0.0 -- The Big Flush | Last updated: Feb 2026

Getting Started

Installation

Install the turd.ai SDK via your preferred package manager. We recommend using npm for maximum regularity.

terminal
npm install @turd/sdk

# or if you prefer yarn
yarn add @turd/sdk

# pnpm gang
pnpm add @turd/sdk

Quick Setup

Initialize your first pipeline in under 60 seconds. That's faster than most bathroom breaks.

javascript
import { TurdClient } from '@turd/sdk'

const turd = new TurdClient({
  apiKey: process.env.TURD_API_KEY,
  region: 'us-east-1', // or 'eu-west-2'
  flushMode: 'auto'
})

// Create your first dump
const dump = await turd.createDump({
  name: 'my-first-dump',
  source: 'csv',
  retention: '30d'
})

console.log('Dump created:', dump.id)

Authentication

All API requests require a valid API key. You can generate one from the Brown Eye View dashboard under Settings > API Keys > Generate Fresh Key.

javascript
// Set your API key as an environment variable (in .env.local):
//   TURD_API_KEY=turd_live_sk_...

// Or pass it directly (not recommended in prod)
const turd = new TurdClient({
  apiKey: 'turd_live_sk_...'
})
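If you load the key from the environment, it pays to fail fast when it is missing or malformed. A minimal sketch, assuming keys follow the `turd_live_sk_` prefix shown in the example above (the helper name is ours, not part of the SDK):

```javascript
// Hypothetical helper: validate the key shape before constructing the client.
// The 'turd_live_sk_' prefix matches the example key above; adjust if your
// keys use a different prefix (e.g. test-mode keys).
function assertApiKey(key) {
  if (typeof key !== 'string' || !key.startsWith('turd_live_sk_')) {
    throw new Error('TURD_API_KEY is missing or malformed')
  }
  return key
}

const apiKey = assertApiKey(process.env.TURD_API_KEY ?? 'turd_live_sk_example')
```

Throwing at startup beats a cryptic 401 three requests into your pipeline.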

Data Dumps

Creating a Dump

Dumps are the fundamental unit of data storage in turd.ai. Think of them as containers that hold your processed output.

javascript
const dump = await turd.dumps.create({
  name: 'quarterly-report',
  format: 'json',
  compression: 'gzip',
  metadata: {
    department: 'analytics',
    priority: 'urgent'
  }
})

// Upload data to the dump
await dump.ingest(rawData, {
  batchSize: 1000,
  onProgress: (p) => console.log(p + '% ingested')
})
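The `batchSize` option above controls how many records the SDK sends per request. If you are preparing records yourself and want to reason about batch counts, the chunking idea looks like this (our sketch, not part of `@turd/sdk` -- the SDK batches internally):

```javascript
// Sketch: split an array of records into batches of `size`.
// Mirrors what batchSize: 1000 implies; this helper is illustrative only.
function toBatches(records, size) {
  const batches = []
  for (let i = 0; i < records.length; i += size) {
    batches.push(records.slice(i, i + size))
  }
  return batches
}

// e.g. 2500 records with size 1000 -> 3 batches (1000, 1000, 500)
```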

Querying Dumps

Use TQL (Turd Query Language) to search and filter your dumps. It's like SQL, but crappier -- in a good way.

javascript
// TQL - Turd Query Language
const results = await turd.dumps.query(`
  SELECT * FROM "quarterly-report"
  WHERE department = 'analytics'
  AND created_at > NOW() - INTERVAL '7 turds'
  ORDER BY stink_score DESC
  LIMIT 50
`)

// Stream large result sets
const stream = turd.dumps.stream('quarterly-report')
for await (const chunk of stream) {
  handleChunk(chunk) // your handler -- avoid naming it `process` (shadows Node's global)
}

Flushing Dumps

When a dump has served its purpose, flush it to free up resources. Flushed dumps enter a 30-day grace period before permanent removal.

javascript
// Soft flush (recoverable for 30 days)
await turd.dumps.flush('dump_abc123')

// Hard flush (immediate, irreversible)
await turd.dumps.flush('dump_abc123', {
  force: true,
  reason: 'GDPR request'
})

// Flush all dumps matching criteria
await turd.dumps.flushWhere({
  olderThan: '90d',
  status: 'stale'
})
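`olderThan` takes a duration string like `'90d'`. Assuming the obvious day-based semantics (our reading -- the docs don't publish a formal duration grammar), you can pre-check locally which dumps would match before running a bulk flush:

```javascript
// Sketch: interpret a duration string like '90d' as milliseconds, then test
// whether a dump's createdAt falls outside that window. Assumes day-based
// durations only; the real SDK parser may accept more units.
function parseDays(duration) {
  const match = /^(\d+)d$/.exec(duration)
  if (!match) throw new Error(`unsupported duration: ${duration}`)
  return Number(match[1]) * 24 * 60 * 60 * 1000
}

function isOlderThan(createdAt, duration, now = Date.now()) {
  return now - new Date(createdAt).getTime() > parseDays(duration)
}
```

Dry-running the predicate first is cheap insurance against an overly enthusiastic hard flush.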

ShatGPT Integration

Basic Usage

ShatGPT is our proprietary LLM trained on billions of data movements. Use it for analysis, summarization, and pattern detection.

javascript
const response = await turd.shatgpt.analyze({
  prompt: 'Summarize Q4 pipeline throughput',
  context: dump.id,
  model: 'shat-4-turbo', // or 'shat-3.5-regular'
  temperature: 0.7,
  maxTokens: 2000
})

console.log(response.analysis)
console.log('Confidence:', response.stinkScore)

Streaming Responses

For real-time analysis, use streaming mode. Watch the insights flow in, one chunk at a time.

javascript
const stream = await turd.shatgpt.stream({
  prompt: 'Analyze anomalies in user-data dump',
  context: 'dump_xyz789',
  model: 'shat-4-turbo'
})

for await (const chunk of stream) {
  process.stdout.write(chunk.text)
}

// Or use the callback pattern (config = the same options object as above)
turd.shatgpt.stream(config, {
  onChunk: (text) => appendToUI(text),
  onDone: (full) => saveAnalysis(full),
  onError: (err) => handleBlockage(err)
})

Colon-ial Pipelines

Creating Pipelines

Pipelines are the backbone of turd.ai -- they move data from ingestion to output with zero blockages. Define stages, set flow rates, and let it rip.

javascript
const pipeline = await turd.pipelines.create({
  name: 'etl-main',
  stages: [
    { type: 'ingest', source: 's3://raw-data' },
    { type: 'transform', fn: 'clean_and_normalize' },
    { type: 'enrich', model: 'shat-4-turbo' },
    { type: 'load', destination: 'dump_analytics' }
  ],
  schedule: 'every 6 hours',
  retryPolicy: {
    maxRetries: 3,
    backoff: 'exponential'
  }
})
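`backoff: 'exponential'` conventionally means the retry delay doubles with each attempt, up to some cap. A minimal sketch of that schedule -- the base delay and cap here are illustrative assumptions, not documented SDK defaults:

```javascript
// Sketch: exponential backoff with a ceiling. baseMs and capMs are assumed
// values for illustration; the SDK's real defaults are not documented here.
function backoffDelay(attempt, baseMs = 500, capMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, capMs)
}

// attempt 0 -> 500ms, 1 -> 1000ms, 2 -> 2000ms, ... capped at 30s
```

In practice you would also add random jitter so that three failed retries don't all slam the pipe at the same instant.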

Monitoring Flow

Real-time pipeline health monitoring. Because nobody wants a surprise backup.

javascript
// Get pipeline health
const health = await turd.pipelines.health('pipe_123')
console.log(health)
// {
//   status: 'flowing',
//   throughput: '12.4k records/sec',
//   latency: '23ms',
//   backpressure: 'nominal',
//   lastFlush: '2026-02-08T10:30:00Z'
// }

// Subscribe to alerts (callback must be async to use await inside)
turd.pipelines.onAlert('pipe_123', async (alert) => {
  if (alert.type === 'blockage') {
    await turd.pipelines.plunge('pipe_123')
  }
})

Webhooks & Events

Event Types

Subscribe to lifecycle events across your dumps and pipelines. Get notified when things move -- or when they don't.

javascript
// Available event types:
// dump.created      - New dump initiated
// dump.flushed      - Dump removed
// dump.overflow     - Dump exceeded capacity
// pipeline.flowing  - Pipeline is healthy
// pipeline.blocked  - Blockage detected
// pipeline.plunged  - Blockage resolved
// shatgpt.complete  - Analysis finished
// shatgpt.timeout   - Analysis constipated

await turd.webhooks.create({
  url: 'https://your-app.com/api/turd-events',
  events: ['pipeline.blocked', 'dump.overflow'],
  secret: 'whsec_...',
  retries: 3
})
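The `secret` lets your receiver verify that events really came from turd.ai rather than a prankster with `curl`. Assuming an HMAC-SHA256 signature over the raw request body -- a common webhook convention, though the actual header name and algorithm are our assumption, not confirmed by these docs:

```javascript
import { createHmac, timingSafeEqual } from 'node:crypto'

// Sketch: verify a webhook body against its signature. The hex-encoded
// HMAC-SHA256 scheme (and any header like 'x-turd-signature' you'd read it
// from) are assumptions for illustration; check the dashboard for the
// real scheme before relying on this.
function verifySignature(rawBody, signatureHex, secret) {
  const expected = createHmac('sha256', secret).update(rawBody).digest()
  const received = Buffer.from(signatureHex, 'hex')
  return received.length === expected.length && timingSafeEqual(received, expected)
}
```

Note the constant-time comparison via `timingSafeEqual`: a plain `===` on signature strings can leak timing information to an attacker.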

SDKs & Libraries

Available SDKs

We offer first-class SDKs for all major platforms. Pick your weapon.

terminal
# JavaScript / TypeScript (recommended)
npm install @turd/sdk

# Python
pip install turdai

# Go
go get github.com/turdai/turd-go

# Ruby
gem install turd-ruby

# Rust
cargo add turd-rs

# Java (because someone has to)
mvn dependency:get -Dartifact=com.turdai:turd-java:2.0.0

Ready to drop your first dump?

Sign up for a free account and start shipping in minutes. No credit card required. No toilet paper either.