Documentation Index

Fetch the complete documentation index at: https://docs.getjumper.io/llms.txt

Use this file to discover all available pages before exploring further.

Jumper analyzes video, image, and audio files locally on the machine running the backend. The public API lets MAM systems, automation scripts, AI agents, and custom integrations use Jumper’s media analysis engine for semantic visual search, speech transcription, thumbnails, face clustering, watch folders, and exports. Base URL: http://localhost:6699/api/v1

Canonical files

These files are served directly from the docs site and are the best entry points for tools that need to inspect the API.

OpenAPI YAML

Raw OpenAPI 3.0 contract for endpoint discovery, schemas, and generated clients.

OpenAPI JSON

Raw JSON mirror of the OpenAPI contract.

Markdown Reference

Markdown export with examples, workflow notes, and endpoint behavior.

Authentication

Every endpoint except GET /health requires a Jumper Pro license key. Use the X-License-Key header when possible.
curl -H "X-License-Key: YOUR_KEY" http://localhost:6699/api/v1/models/loaded
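For scripted clients, the same header can be attached with Python's standard library. This is a minimal sketch using only the documented base URL and X-License-Key header; sending the request still requires a running backend:

```python
import urllib.request

BASE = "http://localhost:6699/api/v1"

def authed_request(path, key):
    # Every endpoint except GET /health expects the license key header.
    return urllib.request.Request(BASE + path, headers={"X-License-Key": key})

req = authed_request("/models/loaded", "YOUR_KEY")
# To actually send it (backend must be running):
# with urllib.request.urlopen(req) as resp:
#     print(resp.read())
```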

Typical workflow

1. Check health: verify the backend is running with GET /health.
2. Analyze media: run visual analysis, transcription, and optional face clustering with POST /analyze.
3. Track progress: use the returned task_id with Socket.IO progress events. In Socket.IO terms, emit join with the task_id, then listen for progress.
4. Load analysis data: load all analysis data with POST /analysis-data/load, visual data for selected media with POST /analysis-data/load-for-media, or transcript-only data with POST /analysis-data/load-transcriptions.
5. Search and retrieve: search visually, search transcripts, fetch transcriptions, get thumbnails, inspect face clusters, or export clips and transcripts.
Batch many files into one POST /analyze request when possible. Jumper loads ML models per analysis request, so one larger request is much faster than many one-file requests on the same backend.
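The HTTP side of the workflow above can be sketched with the standard library. The request-building helper below reflects the documented header and base URL; the payload field names ("files", "query") are illustrative assumptions, not taken from the API contract, so consult the OpenAPI spec for the real schemas:

```python
import json
import urllib.request

BASE = "http://localhost:6699/api/v1"
KEY = "YOUR_KEY"

def call(path, payload=None):
    # Build a request; POST bodies are sent as JSON with the license key header.
    headers = {"X-License-Key": KEY}
    data = None
    if payload is not None:
        headers["Content-Type"] = "application/json"
        data = json.dumps(payload).encode()
    return urllib.request.Request(BASE + path, data=data, headers=headers)

# 1. Check health (the only endpoint that needs no key).
health = urllib.request.Request(BASE + "/health")

# 2. Analyze media; batch many files into one request (hypothetical field name).
analyze = call("/analyze", {"files": ["/media/a.mov", "/media/b.mov"]})

# 3. Progress arrives over Socket.IO, not HTTP: emit "join" with the
#    task_id, then listen for "progress" (requires a Socket.IO client).

# 4. Load analysis data once the task completes.
load = call("/analysis-data/load", {})

# 5. Search visually (hypothetical field name).
search = call("/search/text", {"query": "sunset over water"})
```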

Endpoint groups

Health: GET /health
Models: GET /models/loaded, GET /models/available, POST /models/load
Media: POST /media/metadata
Analysis: POST /analyze, POST /analyze/cancel
Analysis data: POST /analysis-data/load, POST /analysis-data/load-for-media, POST /analysis-data/load-transcriptions
Search: POST /search/text, POST /search/image, POST /search/frame, POST /search/transcript
Transcriptions: POST /transcriptions
Thumbnails: POST /thumbnails, POST /thumbnails/scene
Face clustering: GET /faces/clusters, POST /faces/clusters/samples, POST /faces/clusters/faces, PUT /faces/clusters/names, POST /faces/recluster, POST /faces/clusters/modify
Watch folders: GET /watch-folders, POST /watch-folders, PUT /watch-folders/{watch_folder_id}, DELETE /watch-folders/{watch_folder_id}, POST /watch-folders/service/start, POST /watch-folders/service/stop, GET /watch-folders/service/status
Cache paths: POST /cache-paths
Export: POST /export/clips, POST /export/premiere-xml, POST /export/transcript

Important behavior notes

Only one analysis task can run at a time per backend instance. A new POST /analyze request first asks the current task to stop. If the previous task is still unwinding, Jumper returns 409.
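Because a new POST /analyze can race the teardown of the previous task, a client may want to retry briefly on 409. A minimal sketch of such a loop, with the HTTP call abstracted behind a callable so the retry logic itself is testable (the retry policy is a suggestion, not part of the API contract):

```python
import time

def submit_with_retry(send, attempts=5, delay=1.0):
    # `send` performs the POST /analyze call and returns the HTTP status code.
    # Retry while the backend reports 409 (previous task still unwinding).
    for _ in range(attempts):
        status = send()
        if status != 409:
            return status
        time.sleep(delay)
    return 409
```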
POST /search/text, POST /search/image, and POST /search/frame return matches ordered best-first, but they do not return similarity scores. The optional exclude field is a soft ranking signal, not a hard filter.
Visual search matches use frame_idx as a string on the 1-FPS embedding grid, include a base64 JPEG image, and provide scene_start_timestamp, scene_end_timestamp, original_index, hash_str, and video_path.
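Since frame_idx is a string index on the 1-FPS embedding grid, a match can be mapped to an approximate timestamp with a single integer parse. A sketch, using the documented field names but with made-up example values:

```python
def match_timestamp(match):
    # On the 1 FPS grid, frame index N lands at roughly N seconds.
    return int(match["frame_idx"])

m = {"frame_idx": "42", "video_path": "/media/a.mov",
     "scene_start_timestamp": 40.0, "scene_end_timestamp": 47.5}
print(match_timestamp(m))  # 42
```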
Watch folders support excluded_extensions and excluded_filename_globs. Glob patterns match the filename only, not the full path.
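The filename-only rule means a glob never sees directory components. A sketch of that matching semantics using Python's fnmatch (the backend's actual implementation may differ):

```python
import fnmatch
import os

def excluded(path, excluded_extensions, excluded_filename_globs):
    name = os.path.basename(path)
    ext = os.path.splitext(name)[1].lstrip(".").lower()
    if ext in excluded_extensions:
        return True
    # Globs are tested against the filename only, never the full path.
    return any(fnmatch.fnmatch(name, g) for g in excluded_filename_globs)

print(excluded("/watch/._proxy.mov", [], ["._*"]))     # True
print(excluded("/watch/._tmp/clip.mov", [], ["._*"]))  # False: glob only sees "clip.mov"
```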
Use POST /export/transcript to export transcript segments to txt, csv, docx, or pdf.

Example requests

curl http://localhost:6699/api/v1/health
Use the generated endpoint reference in the sidebar for request and response schemas.
Last modified on May 7, 2026