
Documentation Index

Fetch the complete documentation index at: https://docs.getjumper.io/llms.txt

Use this file to discover all available pages before exploring further.

Jumper Public API v1

Base URL: http://localhost:6699/api/v1

Jumper analyzes your media files locally on your machine. This API lets external tools (MAM systems, automation scripts, custom integrations) tap into those capabilities: semantic search across video, speech transcription, face detection and clustering, and more.

Authentication

Every endpoint except /health requires a Jumper Pro license key. Pass it one of two ways:
Method | Example
Header (recommended) | X-License-Key: your-key-here
JSON body | {"license_key": "your-key-here", ...}
The key is validated on the first request and cached for the session — subsequent requests with the same key skip the network check. Error responses:
Status | Meaning
401 | No license key provided
403 | Key is valid but not a Pro license
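As a sketch of the header method using only the Python standard library (the helper name and placeholder key are illustrative, not part of the API):

```python
import json
import urllib.request

BASE_URL = "http://localhost:6699/api/v1"

def build_request(path, payload=None, license_key="your-key-here"):
    """Build an authenticated urllib Request for a Jumper endpoint.

    Passing the key in the X-License-Key header (the recommended method)
    keeps request bodies free of credentials. With no payload the request
    is a plain GET; with a payload it becomes a JSON POST."""
    headers = {"X-License-Key": license_key}
    data = None
    if payload is not None:
        headers["Content-Type"] = "application/json"
        data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(BASE_URL + path, data=data, headers=headers)

req = build_request("/models/loaded", license_key="YOUR_KEY")
# urllib normalizes stored header names to capitalized form:
print(req.get_header("X-license-key"))  # → YOUR_KEY
```

Sending the request is then `urllib.request.urlopen(req)`; the cached-validation behavior above means reusing the same key costs nothing after the first call.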

Typical Workflow

Most integrations follow this pattern:
  1. Check health — verify the backend is running
  2. Analyze media — run visual and/or speech analysis on your files
  3. Load analysis data — load analysis results into memory for searching
  4. Search — find matching moments by text, image, or frame similarity
  5. Get transcriptions — retrieve speech-to-text results
  6. Get thumbnails — fetch preview images for specific timestamps
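The steps above can be sketched as an ordered request plan. This is illustrative only; the paths and payloads are taken from the examples in this document, and the plan structure itself is an assumption, not part of the API:

```python
# A minimal request plan mirroring the typical workflow. Each entry is
# (HTTP method, endpoint path, example JSON payload or None).
CACHE_DIR = "/Users/me/JumperAnalysis"   # assumed analysis data folder
MEDIA = ["/Videos/interview.mp4"]        # assumed media files

workflow = [
    ("GET",  "/health", None),                                  # 1. backend up?
    ("POST", "/analyze", {"cache_dir": CACHE_DIR,               # 2. analyze
                          "visual_media_paths": MEDIA,
                          "transcription_jobs": [
                              {"path": MEDIA[0], "language": "english"}]}),
    ("POST", "/analysis-data/load", {"cache_dir": CACHE_DIR}),  # 3. into memory
    ("POST", "/search/text", {"query": "dog on a beach",        # 4. search
                              "cache_dir": CACHE_DIR,
                              "media_paths": MEDIA}),
    ("POST", "/transcriptions", {"cache_dir": CACHE_DIR,        # 5. transcripts
                                 "media_paths": MEDIA}),
    ("POST", "/thumbnails", {"cache_dir": CACHE_DIR,            # 6. thumbnails
                             "requests": [{"media_path": MEDIA[0],
                                           "time_seconds": 38}]}),
]

for method, path, payload in workflow:
    print(method, path)
```

Note that step 2 is asynchronous: a real client waits for the analysis task to finish (see SocketIO progress tracking) before loading data in step 3.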

Endpoints

Health

GET /health

No authentication required. Use this to check if the backend is running. Response:
{"status": "ok"}

Models

GET /models/loaded

Returns which visual and speech analysis models are currently loaded in memory. Example request:
curl -H "X-License-Key: YOUR_KEY" http://localhost:6699/api/v1/models/loaded
Response:
{
  "visual": {
    "model_key": "v2-medium-256",
    "is_loading": false
  },
  "speech": {
    "model_key": "mlx-large-v3-turbo",
    "is_loading": false
  }
}

GET /models/available

Lists all models supported on this hardware, which are downloaded, and which is active. Example request:
curl -H "X-License-Key: YOUR_KEY" http://localhost:6699/api/v1/models/available
Response:
{
  "supported_models": ["v2-medium-256", "v2-large-384", "v1-multilingual-384"],
  "downloaded_models": ["v2-medium-256", "v2-large-384"],
  "loaded_model": "v2-medium-256",
  "model_info": [
    {"v2-medium-256": {"speed": 5, "accuracy": 3, "frame_resolution": 256, "size_gb": 0.6, "compatible_os": ["mac-arm", "mac-x86", "windows"], "is_multilingual": false}}
  ],
  "current_system": "mac-arm"
}
model_info is an array of single-key objects, where each key is a model_key from supported_models. current_system is one of mac-arm, mac-x86, windows, or linux.
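Since model_info is an array of single-key objects, clients usually flatten it into one dict before use. A sketch over the example response above:

```python
response = {  # the /models/available example response from above
    "supported_models": ["v2-medium-256", "v2-large-384", "v1-multilingual-384"],
    "downloaded_models": ["v2-medium-256", "v2-large-384"],
    "loaded_model": "v2-medium-256",
    "model_info": [
        {"v2-medium-256": {"speed": 5, "accuracy": 3, "frame_resolution": 256,
                           "size_gb": 0.6,
                           "compatible_os": ["mac-arm", "mac-x86", "windows"],
                           "is_multilingual": False}},
    ],
    "current_system": "mac-arm",
}

# Merge the single-key objects into one {model_key: info} dict.
model_info = {k: v for entry in response["model_info"] for k, v in entry.items()}

# Models that are downloaded but not loaded can be switched to with
# POST /models/load without any download step.
switchable = [m for m in response["downloaded_models"]
              if m != response["loaded_model"]]
print(switchable)  # → ['v2-large-384']
```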

POST /models/load

Switches to a different visual model. The model must already be downloaded. This clears loaded analysis data from memory — you’ll need to reload it afterwards. Example request:
curl -X POST http://localhost:6699/api/v1/models/load \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model_key": "v2-large-384"}'
Response:
{
  "message": "Model changed from V2 Medium to V2 Large high-res in 3.24s",
  "loaded_model": "v2-large-384"
}
Status | Meaning
200 | Model switched, already active, or still loading from a previous switch
400 | Model key invalid or not downloaded
500 | Loading failed

Media Metadata

POST /media/metadata

Returns file properties (duration, FPS, timecode) and analysis status for each file. Use it to check which files have been analyzed and to get their hash_str identifiers. Example request:
curl -X POST http://localhost:6699/api/v1/media/metadata \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "cache_dir": "/Users/me/JumperAnalysis",
    "media_paths": [
      "/Videos/interview.mp4",
      "/Videos/photo.jpg",
      "/Audio/voiceover.mp3"
    ]
  }'
Response:
{
  "media_properties": {
    "/Videos/interview.mp4": {
      "video_cached": true,
      "audio_cached": true,
      "runtime": "00:02:30",
      "fps": 30,
      "media_path": "/Videos/interview.mp4",
      "timecode": "00:00:00:00",
      "hash_str": "1e09e4953de0471b"
    },
    "/Videos/photo.jpg": {
      "video_cached": true,
      "audio_cached": false,
      "runtime": "00:00:01",
      "fps": 0,
      "media_path": "/Videos/photo.jpg",
      "timecode": "00:00:00:00",
      "hash_str": "fba6b4a2d2cf672c"
    },
    "/Audio/voiceover.mp3": {
      "video_cached": false,
      "audio_cached": true,
      "runtime": "00:00:03",
      "fps": 0,
      "media_path": "/Audio/voiceover.mp3",
      "timecode": "00:00:00:00",
      "hash_str": "485535a3914b2d02"
    }
  }
}
Fields explained:
  • video_cached — true if visual analysis data exists (for videos/images)
  • audio_cached — true if a transcription exists (for videos/audio)
  • hash_str — unique identifier for this file, used internally and in other endpoints
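A common use of this endpoint is deciding what still needs analysis. A sketch over the example response above (trimmed to the relevant fields):

```python
media_properties = {  # trimmed /media/metadata example response
    "/Videos/interview.mp4": {"video_cached": True, "audio_cached": True,
                              "hash_str": "1e09e4953de0471b"},
    "/Videos/photo.jpg":     {"video_cached": True, "audio_cached": False,
                              "hash_str": "fba6b4a2d2cf672c"},
    "/Audio/voiceover.mp3":  {"video_cached": False, "audio_cached": True,
                              "hash_str": "485535a3914b2d02"},
}

# Files lacking visual analysis are candidates for visual_media_paths;
# files lacking a transcription are candidates for transcription_jobs.
# A real client would also skip combinations that don't apply (images
# have no audio track, audio files have no visual frames).
needs_visual = [p for p, props in media_properties.items()
                if not props["video_cached"]]
needs_audio = [p for p, props in media_properties.items()
               if not props["audio_cached"]]
print(needs_visual)  # → ['/Audio/voiceover.mp3']
print(needs_audio)   # → ['/Videos/photo.jpg']
```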

Analysis

POST /analyze

Starts analyzing media files. This is asynchronous — it returns immediately with a task_id, and the actual work runs in the background. You can combine visual analysis, transcription, and face clustering in one call.

Batching matters: one request with 50 files is much faster than 50 one-file requests on the same backend, because Jumper loads the ML models per analysis request.

Only one analysis task can run at a time per backend instance. A new /analyze request first tries to stop any current task. If the previous task is still unwinding, the new request returns 409.

Example request — visual + transcription:
curl -X POST http://localhost:6699/api/v1/analyze \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "cache_dir": "/Users/me/JumperAnalysis",
    "visual_media_paths": [
      "/Videos/interview.mp4",
      "/Videos/photo.jpg"
    ],
    "transcription_jobs": [
      {"path": "/Videos/interview.mp4", "language": "english"},
      {"path": "/Audio/voiceover.mp3", "language": "english"}
    ]
  }'
Example request — with face clustering:
curl -X POST http://localhost:6699/api/v1/analyze \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "cache_dir": "/Users/me/JumperAnalysis",
    "visual_media_paths": ["/Videos/interview.mp4"],
    "transcription_jobs": [],
    "face_clustering_jobs": [
      {"cluster_job_name": "Interview Faces", "face_media_paths": ["/Videos/interview.mp4"]}
    ]
  }'
Response (202):
{
  "message": "Analysis started",
  "task_id": "550e8400-e29b-41d4-a716-446655440000"
}
Tracking progress: Connect to the SocketIO server at http://localhost:6699, emit join with the returned task_id, then listen for progress events:
{
  "progress": 45.5,
  "video_path": "/Videos/interview.mp4",
  "done": false,
  "type": "video"
}
Status | Meaning
202 | Analysis started
400 | No valid media files supplied
409 | Another analysis task is still stopping or already active
503 | Models still loading, try again shortly

POST /analyze/cancel

Cancels any running analysis task.
curl -X POST http://localhost:6699/api/v1/analyze/cancel \
  -H "X-License-Key: YOUR_KEY"
Response:
{"message": "Cancellation requested"}

Analysis Data Management

Before you can search, the analysis data needs to be loaded into memory. There are three options:

POST /analysis-data/load

Loads all analysis data from the folder into memory — visual embeddings, people metadata, and transcriptions. Best when you want to search across everything.
curl -X POST http://localhost:6699/api/v1/analysis-data/load \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"cache_dir": "/Users/me/JumperAnalysis"}'
Response:
{"message": "Analysis data loaded in 0.45s"}

POST /analysis-data/load-for-media

Loads visual analysis data for specific video/image files only. More efficient when working with a subset. Skips files already in memory. This endpoint does not load transcriptions. Use /analysis-data/load to load everything, or /analysis-data/load-transcriptions for transcript-only workflows.
curl -X POST http://localhost:6699/api/v1/analysis-data/load-for-media \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "cache_dir": "/Users/me/JumperAnalysis",
    "media_paths": ["/Videos/interview.mp4"]
  }'
Response:
{"message": "Loaded analysis data for 1 files in 0.12s"}

POST /analysis-data/load-transcriptions

Loads transcription data into memory separately. This is useful if you only need transcript search and don’t want to load visual analysis data. Transcriptions are also loaded automatically by /analysis-data/load.
curl -X POST http://localhost:6699/api/v1/analysis-data/load-transcriptions \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"cache_dir": "/Users/me/JumperAnalysis"}'
Response:
{"message": "Loaded 123 transcriptions in 0.09s"}

Search

All search endpoints require analysis data to be loaded into memory first (see above). The visual search endpoints (/search/text, /search/image, /search/frame) return the same match structure and order results best-first. Jumper does not return similarity scores in the response payload. The optional exclude field is a soft ranking signal, not a hard filter.

POST /search/text

Semantic visual search — describe what you’re looking for in plain language. Example request:
curl -X POST http://localhost:6699/api/v1/search/text \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "dog on a beach",
    "exclude": ["water", "swimming pool"],
    "cache_dir": "/Users/me/JumperAnalysis",
    "media_paths": ["/Videos/interview.mp4"],
    "max_results": 10
  }'
Response:
{
  "matches": [
    {
      "frame_idx": "76",
      "timestamp": "00:01:16",
      "image": "/9j/4AAQSkZJRgABAQAAAQABAAD...",
      "scene_start_timestamp": "00:01:10",
      "scene_end_timestamp": "00:01:19",
      "original_index": 0,
      "hash_str": "1e09e4953de0471b",
      "video_path": "/Videos/interview.mp4"
    },
    {
      "frame_idx": "107",
      "timestamp": "00:01:47",
      "image": "/9j/4AAQSkZJRgABAQAAAQABAAD...",
      "scene_start_timestamp": "00:01:43",
      "scene_end_timestamp": "00:01:52",
      "original_index": 1,
      "hash_str": "1e09e4953de0471b",
      "video_path": "/Videos/interview.mp4"
    }
  ]
}
frame_idx is the frame number on the 1-FPS embedding grid, returned as a string. original_index is the result’s position in the underlying ranking before per-video grouping.
Parameters:
Parameter | Type | Default | Description
query | string | required | What you’re looking for
cache_dir | string | required | Analysis data folder
media_paths | string[] | [] | Restrict to these files
max_results | int | 50 | Maximum matches to return
search_all | bool | false | Search all loaded media (ignores media_paths)
text_weight | number | 1 | Text similarity weight (advanced)
exclude | string[] | [] | Visual concepts to softly push down in the ranking
people_filter | object[] | [] | Only show results containing specific people
Filtering by people: If you’ve run face clustering, you can restrict results to frames containing specific people:
{
  "query": "waving hands",
  "cache_dir": "/Users/me/JumperAnalysis",
  "media_paths": ["/Videos/interview.mp4"],
  "people_filter": [
    {"cluster_job_name": "Interview Faces", "person_name": "Rodrigo"}
  ]
}
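Match timestamps come back as HH:MM:SS strings and frame_idx as a string, so numeric work needs a small conversion step. A sketch (the helper name is an assumption):

```python
def ts_to_seconds(ts):
    """Convert an 'HH:MM:SS' timestamp string to integer seconds."""
    h, m, s = (int(part) for part in ts.split(":"))
    return h * 3600 + m * 60 + s

match = {  # one entry from the /search/text example response above
    "frame_idx": "76",
    "timestamp": "00:01:16",
    "scene_start_timestamp": "00:01:10",
    "scene_end_timestamp": "00:01:19",
}

frame = int(match["frame_idx"])  # 1-FPS grid, so frame 76 ≈ second 76
start = ts_to_seconds(match["scene_start_timestamp"])
end = ts_to_seconds(match["scene_end_timestamp"])
print(frame, start, end, end - start)  # → 76 70 79 9
```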

POST /search/image

Find moments visually similar to a reference image. Optionally combine with a text query. Example request:
curl -X POST http://localhost:6699/api/v1/search/image \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "image_path": "/Images/reference.jpg",
    "cache_dir": "/Users/me/JumperAnalysis",
    "media_paths": ["/Videos/interview.mp4"],
    "max_results": 10
  }'
Parameters:
Parameter | Type | Default | Description
image_path | string | required | Path to the reference image
query | string | optional | Text query to combine with the image
cache_dir | string | required | Analysis data folder
media_paths | string[] | [] | Restrict to these files
max_results | int | 50 | Maximum matches to return
search_all | bool | false | Search all loaded media
exclude | string[] | [] | Visual concepts to softly push down in the ranking. Requires query.

POST /search/frame

Find moments similar to a specific frame in a video — a “find more like this” search. Example request:
curl -X POST http://localhost:6699/api/v1/search/frame \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "media_path": "/Videos/interview.mp4",
    "time_seconds": 38,
    "cache_dir": "/Users/me/JumperAnalysis",
    "media_paths": ["/Videos/interview.mp4"],
    "max_results": 10
  }'
Parameters:
Parameter | Type | Default | Description
media_path | string | required | Video to extract the reference frame from
time_seconds | number | required | Timestamp in seconds
query | string | optional | Text query to combine with the frame
cache_dir | string | required | Analysis data folder
media_paths | string[] | [] | Restrict to these files
max_results | int | 50 | Maximum matches to return
search_all | bool | false | Search all loaded media
exclude | string[] | [] | Visual concepts to softly push down in the ranking. Requires query.

POST /search/transcript

Search through loaded transcriptions for segments containing a query string. Uses case-insensitive substring matching. Example request:
curl -X POST http://localhost:6699/api/v1/search/transcript \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "hello world",
    "cache_dir": "/Users/me/JumperAnalysis",
    "max_results": 10
  }'
Response:
{
  "matches": [
    {
      "media_path": "/Videos/interview.mp4",
      "hash_str": "1e09e4953de0471b",
      "start_seconds": 12.5,
      "end_seconds": 15.2,
      "text": "And she said hello world to the audience",
      "start_timestamp": "00:00:12",
      "end_timestamp": "00:00:15"
    }
  ]
}
Parameters:
Parameter | Type | Default | Description
query | string | required | Text to search for (case-insensitive substring match)
cache_dir | string | required | Analysis data folder
media_paths | string[] | [] | Restrict to these files
max_results | int | 50 | Maximum matches to return
search_all | bool | true | Search across all loaded transcriptions

Transcriptions

POST /transcriptions

Returns speech transcriptions for media files that have been transcribed via /analyze. Example request:
curl -X POST http://localhost:6699/api/v1/transcriptions \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "cache_dir": "/Users/me/JumperAnalysis",
    "media_paths": ["/Audio/voiceover.mp3", "/Videos/interview.mp4"]
  }'
Response:
{
  "transcriptions": {
    "/Audio/voiceover.mp3": [
      [0.0, " One, two, three, testing.", 2.52, "/Audio/voiceover.mp3"]
    ],
    "/Videos/interview.mp4": [
      [0.0, " Welcome to today's discussion.", 3.84, "/Videos/interview.mp4"],
      [3.84, " We'll be talking about...", 7.12, "/Videos/interview.mp4"]
    ]
  }
}
Each segment is an array: [start_seconds, text, end_seconds, media_path]
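Because each segment is a positional array, it is often convenient to convert it to named fields before further processing. A sketch (the helper name is an assumption):

```python
def parse_segment(segment):
    """Unpack a [start_seconds, text, end_seconds, media_path] segment array."""
    start, text, end, media_path = segment
    return {"start": start, "end": end,
            "text": text.strip(),  # segments often carry a leading space
            "media_path": media_path}

segments = [  # from the /transcriptions example response above
    [0.0, " Welcome to today's discussion.", 3.84, "/Videos/interview.mp4"],
    [3.84, " We'll be talking about...", 7.12, "/Videos/interview.mp4"],
]

parsed = [parse_segment(s) for s in segments]
print(parsed[0]["text"])  # → Welcome to today's discussion.
```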

Thumbnails

POST /thumbnails

Get base64-encoded JPEG thumbnail images for specific media/timestamp pairs. Example request:
curl -X POST http://localhost:6699/api/v1/thumbnails \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "cache_dir": "/Users/me/JumperAnalysis",
    "requests": [
      {"media_path": "/Videos/interview.mp4", "time_seconds": 38},
      {"media_path": "/Videos/interview.mp4", "time_seconds": 76}
    ]
  }'
Response:
{
  "thumbnails": [
    {"thumbnail": "/9j/4AAQSkZJRg..."},
    {"thumbnail": "/9j/4AAQSkZJRg..."}
  ]
}
Each thumbnail is a base64-encoded JPEG string, or null for audio files.
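Decoding a thumbnail is a straight base64-to-bytes step. A sketch (the payload below is a stand-in, not a real JPEG):

```python
import base64
import os
import tempfile

def save_thumbnail(thumbnail, path):
    """Decode one base64 JPEG string from the response and write it to disk.
    Returns False for audio files, where the API sends null/None instead."""
    if thumbnail is None:
        return False
    with open(path, "wb") as f:
        f.write(base64.b64decode(thumbnail))
    return True

# Stand-in payload: real responses begin with the JPEG magic bytes \xff\xd8
# (which is why the base64 strings all start with "/9j/").
fake = base64.b64encode(b"\xff\xd8fake-jpeg-bytes").decode("ascii")
out = os.path.join(tempfile.gettempdir(), "thumb.jpg")
print(save_thumbnail(fake, out))  # → True
print(save_thumbnail(None, out))  # → False
```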

POST /thumbnails/scene

Get a strip of thumbnails spanning a time range. Useful for timeline scrubbers. Example request:
curl -X POST http://localhost:6699/api/v1/thumbnails/scene \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "cache_dir": "/Users/me/JumperAnalysis",
    "media_path": "/Videos/interview.mp4",
    "start_time": "00:01:00",
    "end_time": "00:02:00",
    "hash_str": "1e09e4953de0471b"
  }'
Response:
{
  "scene_thumbnails": [
    ["/9j/4AAQSkZJRg...", "00:01:00"],
    ["/9j/4AAQSkZJRg...", "00:01:01"],
    ["/9j/4AAQSkZJRg...", "00:01:02"]
  ]
}
Each entry is [base64_jpeg, timestamp]. For ranges longer than 100 seconds, Jumper samples 100 evenly-spaced frames instead of one per second.
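The sampling rule can be mirrored client-side to predict how many entries come back. This is a sketch of the documented rule as stated above, not the backend's implementation:

```python
def expected_sample_seconds(start, end, max_frames=100):
    """Predict sampled second offsets for a [start, end] range (whole seconds).

    One frame per second for ranges up to max_frames seconds, otherwise
    max_frames evenly spaced frames. Rounding here is an approximation;
    the backend may pick slightly different frames."""
    duration = end - start
    if duration <= max_frames:
        return [start + i for i in range(duration + 1)]
    step = duration / (max_frames - 1)
    return [round(start + i * step) for i in range(max_frames)]

print(len(expected_sample_seconds(60, 120)))   # → 61
print(len(expected_sample_seconds(0, 1000)))   # → 100
```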

Face Clustering

Jumper detects faces across video frames and automatically groups them by identity. These endpoints let you inspect, name, and refine those groups.

GET /faces/clusters

Lists all clustering jobs and their statistics. Example request:
curl -H "X-License-Key: YOUR_KEY" \
  "http://localhost:6699/api/v1/faces/clusters?cache_dir=/Users/me/JumperAnalysis"
Response:
{
  "cluster_jobs": ["Interview Faces"],
  "jobs": [
    {
      "name": "Interview Faces",
      "total_entries": 206,
      "num_clusters": 2,
      "noise_entries": 3,
      "cluster_ids": ["a1b2c3d4-...", "e5f6a7b8-..."],
      "face_storage_version": 2,
      "media_hashes": ["1e09e4953de0471b"]
    }
  ]
}
face_storage_version reports the on-disk face storage format for the job. Newer jobs use the packed format; older legacy jobs may not be mutable for reclustering or merges.

POST /faces/clusters/samples

Get sample face thumbnail images for each cluster. Useful for building a “who is this?” UI. Example request:
curl -X POST http://localhost:6699/api/v1/faces/clusters/samples \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "cache_dir": "/Users/me/JumperAnalysis",
    "cluster_job_name": "Interview Faces",
    "limit_per_cluster": 4,
    "max_clusters": 50
  }'
Response:
{
  "cluster_job_name": "Interview Faces",
  "num_clusters": 2,
  "num_clusters_after_filter": 2,
  "total_entries": 206,
  "clusters": [
    {
      "cluster_id": "a1b2c3d4-...",
      "name": "Rodrigo",
      "size": 120,
      "sample_faces": ["/9j/4AAQ...", "/9j/4AAQ...", "/9j/4AAQ...", "/9j/4AAQ..."]
    },
    {
      "cluster_id": "e5f6a7b8-...",
      "name": "Carlos",
      "size": 83,
      "sample_faces": ["/9j/4AAQ...", "/9j/4AAQ...", "/9j/4AAQ...", "/9j/4AAQ..."]
    }
  ]
}
Parameters:
Parameter | Type | Default | Description
cache_dir | string | required | Analysis data folder
cluster_job_name | string | required | Which clustering job to query
limit_per_cluster | int | 12 | Max face thumbnails per cluster
min_cluster_size | int | 1 | Skip clusters smaller than this
max_cluster_size | int | 1000000000 | Skip clusters larger than this
max_clusters | int | 100 | Maximum clusters to return
include_noise | bool | false | Include unassigned faces in response

POST /faces/clusters/faces

Get paginated face images for specific cluster(s). For browsing all faces in a cluster. Example request:
curl -X POST http://localhost:6699/api/v1/faces/clusters/faces \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "cache_dir": "/Users/me/JumperAnalysis",
    "cluster_job_name": "Interview Faces",
    "cluster_id": "a1b2c3d4-...",
    "limit": 100,
    "offset": 0
  }'
Parameters:
Parameter | Type | Default | Description
cluster_id | string | - | Single cluster to query
cluster_ids | string[] | - | Multiple clusters (overrides cluster_id)
limit | int | 500 | Page size (max 2000)
offset | int | 0 | Skip this many entries

PUT /faces/clusters/names

Assign human-readable names to clusters. Example request:
curl -X PUT http://localhost:6699/api/v1/faces/clusters/names \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "cache_dir": "/Users/me/JumperAnalysis",
    "cluster_job_name": "Interview Faces",
    "assignments": [
      {"cluster_id": "a1b2c3d4-...", "name": "Rodrigo"},
      {"cluster_id": "e5f6a7b8-...", "name": "Carlos"}
    ]
  }'
Response:
{
  "updated": 2,
  "changes": {
    "a1b2c3d4-...": "Rodrigo",
    "e5f6a7b8-...": "Carlos"
  }
}

POST /faces/recluster

Re-runs face clustering with different parameters. Useful for tuning how aggressively faces are grouped. Runs asynchronously. Example request:
curl -X POST http://localhost:6699/api/v1/faces/recluster \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "cache_dir": "/Users/me/JumperAnalysis",
    "cluster_job_name": "Interview Faces",
    "eps": 0.55,
    "min_samples": 3
  }'
Response (202):
{
  "message": "Re-clustering started",
  "task_id": "f1e2d3c4-..."
}
Parameters:
Parameter | Type | Default | Description
eps | float | 0.48 | Clustering sensitivity — lower means stricter grouping (fewer, purer clusters)
min_samples | int | 5 | Minimum faces needed to form a cluster
auto_detect_params | bool | false | Let Jumper pick eps/min_samples automatically
clear_names | bool | false | Clear all existing cluster names

POST /faces/clusters/modify

Merge clusters or move individual faces between clusters. Useful for correcting mistakes. Example — merge two clusters:
curl -X POST http://localhost:6699/api/v1/faces/clusters/modify \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "cache_dir": "/Users/me/JumperAnalysis",
    "cluster_job_name": "Interview Faces",
    "merges": [
      {
        "cluster_ids": ["small-cluster-id", "target-cluster-id"],
        "target_cluster_id": "target-cluster-id"
      }
    ],
    "moves": []
  }'

Watch Folders

Watch folders let Jumper automatically analyze new media files as they appear in a directory.

GET /watch-folders

Lists all configured watch folders and the background service status.
curl -H "X-License-Key: YOUR_KEY" http://localhost:6699/api/v1/watch-folders
Response:
{
  "watch_folders": [
    {
      "id": "wf-abc123",
      "folder_path": "/Volumes/Media/Ingest",
      "enabled": true,
      "enable_visual_analysis": true,
      "enable_audio_analysis": true,
      "enable_face_analysis": false,
      "audio_language": "english",
      "cluster_job_name": null,
      "face_eps": 0.48,
      "face_min_samples": 5,
      "cache_dir": "/Users/me/JumperAnalysis",
      "excluded_extensions": [".jpg", ".jpeg"],
      "excluded_filename_globs": ["*_a04.mxf"],
      "created_at": 1707840000,
      "last_poll_time": 1707840060,
      "files_analyzed_count": 24
    }
  ],
  "service_status": {
    "state": "running",
    "current_folder": null,
    "current_file": null,
    "files_pending": 0,
    "files_processed_this_session": 24,
    "last_error": null,
    "paused_until": null,
    "folder_stats": {}
  }
}

POST /watch-folders

Add a new watch folder. The folder must exist on disk.
curl -X POST http://localhost:6699/api/v1/watch-folders \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "folder_path": "/Volumes/Media/Ingest",
    "cache_dir": "/Users/me/JumperAnalysis",
    "enabled": true,
    "enable_visual_analysis": true,
    "enable_audio_analysis": true,
    "audio_language": "english",
    "excluded_extensions": [".jpg", ".jpeg"],
    "excluded_filename_globs": ["*_a04.mxf"]
  }'
Response (201):
{
  "message": "Watch folder added: /Volumes/Media/Ingest",
  "watch_folder": { ... }
}
Parameters:
Parameter | Type | Default | Description
folder_path | string | required | Absolute path to watch
cache_dir | string | optional | Where to store analysis data
enabled | bool | true | Start watching immediately
enable_visual_analysis | bool | true | Analyze visual content for searching
enable_audio_analysis | bool | false | Transcribe audio
enable_face_analysis | bool | false | Detect and cluster faces
audio_language | string | "english" | Language hint for transcription
cluster_job_name | string | - | Required if face analysis is enabled
face_eps | float | 0.48 | Clustering sensitivity for face grouping
face_min_samples | int | 5 | Minimum faces needed to form a cluster
excluded_extensions | array[string] | [] | File extensions to skip; normalized to lowercase with a leading dot
excluded_filename_globs | array[string] | [] | Filename-only glob patterns to skip, matched case-insensitively
excluded_filename_globs apply to the basename only, not the relative or absolute path.
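The exclusion rules can be reproduced client-side to predict which files a watch folder will skip. This sketch is my reading of the documented rules, not the backend's code:

```python
import fnmatch
import os

def is_excluded(path, excluded_extensions, excluded_filename_globs):
    """Apply the documented watch-folder exclusions: extensions are compared
    lowercased with a leading dot; globs match the basename only,
    case-insensitively (never the full path)."""
    name = os.path.basename(path)
    ext = os.path.splitext(name)[1].lower()
    if ext in {e.lower() for e in excluded_extensions}:
        return True
    return any(fnmatch.fnmatch(name.lower(), g.lower())
               for g in excluded_filename_globs)

print(is_excluded("/Volumes/Media/Ingest/shot.JPG", [".jpg"], []))            # → True
print(is_excluded("/Volumes/Media/Ingest/CLIP_A04.MXF", [], ["*_a04.mxf"]))   # → True
print(is_excluded("/Volumes/Media/Ingest/clip.mov", [".jpg"], ["*_a04.mxf"])) # → False
```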

PUT /watch-folders/{watch_folder_id}

Update settings for an existing watch folder. Only include the fields you want to change.
curl -X PUT http://localhost:6699/api/v1/watch-folders/wf-abc123 \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"enabled": false, "excluded_extensions": [], "excluded_filename_globs": ["*_proxy.mov"]}'

DELETE /watch-folders/{watch_folder_id}

Remove a watch folder. Does not delete any analysis data.
curl -X DELETE http://localhost:6699/api/v1/watch-folders/wf-abc123 \
  -H "X-License-Key: YOUR_KEY"

POST /watch-folders/service/start

Start the background service that monitors watch folders. This uses the enabled watch-folder configuration already stored in settings; it does not require prior UI interaction in the current session.
curl -X POST http://localhost:6699/api/v1/watch-folders/service/start \
  -H "X-License-Key: YOUR_KEY"

POST /watch-folders/service/stop

Stop the background service.
curl -X POST http://localhost:6699/api/v1/watch-folders/service/stop \
  -H "X-License-Key: YOUR_KEY"

GET /watch-folders/service/status

Check if the service is running.
curl -H "X-License-Key: YOUR_KEY" \
  http://localhost:6699/api/v1/watch-folders/service/status
Response:
{
  "state": "running",
  "current_folder": null,
  "current_file": null,
  "files_pending": 0,
  "files_processed_this_session": 24,
  "last_error": null,
  "paused_until": null,
  "folder_stats": {}
}

Cache Paths

POST /cache-paths

Get the visual and audio cache folder paths for a specific media file. Useful for inspecting or managing analysis data on disk.
curl -X POST http://localhost:6699/api/v1/cache-paths \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "cache_dir": "/Users/me/JumperAnalysis",
    "media_path": "/Videos/interview.mp4"
  }'
Response:
{
  "video_cache_path": "/Users/me/JumperAnalysis/1e09e4953de0471b",
  "audio_cache_path": "/Users/me/JumperAnalysis/1e09e4953de0471b_audio",
  "hash_str": "1e09e4953de0471b"
}
Paths are null if the corresponding analysis hasn’t been run yet.

Export

POST /export/clips

Export trimmed video clips to a folder using ffmpeg. Example request:
curl -X POST http://localhost:6699/api/v1/export/clips \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "clips": [
      {
        "source_path": "/Videos/interview.mp4",
        "start_seconds": 10,
        "end_seconds": 25
      },
      {
        "source_path": "/Videos/interview.mp4",
        "start_seconds": 60,
        "end_seconds": 90,
        "subfolder": "highlights"
      }
    ],
    "output_dir": "/Users/me/Exports",
    "copy_codec": true
  }'
Response:
{
  "results": [
    {"success": true, "output_path": "/Users/me/Exports/interview_10-25.mp4"},
    {"success": true, "output_path": "/Users/me/Exports/highlights/interview_60-90.mp4"}
  ],
  "summary": "2/2 clips exported successfully"
}
Parameters:
Parameter | Type | Default | Description
clips | object[] | required | Array of clip definitions
clips[].source_path | string | required | Source media file path
clips[].start_seconds | number | required | Start time in seconds
clips[].end_seconds | number | required | End time in seconds
clips[].subfolder | string | optional | Subfolder within output_dir
output_dir | string | required | Directory to write clips to
copy_codec | bool | true | Stream copy (fast, no re-encode)
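Search results pair naturally with this endpoint. A sketch turning /search/transcript matches into clip definitions (the helper name and padding value are illustrative choices, not part of the API):

```python
def matches_to_clips(matches, pad_seconds=1.0, subfolder=None):
    """Turn /search/transcript matches into /export/clips clip definitions,
    padding each segment slightly so speech isn't cut mid-word."""
    clips = []
    for m in matches:
        clip = {"source_path": m["media_path"],
                "start_seconds": max(0.0, m["start_seconds"] - pad_seconds),
                "end_seconds": m["end_seconds"] + pad_seconds}
        if subfolder:
            clip["subfolder"] = subfolder
        clips.append(clip)
    return clips

matches = [{"media_path": "/Videos/interview.mp4",  # from the example above
            "start_seconds": 12.5, "end_seconds": 15.2}]
print(matches_to_clips(matches, subfolder="highlights"))
```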

POST /export/premiere-xml

Generate a Premiere Pro compatible XML sequence file (XMEML v4). Can be imported into Premiere Pro, DaVinci Resolve, Avid, and other NLEs. Example request:
curl -X POST http://localhost:6699/api/v1/export/premiere-xml \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "sequence_name": "Interview Highlights",
    "clips": [
      {"source_path": "/Videos/interview.mp4", "start_seconds": 10, "end_seconds": 25},
      {"source_path": "/Videos/interview.mp4", "start_seconds": 60, "end_seconds": 90}
    ],
    "output_path": "/Users/me/Exports/highlights.xml",
    "timebase": 24
  }'
Response:
{
  "output_path": "/Users/me/Exports/highlights.xml",
  "clip_count": 2,
  "total_duration_seconds": 45.0
}
Parameters:
Parameter | Type | Default | Description
sequence_name | string | "Untitled Sequence" | Name of the sequence in the NLE
clips | object[] | required | Array of clip definitions
clips[].source_path | string | required | Source media file path
clips[].start_seconds | number | required | Start time in seconds
clips[].end_seconds | number | required | End time in seconds
output_path | string | required | Where to write the XML file
timebase | int | 24 | Sequence frame rate (e.g. 24, 25, 30)

POST /export/transcript

Export transcript segments to a file (TXT, CSV, DOCX, or PDF). Example request:
curl -X POST http://localhost:6699/api/v1/export/transcript \
  -H "X-License-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "segments": [
      [0.32, "Hello world.", 5.84],
      [5.84, "Another line.", 11.20]
    ],
    "format": "txt",
    "display_name": "My Transcript",
    "output_path": "/Users/me/Exports/transcript.txt"
  }'
Response:
{
  "output_path": "/Users/me/Exports/transcript.txt",
  "format": "txt",
  "row_count": 2
}
Parameters:
Parameter | Type | Default | Description
segments | array | required | List of [start_seconds, text, end_seconds] arrays
format | string | "txt" | Export format: txt, csv, docx, pdf
display_name | string | "Transcript" | Name shown in header/filename
output_path | string | ~/Desktop/{display_name}.{format} | Destination file path
include_silences | bool | true | Include silence gap rows between segments
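Note that the segment shape here differs from what /transcriptions returns ([start, text, end, media_path] there versus [start, text, end] here). A sketch of the conversion (the helper name is an assumption):

```python
def to_export_segments(transcription_segments):
    """Drop the trailing media_path from /transcriptions segments to get the
    [start_seconds, text, end_seconds] shape /export/transcript expects."""
    return [[start, text, end]
            for start, text, end, _path in transcription_segments]

raw = [  # from the /transcriptions example response above
    [0.0, " One, two, three, testing.", 2.52, "/Audio/voiceover.mp3"],
]
print(to_export_segments(raw))  # → [[0.0, ' One, two, three, testing.', 2.52]]
```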

Error Format

Error responses include at least an error field. Some endpoint-specific errors include additional fields; use the generated endpoint reference for exact response schemas.
{"error": "Description of what went wrong"}
Common status codes:
Code | Meaning
400 | Bad request (missing or invalid parameters)
401 | No license key provided
403 | Not a Pro license
404 | Resource not found (e.g. watch folder ID)
500 | Server error
503 | Models still loading — retry shortly

SocketIO Progress Tracking

For long-running operations (/analyze, /faces/recluster), connect to the SocketIO server at http://localhost:6699 and join the room matching the returned task_id: emit join with the task_id, then listen for progress events.
Event: progress
{
  "progress": 45.5,
  "video_path": "/Videos/interview.mp4",
  "done": false,
  "type": "video"
}
Common fields:
  • progress — numeric percentage
  • video_path — the file currently being processed, or null for some clustering updates
  • done — true when the task is complete
  • type — "video" for visual analysis, "speech" for transcription
  • is_cluster_job — present for face-clustering progress
  • cluster_job_name — present for face-clustering progress
  • cluster_media_hashes — present on some clustering completion events
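A sketch of a client-side progress handler. The formatting function is pure and illustrative; the wiring shown in comments is the standard python-socketio pattern and is not executed here:

```python
def summarize_progress(event):
    """Format one progress event into a log line; returns (line, done)."""
    kind = event.get("type", "task")
    path = event.get("video_path") or "(no file)"
    line = f"[{kind}] {event.get('progress', 0):.1f}% {path}"
    return line, bool(event.get("done"))

# Wiring sketch with the python-socketio client (not run here):
#   import socketio
#   sio = socketio.Client()
#   sio.on("progress", lambda e: print(summarize_progress(e)[0]))
#   sio.connect("http://localhost:6699")
#   sio.emit("join", task_id)   # task_id from /analyze or /faces/recluster

line, done = summarize_progress({"progress": 45.5,
                                 "video_path": "/Videos/interview.mp4",
                                 "done": False, "type": "video"})
print(line)  # → [video] 45.5% /Videos/interview.mp4
```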
Last modified on May 7, 2026