Overview
Agentic editing lets AI agents operate Jumper directly (searching your footage, retrieving clips, and triggering workflow actions) through an MCP server. Jumper integrates with Claude Desktop (Chat, Cowork, and Code) and OpenAI Codex Desktop, so you can give an agent a natural-language task and have it break the work into steps and execute them. Instead of clicking through searches and exports yourself, you describe what you want in plain language. The agent queries Jumper’s backend, orchestrates the workflow, and produces the result (clips, sequences, exports) without you having to perform each step manually.
What agents can do
Same as you
Agents can use Jumper much like you do through the interface:
- Search visually across analyzed footage using natural language
- Search transcriptions for spoken words and phrases
- Retrieve clip segments with start and end times
- Find similar clips by text, image, or frame
- Find clips by face recognition (people search)
- Trigger workflow actions such as exporting a sequence to Premiere Pro, Final Cut Pro, DaVinci Resolve, or Avid Media Composer
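As an illustrative sketch of the kind of request an agent makes over MCP (the tool name, parameters, and result fields below are assumptions for illustration, not Jumper’s actual API), a visual search and its metadata-only response might look like this:

```python
# Illustrative sketch only: the tool name, parameters, and result fields
# are assumptions, not Jumper's actual MCP API.

def search_visual(query: str, limit: int = 20) -> list[dict]:
    """Stand-in for an MCP tool call; a real client would send this
    request to the local Jumper server and receive metadata back."""
    # The response contains pointers to media, never the media itself:
    return [
        {"path": "clip_001.mov", "start": "00:00:04:12", "end": "00:00:08:03"},
        {"path": "clip_002.mov", "start": "00:01:15:00", "end": "00:01:19:10"},
    ][:limit]

results = search_visual("Anna smiling")
```

Each result carries a file path and timecodes, which the agent can feed into a follow-up step such as an export.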
Beyond the UI
Because the agent orchestrates Jumper programmatically, it can also do things the normal interface does not:
- Export scenes as individual files to a folder
- Export a set of clips as a sequence for your editing software (Premiere XML, FCPXML, etc.)
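The clip-list-to-sequence idea behind these exports can be sketched as follows. This is a heavily simplified illustration: real Premiere XML (xmeml) and FCPXML documents carry far more structure, and the element layout here is assumed for the example.

```python
# Simplified illustration of turning a clip list into a sequence file.
# Real Premiere XML (xmeml) / FCPXML documents are far more detailed.
import xml.etree.ElementTree as ET

def build_sequence_xml(name: str, clip_paths: list[str]) -> str:
    root = ET.Element("xmeml", version="4")
    seq = ET.SubElement(root, "sequence")
    ET.SubElement(seq, "name").text = name
    for path in clip_paths:
        clip = ET.SubElement(seq, "clipitem")
        ET.SubElement(clip, "pathurl").text = path
    return ET.tostring(root, encoding="unicode")

doc = build_sequence_xml("Anna smiling", ["clip_001.mov", "clip_002.mov"])
```

Your editing software imports the resulting file and rebuilds the sequence by relinking each path to the source media.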
Orchestration
The agent acts as the orchestrator: you give it a complex task, and it breaks it down into smaller steps and runs them in the right order. The agent decides how to search, filter, and export, then executes the workflow end to end.
Example use cases
Agentic editing can speed up routine media production tasks:
- Finding B-roll that matches a script
- Pulling every clip of a certain person across a large library
- Creating sequences of selects for review or rough cuts
- VO + B-roll workflows (e.g. sizzle reels from event footage with a voiceover track)
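The orchestration pattern behind these use cases can be sketched as one natural-language task decomposed into ordered steps (the step names and arguments are illustrative assumptions, not Jumper’s API):

```python
# Sketch of orchestration: one natural-language task broken into ordered
# steps. Step names and arguments are illustrative, not Jumper's API.

def plan(task: str) -> list[dict]:
    # A real agent derives the plan from the task itself; this hard-codes
    # the decomposition for one example request.
    return [
        {"step": "search_visual", "args": {"query": "Anna smiling"}},
        {"step": "export_clips", "args": {"dest": "anna_smiling_clips/"}},
        {"step": "export_sequence", "args": {"format": "premiere_xml"}},
    ]

steps = plan("Find all shots of Anna smiling and export them to Premiere")
```

The agent then executes the steps in order, feeding each step’s metadata into the next, as in the transcript below.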
USER
Use Jumper to find all shots of Anna smiling, export them as individual files, and also export a sequence to Premiere.
Worked for 50s
AGENT
Export finished. The Premiere XML timeline is saved at
~/jumper_test_movies/anna_smiling_sequence.xml
Sequence: Anna smiling
Clips: 20 (~1:20 total duration)
The clips are in ~/jumper_test_movies/anna_smiling_clips/
In Premiere Pro, use File → Import and select the XML file. A sequence named “Anna smiling” will be added with all 20 clips in order. Ensure the source media is imported in your project first, or let Premiere relink if paths differ.
USER
Use Jumper to cut a sizzle reel from the event footage in
/day1_footage/ with the voiceover sizzle_reel_audio.wav.
Worked for 1m 15s
AGENT
Export finished. The sizzle reel with synced voiceover has been created.
Sequence file:
~/day1_footage/sizzle_reel_with_vo.xml
- Clips: 12 (cut to match voiceover timing)
- Voiceover: sizzle_reel_audio.wav (synced)
Parallel workflows
You can run multiple agents at the same time. Each agent can work on a different task, so you can fire off several jobs in parallel and focus on other work while they complete.
On Privacy
Will my footage be uploaded to AI companies? No. Here’s how it works: the agent (Claude, Codex, etc.) talks to Jumper via MCP (an open standard for AI agents to “talk” with software). It sends commands like “search for shots of Anna smiling” or “export clips to this folder.” Jumper runs a local server on your machine (localhost). All the heavy work (visual search, transcription, face recognition) happens on your computer. Your footage never leaves it.

What the agent receives back is metadata only: file paths, timecodes, transcript excerpts, search result lists. The agent orchestrates the workflow; Jumper does the actual analysis locally and returns pointers to where things are, not the footage itself.
Compatibility
Jumper is currently compatible with:
- Claude Desktop: Chat, Cowork, and Code
- OpenAI Codex Desktop
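For reference, Claude Desktop registers MCP servers in its claude_desktop_config.json file. A hypothetical entry for a local Jumper server might look like the following; the server name, command, and path are placeholders, not Jumper’s documented setup.

```json
{
  "mcpServers": {
    "jumper": {
      "command": "/path/to/jumper-mcp-server",
      "args": []
    }
  }
}
```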

