Starts an asynchronous analysis pipeline. Returns immediately with a task_id
that can be used to track progress via SocketIO.
You can run visual analysis, speech transcription, and face clustering in a single call.
For throughput, batch many files into one request instead of sending many single-file requests. Jumper loads ML models per analysis request, so one 50-file request is much faster than 50 one-file requests on the same node.
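The batching advice above can be sketched as a single request carrying many file paths. The endpoint path (`/analyze`), the `files` field, and the `X-License-Key` header name are illustrative assumptions, not the documented API surface; take the exact names from the API reference.

```python
import json
import urllib.request

API_URL = "http://localhost:8000/analyze"  # hypothetical endpoint path

def build_analyze_request(file_paths, license_key):
    """Build ONE batched analysis request instead of one request per file.

    Because models are loaded per analysis request, a single 50-file
    request avoids 49 redundant model loads. Field and header names
    here are assumptions for illustration.
    """
    body = json.dumps({"files": list(file_paths)}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "X-License-Key": license_key,  # assumed header name
        },
        method="POST",
    )

# One request for 50 files, rather than 50 single-file requests.
req = build_analyze_request([f"clip_{i}.mp4" for i in range(50)], "demo-key")
```

Sending the request (and listening for progress over SocketIO) is left out; the point is that the payload carries the whole batch.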
Only one analysis task can run at a time per backend instance. A new analyze
request first asks the current task to stop. If the previous task is still
shutting down, Jumper responds with HTTP 409 (Conflict).
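A client can absorb that 409 by retrying with a short backoff while the previous task finishes stopping. A minimal sketch, where `send` stands in for the actual HTTP call to the analyze endpoint:

```python
import time

def submit_with_retry(send, retries=5, delay=1.0):
    """Retry while the backend answers 409 (previous task still stopping).

    `send` is any callable returning (status_code, body); in practice it
    would perform the HTTP POST to the analyze endpoint.
    """
    for attempt in range(retries):
        status, body = send()
        if status != 409:
            return status, body
        time.sleep(delay * (attempt + 1))  # back off while the old task stops
    raise RuntimeError("backend still busy after retries")

# Stub transport: first attempt hits a 409, second succeeds with a task_id.
responses = iter([(409, None), (200, {"task_id": "abc123"})])
status, body = submit_with_retry(lambda: next(responses), delay=0.0)
```

The linear backoff is a design choice, not something the API mandates; any polite retry policy works, since the 409 only means the previous task has not finished unwinding yet.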
Documentation Index
Fetch the complete documentation index at: https://docs.getjumper.io/llms.txt
Use this file to discover all available pages before exploring further.
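llms.txt files conventionally list pages as markdown links, so discovering the available pages amounts to extracting those links. A minimal sketch, assuming the index follows that markdown-link convention (the sample text below is made up for illustration):

```python
import re

def parse_llms_index(text):
    """Extract (title, url) pairs from an llms.txt-style markdown index.

    Assumes pages are listed as markdown links like "- [Title](https://...)",
    which is the common llms.txt convention; this is a sketch, not a full
    parser for the format.
    """
    return re.findall(r"\[([^\]]+)\]\((https?://[^)]+)\)", text)

sample = """# Jumper
- [Analyze endpoint](https://docs.getjumper.io/analyze)
- [Licensing](https://docs.getjumper.io/licensing)
"""
pages = parse_llms_index(sample)
```

Fetch the real file from https://docs.getjumper.io/llms.txt and feed its text through the same function to enumerate the actual pages.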
Jumper Pro license key, passed via a request header.