This guide walks through a fully local agentic editing setup with Jumper using LM Studio. The LLM agent runs on your machine, with no dependency on cloud AI services and no paid model subscription required.

LM Studio gives you access to many local models. For Jumper, choose a model with Tool Use and Vision support; the Gemma 4 and Qwen 3.5 model families are good options, and this guide uses a Gemma 4 model. For air-gapped networks or absolute privacy requirements, see Agentic editing and data privacy. Once LM Studio and your model are installed, this workflow can run fully locally without an internet connection.

Before you begin

  • Jumper is installed and running locally
  • You have media analyzed in Jumper if you want to test a real search right away
  • You want the model and Jumper tool calls to stay on your machine
Model size matters a lot for local performance. Before downloading a large model, use a tool like canirun.ai or LM Studio’s own memory estimate to sanity-check what your machine can handle.

Step-by-step

1

Download LM Studio

Go to lmstudio.ai and download the build for your operating system.
(Screenshot: LM Studio download page showing the macOS installer)
2

Check which Gemma size fits your machine

Before downloading a large model, search for gemma 4 on canirun.ai and see which sizes are realistic for your hardware. The little red eye icon marks models with Vision support, which is required for the Jumper workflow.
(Screenshot: canirun.ai results for Gemma 4 model sizes)
3

Open LM Studio's model search

Open LM Studio and click the robot icon in the left sidebar (Model Search).
(Screenshot: LM Studio sidebar with Model Search highlighted)
4

Search for a Gemma 4 model and download it

Search for gemma 4 and pick a model that shows the capabilities you want. For Jumper, Tool Use and Vision are the key capabilities.
(Screenshot: LM Studio model search results for Gemma 4 with tool use and vision badges)
5

Choose load parameters

Once the download finishes, open the model picker, turn on Manually choose model load parameters, and then click the model you want to load.
(Screenshot: LM Studio model picker with several Gemma 4 models ready to load)
6

Confirm the load settings

In the load settings view, you will most likely want to increase Context Length. In this example it is set to the maximum value, and Remember settings is enabled so you do not have to repeat this every time you load the model.
(Screenshot: LM Studio load model dialog with Remember settings enabled)
If the model is too slow, runs out of memory, or fails to load, lower the Context Length or move to a smaller model.
When you are happy with the settings, click Load Model.
7

Open the MCP configuration

In LM Studio chat, open Integrations, click the + menu, and choose Edit mcp.json.
(Screenshot: LM Studio Integrations panel with Edit mcp.json selected)
8

Add the Jumper MCP server

Add the Jumper server to mcp.json and save the file:
{
  "mcpServers": {
    "jumper": {
      "url": "http://127.0.0.1:6699/mcp"
    }
  }
}
(Screenshot: LM Studio mcp.json file configured with the Jumper MCP server)
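If LM Studio does not pick up the new server, the most common cause is a JSON syntax error in the edited file. A quick, editor-independent way to verify your edit is to parse it yourself. This is a minimal sketch that only assumes the configuration shown above; where LM Studio actually stores mcp.json depends on your installation, so paste in your own file contents:

```python
import json

# Sanity-check mcp.json contents against the structure this guide uses.
# Any extra servers in your real file are fine; we only look for "jumper".

def check_mcp_config(text):
    """Parse mcp.json contents and return the Jumper server URL."""
    config = json.loads(text)  # raises json.JSONDecodeError on invalid JSON
    url = config["mcpServers"]["jumper"]["url"]
    if url != "http://127.0.0.1:6699/mcp":
        raise ValueError(f"unexpected Jumper URL: {url}")
    return url

sample = """
{
  "mcpServers": {
    "jumper": { "url": "http://127.0.0.1:6699/mcp" }
  }
}
"""
print(check_mcp_config(sample))  # http://127.0.0.1:6699/mcp
```

If this raises json.JSONDecodeError, fix the syntax (a trailing comma is the usual culprit) and save again.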
9

Enable the Jumper integration

After saving, LM Studio reloads its MCP servers. Open Integrations and turn on mcp/jumper.
(Screenshot: LM Studio Integrations panel with mcp/jumper enabled and Jumper tools listed)
10

Make sure Jumper is active in the chat

Open or start a chat and confirm that jumper is attached in the composer before you send a prompt.
(Screenshot: LM Studio chat composer showing the jumper integration attached)
11

Start LM Studio's local server

Open the menu bar icon and choose Start Server on Port 1234.
(Screenshot: LM Studio menu bar menu showing Start Server on Port 1234)
This step is required for a good Jumper workflow. LM Studio can talk to Jumper through MCP, but because of a current LM Studio limitation it cannot reliably inspect Jumper's result thumbnails directly inside the agent chat unless those images are manually pasted into the conversation.
With the local server running, Jumper can ask the loaded model to describe candidate frames in the background and pass those descriptions back to the chat. In practice, that means the agent can actually validate whether the returned shots are good matches instead of guessing from filenames, metadata, or search scores alone.
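Once the server is running, any OpenAI-compatible client can talk to the loaded model on port 1234. As a minimal sketch using only the Python standard library — assuming the default /v1/chat/completions endpoint, and that LM Studio serves the currently loaded model regardless of the model id you send (the "local-model" name below is just a placeholder):

```python
import json
import urllib.request

def build_chat_request(prompt, model="local-model"):
    """Return the JSON body for an OpenAI-style chat completion call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def ask_local_model(prompt, base_url="http://127.0.0.1:1234"):
    """Send a prompt to LM Studio's local server and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Calling ask_local_model("Say hello") should return a short reply from the loaded Gemma model; if it raises a connection error, the server from this step is not running.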
12

Verify that LM Studio can use Jumper

Ask a simple question like Can you use Jumper? or give it a real search request. A healthy setup should answer using the available Jumper tools instead of acting like it has no integration.
(Screenshot: LM Studio chat responding to 'Can you use Jumper?' with Jumper capabilities)

Troubleshooting

  • mcp/jumper does not appear in Integrations: Check that your mcp.json is valid JSON and that Jumper is running locally on http://127.0.0.1:6699.
  • The model behaves like it has no tools: Make sure mcp/jumper is enabled in Integrations and that jumper is attached in the current chat composer.
  • The agent can search but does a poor job judging which shots are the best matches: Make sure LM Studio’s local server is running on http://127.0.0.1:1234.
  • The model is too slow or does not load: Choose a smaller Gemma 4 variant or lower the context length in LM Studio’s load settings.
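Several of the failures above come down to one of the two local ports not listening. A small helper (a sketch using plain Python sockets, with no dependency on the Jumper or LM Studio APIs) can confirm both at once before you dig into configuration files:

```python
import socket

def is_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check the two local endpoints this guide relies on.
for name, port in [("Jumper MCP", 6699), ("LM Studio server", 1234)]:
    status = "listening" if is_port_open("127.0.0.1", port) else "not reachable"
    print(f"{name} on port {port}: {status}")
```

If Jumper's port is not reachable, start Jumper first; if port 1234 is not reachable, repeat the Start Server step above.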

Related

  • Agentic editing: overview of how AI agents use Jumper through MCP
  • Agentic editing and data privacy: what stays local and what the model actually receives
Last modified on April 22, 2026