Before you begin
- Jumper is installed and running locally
- You have media analyzed in Jumper if you want to test a real search right away
- You want the model and Jumper tool calls to stay on your machine
Step-by-step
Download LM Studio
Go to lmstudio.ai and download the build for your operating system.

Check which Gemma size fits your machine
Before downloading a large model, search for gemma 4 on canirun.ai to see which sizes are realistic for your hardware. The little red eye icon marks models with Vision support, which is required for the Jumper workflow.
Open LM Studio's model search
Open LM Studio and click the robot icon in the left sidebar (Model Search).

Search for a Gemma 4 model and download it
Search for gemma 4 and pick a model that lists the capabilities you want. For Jumper, Tool Use and Vision are the key capabilities.
Choose load parameters
Once the download finishes, open the model picker, turn on Manually choose model load parameters, and then click the model you want to load.

Confirm the load settings
In the load settings view, you will most likely want to increase Context Length. In this example it is set to the maximum value, and Remember settings is enabled so you do not have to repeat this every time you load the model.
When you are happy with the settings, click Load Model.

Open the MCP configuration
In LM Studio chat, open Integrations, click the + menu, and choose Edit mcp.json.
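For reference, a minimal Jumper entry in mcp.json can look like the sketch below. It assumes Jumper exposes its MCP server at the local address used elsewhere in this guide; the exact server name and endpoint path may differ, so copy the entry from Jumper's own documentation if it provides one.

```json
{
  "mcpServers": {
    "jumper": {
      "url": "http://127.0.0.1:6699/mcp"
    }
  }
}
```

Save the file with valid JSON; a stray comma or comment will prevent LM Studio from loading the server list.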
Enable the Jumper integration
After saving, LM Studio reloads its MCP servers. Open Integrations and turn on mcp/jumper.
Make sure Jumper is active in the chat
Open or start a chat and confirm that jumper is attached in the composer before you send a prompt.
Start LM Studio's local server
Open the menu bar icon and choose Start Server on Port 1234.
This step is required for a good Jumper workflow. LM Studio can talk to Jumper through MCP, but because of a current LM Studio limitation it cannot reliably inspect Jumper's result thumbnails directly inside the agent chat unless those images are pasted into the conversation manually.

With the local server running, Jumper can ask the loaded model to describe candidate frames in the background and pass those descriptions back to the chat. In practice, that means the agent can actually validate whether the returned shots are good matches instead of guessing from filenames, metadata, or search scores alone.
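As a quick sanity check that the server is up, you can probe its OpenAI-compatible HTTP API. The sketch below assumes the default port 1234 from the step above; it returns the IDs of the models LM Studio is serving, or None if the server is not reachable.

```python
import json
import urllib.error
import urllib.request


def lm_studio_models(base_url="http://127.0.0.1:1234"):
    """Return the model IDs LM Studio is serving, or None if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/v1/models", timeout=3) as resp:
            payload = json.load(resp)
        return [model["id"] for model in payload.get("data", [])]
    except (urllib.error.URLError, OSError):
        # Server not started, wrong port, or blocked by a firewall
        return None
```

If this returns None while LM Studio is open, double-check that Start Server on Port 1234 is actually enabled.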

Troubleshooting
- mcp/jumper does not appear in Integrations: Check that your mcp.json is valid JSON and that Jumper is running locally on http://127.0.0.1:6699.
- The model behaves like it has no tools: Make sure mcp/jumper is enabled in Integrations and that jumper is attached in the current chat composer.
- The agent can search but does a poor job judging which shots are the best matches: Make sure LM Studio's local server is running on http://127.0.0.1:1234.
- The model is too slow or does not load: Choose a smaller Gemma 4 variant or lower the context length in LM Studio's load settings.
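If you suspect your mcp.json edit broke the file, a few lines of Python can check it. This is a sketch: pass the path where LM Studio's Edit mcp.json opened the file on your system, and note that the jumper server name must match whatever name you used in your own config.

```python
import json


def check_mcp_config(path):
    """Report whether an mcp.json file parses and has the expected shape."""
    try:
        with open(path, encoding="utf-8") as f:
            config = json.load(f)
    except json.JSONDecodeError as exc:
        return f"invalid JSON: {exc}"
    if "mcpServers" not in config:
        return "missing top-level 'mcpServers' key"
    if "jumper" not in config["mcpServers"]:
        return "no 'jumper' entry under 'mcpServers'"
    return "ok"
```

Anything other than "ok" points at the line to fix before re-enabling the integration.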
Related
Agentic editing
Overview of how AI agents use Jumper through MCP
Agentic editing and data privacy
What stays local and what the model actually receives