Video Walkthrough #
The Problem #
Every AI model has a knowledge cutoff — the date its training data stops. Anything after that point simply doesn’t exist as far as the model is concerned. For a lot of tasks that’s fine, but the moment you need current information it falls flat.
What we’re doing here is giving your local model a web search tool via MCP, so instead of relying on what it was trained on, it can go out and find the answer in real time.
Prerequisites #
- A local model running (LM Studio, Ollama, etc.)
- A Tavily account — the free tier gives you 1,000 searches per month
- Node.js installed on your machine
Install Node.js:
# Ubuntu/Debian
sudo apt install nodejs npm
# Arch/Manjaro
sudo pacman -S nodejs npm
# Mac
brew install node
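Once installed, confirm that node and npx are both on your PATH, since npx is what actually launches the MCP server:

```shell
# Confirm Node.js and npx are installed; both should print a version number
node --version
npx --version
```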
Examples #
LM Studio #
In LM Studio, navigate to the Local Server section on the left, then click mcp.json at the top.
Clear the existing template and paste in:
{
"mcpServers": {
"tavily": {
"command": "npx",
"args": [
"-y",
"tavily-mcp@latest"
],
"env": {
"TAVILY_API_KEY": "YOUR-API-KEY-HERE"
}
}
}
}
Replace YOUR-API-KEY-HERE with your Tavily API key from the dashboard, then hit Save.
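If LM Studio rejects the file on save, it’s almost always a JSON syntax slip — a stray comma or a missing brace. A quick way to check, assuming python3 is installed, is to run the snippet through Python’s built-in json.tool before pasting:

```shell
# Validate the mcp.json snippet; prints formatted JSON on success,
# or an error pointing at the offending line on failure
python3 -m json.tool <<'EOF'
{
  "mcpServers": {
    "tavily": {
      "command": "npx",
      "args": ["-y", "tavily-mcp@latest"],
      "env": { "TAVILY_API_KEY": "YOUR-API-KEY-HERE" }
    }
  }
}
EOF
```

Any other JSON validator works just as well; this one has the advantage of already being on most machines.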
Open the sidebar in LM Studio, find the Tavily MCP server we just added, and toggle it on. Then open the Tools section and confirm it’s selected. You’re done — your model can now search the web.
Open WebUI #
Go to your Admin Panel → Settings → Web Search.
Select Tavily from the dropdown, paste your API key in, and hit Save.
Fire up a new chat, click Integrations, and enable Web Search if it isn’t on automatically. That’s it.
OpenCode #
Open your OpenCode config file:
nano ~/.config/opencode/opencode.json
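If the file doesn’t exist yet, create the directory and an empty file first:

```shell
# Create the OpenCode config directory and file if they don't exist yet
mkdir -p ~/.config/opencode
touch ~/.config/opencode/opencode.json
```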
Add the mcp block into your existing config, alongside the keys you already have (the model line below is just shown for context):
"model": "lmstudio/your-model-name-here",
"mcp": {
"tavily": {
"type": "local",
"command": ["npx", "-y", "tavily-mcp@latest"],
"enabled": true,
"environment": {
"TAVILY_API_KEY": "YOUR-API-KEY-HERE"
}
}
}
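For reference, if you’re starting from an empty config, a minimal complete opencode.json along those lines would look like this (the model name is a placeholder for whatever you already use; keep any other keys you have):

```json
{
  "model": "lmstudio/your-model-name-here",
  "mcp": {
    "tavily": {
      "type": "local",
      "command": ["npx", "-y", "tavily-mcp@latest"],
      "enabled": true,
      "environment": {
        "TAVILY_API_KEY": "YOUR-API-KEY-HERE"
      }
    }
  }
}
```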
Save and restart OpenCode — you should see Tavily listed as a connected MCP server on the right.
Each tool you use needs to be configured separately. LM Studio, Open WebUI, and OpenCode all need their own setup — there’s no single place to configure this once for everything.
Closing #
Hooking your local AI up to the internet is one of those upgrades that’s completely worth the five minutes it takes. Having your models go out and fetch the latest information in real time is immensely powerful — especially if you’re doing any kind of agentic work. Config file syntax changes, packages get updated, new releases drop. Without web access your model is always working from stale data and you’ll never know it. With it, it can find the latest and greatest and build from that.