LM Studio: Run Local AI with a Graphical Interface

LM Studio is a desktop app for running LLMs locally. No terminal required—everything is point-and-click.

Download and install

Go to lmstudio.ai and download the installer for your OS (macOS, Windows, or Linux). Install and launch.

Step 1: Discover and download models

Open the search tab. Browse models by name, size, or capability, then click to download; LM Studio handles the rest. With 16 GB of RAM, start with a small model in the 7B range or under (e.g., Llama 3.2 3B or Mistral 7B).
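The RAM guidance above follows from simple arithmetic: a model's footprint is roughly its parameter count times the bits per weight, plus runtime overhead. A rough illustrative estimate (the 20% overhead factor is an assumption, not an LM Studio figure):

```python
def approx_model_ram_gb(params_billion: float, bits_per_weight: int,
                        overhead: float = 1.2) -> float:
    """Rough RAM estimate: weight storage plus ~20% for KV cache and runtime.

    The overhead factor is an illustrative assumption; real usage varies
    with context length and quantization format.
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 7B model at 4-bit quantization (common GGUF Q4 variants):
print(round(approx_model_ram_gb(7, 4), 1))  # → 4.2
```

By this estimate a 4-bit 7B model needs roughly 4 GB, which fits comfortably on a 16 GB machine alongside the OS and other apps; a 16-bit 70B model would need well over 100 GB, which is why quantized downloads dominate local use.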

Step 2: Load and chat

Once the download finishes, select the model and click "Load." Switch to the chat tab, type a prompt, and the response streams in. You can adjust temperature, context length, and other sampling parameters in the settings panel.

Step 3: Use the local server

LM Studio can also run a local OpenAI-compatible server. Enable it in the server tab, and other apps (Open WebUI, Continue, your own scripts) can connect to http://localhost:1234 as if it were the OpenAI API.
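Because the server speaks the OpenAI chat-completions format, any HTTP client works. A minimal sketch using only the Python standard library, assuming the server is running on the default port 1234 (the model name is a placeholder; LM Studio uses whichever model you have loaded):

```python
import json
import urllib.request

# Request body in the OpenAI chat-completions format.
# "local-model" is a placeholder name; LM Studio serves the loaded model.
payload = {
    "model": "local-model",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "temperature": 0.7,
}

def chat(payload: dict, base_url: str = "http://localhost:1234/v1") -> str:
    """POST a chat request to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With the server enabled, `print(chat(payload))` prints the model's reply. Point an existing OpenAI client library at the same base URL and it works unchanged, which is the main appeal of the compatible endpoint.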

When to use LM Studio vs Ollama

  • LM Studio – Prefer a GUI, want to explore many models, like visual model management
  • Ollama – Prefer CLI, automation, or need the simplest setup
