Build an n8n Workflow with Local AI (Ollama)
Run an AI-powered workflow entirely on your machine. No data leaves your server.
Prerequisites
- Ollama installed (ollama run llama3.2)
- n8n running (Docker or npm)
Step 1: Create a workflow
In n8n, add a trigger node (Webhook, Schedule, or Manual). Then add an Ollama node and configure it: base URL http://host.docker.internal:11434 if n8n runs in Docker (or http://localhost:11434 otherwise), model llama3.2.
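Under the hood, the Ollama node talks to Ollama's REST API. A minimal Python sketch of the equivalent request (the function name and constants here are illustrative, not part of n8n):

```python
import json

OLLAMA_URL = "http://localhost:11434"  # host.docker.internal from inside Docker

def build_generate_request(model: str, prompt: str, stream: bool = False):
    """Return the endpoint and JSON body for a single non-streaming completion."""
    body = {"model": model, "prompt": prompt, "stream": stream}
    return f"{OLLAMA_URL}/api/generate", json.dumps(body).encode("utf-8")

url, body = build_generate_request("llama3.2", "Say hello")
# To actually send it (requires a running Ollama server):
# import urllib.request
# req = urllib.request.Request(url, data=body,
#                              headers={"Content-Type": "application/json"})
# print(json.loads(urllib.request.urlopen(req).read())["response"])
```

This is the same traffic the node generates for you; nothing leaves localhost.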
Step 2: Design the prompt
In the Ollama node, set the prompt. Use expressions to inject data: {{ $json.input }} or {{ $node["Webhook"].json.body }}. Add system instructions if the node supports it.
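n8n resolves expressions like {{ $json.input }} against the incoming item at runtime. An illustrative Python analogue of that substitution (render_prompt is hypothetical, not an n8n API):

```python
# Mimics how an n8n expression such as {{ $json.input }} pulls a field
# from the incoming item's JSON into the prompt text.

def render_prompt(template: str, item: dict) -> str:
    """Substitute {{ $json.<field> }} placeholders with values from the item."""
    out = template
    for key, value in item.items():
        out = out.replace("{{ $json.%s }}" % key, str(value))
    return out

prompt = render_prompt(
    "Classify the sentiment of this ticket: {{ $json.input }}",
    {"input": "The app crashes on login."},
)
```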
Step 3: Add logic
Use an IF node to branch: e.g., if sentiment is positive, route to one action; if negative, route to another. Use a Switch node for multiple outcomes.
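The branching logic can be sketched as a lookup with a fallback output, which is effectively what a Switch node does (the branch names below are placeholders for your downstream nodes):

```python
# Sketch of IF/Switch routing on the model's sentiment label.

def route(sentiment: str) -> str:
    """Return the branch a Switch node would take for a sentiment value."""
    branches = {
        "positive": "thank-you-email",
        "negative": "escalate-to-support",
    }
    return branches.get(sentiment.lower(), "log-for-review")  # fallback branch
```

A fallback branch matters with LLM output: the model may return a label outside your expected set.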
Step 4: Handle output
Add nodes to process the AI response: save to DB, send email, post to Slack, or trigger another workflow. Map $json.response or the relevant output field.
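With non-streaming requests, Ollama's /api/generate reply carries the generated text in a "response" field. A small sketch of pulling that field out before mapping it downstream (extract_reply is illustrative):

```python
import json

def extract_reply(raw: bytes) -> str:
    """Parse an Ollama generate reply and return the trimmed response text."""
    data = json.loads(raw)
    return data.get("response", "").strip()

label = extract_reply(b'{"model": "llama3.2", "response": " urgent \\n"}')
```

Trimming whitespace here avoids surprises when the label feeds an IF node comparison.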
Step 5: Error handling
Add an Error Trigger workflow. On failure, log to a sheet, send an alert, or retry. This prevents silent failures.
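Local models occasionally time out under load, so a retry branch is worth having. A sketch of retry-with-exponential-backoff logic (the helper and its defaults are illustrative, not n8n built-ins):

```python
import time

def with_retries(call, attempts: int = 3, delay: float = 1.0):
    """Run call(); on failure wait and retry, re-raising after the last try."""
    for i in range(attempts):
        try:
            return call()
        except Exception:
            if i == attempts - 1:
                raise  # exhausted: let the Error Trigger path handle it
            time.sleep(delay * (2 ** i))  # exponential backoff
```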
Example: Support ticket classifier
Webhook receives ticket → Ollama classifies (urgent/normal/low) → IF urgent → Slack alert. Else → add to sheet. All local, no API keys.
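The whole pipeline can be sketched in a few lines, with the Ollama call stubbed out so the control flow stays visible (classify_stub stands in for the model; branch names are placeholders):

```python
def classify_stub(ticket: str) -> str:
    """Stand-in for the Ollama node; a real run returns urgent/normal/low."""
    return "urgent" if "down" in ticket.lower() else "normal"

def handle_ticket(ticket: str) -> str:
    """Webhook payload in, downstream action out."""
    priority = classify_stub(ticket)
    if priority == "urgent":
        return "slack-alert"       # IF node: true branch
    return "append-to-sheet"       # IF node: false branch
```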