
This course is free. Create a free account to learn, save your progress, and earn a certificate when you complete it.

Run AI Locally

Free

Download and run LLMs on your own hardware. No API keys, no cloud. Privacy-first AI with Ollama and open models.

No payment or subscription required. Sign in to track your learning and claim your certificate when you finish.


Lessons unlock in order: complete each one to access the next.

Lessons

  1. Why Run AI Locally: The Case for Local LLMs (Tutorial)
  2. Install Ollama and Run LLMs Locally in 5 Minutes (Tutorial)
  3. Run LLMs locally (no cloud) (Task)
  4. LM Studio: Run Local AI with a Graphical Interface (Tutorial)
  5. Run Private AI Locally with LM Studio (Tutorial)
  6. Set Up Open WebUI with Ollama: Your Own ChatGPT (Tutorial)
  7. Self-hosted AI chat interface (Task)
  8. How to Choose the Right Local AI Model (Tutorial)
  9. Deploy self-hosted AI stack (Task)
  10. Deploy LocalAI with Docker for Production (Tutorial)
  11. Build Your First n8n Workflow (Self-Hosted Automation) (Tutorial)
  12. Build an n8n Workflow with Local AI (Ollama) (Tutorial)
  13. Self-hosted workflow automation (Task)
  14. Implement Multi-Model Fallback in n8n (Tutorial)
  15. Build RAG pipeline (Task)
  16. Build a Complete Private AI Workspace (Tutorial)

