AI for Developers

The Developer's AI Mindset: What's Real, What's Hype, and How to Use It Well

Developers have a uniquely complicated relationship with AI. We build the tools other people use. We're trained to be skeptical of abstractions we don't control. And we've already been burned by a wave of "AI-powered" products that were just if-statements with a chatbot UI.

So let's be honest about what AI actually does well in a development workflow, where it quietly fails, and how to use it without making your codebase worse.

What AI Is Actually Good At in Development

AI coding tools are genuinely useful for a specific set of tasks:

High signal-to-effort tasks (use AI, save real time):

  • Boilerplate generation: CRUD endpoints, schema definitions, test scaffolding, form handlers. The stuff you've written a hundred times and could write again, but don't want to.
  • Writing tests: Unit tests, integration tests, fixture generation. AI is remarkably good at producing high-coverage tests for code it can see.
  • Writing documentation: Docstrings, README sections, API documentation from source code. Tedious for humans; fast and consistent for AI.
  • Explaining unfamiliar code: Paste a function and ask what it does, why it might be slow, or what edge cases it misses. Dramatically speeds up codebase onboarding.
  • Refactoring to a pattern: "Refactor this to use the repository pattern" or "convert this class to use dependency injection." AI handles mechanical transformations well.
  • Debugging with context: Paste the error, the stack trace, and the relevant code. AI identifies the cause and suggests a fix faster than most Stack Overflow searches.
  • Language/framework translation: Porting code from one language or framework to another. Python to Go, Express to FastAPI, jQuery to React.
  • Regex and query writing: Complex SQL, regex patterns, jq queries, shell pipelines. AI generates these correctly more often than most developers.
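As an illustration of the regex case above, here is the kind of pattern an assistant typically produces on request, wrapped in a quick check. The pattern, function name, and sample input are illustrative, not from the original; the point is that generated regexes are cheap to verify before you trust them.

```python
import re

# Assistant-style pattern for semantic versions (MAJOR.MINOR.PATCH with an
# optional pre-release tag). Always test generated regexes on real inputs
# before shipping them.
SEMVER = re.compile(r"\b(\d+)\.(\d+)\.(\d+)(?:-([0-9A-Za-z.-]+))?\b")

def find_versions(text: str) -> list[str]:
    """Return every version string found in `text`."""
    return [m.group(0) for m in SEMVER.finditer(text)]

print(find_versions("Upgraded from 1.4.2 to 2.0.0-rc.1 last week."))
# → ['1.4.2', '2.0.0-rc.1']
```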

Where AI underperforms (be careful):

  • Architecture decisions: AI doesn't know your non-functional requirements, your team's skills, your infra constraints, or your company's political realities. It will give confident-sounding answers to architectural questions that may be completely wrong for your context.
  • Security-critical code: AI will produce code that looks correct but contains subtle vulnerabilities. Never use AI output for auth, cryptography, or access control without expert review.
  • Novel algorithms: If the answer isn't in the training data, AI will hallucinate one. For well-trodden problems, it's great. For genuinely new problems, it can be confidently wrong.
  • Long-horizon reasoning: AI struggles to reason correctly across many steps or with complex constraint satisfaction. It's better at local, focused tasks.
  • Knowing what it doesn't know: AI will produce plausible-looking code for APIs it doesn't have accurate training data on, including hallucinating method signatures and parameter names. Always verify against real docs.
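One cheap guard against the hallucinated-API failure above: before trusting a generated call, check in a REPL that the function exists and that its real signature has the parameters the AI used. A sketch using the standard library's `inspect` module against `json.dumps`:

```python
import inspect
import json

# A ten-second sanity check on a generated call: does the function exist,
# and are the keyword arguments real?
assert hasattr(json, "dumps")            # the function exists
sig = inspect.signature(json.dumps)
print(list(sig.parameters))              # the actual parameter names

assert "indent" in sig.parameters        # a real kwarg -- safe to use
assert "pretty" not in sig.parameters    # a plausible-sounding hallucination
```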

The Three Modes of AI-Assisted Development

Most developers think of AI assistance as one thing: autocomplete. But there are actually three distinct modes, each with different tools and use cases:

Mode 1: Copilot (Inline Autocomplete)

AI predicts and completes code as you type, inline in your editor.

Tools: GitHub Copilot, Codeium, Supermaven, Tabnine
Best for: Repetitive patterns, boilerplate, function bodies when the signature is clear
Limitation: Only sees the current file and nearby context; can't reason about the full codebase

Mode 2: Codebase Chat

AI can see your entire repository and answer questions, explain code, and generate changes with full project context.

Tools: Cursor, Continue, GitHub Copilot Workspace, Sourcegraph Cody
Best for: Understanding unfamiliar code, asking architectural questions, making changes that affect multiple files, targeted refactors
Limitation: Larger context windows get noisier; quality degrades on very large codebases without good retrieval

Mode 3: Agentic (Autonomous)

AI takes a goal, plans steps, runs commands, writes code, checks output, and iterates, with minimal human intervention per cycle.

Tools: OpenClaw, Devin, GitHub Copilot Agent, Claude Code, and similar autonomous coding agents
Best for: Large-scale tasks you can specify clearly: "Add pagination to every API endpoint," "Write tests for all uncovered functions"
Limitation: Errors can compound; requires careful task scoping and human review checkpoints
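The agentic loop described above can be sketched as code. This is a toy, not any real agent: `propose_fix` stands in for a model call and `check` stands in for running the tests, so the control flow and the human-review escape hatch are visible. All names are hypothetical.

```python
# Minimal sketch of an agentic loop: plan, act, check, iterate,
# with a hard iteration cap as the human-review checkpoint.

def check(code: str) -> bool:
    """Stand-in for 'run the tests': accept once the code handles None."""
    return "is None" in code

def propose_fix(code: str, feedback: str) -> str:
    """Stub for the model call: 'patches' the code based on feedback."""
    return code.replace(
        "def total(xs):",
        "def total(xs):\n    if xs is None: return 0",
    )

def agent_loop(code: str, max_iters: int = 3) -> str:
    for _ in range(max_iters):
        if check(code):            # green tests -> stop iterating
            return code
        code = propose_fix(code, feedback="tests failed on None input")
    raise RuntimeError("did not converge -- escalate to a human")

result = agent_loop("def total(xs):\n    return sum(xs)")
print(result)
```

The cap on `max_iters` is the important design choice: without it, compounding errors can burn unbounded time and money before anyone looks.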

The Developer AI Stack

Here's the landscape organized by what you're trying to do:

In-Editor Coding Assistance

| Tool | Mode | Best for | Cost |
| --- | --- | --- | --- |
| Codeium | Copilot | Free autocomplete in any editor | Free |
| GitHub Copilot | Copilot + Chat | Tight GitHub integration | $10/mo |
| Cursor | Codebase Chat | Full repo context, fast iterations | $20/mo |
| Continue | Codebase Chat | Open-source, self-hosted, any model | Free |
| Supermaven | Copilot | Fast autocomplete, large context | Free tier |

Testing & Quality

| Tool | What it does | Cost |
| --- | --- | --- |
| Octomind | Generate + maintain E2E tests | Paid |
| CodiumAI | Unit test generation from code | Free tier |
| GitHub Copilot | Test suggestions inline | $10/mo |

Local & Self-Hosted Models

| Tool | What it does | Cost |
| --- | --- | --- |
| Ollama | Run open-source models locally (CLI) | Free |
| LM Studio | Run local models with a GUI | Free |
| Open WebUI | ChatGPT-style UI over local models | Free |
| LocalAI | OpenAI-compatible local API server | Free |
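To make the "local API server" idea concrete, here's a sketch of calling a local Ollama instance over HTTP, assuming Ollama's `/api/generate` endpoint on its default port (11434) and a model name like `llama3` that you've already pulled. Standard library only; the network call is left as a commented usage note so nothing runs without a server.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_request(model: str, prompt: str) -> dict:
    """Payload for Ollama's /api/generate endpoint (stream off = one JSON reply)."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """POST the prompt to the local server and return the generated text."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama server with the model pulled):
#   shell, one time:  ollama pull llama3
#   then:             print(ask("llama3", "Explain a B-tree in one sentence."))
```

Because nothing leaves your machine, this is the pattern to reach for when the privacy question in the evaluation checklist below rules out cloud APIs.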

Building AI Features

| Tool | What it does | Cost |
| --- | --- | --- |
| OpenAI API | GPT-4, embeddings, function calling | Pay-per-use |
| Anthropic API | Claude models, structured output | Pay-per-use |
| LangChain / LlamaIndex | RAG, agents, chains | Open source |
| n8n | Self-hosted workflow automation | Free / Paid |
| Make (Integromat) | Cloud workflow automation | Free tier |
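The "RAG" entry above compresses a simple idea: retrieve the most relevant snippet, then stuff it into the prompt. Frameworks like LangChain and LlamaIndex do this with embeddings and a vector store; the sketch below uses plain word overlap so it stays self-contained. The docs and helper names are made up for illustration.

```python
import re

# Toy knowledge base standing in for your real docs.
DOCS = [
    "Refunds are processed within 5 business days.",
    "API keys can be rotated from the dashboard settings page.",
    "Rate limits reset at the top of every hour.",
]

def words(text: str) -> set[str]:
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, docs: list[str]) -> str:
    """Retrieval step: pick the doc sharing the most words with the question.
    Real RAG swaps this for embedding similarity over a vector store."""
    q = words(question)
    return max(docs, key=lambda d: len(q & words(d)))

def build_prompt(question: str) -> str:
    """Augmentation step: splice the retrieved context into the prompt."""
    context = retrieve(question, DOCS)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How do I rotate my API keys?"))
```

The final prompt then goes to whichever model API you're using; the retrieval quality, not the model, is usually what makes or breaks a RAG feature.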

How to Evaluate a New AI Tool Without Wasting Time

New AI developer tools launch every week. Here's a fast evaluation framework:

  1. Does it work in your actual editor/stack? Don't evaluate tools in demos. Set it up in your real environment with your real codebase and do one real task.
  2. What model is it using? Many tools are thin wrappers around GPT-4 or Claude. The model determines most of the quality ceiling.
  3. How much context can it see? The bigger the context window and the smarter the retrieval, the better it performs on complex tasks.
  4. Is it privacy-safe? Does your code leave your machine? Check the privacy policy before connecting it to a proprietary codebase.
  5. Does it make you faster in practice? Time yourself on a representative task with and without the tool. If it's not 20 to 30 percent faster after a week, it's probably not worth the context-switching cost.
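Step 5 can be made concrete with a few lines of arithmetic. The durations below are example numbers, not measurements; in practice you'd collect them with a stopwatch or `time.perf_counter()` around a representative task.

```python
# Sketch of the step-5 check: compare the same task with and without
# the tool and compute the percentage speedup.

def speedup_pct(baseline_s: float, with_tool_s: float) -> float:
    """Percent faster with the tool vs. the unassisted baseline."""
    return (baseline_s - with_tool_s) / baseline_s * 100

# Example figures: 25 min unassisted vs. 18 min with the tool.
pct = speedup_pct(25 * 60, 18 * 60)
print(f"{pct:.0f}% faster")   # clears the 20-30% bar from step 5
```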

What This Course Covers

This path takes you through the full developer AI journey:

  • Module 1: Coding assistants: Codeium, Cursor, and inline AI in your real editor
  • Module 2: Code quality: AI-assisted review, test generation, automatic documentation
  • Module 3: Prompt engineering: structured prompts, JSON extraction, developer-specific patterns
  • Module 4: Local AI: running open-source models with Ollama, RAG on your own docs
  • Module 5: Building AI features: RAG pipelines, agents, Custom GPTs with API actions, automation
  • Module 6: Production: reliability, cost, latency, safety, observability, multi-model routing

The goal isn't to use AI for everything. It's to know exactly where it gives you back time, and to have the skills to build it into the products you ship.

Let's start with the tools you can add to your editor right now.
