We’ve used AI for snippets and reviews, but the real power lies in Agentic Workflows. An agent is an AI that has access to tools — it can read your files, run your compiler, execute your tests, and use the output to fix its own mistakes. It doesn't just give you a block of code; it performs a multi-step mission until the job is done.

The Agentic Loop

An agent operates in a cycle: Observe → Plan → Act → Verify. If you ask an agent to "Fix the memory leak in the bridge," it doesn't just guess. It reads the code, runs a memory profiler (like Valgrind or Instruments), identifies the leak, applies a fix, and runs the profiler again to verify the fix works. This is the difference between a chatbot and a software engineering agent.

// Agent Log:
// 1. Reading chess_bridge.cpp... identified missing 'delete' in chess_destroy.
// 2. Running 'make test'... tests passed, but leak confirmed via valgrind.
// 3. Applying fix to line 42...
// 4. Running 'make test' with valgrind... leak resolved. Task complete.
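The fix the log describes might look something like this. It's a minimal sketch: the bridge names (chess_create, chess_destroy, ChessEngine) are hypothetical stand-ins for whatever the real bridge exposes, and the point is simply that every allocation gets a matching delete.

```cpp
// Hypothetical engine state allocated by the bridge.
struct ChessEngine {
    int search_depth = 4;
};

// Allocation handed across the bridge boundary to the caller.
ChessEngine* chess_create() {
    return new ChessEngine();
}

// The fix: the 'delete' that was missing, pairing with the 'new' above.
void chess_destroy(ChessEngine* engine) {
    delete engine;
}
```

With the pair balanced, the profiler run in step 4 comes back clean.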

The tools you'll actually use

The AI agent space moves fast, but the leading tools have converged on a few standard patterns.

What they share: tool use (read/write files, run commands), a planning loop, and the ability to keep going until the task succeeds or a budget runs out.
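That shared shape can be sketched as a budgeted retry loop. This is illustrative only (the function and parameter names are assumptions, not any tool's real API): the agent acts, verifies, and repeats until verification passes or the budget is spent.

```cpp
#include <functional>

// Sketch of the shared agent pattern: act, verify the result, and retry
// until verification passes or the iteration budget runs out.
bool run_until_done(const std::function<bool()>& act_and_verify, int budget) {
    for (int attempt = 0; attempt < budget; ++attempt) {
        if (act_and_verify()) {
            return true;   // verification passed: task complete
        }
        // Otherwise: observe the failure and try a revised action next pass.
    }
    return false;          // budget exhausted without a passing verification
}
```

The budget matters: it's what keeps an agent from looping forever on a task it can't solve.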

Automated Refactoring

Agents are perfect for large-scale, repetitive changes that are too complex for a simple search-and-replace. If you want to "Convert all raw pointers in the engine to std::unique_ptr," an agent can go through every file, update the types, fix the function signatures, and ensure the code still compiles at every step.
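A single instance of that refactor might look like this. The names are hypothetical, but the shape is the standard one: an owning raw pointer becomes a std::unique_ptr, and the manual delete disappears.

```cpp
#include <memory>
#include <string>

// Before the refactor (easy to leak on an early return or exception):
//   std::string* load_name() { return new std::string("engine"); }

// After the refactor: ownership is explicit and exception-safe.
std::unique_ptr<std::string> load_name() {
    return std::make_unique<std::string>("engine");
}

std::size_t name_length() {
    auto name = load_name();   // freed automatically when 'name' goes out of scope
    return name->size();
}
```

Multiply this by hundreds of call sites, each verified by a compile, and you see why it's agent territory rather than search-and-replace territory.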

An agent is more than an AI; it is an AI with a toolbox and a mission. It closes the loop between thinking and doing.

The Human-in-the-Loop

The most effective agentic workflows are collaborative. You set the high-level strategy (the "Plan") and the agent handles the low-level execution (the "Act"). You act as the final validator, checking the agent's work and providing course corrections when it goes down a rabbit hole. This "Centaur" approach — human strategy plus AI execution — is how modern apps are built at 10x speed.

// User to Agent:
// "I want to add a new game: Sudoku. Research the rules, 
// design the C++ engine header, and show me the plan before 
// you start implementing."
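One plausible slice of the header the agent might come back with, before any implementation work begins. This is a sketch under assumptions (the class name, method, and grid representation are all invented for illustration), not the course's actual design:

```cpp
#include <array>

// Hypothetical first draft of a Sudoku engine interface.
class SudokuEngine {
public:
    static constexpr int kSize = 9;
    using Grid = std::array<std::array<int, kSize>, kSize>;  // 0 = empty cell

    explicit SudokuEngine(const Grid& grid) : grid_(grid) {}

    // True if placing 'value' at (row, col) violates no Sudoku rule:
    // the value must not already appear in the row, column, or 3x3 box.
    bool is_valid_placement(int row, int col, int value) const {
        for (int i = 0; i < kSize; ++i) {
            if (grid_[row][i] == value || grid_[i][col] == value) return false;
        }
        const int box_row = (row / 3) * 3;
        const int box_col = (col / 3) * 3;
        for (int r = box_row; r < box_row + 3; ++r) {
            for (int c = box_col; c < box_col + 3; ++c) {
                if (grid_[r][c] == value) return false;
            }
        }
        return true;
    }

private:
    Grid grid_;
};
```

Reviewing a draft like this before implementation starts is exactly the "show me the plan" checkpoint the prompt asks for.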

Try it yourself

Pick a small, verifiable task in your own project, such as a failing test, a suspected leak, or a repetitive refactor. Hand it to an agent with a clear success criterion, then review the diff before you accept it.

What's next

We've mastered the development lifecycle with AI. Now, we need to get our apps into the hands of users. Next week, we'll cover Deploying: how to package your Mac and Windows apps, sign them for security, and distribute them to the world.

Week 51 is Deploying.

Quick check

1. How does an 'agent' differ from a chat assistant?
  A. It costs more
  B. It has tools — can read files, run builds, execute tests, and iterate until the task succeeds
  C. It uses a bigger model

Answer: B. Tool use is the inflection point. Once an AI can run your tests, it can fix bugs end-to-end without you in the loop.

2. What is the typical agent loop?
  A. Type, send, read, repeat
  B. Observe → Plan → Act → Verify
  C. Compile and crash

Answer: B. Each iteration ends with verification — tests passing, build green, output correct — before the agent stops or moves on.

3. For a non-trivial agentic task, the recommended pattern is…
  A. Let the agent run unsupervised for hours
  B. Set the high-level plan; let the agent handle execution; review the result and course-correct
  C. Use only a single prompt

Answer: B. The 'Centaur' workflow pairs human strategy with AI execution, the best of both worlds.