Week 48 · Phase 7 — The AI Hero
Intent-Driven Development: How to use LLMs to handle the drudgery while you steer the ship.
Welcome to Phase 7. You’ve spent 47 weeks learning how every bit and byte works. Now, we’re going to give you a superpower. Vibe Coding isn’t about being lazy; it’s about using Large Language Models (LLMs) to handle the 90% of development that is repetitive boilerplate, allowing you to focus on the 10% that is unique, difficult, and creative.
Modern AI is incredibly good at writing declarative UI like SwiftUI, XAML, or CSS. Instead of manually tweaking padding and colors for hours, you can describe the "vibe" of the interface and let the AI generate the first 80%. Because you understand the underlying layout systems, you can then surgically fix the parts the AI gets wrong.
// Prompt: "Create a SwiftUI settings view with a clean, glassmorphic look.
// Include toggles for 'Sound Effects' and 'Dark Mode', and a picker for 'AI Difficulty'."
import SwiftUI

struct SettingsView: View {
    @State private var soundEnabled = true
    // ... AI generated layout ...
    var body: some View {
        VStack {
            Toggle("Sound Effects", isOn: $soundEnabled)
                .toggleStyle(SwitchToggleStyle(tint: .accentColor))
        }
        .background(.ultraThinMaterial)
    }
}
Remember writing all those create and destroy bridge functions in Week 44? An AI can generate those in seconds. It can also write unit tests for your C++ engine logic, ensuring that your core remains robust as you iterate on the UI. The key is to provide the AI with your context — share your header files and it will write perfectly matched glue code.
AI doesn't replace the need to know how to code; it replaces the drudgery of writing code you already know how to write.
Modern AI coding tools work best when given precise context. The vague prompt "make this faster" produces vague results; "this loop is hot, the inner data is contiguous, prefer SIMD if available" produces a real optimisation. Every paragraph of vocabulary you've earned in the last year — cache lines, branching, RAII, virtual dispatch, hash collisions, alpha-beta cutoffs — translates directly into tighter prompts and sharper review.
A developer who types "the AI is hallucinating" cannot recognise the moment when the AI makes a real mistake. A developer who can say "the AI is calling std::sort on a std::list, which won't even compile because list iterators are bidirectional, not random-access — it should use the member std::list::sort" is using the AI as a force multiplier, not a crutch. The course you've nearly finished is precisely the toolkit that lets you do the second one.
The secret to "Vibe Coding" is the iterative loop. You prompt, you run, you observe a bug, and you explain the bug to the AI. Because you've spent a year learning the metal, you can spot *why* the AI is failing (e.g., "You're passing a pointer but not managing its lifetime") and correct its course with technical precision.
// User feedback to AI:
// "The C# P/Invoke call is crashing with an AccessViolation.
// You're passing a string directly, but the C++ engine expects
// an allocated buffer it can write into. Use a StringBuilder."
[DllImport("engine.dll", CharSet = CharSet.Ansi)]
public static extern void get_name(IntPtr e, StringBuilder buffer, int size);
Vibe coding is fast, but it can be dangerous if you don't know what you're looking at. Next week, we'll talk about The Senior Architect mindset: how to use your deep technical knowledge to audit, review, and debug AI-generated code to ensure it's performant, secure, and maintainable.
Week 49 is The Senior Architect.
Answer: B. Boilerplate is what AI most reliably produces well. Architecture and novel algorithms still need human reasoning.
Answer: B. AI quality scales with context quality. Vague prompts produce vague code.
Answer: B. An expert with AI is faster than an expert without. A novice with AI is faster than a novice without — but slower than the expert with AI.