
Vibe Coding, A Professional Approach

  • Writer: Nikody Keating
  • 18 min read


How to move fast with AI without losing control of your codebase


You've probably felt it: the rush of describing what you want to a coding AI, watching it generate working code in seconds, and hitting the ground running. It's addictive. But here's the problem - speed without structure is chaos. This guide is for developers who want that speed, but not at the cost of maintainability. It's about being intentional with AI as your coding partner, not just letting it loose in your codebase.


1. What Is Vibe Coding - and Why Does It Matter?

"Vibe coding" is what happens when you work fluidly with an AI coding assistant. You describe what you want in plain English, and functional code appears on your screen in seconds. It feels less like programming and more like dictation - except the code actually works.


For many developers, this has been transformative. Boilerplate that used to kill an afternoon? Done in a minute. Unfamiliar API? You can explore it in real time. Prototypes that needed a full sprint can be spun up in an afternoon. It genuinely changes how fast you can move.


But there's a catch. The same speed that makes vibe coding exhilarating can make it dangerous. You're moving so fast that it's easy to accept code you don't fully understand, pile up architectural decisions you never consciously made, and wake up weeks later with a codebase that feels like someone else wrote it - even though you technically did.


Real talk: I've been there. You keep generating, it keeps working, and you tell yourself you'll review it later. Later never comes, and then you're staring at a 300-line component that uses three different state management patterns and you have no idea why.


Here's the thing: this guide isn't about pumping the brakes. It's about moving fast in a way that doesn't leave a trail of chaos. It's about being a thoughtful collaborator with your AI tool, not a passive passenger watching it generate your codebase.


A Real Example

Imagine you ask your AI: "Build me a React component for user settings." You're not being specific - you're in a hurry, and the AI seems to understand what you want. Two minutes later, you have a working component. It has local state. It handles form inputs. It looks great. You accept it and move on.


Except - your app everywhere else uses Redux for state management. Your new settings component uses React hooks. Somewhere down the line, you need to sync that state with the server, and now you're managing data in two places. The component works, but it doesn't belong in your codebase.


2. The Problem with Unchecked AI Development

AI coding tools are incredible at one thing: pattern matching. They've been trained on millions of lines of code, and they're extremely good at producing plausible solutions to well-scoped problems. What they don't have is your context.


They don't know your product's goals. They don't know your team's conventions. They don't know the broader system your code lives in. They're excellent implementers, but they're not architects.


Left unchecked, AI development drifts toward local optima. Each piece of code looks reasonable on its own. But without someone holding the bigger picture, the pieces accumulate into something sprawling, inconsistent, and hard to change. Data structures get duplicated. Naming conventions diverge. Business logic leaks into layers where it doesn't belong.


The Core Risk

AI generates code in response to your immediate prompt. Without deliberate oversight, the accumulated decisions of a hundred AI-generated snippets can produce a codebase that works fine until it doesn't - and then it's a nightmare to debug.


A Concrete Example

Here's a function an AI might generate when you ask for "error handling in an API call":


async function fetchUserData(userId) {
  try {
    const response = await fetch(`/api/users/${userId}`);
    const data = await response.json();
    return data;
  } catch (error) {
    console.log('Error fetching user:', error);
  }
}

Looks clean, right? But look closer:


  • If the fetch fails, it catches the error and... does nothing. It silently swallows it.

  • If the response status is 404 or 500, the code never checks. It just tries to parse the response body as JSON.

  • The caller has no way to know whether the function succeeded or failed - it returns undefined either way.

The function works when things go right. But the error handling is theater - it's there, but it doesn't actually handle anything. This is exactly the kind of thing that looks fine in isolation but causes problems across your codebase.
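
For comparison, here's one way that function could be written so failures actually surface - a quick sketch, not the only right answer:

async function fetchUserData(userId) {
  const response = await fetch(`/api/users/${userId}`);
  if (!response.ok) {
    // Surface HTTP failures instead of trying to parse an error page as JSON
    throw new Error(`Failed to fetch user ${userId}: ${response.status}`);
  }
  return response.json();
}

Now the caller decides what to do with a failure, and an HTTP error never gets silently parsed as data.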


"AI is never going to replace humans completely. There's always the need for someone who sees the big picture and is creative in their approach to solving the big picture problems."

That quote came from a Hollywood executive, but it applies perfectly to software engineering. AI can generate the pieces, but someone still needs to see the whole - the system, the product, the user experience. That's you.


There's another problem too: the psychological pull toward acceptance. When an AI generates code confidently and quickly, there's a bias toward trusting it. Reviewing something you didn't write requires effort, especially when it compiles and doesn't throw errors. It's easy to rubber-stamp what the AI produces and move on.


3. The Garden Principle

Think of your codebase as a garden. A garden has potential. Given the right conditions, it produces something beautiful and useful. But a garden thrown together without vision - seeds scattered randomly, no thought given to spacing, sunlight, or what belongs next to what - doesn't become a garden at all. It becomes a mess. Plants compete for the same soil. Things grow where they shouldn't. What could have been productive becomes tangled and unmanageable, and untangling it takes far more effort than planning it would have.


The Garden Metaphor: AI is like a powerful seed catalog - it can generate code quickly and creatively. But you're the gardener. You decide what to plant, where, and when. You shape the growth. You prune what doesn't belong. And you keep the whole garden coherent season after season.

And here's the part most people skip: the best gardens don't start with planting. They start with planning. A great gardener surveys the land first. Where does the sun hit? What's the soil like? What grows well together, and what competes for the same resources? The planting comes after all of that is figured out.


Software works the same way. Before you start prompting an AI to generate code, you need to plan the system. And this is where a concept I'll call spec-driven AI development comes in.


Spec-Driven AI Development

The idea is simple: don't start generating code until you have a spec. That spec doesn't need to be a 40-page document - but it does need to exist. It's the blueprint for your garden.


What goes into a spec? The things that matter before a single line of code gets written. Well-structured user stories that describe what the system should do and why. An architectural approach - whether that's a software architecture for a single application or a solution architecture spanning multiple services. Your core data structures and how they relate to each other. The technology stack you're building on and the rationale behind those choices. API contracts between components. Non-functional requirements like performance targets or security constraints.
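
For a sense of scale, a lightweight spec can be a single page of notes. Something like this (the specifics here are invented purely for illustration):

## User Stories
- As a customer, I can update my notification preferences from the settings page.

## Architecture
- React front end, Redux for shared state; all server calls go through /services/api.js

## Data Structures
- UserSettings: { userId, notifications: { email, sms }, theme }

## API Contracts
- GET /api/users/:id/settings -> UserSettings
- PUT /api/users/:id/settings -> 204 on success

## Non-Functional
- Settings page loads in under 1 second on a mid-range device

That's enough to keep every prompt, and every generated component, pointed at the same target.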


Here's the important part: AI can absolutely be involved in this process. You can use it to brainstorm architectural options, draft user stories, sketch out data models, or evaluate trade-offs between technology choices. It's a great thinking partner at the spec level. But the engineer is responsible for this process. You're the one who decides which architectural approach actually fits your problem. You're the one who validates the data structures against your real business domain. You're the one who commits to the technology stack your team will live with for the next two years.


When you hand an AI a well-defined spec, the quality of its output jumps dramatically. Instead of guessing at your data shapes, it works with the ones you've defined. Instead of inventing an architecture, it builds within yours. The spec becomes the guardrails that keep AI-generated code coherent across the entire project.


Vibe coding without that upfront spec work is like scattering seeds everywhere and hoping for the best. You might get growth. You'll also get sprawl - and eventually the sprawl will strangle the garden.


Responsible vibe coding means being a deliberate gardener. Survey the land. Write the spec. Plan the layout. Choose what goes where. Then move quickly when conditions are right - but never abandon the care and intentionality that keeps a garden thriving long-term.


The Cost Argument: Specs Save Tokens

There's a practical dimension to this that's easy to overlook: token usage. Every time you prompt an AI, you're spending tokens - and those tokens cost money. When you don't have a spec, you spend tokens on back-and-forth clarifications, on regenerating code that didn't fit your architecture, on fixing data structures that almost match but don't quite. You end up with three slightly different User objects across your codebase because each prompt invented its own, and now you're burning more tokens to reconcile them.
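
Here's the kind of drift I mean - three invented shapes for the same concept, each generated in isolation:

// Illustrative only: none of these come from a real codebase.
const userFromAuth    = { id: 42, email: 'a@b.com', name: 'Ada' };
const userFromProfile = { userId: '42', emailAddress: 'a@b.com', fullName: 'Ada' };
const userFromOrders  = { user_id: 42, email: 'a@b.com', displayName: 'Ada' };
// Reconciling these later costs more tokens (and time) than defining one shape in the spec.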


A well-defined spec with clear data structures, communication patterns, and architectural decisions defined upfront changes the economics entirely. Your prompts are more precise, so the AI gets it right the first time more often. You're not paying for the AI to guess at your schema - you're telling it the schema. You're not spending cycles debugging mismatches between redundant data structures that were generated in isolation - because those structures were defined once, in the spec, before any code was written.


Fewer mistakes means fewer correction cycles. Fewer correction cycles means lower cost. And lower cost means you can do more with the same budget. The teams that invest in spec-driven development don't just build better software - they build more of it, at lower cost, because they're not wasting tokens on rework.


4. A Clear Division of Responsibilities

Effective AI-assisted development requires being explicit about who owns what kinds of decisions. When these lines blur, teams run into trouble.

The Engineer Owns

  • The overall product vision and goals

  • System architecture and component design

  • Data structures and their relationships

  • Naming conventions and team standards

  • Code review and final approval

  • Quality gates and non-negotiable constraints

  • The decision of when to accept, revise, or discard AI output

  • Project folder layout and file organization

  • Long-term maintainability of the codebase

The AI Assists With

  • Implementing well-scoped, clearly defined tasks

  • Generating boilerplate and repetitive structures

  • Suggesting approaches within a defined scope

  • Explaining existing code or patterns

  • Writing tests for described behaviors

  • Drafting documentation

  • Rapid prototyping of ideas

  • Catching common implementation pitfalls


This isn't a limitation - it's the strength of the arrangement. When you maintain ownership of architecture and design while the AI handles implementation, both parties are playing to their strengths. You bring judgment, context, and accountability. The AI brings speed, breadth, and tireless availability.


"You're not a reviewer of AI's work. You're the owner of the codebase who chooses to use AI as an implementation assistant."

That distinction matters more than you'd think. An engineer who sees themselves as reviewing AI output is in a weaker position than one who sees themselves as directing the work and using AI to accelerate it.


The Engineer Sees What's Coming

There's another reason the human holds the larger vision: future requirements. The engineer knows what's on the roadmap, what stakeholders have been hinting at, what's likely to change in six months. The AI only knows what's in the prompt right now. It implements based on a limited context - the immediate purpose and scope of what's being built. And that gap between "what works today" and "what we'll need tomorrow" is where a lot of architectural mistakes get made.


Here's a concrete example. Say you need a popup that asks the user to answer a few survey questions. You prompt the AI, and it builds a clean, functional component with the questions hardcoded right into the markup. It works. It looks great. Ship it?


Well - it depends on what you know. If you know those survey questions will differ depending on what page the user is on, or what topic they're browsing, or what stage of the customer journey they're in, then hardcoding is a problem. The architecture needs to support dynamic questions pulled from a configuration or a backend service. The way responses are captured and reported needs to account for varying question sets. That's context the AI simply doesn't have unless you give it - and it's the kind of context that shapes whether you build a simple component or a flexible system.
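
To make that concrete, here's a rough sketch of the two directions - the component names and props are illustrative, not prescriptive:

// What the AI might produce from the prompt alone: questions baked into the markup.
function SurveyPopup() {
  return (
    <form>
      <label>How easy was checkout? <input name="q1" /></label>
      <label>Would you recommend us? <input name="q2" /></label>
      <button type="submit">Send</button>
    </form>
  );
}

// What the roadmap might actually call for: questions driven by configuration,
// so different pages or journey stages can supply their own sets.
function ConfigurableSurveyPopup({ questions, onSubmit }) {
  return (
    <form onSubmit={onSubmit}>
      {questions.map((q) => (
        <label key={q.id}>
          {q.text} <input name={q.id} />
        </label>
      ))}
      <button type="submit">Send</button>
    </form>
  );
}

Same feature, very different architecture - and only the engineer knows which one the roadmap actually needs.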


But here's the flip side: the AI might also over-engineer. Given a vague prompt, it could assume you need a fully dynamic survey engine with a backend API, admin panel, and analytics pipeline - when you know that three hardcoded questions are all this feature will ever need. The engineer who knows the roadmap can say "this is good enough" with confidence, and save the team from building complexity that will never be used.


This is the core of why the human stays in the driver's seat. The AI is great at implementing a solution to the problem as stated. But only the engineer knows whether the problem as stated is the whole picture - or just the first chapter.


5. The Core Practice: Small Units of Work

The single most effective practice for responsible vibe coding is working in small, reviewable units.


The temptation is to describe large tasks and let the AI run. "Build me a user authentication module." "Implement the full checkout flow." This feels efficient and sometimes produces impressive results. But large chunks of AI-generated code are hard to review, hard to understand, and hard to fix when something goes wrong.


Small units of work mean prompting the AI for one focused thing at a time: a single function, a clearly scoped component, a specific business rule. Then you review, understand, and approve that unit before moving to the next.


Good Prompts vs. Bad Prompts

Let's be concrete about this. Here are examples of bad and good prompts for the same task:


Too Broad (Bad):

"Build the authentication system"

Well-Scoped (Good):

Write a validateToken function that:
- Takes a JWT string as input
- Verifies the signature using RS256 algorithm
- Returns the decoded payload on success
- Throws InvalidTokenError if signature is invalid
- Throws ExpiredTokenError if token has expired

The second prompt tells the AI exactly what you want. It defines the boundaries. It specifies success and failure cases. The AI can't go wandering into adjacent concerns. You'll get a focused piece of code you can understand and review in minutes.
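
For illustration, here's roughly what a focused response to that prompt might look like, assuming the jsonwebtoken library and custom error classes (the specifics are mine, not gospel):

const jwt = require('jsonwebtoken');

class InvalidTokenError extends Error {}
class ExpiredTokenError extends Error {}

function validateToken(token, publicKey) {
  try {
    // Restrict verification to RS256 so tokens signed with a weaker algorithm are rejected
    return jwt.verify(token, publicKey, { algorithms: ['RS256'] });
  } catch (err) {
    if (err.name === 'TokenExpiredError') {
      throw new ExpiredTokenError('Token has expired');
    }
    throw new InvalidTokenError('Token signature is invalid');
  }
}

A unit like this takes minutes to read, review, and either accept or send back.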


What a "Small Unit" Looks Like in Practice

  1. Define the unit clearly: Before you prompt, know exactly what you're asking for. Specify inputs, outputs, constraints, and what done looks like. Precision in prompts pays dividends.

  2. Generate and read - don't just run: When the AI responds, read the code before executing it. Understand what it does. If you can't explain it in plain English, dig deeper or ask the AI to explain its approach.

  3. Review against your design: Does the code fit your data structures? Does it respect your architectural boundaries? Does it match your team's conventions? Working code isn't the same as code that belongs in your codebase.

  4. Accept, revise, or reject: Use your judgment. Accept what's right. Ask for revisions when it's close. Write it yourself when the AI's approach doesn't fit - and don't feel bad about it. The goal is correct code, not AI-generated code.

  5. Commit small and commit often: Once you're satisfied with a unit, commit it. This is more important than it sounds. Large commits are harder for humans to review - when a diff is 500 lines across 20 files, your eyes glaze over and issues slip through. Small commits keep each change digestible and reviewable.


    But there's another reason small commits matter with AI-assisted development: protection against the AI going off the rails. AI tools can sometimes decide that something is unimportant and just... remove it. You ask it to refactor a module, and it quietly deletes a set of server-side utility functions it deemed unnecessary. Or it reorganizes your file structure and drops a helper that other parts of the system depend on. With large, infrequent commits, that kind of damage can be buried in a massive diff and go unnoticed until something breaks in production. With small commits, you catch it immediately - and you have a clean rollback point right there. Frequent small commits are your safety net when working with an AI that's confident but doesn't always share your understanding of what matters.


This rhythm - define, generate, review, decide, commit - becomes automatic pretty quickly. And it makes a real difference. Code reviewed in small batches is code you actually understand. Code you understand is code you can maintain.


6. Code Review as a First-Class Discipline

In teams using AI tooling heavily, code review becomes the critical quality checkpoint. It's where a human being takes ownership of what enters the codebase. Code review isn't a speed bump - it's a core engineering practice. This matters because AI-generated code that looks clean on the surface can hide subtle issues. You need to know what to look for.


What to Watch For in AI-Assisted Code Reviews

Does it fit the architecture? AI doesn't always respect the layering and separation of concerns your team has established. Watch for business logic creeping into data access layers, or UI components reaching into services they shouldn't touch.


Are the data structures consistent? AI sometimes invents slightly different shapes for data that should match your existing models. Check for inconsistent property names, mismatched nullability assumptions, or arrays where your system expects a map.


Is the error handling real? This is crucial. Here's bad error handling that AI might generate:

function processPayment(amount) {
  try {
    submitToPaymentAPI(amount);
    return true;
  } catch (error) {
    logger.error('Payment failed:', error);
    return true; // BUG: still returns true!
  }
}

The try-catch is there. Logging happens. But the function returns success either way. The error is logged, but nobody's handling the failure state. Always read the error paths carefully.
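
For contrast, here's a sketch of what an honest failure path might look like - note the added await, so a rejected promise is actually caught:

async function processPayment(amount) {
  try {
    await submitToPaymentAPI(amount); // await, so an async failure lands in the catch block
    return true;
  } catch (error) {
    logger.error('Payment failed:', error);
    return false; // the caller can now react to the failure
  }
}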


Could someone maintain this? Ask whether a teammate would understand this code six months from now. AI sometimes produces code that works but is needlessly clever or opaque. Readable code is worth enforcing.


Are there edge cases the AI missed? AI generates based on the prompt and common patterns. It doesn't know your specific business constraints. Think about edge cases your domain introduces that the AI wouldn't anticipate.


Watch for Code Bloat

One of the sneakiest problems with AI-generated code is bloat. The AI doesn't have a mental map of your entire codebase, so it tends to solve each problem in isolation. The result? Redundant helper functions. You already have a formatCurrency() utility in your shared library, but the AI doesn't know that - so it writes a new one inline. Then next week it writes another slightly different version somewhere else. Before long you've got three functions that do almost the same thing, scattered across your codebase, each with its own quirks and edge case handling.


The same thing happens with tools and utilities that should be centralized. Date parsing, string sanitization, API response normalization - these are the kinds of things that belong in a shared module, imported everywhere. But AI writes them fresh each time because each prompt is a clean slate. Your codebase inflates with duplicated logic that should have been written once and reused.


Then there's the abstraction problem. AI-generated code often takes the brute-force path: explicit, repetitive, verbose. Sometimes that's fine. But when you've got the same pattern repeated across fifteen components - each one manually wiring up the same fetch-parse-validate-render cycle - a more elegant approach would be to abstract that pattern into a reusable hook or a base class. The AI won't suggest that refactor on its own. It doesn't feel the weight of repetition the way a human does scrolling through the codebase.


And without that abstraction, complexity skyrockets. A component that should be 40 lines balloons to 200 because it's handling fetching, parsing, error states, loading states, validation, and rendering all in one place. The more complex a function gets, the more likely it is to contain bugs - and those bugs become extremely difficult to track down. When a function is doing seven things at once, isolating which of those seven things is broken turns into a painful, time-consuming exercise. Abstraction isn't just about keeping code DRY - it's about keeping individual pieces simple enough that you can actually reason about them when something goes wrong.
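
As an example of the kind of abstraction I mean, here's a rough sketch of a reusable hook that could replace that repeated fetch-parse-validate cycle - the name and shape are illustrative:

import { useEffect, useState } from 'react';

function useFetchResource(url, validate) {
  const [state, setState] = useState({ data: null, error: null, loading: true });

  useEffect(() => {
    let cancelled = false;
    fetch(url)
      .then((res) => {
        if (!res.ok) throw new Error(`Request failed: ${res.status}`);
        return res.json();
      })
      .then((data) => {
        if (validate && !validate(data)) throw new Error('Invalid response shape');
        if (!cancelled) setState({ data, error: null, loading: false });
      })
      .catch((error) => {
        if (!cancelled) setState({ data: null, error, loading: false });
      });
    return () => { cancelled = true; }; // avoid setting state after unmount
  }, [url, validate]);

  return state;
}

Fifteen components each doing this by hand is bloat; one hook used fifteen times is a codebase you can reason about.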


During code review, actively look for these patterns. Ask: does this function already exist somewhere? Should this logic be centralized? Is there an abstraction that would make this cleaner and smaller? Keeping your codebase lean isn't just aesthetic - bloated code is harder to navigate, harder to test, and harder to change. A more elegant approach almost always wins over time.


7. Sustaining Momentum Without Losing Control

One of the real pleasures of vibe coding is momentum - that feeling of ideas becoming code fluidly and quickly. The practices here don't kill that. Done well, they channel it.


The key is getting good habits into place early, when the cost of doing things right is low. Define your data structures before generating features. Sketch the component hierarchy before building components. Agree on naming conventions before code gets written. These upfront investments pay dividends: when the AI has a clear structure to work within, its output is better and needs less revision.


The Gardener's Insight: A garden is hardest to tend after it's neglected for months. The effort you invest in structure early - good soil, clear rows, intentional planting - is what gives you freedom to move fast later. Restoring an overgrown garden is far costlier than keeping it tended.

When you notice the codebase drifting - duplication appearing, structures diverging, complexity spiking - that's your signal to pause and tend the garden. A short refactoring session when problems are small is far less painful than a multi-week cleanup later. Treat technical debt like weeds: pull them when they're small.


Warning: The AI Troubleshooting Death Spiral

This is one of the most dangerous patterns in AI-assisted development, and it deserves its own callout.


Here's how it happens: something breaks. You ask the AI to fix it. The AI tries a variation. It doesn't work. You ask again. The AI tries another variation - maybe slightly different, maybe not. You go back and forth, and at some point the AI settles on an approach and starts making increasingly minor tweaks to the same code, none of which meaningfully change the outcome. You're stuck in a loop, and the AI doesn't know how to get out of it because it doesn't have the context to understand why it's failing.


This is the AI troubleshooting death spiral. And if you don't understand the code or the underlying system well enough to recognize what's happening, you can burn hours - and a lot of tokens - going nowhere.


Here's a real-world example. Say you're using AWS CloudFormation to set up a Bedrock Knowledge Base backed by an OpenSearch Serverless index. In CloudFormation, you can define the index first and have the knowledge base depend on it - the knowledge base waits for the index to be created, then provisions itself. Straightforward, right?


Except it fails with an access denied error. You ask the AI to fix it. It adjusts IAM policies. Still fails. It restructures the resource dependencies. Still fails. It tries different permission combinations, different resource orderings, different policy documents. Each attempt looks plausible. None of them work. The AI starts recycling the same ideas with minor variations, confident each time that this should fix it.


The actual problem? Eventual consistency. OpenSearch Serverless needs a few seconds after the index is created before the permissions are fully propagated. The fix is a simple wait - a short delay between the index creation and the knowledge base provisioning. But the AI doesn't know that because it doesn't understand how AWS eventual consistency works in practice. It's pattern-matching on IAM policies and resource dependencies, not reasoning about distributed systems behavior.


This is why understanding your code and your platform matters. If you know how AWS works - the kinds of timing issues, consistency models, and service quirks that come with it - you can recognize the death spiral early and break out of it. You can tell the AI "stop changing the permissions, add a wait here" and move on. But if you're relying on the AI to figure it out for you, and you don't understand the underlying system well enough to steer, you'll spin in circles indefinitely.


The death spiral is the clearest argument for why vibe coding still requires engineering knowledge. The AI is a powerful tool, but when it hits a wall it doesn't understand, it can't tell you it's stuck. It just keeps trying. You're the one who needs to recognize the pattern, diagnose the real problem, and point the AI in the right direction.


8. Practical Tips for Everyday Vibe Coding


Start every session with context

Before generating new code, orient yourself in the existing codebase. What are the relevant data structures? What does the component you're extending currently do? AI works better when your prompts include context - and you'll write better prompts when you've reminded yourself what you're building.


Describe the "what," then check the "how"

When you get generated code, ask: does the "how" match my expectations? Sometimes the AI reaches a correct destination via a route you wouldn't have taken, and that's fine. Other times it takes a shortcut that creates problems downstream. The review step is where you distinguish the two.


Use the AI to explain itself

If generated code isn't immediately clear, ask the AI to explain its approach. This is one of the most underused techniques in AI-assisted development. Understanding the reasoning behind generated code helps you evaluate it properly - and often surfaces assumptions or trade-offs you'd want to know about.


Keep a "design decisions" document

As you make architectural choices, write them down briefly in a living document. This serves as context for future AI prompts and as institutional memory for your team. It's also a useful reminder of your own intentions when you return to a part of the codebase weeks later.


Here's what a few lines might look like:

## Data Structures
- User state lives in Redux store, shape: { id, email, profile }
- Validation errors stored locally in component, cleared on success

## Architecture Rules
- All API calls go through /services/api.js
- Components in /pages are route handlers, delegate logic to /components
- Business logic never in UI components

## Naming
- Async functions prefixed with 'fetch' or 'submit'
- Boolean properties prefixed with 'is' or 'has'

A simple reference like this prevents you from generating conflicting code and helps your AI prompts be more effective.


Treat "it works" as a starting point, not the finish line

The bar for accepting AI-generated code shouldn't be "does it produce the right output." It should be "is this the right code for this codebase." Functional code that doesn't belong is still a problem. Hold the bar.


Know when to step away from the AI

Some things are better written by hand: core business logic that needs to express subtle domain understanding, architectural seams that define how your system fits together, anything you've found the AI consistently misunderstands. The AI is a tool, not a mandate. Use it where it helps.


9. Conclusion: Tend Your Garden

Vibe coding, at its best, is one of the most exciting developments in software engineering. The ability to translate intent into implementation at speed - to move from idea to working code in a fraction of the time it once took - is genuinely powerful. But power requires judgment. The engineer who uses AI tools responsibly isn't the one who moves slowest. It's the one who moves fastest while keeping a clear picture of what they're building and why. They understand the architecture. They own the data structures. They review the code. They tend the garden.


The division of responsibilities in this guide isn't about constraining AI. It's about ensuring the human in the loop brings what only a human can: vision, context, taste, and accountability.


Work in small units. Review what you accept. Keep the design coherent. Prune early and often. The garden that's tended grows richer every season. The one that isn't becomes something you have to tear out and start over.


Vibe coding is here. Use it well.


 
 
 
