What vibe coding actually is
In February 2025, Andrej Karpathy — one of the original OpenAI team members and former Tesla AI director — posted a description of a new way he'd been writing software:
"There's a new kind of coding I call 'vibe coding', where you fully give in to the vibes, embrace exponentials, and forget that the code even exists."
In practice, it means describing what you want to build in natural language, letting an AI model (Claude, GPT-4o, Gemini — take your pick) write the code, reviewing and testing it, then iterating. The "vibe" part is the acknowledgement that you're operating at a higher level of abstraction — you're directing and reviewing rather than typing every line.
The tools have matured rapidly. Cursor and Windsurf brought AI deeply into the editor. Claude Code, GitHub Copilot, and similar agentic tools can now handle multi-file edits, run tests, and iterate on failures automatically. What was a novelty in early 2025 is now a genuine part of how a lot of software gets built.
We've been using these tools heavily since they became capable enough to rely on. This is what we've actually learned.
Where it works remarkably well
Prototyping and one-off tools
This is the killer use case. Things that would have taken a day to build from scratch — a script to parse log files, a small web interface for an internal tool, a data transformation pipeline — now take an hour or two. The AI handles the boilerplate, you handle the domain logic and review.
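As a concrete sketch of the kind of throwaway tool this describes, here is a minimal log-level summariser of the sort an AI will happily draft in seconds. The log format and level names are invented for illustration; your own logs will differ.

```python
import re
from collections import Counter

# Assumed format: "<date> <time> <LEVEL> <message>" — adjust to your logs.
LINE = re.compile(r"^(?P<ts>\S+ \S+) (?P<level>[A-Z]+) (?P<msg>.*)$")

def summarise(lines):
    """Count log lines per severity level, skipping anything unparseable."""
    counts = Counter()
    for line in lines:
        m = LINE.match(line)
        if m:
            counts[m.group("level")] += 1
    return dict(counts)

sample = [
    "2025-02-01 12:00:01 INFO service started",
    "2025-02-01 12:00:02 ERROR db timeout",
    "2025-02-01 12:00:03 ERROR db timeout",
]
print(summarise(sample))  # {'INFO': 1, 'ERROR': 2}
```

The value isn't the code itself, which is trivial; it's that the hour you'd have spent writing and debugging the regex and plumbing goes into checking the output against real data instead.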
This website was built with significant AI assistance. The HTML structure, CSS, and a large portion of the content were generated, reviewed, and refined through conversation rather than typed from scratch. The total time was a fraction of what it would have been writing everything by hand.
Boilerplate-heavy work
Configuration files, Dockerfiles, HTML/CSS, repetitive API integration code — any work where you know exactly what you want but the typing is the bottleneck. AI tools are excellent here. The ratio of thinking to typing is already tilted toward thinking for experienced developers; vibe coding tilts it further.
Working in unfamiliar territory
Need to write a PowerShell script when you're primarily a Linux person? Need to scaffold a React component when you mostly work on backend systems? AI tools dramatically reduce the cost of working outside your primary domain. You can describe what you need, get working code, understand it, and ship it — without spending two days reading documentation first.
Explaining and exploring existing code
Less talked about but genuinely useful: asking an AI to explain what an unfamiliar codebase does, trace through execution paths, or identify what a function is actually doing. We've used this extensively when inheriting legacy code from clients. It's not infallible, but it accelerates understanding significantly.
Where it falls down — and badly
Security-sensitive code
This is the big one. AI models will write SQL injection vulnerabilities, insecure authentication implementations, hardcoded credentials, and inadequate input validation — and they'll do it confidently, with well-structured code that looks completely fine to someone who doesn't know what to look for.
We've seen generated code that:
- Concatenated user input directly into SQL queries
- Stored passwords in plaintext in a database
- Exposed internal error messages (including stack traces) to users
- Implemented JWT validation that could be trivially bypassed
None of this is malicious — the model is producing code that appears to work, because it does work in the happy path. The security failures only appear when someone specifically tries to exploit them. If you don't know enough to look for these issues, you won't find them in review.
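The first failure in that list is worth seeing concretely, because the vulnerable version really does work in the happy path. A minimal sketch using Python's built-in sqlite3 (the table and the payload are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's1'), ('bob', 's2')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: user input concatenated straight into the SQL string.
# For a normal username this behaves perfectly — that's the trap.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: the driver binds the value as data, never as SQL syntax.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(vulnerable))  # 2 — the payload matched every row in the table
print(len(safe))        # 0 — the payload is treated as a literal username
```

Both versions pass a casual test with ordinary input, which is exactly why review by someone who knows the attack pattern matters more than testing alone.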
The rule we apply: anything touching authentication, authorisation, data storage, or external input gets line-by-line human review from someone who understands the threat model. Vibe coding produces a first draft; security review produces something you can ship.
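The plaintext-password failure from the list above also has a well-understood fix worth showing: store a salted, slow hash rather than the password itself. A sketch using only Python's standard library; the iteration count and salt size here are reasonable defaults for illustration, not a prescription.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> bytes:
    """Derive a salted PBKDF2 hash; the password itself is never stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt + digest  # store salt and hash together

def verify_password(password: str, stored: bytes) -> bool:
    """Re-derive the hash with the stored salt and compare in constant time."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)

stored = hash_password("correct horse")
print(verify_password("correct horse", stored))  # True
print(verify_password("wrong guess", stored))    # False
```

Note the `compare_digest` call: a plain `==` comparison would leak timing information, which is exactly the kind of subtlety generated code often gets wrong while still "working".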
Long-running projects with large codebases
AI tools are excellent at local, bounded changes. They struggle with large codebases where context spans many files and the constraints of one subsystem affect another. Models have finite context windows, and even the largest ones get fuzzy when asked to reason about a 50,000-line codebase holistically.
The pattern we've observed: vibe coding works best when the task is well-scoped. "Add a dark mode toggle to this component" — great. "Refactor the authentication system to support SAML" — you need a human architect who understands the whole system, even if they use AI to write individual pieces.
When you don't understand what was produced
Karpathy's original description included the line "I don't read the diffs anymore." This works for him because he has the depth to fix anything that breaks, and he knows which risks he's accepting. For most people, not reading the diffs is how you end up shipping something broken, insecure, or unmaintainable.
The code still exists, even if you feel like you've forgotten it. Someone has to maintain it, debug it, and extend it later. If nobody on your team understands it, you have a problem — and "the AI wrote it" is not a debugging strategy.
The skill gap it reveals
Here is the uncomfortable truth that the vibe coding discourse often avoids: AI coding tools amplify existing skill, they don't replace it.
An experienced developer using AI can move significantly faster than without it — because they know what to ask for, they can spot mistakes in the output, they understand the architecture decisions being made implicitly, and they can iterate intelligently when something doesn't work.
A beginner using AI to skip learning produces code they can't debug, in architectures they don't understand, with security properties they can't reason about. The speed is real; the quality debt is also real, and it comes due later.
The most useful mental model: vibe coding is a power tool. A skilled craftsperson with a power tool does more, faster. A novice with a power tool does more damage, faster. The tool is the same; the context is everything.
What it means for the IT industry
The honest answer is that we don't fully know yet, and anyone who tells you they do is guessing. A few things do seem clear:
The volume of software being built is increasing. Vibe coding lowers the cost of creating software, so more software gets created. More internal tools, more automations, more bespoke solutions that previously weren't worth the development cost. This creates more IT work, not less.
The shape of developer work is changing. Less time writing boilerplate, more time reviewing, directing, and understanding systems. The ability to articulate what you want precisely, evaluate what you get critically, and iterate effectively is becoming more valuable. These are not beginner skills.
Quality engineering matters more, not less. When everyone can produce code quickly, the differentiator is producing code that's correct, secure, and maintainable. The bar for "works" is easy to clear with AI assistance. The bar for "works safely and can be maintained over years" is not.
The bottom line
Use these tools. They're genuinely useful and the productivity improvement for experienced practitioners is real. We use them daily.
But understand what you're doing. Review the output, especially for anything touching security or data. Know the failure modes. Don't ship code you can't explain or debug.
After 25 years in IT, we've watched a lot of technologies get announced as the thing that will change everything. Sometimes the hype is justified and the change is real — the internet, smartphones, cloud computing. Sometimes it isn't.
AI coding tools are in the first category. The change is real. But every genuinely transformative technology in our industry has also created new failure modes and new ways for things to go badly wrong. The organisations that benefit most are the ones that understand the tool well enough to use it wisely — not just quickly.
The good news is that these failure modes should diminish over time. AI is getting smarter every day, and we are right at the beginning — Henry Ford's cars sharing the road with horses. It won't be long before AI can solve all but the most complex security issues reliably. Aisle.com recently provided a glimpse of what that might look like, using AI to discover 12 out of 12 known OpenSSL vulnerabilities — a result that would have taken experienced security researchers considerably longer to achieve manually. That's not a party trick; it's a sign of where this is heading.
Questions or thoughts? We'd love to hear what you're seeing with these tools: info@corenetworks.com.au