My Honest Thoughts on Vibe Coding
As a newcomer to the world of professional development (I just got an internship, yay), I've been doing a lot of thinking about what it means to be a developer in 2025. The rise of tools like Claude, ChatGPT, Gemini, and Cursor has fundamentally changed how we approach coding, but it also raises the question: when does a little healthy "vibe coding" cross the line into mindless dependency?
The Uncomfortable Truth About AI-Generated Code
Let me be brutally honest: I've seen a lot of aspiring developers who have essentially become human code reviewers for AI output. They'll prompt an AI, get a 200-line function back, skim through it (if even that), maybe run it once to see if it works, and then ship it to production. The code works, the feature ships. Everyone's happy. Right?
Call it pride, call it stubbornness, but something fundamentally bothers me about putting my name on a pull request where I didn't write the majority of the code. It feels like claiming credit for a painting that someone else created while I just chose the frame.
Side note: if it's a personal project, I couldn't care less. Go vibe away. I myself have personal projects that are close to 50% vibe coded. (hehe)
Where I Draw My Lines
Don't get me wrong, I'm not some old dog who can't be taught new tricks, refusing to acknowledge that AI is the future. Quite the opposite, actually. I genuinely believe AI represents one of the most exciting opportunities in software development, and building tools has never been more accessible to more people. But there's a difference between using AI as a force multiplier and using it as a replacement for fundamental engineering skills.
Here's how I personally approach AI tools:
Infrastructure and Architecture Questions: When I'm stuck on deployment strategies, database optimization, or system design patterns (particularly because I am an avid self-hoster, and the resources for our breed are very limited), AI is incredibly valuable. It's like having a senior engineer available 24/7 to bounce ideas off of.
Code Reviews and Best Practices: I'll often paste my hand-written code into Claude and ask, "How can this be improved? Does this follow clean code principles?" It's like having an extra pair of experienced eyes on my work; a code review before my higher-ups actually review my code.
Codebase Navigation: Tools like Cursor are phenomenal for understanding large, unfamiliar codebases. Instead of spending hours tracing through files to understand data flow, I can ask targeted questions about specific functions or modules.
But here's my personal rule: I will never submit a PR for a tool that people will actually use where more than 25% of the code is AI-generated, and I will never submit code that I don't fully understand.
Why 25%? I don't know either. It's just an arbitrary value that I thought was reasonable in this day and age.
The Slippery Slope of "Vibe Coding"
The problem isn't just about pride or some purist notion of craftsmanship. It's about what happens when things break. When that AI-generated authentication middleware starts throwing cryptic errors in production, who's going to debug it? The developer who vibed their way through the implementation, or someone who actually understands how the auth validation works?
I've seen my fellow Computer Science students struggle for hours trying to debug AI-generated code that they never fully understood in the first place. It's like trying to sew a shirt when you don't know how to thread the needle. You might get lucky if you keep at it long enough, but the fact is you're fundamentally unprepared for complex problems.
The Learning Paradox
Here's what really gets me thinking: how do you become a good developer if you never practice being a developer?
When I was learning guitar, I could have used software to automatically generate chord progressions and melodies. But I would never have developed the muscle memory, the intuitive understanding of harmony, or the ability to improvise that comes from countless hours of practice (I still suck at playing the guitar, but my point stands). Programming feels similar. There's something irreplaceable about the struggle of working through complex logic, of hitting walls and having to think your way around them.
If we outsource that struggle to AI from day one, what happens to our fundamental problem-solving skills? What happens when we encounter a problem that doesn't have a clean AI solution?
Finding the Balance
I want to be clear: I'm not advocating for avoiding AI tools entirely. I think that would amount to career suicide in 2025. But I think we need to be more intentional about how we use them.
My approach is to think of AI as a very smart pair programming partner, one who's great at suggesting approaches, catching obvious mistakes, and providing implementation details for patterns I already understand conceptually. But I'm still the one driving. I'm still the one who needs to understand every line of code that goes out under my name.
Maybe this makes me slower than developers who can churn out AI-assisted features at breakneck speed. Maybe it makes me less "productive" in the short term. But I sleep better at night knowing that I can debug, maintain, and extend everything I've built.
The Long Game
The reality is that AI will continue to get better, faster, and more capable. In five years, the tools we're using today will look primitive. But I believe the developers who thrive in that future won't be the ones who learned to prompt-engineer their way through problems; they'll be the ones who understand software engineering fundamentals so deeply that they can leverage AI as a powerful extension of their existing skills.
The goal isn't to compete with AI. It's to become the kind of developer who can wield AI effectively while maintaining the deep understanding necessary to solve problems and build robust, maintainable systems.
So yes, I'll keep writing more code by hand. I'll keep taking longer to ship features while I make sure I understand every component. And I'll keep drawing my arbitrary line at 25% AI-generated code in my PRs.
Call it pride, call it principle, or call it pragmatism. But at the end of the day, I want to be a software engineer who actually engineers software.