Written by Satu Korhonen on the 25th of July 2025
There has been an influx of AI tools that can code: create working proofs of concept from prompts (text descriptions of what the software should do), write tests for code, document it, and so forth. This all makes perfect sense, as there is an abundance of code available on the internet that can be utilised, through means of varying shades of acceptability, to train models to have this capability.
The tools are now so good that they allow almost anyone to create a piece of software. They allowed us to change the framework under Fenrir from Streamlit to Flask in a very short timeframe, when we had only a beginner's understanding of Flask. So it is very good. It is very easy. And increasingly, it is used.
Benefits of vibe coding
Having an AI next to you when you code is very much like having someone to continually discuss with: what you should do, what the error messages you run into mean, how they can be solved, and what other ways there are of solving the same problem. It is like having someone else read your code to find the bug you can't see even after staring at it for a while. For me it removes a lot of the dread of the code. It gets me going to the point where things work well enough to produce error messages. Those I often then google.
It also allows more and more people to explore creating software. This benefit cannot be overstated. Our societies are more and more run through digital interfaces, and allowing more people to create the ways of interaction and communication is a beneficial thing. AI tools enabling people to explore their creativity, build things, and share them is a worthwhile endeavor.
Vibe coding also frees more time to actually think and plan: what should be created, what the options are, what the unintended consequences might be, and what problems lie down the line if a given approach is chosen. It allows iterating through different options far faster, ending up with a better outcome in a similar or shorter time than before.
And the vulnerabilities?
I am not a developer as such. I am a machine learning engineer by trade, interested in cybersecurity, and I have a doctorate in education. I see the risks of vibe coding through two different lenses. The first of these is cybersecurity. AI learns from the code it has seen and emulates the most probable continuation. I've seen code on the internet. Many of the examples are out of date, no longer work with updated libraries, and contain vulnerabilities. I have been recommended to write API secrets, a password and a username, directly into code going to GitHub. I know there are a lot of these on GitHub, as generative AI has leaked the secrets contained in other repositories.
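The safer pattern is to read secrets from the environment rather than writing them into source files that end up in version control. A minimal sketch, assuming Python; the variable name `MY_SERVICE_API_KEY` is made up for illustration:

```python
import os

def load_secret(var_name: str) -> str:
    """Fetch a secret from the environment; fail loudly if it is missing."""
    value = os.environ.get(var_name)
    if value is None:
        raise RuntimeError(
            f"Set {var_name} in the environment; never hardcode secrets in source."
        )
    return value

# Usage: export MY_SERVICE_API_KEY=... in your shell (outside git), then:
# api_key = load_secret("MY_SERVICE_API_KEY")
```

If an AI assistant suggests pasting the key directly into the code, that suggestion is exactly the outdated practice described above.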
While these solutions improve, they will still contain many examples of outdated practices, meaning vulnerabilities. They will not represent the newest understanding of cybersecurity risks and capabilities, and they are not able to mitigate them. This is not to say a cybersecurity professional could not benefit from these tools when hardening code, but the user of these tools should not expect this kind of expertise to just magically be in the tool they are using. For more about this viewpoint, check also this text.
Also check the code it generates. AI has been known to hallucinate libraries that don't exist, allowing a malicious actor to create the library to fill the void, causing unsuspecting users to download and include the malware themselves. Humans are ingenious in figuring out ways to misuse tools. More about this here, here, and here.
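One cheap defence is to never install a dependency just because an AI suggested it: keep a reviewed list of packages your team has actually vetted, and flag anything outside it before it reaches `pip install`. A minimal sketch of the idea, with an illustrative (not recommended) allowlist:

```python
# Packages the team has actually reviewed; contents here are purely illustrative.
APPROVED_PACKAGES = {"flask", "requests", "numpy"}

def unvetted_packages(requested: list[str]) -> list[str]:
    """Return requested package names that are not on the reviewed allowlist."""
    return [name for name in requested if name.lower() not in APPROVED_PACKAGES]

# A hallucinated or typosquatted name stands out immediately:
print(unvetted_packages(["flask", "flask-helperz"]))  # ['flask-helperz']
```

This does not prove a flagged package is malicious, only that nobody has looked at it yet; pinned requirements and hash checking take the same idea further.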
Unfortunately, I foresee a rise of vulnerable AI-generated applications leaking sensitive data and spreading malware. There is already a shortage of people knowledgeable in cybersecurity. The influx of bad code and vulnerabilities will run into a wall, as everyone capable of fixing the issues already has their hands full. AI also allows generating malware at speed. I'll write about this later, but for those interested in the topic, this article will serve as an introduction. I will also write another post about AI agents and the risks inherent there that are beginning to materialize.
Learning and thinking impacted
Another worry I have concerns how the mind works. The tasks we struggle to solve literally develop new pathways through the brain. This is called learning. What happens to this process when it is the AI that creates the code and solves the problems? Earlier, companies struggled when the person who wrote the code left and no one understood how the solution worked anymore. Now, I fear, the person does not have to leave for this to be an issue. When AI did the thinking and logic, no person in the company necessarily knows why it works the way it does.
I also see this in myself. When I get tired, and really should stop coding, I can sometimes still push through with the help of AI to create one more feature, one more connection. It is so fast and effortless. And it leads to a world of pain when, the next day, I am trying to figure out what I actually did, what implications it has, whether it breaks something else somewhere, or where I wrote a certain piece of code so I can fix the issue it's causing. I had some interesting debugging sessions this spring, for sure, untangling the circular logic of the AI fixing one thing and breaking another at the same time.
AI can help you learn a lot. I learned a heap of debugging skills, patience, and the habit of reading the actual documentation created by humans (usually insufficient and often buggy as well, but that's just the speed of the industry). I also learned where AI can be used safely. The recommendation I often give is to use AI only on topics you are an expert in, and only when you can actually check the output.
Use AI with oversight
I do not suggest we stop using the tools at our disposal. What I do suggest is that you do the due diligence of checking how the solution handles your data, and evaluate the risks involved. I also suggest you stay within the area of your expertise when using it for work. You can use it for learning in many other ways as well, including playing with the technology and testing its limits. But when you use it for work, or in a real-life situation with consequences, please use it wisely. Check the outputs. You can even use AI to check the code for vulnerabilities and bad coding practices. It's quite hilarious to see it contradict itself repeatedly and go round and round in loops, fixing and breaking things. This is the experience I hope everyone vibe coding gets, as it shows some of the limits. And for the love of all that is secure, please don't give your AI access to production code and databases. More on this later.
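Simple automated checks can back up that human review. As an illustration of the kind of check you can run yourself, here is a small sketch using Python's `ast` module to flag string literals assigned to secret-sounding names. The name list is made up and far from complete; dedicated linters and secret scanners do this far better, but the point is that checking is cheap:

```python
import ast

# Illustrative list of names that should never be assigned a literal value.
SUSPECT_NAMES = {"password", "secret", "api_key", "token"}

def find_hardcoded_secrets(source: str) -> list[int]:
    """Return line numbers where a suspicious name is assigned a string literal."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Assign)
                and isinstance(node.value, ast.Constant)
                and isinstance(node.value.value, str)):
            for target in node.targets:
                if isinstance(target, ast.Name) and target.id.lower() in SUSPECT_NAMES:
                    findings.append(node.lineno)
    return findings

snippet = 'api_key = "sk-123"\nname = "demo"\n'
print(find_hardcoded_secrets(snippet))  # [1]
```

Running something like this over AI-generated code before committing it catches exactly the kind of hardcoded-secret suggestion described earlier, without relying on the AI to police itself.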
