By Satu Korhonen, Founder of Helheim Labs
At this year’s Open Source Summit in Vienna, Linus Torvalds offered a cutting remark that captured the growing skepticism in the tech world: “AI is 90% marketing and 10% reality.” Coming from the creator of Linux, a project built on transparency and pragmatism, the statement resonated. It’s a sobering take amid a wave of soaring valuations, ambitious predictions, and daily claims that AI will change everything. That skepticism is rising elsewhere too: where AI talks were sought after two years ago, they are far less so today.
There’s a Finnish saying about something being like pushing a snake into the barrel of a rifle (työntää käärmettä kiväärin piippuun): forcing something into a place it really doesn’t want to go. One excellent blog post about this phenomenon is the one about the color blue. So, let’s explore this a bit.
Why the Hype?
AI feels magical, even intelligent. It speaks, or rather produces text, seemingly like us. It appears to reason like us. It mimics understanding and personality. That illusion of intelligence and personality triggers a powerful response, ranging from wonder to excitement to fear. Combine that with aggressive investment cycles, attention-driven and often click-bait media, and platform-scale deployment, and you have a recipe for over-promising: the hype we’re seeing. It makes sense for AI companies to promise that it can do everything and fit everywhere.
The hype, though, isn’t really just about what AI can do. It’s about what people and organizations want it to be able to do. For instance, I recently ran into several studies examining how good large language models are at mathematical reasoning and logic (e.g. https://arxiv.org/abs/2402.00157, https://arxiv.org/abs/2303.05398, https://medium.com/@adnanmasood/why-large-language-models-struggle-with-mathematical-reasoning-3dc8e9f964ae). Models trained to produce language are being tested on mathematical reasoning, whereas earlier AI technologies were evaluated on the tasks they were designed to perform, because even people in the industry want these models to show general intelligence rather than just do what they were trained for. More articles keep appearing on the difficulties AI has with reasoning (source 1, source 2), showing that models lean on rote memorization rather than actual reasoning. While the second of those, by Apple, has since been questioned by Anthropic, the problem still stands.
The industry is looking for generative AI to show general intelligence. We’re trying to find AGI (Artificial General Intelligence), something smarter than us in every way. Some fear it. Some actively seek it. With every new development, we as an AI industry collectively ask “are we there yet?” until it becomes clear that no, we are still not there. And so the cycle goes, with media following every twist and turn with grandiose headlines. The investors and companies funding and developing the technology, and pushing AI features everywhere, have other priorities: making sure the promises are big, the smoke screen of omnipotence and excellence is vast, and the money keeps flowing.
The hope that AI is a magic bullet, able to solve problems it wasn’t trained to solve, to cut costs, and to be the worker that never gets tired, leads to high expectations. And when expectations soar that high, disillusionment often follows. That’s why grounded voices like Torvalds’ matter. They are a real and important reminder that the true test of a technology is utility and ability, not vision or promises. It is also worth remembering that problems should be solved with the simplest technology that addresses the actual problem.
The Core of It All: Next-Word Prediction and Sense-Making
At its most fundamental level, generative AI is a probability engine. It excels at predicting the next likely word, synthesizing information into something coherent, and finding structure in unstructured data. That may sound underwhelming, and simple, compared to the grand vision of Artificial General Intelligence (AGI), but this is exactly where its utility lies.
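To make that concrete, here is a minimal sketch of what “predicting the next likely word” looks like in practice. It uses the Hugging Face transformers library and the small GPT-2 model purely as an illustration of my own choosing; nothing in this post depends on that particular model or toolkit.

```python
# A minimal illustration of next-token prediction: given a prompt, the model
# assigns a probability to every token in its vocabulary, and text generation
# is just picking (or sampling) from that distribution, one token at a time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Open source software is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocabulary_size)

# Probability distribution over the next token, given everything seen so far.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

That is the whole trick: a probability distribution over the next token, applied over and over again.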
From summarizing technical reports to translating obscure documentation, from parsing logs to generating well-structured code comments, AI systems are becoming adept at transforming chaotic information into something human-readable and actionable. They can save hours of paperwork by turning customer or patient conversations into forms that feed into databases. They can identify themes and viewpoints in conversations, and ask preliminary questions of customers so that the customer service person has a better view of the situation before they start problem solving.
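As a rough sketch of that form-filling use case, the snippet below turns a made-up support conversation into a structured record. It assumes the OpenAI Python client and an OpenAI-compatible endpoint; the model name, fields, and conversation are placeholders of mine, not specifics of any system mentioned in this post.

```python
# Sketch: extract a structured record from a free-form customer conversation.
# Assumes the OpenAI Python client; model name and fields are placeholders.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

conversation = (
    "Customer: Hi, my washing machine (model WX-300) stopped spinning yesterday.\n"
    "Agent: Sorry to hear that. Is it still under warranty?\n"
    "Customer: Yes, I bought it in March this year."
)

prompt = (
    "Extract the following fields from the conversation and answer with JSON only: "
    "product, issue, under_warranty (true/false).\n\n" + conversation
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; substitute whatever model you actually run
    messages=[{"role": "user", "content": prompt}],
)

# The model only returns text, so the output still needs to be validated
# before it is written into any database.
record = json.loads(response.choices[0].message.content)
print(record)
```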
But it isn’t magic, or intelligence, or understanding. It’s language, statistics, and compute power aligned in a very particular way. These use cases apply the technology to what it was designed to do. And when combined with other technology, it can create some genuinely sci-fi-seeming results. Whether this technology is capable of true intelligence, and what that would even mean, is a topic for a future post.
Where It’s Useful—and Where It’s Not
The reality, however, is that AI doesn’t produce value everywhere. But that’s not a failure of the technology. Instead, it should be seen as a normal part of technological exploration. We’re still in the test phase. From customer support chatbots to content generation pipelines, companies and individuals are trying AI in every corner of their operations. Some of those tests work. Many don’t. And many of them shouldn’t be expected to work, since we are still in the process of determining what this technology is and how it works. We have a new tool, and now we’re testing what it can do.
What matters isn’t that every trial leads to a high return on investment. What matters is how we evaluate the outcome. When tests are treated as experiments rather than products, we start to see the outlines of where AI actually belongs: situations where humans need a shortcut to clarity, where patterns are buried under mountains of data, or where problems are fuzzy and rooted in human communication. Many of the failures with AI actually reveal valuable gaps in the maturity of the companies adopting it. If every company that fails to get value out of AI examined why and fixed the underlying issues, it would speed up both their AI journey and their ability to understand the benefits and costs of the technology.
The trick is to be brutally honest in that evaluation. Not every AI-powered feature needs to exist. Not every automation saves time. And not every new AI tool is secure or trustworthy enough for deployment. But some of them are.
The Dark Side of AI
The danger of the hype, from my viewpoint, is that it hides the costs and throws up smoke for anyone trying to look at the dark side of the technology. But for sustained value creation, we need to look closely at that dark side as well. I will focus on only two aspects in this post, as it is getting lengthy as is. First, this technology is very hungry for original, human-created data.
Because the models need human-created data, companies developing foundational AI models seek out human-generated content in many ways. They change usage policies to allow themselves to train their models on their users’ data, as Meta recently did. In the EU we have the right to object; other parts of the world are not so lucky. Or they buy rights to use original data from publishers, as Amazon did with the Times, which is a far more defensible way of going about it. And to begin with, the large foundational models were created by scraping everything off the internet that could be scraped, without consideration for intellectual property. While this was defensible when it was a research project for the common good, it is far less so when it is done for profit.
And, second, the technology is very hungry for compute power, which means a sharp increase in the need for data centers, electricity, and chips requiring rare minerals. A recent IEA report on Energy and AI projects that electricity demand from data centers worldwide will double by 2030. The same growth increases demand for the critical and rare minerals used in the equipment that powers AI. This has effects on environments and human rights around the world that we should not ignore. And much of that growth depends on whether throwing AI at everything to see where it sticks is a long-term habit or just a phase in learning to use it well. So the question is: will this growth continue, or level out at some point?
While the industry is working to make models more efficient and less energy-intensive, the environmental and human-rights concerns require at least me to pause and rethink our approach. That’s why Helheim Labs focuses on repurposed hardware and the smallest models that solve the problem. And yes, even as an AI professional, I think about where and when I use AI.
Our Role: Focused, Secure, Real
But returning to Torvalds’ evaluation: what does that 10% consist of—and is it enough to justify all the noise? Time will tell. And the choices we all make will determine it.
At Helheim Labs, we’re focused on understanding and securing the core of AI and discovering that 10% of real utility and impact. Because underneath the marketing slogans and billion-dollar promises lies a powerful technology that, when approached with a mindset of exploration, unwavering honesty about successes and failures, and security in mind from the start, can produce real value.
We’re not here to build hype machines. We’re here to interrogate the systems behind them, to test them in adversarial environments, and to identify where they are truly useful—and where they are dangerously over-trusted and overused. For instance, the IEA report cited above states that cyberattacks have become more sophisticated because of AI, and that the number of attacks on energy utilities has tripled in the past four years. However, AI is also becoming a critical tool for energy companies defending against these attacks.
We believe generative AI has a place in our toolbox, but it needs to keep earning that place through performance, trustworthiness, security, and concrete value. And it should not make global problems worse. So we’re not looking for artificial general intelligence. We’re looking for systems that help people think more clearly, act more quickly, make more grounded and data-informed decisions, and work more securely. We aim to build systems that support the human in the equation.
That’s the real opportunity. And it’s worth pursuing—even if it’s just 10% of the story.
