If you frequent Hacker News, you have likely noticed the buzz around engineers using AI (specifically Large Language Models, or LLMs) to tackle Computer Science problems.
I want to be clear: I’m not against LLMs.
LLMs are incredibly powerful tools, and can be a huge boon to engineers.
They can automate repetitive tasks, generate code snippets, help with brainstorming, assist in debugging, and more.
This frees up engineers’ time and mental energy, which can be channeled into more complex, creative problem-solving.
But, like any tool, LLMs should be used wisely.
LLMs can hallucinate, exhibit inconsistencies (especially with self-reflection models), and harbor biases. These limitations mean that LLM outputs require careful review before they can be trusted.
A key concern with LLMs lies in their training data.
That data can be biased and sometimes contradictory, but it does contain solutions to known problems.
If an engineer wants to “reinvent the wheel,” an LLM might offer a solution (good or bad, depending on the prompt). But when faced with truly novel problems, LLMs often provide unreliable responses, placing the burden of error detection squarely on the engineer.
This reliance on readily available solutions, particularly for familiar problems, creates a real risk: engineers may inadvertently atrophy their own problem-solving skills, hindering their ability to tackle truly novel challenges.
The solution lies in balance, and in a focus on the “why”, not just the “what”.
Engineers should strive to understand the reasoning behind LLM-generated solutions, not simply accept them blindly. Blind acceptance shifts the focus from solving problems to merely obtaining a solution. Crucially, solving complex problems depends on mastering simpler, foundational skills, and those skills atrophy quickly without practice.
This idea summarizes why I disagree with those who equate the LLM revolution to the rise of search engines, like Google in the 90s.
Search engines offer a genuine choice between Exploration (crawling through the list and pages of results) and Exploitation (clicking on the top result).
LLMs, however, do not offer this choice; they encourage immediate exploitation instead.
Users may explore if the first solution does not work, but the default is always to exploit.
Exploitation and exploration are complementary. Remove the exploration and you will introduce more and more instability into the exploitation process.
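To make the analogy concrete, here is a minimal sketch of the classic exploration/exploitation trade-off, using a hypothetical epsilon-greedy agent on a toy multi-armed bandit (the arm payoffs and function names are illustrative, not taken from any real system):

```python
import random

def run_bandit(epsilon, true_means, steps=10_000, seed=0):
    """Epsilon-greedy agent on a toy multi-armed bandit.

    With probability `epsilon` we explore (pick a random arm);
    otherwise we exploit the arm with the best estimate so far.
    """
    rng = random.Random(seed)
    estimates = [0.0] * len(true_means)   # running reward estimates per arm
    counts = [0] * len(true_means)
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_means))                              # explore
        else:
            arm = max(range(len(true_means)), key=lambda a: estimates[a])     # exploit
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]             # incremental mean
        total_reward += reward
    return total_reward / steps

arms = [0.2, 0.5, 0.9]  # hidden payoffs; the last arm is actually the best
print("pure exploitation:", run_bandit(0.0, arms))  # can lock onto a mediocre arm
print("with exploration: ", run_bandit(0.1, arms))  # tends to find the best arm
```

With no exploration, the agent commits to whichever arm looked good first and never revisits that decision; a small amount of exploration keeps its estimates honest. The same dynamic applies to engineers who only ever take the first answer an LLM gives them.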
Computer Science emerged because humans needed tools to solve problems faster and wanted to focus on the real problems, not repetitive tasks.
Humans built machines to accelerate problem-solving, but engineers remained the masters of the algorithms.
I fear we’re losing our grip on this mastery.
Not because engineers are becoming less and less intelligent, but because the pressure to deliver solutions quickly is paramount.
In embracing these “fast-paced solutions”, we risk losing a fundamental skill: focus. Because focus, like any skill, requires practice.
This is a worrying trend. If engineers become less adept at solving complex problems, what does the future hold? Will our ability to tackle complex challenges rest solely on self-reflecting AIs, rather than human ingenuity?