The Dark Side of LLMs: part 2

July 2024: I wrote the first blog post about The Dark Side of LLMs. In the seven months since, many things have changed; the use of AI and LLMs in engineering has kept growing. LLMs and AI are cool. However, they are not a free lunch, and they have consequences. If you have not read the first blog post of this series, go read it, because it will be relevant to this one.

One significant open-source development in LLMs was DeepSeek. DeepSeek is interesting for two reasons: it is considerably cheaper, and it is open source. DeepSeek introduced a series of optimizations: parallelism; chain of thought (CoT), which reasons step by step so we can see where the model goes wrong; and the use of Reinforcement Learning (RL) alongside distillation. RL is the same family of techniques that teaches robots to move and self-driving cars to navigate a city. DeepSeek being open source is great for the community; however, we also see interesting moves from big tech companies like Meta, Microsoft, Google, and Amazon, which are going nuclear. Nuclear power is a way to reduce the cost of AI, but it is also a brute-force approach.

AI Models are a commodity

We knew this would happen. We are at a point where the cost of AI models is no longer the big gap, and keep in mind that hardware will get better and there will be further optimizations like the ones DeepSeek introduced. Last year (2024), Guido Appenzeller showed some interesting charts in his blog post on why LLM costs are going down, and fast.

Today, any decent AI coding tool allows users to choose between multiple models. We will see more improvements; however, we are not seeing exponential performance benefits and might not see them for a while. 

The Nature of the Game

I know people are concerned about losing their jobs, and we just recovered from a long pandemic. However, we need to understand that although LLMs are disrupting software engineering, chatbots, and SaaS quite a lot, we are still far from having AI build a better Netflix.

The nature of software is not "typing"; it's learning. We still need to figure out what people want. Innovation, more than ever, is a differentiator. Anyone can use a copilot, barely know the software, and have some solution working. That's excellent for starting out and getting something running. Can it build Netflix at the level of sophistication Netflix has? Not really; we are not there yet. The nature of software is learning: understanding people's pains and how to deliver better experiences. Code is one element of it, but it's not everything.

New Interface and Expectations

LLMs are a new interface, but they are not 100% new, because we have used chats and chatbots in the past. What we definitely have is a much faster Google. Since the early days of the internet and open source, we have had access to a great deal of information. That has not changed; we just have tools to consume that information faster.

LLM models are the raw material. Companies are trying to deliver a second level of abstraction on top of AI: an AI coder, an AI researcher, or an AI UX designer. We are not quite there yet. However, when we do something repeatedly, it gets into our blood and becomes a mindset, which can be excellent or pretty bad.

AI is setting the pace to change a lot of expectations, such as:

  • Speed: Have an answer in seconds. But can you really do that on a very complex code base where 90% of the code is anti-patterns and tech debt? Would the AI know the difference? Speed is good and bad at the same time. We can go faster, but it will also make us go slower (I will explain later).
  • We don't need as many developers: This is not really true. AI has not achieved AGI, and no matter how much marketing uses the word "reasoning," AI has poor reasoning. For sure, it puts pressure on developers to be better, to do more, to deliver more, to know more, and to have other skills, which, honestly, IMHO, is a good thing.
  • Competition: If anyone can code now (we know this is not true, but entertain the idea for a while), the effect is that more people can do "simple" things. This is no different from the past; we always had things like NO CODE or LOW CODE platforms, and that is supercharged now. However, if everybody can do the same things, it puts more pressure on engineers to improve and gives them more opportunities to innovate.

Right now, AI is a much better auto-complete with browser capabilities and API integrations. Some people call that agents or agentic behavior, which is more unpredictable and has its own challenges, but it is a growing trend.
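To make "agentic" concrete, here is a minimal sketch of the loop behind most agents. The callModel and runTool methods are hypothetical stubs, not any real framework's API: the model picks an action, a tool executes it, and the observation is fed back into the context.

    import java.util.ArrayList;
    import java.util.List;

    // A minimal sketch of an agent loop. callModel and runTool are
    // hypothetical stubs standing in for an LLM call and a tool runner.
    public class AgentLoopSketch {

        record Action(String tool, String input, boolean done) {}

        public static void main(String[] args) {
            List<String> context = new ArrayList<>();
            context.add("GOAL: find the failing test and propose a fix");

            for (int step = 0; step < 10; step++) {   // hard cap: agents can loop forever
                Action action = callModel(context);    // the LLM decides the next step
                if (action.done()) break;              // the model believes the goal is met
                String observation = runTool(action);  // shell command, browser, API call...
                context.add(action.tool() + "(" + action.input() + ") -> " + observation);
            }
            context.forEach(System.out::println);
        }

        // Hypothetical stub: a real version would send the context to an LLM API
        // and parse the reply into the next Action.
        static Action callModel(List<String> context) {
            return new Action("shell", "mvn test", context.size() > 3);
        }

        // Hypothetical stub: a real version would execute the tool and capture output.
        static String runTool(Action action) {
            return "2 tests failed in OrderServiceTest";
        }
    }

The unpredictability mentioned above lives exactly in that loop: each iteration depends on whatever the model decides to do next.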

LLMs are good for

LLMs are getting good at minor problems like auto-complete, trivial issues, and repetitive coding tasks. Migrations are another area where LLMs are improving a lot. You cannot do a whole complex migration hands-off; however, if you feed them one class at a time, one piece of code at a time, LLMs are pretty good at it (see the sketch below). Sure, they still make many mistakes and hallucinate, but they can speed things up considerably in migrations. Explaining syntax and pieces of code you might not fully understand is also a good use of LLMs.
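As a rough illustration of that one-class-at-a-time workflow, here is a sketch in Java. The askLlm helper is hypothetical, standing in for whatever model API you use, and the JUnit 4 to JUnit 5 prompt is just an example; the point is the shape of the loop, with a human reviewing every result.

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.stream.Stream;

    // Sketch of an LLM-assisted migration: feed one class at a time, never the
    // whole codebase. askLlm is a hypothetical helper around your model's API.
    public class MigrationSketch {

        public static void main(String[] args) throws IOException {
            Path src = Path.of("src/main/java");
            try (Stream<Path> files = Files.walk(src)) {
                files.filter(p -> p.toString().endsWith(".java"))
                     .forEach(MigrationSketch::migrateOneClass);
            }
        }

        static void migrateOneClass(Path file) {
            try {
                String original = Files.readString(file);
                String migrated = askLlm(
                    "Migrate this class from JUnit 4 to JUnit 5. " +
                    "Return only the full updated source.\n\n" + original);
                // Write to a review directory, not over the original: a human
                // must diff and test each class, because LLMs still hallucinate.
                Path out = Path.of("migrated", file.getFileName().toString());
                Files.createDirectories(out.getParent());
                Files.writeString(out, migrated);
            } catch (IOException e) {
                System.err.println("Skipping " + file + ": " + e.getMessage());
            }
        }

        // Hypothetical: would call an LLM API (OpenAI, DeepSeek, a local model...).
        static String askLlm(String prompt) {
            return prompt; // placeholder so the sketch compiles and runs
        }
    }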

The Dark Side of LLMs

Over-relying on LLMs is a bad idea. I remember that in high school and university, my teachers would give a ZERO to students who copied and pasted homework from the internet. So this phenomenon is not new; of course, LLMs speed it up.

There are things you cannot outsource; you can't delegate everything, and one of those things is your learning. By using an LLM for code assistance, you are trading off learning for speed. Speed is great; everybody loves speed, and I love speed. However, learning matters the most. Take the speed, but make sure you keep learning.

Let's look at some effects LLMs can have on engineers:

Skill Degradation: The more you repeat a task, the more you can do it without effort. If AI is doing everything for you, you will naturally become less proficient. I have done POJOs in Java my whole life; I can write a POJO blindfolded with one hand. I don't think I will ever forget how to write a POJO, but can I say the same thing about data structures and algorithms? I don't think so. Some skills degrade significantly if we do not exercise them frequently. IMHO, the worst case is troubleshooting. By using LLM chat support for coding, you get the end result but miss the process of getting there. Troubleshooting skills are essential, especially when you have production bugs.
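For readers who have not lived in Java, this is the kind of POJO I mean: pure muscle-memory boilerplate.

    // A classic POJO: no framework, no logic, just fields, a constructor,
    // getters, and setters. Boilerplate you never forget after years of typing it.
    public class Customer {
        private String name;
        private String email;

        public Customer(String name, String email) {
            this.name = name;
            this.email = email;
        }

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
    }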

Reasoning: AI does not reason. But if that happens someday, do you want to outsource your brain completely? I don't think that's a good idea. How easy or hard will it be to grow senior engineers in the future? To grow, engineers need to struggle and suffer. Without pain, there is no real learning. If everything is comfortable and easy, how much do you think it is fair to get paid for "easy" work all the time? LLMs can be addictive, and the more addicted you are, the less you might think. So it's essential to keep thinking and doing things without LLMs; otherwise, how can you tell if the LLM is right or wrong?

Scaling: YES, LLMs can build things. Can they build something quickly, by themselves, in one shot or even a few shots? No, not really. Can LLMs build an Uber, a Netflix, or even a Google at the full scale of complexity and richness of features? Not today, and I don't know if we are even close to that. LLMs will be great for small businesses, low code / no code, chatbots, and support areas, but they are not at the point where they can really scale and build something massive by themselves, hands-off. You have probably heard about Levelsio and Fly Pieter. It's fantastic to build a game using AI; however, remember that the AI did not build it by itself. There were lots of interactions, trial and error, and failure. The game is cool, but it's not Ace Combat, Flight Simulator, or a AAA game. Now think about this for a moment: how will you build a AAA game or the next Netflix if you delegate the easy things all the time? Would you be able to do the hard ones?

Vibe Coding

Vibe coding is also trendy. It is the new name for NO CODE or even LOW CODE. We may have the Wix of everything now. Is vibe coding the future? Only if what you are doing is very simple; if it is complex, I don't think so. We need to remember that software engineering is not only about code; it's also about:

  • Design
  • User Experience
  • System Design
  • Trade-offs
  • Architecture
  • Configuration Management
  • Refactoring
  • Migrations
  • Observability and Troubleshooting
  • Requirements and Needs
  • Innovation
  • Experimentation

All of that can be code, but it is not just code. Keep that in mind. Would vibe coding push us to be more like product engineers or architects? IDK, but for sure, knowing multiple skills is always a good idea.

Jevons Paradox

The Jevons paradox is the idea that increased efficiency can lead to increased consumption rather than decreased consumption.

People think that LLMs will kill engineering, but the opposite is true: we will have more software. The classic example is coal: when steam engines became more efficient, Britain burned more coal, not less. Companies may build more software now since it is easier and cheaper to do so. Efficiency makes consumption go up, not down.

Preparing for the Future

Is there anything against using LLMs? No. However, we need to use them wisely for code assistance. If we just trade learning for speed all the time, we will get dumber. But if we take the speed LLMs can give us and still focus on learning, we will have the best of both worlds. More than ever, we need to focus on learning.

You do not want to be an LLM proxy. If all you do is sit between the LLM and somebody else, you are in a terrible position and likely to be replaced. You must add value. How do you add value? By stepping up and improving your game. You cannot be an LLM-Ops who just feeds "prompts" to the LLM; that will kill you, 100%.

IMHO, we will always need product and architecture. You will always need to figure out what people need and what the experience will look like; that's beyond coding, and we will still require coding. We will need to optimize the code. How can we optimize code if we don't understand it? It turns out that if nobody knows code, knowing code will be a differentiator. Think about this: who will improve the LLMs? Who will optimize them?

Vintage Coding via Coding Dojos

I have always believed in coding dojos. I have been running coding dojos for a long time, starting around 2009. When we do coding dojos without AI, you can think of it as vintage coding, or coding like it's 1950. But the reality is that this resistance against AI exists to protect your learning, to make sure we still struggle, still suffer, and still learn. You need to be very careful with comfort, because comfort is the formula for getting worse and never learning anything new, and being 100% dependent on AI is a very, very bad idea.

Creativity

Being creative and coming up with new solutions and experiences are still very relevant skills. Good communication, understanding needs, and extracting requirements are other essential skills that will become increasingly important. Humans are still much better than AI at being creative. Some time ago, I read the book The Comfort Crisis. The book does not talk about AI or engineering. However, it's the perfect metaphor for why we need to struggle and face adversity and pain; that's the path to learning and improving. Sometimes, to be creative, we need motivation; the best motivation is pain, and the best way to have skin in the game is to face consequences and be held accountable.

LLMs, in a sense, are a form of self-inflicted pain for engineers, and that's good, because it forces us to do better and to be better.

PS: The cover image was generated with Grok 3. It took me at least 30 minutes and more than 10 prompts to get it, with some timeouts and some awful results along the way. It is not exactly what I wanted, but at some point, I gave up. LLMs tend to involve a lot of trial and error, and sometimes we do not even understand what's happening. Is that better? No. It's, again, a game of trade-offs.

Cheers,

Diego Pacheco

