AI Output

AI Agents and vibe coding ("vibing") are growing in popularity. It's time for us to think about which use cases make sense for them and which use cases would be a mistake. AI is changing how people work, learn, and behave. Should I vibe or not vibe?


Vibing can be useful when


AI Agents have limits. Vibe coding has several drawbacks, including security challenges, hallucinations, and incorrect facts. However, there are a few use cases where vibing can be useful, when:


  • It's not a priority: Imagine code you would never write and tools you would never create, simply because you have bigger priorities and would never spend weeks or months on such tasks.
  • It's not your specialty: Let's say you would never learn C# to build a Windows extension of your application, but with AI Agents, or even with vibing, you can get that done without much effort.
  • Limited Resources: You lack either the time or the funds to undertake such a task. AI agents/vibing can allow you to get something done, albeit with lower quality and limitations, but it can still be a win.


Obviously, you would not apply vibe coding to your core business, your most strategic project, or your spearhead of innovation. Simply never reading the code would be a recipe for disaster. Not all pieces of software have the same value or require the same level of investment.


Vibing leads to problems.


It's possible to use LLMs for various use cases. However, excessive usage of LLMs can create several issues, such as:

  • Decrease in Knowledge Retention: Your prompt is not the work; it's the request for the job. You did nothing, you learned nothing.
  • Decrease in Attention to Detail: LLMs tend to spill out much more than you asked for, adding a lot of noise and obscuring the real value.
  • Decrease in Delivery Quality: Getting things done faster is excellent; however, if you spend less time reviewing, polishing, and maturing the work via several iterations, quality will go down.


You must ask yourself the following question: if you tell AI to do everything, what is your job? AI is great, but it is a prediction machine that outputs the most likely next sequence of bytes. Now, keep in mind that LLMs are available to everyone who can pay $20+ USD per month, meaning we all have equal access. So "just" using LLMs is no differentiator and it's not innovation, especially when everyone else is doing it.


Input vs Output


The trends in our industry are changing. The most natural thing is utilizing AI for output: generate images and videos, generate text, generate code. However, AI is also transforming how we work and think. Perplexity was a pioneer in this; today, when we use Google, the first thing that appears is an AI Summary.


It's very tempting to use AI for day-to-day work because LLMs are adept at summarizing, providing quick results, and saving time. It's tempting to generate presentations with AI, create prototypes with AI, and generate documentation with AI. Answer emails with AI, do code review with AI, and create tests with AI. Do it all with AI.


When we have output that is 100% AI, we need to ask ourselves how fundamentally this changes several aspects of engineering, such as code review, design, and architecture. What are you reviewing?

You did not code it, you did not learn it, so what are you learning from the review — how to write a better prompt next time? How much is human intention vs how much is the AI's autocomplete? If you did not think it was worth your time to do it, is it worth somebody else's time to review it?


Perhaps the most sensible suggestion is to use AI only for input in certain types of tasks, and let humans do the writing; that way you still retain ownership, and you are forced to analyze and vet what the AI outputs.


Using AI for input is excellent; you can complete a POC much faster, then use the time to read the code and understand it, and then repeat the process without AI. You will still be faster, but you will increase your ownership and learning.


Design and Architecture tasks require a lot of thinking, careful trade-off analysis, a crystal ball, and the ability to predict the future based on what did not happen, what is not written, and what has not been asked yet. Often, the output of design and architecture is text and a wiki. But you cannot outsource good judgment. That's why I believe it's better to use AI for input, not output, for these tasks.


cheers,

Diego Pacheco
