Posts

Showing posts from July, 2024

The Dark Side of LLMs

AI is the biggest hype right now. It is not as new as people think; it started in the 1950s. Significant advances have happened since 2017 with the transformers architecture, which is at the heart of all generative AI. AI has great potential for substantial disruption. We are seeing significant improvements, but we are far from AGI. AI can be practical and honest, add value to the business, and improve our lives. AI is narrow at the moment and has many challenges. One of the industries with the potential to be highly disrupted by AI is the technology industry, including engineers. Large Language Models (LLMs) can do amazing things: generating text, generating creative images and videos (with lots of problems), and even generating code. It's absolutely normal to be concerned, but the more you understand what's actually going on, the less you need to worry. If you are an expert, you will be fine. We saw a great leap and boost of...

Testing Queues and Batch Jobs

Testing could be considered a solved problem. Everybody knows the importance of testing. Unit testing and integration testing are not rocket science. However, it is still not uncommon to see a lack of testing, poor coverage, and flaky tests in the industry. Internal service implementation can be tested using fakes and mocks. Testing classes is a simple task if you have a good design. Refactoring a legacy system toward a better design makes it easier to test; legacy code can be more entangled, but refactoring is still possible and desirable. However, sometimes you have a good design that is still hard to test. Some architectures are naturally more complex to test. If we consider standard RPC services, it's pretty vanilla to test them. However, things get messier when we consider integration tests or end-to-end testing, mainly because of dependencies and state. From the test point of view, RPC services are simple. You have a request, very likely a REST call, and you have some assumptions; you...
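
As a rough illustration of the fakes-and-mocks point above, here is a minimal sketch in Java (JUnit 5). All names here (OrderService, EmailSender, FakeEmailSender) are hypothetical and only show the idea: the test drives the service through a hand-rolled fake and asserts against its recorded state, so no real dependency is needed.

```java
// Minimal sketch: testing internal service logic with a hand-rolled fake.
// Names are hypothetical; assumes JUnit 5 is on the classpath.
import org.junit.jupiter.api.Test;
import java.util.ArrayList;
import java.util.List;
import static org.junit.jupiter.api.Assertions.*;

interface EmailSender {
    void send(String to, String body);
}

// Fake that records calls in memory instead of talking to a real mail server.
class FakeEmailSender implements EmailSender {
    final List<String> sent = new ArrayList<>();
    @Override
    public void send(String to, String body) {
        sent.add(to + ": " + body);
    }
}

class OrderService {
    private final EmailSender emailSender;
    OrderService(EmailSender emailSender) { this.emailSender = emailSender; }

    // Places an order and notifies the customer.
    void placeOrder(String customerEmail, String item) {
        // ... business logic would live here ...
        emailSender.send(customerEmail, "Order confirmed: " + item);
    }
}

class OrderServiceTest {
    @Test
    void notifiesCustomerOnOrder() {
        FakeEmailSender fake = new FakeEmailSender();
        OrderService service = new OrderService(fake);

        service.placeOrder("alice@example.com", "book");

        // Assert against the fake's recorded state, no network needed.
        assertEquals(1, fake.sent.size());
        assertTrue(fake.sent.get(0).contains("alice@example.com"));
    }
}
```

This kind of test stays fast and deterministic because the fake keeps everything in memory; the harder cases the post goes on to discuss (queues, batch jobs, end-to-end flows) are exactly where this simple substitution stops being enough.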