Why LLMs Can Never Fix Their Own Bugs

I once had a junior developer create a bug.

“Fix it!” I said, like an idiot, not considering that, because they had created the bug, they would be incapable of fixing it.

“Why?” you ask. Clearly a junior developer is capable of learning. That’s how they go from junior to senior, after all.

People generally perform to their capacity. If they make mistakes, it is not because they are able to do better – it is because they haven’t learned how yet.

You can choose to train them or let them learn and get better, or you can find someone else who can fix the problem.

What you cannot do, however, is assume they made those bugs because they wanted to fail at the task, that somehow they are perfectly able to write better code and simply chose not to.

Now… Imagine someone who is inherently incapable of learning. No matter how many times you tell them to do better, they cannot improve.

They will never be able to do better.

Imagine, if you wish, your microwave. You put stuff in, hit a button a few times, and out comes popcorn.

If your popcorn is underdone, you wouldn’t expect the microwave to learn how to do better just because you tell it to fix it. Next time, you have to be better: you have to press the button enough times for the popcorn to fully pop, and you have to stop before it burns.

The microwave, however, is none the wiser. It does not understand popcorn, and it cannot get better at making popcorn simply by doing it more times. You, on the other hand, can get better at pressing the right buttons the right number of times.

Maybe your microwave isn’t good enough to make popcorn the way you want. In that case, you cannot ask it to do better. You can, however, get a new microwave that might be better at making popcorn.

This is the state of AI development today. An AI model will never be able to improve on its own: once trained, its weights are frozen, so it does not learn from its mistakes and it will keep making them. If you tell it to fix it, it cannot, because it is the exact same model that created the bugs in the first place.

You might get different bugs. You might even get a fix for your first bug because you gave the model a slightly different prompt, but the model itself is incapable of fixing the bugs it has created, because it too has reached the permanent peak of its own skill.
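To make that concrete, here is a minimal sketch, in Python, of the usual “just ask it to fix it” retry loop. The `call_model` and `looks_buggy` functions are hypothetical stand-ins (not any real API) for a fixed, already-trained model and for your tests; the only point is that every retry calls the exact same frozen model.

```python
# Minimal sketch of the "ask the model to fix it" loop.
# `call_model` is a hypothetical stand-in for any fixed, already-trained model:
# its behaviour is determined entirely by weights that never change here.
def call_model(prompt: str) -> str:
    # Placeholder: imagine this calls whatever model you use.
    # Nothing inside it is updated by being called.
    return "def add(a, b):\n    return a - b  # still wrong\n"

def looks_buggy(code: str) -> bool:
    # Placeholder check; in reality you would run your tests here.
    return "wrong" in code

prompt = "Write an add(a, b) function."
code = call_model(prompt)

for _attempt in range(3):
    if not looks_buggy(code):
        break
    # We can change the *prompt* between attempts...
    prompt = f"This code is buggy, please fix it:\n{code}"
    # ...but the model we call is byte-for-byte the same one that
    # produced the bug. Any improvement comes from the new prompt
    # (or from luck), not from the model having learned anything.
    code = call_model(prompt)
```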

So, what does this mean? It means that unless you carefully monitor and understand the code that your AI assistant outputs, you are never going to get better code than the model is able to produce. If there are bugs, you will always have bugs.

If there are security flaws, there will always be security flaws.

To get better, you need a different model, a more capable one, and if that is available, why didn’t you use it from the start? If it isn’t available yet, well, then you can’t fix your bugs with an AI yet and you just have to wait until someone gives you a better model.

Because they do not learn. They do not get better.

You need a human for that.
