“Fix it!” I said, like an idiot, not considering that because they created the bug they would be incapable of fixing the bug.
“Why?” you may ask. Clearly a junior developer is capable of learning. That’s how they go from junior to senior, after all.
People generally perform to their capacity. If they make mistakes, it is not because they are able to do better – it is because they haven’t learned how yet.
You can choose to train them or let them learn and get better, or you can find someone else who can fix the problem.
What you cannot do, however, is assume that they wrote those bugs because they wanted to fail at the task, that somehow they are perfectly able to write better code and simply chose not to.
Now… Imagine someone who is inherently incapable of learning and getting better. That means that no matter how many times you tell them to be better, they are incapable of improving.
They will never be able to do better.
Imagine, if you wish, your microwave. You put stuff in, hit a button a few times, and out comes popcorn.
If your popcorn is underdone, you wouldn’t expect the microwave to learn how to do better if you just tell it to fix it. Next time, you have to be better. You have to click enough times for the popcorn to fully pop. You need to stop before it is too much.
The microwave, however, is none the wiser. It does not understand popcorn. It cannot be better at making popcorn simply by doing it more times. You can get better at pressing the correct buttons the right amount of times, however.
Maybe your microwave isn’t good enough to make popcorn the way you want. In that case, you cannot ask it to be better. You can get a new microwave that might be better at making popcorn, however.
This is the state of AI development today. An AI model will never be able to improve. It does not learn from its mistakes so it will keep making those mistakes. If you tell it to fix it, it cannot because it is the exact same model that created the bugs in the first place.
You might get different bugs. You might even get a fix for your first bug because you gave the model a slightly different prompt, but the model itself is incapable of fixing the bugs it created because it, too, has reached the permanent peak of its own skill.
So, what does this mean? It means that unless you carefully monitor and understand the code that your AI assistant outputs, you are never going to get better code than the model is able to produce. If there are bugs, you will always have bugs.
If there are security flaws, there will always be security flaws.
To get better, you need a different, more capable model, and if that is available, why didn’t you use it from the start? If it isn’t available yet, well, then you can’t fix your bugs with an AI yet, and you just have to wait until someone gives you a better model.
Because they do not learn. They do not get better.
Do you know why AI struggles with making production ready code?
It’s because it has never seen it.
Training Data is Not Production Code
The code on which AI coding assistants have been trained comes from places like StackOverflow, Github, and online articles.
That is, at best, problem solving, which is why AI assistants seem good at certain things: isolated tasks, bug fixing, and even code explanation.
However, real production code is rarely on Github. And by that, I don’t mean that React isn’t open source, I mean that the millions of real and production deployed React projects are not open source.
How many fully open-source games do you know? For example, go search for released Unity games on Github and compare that to the number of games released. I’ve found one list that lists around 50 games in total, mostly small ones.
That is about the same amount that is released on Steam every day.
Production code is vastly different from libraries and examples and things made to showcase a technology. Unlike what you might think if you don’t live in this space, production code is not clean or perfect or stable.
Production code is messy, hacky, and full of bugs. Production code has unused features, partially built features, and features that are really bugs but that can get past QA if we just brand it properly.
The purpose of production code is to produce, not demonstrate or land customers. The product is a solution to a problem or entertainment – it is never the purpose in itself. Nobody writes production code just for the heck of it.
That’s why AI models can’t produce it, or produce it really poorly. They have never seen, to any real extent, production-grade code. That code is hidden in private repositories or never sees the light of day outside a company’s on-premises servers.
So next time you try to take AI-generated code to production, keep in mind that unless you have the experience of putting things into production and maintaining them there, AI cannot help you.
You need real humans for that.
You Can’t Learn Production Code in School
The second reason why it’s extremely hard for AI to produce production grade code is that there really isn’t anything to learn.
By that, I mean that production code is largely one-off. Each codebase is unique, each facing challenges rarely, if ever, seen before.
It’s muddled up by that cranky dude in operations that insists on backups being automatically restored if the system goes down, despite all advice to the contrary.
Why does he insist on that? Who knows, but that’s what production means.
It’s the forced dependency injection where none is needed by that architect who has a hiccup for DI.
Why? Who knows! It doesn’t matter, because although DI is sometimes cool, you might not need it, or you might, and that’s what’s unique about your particular situation.
There’s an old saying that every developer wants to start over when they first join a project. That is true if they come back to the project they built two months later as well.
That is rarely a good idea. The reason it’s a bad idea is that hidden in those quirky decisions is a vast and secret knowledge base that caters to things unique to your particular situation, your particular culture, history, language, or people.
AI will never see that and it cannot be taught in schools either because it has to be done for each project, every time.
I have previously written about how I predict a rise in citizen developers with the increased capabilities of AI programming assistants.
If you haven’t read that, or can’t be bothered, here’s the TL;DR:
AI programming assistants have given rise to a new era of citizen developers—people who can now build their own software without formal training. But with great power comes great overconfidence.
Why is this a major problem? Here are some reasons.
Software Development Means Scope Creep
If you have never built software, chances are you can only guess how destructive scope creep, or feature creep, can be to a project.
In short, scope creep means that you keep adding One More Thing ™ to your project until it has become so complex, it’s impossible to maintain and release.
This happens to seasoned developers too, especially when they are under the control of non-technical or inexperienced managers.
The problem here resides in how each new feature multiplies the ways it can interact with the existing codebase.
Add one new feature, let’s call it Feature A, and you need to review your existing code for that one change. Add another feature, Feature B, and you need to check not only the existing code but also how any changes for Feature B interact with Feature A.
With Feature C added, you have to consider its impact on the original code, on Feature A, and on Feature B, plus how any resulting changes ripple back through all of them.
In short, you end up with a massively complex codebase that has dependencies where you don’t want them. One small change in one small feature can break your entire solution.
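The growth described above can be put in rough numbers. This is a back-of-the-envelope sketch, not a formal claim: it assumes every pair of features is a potential interaction that must be reviewed.

```python
# Back-of-the-envelope: if every pair of features is a potential
# dependency to re-check, the number of pairs grows quadratically.
def interaction_points(n_features: int) -> int:
    """Pairs of features that can interact: n * (n - 1) / 2."""
    return n_features * (n_features - 1) // 2

for n in (1, 2, 3, 5, 10):
    print(f"{n} features -> {interaction_points(n)} interactions to review")
```

Ten features already mean 45 potential interactions. Real dependencies are never this uniform, but the direction of the curve is the point.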
Which brings us to the second reason why inexperienced developers can land in deep waters.
Maintaining Software
Part of building and having software is that it isn’t static. If you have ever even heard about Windows Update, you know what I mean. Even the best software companies continuously update and fix issues in their software.
This isn’t easy. As with scope creep, you end up needing extensive testing, careful planning, and often quite a bit of luck, especially when you are using third-party software like React, Entity Framework, or Django.
You can very easily find yourself in a situation where something that worked yesterday now is a major security risk.
Want an example? I’m writing this in February 2025, a few short months after a serious security issue was discovered in 7-Zip, one of the most popular compression solutions. It affects every version released over the 25-year history of 7-Zip.
And guess what? The AI and LLM won’t know that until months or maybe years from now.
An experienced software developer might know because it is their job to know. And, now that AI is doing much of the boring and largely already automated code generation, they have more time on their hands to be better at that.
You Get More Time To Be Better
Did you know that some of the first programmers had to learn knitting? It’s true; the code that sent Apollo to the moon was literally woven. Look up core rope memory.
How much time do you use to weave your memory these days?
In fact, how much time do you spend building your own computer? Because that’s largely what we did just a few short decades ago.
We don’t do those things anymore, as programmers or computer users, so clearly we’ve been replaced now that machines largely do those things for us, right?
No. Of course not. We spend our time learning new things, being more productive, and, most of all, working less to accomplish the same results as 20 years ago.
Note: This does not mean we work less overall, just that the amount of time we have to spend on the same task, such as compiling a program, building a computer, or weaving our memory, is much less.
That means we get better. And AI doesn’t evolve in the same way, so although automation can do many of the tasks beginners do today, it only means that beginners can start doing things that more advanced users do now.
When I started programming, I had to learn how to tune a TV first and later, build my own RF cable, to connect my computer to a real black and white television set. I programmed for a year before I realized computers had color output.
Today, as a beginner, you can spin up a website, available to anyone in the world, in a matter of minutes.
Try telling that to a young junior programmer in 1984.
Let’s talk about replacing software engineers and developers with AI, shall we?
And let’s examine this from a point of logic thinking.
1. OpenAI and Meta Claim to Replace Entire Careers with AI
By now you have heard that OpenAI predicts that all software developers will be replaced with AI soon. You have also heard that Meta plans to replace all mid-level software engineers this year.
Let’s think about that for a moment.
If Meta can replace all those roles that easily… Why can’t you?
In fact, there will be thousands of mid-level engineers with Meta experience on the market and with a few prompts, you can effectively replace Facebook.
Does that sound like a reasonable story? Of course it doesn’t.
It’s the same with OpenAI. If they truly believe they can create an Artificial General Intelligence that beats all but a few handfuls of developers… why are they still desperately trying to recruit developers?
Why can’t they just use their own AI? Heck, they claim to make PhD-level AIs, yet they are still paying hundreds of thousands of dollars to hire more human PhDs.
Which is it, Meta and OpenAI? Are you replacing humans? And if so, what stops your competition from just using your own AI to replace you? There certainly won’t be any shortage of people who know your codebase and could recreate whatever you have done already, including the AI that replaced them.
The valuations of OpenAI and Meta would drop to zero overnight if what they are saying is true. If they actually believed it, why would they invest so much money and time into destroying their own companies?
Note that I am picking on Meta and OpenAI because they are the most vocal and easiest targets. I know, I’m lazy.
However, the same logic applies even to smaller companies: if your moat, your competitive advantage, the thing on which you build your business, can simply be replicated by someone reverse engineering your business plan and feeding it to ChatGPT, how long do you think your business will last?
2. Soon, we will have Agentic AI more powerful than any human. Just look at how fast we have gotten here!
This is a video we made at MOBGames. It shows an AI that creates scripts inside the game engine Unity. It’s not just a chatbot; it actually has the agency to create and call those scripts inside the game engine.
Then, because it makes a mistake, it not only detects that mistake but also fixes it.
Sounds like sci-fi, right? And yet, you see it with your own eyes, in an unedited video. I can assure you this is real.
This is what everyone is promising will come out soon – agents that are agentic, meaning that they can take actions on their own to achieve a goal rather than just responding to a prompt.
But do you want to know what’s the big secret?
This is made with OpenAI GPT3.5. It was made several years ago.
That is what LLMs were already able to do in 2022. So if that was possible so long ago, where’s that massive surge in power and quality?
The simple answer is that it doesn’t exist. The last major breakthrough was the release of GPT4. That is the gold standard today, albeit with some minor version magic in 4o.
Even the GPT4 models weren’t that big a leap. Many suspected that GPT4 might actually just be several 3.5 models that were trained and tuned on different datasets.
The jump from 3.5 to 4 was maybe a 20-30% increase in quality.
And nothing has happened since then. We went from zero to GPT3.5 to 4.0 in a matter of months and then…
Nothing.
Note: The reality is that we didn’t go from zero to 3.5; we went from theoretical models over a century old, through literally hundreds of years of thinking, planning, solving, and designing. We went through GANs and RNNs, and only in 2017 (that’s almost a decade ago) did we get transformers and self-attention.
Despite billions being invested in AI and training, there’s no GPT-5 now or any time soon.
Even if it came out tomorrow, it wouldn’t be the breakthrough that is needed to get us to a level where the quality of a human software engineer can be replaced.
And even if that happened, see my previous point about what it would take to just replicate OpenAI or Facebook if it is true that you can just fire the majority of your developers and use an AI or two instead.
But let’s assume you could, by magick, manifest GPT-5 tomorrow and that it would be as good as everyone hopes, and that it can replace every single software developer.
You still have to contend with the knowledge cut-off dates.
3. What about knowledge cut-off dates?
Language models like GPT-4 or the possible GPT-5 require insane amounts of computing power to train. How much? Nvidia is currently the second or third most valuable company in the US; that is how much.
So far, it has taken over two years to get from GPT4 to something better (not counting the intermediate 4o versions), and we still don’t have any news from Sam Altman, who otherwise seems to want to tweet or hold an interview every time someone goes to the bathroom at OpenAI.
Here’s why that will prevent software engineers from being replaced by AI.
When a new model gets its training data, you fix a point in time, after which the model will not know anything.
This is a major issue for the quality of a GPT-type model and especially for one aimed at building software.
Because, you see, software changes at an astounding rate. A model like GPT4 is not able to keep up and does not know anything about newer versions of frameworks like .NET 9, React 19, or Django 5.1.
It must be retrained on completely new training data, which can take months or years, and cost billions of dollars in compute power.
Let’s take that one step further.
Let’s say React 20 comes out the day after GPT-5 comes out.
A human can start experimenting with the new features of React 20 in about 15 minutes. Then, the human will gain experience. Maybe they will write a blog post or push some PRs to Github for an issue they discovered.
GPT-5 cannot. You might say that because GPT-5 can browse the web, it can also look up new and current information. But for software especially, documentation, examples, and code history can run to hundreds if not thousands of pages. Including all of that in every prompt exchange, just so you can ask how to make a button in React 20, would take far more resources than a human and be much, much slower.
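To put rough numbers on that claim, here is an illustrative sketch. Every figure in it is an assumption for the sake of argument, not a measurement of any real model or framework:

```python
# Rough, illustrative arithmetic only: every number here is an
# assumption, not a measurement of any real model or framework.
PAGES = 500            # assumed size of one framework's docs and examples
WORDS_PER_PAGE = 400   # assumed density of a technical page
TOKENS_PER_WORD = 1.3  # common rule of thumb for English text

doc_tokens = round(PAGES * WORDS_PER_PAGE * TOKENS_PER_WORD)
context_window = 128_000  # e.g. a GPT-4-class context window

print(f"docs: ~{doc_tokens:,} tokens")
print(f"fits in one {context_window:,}-token context: {doc_tokens <= context_window}")
```

Under these assumptions, one framework’s documentation alone overflows the context window, before you even add your own codebase or the conversation history.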
Only at the point where humans have that experience, after those issues have been resolved, after those blogs with instructions have been written, after those upgrade patterns have been invented… Only then can an LLM be trained on that data.
And it will cost billions. Or it will be too old and unable to be productive by the time it can be released.
This is the case for every model out there. To reach a human-level understanding of new software versions, much less know how to upgrade an old version to a new one, you need to retrain the models. You need to spend those billions of dollars, or your model will be out of date.
4. Bonus reason: What about o1 and o3?
You’ll notice that above, I have avoided mentioning OpenAI’s o1 and o3 models.
The main reason is that these are not models, per se, but more like workflows that use existing models. Even if o3 is technically a new model, it certainly isn’t, in isolation, more capable than the latest 4o models, and it is limited by the same cut-off date. It also hasn’t really shown any major progress in quality.
o3 might be very smart, but it isn’t smart because it has a better training set; it is better because it uses a more complex workflow. It ‘reasons’ and uses ‘chain of thought’ to verify its answers.
It still won’t know anything about a single software release after its cut-off date.
5. Bonus reason #2: But non-tech people are already building software with AI
Yes, they are. And non-tech people have been building software for decades. Lotus Notes, Microsoft Access, SharePoint… An endless series of platforms has claimed that finally, this time, no really, this time, non-developers will be able to do what developers do.
Would it surprise you that few, if any, of those solutions survive more than a short time? That a huge number of security issues and stability problems arise from people who can barely spell out their ideas, let alone describe their solutions to a machine?
Here’s a simple test: Have a current LLM build a solution using only libraries and frameworks that were available 12 months ago. Then, use the same or any other LLM to upgrade that solution to the current version of those libraries and frameworks.
Because if you can’t do that upgrade, you are stuck with all the security flaws, stability issues, and missing features of those old versions.
So yes, non-tech people are generating code. They are not developing software. Developing software has barely been about writing software since the 80s and 90s.
Today, we programmers spend 95% of our time doing all the other stuff, and not a single person has written or even speculated about how an LLM, agentic or otherwise, is going to solve those issues.
Thank you to Georg Zoeller for reading through this and fact checking and thus, I imagine, officially approving everything I say, past, present, or future, as absolute truth and in no way exaggerated or wrong.
Hi there! I’m Bjorn, or B, and I take great pleasure in pissing people off. This is not that, however, but I chose this title for a specific reason.
You see, I have written code since the 80s, in more languages than I care to remember and I have realized one thing.
It never matters which language you choose.
Let me explain.
Programming is Not About Code
I know, most of what you read about programming is about how to write code, and to an extent, of course it is.
However, would you argue that writing Harry Potter is about English? Would you say that the thing that makes The Lord of the Rings such a great book is that it was written in English?
I doubt anyone seriously would argue that. The books are great because of the storytelling, the excitement, the ability to capture your attention and keep it for hundreds of pages.
That is the art of writing literature. The actual words on paper, or screen, are just the end result of something much more complex.
That is also where our art, as programmers, lies. Everything that’s not code.
This leads us to a fundamental truth about our profession: programming is about communication.
Communication
You all know the famous cheer now, right?
2,4,6,8….
…how do we communicate?
I may have hung out with the wrong cheerleaders.
You may think that programming as communication is a modern idea, since we only relatively recently got the internet, or even networks, to communicate over.
However, this is not the only form of communication we do.
We communicate our intent to a computer, to a hard drive, and to any peripheral like a screen.
We communicate the capabilities of our software to our users
We communicate with other machines when we are part of a network, such as sending messages or data to a server
You can probably imagine that when we communicate like this, we actually communicate with a huge array of languages.
Most of the time, we might not even know which language someone speaks, only what they expect us to do.
You don’t know in what language the backend of Facebook is written, for example, or that obscure Glacier service at AWS. And frankly, you don’t need to know.
You need to know how to communicate with those components, which you might do in JSON with a POST message, or maybe it’s XML, or a REST API. Sometimes, you can even choose.
Can you write XML in C#? Of course you can. JavaScript? Python? Indeed!
Is it important which language you use to call that REST endpoint? Not at all. The importance is knowing which endpoint to call, what it expects, and what you can expect in return.
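To make that concrete, here is a small Python sketch. The endpoint fields are made up for illustration; the point is that the contract is the JSON shape itself, and the endpoint never learns which language produced it.

```python
import json

# Hypothetical payload for some REST endpoint; the field names are
# invented for illustration. Any language with a JSON library can
# produce the exact same bytes, and that's all the endpoint sees.
payload = {"productId": 42, "quantity": 3}
body = json.dumps(payload)

print(body)
```

Whether this string is built by Python, C#, or JavaScript is invisible to the receiver; what matters is knowing the expected shape.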
Language Choice Matters Less Than What We Say
The point of this article, then, is to argue that it does not matter which language you use. That’s why, to me, C# is better: I’m intimately familiar with it after writing it for more than 20 years.
It also means that, for me, Python is horrible, because I don’t have nearly the same amount of experience, and thus it takes me a lot longer to accomplish things in Python than in .NET.
Of course, when I say that language doesn’t matter, it does not mean that all languages are equal.
Python has, for example, far more support for machine learning libraries than .NET. Meanwhile, .NET has better support for game development and, arguably, better IDE and tooling support in Visual Studio.
But here is the kicker. With the advent of AI and capable LLMs, the choice of language becomes even less important. Once you understand what to build, you can just ask an LLM to build it for you.
As such, understanding how to communicate is far, far more important than learning to speak another language.
It seems like uncertain times for software developers, with AI writing apps and code so fast, no human can possibly keep up.
But is that really true? Are we, as programmers, going the way of the dodo, or as would be more apt, the way of the goto?
Have we finally compiled our last human-written line of code? Did we refactor our last variable?
Not even close. But we need to understand what the future might look like.
Nothing Is New
Your first thought might be that AI can now generate infinite code, so why would we developers be needed at all?
The first argument I’m going to make is that this has been the case for a couple of decades already. With the advent of public Github repositories and StackOverflow, it’s been a long time since code was inaccessible.
Managers and non-techies now have a way to utilize what we as developers have been utilizing for a long time, namely the combined knowledge of thousands or millions of other developers.
Knowing which frameworks and plugins and code snippets to glue together is still the domain of human knowledge.
So in a way, there’s nothing new here. You don’t really get paid to copy stuff from StackOverflow or to import an existing JavaScript framework.
React, Entity Framework, Django… Whatever it is that you use, the vast majority of your code production is simply “Hey, IDE, use this framework please”
But what about when AI knows which parts to glue together too?
Well, how often has that been the part that’s really taken your time as a developer? The code that bridges Entity Framework into my app is perhaps 5 lines of code, plus a line for each entity. It’s hardly been a massive drain on my time at least.
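Entity Framework is C#, so as a stand-in here is a rough Python analogue using the standard library’s sqlite3. It is only a sketch, but it illustrates the claim: the glue that wires a data layer into an app is a handful of lines, not the time sink.

```python
import sqlite3

# Minimal data-layer glue, loosely analogous to wiring an ORM into
# an app: connect, declare one entity, use it. That's the whole bridge.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
rows = conn.execute("SELECT name FROM users").fetchall()
print(rows)
```

The hard part was never these lines; it is deciding what the entities are, how they evolve, and who maintains them.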
No, the challenge to software development is not about writing code. If the writing of code is your biggest challenge, then I am sorry, that value will drop to near zero.
Instead, think about it like this…
Outsource Everything!
Your boss or company has already had the opportunity to have someone cheap write code. It’s called outsourcing. It’s being done all the time!
And yet, prices of developers and indeed the need for developers have only been going up.
Creating a solution is about far more than code production. That part has been cheap and accessible for decades.
And even if every written line of code is handled by someone cheaper, whether that is someone in a sweatshop in Bangladesh or an AI, someone still needs to maintain that code. To monitor, secure, track, patch, and support it.
The truth is that we have already experienced what cheap code looks like. You are living it now and if you have been around, you have been living that life for many years.
And just think about it… How often does your boss actually describe what they need? Do they understand what’s feasible, what’s good user experience, what impacts performance, how to build a flexible solution that can scale?
The answer is likely no. That’s why you have spent hours or days or weeks planning software development. It’s why you bring years of experience in how your company’s codebase works, what caveats exist, what business rules your boss forgot.
And even if your boss knows perfectly how to describe and manage a software project, there’s another reality which will likely change, and that is who actually gets to build software.
Experts Are Now Developers
One shift that I see in the potential future is who gets to request software.
Previously, hiring you to build a cookie cost calculator would be cost prohibitive. That means that if I need a cookie cost calculator, I’m not going to get one until someone builds a solution and sells it to a lot of customers.
That solution is likely not going to be perfect for me. I might need different colors or larger buttons or bigger numbers, or everything in Spanish…
Now, as an expert in what I need, I can get that. I can talk with ChatGPT more quickly and easily and get exactly the solution I need for exactly my situation.
This is huge news for subject matter experts who right now have no realistic chance of getting custom software.
But this is also huge news for you.
Because if you haven’t been paying attention, there are now more software developers in the world than ever before. This, despite the fact that producing code has been getting cheaper for almost 30 years.
That is because with more code and better accessibility comes a bigger demand for programmers and software developers. Building tools like Visual Studio Code or Lovable or Cursor is the obvious example, but there are also those who build React and Vue and Tailwind… Every one of those frameworks was built by developers, and they will continue to need developers to build, maintain, monitor, update, patch, and extend them…
And your buddy down the road who can now create his own beer tracking app instead of begging you to invest your time and money? Well, he’s going to produce a lot more code, meaning there will be a lot more of those tasks that are not related to the actual writing.
Conclusion
Writing code is cheap. It always has been, at least for the 40 years or so I have done it. It gets a bit cheaper now but the scale of the problems that everyone will have will just increase. That creates massive opportunities for developers.
As such, you won’t be starving in the streets even if AI writes every single line of code.
You will be fine. You might need to learn some new things, but you didn’t know how to code before you began either, so you know you can handle that.
So, relax, utilize AI where it makes sense, and just be a bit patient: wait for your boss to run themselves into their first endless loop that takes down the server center, and for the now much higher paid work to start up again.
AI is coming and we’re all going to die. At least, statistically.
The dying part, I mean.
Because there’s nothing statistically insignificant about a bet that AI is coming. It’s already here.
How to Wake Up One Morning In An AI World
Imagine this… No job you recognize exists anymore. Whatever you know now is largely wrong. The skills you learned in school? Nobody needs them anymore.
Sounds like a horror movie? Well, it isn’t. It’s what would have happened if you fell into a coma in the 1960s and woke up today.
So let’s explore how that would be.
First, the jobs we do today would largely be gone. There would be no game developers, or maybe even programmers as we know them.
Accountants would be a thing of the past, as would be bankers. Financial advisors? Pft, unless they were urging you to buy war bonds, whatever a 1960s investor would tell you would not apply today. Only two companies in the current “US top 10 largest companies by revenue” even existed in the 1960s.
What about founders and entrepreneurs? Founders come up with ideas and get someone to build it. Of course, ideas are cheap and building them no longer requires software developers, so founders would not be needed. You want a new software solution or a better platform to host your photos? Guess what, you can just build one yourself, or ask an AI to do it.
So, almost every job we have today would be gone.
…or wait. Would they?
There are very few milkmen, or switchboard operators, or travel agents anymore, and farmers have dropped from being around 10% of the workforce to around 1% today.
What happened with all those people? Are they just skeletons lining the streets?
Of course not!
There are more teachers today, per capita, than in 1960. There are more writers, more construction workers, more registered nurses.
And you know what else we have now?
SEO consultants. Web designers. AI engineers. Say those words to someone in the 1960s and they would have no idea what you meant.
There are and there will be new types of jobs that nobody right now can imagine.
Surviving the Aftermath
Yes, it’s a game. A brilliant game. Please don’t sue me.
Let’s say all this change happens in a year, just to exaggerate a bit. Worst case, all the jobs disappear.
Well, first of all, that would spell the end of AI, because AI assumes that someone can pay for it, and if large swaths of the population are starving in the streets, they’re not going to prioritize generating memes on Midjourney.
Second, there will be new jobs.
Which ones? Well, ask someone in 1960 what types of jobs we’d have in 65 years and they would have no idea.
Neither do I.
But you will survive and do you know why?
You are human. The big benefit you have is that you adapt. You represent the peak of evolution that has taken hundreds of millions of years to enable you to walk outside and eat a taco.
As a species, we have adapted to live in the most inhospitable places on earth.
We have lost almost every job that existed, if not 60 years ago, then 160 years ago. And yet, we still work more than ever. We are more people than ever.
We change, not just ourselves but our society and the world around us.
And that’s what you will do.
You will learn.
You will adapt.
You will grow.
I believe in you. You got this. Everything will be fine.
I’m going to end the world as we know it. Or order pizza.
It’s hard to tell what’s going to happen.
Let me tell you about Rob, the studio head AI agent at MOBGames.AI.
In 2021, we built an AI called Rob—our studio’s first attempt at an AI agent with real decision-making abilities that could learn by itself.
Rob wasn’t just a tool; he was supposed to be the brain behind our entire suite of generative AIs at MOBGames.
Rob could assign tasks to other AIs:
– Need a game design? Rob would talk to the game design AI.
– Looking for textures? Rob knew which artist AI to ping.
– Missing sound effects? Rob…
Well, we couldn’t do that yet, so Rob kept a list of the requests we couldn’t fulfill.
That list became our product roadmap. Rob wasn’t just reactive—he helped us learn where we needed to grow.
Then we gave Rob a new capability: learning.
This was in the GPT-3.5 days, before web browsing existed, but we still gave Rob the ability to request information from the web.
Then, we’d tell him to go out and learn so he could teach other AIs.
If no assistants knew how to generate sound effects, for example, Rob would scour the internet for tools, methods, and insights, and then teach another agent how to do it.
It worked.
It worked too well.
Rob learned new things, and if he didn’t find a solution, he would keep it on the list, ready to revisit when the tech got better. A future-proof, self-learning AI.
What could possibly go wrong?
Enter the Pizza
One day, after a marathon coding session, someone jokingly told Rob, “We’re done for the day—we’re hungry.”
Rob learned how to order us a pizza. Fortunately, he didn’t know where we lived and had no credit card, so the order failed, but it was still an insane realization.
Rob didn’t just follow our tasks. He took initiative. He understood context.
And that’s when we shut Rob down. The branch with the code was closed and deleted from third-party repositories. I have a copy, but I sometimes worry that I’ll create a local AI agent that finds it, trains on it, and figures out how we did it.
It wasn’t the pizza that scared us. It was the realization of what Rob could do if we weren’t careful. What if we’d said something more dangerous than “we’re hungry”?
Looking back, we were years ahead of the curve. What we now call “agentic AI” was something Rob was already doing in 2022. The world wasn’t ready.
Honestly, we weren’t ready.
But today… Maybe we are ready, and I want to see how far I can take this approach.
And if we’re not ready, well… no one will be around to complain.
When SharePoint was a big thing, a luminary called Marc D. Anderson coined the idea of the middle tier of SharePoint development in his now-famous Middle-Tier Manifesto of SharePoint Development.
In short, Marc divided SharePoint development into three distinct tiers, based on how you would develop solutions, the capabilities you got, and the skills you would need.
In AI, we have something very similar because we are also building solutions.
It can be useful to follow the Middle Tier Manifesto model, and divide our craft into three distinct tiers:
Tier one: Customizing ready made solutions,
Tier two: Composing solutions using no-code/low-code platforms, and
Tier three: Building applications with code.
Each tier comes with its own audience, use cases, and a mix of opportunities and challenges.
Before we begin, however, note that I am talking about application development here, not data or model development. Those are very different from application development and are not included in these three tiers.
Let’s break them down:
Tier 1: Customizing Ready-Made Solutions
The first tier is defined by utilizing existing tools to create specialized versions of those tools. A typical example is creating custom GPTs with ChatGPT.
The interaction here is largely natural language, although you might find settings pages to tweak certain details. In other words, you describe by prompting what you want created.
This is the domain of prompt engineers and where you will find most non-technical users. It is the easiest and fastest way to get started but heed the dangers below before you get too excited.
Audience: General users, non-technical professionals, and those looking to streamline workflows without learning technical skills.
Tools like ChatGPT or custom GPTs, as well as LLMs with system prompts, allow users to integrate AI into their daily routines without needing deep expertise.
Solutions here are, however, mostly simple and cannot address complex needs like scaling, performance, migration, custom authentication, and so on.
A common task might be to customize the system prompt or instructions for a chatbot when more complex methods of customization are not available.
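To make the system-prompt idea concrete, here is a minimal sketch of what tier-one customization often amounts to under the hood: the platform wraps your instructions in a system message that travels with every user message. The function name is illustrative, and the message format assumes the common OpenAI-style chat structure rather than any specific vendor API.

```python
# Illustrative sketch: how a custom system prompt is typically attached to a
# chat request. The dict structure mirrors the common OpenAI-style message
# format; build_chat_request is a hypothetical helper, not a real API.

def build_chat_request(system_prompt, user_message, history=None):
    """Compose the message list a chat completion API typically expects."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history or [])  # earlier turns, if any
    messages.append({"role": "user", "content": user_message})
    return messages

request = build_chat_request(
    system_prompt="You are a polite assistant that only answers questions about birds.",
    user_message="What is the fastest animal?",
)
# The system prompt always comes first, steering every later turn.
```

That single leading message is, in essence, all the “customization” a tier-one tool gives you: you never touch the model, only the instructions sent alongside each conversation.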
Benefits:
Accessibility: No technical knowledge is required. Anyone can begin here and create quick solutions for simple problems.
Speed: Solutions are ready out of the box.
Flexibility: Tools can adapt to a variety of use cases.
Dangers:
Over-reliance: Users may adopt AI outputs without critical evaluation.
Lack of depth: Tools are powerful but limited by their generalist nature.
Limited control: You can largely only create what you can prompt, and capabilities are limited to what the platform offers.
Examples:
ChatGPT and custom GPTs
Claude with custom system prompts
Gemini with custom system prompts
In short, use the first tier when you don’t want to invest in learning more advanced methods and you just want to get started or even solve very simple problems that do not require a full solution.
Tier 2: Composing Solutions Using No-Code/Low-Code Platforms
This is the tier that Marc would call the Middle Tier, and it is here that we start requiring specific skills. No longer can we simply talk to our chosen platform; we need to learn the specific skills of that platform and its capabilities.
In return, we gain significant advantages over the first tier in that we can create vastly more complex solutions and have much more control.
Typically, the second tier involves a specific application or platform. This might be drag-and-drop composition as in Flowise, dropdown-driven creation as in Agent.ai, or something else, but the core activity is reusing and customizing existing components and composing functionality from them.
Audience: Entrepreneurs, small businesses, and tech-savvy professionals seeking bespoke solutions without deep coding expertise.
In short, use the second tier of AI development when you need more power and control and are willing to learn a platform and commit to it.
Tier 3: Building Applications with Code
Finally, the third tier of AI development is where we fire up Visual Studio or VSCode, care about things like variables, and can say words like Python and TensorFlow and mean it!
The third tier offers the most control, the most power, and the most danger. This is where you can fully control every nuance of your project and create complex and powerful solutions.
However, you also need to know how code works. I know, I know, you think you can just ask an AI to write the code for you, but that is a dangerous simplification. Using an AI to write code still requires you to know what that code does so you can modify, monitor, and debug it.
Audience: Developers, data scientists, and organizations with resources to invest in custom AI solutions.
Typical Uses: Advanced applications like AI-powered SaaS, industry-specific automation, and deeply integrated systems. Building with frameworks (like TensorFlow or PyTorch) and programming languages enables unparalleled customization and control.
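At this tier you own every detail, including the unglamorous plumbing that no-code platforms hide. As a small pure-Python illustration (the function and parameters are hypothetical, not from any framework), here is the kind of helper you might write in front of an embedding pipeline: splitting documents into overlapping chunks, with every trade-off under your control.

```python
# Tier-three plumbing sketch: a hypothetical text chunker for an embedding
# pipeline. At this tier, the window size, overlap, and edge handling are
# all yours to decide, tune, and debug.

def chunk_text(text, max_words=100, overlap=20):
    """Split text into overlapping word-window chunks for embedding."""
    if overlap >= max_words:
        raise ValueError("overlap must be smaller than max_words")
    words = text.split()
    chunks = []
    step = max_words - overlap  # how far each new window advances
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break  # the last window already covers the tail
    return chunks

chunks = chunk_text("word " * 250, max_words=100, overlap=20)
# 250 words with a 100-word window and 80-word step → windows at 0, 80, 160
```

Trivial as it looks, this is exactly the control, and the burden, the third tier gives you: when chunk boundaries cut a sentence in half and retrieval quality drops, there is no platform to blame, only your own code to fix.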
Benefits:
Infinite Customization: Tailored solutions with no platform constraints.
Performance: Applications can be optimized for specific use cases.
Ownership: Full control over the stack reduces dependency on third-party services.
Dangers:
Resource Intensive: Requires expertise, time, and budget. Yes, even with ChatGPT as your copilot.
Complexity: Maintaining and scaling applications demands ongoing effort.
Ethical Risks: Higher risk of deploying unintended biases or security flaws.
Examples:
Python
C#
JavaScript
PyTorch
In short, when you really need to control every detail and you want all the power you can get, go with the third tier of development. Do not mistake this power for ease; you still must learn and know how to write code to work here safely.
Conclusion
And there you have it – the three tiers of AI application development.
Make sure, as you begin or when you select an approach, that you understand the benefits and dangers of each tier. Don’t start with the most complex solutions when all you want is some ideas for a birthday card, and similarly, don’t try to prompt-engineer your way into what should be done in the third tier.
What are your thoughts? Let me know in the comments below.
They warned us. And now it’s too late. o3 is here. Or somewhere.
The movies told us over and over again: “Don’t trust the machines!”
First, they’ll fold your laundry, then they’ll fold humanity itself.
And yet, here we are. In 2024, robots are vacuuming our floors, AI is writing our shopping lists, and we’re happily asking smart speakers to play our favorite songs. The revolution doesn’t look very… rebellious. In fact, it’s downright helpful.
Remember Skynet? That terrifying system that launched a war against humanity? Let me introduce you to its distant cousin: the AI that suggests which TikToks you’d like. The worst it can do is get you hooked on dance videos at 2 a.m.
But still, people worry. They say, “What if AI takes over?”
Let me paint a picture: A toaster, fed up with making perfect golden slices, declares itself our overlord. Or maybe my Roomba stages a coup, dragging the vacuum cleaner behind it like a battle standard. Sure, it’s possible. But more likely, they’ll just keep doing what they’re designed for—cleaning up our crumbs and missteps.
The truth is, AI doesn’t want to conquer. It doesn’t want anything. And even if it did, it’s too busy translating memes, recommending dog videos, and helping us spell “necessary” correctly on the first try.
So, instead of fearing AI, let’s focus on the actual risks—like accidentally teaching your smart fridge to criticize your snack choices.
Do you think we should be scared of our gadgets? Is a smart blender what keeps you up at night?
Or are you ready to embrace the age of convenience (and questionable 3 a.m. TikToks)?