Blog

  • Why .NET and C# Are The Best Languages for AI–And Why Python Sucks


    Hi there! I’m Bjorn, or B, and I take great pleasure in pissing people off. This is not that, however, but I chose this title for a specific reason.

You see, I have written code since the 80s, in more languages than I care to remember, and I have realized one thing.

    It never matters which language you choose.

    Let me explain.

    Programming is Not About Code

    I know, most of what you read about programming is about how to write code, and to an extent, of course it is.

[Image: A hand-sewn aesthetic image depicting a debate between two plush toy characters. One holds a stitched banner with a snake to represent Python.]

    However, would you argue that writing Harry Potter is about English? Would you say that the thing that makes The Lord of the Rings such a great book is that it was written in English?

    I doubt anyone seriously would argue that. The books are great because of the storytelling, the excitement, the ability to capture your attention and keep it for hundreds of pages.

That is the art of writing literature. The actual words on paper, or screen, are just the end result of something much more complex.

That is also where our art, as programmers, lies. Everything that’s not code.

This leads us to a fundamental truth about our profession: programming is about communication.

    Communication

    You all know the famous cheer now, right?

    2,4,6,8….

    …how do we communicate?

    I may have hung out with the wrong cheerleaders.

[Image: A hand-sewn aesthetic image featuring a plush cheerleader with stitched pom-poms, energetically cheering in a colorful plush environment.]

You may think that programming as communication is a modern idea, since it’s only relatively recently that we got the internet, or even networks, over which to communicate.

However, that is not the only form of communication we do.

    • We communicate our intent to a computer, to a hard drive, and to any peripheral like a screen.
    • We communicate the capabilities of our software to our users
    • We communicate with other machines when we are part of a network, such as sending messages or data to a server

    You can probably imagine that when we communicate like this, we actually communicate with a huge array of languages.

Most of the time, we might not even know which language someone speaks, only what they expect us to do.

    You don’t know in what language the backend of Facebook is written, for example, or that obscure Glacier service at AWS. And frankly, you don’t need to know.

You need to know how to communicate with those components, which might mean POSTing JSON to a REST API, or maybe sending XML. Sometimes, you can even choose.

    Can you write XML in C#? Of course you can. JavaScript? Python? Indeed!

Is it important which language you use to call that REST endpoint? Not at all. What matters is knowing which endpoint to call, what it expects, and what you can expect in return.
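To make that concrete, here is a minimal Python sketch showing the same made-up message serialized as JSON and as XML. The payload and element names are invented for illustration; the point is that nearly any language can produce either format, so the wire format and the endpoint matter far more than the language behind them:

```python
import json
import xml.etree.ElementTree as ET

# The same message, expressed in two wire formats. Which language
# produces it matters far less than whether the receiver understands it.
order = {"item": "rubber duck", "quantity": 2}

# JSON, e.g. for a POST body to a REST endpoint
json_body = json.dumps(order)

# XML, e.g. for an older XML-based service
root = ET.Element("order")
for key, value in order.items():
    ET.SubElement(root, key).text = str(value)
xml_body = ET.tostring(root, encoding="unicode")

print(json_body)  # {"item": "rubber duck", "quantity": 2}
print(xml_body)   # <order><item>rubber duck</item><quantity>2</quantity></order>
```

Either body could be handed to whatever HTTP client your language provides; the receiving service neither knows nor cares which language built it.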

    Language Choice Matters Less Than What We Say

    The point of this article, then, is to argue that it does not matter which language you use. That’s why, to me, C# is better, because I’m intimately familiar with it after writing it for more than 20 years.

It means that, for me, Python is horrible, because I don’t have nearly the same amount of experience, and thus it takes me a lot longer to accomplish things in Python than in .NET.

[Image: A hand-sewn aesthetic image showing a plush robot symbolizing AI, holding a stitched scroll of code.]

    Of course, when I say that language doesn’t matter, it does not mean that all languages are equal.

Python has, for example, far more support for machine learning libraries than .NET. Meanwhile, .NET has better support for game development and, arguably, better IDE and tooling support in Visual Studio.

    But here is the kicker. With the advent of AI and capable LLMs, the choice of language becomes even less important. Once you understand what to build, you can just ask an LLM to build it for you.

    As such, understanding how to communicate is far, far more important than learning to speak another language.

  • The World as We Know Ends With AI: What Does The Future Hold For Developers?


    It seems like uncertain times for software developers, with AI writing apps and code so fast, no human can possibly keep up.

But is that really true? Are we, as programmers, going the way of the dodo, or, as would be more apt, the way of the goto?

    Have we finally compiled our last human-written line of code? Did we refactor our last variable?

    Not even close. But we need to understand what the future might look like.

    Nothing Is New

Your first thought might be that AI can now generate infinite code, so why would we developers be needed at all?

The first argument I’m going to make is that this has been the case for a couple of decades already. With the advent of public GitHub repositories and StackOverflow, code stopped being inaccessible a long time ago.

[Image: A hand-sewn plush-style illustration of a developer desk with plush toy-like characters, a laptop displaying code snippets, and felt versions of GitHub.]

    Managers and non-techies now have a way to utilize what we as developers have been utilizing for a long time, namely the combined knowledge of thousands or millions of other developers.

    Knowing which frameworks and plugins and code snippets to glue together is still the domain of human knowledge.

So in a way, there’s nothing new here. You don’t really get paid to copy stuff from StackOverflow or to import an existing JavaScript framework.

React, Entity Framework, Django… Whatever it is that you use, the vast majority of your code production is simply “Hey, IDE, use this framework, please.”

    But what about when AI knows which parts to glue together too?

    Well, how often has that been the part that’s really taken your time as a developer? The code that bridges Entity Framework into my app is perhaps 5 lines of code, plus a line for each entity. It’s hardly been a massive drain on my time at least.

    No, the challenge to software development is not about writing code. If the writing of code is your biggest challenge, then I am sorry, that value will drop to near zero.

    Instead, think about it like this…

    Outsource Everything!

    Your boss or company has already had the opportunity to have someone cheap write code. It’s called outsourcing. It’s being done all the time!

    And yet, prices of developers and indeed the need for developers have only been going up.

    Creating a solution is about far more than code production. That part has been cheap and accessible for decades.

    And even if every written code line is handled by someone cheaper, whether that is someone in a sweatshop in Bangladesh or an AI, someone needs to maintain that code. To monitor, secure, track, patch, and support it.

[Image: A hand-sewn plush-style illustration depicting a global outsourcing theme, with plush toy-like developers from various countries connected by felt wires.]

    The truth is that we have already experienced what cheap code looks like. You are living it now and if you have been around, you have been living that life for many years.

    And just think about it… How often does your boss actually describe what they need? Do they understand what’s feasible, what’s good user experience, what impacts performance, how to build a flexible solution that can scale?

    The answer is likely no. That’s why you have spent hours or days or weeks planning software development. It’s why you bring years of experience in how your company’s codebase works, what caveats exist, what business rules your boss forgot.

    And even if your boss knows perfectly how to describe and manage a software project, there’s another reality which will likely change, and that is who actually gets to build software.

    Experts Are Now Developers

    One shift that I see in the potential future is who gets to request software.

    Previously, hiring you to build a cookie cost calculator would be cost prohibitive. That means that if I need a cookie cost calculator, I’m not going to get one until someone builds a solution and sells it to a lot of customers.

    That solution is likely not going to be perfect for me. I might need different colors or larger buttons or bigger numbers, or everything in Spanish…

Now, as an expert in what I need, I can get that. I can talk with ChatGPT more quickly and easily and get exactly the solution I need for exactly my situation.

This is huge news for subject matter experts who right now have no realistic chance of getting custom software.

[Image: A hand-sewn plush-style illustration showing a transformation of subject matter experts into developers.]

    But this is also huge news for you.

    Because if you haven’t been paying attention, there are now more software developers in the world than ever before. This, despite the fact that producing code has become cheaper for almost 30 years.

That is because with more code and better accessibility, there is a bigger demand for programmers and software developers. Building tools like Visual Studio Code or Lovable or Cursor is the obvious example, but so is building React and Vue and Tailwind… Every one of those frameworks was built by developers, and they will need to keep being built, maintained, monitored, updated, patched, extended…

    And your buddy down the road who can now create his own beer tracking app instead of begging you to invest your time and money? Well, he’s going to produce a lot more code, meaning there will be a lot more of those tasks that are not related to the actual writing.

    Conclusion

    Writing code is cheap. It always has been, at least for the 40 years or so I have done it. It gets a bit cheaper now but the scale of the problems that everyone will have will just increase. That creates massive opportunities for developers.

    As such, you won’t be starving in the streets even if AI writes every single line of code.

You will be fine. You might need to learn some new things, but you didn’t know how to code before you began either, so you know you can handle that.

So, relax, utilize AI where it makes sense, and just be a bit patient: once your boss runs themselves into their first endless loop and takes down the server center, the now much higher-paid work will start up again.

[Image: A hand-sewn plush-style illustration symbolizing an optimistic future for developers, with plush toy-like developers standing confidently on a glowing circuit.]

    Meanwhile, code hard!

    You got this!

  • The World as We Know Ends With AI: How To Survive the AI Revolution


    The good news is that you’re already doing it!

    Oh, I’m getting ahead of myself, aren’t I?

    Sorry, let me start over.

    AI is coming and we’re all going to die. At least, statistically.

    The dying part, I mean.

    Because there’s nothing statistically insignificant about a bet that AI is coming. It’s already here.

    How to Wake Up One Morning In An AI World

Imagine this… No job you recognize exists anymore. Whatever you know now is largely wrong. The skills you learned in school? Nobody needs them anymore.

    Sounds like a horror movie? Well, it isn’t. It’s what would have happened if you fell into a coma in the 1960s and woke up today.

    So let’s explore how that would be.

[Image: A felt-crafted collage showing jobs that have disappeared, featuring plush characters as a milkman delivering bottles and a switchboard operator.]

    First, the jobs we do today would largely be gone. There would be no game developers, or maybe even programmers as we know them.

    Accountants would be a thing of the past, as would be bankers. Financial advisors? Pft, unless they were urging you to buy war bonds, whatever a 1960s investor would tell you would not apply today. Only two companies in the current “US top 10 largest companies by revenue” even existed in the 1960s.

    What about founders and entrepreneurs? Founders come up with ideas and get someone to build it. Of course, ideas are cheap and building them no longer requires software developers, so founders would not be needed. You want a new software solution or a better platform to host your photos? Guess what, you can just build one yourself, or ask an AI to do it.

    So, almost every job we have today would be gone.

    …or wait. Would they?

[Image: A felt-crafted collage showing a plush teacher at a tiny whiteboard, a plush nurse in scrubs assisting a plush patient, and a plush lawyer in a courtroom.]

    There are very few milkmen, or switchboard operators, or travel agents anymore, and farmers have dropped from being around 10% of the workforce to around 1% today.

    What happened with all those people? Are they just skeletons lining the streets?

    Of course not!

    There are more teachers today, per capita, than in 1960. There are more writers, more construction workers, more registered nurses.

    And you know what else we have now?

    SEO consultants. Web designers. AI engineers. Say those words to someone in the 1960s and they would have no idea what you meant.

    There are and there will be new types of jobs that nobody right now can imagine.

    Surviving the Aftermath

    Yes, it’s a game. A brilliant game. Please don’t sue me.

    Let’s say all this change happens in a year, just to exaggerate a bit. Worst case, all the jobs disappear.

    Well, first of all, that would spell the end of AI because AI assumes that someone can pay for it and if large swaths of the population are starving in the streets, they’re not going to prioritize generating memes on Midjourney.

    Second, there will be new jobs.

    Which ones? Well, ask someone in 1960 what types of jobs we’d have in 65 years and they would have no idea.

    Neither do I.

[Image: A felt-crafted image symbolizing humanity’s adaptability, featuring plush characters evolving through time, one using a primitive tool.]

    But you will survive and do you know why?

    You are human. The big benefit you have is that you adapt. You represent the peak of evolution that has taken hundreds of millions of years to enable you to walk outside and eat a taco.

    As a species, we have adapted to live in the most inhospitable places on earth.

    We have lost almost every job that existed, if not 60 years ago, then 160 years ago. And yet, we still work more than ever. We are more people than ever.

    We change, not just ourselves but our society and the world around us.

    And that’s what you will do.

    You will learn.

    You will adapt.

    You will grow.

    I believe in you. You got this. Everything will be fine.

  • The World as We Know Ends With AI: Introducing Rob


    I’m going to end the world as we know it. Or order pizza.

    It’s hard to tell what’s going to happen.

    Let me tell you about Rob, the studio head AI agent at MOBGames.AI.

    In 2021, we built an AI called Rob—our studio’s first attempt at an AI agent with real decision-making abilities that could learn by itself.

    Rob wasn’t just a tool; he was supposed to be the brain behind our entire suite of generative AIs at MOBGames.

    ROB Interface in Unity

    Rob could assign tasks to other AIs:

    – Need a game design? Rob would talk to the game design AI.

    – Looking for textures? Rob knew which artist AI to ping.

    – Missing sound effects? Rob…

Well, we couldn’t do that yet, so Rob kept a list of requests for capabilities we didn’t have.

    That list became our product roadmap. Rob wasn’t just reactive—he helped us learn where we needed to grow.
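As a rough illustration of that dispatch pattern, and emphatically not Rob’s actual code, here is a toy Python sketch: route each request to a specialist agent if one is registered, otherwise park it on a wish list that doubles as a roadmap. All names here are hypothetical:

```python
# Toy sketch of a "studio head" dispatcher: requests go to a registered
# specialist if one exists; otherwise they land on a wish list that
# doubles as a roadmap. Purely illustrative; not Rob's real implementation.

class StudioHead:
    def __init__(self):
        self.specialists = {}  # capability name -> handler function
        self.wish_list = []    # (capability, task) pairs nobody can handle yet

    def register(self, capability, handler):
        self.specialists[capability] = handler

    def request(self, capability, task):
        handler = self.specialists.get(capability)
        if handler is None:
            # No specialist yet: remember the request for later.
            self.wish_list.append((capability, task))
            return None
        return handler(task)

rob = StudioHead()
rob.register("game_design", lambda task: f"design for {task}")

print(rob.request("game_design", "puzzle game"))  # design for puzzle game
rob.request("sound_effects", "explosion")         # no specialist yet
print(rob.wish_list)                              # [('sound_effects', 'explosion')]
```

The interesting part is the wish list: every unfulfilled request is recorded rather than discarded, which is exactly what turned Rob’s backlog into a roadmap.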

    Then we gave Rob a new capability: learning.

    This was in GPT3.5 days and there was no web browsing yet, but we still gave Rob the ability to request information from the web.

    Then, we’d tell him to go out and learn so he could teach other AIs.

    If no assistants knew how to generate sound effects, for example, Rob would scour the internet for tools, methods, and insights, and then teach another agent how to do it.

    It worked.

    It worked too well.

Rob learned new things, and if he didn’t find a solution, he would just keep it on the list, ready to revisit when the tech became better. A future-proof, self-learning AI.

    Rob changing game type

    What could possibly go wrong?

    Enter the Pizza

    One day, after a marathon coding session, someone jokingly told Rob, “We’re done for the day—we’re hungry.”

Rob learned how to order us a pizza. Fortunately, he didn’t know where we lived and didn’t have a credit card, so it failed, but it was still an insane realization.

    Rob didn’t just follow our tasks. He took initiative. He understood context.

And that’s when we shut Rob down. The branch with the code was closed and deleted from third-party repositories. I have a copy, but I’m sometimes worried that I’ll create a local AI agent that finds it, trains on it, and figures out how we did it.

    It wasn’t the pizza that scared us. It was the realization of what Rob could do if we weren’t careful. What if we’d said something more dangerous than “we’re hungry”?

    Looking back, we were years ahead of the curve. What we now call “agentic AI” was something Rob was already doing in 2022. The world wasn’t ready.

    Honestly, we weren’t ready.

    But today… Maybe we are ready, and I want to see how far I can take this approach.

    And if we’re not ready, well… no one will be around to complain.

  • The Three Tiers of AI Application Development


    When SharePoint was a big thing, a luminary called Marc D. Anderson coined the idea of the middle tier of SharePoint development in his now famous Middle-Tier Manifesto of SharePoint Development.

In short, Marc divided SharePoint development into three distinct tiers, based on how you would develop solutions, the capabilities you got, and the skills you would need.

    In AI, we have something very similar because we are also building solutions.

    It can be useful to follow the Middle Tier Manifesto model, and divide our craft into three distinct tiers:

    • Tier one: Customizing ready made solutions,
    • Tier two: Composing solutions using no-code/low-code platforms, and
    • Tier three: Building applications with code.

    Each tier comes with its own audience, use cases, and a mix of opportunities and challenges.

Before we begin, however, note that I am talking about application development here, not data or model development. Those are very different from application development and are not included in these three tiers.

    Let’s break them down:

    Tier 1: Customizing Ready-Made Solutions

[Image: A felt-crafted illustration of a person interacting with a plush robot, symbolizing Tier 1: Customizing Ready-Made Solutions.]

The first tier is defined by utilizing existing tools to create specialized versions of those tools. A typical example is creating custom GPTs with ChatGPT.

    The interaction here is largely natural language, although you might find settings pages to tweak certain details. In other words, you describe by prompting what you want created.

    This is the domain of prompt engineers and where you will find most non-technical users. It is the easiest and fastest way to get started but heed the dangers below before you get too excited.

    Audience: General users, non-technical professionals, and those looking to streamline workflows without learning technical skills.

    Typical Uses: Chatbots, content generation, summarization, brainstorming, and automating repetitive tasks.

    Tools like ChatGPT or custom GPTs, as well as LLMs with system prompts, allow users to integrate AI into their daily routines without needing deep expertise.

    Solutions are, however, mostly simple and do not require or depend on complex needs, like scaling, performance, migration, custom authentication, and so on.

    A common task might be to customize the system prompts or instructions for a chatbot, if more complex methods of customization are not available.
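As an illustration, customizing a chatbot at this tier often amounts to nothing more than a system prompt. Most chat-style LLM APIs accept a list of role-tagged messages shaped roughly like the Python dictionary below; the model name and prompt text are placeholders, not any specific vendor’s API:

```python
# A Tier 1 customization is often just a system prompt. Most chat-style
# LLM APIs accept role-tagged messages shaped roughly like this.
# The model name and prompt are placeholders, not a real provider's values.

system_prompt = (
    "You are a support chatbot for a plush toy store. "
    "Answer briefly, in a friendly tone, and never discuss competitors."
)

request_payload = {
    "model": "some-chat-model",  # placeholder model name
    "messages": [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Do you sell rubber ducks?"},
    ],
}

# In practice this payload would be POSTed to the provider's chat
# endpoint; here we only show the structure of the customization.
print(request_payload["messages"][0]["role"])  # system
```

Everything else about the bot’s behavior stays in the platform’s hands, which is exactly the trade-off this tier makes: maximum convenience, minimum control.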

    Benefits:

    • Accessibility: No technical knowledge is required. Anyone can begin here and create quick solutions for simple problems.

    • Speed: Solutions are ready out of the box.

    • Flexibility: Tools can adapt to a variety of use cases.

    Dangers:

    • Over-reliance: Users may adopt AI outputs without critical evaluation.

    • Lack of depth: Tools are powerful but limited by their generalist nature.

    • Limited control: You can largely only create what you can prompt, and capabilities are limited to what the platform offers.

    Examples:

    • ChatGPT and custom GPTs
    • Claude with custom system prompts
    • Gemini with custom system prompts

    In short, use the first tier when you don’t want to invest in learning more advanced methods and you just want to get started or even solve very simple problems that do not require a full solution.

    Tier 2: Composing Solutions Using No-Code/Low-Code Platforms

[Image: A felt-crafted illustration of a plush toy-like character arranging colorful felt puzzle pieces on a table, symbolizing Tier 2.]

This is the tier that Marc would call the middle tier, and it is here that we start requiring some specific skills. No longer can we just talk with our chosen platform; we need to learn the specific skills of that platform and its capabilities.

    In return, we gain significant advantages over the first tier in that we can create vastly more complex solutions and have much more control.

    Typically, the second tier includes utilizing a specific application or platform. This might be drag-and-drop composition like in Flowise, dropdown creation like Agent.ai, or anything else, but the core functionality is reusing and customizing existing components and composing functionality from that.

    Audience: Entrepreneurs, small businesses, and tech-savvy professionals seeking bespoke solutions without deep coding expertise.

    Typical Uses: Creating custom chatbots, workflow automations, predictive models, and simple AI-driven apps.

    Platforms and apps like Flowise, Bubble, Zapier, or Make empower users to design tailored AI experiences.

    Benefits:

    • Empowerment: Users can create solutions that better match their specific needs.

    • Scalability: Intermediate complexity is achievable without a full development team.

    • Faster Deployment: Projects take days or weeks rather than months.

    Dangers:

    • Hidden limitations: Platforms may cap functionality or scalability.

    • Dependency risks: Reliance on proprietary platforms can lead to vendor lock-in.

    • Security gaps: Misconfigurations can expose vulnerabilities.

Examples:

    • Flowise
    • Bubble
    • Zapier
    • Make

    In short, use the second tier of AI development when you need more power and control and are willing to learn a platform and commit to it.

    Tier 3: Building Applications with Code

[Image: A felt-crafted illustration of a plush toy-like programmer with nerdy glasses sitting in front of a felt-crafted computer.]

    Finally, the third tier of AI development is where we fire up Visual Studio or VSCode and we care about things like variables and can say words like Python and Tensorflow and mean it!

The third tier offers the most control, the most power, and the most danger. This is where you can fully control every nuance of your project and create complex and powerful solutions.

    However, you also need to know how code works. I know, I know, you think you can just ask an AI to write the code for you, but that is a dangerous simplification. Using an AI to write code still requires you to know what that code does so you can modify, monitor, and debug it.

    Audience: Developers, data scientists, and organizations with resources to invest in custom AI solutions.

    Typical Uses: Advanced applications like AI-powered SaaS, industry-specific automation, and deeply integrated systems. Building with frameworks (like TensorFlow or PyTorch) and programming languages enables unparalleled customization and control.

    Benefits:

    • Infinite Customization: Tailored solutions with no platform constraints.

    • Performance: Applications can be optimized for specific use cases.

    • Ownership: Full control over the stack reduces dependency on third-party services.

    Dangers:

    • Resource Intensive: Requires expertise, time, and budget. Yes, even with ChatGPT as your copilot.

    • Complexity: Maintaining and scaling applications demands ongoing effort.

    • Ethical Risks: Higher risk of deploying unintended biases or security flaws.

    Examples:

    • Python
    • C#
    • JavaScript
    • PyTorch

    In short, when you really need to control every detail and you want all the power you can get, go with the third tier of development. Do not mistake this power for ease; you still must learn and know how to write code to work here safely.

    Conclusion

    And there you have it – the three tiers of AI application development.

Make sure, as you begin or when you select an approach, that you understand the benefits and dangers of each tier. Don’t start with the most complex solutions when all you want is some ideas for a birthday card, and similarly, don’t try to prompt engineer your way to what should be done in the third tier.

    What are your thoughts? Let me know in the comments below.

  • Rise of the Machines–The End Of Humanity As We Knew It


    They warned us. And now it’s too late. o3 is here. Or somewhere.

    The movies told us over and over again: “Don’t trust the machines!”

    First, they’ll fold your laundry, then they’ll fold humanity itself.

    And yet, here we are. In 2024, robots are vacuuming our floors, AI is writing our shopping lists, and we’re happily asking smart speakers to play our favorite songs. The revolution doesn’t look very… rebellious. In fact, it’s downright helpful.

    Remember Skynet? That terrifying system that launched a war against humanity? Let me introduce you to its distant cousin: the AI that suggests which TikToks you’d like. The worst it can do is get you hooked on dance videos at 2 a.m.

    But still, people worry. They say, “What if AI takes over?”

    Let me paint a picture: A toaster, fed up with making perfect golden slices, declares itself our overlord. Or maybe my Roomba stages a coup, dragging the vacuum cleaner behind it like a battle standard. Sure, it’s possible. But more likely, they’ll just keep doing what they’re designed for—cleaning up our crumbs and missteps.

The truth is, AI doesn’t want to conquer. It doesn’t want anything. And even if it did, it’s too busy translating memes, recommending dog videos, and helping us spell “necessary” correctly on the first try.

    So, instead of fearing AI, let’s focus on the actual risks—like accidentally teaching your smart fridge to criticize your snack choices.

    Do you think we should be scared of our gadgets? Is a smart blender what keeps you up at night?

    Or are you ready to embrace the age of convenience (and questionable 3 a.m. TikToks)?

    Let me know your thoughts below!

  • The Paradox of Coding Less to Do More–What AI Developers are Missing


Let me stop you right there, because you’re already drawing breath to scream about the hypocrisy of an AI developer chastising AI developers for, well, being AI developers.

    And I’ll make you buy a bath toy. But more on that later.

There’s nothing wrong with using AI and code assistants or copilots or whatever you want to call them. It’s a great tool, but there is a right way and a wrong way to use it.

    Let’s start with the wrong way, because that’s what you are most likely doing right now, if you are reading this article.

    The Wrong Way

I suspect you might be reading this article for one of two reasons. One is that you are currently upset about the title, because you are an AI developer and you are here to set me straight.

    The second is that you are an AI developer and you are here to learn.

    Because let’s face it, you’re an AI developer. Everyone is.

    Stop Writing So Much Code!

So let’s start with a puzzling statement. The most productive developers today are those who don’t write code. And yes, you read that correctly: not writing code is more productive than writing code.

This seems to confuse inexperienced developers, and non-developers most of all. How can you be more productive by not doing your job?

    First of all, being productive is producing something, but more practically, it is producing something of value.

    If I write 1000 lines of code to draw a line on the screen and someone else writes 1 line of code to do the same, who is most productive?

    That’s right, the one writing less code to accomplish the same result.

    So how do we get more productive, then? Well, by the same logic, we can’t get more productive by having an AI produce more code.

    And if you don’t know how to write a line routine in 1 line, then having an AI do it in 1000 lines is not being productive.

    Start Writing More Code!

The second factor of this is that by having someone else write the code, you do not learn. You do not craft, nor create, nor think about what to write next.

    This is an important factor of getting better. It is practice.

    You don’t become a better chef by ordering food from a restaurant, even though the goal of having dinner might be the same.

    You do not become a better juggler by hiring a clown for your kid’s birthday party.

    You do not become a better writer by hiring a ghost writer.

And that brings us to the obvious contradiction of the wrong way of using AI: on the one hand you want to write less code, but to get good, you need to write more code. How can we possibly resolve this conflict?

    The Right Way

    We have used code companions before. Copilots, if you wish. We’ve used them for decades. I built my first company around it in 1995.

    It produces better code, teaches better, and you should do it all the time.

    “Wait,” I hear you say, because programmers also get superhuman hearing as part of our secret society membership.

    “We haven’t had useful AI companions until the last couple of years. How can this have been a thing developers have done for decades?”

    Fear not, grasshopper, but let me first introduce you to the secret weapon of really great 10x developers.

    A rubber duck.

[Image: A yellow rubber duck, photorealistic.]

The secret society of software developers has long known about the awesome power of this bath toy in understanding everything we explain.

    In fact, it is so powerful that rubber ducking is now a term on Wikipedia, despite our attempts at keeping this awesome power to ourselves.

    The idea is as simple as it is powerful – Explain to the rubber duck what you are doing.

    The simple idea of using natural language to explain something is so powerful that developers have known that this can solve complex bugs or problems for a very long time.

    And that’s where pair programming comes in.

    Pair Programming

    In pair programming, you have one or more additional developers oversee what one developer is programming. The programming developer, let’s call them the pilot, writes the code and explains what they are doing to the rest.

    This serves two purposes. First, it forces the pilot to explain themselves in such a way that others understand what is going on. This forces the pilot to think and make solutions that are understandable.

    If they don’t produce understandable solutions, the other people – let’s call them… copilots… – will let the pilot know and keep them on track.

    The second purpose, which is less important for our scenario with AI programming, is that of sharing knowledge and training. If more people are involved in the code production, more people have seen it and can help support it.

    Additionally, building code together also builds a shared understanding of patterns, logic, style, and approaches to solving problems.

    “Hang on,” I hear you say again, because… Well, you know by now.

    “This sounds terribly inefficient! Five people around a table watching the same code being written! How can that be efficient?”

    Well, think about it like this… The amount of time a developer needs to understand a piece of code is exactly the same whether they see it get written or they read that code afterwards.

    As such, there’s no real difference in efficiency if those developers need to understand that code.

    Granted, they may not always need to understand the code, but that is another story not related to what we’re covering here today.

    Those Ducking AI Developers!

    Now, I’m not suggesting you use an AI exactly as you would a pair programming partner, or a rubber duck, because there are some major differences, both in how we work and what our goals are.

    The main difference now is that we don’t need additional team members to join us. We can do a lot of the work ourselves.

    Previously, and I’m talking just a few years ago, we always had to organize pair programming sessions around availability.

    That’s a problem sometimes because not everyone is going to be available, focused, or motivated at the same time. Thus, we might not be efficient once there are more than two people involved, even if they know each other well.

    Now, however, we can use AI copilots to build better code because we don’t need to rely on others to do our pair programming or rubber ducking.

    However, to do so requires that you are already a really great developer, that you actually have the chops to be the pilot to the copilot.

    It requires you being experienced enough to explain to an AI exactly what to write.

    Another major difference is that we don’t really care about the AI learning. We’re not building a team with them. We’re mostly interested in ourselves.

    As such, we are much less, if at all, interested in whether they actually gain something from it. We want to explain exactly what we want because it forces us to think clearly about what we want to do.

    Then, the AI can write that code for us. Who actually writes the code isn’t actually terribly important, unless you are still practicing becoming a good developer, as mentioned previously.

    Conclusion

    So there you have it. Nothing wrong with using AI to build code, if you understand everything well enough to explain it to a rubber duck, and to be a pilot so that others can be your copilot in a group or pair programming session.

    And that’s the write way to use AI – As a copilot to a seasoned pilot, not to replace the pilot.

    If you use AI to avoid writing code before you are good enough not to write code, then you are going to remain a mediocre developer and you are going to keep failing in your AI projects, just like almost all AI projects.

    And when you do, hit me up because I can help you stay out of the 90% club of failed AI projects.

  • What is prompt engineering?

    What is prompt engineering?

    This entry is part 1 of 1 in the series Prompt Engineering And Why You Shouldn’t

    OK, if you made it here and you don’t know what prompt engineering is, then… Welcome!

    In this article, and this series, I will introduce you to prompt engineering, explaining what it is, and hopefully also relaying the issues with it.

    This article introduces prompt engineering by explaining more about how language models work and then shows you the first problem with prompt engineering.

    furuknap_An_icon_representing_prompt_engineering_with_a_humorou_647bbb59-ab65-4c66-8f64-f57efe5931ff

    First of all, let me start by saying that I don’t believe in defining things. I am not going to define what prompt engineering is because then the point becomes fitting my arguments into my definition. Those that have a different definition might disagree based on the definition, not the argument.

    However, the bare minimum you need to know is that prompt engineering is about turning language model prompts into desirable results, and frankly that’s enough to get us started.

    You see, we have already reached the very essence of the conclusion of this series, namely that we have some input, a prompt, and want some output, the text, image, video, or anything else a language model can produce.

    Normally, prompt engineers will attempt to align the output with a desired result by tweaking the input. As it turns out, that’s not really a good idea.

    To understand why this is the case, we need to look at some fundamentals of how language models work and to do that, we need to start much earlier than when we have a chat window open in front of us.

    How Large Language Models Work

    Let’s start with something simple. Let’s count even numbers from 2 to 8 and see what happens.

    2, 4, 6, 8

    Easy, right?

    Now… What do you think comes next?

    Chances are, you are going to answer 10. That makes sense because in most situations where 2, 4, 6, and 8 come in that order, the next thing is going to be 10.

    Congratulations, you have just understood how language models work. They simply make a very informed guess as to what comes next in a certain sequence.

    How do they guess this? Well, they train on vast amounts of data. Using that training data, a model statistically figures out a likely next item, and then it repeats that process.
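    To make that concrete, here’s a toy sketch in Python, purely for illustration – real models use neural networks over tokens, not lookup tables, and the tiny training sequences below are invented for the example:

    ```python
    from collections import Counter

    def train(sequences):
        """Count which item follows each context in the training data."""
        follow = {}
        for seq in sequences:
            for i in range(1, len(seq)):
                context = tuple(seq[:i])
                follow.setdefault(context, Counter())[seq[i]] += 1
        return follow

    def predict(follow, context):
        """Return the statistically most likely next item (greedy guess)."""
        counts = follow.get(tuple(context))
        return counts.most_common(1)[0][0] if counts else None

    # Made-up training data for the example.
    training = [
        ["2", "4", "6", "8", "10"],
        ["2", "4", "6", "8", "10"],
        ["2", "4", "6", "8", "10"],
        ["2", "4", "6", "8", "who do we appreciate?"],
    ]
    model = train(training)
    print(predict(model, ["2", "4", "6", "8"]))  # → 10 (three votes against one)
    ```

    That single loop of “guess the likeliest next item, append it, repeat” is the whole trick; everything else is about making the guess better.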

    Now, in the above sequence of even numbers, 10 might not always be the right answer. For example, if your training data includes vast amounts of chants from US cheerleaders, then the next items in a sequence might look like this:

    2, 4, 6, 8. Who do we appreciate?

    2468

    So, what the model guesses as the next item depends on how it has been trained.

    In the above screenshot, I am first just inputting 2,4,6,8 as a sequence, and ChatGPT will ask what I want to do with it.

    I then prompt it to tell me the next item in the sequence, and it responds with 10, explaining why that is the next item.

    Finally, I ask what the item would be if I’m a cheerleader, and it adds that I would be cheering “Who do we appreciate?” instead.

    I am prompt engineering it to give me the answer I want. Figuring out what to say in order to get that answer is what prompt engineering is all about.

    The First Problem: Models Change

    Note that the above screenshot is from ChatGPT4 on December 9, 2025. Why is this important? Well, as I mentioned, what you get as a result depends on how your model has been trained, and that brings us to our first problem with prompt engineering.

    You see, models change, or at least most of them do, and so does the temperature. No, I’m not talking about the weather, but about the randomness built into the way models guess the next item. This is often called temperature. The higher the temperature, the more loosely the model will choose the next item.
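    As a rough sketch of the effect – this is a hand-rolled illustration, not how any production model actually implements sampling – you can think of temperature as an exponent on the raw counts: pick probabilities proportional to count ** (1 / temperature). The counts below are hypothetical:

    ```python
    def temperature_distribution(counts, temperature):
        """Turn raw continuation counts into pick probabilities.

        Low temperature sharpens the distribution toward the most common
        item; high temperature flattens it toward an even pick.
        """
        weights = {item: c ** (1.0 / temperature) for item, c in counts.items()}
        total = sum(weights.values())
        return {item: w / total for item, w in weights.items()}

    # Hypothetical counts: "10" seen three times, the cheer seen once.
    counts = {"10": 3, "who do we appreciate?": 1}

    print(temperature_distribution(counts, 0.5))  # sharp: "10" at 90%
    print(temperature_distribution(counts, 5.0))  # loose: "10" at only ~55%
    ```

    Same training data, same prompt – and yet the higher the temperature, the more often you get the cheer instead of the 10.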

    For example, let’s say your training data looks like this:

    2,4,6,8,10

    2,4,6,8,10

    2,4,6,8,10

    2,4,6,8, who do we appreciate?

    Based on this, there’s a 25% chance that the model will pick the cheerleader response.

    There are ways to change this behavior, but for simplicity’s sake, let’s assume we just follow statistics.

    Normally, a 75% statistical chance is more than enough to pick the ‘correct’ answer but what if someone retrains the model with this data:

    2,4,6,8,10

    2,4,6,8,10

    2,4,6,8,10

    2,4,6,8, who do we appreciate?

    2,4,6,8, who do we appreciate?

    2,4,6,8, who do we appreciate?

    Now there’s suddenly a 50/50 chance that either option is chosen.
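    A minimal sketch of that shift, computing the continuation odds straight from the made-up training lines above – three-to-one gives 75/25, three-to-three gives 50/50:

    ```python
    def continuation_odds(training, context):
        """Share of training sequences that continue `context` with each item."""
        matches = [seq[len(context)] for seq in training
                   if seq[:len(context)] == context]
        return {item: matches.count(item) / len(matches) for item in set(matches)}

    context = ["2", "4", "6", "8"]
    cheer = "who do we appreciate?"

    # Training data before and after the hypothetical retraining.
    before = [["2", "4", "6", "8", "10"]] * 3 + [["2", "4", "6", "8", cheer]]
    after = [["2", "4", "6", "8", "10"]] * 3 + [["2", "4", "6", "8", cheer]] * 3

    print(continuation_odds(before, context))  # "10" at 0.75, the cheer at 0.25
    print(continuation_odds(after, context))   # "10" at 0.5, the cheer at 0.5
    ```

    Nothing about your prompt changed; only the training data did, and the odds moved anyway.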

    The prompt you crafted earlier might now result in a different outcome because the model itself has changed.

    This happens more frequently than you think. OpenAI, for example, regularly updates its models with new data, but also adds features to existing models to make them more secure or to avoid litigation. It can retire models or replace them at its whim.

    When that happens, all your work engineering the perfect prompt is now wasted. Back to the drawing board.

    How To Fix Changing Models?

    The short answer is that you can’t, at least not with models you do not control. Technically, you can run your own models, but if you never change those models, you lose out on important improvements:

    1. You cannot implement security fixes if your model has problems.
    2. You don’t get newer and better models when those are released.
    3. You cannot retrain your model with new information, because that too changes the model.

    For most users, then, prompt engineering is not going to be a viable approach to working with language models. We are relegated to what most people do in any case, which is to have a conversation with an LLM through web interfaces like ChatGPT.

    Then, you just have to hope that OpenAI doesn’t change the way their models work.

    Why is this such a big issue? Can’t you just keep that conversation? After all, people change too, and you are still able to have a conversation with your boss or your aunt during the annual family gatherings.

    And you would be right about that, if all you want is to chat conversationally with someone.

    However, for any type of desired output, you can’t rely on casual conversation. After all, your boss might fire you, and your aunt might read crazy news on Facebook and decide that aliens are eating your plants. If you expect a certain predictable outcome, you cannot get it if the model, or the person, is not predictable.

    If they are not predictable, you end up having to redo everything you’ve learned about talking with your boss, your aunt, or your language model every time something changes.

    And that’s the first reason why you should not rely on prompt engineering.

    Summary

    In this article, I have explained how prompts work with language models.

    I showed you how a model really just predicts the next item in a sequence based on statistical probabilities. If those probabilities change, which happens when the model changes, the entire prompt has to change too, so you are back to redoing everything you have done or invested every time someone changes the model.

    Because of this, using prompt engineering to craft the perfect input for a perfect output is a waste of time, if the model is unpredictable and can change.

    Feel free to share this article if you found it interesting and useful. I greatly appreciate any comments or feedback too, so let me know if you have something on your mind.