Tag: Introduction

  • The Three Tiers of AI Application Development

    The Three Tiers of AI Application Development

When SharePoint was a big thing, a luminary called Marc D. Anderson coined the idea of the middle tier of SharePoint development in his now-famous Middle-Tier Manifesto of SharePoint Development.

In short, Marc divided SharePoint development into three distinct tiers based on how you would develop solutions, the capabilities you got, and the skills you would need.

    In AI, we have something very similar because we are also building solutions.

    It can be useful to follow the Middle Tier Manifesto model, and divide our craft into three distinct tiers:

• Tier one: Customizing ready-made solutions,
    • Tier two: Composing solutions using no-code/low-code platforms, and
    • Tier three: Building applications with code.

    Each tier comes with its own audience, use cases, and a mix of opportunities and challenges.

Before we begin, however, note that I am talking about application development here, not data or model development. Those are very different from application development and are not included in these three tiers.

    Let’s break them down:

    Tier 1: Customizing Ready-Made Solutions

[Image: A felt-crafted illustration of a person interacting with a plush robot, symbolizing Tier 1: Customizing Ready-Made Solutions.]

The first tier is defined by using existing tools to create specialized versions of those tools. A typical example is creating custom GPTs with ChatGPT.

The interaction here is largely natural language, although you might find settings pages for tweaking certain details. In other words, you describe what you want created by prompting.

This is the domain of prompt engineers and where you will find most non-technical users. It is the easiest and fastest way to get started, but heed the dangers below before you get too excited.

    Audience: General users, non-technical professionals, and those looking to streamline workflows without learning technical skills.

    Typical Uses: Chatbots, content generation, summarization, brainstorming, and automating repetitive tasks.

    Tools like ChatGPT or custom GPTs, as well as LLMs with system prompts, allow users to integrate AI into their daily routines without needing deep expertise.

Solutions at this tier are, however, mostly simple and neither require nor support complex needs like scaling, performance, migration, or custom authentication.

A common task might be customizing the system prompt or instructions for a chatbot when more complex methods of customization are not available.
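If you are curious what that customization looks like under the hood, here is a minimal sketch using the OpenAI Python SDK; the model name and the prompts are illustrative only, and nothing at this tier requires you to write it yourself – the platform's settings page does the equivalent for you.

```python
# A minimal sketch of a custom system prompt, assuming the OpenAI Python
# SDK (pip install openai) and an API key in the environment. The model
# name and the prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        # The system prompt is what specializes the general-purpose model.
        {"role": "system", "content": "You are a patient support agent for a plumbing company."},
        {"role": "user", "content": "My faucet is leaking. What do I do?"},
    ],
)

print(response.choices[0].message.content)
```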

    Benefits:

    • Accessibility: No technical knowledge is required. Anyone can begin here and create quick solutions for simple problems.

    • Speed: Solutions are ready out of the box.

    • Flexibility: Tools can adapt to a variety of use cases.

    Dangers:

    • Over-reliance: Users may adopt AI outputs without critical evaluation.

    • Lack of depth: Tools are powerful but limited by their generalist nature.

• Limited control: You can largely only create what you can prompt, and capabilities are limited to what the platform offers.

    Examples:

    • ChatGPT and custom GPTs
    • Claude with custom system prompts
    • Gemini with custom system prompts

In short, use the first tier when you don’t want to invest in learning more advanced methods and just want to get started, or when you need to solve very simple problems that do not require a full solution.

    Tier 2: Composing Solutions Using No-Code/Low-Code Platforms

[Image: A felt-crafted illustration of a plush toy-like character arranging colorful felt puzzle pieces on a table, symbolizing Tier 2: Composing Solutions Using No-Code/Low-Code Platforms.]

This is the tier that Marc would call the middle tier, and it is here that we start requiring specific skills. We can no longer just talk to our chosen platform; we need to learn that platform’s particular skills and capabilities.

    In return, we gain significant advantages over the first tier in that we can create vastly more complex solutions and have much more control.

Typically, the second tier involves a specific application or platform. This might be drag-and-drop composition as in Flowise, dropdown-driven creation as in Agent.ai, or something else entirely, but the core activity is reusing and customizing existing components and composing functionality from them.

    Audience: Entrepreneurs, small businesses, and tech-savvy professionals seeking bespoke solutions without deep coding expertise.

    Typical Uses: Creating custom chatbots, workflow automations, predictive models, and simple AI-driven apps.

    Platforms and apps like Flowise, Bubble, Zapier, or Make empower users to design tailored AI experiences.

    Benefits:

    • Empowerment: Users can create solutions that better match their specific needs.

    • Scalability: Intermediate complexity is achievable without a full development team.

    • Faster Deployment: Projects take days or weeks rather than months.

    Dangers:

    • Hidden limitations: Platforms may cap functionality or scalability.

    • Dependency risks: Reliance on proprietary platforms can lead to vendor lock-in.

    • Security gaps: Misconfigurations can expose vulnerabilities.

Examples:

• Flowise
• Bubble
• Zapier
• Make

    In short, use the second tier of AI development when you need more power and control and are willing to learn a platform and commit to it.

    Tier 3: Building Applications with Code

[Image: A felt-crafted illustration of a plush toy-like programmer with nerdy glasses in front of a felt-crafted computer, symbolizing Tier 3: Building Applications with Code.]

Finally, the third tier of AI development is where we fire up Visual Studio or VS Code, care about things like variables, and can say words like Python and TensorFlow and mean it!

The third tier offers the most control, the most power, and the most danger. This is where you can fully control every nuance of your project and create complex and powerful solutions.

    However, you also need to know how code works. I know, I know, you think you can just ask an AI to write the code for you, but that is a dangerous simplification. Using an AI to write code still requires you to know what that code does so you can modify, monitor, and debug it.

    Audience: Developers, data scientists, and organizations with resources to invest in custom AI solutions.

    Typical Uses: Advanced applications like AI-powered SaaS, industry-specific automation, and deeply integrated systems. Building with frameworks (like TensorFlow or PyTorch) and programming languages enables unparalleled customization and control.
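To give a flavor of what this tier looks like in practice, here is a minimal PyTorch sketch, assuming torch is installed; the tiny architecture and dummy data are illustrative, not a recommendation.

```python
# A minimal sketch of Tier 3 work: a tiny PyTorch model. The architecture
# and the dummy input are illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4, 8),   # four input features, eight hidden units
    nn.ReLU(),
    nn.Linear(8, 1),   # one output score
)

x = torch.randn(2, 4)  # a batch of two dummy inputs
print(model(x))        # forward pass: full control, full responsibility
```

Every line here is yours to change, optimize, and debug – which is exactly the point of the third tier.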

    Benefits:

    • Infinite Customization: Tailored solutions with no platform constraints.

    • Performance: Applications can be optimized for specific use cases.

    • Ownership: Full control over the stack reduces dependency on third-party services.

    Dangers:

    • Resource Intensive: Requires expertise, time, and budget. Yes, even with ChatGPT as your copilot.

    • Complexity: Maintaining and scaling applications demands ongoing effort.

    • Ethical Risks: Higher risk of deploying unintended biases or security flaws.

    Examples:

    • Python
    • C#
    • JavaScript
    • PyTorch

    In short, when you really need to control every detail and you want all the power you can get, go with the third tier of development. Do not mistake this power for ease; you still must learn and know how to write code to work here safely.

    Conclusion

    And there you have it – the three tiers of AI application development.

Make sure that, as you begin or when you select an approach, you understand the benefits and dangers of each tier. Don’t reach for the most complex tier when all you want is some ideas for a birthday card, and similarly, don’t try to prompt-engineer your way into what should be done in the third tier.

    What are your thoughts? Let me know in the comments below.

  • The Paradox of Coding Less to Do More–What AI Developers are Missing

    The Paradox of Coding Less to Do More–What AI Developers are Missing

Let me stop you right there, before you start drawing breath to scream about the hypocrisy of an AI developer chastising AI developers for, well, being AI developers.

    And I’ll make you buy a bath toy. But more on that later.

There’s nothing wrong with using AI and code assistants or copilots or whatever you want to call them. They’re great tools, but there is a right way and a wrong way to use them.

    Let’s start with the wrong way, because that’s what you are most likely doing right now, if you are reading this article.

    The Wrong Way

I suspect you might be reading this article for one of two reasons – one is that you are currently upset about the title, because you are an AI developer and you are here to set me right.

    The second is that you are an AI developer and you are here to learn.

    Because let’s face it, you’re an AI developer. Everyone is.

    Stop Writing So Much Code!

So let’s start with a puzzling statement. The most productive developers today are those who don’t write code. And yes, you read that correctly – not writing code is more productive than writing code.

This seems to confuse inexperienced developers and, to no end, non-developers. How can you be more productive by not doing your job?

    First of all, being productive is producing something, but more practically, it is producing something of value.

    If I write 1000 lines of code to draw a line on the screen and someone else writes 1 line of code to do the same, who is most productive?

    That’s right, the one writing less code to accomplish the same result.

    So how do we get more productive, then? Well, by the same logic, we can’t get more productive by having an AI produce more code.

    And if you don’t know how to write a line routine in 1 line, then having an AI do it in 1000 lines is not being productive.
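For illustration, here is what the one-line version might look like, assuming matplotlib is installed; the library is incidental, the code-to-result ratio is the point.

```python
# "Draw a line on the screen" in one line of intent, assuming matplotlib.
import matplotlib.pyplot as plt

plt.plot([0, 1], [0, 1]); plt.show()
```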

    Start Writing More Code!

The second factor is that by having someone else write the code, you do not learn. You do not craft, nor create, nor think about what to write next.

    This is an important factor of getting better. It is practice.

    You don’t become a better chef by ordering food from a restaurant, even though the goal of having dinner might be the same.

    You do not become a better juggler by hiring a clown for your kid’s birthday party.

    You do not become a better writer by hiring a ghost writer.

And that brings us to the obvious contradiction of the wrong way of using AI – on the one hand you want to write less code, but to get good, you need to write more code. How can we possibly resolve this conflict?

    The Right Way

    We have used code companions before. Copilots, if you wish. We’ve used them for decades. I built my first company around it in 1995.

This practice produces better code, teaches better, and you should do it all the time.

    “Wait,” I hear you say, because programmers also get superhuman hearing as part of our secret society membership.

    “We haven’t had useful AI companions until the last couple of years. How can this have been a thing developers have done for decades?”

    Fear not, grasshopper, but let me first introduce you to the secret weapon of really great 10x developers.

    A rubber duck.

[Image: A photo-realistic yellow rubber duck.]

The secret society of software developers has long known about the awesome power of this bath toy in understanding everything we explain.

In fact, it is so powerful that rubber ducking is now a term on Wikipedia, despite our attempts at keeping this awesome power to ourselves.

    The idea is as simple as it is powerful – Explain to the rubber duck what you are doing.

The simple act of explaining something in natural language is so powerful that developers have long known it can crack complex bugs and problems.

And that brings us to pair programming.

    Pair Programming

In pair programming, you have one or more additional developers oversee what one developer is programming. The programming developer, let’s call them the pilot, writes the code and explains what they are doing to the rest.

    This serves two purposes. First, it forces the pilot to explain themselves in such a way that others understand what is going on. This forces the pilot to think and make solutions that are understandable.

If they don’t produce understandable solutions, the other people (let’s call them… copilots…) will let the pilot know and keep them on track.

    The second purpose, which is less important for our scenario with AI programming, is that of sharing knowledge and training. If more people are involved in the code production, more people have seen it and can help support it.

    Additionally, building code together also builds a shared understanding of patterns, logic, style, and approaches to solving problems.

    “Hang on,” I hear you say again, because… Well, you know by now.

    “This sounds terribly inefficient! Five people around a table watching the same code being written! How can that be efficient?”

Well, think about it like this… The amount of time a developer needs to understand a piece of code is exactly the same whether they watch it being written or read that code afterwards.

    As such, there’s no real difference in efficiency if those developers need to understand that code.

    Granted, they may not always need to understand the code, but that is another story not related to what we’re covering here today.

    Those Ducking AI Developers!

Now, I’m not suggesting you use an AI exactly as you would a human pair-programming partner or a rubber duck, because there are some major differences, both in how we work and in what our goals are.

    The main difference now is that we don’t need additional team members to join us. We can do a lot of the work ourselves.

    Previously, and I’m talking just a few years ago, we always had to organize pair programming sessions around availability.

That’s a problem sometimes, because not everyone is going to be available, focused, or motivated at the same time. Thus, we might not be efficient if there are more than two people who know each other well.

    Now, however, we can use AI copilots to build better code because we don’t need to rely on others to do our pair programming or rubber ducking.

    However, to do so requires that you are already a really great developer, that you actually have the chops to be the pilot to the copilot.

    It requires you being experienced enough to explain to an AI exactly what to write.

    Another major difference is that we don’t really care about the AI learning. We’re not building a team with them. We’re mostly interested in ourselves.

    As such, we are much less, if at all, interested in whether they actually gain something from it. We want to explain exactly what we want because it forces us to think clearly about what we want to do.

Then, the AI can write that code for us. Who writes the code isn’t terribly important, unless you are still practicing becoming a good developer, as mentioned previously.

    Conclusion

    So there you have it. Nothing wrong with using AI to build code, if you understand everything well enough to explain it to a rubber duck, and to be a pilot so that others can be your copilot in a group or pair programming session.

And that’s the write way to use AI – as a copilot to a seasoned pilot, not a replacement for the pilot.

    If you use AI to avoid writing code before you are good enough not to write code, then you are going to remain a mediocre developer and you are going to keep failing in your AI projects, just like almost all AI projects.

    And when you do, hit me up because I can help you stay out of the 90% club of failed AI projects.

  • What is prompt engineering?

    What is prompt engineering?

    This entry is part 1 of 1 in the series Prompt Engineering And Why You Shouldn’t

    OK, if you made it here and you don’t know what prompt engineering is, then… Welcome!

    In this article, and this series, I will introduce you to prompt engineering, explaining what it is, and hopefully also relaying the issues with it.

    This article introduces prompt engineering by explaining more about how language models work and then shows you the first problem with prompt engineering.

[Image: An icon representing prompt engineering.]

First of all, let me start by saying that I don’t believe in defining things. I am not going to define what prompt engineering is, because then the point becomes fitting my arguments into my definition. Those who have a different definition might disagree based on the definition, not the argument.

    However, the bare minimum you need to know is that prompt engineering is about turning language model prompts into desirable results, and frankly that’s enough to get us started.

    You see, we have already reached the very essence of the conclusion of this series, namely that we have some input, a prompt, and want some output, the text, image, video, or anything else a language model can produce.

Normally, prompt engineers will attempt to align the output with a desired result by tweaking the input. As it turns out, that’s not really a good idea.

    To understand why this is the case, we need to look at some fundamentals of how language models work and to do that, we need to start much earlier than when we have a chat window open in front of us.

    How Large Language Models Work

Let’s start with something simple. Let’s count even numbers from 2 to 8 and see what happens.

    2, 4, 6, 8

    Easy, right?

    Now… What do you think comes next?

Chances are, you are going to answer 10. That makes sense, because in most situations where 2, 4, 6, and 8 come in that order, the next thing is going to be 10.

    Congratulations, you have just understood how language models work. They simply make a very informed guess as to what comes next in a certain sequence.

How do they guess this? Well, they train on vast amounts of data. Using that training data, a model statistically figures out a likely next item, and then it repeats that process.

Now, in the above sequence of even numbers, 10 might not always be the right answer. For example, if your training data includes vast amounts of chants from US cheerleaders, then the next items in the sequence might look like this:

    2, 4, 6, 8. Who do we appreciate?

[Screenshot: ChatGPT continuing the sequence 2, 4, 6, 8.]

    So, it depends on how your model has been trained what it guesses as the next item.
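To make this concrete, here is a toy sketch of frequency-based next-item prediction, using nothing beyond the Python standard library. Real language models use neural networks rather than lookup tables, but the principle of guessing the statistically most likely continuation is the same.

```python
# A toy next-item predictor based on frequency counts. The training data
# is the cheerleader example from the text; everything else is illustrative.
from collections import Counter

training_data = [
    "2 4 6 8 10",
    "2 4 6 8 10",
    "2 4 6 8 10",
    "2 4 6 8 who do we appreciate?",
]

def next_item(prompt: str) -> str:
    # Count what followed the prompt in the training data.
    continuations = Counter(
        line[len(prompt):].strip()
        for line in training_data
        if line.startswith(prompt)
    )
    # Return the statistically most likely continuation.
    return continuations.most_common(1)[0][0]

print(next_item("2 4 6 8"))  # "10" wins, three occurrences to one
```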

In the above screenshot, I first input 2,4,6,8 as a sequence, and ChatGPT asks what I want to do with it.

    I then prompt it to tell me the next item in the sequence, and it responds with 10, explaining why that is the next item.

    Finally, I ask what the item would be if I’m a cheerleader, and it adds that I would be cheering “Who do we appreciate?” instead.

    I am prompt engineering it to give me the answer I want. Figuring out what to say in order to get that answer is what prompt engineering is all about.

    The First Problem: Models Change

    Note that the above screenshot is from ChatGPT4 on December 9, 2025. Why is this important? Well, as I mentioned, what you get as a result depends on how your model has been trained, and that brings us to our first problem with prompt engineering.

You see, models change, and so does the temperature. No, I’m not talking about the weather, but about the randomness built into the way models guess the next item, commonly called temperature. The higher the temperature, the more loosely the model will choose the next item.
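Here is a minimal sketch of what temperature does to that choice, assuming only the Python standard library; the two continuations and their weights match the training data example that follows.

```python
# Temperature-scaled sampling: higher temperature flattens the distribution,
# so less likely continuations get picked more often. Weights are illustrative.
import math
import random

def sample(weights, temperature):
    # Rescale log-probabilities by the temperature, then renormalize.
    scaled = {item: math.exp(math.log(w) / temperature) for item, w in weights.items()}
    total = sum(scaled.values())
    items = list(scaled)
    probs = [scaled[i] / total for i in items]
    return random.choices(items, weights=probs)[0]

weights = {"10": 0.75, "who do we appreciate?": 0.25}
print(sample(weights, temperature=0.1))  # almost always "10"
print(sample(weights, temperature=2.0))  # noticeably more random
```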

    For example, let’s say your training data looks like this:

    2,4,6,8,10

    2,4,6,8,10

    2,4,6,8,10

2,4,6,8, who do we appreciate?

    Based on this, there’s a 25% chance that the model will pick the cheerleader response.

    There are ways to change this behavior, but for simplicity’s sake, let’s assume we just follow statistics.
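A back-of-the-envelope check of that number, in plain Python:

```python
# Four training lines, one cheerleader ending: 1/4 = 25%.
endings = ["10", "10", "10", "who do we appreciate?"]
p_cheer = endings.count("who do we appreciate?") / len(endings)
print(f"{p_cheer:.0%}")  # 25%
```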

Normally, a 75% statistical chance is more than enough to pick the ‘correct’ answer, but what if someone retrains the model with this data:

    2,4,6,8,10

    2,4,6,8,10

    2,4,6,8,10

2,4,6,8, who do we appreciate?

2,4,6,8, who do we appreciate?

2,4,6,8, who do we appreciate?

    Now there’s suddenly a 50/50 chance that either option is chosen.

The prompt you had crafted earlier might now result in a different outcome, because the model itself has changed.

This happens more frequently than you think. OpenAI, for example, regularly updates their models, both with new data and by adding features to existing models to make them more secure or to avoid litigation. They can retire models or replace them at their whim.

    When that happens, all your work engineering the perfect prompt is now wasted. Back to the drawing board.

    How To Fix Changing Models?

The short answer to how you avoid this problem is that you can’t, at least not with models you do not control. Technically, you can run your own models, but even then, if you never change those models, you lose out on important features:

1. You cannot implement security fixes if your model has problems.
2. You don’t get newer and better models when those are released.
3. You cannot retrain your model with new information, because that too changes the model.

    For most users, then, prompt engineering is not going to be a viable approach to working with language models. We are relegated to what most people do in any case, which is to have a conversation with an LLM through web interfaces like ChatGPT.

    Then, you just have to hope that OpenAI doesn’t change the way their models work.

    Why is this such a big issue? Can’t you just keep that conversation? After all, people change too, and you are still able to have a conversation with your boss or your aunt during the annual family gatherings.

    And you would be right about that, if all you want is to chat conversationally with someone.

However, for any type of desired output, you can’t rely on casual conversation. After all, your boss might fire you, and your aunt might read crazy news on Facebook and decide that aliens are eating your plants. If you expect a certain predictable outcome, you cannot get it if the model, or the person, is not predictable.

If they are not predictable, you end up having to redo everything you’ve learned about talking with your boss or your aunt or your language model every time something changes.

    And that’s the first reason why you should not rely on prompt engineering.

    Summary

    In this article, I have explained how prompts work with language models.

I showed you how a model really just predicts the next item in a sequence based on statistical probabilities. If those probabilities change, which happens when the model changes, the entire prompt has to change too, so you are back to redoing everything you have done or invested every time someone changes the model.

Because of this, using prompt engineering to craft the perfect input for a perfect output is a waste of time if the model is unpredictable and can change.

    Feel free to share this article if you found it interesting and useful. I greatly appreciate any comments or feedback too, so let me know if you have something on your mind.