Friday, August 29, 2025

When will AI's "Monsanto Moment" come?

I cannot help but see parallels between the introduction of AI into software development and the rise of gene modification in crops. Crops have had their genes modified through domestication for centuries, but when corporations like Monsanto popularized lab-based manipulation, the patent system was brought in to protect that investment. Back in the early 2000s there was a big push to scare consumers about the supposed risks of GMO crops, and a lot of that fear carries over today, with consumers demanding non-GMO produce on the assumption that it is healthier. The reality behind that push wasn't about health and safety; it was about money, and the use of these patents to force farmers to change their practices. Prior to GMO crops, farmers would buy seed stock, and if they had a good harvest they could opt to save some of it as seed for the next year. When they bought GMO seed stock, stock resistant to insects or herbicides and the like, they signed a contract barring them from keeping seed. Each year they would need to buy the full allotment of GMO seed. The company would even go so far as to sue adjacent farmers who might benefit from cross-pollination from GMO fields. This rather heavy-handed treatment of farmers wasn't likely to garner much sympathy from a general public that only stood to benefit from the better supply of increased yields. Still, to strike back at these corporations, the farmers' supporters attacked GMO crops any way they could.

Today, AI tools are becoming increasingly available, at low cost or even free. People have started asking questions around ownership, both in terms of the licensing/copyright of the code these AI tools have been trained on, and of the code they in turn generate. For now, companies like Microsoft claim that if you use an AI tool and are challenged by a copyright holder over a violation, they've got your back, while whatever code the tool generates is your IP. But for how long? Monsanto didn't start from scratch with the genetic makeup of the crops it improved. Generations of experience and cross-breeding by botanists, farmers, and others had supplied the base. Much of that work was done for little more than recognition, out in the public domain, with the expectation that future improvements would remain freely available. That is, until big industry found a way to patent its work, commercialize it, and take ownership of it. When it comes to business, it is a lot like fishing. When you first sense a fish biting at your bait, you need to resist the urge to yank the line or you'll pull the hook out of its mouth. Fish can be clever, grabbing the edge of the bait and waiting for a free meal. No, instead you wait patiently until the fish is committed, then pull and set the hook. Once they could convince the patent office and the courts, the hook was firmly set.

Today, companies like Microsoft have invested a good deal of money into developing and marketing these AI tools. They are putting their lines out in the water with tasty bait, offering to help companies and development teams produce better quality code and products faster than ever. They are being patient so as not to spook the fish. Today you own what the tool generates, but soon, I'd wager, companies like Microsoft will set that hook and, like Monsanto, demand their share of the value of the yield you produce, directly or indirectly, from their seed. By force. How exactly that shapes up, we'll have to see. Perhaps terms stating that once you start development with AI assistance you cannot "opt out"? Or will they demand part ownership on the basis that their tools generated a share of the IP in the end product?

Thursday, August 21, 2025

Why I don't Grok AI

I'm a bit of a dinosaur when it comes to software development. I've been on the rollercoaster chasing the highs of working with a new language or new toolset. I've ridden through the lows when a technology I happen to really enjoy working with ends up getting abandoned. (Silverlight, don't get me started. For a website? Never. For intranet web apps? Chef's kiss.)

I'm honestly not that worried about AI tools in software development. As an extension to existing development tools, if it makes your life simpler, all the more power to you. Personally, I don't see myself ever using it, for a few reasons. One reason is the same as why I don't use tools like ReSharper. You know, the 50,000 different hotkey combinations that can insert templated code, etc. The reason I don't use tools like that is because, for me, coding is about 75% thinking and 25% actual writing. I don't like, nor want, to write code faster, because I don't need to. Often in thinking about code I realize better ways to do it, or in some cases, that I don't actually need that code at all. Having moar code fast can be overwhelming. Sure, AI tools are trained (hopefully) on best practices and should theoretically produce better code the first time around, not needing as much refactoring, but the time to think and tweak is valuable to me. It's a bit like the tortoise and the hare. Someone with AI assistance will probably produce a solution far faster than someone without, but at the end of the day, what good is speed if you're zipping along producing the wrong solution? Call me selfish, but I also think any developer should see the writing on the wall: if a tool saves them 50% of their time, employer expectations are going to push for 100% more work out of them in a day.

The second main reason I don't see myself using AI is that when it comes to stuff I don't know, or need to brush back up on, I want to be sure I fully understand the code I am responsible for, not just request something from an LLM. Issues like "impostor syndrome" are already a problem in many professions. I don't see the situation getting anything but worse when a growing portion of what you consider "employment" is feeding and changing the diapers on a bot. I have the experience behind me to be able to look at the code an LLM generates and determine whether it's fit for purpose, or whether the model's been puffing green dragon. What somewhat scares me is the idea of "vibe coding", where people who don't really understand coding use LLMs in a form of trial and error to get a solution done. Building a prototype? Great idea. Something you're going to convince people or businesses to actually use with sensitive data, or for decisions with consequences? Bad, bad idea.

Personally, I see the value of LLM-based code generation plateauing rather quickly. It will get better, to a point, as it continues to learn from samples and corrections written and reviewed by experienced software developers. However, as GitHub fills with AI-generated code, and sites like StackOverflow die off with the new generation of developers consulting LLMs for "get this working for me" rather than "explain why this doesn't work", the overall quality of generated code will start to slip. With luck, it will be noticeable before major employers dive all-in, give up on training new developers to understand code and problem-solve, and all of us dinosaurs retire.

Until then I look forward to lucrative contracts sorting out messes that greenhorns powered by ChatGPT get themselves into. ;)

Autofac and Lazy Dependency Injection: 2025 edition

Thank you, C#! I can't believe it's been two years since I last posted about my lazy dependency implementation. Since that time there have been a few updates to the C# language, in particular around auto-properties, that have greatly simplified the use of lazy dependencies and the property overrides that simplify unit testing. I also make use of primary constructors, which are ideally suited to this pattern since, unlike regular constructor injection, the assertions happen in the property accessors rather than in a constructor.

The primary goal of this pattern is still to leverage lazy dependency injection while making it easier to swap in testing mocks. Classes like controllers can have a number of dependencies, but depending on the action and the state passed in, many of those dependencies don't actually get used in every situation. Lazy loading sees dependencies initialized only if they are needed; however, this adds a layer of abstraction when accessing a dependency, and it makes mocking the dependency out for unit tests a tad uglier.

The solution, which I call "lazy dependencies + property", mitigates both issues. The property accessor handles unwrapping the lazy proxy to expose the dependency for the class to consume, and it also allows a mock to be injected directly. Each lazy dependency in the constructor is optional. If the IoC container doesn't provide a dependency and a test does not mock a referenced dependency, the property accessor throws a for-purpose DependencyMissingException to flag that the dependency was not provided.

Updated pattern:

 using System.Diagnostics.CodeAnalysis;

 public class SomeClass(Lazy<ISomeDependency>? _lazySomeDependency = null)
 {
     // MaybeNull lets the non-nullable property sit on a null backing field
     // until first access, or until a test injects a mock via init.
     [field: MaybeNull]
     public ISomeDependency SomeDependency
     {
         // Unwrap the lazy proxy on first use; throw if neither the
         // container nor a test supplied the dependency.
         protected get => field ??= _lazySomeDependency?.Value
             ?? throw new DependencyMissingException(nameof(SomeDependency));
         init;
     }
 }

This is considerably simpler than the original implementation. We can use the primary constructor syntax since we no longer need to assert in the constructor whether a dependency was injected. Under normal circumstances all lazy dependencies will be injected by the container; asserting them falls to the property accessor. No code, save the accessor property, should attempt to reach a dependency through the lazy reference. The auto-property syntax, new to C#, gives us access to the "field" keyword. We also leverage a public init setter so that our tests can inject mocks for any dependencies they will use, while the getter remains protected (or private) for accessing the dependency within the class. The property looks for an initialized instance, then checks the lazy injected source, before raising an exception if the dependency has not been provided.
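For completeness, the for-purpose exception referenced above doesn't need to be anything fancy. A minimal sketch might look like:

 public class DependencyMissingException : Exception
 {
     public DependencyMissingException(string dependencyName)
         : base($"The dependency '{dependencyName}' was not provided by the container or a test.")
     { }
 }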

Unit tests provide a mock through the init setter rather than trying to mock a lazy dependency:

Mock<ISomeDependency> mockDependency = new();
mockDependency.Setup(x => /* set up mocked scenario */);

var classUnderTest = new SomeClass
{
    SomeDependency = mockDependency.Object
};

// ... Test behaviour, assert mocks.

In this simple example it may not look particularly effective, but in controllers that have several, maybe a dozen, dependencies, this can significantly simplify test initialization. If a test scenario is expected to touch 3 of 10 dependencies, you only need to provide mocks for those 3 rather than always mocking all 10 for every test. If the internal code is updated to touch a 4th dependency, the tests will break until they are updated with a suitable mock for the extra dependency. This allows you to mock only what you need to mock, and avoids the silent or confusing failures that arise when catch-all default mocks respond to scenarios they were never intended to handle.
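To make that concrete, here is a hypothetical sketch (the service names are invented for illustration) of a controller with two lazy dependencies, where a test touching only the order path mocks just one of them:

 public class OrdersController(
     Lazy<IOrderService>? _lazyOrderService = null,
     Lazy<IAuditLog>? _lazyAuditLog = null)
 {
     [field: MaybeNull]
     public IOrderService OrderService
     {
         protected get => field ??= _lazyOrderService?.Value
             ?? throw new DependencyMissingException(nameof(OrderService));
         init;
     }

     [field: MaybeNull]
     public IAuditLog AuditLog
     {
         protected get => field ??= _lazyAuditLog?.Value
             ?? throw new DependencyMissingException(nameof(AuditLog));
         init;
     }
 }

 // A test exercising only the order path mocks just that dependency:
 Mock<IOrderService> mockOrderService = new();
 var controller = new OrdersController { OrderService = mockOrderService.Object };
 // If the code under test later touches AuditLog, this test fails loudly with
 // DependencyMissingException until a mock is provided.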

Monday, January 23, 2023

Autofac and Lazy Dependency Injection

Autofac's support for lazy dependencies is, in a word, awesome. One issue I've always had with constructor injection has been writing unit tests. If a class has more than a couple of dependencies, you quickly run into situations where every single unit test needs to provide mocks for every dependency, even though a given test scenario only needs to configure one or two of them.

A pattern I had been using when introducing Autofac into legacy applications that weren't using dependency injection was to use Autofac's lifetime scope as a Service Locator, so that the only breaking change was injecting the Autofac container in the constructors and then using properties that resolve themselves on first access. The Service Locator could even be served by a static class if injecting it was problematic for a team, though I generally advise against that, to avoid the Service Locator being used willy-nilly everywhere, which becomes largely un-testable. What I did like about this pattern is that unit tests would always pass a mocked Service Locator that would throw on any request for a dependency, and each test was responsible for setting the dependencies it needed via their internally visible property setters. In this way a class might have six dependencies while its constructor takes just the Service Locator. A test that is expected to need two of those dependencies simply mocks those two and sets the properties, rather than injecting the two useful mocks plus four unused ones.

An example of the Service Locator design:

using Autofac;

public class SomeClass : IDisposable
{
    private readonly ILifetimeScope _scope;
    private ISomeDependency? _someDependency = null;

    public ISomeDependency SomeDependency
    {
        // Resolve on first access only; tests bypass this via the setter.
        get => _someDependency ??= _scope.Resolve<ISomeDependency>()
            ?? throw new ArgumentException("The SomeDependency dependency could not be resolved.");
        set => _someDependency = value;
    }

    public SomeClass(IContainer container)
    {
        if (container == null) throw new ArgumentNullException(nameof(container));
        _scope = container.BeginLifetimeScope();
    }

    public void Dispose()
    {
        _scope.Dispose();
    }
}


This pattern is a trade-off: it reduces the impact of refactoring code that wasn't designed around dependency injection, while introducing an IoC design that makes the code testable. The thing to watch out for is the use of the _scope lifetime scope. The only place the _scope reference should ever be used is in the dependency properties, never in methods. If I have several dependencies and write a test that will use SomeDependency, all tests construct the class under test with a common mocked container providing a mocked lifetime scope that throws on Resolve, then use the property setters to initialize mocks for just the dependencies used. If the code under test evolves to use an extra dependency, the associated tests will fail as the Service Locator highlights the request for an unexpected dependency.

The other advantage this gives you is performance, for instance with MVC controllers handling requests. A controller might have a number of actions with touch-points on several dependencies over the course of user requests, but an individual request might only "touch" one or a few of them. Since many dependencies are scoped to a request, you can reduce server load by having the container resolve dependencies only when, and if, they are needed. Lazy dependencies.

Now, I have always been interested in reproducing that ease and flexibility of testing, while ensuring that dependencies are only resolved if and when they are needed, in a proper DI implementation. Enter lazy dependencies, and a new example:

public class SomeClass
{
    private readonly Lazy<ISomeDependency>? _lazySomeDependency = null;
    private ISomeDependency? _someDependency = null;

    public ISomeDependency SomeDependency
    {
        // Check for a test-injected instance first, then the lazy source.
        get => _someDependency ??= _lazySomeDependency?.Value
            ?? throw new ArgumentException("SomeDependency dependency was not provided.");
        set => _someDependency = value;
    }

    public SomeClass(Lazy<ISomeDependency>? someDependency = null)
    {
        _lazySomeDependency = someDependency;
    }
}

This looks similar to the Service Locator example above, except it operates with a full Autofac DI integration. Autofac provides dependencies as Lazy<T> references, which we default to null, allowing tests to construct the classes under test without any dependencies. Test suites don't have a DI provider running, so they rely on the setters to initialize mocks for the dependencies a given test scenario needs. There is no need to set up Autofac to provide mocks, or to configure it at all, as there was with the Service Locator pattern.
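Wiring this up requires nothing special on the container side: Lazy<T> is one of Autofac's implicit relationship types, so registering the dependency normally is enough. A minimal sketch, with type names assumed from the example above:

 using Autofac;

 var builder = new ContainerBuilder();
 builder.RegisterType<SomeDependency>().As<ISomeDependency>().InstancePerLifetimeScope();
 builder.RegisterType<SomeClass>();
 var container = builder.Build();

 // Autofac composes the Lazy<ISomeDependency> automatically; SomeDependency
 // itself is not resolved until the property is first accessed.
 using var scope = container.BeginLifetimeScope();
 var instance = scope.Resolve<SomeClass>();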

In any case, this is something I've found to be extremely useful when setting up projects so I thought I'd throw up an example if anyone is searching for options with Autofac and lazy initialization.

Thursday, May 30, 2019

JavaScript: Crack for coders.

I've been cutting code for a long time. I was educated in Assembler, Pascal, and C/C++. I couldn't find local work doing that so I taught myself Access, Visual Basic, T-SQL, PL-SQL, Delphi, Java, Visual C++, and C# (WinForms, WebForms, MVC, Web API, WPF, and Silverlight). It wasn't until I really started to dive into JavaScript that programming started to feel "different".

With JavaScript you get this giggly-type high when you're sitting there in your editor, a dev server running on the other screen, tinkering away while the code "just works". You can incrementally build up your application with "happy little accidents", just like a Bob Ross painting. Webpack and Babel automatically wrangle the vast conflicting standards for modules, ES versions, and such into something that actually runs on most browsers. React's virtual DOM makes screen updates snappy, and MobX manages your shared state without stringing together a range of callbacks and Promises. And there is so much out there to play with and experiment on: tens of thousands of popular packages on npm waiting to be discovered.

But those highs are separated by lingering sessions testing your Google-fu, trying to find current, relevant clues on just why the hell your particular code isn't working, or how to get that one library you'd really like to use to import properly into your ES6 code without requiring you to eject your Create-React-App application. You get to grit your teeth at irritations like when someone managing moment.js decided that add(number, period) made more sense than add(period, number), while all of the examples you'd been using still had it the other way around. Individually, these problems seem trivial, but they add up quickly when you're just trying to get over that last little hurdle between here and your next high.


JavaScript development is quite literally Crack for coders.

Monday, May 20, 2019

YAGNI, KISS, and DRY. An uneasy relationship.

As a software developer, I am totally sold on S.O.L.I.D. principles when it comes to writing the best software. Developers can get pedantic about one or more of these principles, but in general following them is simple and it's pretty easy to justify their value in a code base. Where things get a bit more awkward is when discussing three other principles that like to hang out in coding circles:

YAGNI - You Ain't Gonna Need It
KISS - Keep It Stupidly Simple
DRY - Don't Repeat Yourself

YAGNI, he's the Inquisitor. In every project there are the requirements that the stakeholders want, and then there are the assumptions that developers and non-stakeholders come up with in anticipation of what they think stakeholders will want. YAGNI sees through these assumptions and expects every feature to justify its existence.

KISS is the lazy one in the relationship. KISS doesn't like things complicated, and prefers to do the simplest thing because it's less work, and when it has to come back to an area of a project later on, it wants to be sure that it's able to easily understand and pick the code back up.  KISS gets along smashingly with YAGNI because the simplest, easiest thing is not having to write anything at all.

DRY on the other hand is like the smart-assed, opinionated one in a relationship. DRY wants to make sure everything is as optimal as possible. DRY likes YAGNI, and doesn't get along with KISS and often creates friction in the relationship to push KISS away. The friction comes when KISS advocates doing the "simplest" thing and that results in duplication. DRY sees duplication and starts screaming bloody murder.

Now how do you deal with these three in a project? People have tried taking sides, asking which one trumps the others. The truth is that all three are equally important, but I feel that timing is the problem when it comes to dealing with KISS and DRY. When teams introduce DRY too early in a project, you get into fights with KISS and can end up repeating effort even when your intention was never to repeat code.

YAGNI needs to be involved in every stage of a project. Simply do not ever commit to building anything that doesn't need to be built. That's an easy one.

KISS needs to be involved in the early stages of a project or set of features. Let KISS work freely with YAGNI and aim to make code "work".  KISS helps you vet out ideas in the simplest way and ensure that the logic is easy to understand and easy to later optimize.

DRY should be introduced into the project no sooner than when features are proven and you're ready to optimize the code. DRY is an excellent voice for optimization, but I don't believe it is a good voice to listen to early in a project, because it leads to premature optimization.

Within a code base, the more code you have, the more work there is to maintain it and the more places there are for bugs to hide. With this in mind, DRY would seem to be a pretty important voice to listen to when writing code. While that is true, there is one important distinction when it comes to code duplication: code should be consolidated only where the behavior of that code is identical. Not similar, but identical. This is where DRY trips things up too early in a project. In the early stages, code is highly fluid as developers work out how best to meet requirements that are often still being fleshed out. There is often a lot of similar code that can be identified, or code that you "expect" to be commonly used, so DRY is whispering to centralize it, don't repeat yourself!

The problem is that when you listen to DRY too early, you can violate YAGNI unintentionally by composing structure you don't need yet, and make code more complex than it needs to be. If you optimize "similar" code too early, you either end up with conditional code to handle the cases where the behaviour is similar but not identical, or you work to break the functionality down so atomically that you can separate out the identical parts. (Think functional programming.) This can make code a lot more complex than it needs to be far too early in development, leading to a lot more effort as new functionality is introduced: you either struggle to fit the pre-defined pattern and its restrictions, or end up working around it entirely, violating DRY.
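A tiny illustration of the trap, with hypothetical names: two "similar" discount calculations consolidated too early sprout a flag and a branch, and changes for one caller now risk breaking the other.

 public static class DiscountCalculators
 {
     // Consolidated too early: one method, two masters, and a branch.
     public static decimal CalculateDiscount(decimal subtotal, bool isWholesale)
     {
         if (isWholesale)
             return subtotal >= 1000m ? subtotal * 0.15m : subtotal * 0.10m;
         return subtotal >= 100m ? subtotal * 0.05m : 0m;
     }

     // The KISS alternative: two simple methods that are free to diverge.
     public static decimal RetailDiscount(decimal subtotal)
         => subtotal >= 100m ? subtotal * 0.05m : 0m;

     public static decimal WholesaleDiscount(decimal subtotal)
         => subtotal >= 1000m ? subtotal * 0.15m : subtotal * 0.10m;
 }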

Another risk of listening to DRY too early is when it steers development heavily towards inheritance rather than composition. Often I've seen projects that focus too heavily and too early on DRY start to architect systems around multiple levels of inheritance and generics to try to produce minimalist code. The code invariably proves complex and difficult to work with. Features become quite expensive to build because new use cases don't "fit" the preconceptions around which the shared code was written to be reused. This leads to complex re-write efforts or work-arounds.

Some people argue that we should listen to DRY over KISS because DRY enforces the Single Responsibility Principle from S.O.L.I.D. I disagree in many situations, because DRY can violate SRP where consolidated code now has two or more reasons to change. Take generics, for example. A generic class should represent identical functionality across multiple types. That is fine; however, you can commonly find a situation where a new type could benefit from the generic except... And that "except" now poses a big problem. A generic class or method violates SRP because its reason for change is governed by every class that leverages it. So long as the needs of those classes are identical we can ignore the potential implications for SRP, but as soon as those needs diverge we either violate SRP, violate DRY with nearly-duplicate code, or further complicate the code trying to keep every principle happy. Functional code conforms to SRP because it does one thing and does it well. However, when you aren't working in a purely functional domain, consolidated code serving parts of multiple business cases now potentially has more than one reason to change. Clinging to DRY can, and will, make your development more costly as you struggle to make new requirements conform.
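As a sketch of that "except", with hypothetical names: a generic repository works while every entity saves identically, and starts branching on its callers the moment one type diverges.

 using System;
 using System.Collections.Generic;

 // Fine while Save behaviour is identical for every T.
 public class Repository<T> where T : class
 {
     private readonly List<T> _store = new();
     public void Save(T entity) => _store.Add(entity);
 }

 public interface IAuditable
 {
     DateTime ModifiedUtc { get; set; }
 }

 // Then one type needs special handling. The "shared" code now changes
 // whenever any caller's needs change: its reason for change is governed
 // by every T it serves.
 public class AuditedRepository<T> where T : class
 {
     private readonly List<T> _store = new();
     public void Save(T entity)
     {
         if (entity is IAuditable auditable)          // the "except..."
             auditable.ModifiedUtc = DateTime.UtcNow;
         _store.Add(entity);
     }
 }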

Personally, I listen to YAGNI and KISS in the early stages of developing features within a project, then start listening to DRY once the features are reasonably established. DRY and KISS are always going to butt heads, and I tend to take KISS's side in any argument if I feel like that area of the code may continue to get attention or that DRY will risk making the code too complex to conveniently improve or change down the track.  I'll choose simple code that serves one purpose and may be similar to other code serving another purpose over complex code that satisfies an urge not to see duplication, but serves multiple interests that can often diverge.


Wednesday, April 10, 2019

The stale danish


One Monday, while walking to the office, you pass a little bakery with a window display. A beautiful-looking danish catches your eye. The crust looks so light and fluffy, with a generous dollop of jam in the middle and a drizzle of icing sugar. So tempting, but you promised yourself you'd cut back on the snacks, and you continue on your way to the office.

Each morning, you walk past that same bakery and see a danish sitting there, just beckoning you... Finally, Friday morning comes around and you see that gorgeous danish waiting there. "I can't take it any more!" you scream to yourself, and you give in, walk into the bakery, and buy the danish. The guilt melts away as you hold that beautiful danish, and you take a bite... It's long since gone stale, having sat there since Monday.

The stale danish is how I view the Internet, and the marvelous source of information that we, as software developers, have come to rely on. When a technology is new, the Internet provides a flurry of information and experiences that the global software development community collectively accumulates. However, much of the time that information isn't versioned, and often sources like blog posts and walk-throughs aren't even dated. As the technology matures, finding the information you need to solve a particular problem becomes a treasure hunt for a fresh danish in a sea of stale ones. Working with JavaScript frameworks like React and its supporting libraries like Redux in particular has proven extremely challenging time and time again. Every undated bit of information needs to be taken with a grain of salt, and it's compounded by the fact that with JavaScript there are always a number of variations of what should be relatively standard stuff, all conforming to different interpretations of "best practices".

Even normal consumers of information are getting tricked by the stale danishes floating around the web. Technologies like Google Street View are marvelous, but they require constant updates because, as the world around us changes, the maps and camera images quickly become out of date. Using these services to navigate by landmarks can be fraught with danger when you're looking for an intersection after a car dealership or restaurant that relocated six months ago. At least Google had the common sense to put dates on their imagery.

The stale danish is the one key reason that I would much, much rather work in the back end with technologies like .NET, Entity Framework, and SQL than in the front end with JavaScript libraries. I love working in the front end, solving usability issues and presenting users with useful views of the data and features they need. I enjoy MVC and WPF, and enjoyed Silverlight, largely because the pace of change was a lot slower and there were a handful of sources for directly relevant information about the technologies and how to use them effectively.

JavaScript, and open source in general, is far more volatile. While you may argue this makes it more responsive, so that problems get solved more quickly, the real trouble is that everyone's perception of a problem is different, so there are a lot of "solutions" out there screaming on the web for validation. Projects need a completion date, so as a software developer you do need to choose from the options available at the time and commit to them. Having relevant and accurate information at hand through the life of the project is essential for when challenges do crop up. Ripping out X to replace it with Y isn't an option, and often the breaking changes from X v0.8 to X v0.9 rule out upgrading, so when searching for possible answers and work-arounds, it would be wonderful if more articles and posts were versioned, or at least had a "Baked On" date listed.