Wednesday, October 12, 2011

Refreshing Settings and dll.config Files

Having worked on plugin-based architectures and projects involving a number of assemblies, I've always found a bit of a bad smell in .Net's implementation of configuration settings files, in particular the use of Settings.Default. I really like how Settings.Default wraps configuration settings, presenting them as strongly typed values. What I hate about it is how it uses the DefaultSettingValueAttribute to specify the default value in the event that a setting cannot be loaded. Why is Settings.Default a problem?

1) Because it hides problems with configuration. Essentially, if a configuration setting isn't specified, it will silently use the attributed value by default. This can lead to spitting venom at the computer as you try to spot a typo in the configuration XML six months after the fact, when that mail server name changes and you realize the application has been using the attributed value all along and absolutely refuses to accept your configuration setting.

2) Because you may want to change configuration settings while a service is running and have the service pick up those changes. Unfortunately you're out of luck: while user settings can be reloaded, application settings cannot. The service needs to be stopped and restarted to accept the new configuration settings.

3) Because of how it treats support assemblies. It might be nice and sensible to have each assembly's settings located in its own .dll.config file (which Visual Studio produces for you), but by default Settings.Default only looks at settings in the calling application's .exe.config file, even if you place the .dll.config file in the runtime folder. This means if you have 30 configuration settings across six assemblies, you're manually copying six configuration sections and 30 settings into the app's .exe.config file. You may be perfectly able to build an NAnt task to do this, but it still leaves you with a painfully large and convoluted configuration mess in the .exe.config file.

Now, all of these problems can be avoided if you choose to load and parse configuration settings yourself using ConfigurationManager and the like. However, you lose the nice encapsulation that Settings does give you. There may also be solutions on the web for one or more of these issues, but I was never able to find one.

So I set about to change that.

Source Code

SettingsExtension.cs contains an extension method you can use to refresh any Settings instance from its assembly's .config file. This means if you have a DLL containing a Settings class and associated app.config section, and you deploy the .dll.config file, you can call the Refresh() method at any time to retrieve the values from the .dll.config file. It works for .exe.config files as well; however, calling it on a DLL's Settings will not pick up settings you've modified in the .exe.config file. The DLL only discovers its own configuration file.

What this means is that if you have a project with an EXE and any number of supporting DLLs, and some of those DLLs want to use configuration settings, you can copy the .dll.config files into your deployment, and as long as each DLL makes a call to Settings.Default.Refresh() (e.g. from a static constructor) you can ensure that the DLL will always use the values from its .dll.config file.
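
As a sketch, with the designer-generated Settings class and the SettingsExtensions namespace imported, the hook is tiny:

// In a partial class file alongside Settings.Designer.cs in the DLL.
internal sealed partial class Settings
{
    static Settings()
    {
        // Re-read this assembly's .dll.config before the first use;
        // Refresh() is the extension method from SettingsExtension.cs.
        Default.Refresh();
    }
}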

Additionally, if you want to get creative, you can set up a file watcher on the .exe.config file (or .dll.config files) and, when it detects a change, call Refresh() on the Settings.Default instance to reload those changes.
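
A rough sketch of that idea (the path handling is illustrative; keep a reference to the watcher so it isn't collected):

// Watch this assembly's deployed .config file and re-apply settings on change.
var configPath = Assembly.GetExecutingAssembly().Location + ".config";
var watcher = new FileSystemWatcher(
    Path.GetDirectoryName(configPath), Path.GetFileName(configPath))
{
    NotifyFilter = NotifyFilters.LastWrite,
    EnableRaisingEvents = true
};
watcher.Changed += (sender, e) => Settings.Default.Refresh();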

Looking through the code, it should be pretty easy to see what it's doing. It locates the Settings class's assembly's config file, the applicationSettings section group, and the section for the settings, then uses reflection to go through all known settings properties and update them with whatever is found in the file.
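
Heavily abridged, the heart of it is something like this (the real method also reports progress and errors through the listener, and handles types that Convert.ChangeType can't):

public static class SettingsExtensions
{
    public static void Refresh(this ApplicationSettingsBase settings)
    {
        // Open the config file belonging to the Settings class's own
        // assembly, e.g. MyLibrary.dll.config sitting beside MyLibrary.dll.
        var config = ConfigurationManager.OpenExeConfiguration(
            settings.GetType().Assembly.Location);
        var group = config.GetSectionGroup("applicationSettings");
        var section = (ClientSettingsSection)group.Sections[settings.GetType().FullName];

        foreach (SettingsProperty property in settings.Properties)
        {
            var element = section.Settings.Get(property.Name);
            if (element == null) continue;
            var raw = element.Value.ValueXml.InnerText;

            // Touch the property so its value is loaded, then overwrite it.
            // Writing via PropertyValues sidesteps the read-only
            // application-scoped properties.
            var unused = settings[property.Name];
            settings.PropertyValues[property.Name].PropertyValue =
                Convert.ChangeType(raw, property.PropertyType);
        }
    }
}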

An added feature is support for an optional refresh listener, through which the refresh method can report back status on what it is processing and any errors it encounters. Exceptions are non-fatal; the tool reports them and continues. Each report includes a message and a TraceLevel as an indication of what went wrong. Simply register a method using the SettingsExtensions.InitializeListener() method in order to receive messages.
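
Registering one might look like this (the delegate signature here is a guess from the description; check SettingsExtensions.cs for the exact shape):

// Route refresh diagnostics to the trace output.
SettingsExtensions.InitializeListener((message, level) =>
    Trace.WriteLine(string.Format("Settings refresh [{0}]: {1}", level, message)));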

I've tested the implementation with most Settings-supported data types but there might be a few that need some special parsing.

Wednesday, October 5, 2011

Something's fishy with Google...

It might be time to start lining my room with aluminium foil, but something very strange has been happening whenever I try to install Google software on my machine. This is a fairly new machine running Windows 7 Pro. I'm anything but an MS fanboi, but being an MS-technology-stack developer has paid the bills quite nicely over the years, so who am I to start bitching. One thing I do not like is IE, so as soon as the system was up I installed Firefox and uninstalled IE. This led to some rather annoying but laughable problems, like applications that cannot figure out I don't have IE and try launching HTML files in MS Word.

A while back I had a play with Chrome on the office laptop and decided to try it at home. All went well until I started noticing that my Firefox browser would hang, and then Windows 7 would completely hang, just out of the blue, while browsing (not using Chrome at the time). This started to get really annoying because Task Manager etc. didn't even respond. I started digging around and noticed a few other people having trouble with Chrome and Firefox playing nice together on Windows 7 / Vista. I uninstalled Chrome and the problem vanished. For the record, I do not use any Google apps such as the toolbar either. I passed it off as some weird glitch between the two browsers, with Win 7 falling over after failing to resolve their differences. Disappointing to say the least, but I prefer Firefox as a browser to Chrome.

However, recently I decided to dust off my old Java books, install the latest Eclipse, and have a peek at the Android SDK. All was fine until I launched the Android SDK Manager, then decided to do a bit of browsing while it was downloading my selected SDK versions and such. Within 10 minutes Firefox had hung again and the system went completely dead!

So really now, Google, what the Farq are you installing with your software that is getting completely shirty with Firefox and/or Windows 7?! My guess is that the SDK downloader uses netcode shared with Chrome that is in some way incompatible with whatever Firefox is using. Or is it something more sinister? While I have found references to other people having similar problems, I haven't seen anything that remotely looks like a solution, or an acknowledgement that there is a problem.

The question is, if I sacrifice my (relatively minor) preference for Firefox in favour of Chrome, will I be able to browse and utilize the Android SDK without having my machine lock up? And more importantly, will I wake up the next morning without finding a completely new preference shift from the soda I drink to the car I drive due to some form of wifi-induced subliminal suggestion? :)

Sunday, September 11, 2011

A recent example of the value and purpose of unit tests.

Just last week I was approached by our BA (or BA-ish-roled individual), who was tasked with documenting our testing process, and in particular our unit testing strategy. He wanted to know what our unit tests did, and how he could express their value to the client. I had just finished investigating an interesting issue that surfaced as I wrote unit tests to exercise a recent feature, and it made the perfect example of what the unit test suite did, and how it added value to the project.

A bit of background on the issue:
The business requirements were that a service would be contacted to retrieve updated events for an order. All events that weren't already associated with the order would be associated with it, and the application would report on any newly associated events.
*Events have no distinct key from the service; they are matched up by date/time and event code.

The code essentially did the following (returning an IEnumerable):


var recentEvents = EventService.GetEventsForOrder(order.Key);
var newEvents = recentEvents.Except(order.Events);
foreach (var newEvent in newEvents)
{
    order.Events.Add(newEvent);
}
return newEvents;

Based on the requirements I scoped out several tests. Ensure that:
1) Given the event service returns no events, no new events are added or returned.
2) Given the event service returns new events and the order contains no events, all events are added and returned.
3) Given the event service returns new events and the order contains one with matching code and date/time, only the new unmatched events are added to the order and returned.
4) Given the event service returns an event with matching code but different date/time, the new event is added to the order, without replacing the existing event, and returned.


Adding the 2nd test, I was surprised to find it failed. The code is de-coupled and the EventService is mocked out to return two events. There were two main asserts: one to assert that the order contained the new events, and a second that the return value contained the new events. The order assert showed it contained the two new events; however, the returned set had a count of 0. I'm actually pleased when a test does fail, but this was a momentary "WTF?" followed shortly by a "well, duh!". I was returning the IEnumerable from the Except() expression, but I had added the items to the list driving the Except; because the expression is evaluated lazily, any further iteration of the IEnumerable would see no items, since the matches were now in the list.
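
For illustration, the second test looked roughly like this (Moq and NUnit here; the Order/OrderEvent types and the updater are approximations of the real names):

[Test]
public void NewEventsAndEmptyOrder_AllEventsAddedAndReturned()
{
    var order = new Order();
    var serviceEvents = new List<OrderEvent>
    {
        new OrderEvent { Code = "PICKED", Occurred = DateTime.Today },
        new OrderEvent { Code = "SHIPPED", Occurred = DateTime.Today.AddDays(1) }
    };
    var eventService = new Mock<IEventService>();
    eventService.Setup(s => s.GetEventsForOrder(order.Key)).Returns(serviceEvents);

    var returned = new OrderEventUpdater(eventService.Object).UpdateEvents(order);

    Assert.AreEqual(2, order.Events.Count); // passed
    Assert.AreEqual(2, returned.Count());   // failed: the enumerable came back empty
}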

The issue was easy enough to fix with a ToList() call, and I felt it warranted a simple comment to explain why the fixed code was done that way, in case someone later went in and tried re-factoring it back to just use the enumerable.
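
The corrected block:

var recentEvents = EventService.GetEventsForOrder(order.Key);
// Materialize before mutating order.Events: Except() is lazily evaluated,
// so adding the matches to the driving list first would leave a later
// enumeration of newEvents empty.
var newEvents = recentEvents.Except(order.Events).ToList();
foreach (var newEvent in newEvents)
{
    order.Events.Add(newEvent);
}
return newEvents;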

This gave me a perfect situation to demonstrate to the BA exactly how the unit tests reflected requirements within the application, and how they served to guard those requirements against unexpected future changes. I integrated the two tests with working code to show how the CI environment reported that all tests passed, then re-introduced the buggy assumption to show how the tests picked up the error.

The other interesting thing to note was that I also showed the BA our NCover results, and was a bit surprised to see that with just the first two unit tests, that block of logic was reporting a test coverage of 100%. However, the "touch" count was showing a lot of 1s, indicating that a single test was touching many parts of the code only once. This is a warning flag to me that the code really isn't being exercised. If I were solely concerned with the coverage percentage I could have left the tests at that, but I knew the other two scenarios were not represented, so I demonstrated how the stats changed by adding them. The test coverage remained 100%, but the touch counts increased to show a minimum of 3 and an average touch count of 6 to 9. It's not a perfectly reliable indication that the coverage really is complete, but by thinking through the scenarios to exercise the code, I'm quite confident that the code is reasonably guarded against the unintentional introduction of bugs.

Wednesday, September 7, 2011

Unit Tests Aren't Expensive

A short time after developers start adopting unit tests (by this I mean committing to unit tests and aiming for good test coverage), grumblings will start. Someone will want to make a design change, and in doing so they break a large number of existing tests. All of a sudden it sinks in that, now that there are unit tests in addition to the existing production code, changes like this look twice as big or bigger. This can be a crisis point for a project. Having unit tests appears to mean that code is less flexible and change is significantly more expensive. That attitude couldn't be farther from the truth. Unit tests aren't expensive. Change is expensive.

The first thing we need to look at is why the unit tests are breaking. Often when developers start writing tests, the tests can be overly brittle merely due to inexperience. In these cases it is a learning experience, and hopefully the fixed-up tests will be a little more robust than they were before. Still, even with well-sculpted unit tests guarding the code's behaviour, developers will find reasons to justify changing that behaviour. The presence of unit tests, now broken unit tests, doesn't make that change expensive; it just makes the true cost of that change visible. Making behaviour changes to an existing code base (one that has already committed to some form of acceptance testing, and is likely already out in production) is a very costly exercise. Breaking unit tests demonstrate the cost of that change up-front. Without the unit tests, the deposit you pay up front for the change is smaller, but the rest of the cost is charged with interest, in the form of risk and technical debt. Unit tests show you exactly what is dependent on the behaviour as it was when you first committed to it. You have a clear map not only to fix the tests, but also to fix the code documented by the tests to suit the desired new behaviour. Without that map you're driving blind, and only regression tests are going to find the spots you miss if you're careful, or your customers will find them if you're not.

So, how do you keep the cost of change under control? An obvious answer would be to identify all change before coding starts and allow no change until coding finishes. This would be a motto for BDUF (Big Design Up Front) advocates, but in the realm of typical business software it really doesn't work. By the time coding starts and the customer actually sees implementation, they are already thinking about changes. The Agile solution is to implement only the minimum functionality you need to meet requirements as you go. Often the biggest contributor to these kinds of large behaviour changes is development teams trying to save future time by implementing code that isn't necessary now. Classic examples include over-engineering solutions, scope creep, and committing to a particular technology or framework on the basis that it may solve problems you might encounter, just not right now. Another way to avoid large change costs is to embrace constant re-factoring. Get in the habit of including small improvements in your code as you implement new features. Introduce base classes for common code; delete stale code with a vengeance. Any improvement that reduces the overall size of the code base should be a very high priority. Don't leave things for later, because that is debt that starts accumulating interest as new functionality comes to depend on it, or reproduces it for the sake of consistency.

On a side note:
Writing software can be like a game of chess. You don't walk up to a match with a memorized set of moves that will guarantee victory; this isn't tic-tac-toe. You've got to be prepared to adapt to the opponent that is your customer, your manager, your budget; and you've got to be prepared to sacrifice pieces in the hopes of winning the game. A developer with over-engineered code is like a player that values each piece on the board too highly. A lot of time and sweat has been spent on the code, and you cannot justify sacrificing it when the demands of the game deviate from the original plan. The result is putting even more sweat and time into the solution, and the resulting choice of moves doesn't win you the game; it loses it, and the customer walks away.

Sunday, August 28, 2011

How XP can help your solo projects too.

Something has irked me for a while. Solo projects have a rather unique set of problems, at least from my perspective, compared with client projects:

Time - Solo projects are developed in the gaps you can scrape between the hours you work for a client and personal-life distractions. When you get married and have children this becomes quite difficult, though I had found plenty of other distractions before that happened. :)

Scope - On client projects it's easy to fight against scope creep, and easy to spot and discuss "kitchen sink" features or architecture choices being considered today for some mythical benefit tomorrow. Generally I can be quite "lazy" in that I don't want to do more work than I have to, and it saves me headaches in the future. On solo projects I've found it quite a bit harder to fight scope creep, as there are no hard targets for time or feature list.

Requirements - On client projects, you either don't have enough requirements but have someone to moan at about getting more detail, or you have people attempting to provide too much in the way of requirements up front. With solo projects I am responsible for the requirements, and deciding what will be v1.0 and sticking with that (or at least challenging my wayward heart) is quite difficult. I also HATE writing down requirements. I can't get through more than a few features before I catch myself opening Visual Studio!

One thing that sold me on XP was the measures it took on quality. Not only in terms of software quality, with unit testing and pair programming, but in terms of systems quality, with user stories, planning games, and continuous integration. It made dealing with customers much easier. It let the production of features begin much sooner, increased the value of what was being developed, and made cost transparent to the customer. The customer can see the velocity of the project and the value added for extra time invested in getting features just right. These processes have been really effective in dealing with cases where the customer was really "willy-nilly" with their requirements. Then I realized: *I'm* that willy-nilly customer! Why can't I apply XP principles to my own projects?

So the first thing I did was stop worrying about requirements. For some reason I was trying to capture more detail on paper for my own stuff than I try to initially capture from clients for their features. I switched instead to point-form lists, then expanded the most important items into user stories and tasks as I went. I am the customer, or at least the BA, so I'm a perfect XP customer because I'm accessible 24/7. I also try to distinctly switch hats from developer to customer. (If I don't end up bi-polar by the end of this, both of me will be surprised.) As a customer I let myself loose with the "wouldn't this be cool", but NEVER with the computer running. That stuff goes down on paper. As the developer I try to be as lazy as I can. The main change to the process was devoting more page area to notes as I progressed. I'm working 1-2 hours at a time if I'm lucky, and maybe 2-3 days in a row, so I try to get blocks of work done while noting down what I had in mind for the next blocks.

I'm already a very strong supporter of TDD, whether test-first or test-second. My rule for solo projects is that the code for a new task does not get written until the previous task is unit tested. As I'll be working on these projects for some time, and hopefully plan to get other developers on board with them in the future, unit tests are crucial. The code must always build, run, and pass the test suite before I finish for the day. (That one can irk the wife! :)

It's still early days for applying XP to my current project stack, but it has been quite successful so far, if anything for keeping me more focussed on getting chunks of value-added work done. Hopefully if I can keep this up for a month I can work out a system to keep the momentum going without getting caught up. At that point I can bring in at least one other developer to contribute to the projects without wasting their time.

Sunday, August 21, 2011

Pair Programming.

I am an XP (Extreme Programming) advocate. I've used it as a whole on one successful project, and I try to bring elements of it to any client I work for. Pair programming has to be one of the toughest elements to sell, though admittedly I don't pitch it as the most valuable one. I'm sure many XP advocates would cry foul, as pair programming is a cornerstone, if not the foundation, of XP. Every other element could easily be discarded by a developer working alone; pair programming helps reinforce that the other elements are followed. I certainly do not disagree.

However, pair programming is the hardest element to get in place. Most clients have existing development teams that have never heard of pair programming, and development environments that aren't set up for it. On the project where I started with XP, this second point was quite an obstacle. We had comfortable cubicle environments, but not ones shaped to fit two chairs with people working side-by-side. Fortunately it was a big office, so we confiscated a large meeting room and arranged the tables in a large rectangle so that four pairs of developers could sit side-by-side. This went on for about three months, but the company wanted their meeting room back, and other teams were also eyeing our original cubicles.

The solution we came up with was pair analysis plus task swapping. Each developer was paired with another, and both selected a set of stories for the iteration. The pair would sit together to scope out the tasks for all of their combined stories and discuss the approach. The actual development was then done individually. If during implementation a developer thought a deviation was needed, they discussed it with their pair. As a task on a story was completed, it was handed over to the other developer. Each developer would not only review the other's work, but see if there were ways to improve it, or look for other possible scenarios that hadn't been thought of, discussing together as necessary before they committed the work. This was a little painful back when working with a repository that did not support branches; essentially developers had shares to their development folders that their pair would open to review work. Done again today with a branching repository, each task would be developed against a story branch.

From a technical perspective, how this works:

Developers A & B select one story each (typically in an iteration they would choose 1-3 each). The development is done on branches A & B respectively. When developer A completes his work for a task, he pings developer B for a review. Hopefully B will be finishing up a task pretty soon as well, but while waiting for something to review, developer A can continue with tasks in an unrelated section of code. Developer B reviews A's changes on branch A, while A reviews B's changes on branch B. When both are satisfied with the changes, developer A merges the changes from branch B and performs an integration confirmation, then developer B merges from branch A and performs the same. After the integrations, each developer resumes working in their original branch until their story is complete. A branch only lives as long as a story. If developer B finishes all tasks of their story, developer A will have merged all changes into trunk, so developer B will confirm the end of the story and terminate the branch. A new branch will be taken off trunk to start the next story.

Physically, during reviews there is a lot of chatter between the two developers, and they'll often be at each other's desks when going over code. Sometimes one developer will find an optimization that the other hadn't thought of. There are two options available: either he can pass the task back to the original developer, or he can take the task on himself, and the other developer can start working on a task on the alternate story he just finished reviewing (swapping stories/branches). You can even consider encouraging developers to swap branches/stories every other task or so.

This approach has trade-offs against pure pair programming. In situations where a lot of re-factoring is found, such as when you're pairing experienced developers with less experienced developers, pair programming would be more efficient: a pair working together will spot these optimizations as they go, whereas this approach leads to work getting re-factored during review. However, the efficiency hit should shrink greatly as the less experienced developer's experience, and exposure to reviewing the other developer's work, increases. The swapping is done at task frequency, not story frequency, so the impact of this re-factoring should stay quite small. The advantage of this approach is that it can easily be applied in physical environments that make pairing difficult, while maintaining most of the benefits of pair programming: knowledge sharing, near real-time code review, and an emphasis on adhering to the rest of the principles. It's also a good lead-in to pair programming for developers who find the concept a bit alien. Hopefully it fosters an increasing amount of communication between team members, so much so that they find the time apart is the wasteful part of the job and end up pushing their desks together themselves. :)

Business Analysts in the mist.

One thing I've noticed since moving to Australia is the lack of business analysts within organizations I've worked at or worked for. Occasionally a client will have someone whose title is BA, but the normal response I get when asking if they have a business analyst is either "No" or "Not now, but we plan to hire one." In Canada this would send little red flags flying, but it seems to be par for the course, at least here in Brisbane. What really sends the alarm bells clanging is when I ask, "Who defines the requirements then?" If the answer isn't a BA or a client, then prepare for pain. Usually the answer is either "the developers/lead developer" or "sales."

Developers generally make for very poor analysts. Developers are technical; they don't grok business process, only software process. A developer can analyse how something should be done, but not what should be done. Salespeople are often even worse. Salespeople only worry about signing on new customers or upgrading existing ones. They have an excellent perspective on the extreme high level of what should be done, but they don't understand, from either a business perspective or a technical perspective, how it could be done.

The best person to define requirements is the client. If you're fortunate enough to be using a methodology like Extreme Programming and the client has someone valuable embedded in your team, then there's no real need for a BA. The next best thing, however, is a dedicated BA. This gets back to businesses that have someone whose title is BA but who isn't a BA. An excellent example of this was a large government organization client. When I asked if they had a business analyst, their response was that they had a whole team of business analysts! What they said was true: it was actually a small department of about eight BAs under the same director, but not actually part of the software development department. Their idea of a BA's role was to go and meet with the client, understand their business processes, write it all up in a document, and hand it off to the software developers to build, washing their hands of it. This meant the developers had a document some months after a project had started, and if there were issues or clarifications needed, well, too bad: lodge a request with the BA department to get the documentation adjusted. Often the BA that did the original work was tied up in a new project, so you'd get a new BA who had no idea about the project. Of course, this model made perfect sense to them. They needed to have a billable block of time to charge back to the client, and once that was done, they needed to be charging other clients. This was not only frustrating from the software development side of things, it drove the clients nuts. (Having to explain the same thing to two or more BAs, and being expected to *pay* for the new BA to get up to speed with the project.)

Most often, what businesses call a BA is essentially nothing more than a clerk: write down requirements so that we effectively have a contract we can associate a dollar figure with and get a sign-off on. But a BA should be so much more than that. On Canadian teams where I've worked with properly embedded BAs, the BA was effectively a conduit to the client. Even within an Extreme Programming project where the client was in another province, the BA proxied for the client when the client couldn't send someone to our office. If we weren't sure about something, we asked the BA. If they weren't sure, it was their job to get in touch with the client and sort it out. They had the business knowledge, and were abreast of the technical details of how the software was being implemented. They were instrumental in giving initial feedback for UAT phases. In short, they started the project, and they finished the project. In XP terms, the BA was the tracker and the client.

So if anyone around Brisbane works for a client/company that has a BA similar to what I describe above, count yourself as lucky and let me know. I'd like to get my picture taken with them because I'm thinking they must be rarer here than dropbears. :)

Wednesday, August 17, 2011

Business software should advise

Business software should be an advisor to users, not a dictator above them. Business software boils down to one thing: business rules. How business rules are implemented in software is just as important as the rules themselves. Some applications seek to "enforce" business rules by restricting behaviour until certain information is provided or a sequence of steps has been followed. There are definitely cases where this is warranted, such as enforcing authentication and authorization for features. But when applied to business logic, this kind of enforcement leads to inflexibility and, in some cases, very costly issues.

People generally like dictators at first. Life is pain; they need someone to stimulate the economy and get the trains running on time. When a software system is first designed, the idea of enforcing rules to save time and minimize mistakes is certainly attractive. Unfortunately, software has to evolve as business requirements evolve, and before you know it your wonderful software application has divided Poland with Microsoft Office and invaded Czechoslovakia. All you wanted was a system that would bring efficiency back into your organization, but pretty soon you have a behemoth costing you hours of time and stacks of bug reports, and your business is failing to serve its customer base, which is costing you customers.

Enforcing business rules restricts flexibility. Rather than designing software to be an enforcer, design it solely to be a time saver. Ensure that the only mandatory fields are things that ARE mandatory, and if the system can default a bunch of other optional fields, then fine. Someone can always change the values later. Also accept that a certain amount of business logic is best left in people's heads and hearts. Sure, you could define rules, even strive to make them configurable, but in the end keeping the most flexible business rules outside of software is sometimes the best option.

A perfect example of this came up today when one developer was querying another about a legacy manufacturing system. The question was: when orders are received, different products take different amounts of time to manufacture, so how does the factory worker know when they need to manufacture each product to get it done by the dispatch date? The answer was, "The floor supervisor decides what to manufacture to ensure everything is done on time," based on a report listing the various orders and their respective dispatch due dates. It took a few rounds of questions for this fact to sink in. The software system didn't tell them when to manufacture the product; it simply told them what needed to be manufactured and by when, and a person determined what should be done first.

There is an assortment of rules that govern when products get manufactured, and a *lot* of environmental variables involved. I'm sure this developer's first thoughts were that the rules could be codified so that the software system could calculate and dispatch work to the factory floor, ensuring products are manufactured by their dispatch date. This would be more efficient, and could probably mean they could accept more work or run with fewer staff. But the problem is that you cannot hope to codify *all* the variables that factor into the decisions to get work done. Staff sick or on vacation? Whups, need a rostering component. Machine break-downs or servicing? Ah, incident tracking. Last-minute order changes, cancellations, or changes in priorities. Stock shortages or quality issues. Issues that crop up that have never happened before. Machines follow very clear and concise rules very effectively, but they cannot adapt to unknown variables like a human being can.

Leaving a good chunk of the business rules for producing the product in a person's head means that the actual mechanical work of getting product produced is completely dynamic. The machine simply advises what needs to be done, makes suggestions based on information it can compile, and records the results of the production. If the machine goes down, the data can still be queried and the product produced. The machine is not relied upon; it can be updated once it sorts itself out.

Friday, July 8, 2011

Pet Peeve: Misuse of KeyValuePair

This is one that irks me when I come across it.

public IList<KeyValuePair<bool, string>> ProcessRecords(IList<MyRecord> records)
{
    var results = new List<KeyValuePair<bool, string>>();
    foreach (var record in records)
    {
        // Do some processing that determines success...
        if (success)
            results.Add(new KeyValuePair<bool, string>(true, "Message indicating record was processed successfully."));
        else
            results.Add(new KeyValuePair<bool, string>(false, "Message indicating record was not processed successfully."));
    }
    return results;
}

The above is just a pseudo-example, similar to situations I've come across and even to some examples on the web of how you can use KeyValuePair; one even nested a KeyValuePair of KeyValuePairs to return a bastardization of triplets. *shudder* The alternative to KeyValuePair would be to create a new class which, by all rights, would be identical to KeyValuePair. So why not just use KeyValuePair?

Because it is misleading. It's no different to writing a class to represent a tax invoice and naming it "Order", or "Thing" for that matter. KeyValuePair is meant to store a key, as in a unique value, against a value. If your method is designed to return a unique list of keys with respective values, then by all means use KeyValuePair. But if you're using it to return arbitrary pairs of values, then for clarity just create a Pair class instead.

The problem with returning KeyValuePairs is that, looking at that return type, you would expect the data to be suited to being placed in a Dictionary. If you're returning arbitrary pairs of values, you are misleading other developers, all for the sake of being too lazy to define a simple generic class.
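
Defining one takes all of a few seconds (a minimal sketch):

// A simple generic pair that, unlike KeyValuePair, makes no claim
// that the first value is a unique key.
public class Pair<TFirst, TSecond>
{
    public TFirst First { get; private set; }
    public TSecond Second { get; private set; }

    public Pair(TFirst first, TSecond second)
    {
        First = first;
        Second = second;
    }
}

The earlier method then returns IList<Pair<bool, string>>, which promises nothing it can't deliver.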

Friday, April 22, 2011

Getting .Net Property Names without Magic Strings

This is something that has piqued my interest every so often ever since I started truly adopting Agile development practices and re-factoring code with abandon, meaning properties and methods can be added, removed, and renamed at any point in the life of a project. There are cases, in debug messages, log entries, reflection lookups, or argument-related exceptions, where I want to reference a property name, and this traditionally meant a magic string appearing in the code.

A classic example of needing property names is with WPF binding and PropertyChanged events. Your viewmodels may be listening for property changes on bound domain objects in order to perform actions or update calculated values. Take for example:
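
Something along these lines (illustrative; RecalculateSchedule is a stand-in for whatever the viewmodel updates):

private void InterestRateOnPropertyChanged(object sender, PropertyChangedEventArgs e)
{
    // Magic strings: rename Rate or EffectiveDate and this silently breaks.
    if (e.PropertyName == "Rate" || e.PropertyName == "EffectiveDate")
        RecalculateSchedule();
}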


The problem here is that if properties within InterestRate (Delta, Rate, and EffectiveDate) are renamed, the above code will stop working as expected. Effective unit tests should help guard against such behaviour changes, but it would be nice to avoid having a hard-coded string for the property name.

Enter the PropertyName method. I had come across a solution a while ago on Clinton's Blog using a static method to extract property names. It worked well enough, but it was still a bit clumsy. What I ended up with was:
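
In essence (a sketch of the shape):

public static class GeneralToolbox
{
    // Usage: GeneralToolbox.PropertyName((InterestRate x) => x.Rate) returns "Rate".
    public static string PropertyName<T, TProperty>(Expression<Func<T, TProperty>> expression)
    {
        var member = expression.Body as MemberExpression;
        if (member == null)
            throw new ArgumentException("Expression must be a property access.", "expression");
        return member.Member.Name;
    }
}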



This works well enough, but I didn't really like having to explicitly declare the type (in the above example, InterestRate x) in the parameter expression. Lately I got to thinking: why couldn't this functionality be adapted into an extension method?
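
A sketch of the extension method (same body; the instance supplies the type for inference):

public static class PropertyNameExtensions
{
    // Usage: interestRate.PropertyName(x => x.Rate) returns "Rate".
    public static string PropertyName<T, TProperty>(this T source,
        Expression<Func<T, TProperty>> expression)
    {
        var member = expression.Body as MemberExpression;
        if (member == null)
            throw new ArgumentException("Expression must be a property access.", "expression");
        return member.Member.Name;
    }
}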


Now the calling code looks like:
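
Along these lines (interestRate being the bound domain object; RecalculateSchedule a stand-in):

private void InterestRateOnPropertyChanged(object sender, PropertyChangedEventArgs e)
{
    // No magic strings: a manual rename now surfaces as a compile error,
    // and refactoring tools update the lambdas automatically.
    if (e.PropertyName == interestRate.PropertyName(x => x.Rate)
        || e.PropertyName == interestRate.PropertyName(x => x.EffectiveDate))
        RecalculateSchedule();
}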


This is a complement to the GeneralToolbox static method, in that the extension method works against an instance of a class, while the static method can work against the type (for situations where an instance isn't present).

- Edit: Code & unit tests are now available here.

Monday, April 11, 2011

Around 6+ hours I'd love to get back.

WPF is a beautiful thing most days, but every so often it rears up and slaps you in the face when all you think you have left is a fairly trivial bit of UI functionality. Then you are burning HOURS making sure you haven't done something completely stupid in your bindings, and Dr. Googling for anyone else who's run into the same problem. The burn isn't that it's a particularly complex thing to do; it's that there end up being so many variations of things to try, most of which won't work for one reason or another, along with many suggestions that apparently never worked or were never even tried. This burns hours upon hours. Even if I say to myself, "This isn't that important, set it aside and come back to it later," it's still smouldering in my mind, and within a couple of minutes I'm back trying something else that comes to mind, burning more time on it.

In this case all I had left were two little unrelated UI interactivity features that I wanted to polish off before continuing with the next set of requirements.

#1. I present details in a list that is sorted by date. When editing an item within the list I can change the date, so the logical behaviour is that the list should re-sort.

#2. Rather than using separate views for viewing and editing the details presented in the list, I wanted to swap out a data template (or user control) inside the list item content. (Click a button to expand for edit/review, and another to save/restore to summary mode.)

WPF has the CollectionViewSource, which sounded like it should fit my needs for item #1 (instead of binding directly to observable collections). WPF also has DataTemplateSelectors, which looked like they should serve my needs for item #2. All set! Not quite...

CollectionViewSource allows you to sort, sure enough, but editing the collection items doesn't cause the view source to refresh the sorting/grouping. I spent HOURS digging and experimenting with different options to tackle this, from extending ObservableCollection to provide the sorting (see here) and refreshing via Move operations, to trying to hook into CollectionViewSource.View.Refresh(), all with limited success. Finally I hit paydirt with someone who got fed up with exactly the same problem (see here).
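
The shape of the fix is roughly this (illustrative names; wiring up items added to the collection later is omitted):

// Re-sort when an item's sort key changes. The items must implement
// INotifyPropertyChanged for this to fire.
foreach (var item in detailItems)
{
    item.PropertyChanged += (sender, e) =>
    {
        if (e.PropertyName == "Date")
            detailsViewSource.View.Refresh();
    };
}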

After finally tackling #1 I had renewed energy to tackle #2 (which, ironically, I had shelved in order to tackle issue #1). I had arm-wrestled with the data template selectors earlier and quickly found that while they were good at picking a template, they did not listen for changes to anything they were bound to in order to make that selection, so they were only good for a one-off choice. This time the inspiration came from a little gem of an idea I glanced upon on Stack Overflow (see here), specifically: "You could even make your data template a ContentControl, and use a trigger to change the ContentTemplate." I had used DataTriggers before and knew I could swap out individual controls between view and edit variations, but I was looking to swap out an entire template in one go. Using a data template containing a content control, and a data trigger to swap out the content template, was bloody BRILLIANT!
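
The template ends up looking roughly like this (SummaryTemplate, EditTemplate, and the IsEditing flag are illustrative names):

<DataTemplate x:Key="DetailItemTemplate">
    <ContentControl Content="{Binding}">
        <ContentControl.Style>
            <Style TargetType="ContentControl">
                <!-- Summary mode by default... -->
                <Setter Property="ContentTemplate" Value="{StaticResource SummaryTemplate}" />
                <Style.Triggers>
                    <!-- ...swapped wholesale when the item enters edit mode. -->
                    <DataTrigger Binding="{Binding IsEditing}" Value="True">
                        <Setter Property="ContentTemplate" Value="{StaticResource EditTemplate}" />
                    </DataTrigger>
                </Style.Triggers>
            </Style>
        </ContentControl.Style>
    </ContentControl>
</DataTemplate>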


Finally these WPF UI thorns in my side have been removed, and I can resume work without these damn things flaring up to burn even MORE time. I find it very strange that implementing such functionality was such a chore within WPF, but in case Randall Doser ever comes across this blog... #2, definitely #2.
:)

Saturday, February 26, 2011

3 years, $1M

Most people that know me in the industry know my automatic response when asked for an estimate against vague requirements for enhancements or a new small-to-medium-sized system.
"7 years, $1M."
This often was turned around to "1 year, $7M if you're in a hurry."

I've since revised this to a more reasonable "3 years, $1M"

This usually gets a laugh out of people, but I'm actually quite sincere about it as a general estimate. If something has been well thought out, and requirements have been neatly separated into units of work that can be estimated and built, then I can give a detailed estimate for the exact amount of work needed. When all someone gives me is a rough idea of what they want, my response is that I'm reasonably confident I can deliver exactly what they want for $1M (preferably up-front) in 3 years.

This covers the time to do proper requirements gathering, prototyping, iterative development and re-factoring, plus testing to ensure the end product is spit-shined. They will have something available in production before the three-year mark, but what they had in their mind (and the reasonable additional stuff they're sure to think of or require along the way) will be complete within 3 years. The simple truth is that the endless cycles of negotiation, re-prioritization, and up-scaling to try to meet unreasonable time or scope expectations waste far, far more money.