Sunday, August 28, 2011

How XP can help your solo projects too.

Something has irked me for a while. Solo projects have a distinct set of problems, at least from my perspective, compared with client projects:
Time- Solo projects are developed in the gaps you can scrape out between the hours you work for a client and the distractions of personal life. When you get married and have children this becomes quite difficult, though I had found plenty of other distractions before that happened. :)

Scope- On client projects it's easy to fight against scope creep, and easy to spot & discuss "kitchen sink" type features or architecture choices being considered today for some mythical benefit tomorrow. Generally I can be quite "lazy" in that I don't want to do more work than I have to, and it saves me headaches in the future. In solo projects I've found it quite a bit harder to fight scope creep, as there are no hard targets for time or feature list.
Requirements- On client projects, you either don't have enough requirements and have someone handy to moan at about getting more detail, or you have people attempting to provide too much in the way of requirements up front. With solo projects I am responsible for the requirements, and deciding what will be v1.0 and sticking with that (or at least challenging my wayward heart) is quite difficult. I also HATE writing down requirements. I can't get through more than a few features before I catch myself opening Visual Studio!

One thing that sold me on XP was the measures it takes on quality. Not only in terms of software quality with unit testing and pair programming, but in terms of systems quality with user stories, planning games, and continuous integration. It made dealing with customers much easier. It let the production of features begin much sooner, increased the value in what was being developed, and made cost transparent to the customer. They can see the velocity of the project and the value added for extra time invested in getting features just right. These processes have been really effective in dealing with cases where the customer was "willy-nilly" with their requirements. Then I realized, *I'm* that willy-nilly customer! Why can't I apply XP principles to my own projects?

So the first thing I did was stop worrying about requirements. For some reason I was trying to capture more detail on paper for my own stuff than I try to initially capture from clients for their features. I switched instead to point-form lists, then expanded the most important one into a user story and tasks as I went. I am the customer, or at least the B.A., so I'm a perfect XP customer because I'm accessible 24/7. I also try to distinctly switch hats from developer to customer. (If I don't end up bi-polar by the end of this, both of me will be surprised.) As a customer I let myself loose with the "wouldn't this be cool" ideas, but NEVER with the computer running. That stuff goes down on paper. As the developer I try to be as lazy as I can. The main change to the process was devoting more page area to notes as I progressed. I'm working 1-2 hours at a time if I'm lucky, and maybe 2-3 days in a row, so I try to get blocks of work done while noting down what I had in mind for the next blocks.

TDD I'm already a strong supporter of, whether Test-First or Test-Second. My rule for solo projects is that the code for a new task does not get written until the previous task is unit tested. As I'll be working on these projects for some time, and hopefully plan to get other developers on board with them in the future, unit tests are crucial. The code must always build, run, and the test suite pass before I finish for the day. (That one can irk the wife! :)

It's still early days for applying XP to my current project stack, but it has been quite successful so far, if anything for keeping me more focussed on getting chunks of value-added work done. Hopefully if I can keep this up for a month I can work out a system to keep the momentum going without getting bogged down. At that point I can bring in at least one other developer to contribute to the projects without wasting their time.

Sunday, August 21, 2011

Pair Programming.

I am an XP (Extreme Programming) advocate. I've used it as a whole on one successful project, and I try to bring elements of it to any client I work for. Pair programming has to be one of the toughest elements to sell, though admittedly I don't pitch it as the most valuable one. I'm sure many XP advocates would cry foul, as pair programming is a cornerstone, if not the foundation, of XP. Every other element could easily be discarded by a developer working alone; pair programming helps reinforce that the other elements are followed. I certainly do not disagree.

However, pair programming is the hardest element to get in place. Most clients have existing development teams that have never heard of pair programming, and development environments that aren't set up for it. On the project where I started with XP, this second point was quite an obstacle. We had comfortable cubicle environments, but not ones shaped to fit two chairs with people working side-by-side. Fortunately it was a big office, so we confiscated a large meeting room and arranged the tables in a large rectangle so that 4 pairs of developers could sit side-by-side. This went on for about 3 months, but the company wanted their meeting room back, and other teams were also eyeing our original cubicles.

The solution we came up with was Pair Analysis + Task Swap. Each developer was paired with another, and both selected a set of stories for the iteration. The pair would sit together to scope out the tasks for all of their combined stories and discuss the approach. Then the actual development was done individually. If during implementation a developer thought a deviation was needed, they discussed it with their pair. As a task on a story was completed, it was handed over to the other developer. Each developer would not only review the other's work, but see if there were ways to improve it, or look for other possible scenarios that hadn't been thought of, discussing together as necessary before they committed the work. This was a little painful back when working with a repository that did not support branches: essentially developers had shares to their development folders that their pair would open to review work. Done again today with a branching repository, each task would be developed against a story branch.

From a technical perspective, here's how this works:

Developers A & B select one story each (typically in an iteration they would choose 1-3 each). The development is done on branches A & B respectively. When developer A completes his work for a task, he pings developer B for a review. Hopefully B will be finishing up a task pretty soon as well, but while waiting for something to review, developer A can continue with tasks in an unrelated section of code. Developer B reviews A's changes on Branch A, while A reviews B's changes on Branch B. When both are satisfied with the changes, developer A merges the changes from Branch B and performs an integration confirmation, then developer B merges from Branch A and performs the integration confirmation. After the integrations, each developer resumes working in their original branch until their story is complete. A branch only lives as long as a story. If developer B finishes all tasks of their story, developer A will have merged all changes into trunk, so developer B will confirm the end of the story and terminate the branch. A new branch will be taken off trunk to start the next story.

Physically, during reviews there is a lot of chatter between the two developers, and they'll often be at each other's desks when going over code. Sometimes one developer will find an optimization that the other hadn't thought of. There are two options available: either he can pass the task back to the original developer, or take the task on himself, and the other developer can start working on a task on the alternate story that he just finished reviewing (swapping stories/branches). You can even consider encouraging developers to swap branches/stories every other task or so.

This approach has trade-offs compared with pure pair programming. In situations where a lot of refactoring is found, such as when you're pairing experienced developers with less experienced developers, pair programming would be more efficient. A pair working together will spot these optimizations as they go, whereas this approach leads to work getting refactored during review. However, the efficiency hit should shrink greatly as the less experienced developer's experience, and exposure to reviewing the other developer's work, increases. The swapping is done at task frequency, not story frequency, so the impact of this refactoring should be kept quite small. The advantage of this approach is that it can easily be applied in physical environments that make pairing difficult, while maintaining most of the benefits of pair programming (knowledge sharing, near real-time code review, and emphasis on adhering to the rest of the principles). It's also a good lead-in to pair programming for developers who find the concept a bit alien. Hopefully it fosters an increasing amount of communication between team members, so much so that they find the time apart is the wasteful part of the job and end up pushing their desks together themselves. :)

Business Analysts in the mist.

One thing I've noticed since moving to Australia is the lack of business analysts within organizations I've worked at, or worked for. Occasionally a client will have someone whose title is BA, but the normal response I get when asking if they have a business analyst is either "No" or "Not now, but we plan to hire one." Back in Canada this would send little red flags flying, but it seems to be par for the course, at least here in Brisbane. What really sends the alarm bells clanging is when I ask "who defines the requirements then?" If the answer isn't a B.A. or a client, then prepare for pain. Usually the answer is either "The Developers/Lead Developer" or "Sales."

Developers generally make very poor analysts. Developers are technical; they don't grok business process, only software process. A developer can analyse how something should be done, but not what should be done. Salespeople are often even worse. Salespeople only worry about signing on new customers or upgrading existing ones. They have an excellent perspective on the extreme high level of what should be done, but they don't understand, from either a business perspective or a technical perspective, how it could be done.

The best person to define requirements is the client. If you're fortunate enough to be using a methodology like Extreme Programming and the client has someone valuable embedded in your team, then there's no real need for a BA. However, the next best thing is a dedicated BA. This gets back to businesses that have someone whose title is BA, but who isn't a BA. An excellent example of this was a large government organization client. When I asked if they had a business analyst, their response was that they had a whole team of business analysts! What they said was true: it was a small department of about 8 BAs under the same director, but not actually part of the software development department. Their idea of a BA's role was to go and meet with the client, understand their business processes, write it up in a document, and hand it off to the software developers to build, washing their hands of it. This meant the developers had a document, some months after a project had started, and if there were issues or clarifications needed, well, too bad: lodge a request with the BA department to get the documentation adjusted. Often the BA that did the original work was tied up in a new project, so you got a new BA who had no idea about the project. Of course, this model made perfect sense to them. They needed to have a billable block of time to charge back to the client. Once that was done, they needed to be charging other clients. This was not only frustrating from the software development side of things, it drove the clients nuts. (Having to explain the same thing to two or more BAs, and being expected to *pay* for the new BA to get up to speed with the project.)

Most often, what businesses call a BA is essentially nothing more than a clerk: write down requirements so that we effectively have a contract that we can associate a dollar figure with and get a sign-off on. But a BA should be so much more than that. On Canadian teams where I've worked with properly embedded BAs, the BA was effectively a conduit to the client. Even on an Extreme Programming project where the client was in another province, the BA proxied for the client when the client couldn't send someone to our office. If we weren't sure about something, we asked the BA. If they weren't sure, it was their job to get in touch with the client and sort it out. They had the business knowledge, and were abreast of the technical details of how the software was being implemented. They were instrumental in giving initial feedback for UAT phases. In short, they started the project, and they finished the project. In XP, the BA was the tracker and the client.

So if anyone around Brisbane works for a client/company that has a BA similar to what I describe above, count yourself lucky and let me know. I'd like to get my picture taken with them, because I'm thinking they must be rarer here than drop bears. :)

Wednesday, August 17, 2011

Business software should advise

Business software should be an advisor to users, not a dictator above them. Business software boils down to one thing: business rules. How business rules are implemented in software is just as important as the rules themselves. Some applications seek to "enforce" business rules by restricting behaviour until certain information is provided, or a certain sequence of steps has been followed. There are certainly cases where this is warranted, such as enforcing authentication and authorization for features. However, when applied to business logic, this kind of enforcement leads to inflexible and, in some cases, very costly issues.

People generally like dictators at first. Life is pain; they need someone to stimulate the economy and get the trains running on time. When a software system is first designed, the idea of enforcing rules to save time and minimize mistakes is certainly attractive. Unfortunately, software has to evolve as business requirements evolve, and before you know it, your wonderful software application has divided Poland with Microsoft Office and invaded Czechoslovakia. All you wanted was a system that would bring efficiency back into your organization, but pretty soon you have a behemoth costing you hours of time and stacks of bug reports, and your business is failing to serve its customer base, which is costing you customers.

Enforcing business rules restricts flexibility. Rather than designing software to be an enforcer, design it solely to be a time saver. Ensure that the only mandatory fields are things that ARE mandatory, and if the system can default a bunch of other optional fields, fine; someone can always change the values later. Also accept that a certain amount of business logic is best left in people's heads and hearts. Sure, you could define rules, even strive to make them configurable, but in the end, keeping the most flexible business rules outside of software is sometimes the best option.
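To make the "advise, don't dictate" idea concrete, here's a minimal sketch. (It's in Java rather than anything from a real system, and the field names are hypothetical.) Only the genuinely mandatory field blocks a save; everything else is defaulted and can be changed later, and anything questionable surfaces as a warning rather than a hard stop.

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// Hypothetical order-entry form: advisory software in miniature.
public class OrderForm {
    String customerName;    // genuinely mandatory
    LocalDate dispatchDate; // optional: defaulted, the user can override it
    String priority;        // optional: defaulted

    public OrderForm(String customerName) {
        this.customerName = customerName;
        // Sensible defaults instead of hard stops.
        this.dispatchDate = LocalDate.now().plusDays(7);
        this.priority = "Normal";
    }

    // Advisory warnings: surfaced to the user, but they never block a save.
    public List<String> warnings() {
        List<String> w = new ArrayList<>();
        if (dispatchDate.isBefore(LocalDate.now()))
            w.add("Dispatch date is in the past - please confirm.");
        return w;
    }

    // Only the truly mandatory field can block the save.
    public boolean canSave() {
        return customerName != null && !customerName.trim().isEmpty();
    }
}
```

The design choice is that the system records and suggests, while the person decides; a past-dated dispatch date might be perfectly legitimate, so the software flags it and moves on.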

A perfect example of this came up today when one developer was querying another about a legacy system for manufacturing. The question was: when orders are received, different products take different amounts of time to manufacture, so how does the factory worker know when they need to manufacture each product to get it done by the dispatch date? The answer was, "The floor supervisor decides what to manufacture to ensure everything is done on time," based on a report listing the various orders and their respective dispatch due dates. It took a few rounds of questions to let this fact sink in. The software system didn't tell them when to manufacture each product; it simply told them what needed to be manufactured and by when, and a person determined what should be done first.

There are an assortment of rules that govern when products get manufactured, and a *lot* of environmental variables involved. I'm sure the first thoughts of this developer were along the lines that the rules could be codified so the software system could calculate and dispatch work to the factory floor such that products are manufactured by their dispatch date. This would be more efficient and could probably mean they could accept more work, or work with fewer staff. But the problem is that you cannot hope to codify *all* the variables that are accounted for in the decisions to get work done. Staff being sick or on vacation? Whups, need a rostering component. Machine break-downs or services? Ah, incident tracking. Last minute order changes, cancellations, or changes in priorities. Stock shortages or quality issues. Issues that can crop up that haven't even happened before. Machines follow very clear and concise rules very effectively, but they cannot adapt to unknown variables like a human being can.

The result of leaving a good chunk of the business rules for producing the product in a person's head is that the actual mechanical work of getting product produced is completely dynamic. The machine simply advises what needs to be done, makes suggestions based on information it can compile, and records the results of the production. If the machine goes down, the data can still be queried and the product produced. The machine is not relied upon; it can be updated once it sorts itself out.

Monday, August 15, 2011

How to hire a "Senior Software Developer".

It is hiring time again. The recruitment agents start instinctively calling, smelling the scent of a commission, and pretty soon the resumes start flowing in. The local I.T. market is pretty restricted right now. Initially the client wanted to hire two permanents (presumably to phase me, the final contractor, out), but has had to settle on 1 perm and 1 new contractor because we have a load of work coming down the pipe. Since we're looking to hire people that are expected to hit the ground running in the project, these roles were for highly experienced developers that would be familiar with working with things like TDD, IOC/DI, and be able to write loosely coupled, "clean" code. One of my tasks was to put together a coding quiz of sorts to give to shortlisted candidates. Nothing really difficult, but something to give them a taste of what we're looking for and to give us an idea of how they understand requirements and complete a task. The quiz consisted of a set of requirements with some general instructions on what the sample project already contained and what they would need to complete. The project they were given consisted of a small number of interfaces for dependencies that would provide much of the additional functionality they would utilize to meet the requirements (i.e. data retrieval, sending e-mails, etc.). The quiz was expected to take about 2-3 hours to complete, including unit tests. (Tested with one of our junior developers.)

An interesting element of this quiz was identified when our developer gave it a test run: there were a few holes in the quiz. For instance, a requirement would reference an expiry date on an existing domain object, which did not exist. The quiz implies that elements beyond their specific requirements were being provided by other team members, so issues like this were left in to see how candidates responded to such problems. (Ignored, noted with ToDos, or brought up when they submitted their response.)

The reason I chose to create a template project with the quiz was to give candidates hints about the way we develop code, and the kinds of things we are looking for. Do they understand how to do dependency injection for their service class based on the interfaces we've provided, or do they use the IOC container like a global Factory/Registry? Do they try to follow our naming convention and style? Even in the requirements I dropped hints about TDD/BDD and mocking.
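To illustrate what we were probing for, here's a minimal sketch of constructor injection. (It's in Java rather than the C#/.Net the quiz actually used, and the interface and class names are hypothetical.) Because the dependency is explicit, a test can hand in a fake without ever touching the container; resolving the dependency from a static container inside the method would hide it and make the class untestable in isolation.

```java
import java.util.ArrayList;
import java.util.List;

public class DiExample {
    // Hypothetical dependency interface, like the ones supplied in the quiz project.
    interface IEmailSender { void send(String to, String body); }

    // Constructor injection: the dependency is explicit in the signature,
    // so a test (or the IOC container) decides which implementation to supply.
    static class NotificationService {
        private final IEmailSender emailSender;
        NotificationService(IEmailSender emailSender) { this.emailSender = emailSender; }
        void notifyUser(String user) { emailSender.send(user, "Your order has shipped."); }
    }

    // A hand-rolled fake standing in for a mocking framework:
    // it records who was notified so a test can assert on it.
    static class FakeSender implements IEmailSender {
        final List<String> sentTo = new ArrayList<>();
        public void send(String to, String body) { sentTo.add(to); }
    }
}
```

In a test you construct the service with a FakeSender and assert on what it recorded; no container, no static look-ups, which is exactly the property the quiz was designed to expose.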

The results of the quiz were even better than we expected. Out of a total of 8 candidates (2 perm applicants and 6 contractors), all with impressive looking resumes, the quiz made it very clear who knew their shiz from those that just continued to write poor quality, unmaintainable code with modern tools and libraries. The quiz probably worked a bit too well, because of those 8 candidates only one really stood out, and he was the one with the least experience. However, he was the only one that understood the concept of dependency injection, and while he had no experience directly with mocking frameworks, he picked up on it from the requirements, researched one, and tried applying it (reasonably well) within a sample unit test. His main failing was the amount of work he completed for the time he spent, but the fact was he hit all of the requirements, with clear markers where details still needed to be filled in.

What was really surprising was how bad some of the contractor submissions were. Only one actually met all of the requirements, but while his resume touted .Net 3.5 & 4.0, the code sample he wrote was effectively .Net 2.0. He had provided unit tests, but they were merely auto-generated, and code-coverage results for his service were only 70%, with much of it single-hit results. Most of the other contractors (these are guys in the same region as me with > 14 years of experience) either missed basic behaviour requirements, or had fairly severe logic errors present in their samples. One big one I looked for was a requirement that each operation in the service had an authorization requirement. Most of the submissions did the authorization check in the construction of the service rather than on each call. That's a risky assumption in any situation with a service, and in cases where the service is Singleton scoped in an IOC container (as something like this would commonly be) it would easily lead to embarrassing bugs. One candidate's project didn't even compile, while another attempted to write unit tests, which all failed when run.
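The constructor-check bug is easy to see side by side. (Again, a sketch in Java rather than the candidates' actual C#; the interface, class, and operation names are hypothetical.) With Singleton scoping, whichever caller happens to trigger construction decides the answer for every later caller; the per-call version re-checks the actual caller on every operation.

```java
public class AuthExample {
    // Hypothetical authorization dependency.
    interface IAuthChecker { boolean isAuthorized(String user, String operation); }

    // The bug: authorization evaluated once, at construction time.
    // A Singleton-scoped instance freezes the first caller's permissions forever.
    static class ReportServiceCtorCheck {
        private final boolean allowed;
        ReportServiceCtorCheck(IAuthChecker auth, String firstCaller) {
            this.allowed = auth.isAuthorized(firstCaller, "RunReport");
        }
        boolean runReport(String caller) {
            return allowed; // ignores who is actually calling!
        }
    }

    // What the requirement asked for: check authorization on every operation.
    static class ReportServicePerCall {
        private final IAuthChecker auth;
        ReportServicePerCall(IAuthChecker auth) { this.auth = auth; }
        boolean runReport(String caller) {
            return auth.isAuthorized(caller, "RunReport");
        }
    }
}
```

If an admin's request constructs the Singleton first, the constructor-check version happily runs reports for every guest afterwards, which is precisely the embarrassing bug the quiz was fishing for.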

So in the end we'll probably settle for 1 decent intermediate that hopefully has the enthusiasm to step up to the plate, and one contractor who delivered a working sample but, we hope, was writing rather dated code out of habit. (He's done a lot of work for government, and their pace of adopting new technology I am well aware of.) The moral of the story is, when looking to hire people, don't put too much stock in their resume, or whether they can recite namespaces for commonly used libraries. Build up a code sample project in the style you like with a set of reasonable requirements, and get them to write some code. Separate the shiny nuggets from the vast slurry that call themselves "Senior Software Developers".