Wednesday, August 28, 2013

On Agile Project Management

I've often seen a bit of confusion around how to manage projects that use Agile approaches for software development. On the one hand, businesses are very receptive to continuous releases; on the other, they struggle to fit the project budget and timeline into more traditional project management molds.

The trouble stems from Agile projects being measured on a velocity basis, with the highest-priority features tackled first. From a project management perspective you have to keep an eye on what needs to be done, what has been done, how fast things are progressing, and whether they are heading in the right direction. Unfortunately, project managers are often instructed to somehow bolt this onto something like PRINCE2, when the two are pretty distinctly incompatible.

How I visualize managing Agile projects:

You have a large furnace. This furnace burns money (logs) and produces business value (steam). Developers, testers, and business analysts take the form of valves for directing the steam toward practical goals. Your projects are funded by budgets, which form different stacks of logs. Adding up all of the valves tells you how fast the furnace burns logs; that rate is constant unless you add valves or increase their size.

Now, as a project manager, you have control over which logs get put in the furnace and where the steam is directed. The furnace burns at a constant rate based on the attached valves. Something is always being fed into the furnace, so you nominate the pile of logs to pull from; if you don't specify, the furnace loader will just start throwing in anything flammable, which could include the furniture.

A common problem I'm faced with is a project manager trying to directly tie the output of one or more valves to the burning of a specific log. Between sprints, or sometimes even within a sprint, they are tempted to fiddle with the valves to switch between different outputs while a log is burning. (This work needs to be billed against billing code X, while that work needs to be billed against Y.)

Some key problems with this:

Agile teams work most efficiently when they are not context switching between tasks. Each time you fiddle with a valve, steam is lost. Developers have to drop what they're doing, start something, put it down, and go back to what they were working on.

Logs burn more smoothly than shards. Project managers stop feeding logs; instead they try to budget for pieces of work and hand-feed just enough money from a particular pile to produce a specific amount of steam. Agile projects estimate on *difficulty*, not cost. The measurement is how fast features are implemented and when they're deemed good enough, not a prediction of how long something will take to complete. It might be tempting to pre-cut budgets to work in micro-iterations, but the reality is that staff and contractors are paid 9-5. You'll end up with 5 hours of "budget" spent to cover 8+ real hours of cost, with 3+ hours coming from "somewhere".

If you must context switch for budgeting purposes then the best approach I can recommend is to set it up as a completely separate furnace and dedicate valves to it. Avoid attempting to move valves back and forth frequently.

Does this mean that Agile teams cannot switch between priorities? Certainly not! They're ideally set up for dealing with shifting priorities, but what a project manager must tackle is how the logs are accounted for, and managing the valve changes as efficiently as possible.

1) Feed in the logs, and balance out the piles at the end of the sprint. At the beginning of a sprint, set the valves: for these two weeks these developers will be working on these stories, while the rest of the team continues with Y. At the end of the sprint, look at what was delivered in both projects, account for which piles that sprint's logs came from, and decide what the valve settings need to be for the next sprint. This is a different frame of reference: at the beginning of the sprint you aren't allocating 3 logs from budget A and 7 from budget B, you are merely setting the valves and putting 10 logs in the furnace. What you get out at the end of the sprint determines which piles the logs were pulled from.

2) Look at the quality and type of valves available. Some valves leak more than others, especially when fiddled with. Valves can represent individual members of the team, or groups within the team. Fewer large valves will direct the steam more efficiently than lots of small ones. Overall, the less you fiddle with valves, the more efficiently the steam is materialized into product. Getting into the habit of grouping developers into larger valves is also beneficial when you look to grow a team to increase the furnace's burn rate. Adding 10 individual valves to 10 existing individual valves means you have 10 new, fairly leaky valves. However, if you had developers grouped into teams, new developers can be merged into new teams with some or all of the developers from the existing teams to tackle new challenges, or added to existing teams. Developers reinforce each other's efforts, which helps mitigate leaks (or at least brings issues to light early enough to be addressed).

How does this fit with efforts to get budget approvals and set deliverable feature sets and delivery dates? I can't answer that, but I hope the perspective above gives some food for thought about how to better fit Agile into more traditional frames of reference, or how to convince the upper echelons to better fuel Agile projects and continue to see the benefits in the end deliverable.

Thursday, June 27, 2013

On Deadlines

It's been pretty busy with one of my current clients. The deadline looms for a release; issues are being uncovered, worked on, resolved... The development manager's facial expressions are changing faster than Melbourne's weather, and the project manager is scurrying around like a Corgi that found its owner's Ecstasy stash. Just the other day the PM came up to me and asked, "Steve, how can you be so calm?" Everything in the project was tracking well and the users were happy with the previews; it was just pre-release jitters. I didn't really have an answer for him, so it became a bit of humorous banter to lighten the mood. It's not something I've thought about much, as it just seems natural by now, but on reflection I figured it might help others deal with stress, deadlines, and potential confrontations in the workplace.

My secret to staying calm is that I don't over-care. Over-caring is when you try to take the world onto your shoulders: give 110% and then some, do whatever it takes! Rather, I invest in the projects I work on. I allocate a reasonable amount of my energy to the project, and can choose to invest a bit extra to push through difficult spots. If I didn't believe in a project, I wouldn't be working on it. In the end there is always the possibility that the project won't succeed, but if it fails, I can remain confident that it won't be due to something within my control. Also, by being conscious of what I invest in the project (and where), I don't get overly distracted by "taking ownership" of the project, which leads to conflicts, time-wasting meetings, and frustration.

This is in line with my general philosophy of life. I live by a policy of Truth, with a focus on doing what I feel is the right thing, even if it isn't always the most popular thing. This allows one to be supported by one's convictions. When you don't waste energy trying to mislead people, or mislead yourself, it is much easier to view things around you objectively. You see opportunities you would otherwise miss if your thoughts were clouded by wondering where you'll get the extra energy to maintain a deception during the inevitable crisis.

Some key points people will notice while I work:
1. I am paid for what I do, not how long I sit at my workstation. Does sitting over your keyboard staring at misbehaving code help you figure out what's going wrong? I doubt it does. I'll browse for ideas, catch up on news, go to the bathroom, knock some balls around on the pool table, or see if there's someone else's problem I can help out with around the office or on StackOverflow. When I worked for a company that wrote software to manage a postal print house, I'd often spend a few minutes reloading printers and helping out with the manual mail inserting. It kept me in touch with the guys who would be using my software and freed up my thought processes; the context shift lets me spot a new idea when it sneaks up. In the end, all my client needs to decide is whether the amount of value I deliver to the project matches the fee I charge them.

2. I focus on memorizing as little as possible. Preconception is a productivity killer. Convincing yourself that you "know" things leads to arrogance and to assumptions that can easily waste your time and cause confrontation within a team. There are people who can just soak in information and retrieve it on demand, and provided they do it accurately without falling into arrogance, it can certainly be an asset. "Googling" or checking out something on StackOverflow should never be viewed as an admission that you're somehow a sub-par software developer. Knowing how to find useful information, absorb it, and apply it is a very useful skill.

3. I judge things by what I see people do, not what they say. Before I engage anyone in a discussion about an approach I prepare or select examples outlining my thoughts. I expect the same before I'm convinced of an alternative. There is no build up or stand-off beforehand turning it into a big debate of principles or best practices, just show me code and I'll show you mine. Often by comparing two ideas we come up with a completely new approach that captures the best of both. Other times I'm convinced or doing the convincing and both of us come away knowing more than when we went in.

In general this avoids conflict and confrontation in the workplace, keeps my mind free to spot opportunities, and avoids stress build-up. When I maintain a relatively relaxed state of mind, it is very easy to switch into high gear to see a problem clearly for what it is and solve it. If I allowed myself to burn high-octane by default, there would be nothing left for the inevitable crisis, and running hot would most certainly lead to mistakes, and to fuel for those crises.

Finally some sci-fi quotes to live by: (What kind of programming geek would I be without them!)

"Understanding is a three edged sword: your side, their side, and the truth." - J. Michael Straczynski

"I've been around long enough to know how ignorant I am. I don't assume the universe obeys my preconceptions. Hah! But I know a frelling fact when it hits me in the face." - Christopher Wheeler (Spoken by Rygel from "I Shrink Therefore I Am")

Monday, December 10, 2012

The cost of TDD: Numbers

This is a question/discussion that crops up again and again. Looking back at my earlier post, I thought it might be clearer with some numbers. By "TDD" I refer literally to "test driven development" in the sense that unit tests, whether written before or after behaviour, are a driving consideration for development.

The trouble with pitching unit testing to developers almost inevitably comes down to "it takes too much time."

Let's say you spend one hour developing a nice, clean, atomic feature. How do you know it works? You fire it up and run through a scenario or two. How long does that take? 5 minutes? Maybe you find a problem, so you spend a total of 15 minutes debugging and running through a healthy set of scenarios to ensure your code meets all of the requirements. This assumes you're a competent software developer focused on doing the right thing: you're checking that the code does what you intended it to do.

Now how long would it take to write unit tests in addition to that manual check? 20 minutes extra? Sounds like a lot: 25 minutes per hour vs. 5 minutes, best case. But let's work with it.

How many hours' worth of features are you going to write in a given two-week period? 80? Surely not; I generally get around 60 productive hours in a good iteration. Spending 5 minutes per hour ensuring your code works would leave you 55 productive feature hours. Spending 25 minutes per hour would leave you 35 productive feature hours.

But hang on a moment. Let's look back at our first scenario. We spend 5 minutes ensuring the code does what each requirement says it should. What happens when we change the code to meet the next requirement? Are you sure your previous requirements are still met? Granted, not every feature depends on every other feature, but let's look at the worst-case scenario for the heck of it; those bugs do love to creep in right where they should be inconceivable. The first feature requires 5 minutes to test; the second requires 5 minutes plus 5 minutes to regression test the first; the third feature requires 15 minutes; the fourth, 20. By the fifth feature we've matched the per-feature cost of unit testing. We're still ahead, but there's a long way to go and it's only getting more expensive to be sure. By 8 productive hours we're spending 3 hours testing. By 16 hours, we've spent over 11 hours testing. By 28 productive hours we've run out of time: over half of our time is spent regression testing.
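
The worst-case arithmetic above is easy to reproduce. A quick sketch (JavaScript here, with the 5-minutes-per-feature figure and the 60-hour iteration assumed from above):

```javascript
// Worst case manual regression cost: feature n needs 5 minutes of new
// testing plus 5 minutes of re-testing for each of the n-1 earlier
// features, i.e. 5 * n minutes in total for the nth feature.
function testingMinutes(features) {
  let total = 0;
  for (let n = 1; n <= features; n++) {
    total += 5 * n; // 5 new + 5 * (n - 1) regression
  }
  return total;
}

console.log(testingMinutes(8) / 60);  // 3 hours of testing by 8 features
console.log(testingMinutes(16) / 60); // over 11 hours by 16 features

// At 28 features, 28 hours of development plus the testing burden
// blows past the 60-hour iteration.
const devHours = 28;
console.log(devHours + testingMinutes(devHours) / 60 > 60); // true
```

The quadratic growth is the whole point: a flat 25 minutes per feature for unit tests is overtaken by cumulative manual regression cost after only a handful of features.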

And this is just the first iteration; it only continues uphill from there. Now, obviously not EVERY feature needs to be fully regression tested or requires 5 minutes to test. You make a judgement call to weigh risk vs. cost. There will be times it takes more than 5 minutes to test something, and less than 5 minutes to regression test it. But then there is the time needed to record just what needs to be regression tested. Unit tests will often cost MORE than 25 minutes to write, but in the end it doesn't really matter how long they take, because sooner rather than later that regression cost and risk is going to catch up. Are you really that confident that after six months of development, with 4+ developers touching the common code, every requirement you've written still works the way the client intends when you release? Oh yes, there's a regression test before release (we hope), but how many issues do you think they're going to find when that happens, and how much time are you going to spend tracking each and every deviation down? Testing has a nasty way of starting before the developers' pencils are down, leaving the door wide open to shipping bugs if developers aren't taking responsibility for asserting that their changes don't frak up something else.

But now what about the cost of change? Unit tests get in the way: when the client changes their mind and you try changing stuff, tests break and it's more work for you. However, unit tests aren't what's expensive; change is expensive. All unit tests have done is show you the cost of that change up-front. Those 100 unit tests that broke when you changed that behaviour? Those are the 100 requirements that function based on the original behaviour, and you've just invalidated them. What were you intending to do about those features? Assume they would "just work"? Hope that a regression test picked up the issues? (But won't that still cost you a lot of time and head-scratching to track them all down?) Didn't you expect those 100 areas might depend on the change you're about to make? Or, perhaps, shouldn't you be glad that *you* see the impact of that change before your client does?

Friday, September 28, 2012

On SharePoint

You’d think embedding content in a SharePoint page, even their bastardized “wiki”, would be a relatively simple, refined feature… I mean it’s all about content management… they have picture libraries, right?!  Sure enough, the steps involve adding an image to a picture library, no problem there.  But then go to embed an image in the page and you get this beauty:

Seriously... WTF?! Is this 1990 and MS-Access? Was this “feature” just introduced in SharePoint 2007? I gotta wonder, because that dialog box is the kind of thing you build when your boss says “Can we have a feature where we can embed an image?” and you stub something in to verify it will work. Then you're *supposed* to replace it with a more polished, functional feature, i.e. options to select an image from an image library, or upload an image (auto-creating a default image library). Something... ANYTHING... This is, as Chris Pebble is credited with coining the phrase, “Protoduction”. (@CodingHorror)

So now all I need is the URL to the image I added in the image library… Now where do I find that… Click on my image in the library to view the item… name, preview, title…. No URL. Right-click on the preview and bring up properties: http://server/teams/IS/WikiImages/_w/Announcements_png.jpg ?? On the previous screen it is http://server/teams/IS/WikiImages/_t/Announcements_png.jpg. Ok, so just how many JPEGs does it need to render of my original PNG? 

Finally, clicking on the preview image itself brings up a browser window containing the actual PNG file, from which I can get the actual URL... Apparently it never crossed anyone's mind that a “copy URL” button, or slapping the URL in a text field, might be remotely useful, especially when I'm faced with idiotic dialog boxes asking me to type in the URL.

So now I finally have an image link in my wiki pointing to a PNG in an image library that has at least two mug-shot JPEGs taken of it. 

SharePoint: Content mismanagement at its finest.

On "clever".

Recently I came across some odd behaviour in a web system around date and time parsing. There was a validation step responsible for ensuring one date/time value was greater than the other, but it seemed to trip up in certain scenarios, such as around 08:00 in the morning. (Anyone used to working with Javascript probably knows exactly what the problem is already. :)

Fortunately I'd come across the cause of this bug a short time ago when enhancing a Javascript-based date parser.

Someone responsible for Javascript's parseInt() function thought it would be clever to try to determine what kind of numeric value was being passed in by inspecting the string and choosing an appropriate conversion rule. (I've read this may be a carry-over from C, but regardless it is a stupid assumption that should never have been propagated.) If you pass in "0x..." it can be assumed you want a hex conversion. Fair enough. But then they assumed that if you passed in a value with just a leading 0 (zero), you'd want an octal conversion.

I'm sorry, but this has to be one of the dumbest assumptions I've ever seen, and it surely has led to, and will continue to lead to, countless bugs throughout the history of web applications. It's an absolutely stupid assumption because in either octal or decimal, 00-07 will result in 0-7. The fun bit is that if you pass parseInt "08" or "09", you get back: wait for it, #null. Pass it "10" and the logic switches to assume you mean decimal, so it passes you back 10.

So yes, according to parseInt, when you want to check a month number, your calendar must be:
January, February, March, April, May, June, July, #null, #null, October, November, December.

So, if you're encountering unexplained bugs first thing in the morning, or with data for August or September be sure to inspect any and all parseInt calls.

parseInt("08", 10);

Someone owes an apology to the countless developers handed a #null for "08" and "09", and the possible *single* developer wondering where his 9 went after being led to believe parseInt defaulted to Octal.
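
The whole mess, and the fix, fits in a few lines. (Note that ES5 removed the octal guess, so modern engines return 8 for a bare parseInt("08"); the radix argument is what protects you either way.)

```javascript
// The legacy octal guess: older engines switched to radix 8 on a bare
// leading zero, so "08" and "09" came back broken. Passing the radix
// explicitly opts out of all guessing, on every engine vintage.
const month = parseInt("08", 10); // 8, always
const hex = parseInt("0x10");     // 16 - the "0x" prefix rule, which is fine
const alt = Number("08");         // 8 - Number() never applied the octal rule
console.log(month, hex, alt);     // 8 16 8
```

In short: treat a radix-less parseInt call on user-supplied data as a bug waiting to happen.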

Tuesday, May 29, 2012

Moq - Avoiding optimized return results.

Recently I was putting together a unit test for an MVVM+C controller that accessed a VM factory I was mocking out. This was to address a bug where multiple calls for the same domain object would result in multiple VM instances rather than references to the same instance. (Something I hadn't handled, and spotted through some odd UI behaviour.)

Unfortunately I had to eat my dogfood with this bug, because I went and fixed it before I had a unit test to reproduce it. My fix appeared to work, and I wrote a unit test that asserted it, but I wanted to be sure it covered the original bug, so I reverted the fix... and the test still passed! Hmm. I verified that the bug was happening at runtime through the UI, and the test was pretty basic (kick off the re-creation of the VMs in a particular way, and check two VM references to see whether they're the same). I finally tracked it down to Moq doing something unexpected (though likely by design...)

Here's an example of the problematic statement:

var mockParticipantViewModelFactory = new Mock<IParticipantViewModelFactory>();

mockParticipantViewModelFactory.Setup( pvmf => pvmf.Build( stubDtos[0] ) )
.Returns( new ParticipantFullViewModel( stubDtos[0]) );

It's an innocent-enough factory mock; wouldn't one expect each call to return a new participant VM reference? After eliminating other possible issues with my test setup, I knocked in the following sanity check:

var test1 = mockParticipantViewModelFactory.Object.Build(stubDtos[0]);
var test2 = mockParticipantViewModelFactory.Object.Build(stubDtos[0]);
Assert.AreNotSame(test1, test2);

Surprisingly, the Assert failed! Two calls to the mock containing a .Returns(new ...) returned the same reference (where the real factory would have returned references to two distinct objects).

The solution was to be a little less lazy with the mock definition:

mockParticipantViewModelFactory.Setup( pvmf => pvmf.Build( stubDtos[0] ) )
.Returns( (IFullDto dto) => new ParticipantFullViewModel( dto ) );

Now the above test, even passing in the exact same DTO, returns references to two distinct view models.

It would appear that the Moq mock captured the initial "static" return as a single reference handed back for every call, whereas by giving .Returns a lambda (even though it constructs exactly the same thing) it actually builds a return value for each call.

It was an amusing behaviour to track down. You could ask why I simplified that mock return like that in the first place; it doesn't look like a very effective mock. The answer is that in this test case I don't care about the "guts" of how the view model is set up. I already have unit tests asserting that the VM factory composes valid VMs, and that the controller composes those VMs through the factory correctly. This test case was for a specific bug where the factory the application was using returned 2 VMs for the same domain object, asserting that the application controller should reuse an existing reference if it has one.

*Edit: In deciding to write this up for the Moq team, it dawned on me why this did what it did... I told it to return an object on that call by declaring:
.Returns( new ParticipantFullViewModel( stubDtos[0]) )
when in fact I should have written it as:
.Returns(() => new ParticipantFullViewModel( stubDtos[0]) )
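
Moq aside, the root cause is plain eager vs. deferred evaluation, which can be sketched in a few lines of JavaScript. (This stub helper is hypothetical and illustrative, not Moq's API.)

```javascript
// A toy stand-in for a mocked factory. Passing a value means it was
// constructed once, before the stub existed; passing a function defers
// construction until each call - the .Returns(x) vs .Returns(() => x)
// distinction above.
class ViewModel {}

function stubReturns(valueOrFactory) {
  return typeof valueOrFactory === "function"
    ? () => valueOrFactory()  // build a fresh result per call
    : () => valueOrFactory;   // hand back the one cached instance
}

const eagerBuild = stubReturns(new ViewModel());      // evaluated now, once
const lazyBuild = stubReturns(() => new ViewModel()); // evaluated per call

console.log(eagerBuild() === eagerBuild()); // true - shared reference
console.log(lazyBuild() === lazyBuild());   // false - two distinct objects
```

The argument expression is evaluated when the setup runs, not when the mock is called, so only the lambda form can yield distinct objects.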


Thursday, May 24, 2012

reCAPTCHA, Stretcha

I hate sifting through spam, and I can sympathise with anyone that doesn't want their blog/site strewn with comments about how their life would be so much happier with V1AGR4. But on the other hand, measures to combat bots such as reCAPTCHA are starting to make me question whether or not I actually am human any more... Case in point:

Ok, I thought this service was supposed to use real words. But that last one had me guessing... "ltursth"? "lturyth"? So I ask for a refresh and I get...