Saturday, September 17, 2011

What is my client's/employer's IP?

This topic has boiled up recently on Ayende's blog here and here, relating to providing samples from pet projects. Several people have commented on how some contracts are laid out to state that any work an employee does, whether at work or at home, is automatically the IP of the company.

Disclaimer: I am not a lawyer. This subject has intrigued me over the years and I have studied what information I could find applicable to Canada and Australia on the subject.

The answer to this question depends entirely on the country and/or state/province the contract was formed in. Generally, any unresolvable IP dispute between employer and employee/ex-employee will need to be brought before a court, because there are a large number of factors to consider beyond the terms in the original contract. In practice, most edge cases would rarely ever see a courtroom unless there was a clear breach of proprietary code (such as making similar code available to competitors, or a developer choosing to become a competitor) or other breaches of contract, such as employees that try to keep the cream of their work for themselves and deliver crap to their employer, or that try to extort employers for more money for "better" solutions they claim to have developed on their own time.

Things that a court will factor in: the history of the development of the IP, the specific terms within the contract relating to IP, the seniority and responsibilities of the employee, and how the IP relates to the core business of the employer.

A sticky point between employees and employers is often the contract. I've often seen IP ownership clauses that state something as broad as "all IP created by the employee during the course of employment is unconditionally transferred to the employer." This is a very broad statement that can be interpreted a number of ways. (Hence the need to involve lawyers & case history.) "During the course of employment" could be taken to mean "during work hours", or it could be taken to mean "from the moment you sign the contract until the contract ends." The best course of action an employee can take is to *not* sign such a contract until the terms are clarified and the clause is amended. In my experience I have seen broad clauses like this in most contracts I have been presented with. As someone who works on a number of side projects, this is the first section I look at when presented with a new contract. Usually the client has no problem clarifying the contract, as the clause is a common part of the boilerplate that their lawyer provides. On its own, a clause like this is generally unenforceable, as a court will likely decide that, given the role of the employee and the compensation awarded to them, it forms an unreasonable restraint of trade on the employee.

Two hypothetical scenarios to consider these factors:
A junior software employee signs on with a company and gets delegated minor tasks such as commenting code, testing, and some trivial work in a .Net shop. He has an interest in learning technologies such as ASP.Net MVC and developing websites on his own, and while tinkering on his own time and development license he comes up with an interesting website/service that he hosts himself. If his contract had such a general IP ownership clause and his employer attempted to seize the copyright/IP for that website, the claim would likely be defeated in court. The employee's level of responsibility, and the distinct difference between what he contributes in the course of employment and what he built on his own, would show that the employer has no grounds to claim the IP. An employer cannot claim copyright over a fiction book you write on your own time, or that wicked recipe for Chili Con Carne. They cannot claim IP you work on that is unrelated to the type of work and technologies they employ you for.

A senior software employee signs on to work with a company building a workflow engine and designer interface for the company's line of products. After a couple of years of working on the system he decides that he can build a better workflow engine and/or designer himself, and sets out to write one from scratch on his own time and resources. Under a similar IP agreement, an employer challenge to the IP would likely be upheld in court. The employee has applied intellectual property, in the form of design and implementation created or learned in the course of his employment, to a product that falls directly in line with his employer's line of work. Another example that would likely fall under the same principles would be an employer choosing to scrap a project and an employee deciding it still had merit and beginning to develop it on his own.


But the contract is not the only consideration. The history of the IP is considered: who came up with the idea and when, and how much of the idea was developed at work or using company resources. (Technically this can include discussing the IP with other employees.) Generally it is a good idea to keep a log of when you are working on the IP, and to avoid involving any company equipment or software licenses, or discussing the project's features with other employees during work hours. Using company equipment, or doing work on the IP during paid company hours, is a strong determining factor in settling IP disputes in the company's favour. However, where the contract does not state that the company is entitled to employees' IP, this alone often doesn't grant the IP to the employer.

Contracts that are quite explicit that creations outside of work hours are considered the IP of the company can be enforceable in court. However, the other mitigating considerations would still apply, and such terms would generally be reserved for very high-level roles such as application architects. Companies attempting to apply conditions like these to relatively low-responsibility, low-paying jobs can find these clauses ruled unenforceable.

What about scenarios where your contract allows you to keep the IP on stuff you do on your own time, and you come up with something that would be of use to the company? This is a scenario where you need to tread carefully. If the project is something not directly related to the work you are doing (such as a timesheet tracker, or maybe a simple queuing system for the integration environment) then the best thing to do before utilizing it at work is to request permission and grant the company a license to use your product, whether you grant it for free or they agree to a price. This way it is clearly defined that they have no entitlement to the IP. You're better off not attempting to build something related to the business on your own time without explicit, prior written permission from the company to spend your own time on a given complementary application. (I.e. an add-on, diagnostic tool, etc.)

Regarding some of the comments on Ayende's blog about people who don't have time or energy outside of work, but do build plenty of spikes and such while at work: these almost always fall under company IP, and whether you can use this code for your own benefit will depend entirely on your employer. Many employers will grant you permission to use spike code for your own work, or to submit it to something like CodeProject if you ask, as long as it's not business-specific. If you do intend to use such code as samples for securing a new contract or role with another company, it's a good idea to think ahead and obtain permission for interesting bits as they come up, not the week you're expecting to be going in for interviews. :) Resources like CodeProject are fantastic for this sort of thing because they give you a valid, innocent reason to ask to use the code, and you can point potential employers at your CodeProject profile for examples of your work. (Bonus kudos if you can get some complimentary comments associated with it.)

Sunday, September 11, 2011

A recent example of the value and purpose of unit tests.

Just last week I was approached by our BA (or rather, the individual in a BA-ish role) who was tasked with documenting our testing process, and in particular our unit testing strategy. He wanted to know what our unit tests did, and how he could express their value to the client. I had just finished chasing down an interesting issue that surfaced as I wrote unit tests to exercise a recent feature, and it made the perfect example of what the unit test suite did and how it added value to the project.

A bit of background into the issue:
The business requirements were that a service would be contacted to retrieve updated events for an order. Any events that weren't already associated with the order would be added to it. The application would then report on any new events associated with the order.
*Events have no distinct key from the service; they are matched up by date/time and event code.
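
To make that matching rule concrete, here's a minimal sketch of what such an event type might look like. The class and member names are my own placeholders, not the project's actual code, but equality semantics along these lines are what let LINQ's Except() match events up:

using System;

public class OrderEvent
{
    public DateTime OccurredAt { get; set; }
    public string EventCode { get; set; }

    // No distinct key from the service, so equality is defined
    // as date/time plus event code.
    public override bool Equals(object obj)
    {
        var other = obj as OrderEvent;
        return other != null
            && OccurredAt == other.OccurredAt
            && EventCode == other.EventCode;
    }

    public override int GetHashCode()
    {
        return OccurredAt.GetHashCode() ^ (EventCode ?? string.Empty).GetHashCode();
    }
}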

The code essentially did the following (note that it returns the IEnumerable directly):


var recentEvents = EventService.GetEventsForOrder(order.Key);
var newEvents = recentEvents.Except(order.Events);
foreach (var newEvent in newEvents)
{
    order.Events.Add(newEvent);
}
return newEvents;

Based on the requirements I scoped out several tests (a sketch of the second one follows the list). Ensure that:
Given the event service returns no events, no new events are added or returned.
Given the event service returns new events and the order contains no events, all events are added and returned.
Given the event service returns new events and the order contains one with matching code and date/time, only the new unmatched events are added to the order and returned.
Given the event service returns an event with matching code but a different date/time, the new event is added to the order and returned, without replacing the existing event.
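
For illustration, the second test might look something like the following. This is a sketch assuming NUnit and Moq, and the Order, OrderEvent, IEventService, and OrderEventUpdater names are placeholders of mine rather than the project's actual types:

[Test]
public void UpdateEvents_OrderWithNoEvents_AddsAndReturnsAllNewEvents()
{
    // Mock the event service to return two new events.
    var serviceEvents = new List<OrderEvent>
    {
        new OrderEvent { OccurredAt = new DateTime(2011, 9, 1, 8, 0, 0), EventCode = "PICKED" },
        new OrderEvent { OccurredAt = new DateTime(2011, 9, 1, 9, 30, 0), EventCode = "SHIPPED" }
    };
    var eventService = new Mock<IEventService>();
    eventService.Setup(s => s.GetEventsForOrder(It.IsAny<int>())).Returns(serviceEvents);

    var order = new Order(); // contains no events
    var updater = new OrderEventUpdater(eventService.Object);

    var newEvents = updater.UpdateEvents(order);

    Assert.AreEqual(2, order.Events.Count); // the events were associated with the order...
    Assert.AreEqual(2, newEvents.Count());  // ...and returned for reporting. This is the assert that failed.
}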


Adding the 2nd test I was surprised to find it failed. The code is de-coupled and the EventService is mocked out to return two events. There were two main asserts: one to assert the order contained the new events, and a second to assert the return value contained the new events. The order assert showed it contained the two new events, however the returned set had a count of 0. I'm actually pleased when a test fails, but this was a momentary "WTF?" followed shortly by a "well, duh!". I was returning the IEnumerable from the Except() expression, and I had added the items to the very list driving the Except; any further iteration of the IEnumerable would see no items, since the matches were now in the list.
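
The gotcha boils down to deferred execution, and it can be reproduced in a few lines independent of the order code:

var existing = new List<int> { 1 };
var incoming = new List<int> { 1, 2 };

var newItems = incoming.Except(existing); // deferred query; nothing is evaluated yet

foreach (var item in newItems)  // first enumeration yields 2...
    existing.Add(item);         // ...and mutates the list that Except() compares against

Console.WriteLine(newItems.Count()); // second enumeration re-runs Except(): prints 0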

The issue was easy enough to fix with a ToList() call, and I felt it warranted a simple comment to explain why the fixed code was done that way, in case someone later went in and tried refactoring it back to just use the enumerable.
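
For reference, the fixed version would look something like this (reconstructed from the snippet above; the comment wording is my own):

var recentEvents = EventService.GetEventsForOrder(order.Key);
// Materialize the new events *before* they are added to order.Events below.
// Except() is deferred, so enumerating it after the Add() calls
// would yield an empty set.
var newEvents = recentEvents.Except(order.Events).ToList();
foreach (var newEvent in newEvents)
{
    order.Events.Add(newEvent);
}
return newEvents;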

This gave me a perfect situation to demonstrate to the BA exactly how the unit tests reflected requirements within the application, and how they served to guard those requirements from unexpected future changes. I integrated the two tests with the working code to show how the CI environment reported that all tests pass, then re-introduced the buggy assumption to show how the tests picked up the error.

The other interesting thing to note was that I also showed the BA our NCover results, and was a bit surprised to see that with just the first two unit tests that block of logic was reporting a test coverage of 100%. However, I showed that the "touch" count was showing a lot of 1's, indicating that a single test was touching many parts of the code only once. This is a warning flag to me that the code really isn't being exercised. If I were solely concerned with the coverage percentage I could have left the tests at that, but I knew the other two scenarios were not represented, so I demonstrated how the stats changed by adding them. The test coverage remained 100%, however the touch counts increased to show a minimum of 3 and an average of 6 to 9. It's not a perfectly reliable indication that the coverage really is complete, but by thinking through the scenarios needed to exercise the code, I'm quite confident that the code is reasonably guarded against the unintentional introduction of bugs.

Wednesday, September 7, 2011

Unit Tests Aren't Expensive

A short time after developers start adopting unit tests (by which I mean committing to unit tests and aiming for good test coverage), grumblings will start. Someone will want to make a design change, and by doing so they break a large number of existing tests. All of a sudden it sinks in that now that there are unit tests in addition to the existing production code, changes like this look twice as big, or bigger. This can be a crisis point for a project. Having unit tests appears to mean that code is less flexible and change is significantly more expensive. This attitude couldn't be farther from the truth. Unit tests aren't expensive. Change is expensive.

The first thing we need to look at is why the unit tests are breaking. Often when developers start writing tests, the tests can be overly brittle merely due to inexperience. In these cases it is a learning experience, and hopefully the fixed-up tests will be a little more robust than they were before. Still, even with well-sculpted unit tests guarding the code's behaviour, developers will find reasons to justify changing that behaviour. The presence of unit tests, now broken unit tests, doesn't make that change expensive; it just makes the true cost of that change visible. Making behaviour changes to an existing code base (one that has already committed to some form of acceptance testing, and likely is already out in production) is a very costly exercise. Breaking unit tests demonstrate what the cost of that change is, up-front. Without the unit tests, the deposit you pay up front for the change is smaller, but the rest of the cost is charged with interest in the forms of risk and technical debt. Unit tests show you exactly what is dependent on the behaviour as it was when you first committed to it. You have a clear map not only to fix the tests, but also to ensure that you can fix the code documented by the tests to suit the desired new behaviour. Without that map you're driving blind, and only regression tests are going to find the spots you miss if you're careful, or it will be your customers that find them if you're not.

So, how do you keep the cost of change under control? An obvious answer would be to identify all change before coding starts and allow no change until coding finishes. This would be a motto for BDUF (Big Design Up Front) advocates, but in the realm of typical business software it really doesn't work. By the time coding starts and the customer actually sees an implementation, they are already thinking about changes. The Agile solution is to implement only the minimum functionality you need to meet requirements as you go. Often the biggest contributor to these kinds of large behaviour changes is development teams trying to save time in the future by implementing code that isn't necessary now. Classic examples include over-engineering solutions, scope creep, or committing to a particular technology/framework on the basis that it may solve problems you might encounter, just not right now. Another way to avoid large change costs is to embrace constant refactoring. Get in the habit of including small improvements in your code as you implement new features. Introduce base classes for common code, and delete stale code with a vengeance. Any improvement that reduces the overall size of the code base should be a very high priority. Don't leave things for later, because that is debt that starts accumulating interest as new functionality comes to depend on it, or reproduces it for the sake of consistency.

On a side note:
Writing software can be like a game of chess. You don't walk up to a match with a memorized set of moves that will guarantee victory. This isn't tic-tac-toe. You've got to be prepared to adapt to the opponent that is your customer, your manager, your budget; and you've got to be prepared to sacrifice pieces in the hopes of winning the game. A developer with over-engineered code is like a player that values each piece on the board too highly. A lot of time and sweat has been spent on the code, and they cannot justify sacrificing it when the demands of the game deviate from the original plan. The result is putting even more sweat and time into the solution, and the resulting moves don't win you the game; they lose it, and the customer walks away.