Friday, January 30, 2009

Performance Reviews – part II.

Doing performance reviews is expensive, in both time and opportunity cost. So if you are going to do them, remember that the most important factor in designing a performance review system is understanding the objective you are trying to accomplish. If it is to rank employees 1 to n, then you will need some form of rating system. But that is a crappy goal, and I wonder why companies feel it necessary. It doesn't improve the product, and it doesn't make developers more efficient or more productive.

If the objective is to drive specific behaviors, that is simple enough to do. Ratings based on easily measured objectives (such as focusing on bug fixing) will ensure that those metrics are met, most of the time regardless of the cost/benefit to the organization. And if the annual reviews set up competing metrics (QA rated on how many bugs they find, developers rated on how few bugs they produce), then progress will grind to a halt as teamwork goes right out the window in this zero-sum game.

If the objective is to help people grow in their careers (and this *should* be the objective), then the reviews should not be annual, but much more frequent, quarterly at a minimum. Annual reviews tend to focus most heavily on the most recent quarter, as that is what is freshest in the minds of both the manager and the developer. If you are going to have quarterly reviews, then they have to be lightweight. Don't use ratings, but rather focus the review on strengths and weaknesses, and how those fit into the company's strategic plan and the developer's personal career plan. Make sure the developer is aware of how they are progressing, and knows which areas to concentrate on. Using SMART objectives can be effective here, but they have to be focused enough to be of value. Long-term macro objectives (n dollars in gross sales) are not effective at the team level, and individual-level objectives that are SMART, annual, and relevant are a myth. Quarterly objectives cover a short enough time frame to allow for adjustment to market/company directions without causing the developer to rebel ("I am not meeting the annual objective, therefore my bonus is in peril, and I won't be able to afford that family vacation I have been promising the spouse").

Finally, be consistent in the review data. Do not mix a quarterly review of results with an annual peer review and a semi-annual daily-behaviors review. Consistency will help drive the message (whatever message you want to deliver as a manager) more effectively. Have the same information available for each review. If you want to do peer reviews as part of the process, then have them at every review. If the data are not important enough to review quarterly, then do not include them at all.

Wednesday, January 21, 2009

Semi-Bi-Annual Review Time - Part 1

It is that time of the year again, time for the Performance Review. If you follow any Atlassians on Twitter, you already know the opinions of many on this topic. They range from the light-hearted dismissal ("trying to find enough synonyms for awesome") to the resigned ("[the] whole agonising performance review self-assessment bullshit") to the cynical (see the remainder of this post).

This post is not going to be about Atlassian's review policy, but rather about what I believe a review policy should be.

I have been on the evaluatee end of 13 annual reviews while working in IT, and have been on the evaluator end for 8 years. I have done reviews with Excel spreadsheets, Word docs, and complicated performance review tools. I have rated people on three-, four-, and five-point scales. I have filled out self-assessments based on SMART objectives, B-SMART-R objectives, free-form objectives, and no objectives. For the most part, it has been crap.

So how should a semi-annual or annual review look? Here is part 1 of what I recommend.

Don't give ratings/scores.
Scoring systems suck. The mere existence of a rating/scoring system will render the face-to-face meeting with the employee less effective. If you state the rating at the beginning of the meeting, then regardless of the scale, almost everyone who doesn't get the highest rating will spend the rest of the review wondering why not. If you wait until the end of the review to tell the employee the rating, everything you say in the review is ignored as he or she waits to hear it. It doesn't matter if you rate on a three-, four-, five-, or ten-point scale; those who did not get the highest rating can be demotivated. There is a massive amount of effort (at least for good managers) spent on preparing for the review, and the return on that effort is minimal if the person is busy composing counterarguments in his or her head on why they should be rated higher.

Rating systems cause the end of the review session to be focused on "how do I get the highest rating" questions, rather than "where can I improve and be a better developer" questions. The rating system creates the false premise that if someone ticks all the correct boxes they will get the highest rating, and that you as a manager can articulate the exact requirements to tick those boxes. Of course you can't. It is impossible for everyone to be in the top 10% (because rating systems always cap how many people can be in the top bracket), so you cannot give them SMART and/or B-SMART-R objectives that will guarantee a place in the top rating group. It also becomes harder to articulate that true superstar performers excel above and beyond any set objectives. They have the ability to solve a problem more quickly, ask more probing questions, etc. You can't score that using SMART objectives.


A better option is a binary scale: successful or not-successful. The biggest benefit is the refocusing of the review on the details of what the person is doing right and what needs to be improved. With this binary scale, it becomes even more essential that the manager spends time doing a thoughtful, thorough and comprehensive review - rather than giving everyone the same general comments and using a rating system as the differentiator. Couple a binary rating with frequent detailed feedback, and you have a much better evaluation system.
Update: I forgot to mention that the rating system is not improved with cool/impressive sounding names for the ratings. Having 1 be "Totally Awesome" and 2 be "Completely Awesome" will not make the person who got a two any happier.

Thursday, January 8, 2009

Bad Analogies Suck.

One of the things I appreciate about working at Atlassian is that everyone is constantly working on self-improvement. In pursuit of this goal, the Team Leads meet once a week to review and discuss a current leading industry monograph (otherwise known as Book Club).

I learned early on in my academic career that books are not meant to be read as narratives; rather, books are an argument. The author is trying to persuade you that his interpretation of events and/or current practices is correct. One of the worst ways to do this is through the use of bad analogies. And yet, bad analogies seem to be one of the predominant modes of writing in IT books. Bad analogies distract from the argument, or worse, tend to disprove the very point the author is trying to make.

In Agile Estimating and Planning, Mike Cohn has a chapter on buffering plans for uncertainty. When planning a project with a firm deadline and an absolute set of functionality, he uses the analogy of catching a flight. All the steps (driving to the airport, passing security) must be completed, and the deadline is set before the project even starts. To ensure the project gets done, he suggests leaving earlier for the airport. The conclusion reached by using that analogy: start your project sooner. In software development, it is usually not feasible to solve a scheduling problem by going back and starting the project sooner. The normal solution is, of course, to delay the release (take the later flight), but the point of his argument was to show how Agile planning can help make that first flight. It is better, however, than the next analogy I encountered at Book Club.

In Release It!, Michael T. Nygard uses the analogy of a car that fell apart on the test track to show that even when the process works correctly, bad products can result. In other words, he argues that a failed QA effort proves that successful QA doesn't guarantee a good product. Now, passing all QA tests doesn't mean the product won't suck, but that point is lost when the example contradicts the argument it is meant to support.

The irony is that both of these books are really good. They have good insight, and provide practical guidance on how to do agile planning and build robust systems respectively. I just wish they didn't make it so challenging to agree with their arguments.

Monday, January 5, 2009

Sarcasm - one of the many services I provide.

Cynic (n.) - a person who shows or expresses a sneeringly cynical attitude.
- One who expects things to go wrong.

Cynical Software Manager (n.) - me.

Welcome to my blog. The idea for this blog came as a result of many a conversation with Mel, the Program Manager at Atlassian. Over the course of several IMs and actual face-to-face discussions, whenever I found myself discussing something stupid, I would make a smart-ass remark and call it the title of the next chapter in my forthcoming management book. Given that I have no intention of ever writing a management book, those chapters will now be blog posts. The recycling saves me from having to think of new interesting things to write about; I can just write about old interesting things. And remember,
if it was written in a blog, it has to be true.

The title of this "chapter" is in honor of my penchant for being a sarcastic smart-ass when dealing with people, even those I like. Especially those I like, in fact. And the reason is that sarcasm can be extremely effective in interpersonal communications (or at least I think so; after all, I am in software development, so I don't ever get around to talking to real people). An example, you ask, of where sarcasm can be useful in the workplace? The best place is in meetings, where the inherent humor in sarcasm can deflect any notions of personal criticism. It can also be effective in pointing out patently obvious points that were somehow missed, or in illustrating unreasonable expectations. I hope it works; otherwise I have been pissing people off for real for a long time now.

And I am cynical enough to realize that my meager sarcasm skills pale in comparison with the true masters, the cynics that believe nothing will ever get better or be done right. Since I am a very optimistic cynic, I really believe that the project will be a success, and will be finished only a short time after the scheduled release date, once the inevitable happens and is corrected.

It is also difficult to be a full-blooded hard-boiled cynic (first definition) when you work with great people at a great company. And I am not so cynical to be disappointed about that.