eurica

numbers for people.

October 28, 2014
by dave

Roadmap and prioritization basic axioms

This is part of an ongoing series to document what I learned growing PagerDuty from 6 people to over 100 in 3+ years.

There are 4 necessary ingredients every roadmap needs to be successful:

  1. Benefits
  2. Costs
  3. Hopes and dreams
  4. A defined process for horse-trading

The MVP for a roadmap is a list of features. The size of each of the items on the list depends on how much your organization struggles with prioritization, but the list needs these basic characteristics:

Costs & Benefits
These are your basic ingredients for every item. There are a lot of flavors of each to choose from, but they’ll both need to be represented.

In cases of uncertainty, round costs up and benefits down. For the top items on the list, it’s up to the product team to tidy up the benefits and engineering to narrow the cost estimates to the point where you’re comfortable starting work.

Breaking down the benefits:
Granularity gives better estimates; I like breaking benefits down as follows:

  • Immediate customer benefit: how much of a splash this feature will make with current customers when we launch it, and how many sales we can close because of it.
  • Urgency: how much better it would be to launch this feature now vs in the future.
  • Strategic impact: how much this contributes to our long term vision.
  • Unfair advantage: why we, rather than someone else, should be the ones to launch this feature.

The first 3 items are very similar to User-Business Value, Time Criticality and Risk Reduction/Opportunity Enablement Value from agile development’s Weighted Shortest Job First ordering. Your unfair advantage is described well in the Lean Canvas, but can also be thought of as building vs partnering.

I also like to call out the top requests, e.g. the #1 support request, the #1 chance to learn, the most used feature, etc., to give those features bonus points.
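
To make this concrete, here is a minimal sketch of how these benefit ingredients (plus a rounded-up cost) could be rolled into a WSJF-style score. The 1-10 scales, field names, and example items are my own illustration, not anything from an actual roadmap:

```python
from dataclasses import dataclass

@dataclass
class RoadmapItem:
    name: str
    customer_benefit: int   # immediate splash with current customers / sales
    urgency: int            # how much better now vs. later
    strategic_impact: int   # contribution to the long-term vision
    unfair_advantage: int   # how uniquely positioned we are to build it
    bonus: int              # e.g. #1 support request, most-used feature
    cost: float             # effort estimate, rounded up when uncertain

def score(item: RoadmapItem) -> float:
    """WSJF-style ordering: total benefit divided by cost."""
    benefit = (item.customer_benefit + item.urgency
               + item.strategic_impact + item.unfair_advantage + item.bonus)
    return benefit / item.cost

backlog = [
    RoadmapItem("Single sign-on", 8, 5, 7, 4, 2, cost=6.0),
    RoadmapItem("Mobile push alerts", 6, 9, 5, 7, 0, cost=4.0),
    RoadmapItem("New reporting UI", 4, 3, 8, 3, 1, cost=8.0),
]
for item in sorted(backlog, key=score, reverse=True):
    print(f"{item.name}: {score(item):.2f}")
```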

Costs
There are several types of costs:

  • The raw effort in terms of resources
  • Future ongoing maintenance this feature will cause
  • How critical or scarce the required resources are to you (e.g. whether the feature ties up your most constrained people)
  • How much institutional overhead you’ll need to navigate with this feature; this also includes how much effort changing the plan would be, especially if you’ve made external commitments to partners.

Costs aren’t always going to be fungible, so you may not be able to pursue all equally costed items interchangeably.

Hopes and Dreams
The list should have at least twice as many things as you can get to. The bar to being on the roadmap is very low — things at the bottom of the roadmap will be very vague, and adding them costs no more than the sentence it takes to write them down.

There is no upper limit on how many things you won’t do.

Process for re-prioritization
However you do it, there needs to be a regular and inclusive process to re-visit the roadmap. There are many valid possibilities:

  • Give each department some number of votes and do the items with the highest vote/cost ratio. This has the benefit of making explicit the relative weightings of e.g. Sales vs Customer Support in the organization.
  • Stakeholders can get into a room for planning poker
  • The product team can re-evaluate each row on a schedule
  • A benevolent dictator can look at the data and pick what they want

Engineering can be asked to generate detailed plans for features or to approximate costs with a gut check, but either way engineering needs to be heavily involved in generating costs.

For one’s sanity, just accept that there’s going to be an emergency entrance at the front of the queue and work to minimize it. It should be one person’s job to protect this queue and evaluate all queue jumpers (and reject most of them).

And then everything else…
A list of features is necessary, but it may be a long way from sufficient. It doesn’t convey a strategy or a vision, but it does help to get everyone pointed in the same direction.

October 26, 2014
by dave

What I’ve been reading (Fall 2014)

Dataclysm: Who We Are (When We Think No One’s Looking)
I’m a fan of OkTrends as well as Rudder’s band Bishop Allen, but I wasn’t really wow-ed by this book. Although it was nice to learn that some of the stats that seemed wrong were USA-specific (the racial bias is less than half as pronounced in Canada), and it’s a good explanation of and counter-weight to WEIRD studies, the book could’ve benefitted from more aggressive editing:

“In any event, when I talked about the data as a flood, way back, I perhaps didn’t emphasize it enough: the waters are still churning. Only when they start to calm can people really know the level and make good the surfeit.”

Still, it’s always great to get at new datasets even if the conclusions aren’t earth-shattering: that men find 20-year-old women consistently appealing isn’t rocket science — and 50-year-old men who are on dating sites and rating profile pictures might be unrepresentative in their own way.

Interestingly enough, Rudder includes Nate Silver along with Facebook and Google as “three of the biggest forces in modern data”.

The Signal and the Noise: Why So Many Predictions Fail— but Some Don’t
This is the book that I wish I wrote — even if it’s light on actionable advice, the math is sound and consistently interesting. The premise revolves around Bayes’ Theorem:
P(A|B) = P(B|A) × P(A) / P(B)
Silver’s plan of action: Start with your theory and how likely it is, and update it according to new information. It’s a subtle but important update to the scientific method that keeps all theories on the continuum between true and false. The chapters break down into real-world examples from baseball to earthquakes. I read the book in one sitting (high praise!) and completely forgot to take any notes, so I’ll have to go through it again.
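
As a minimal sketch of that loop (with toy numbers of my own, not Silver’s): start with a prior, fold in each new observation, and your belief moves without ever snapping to 0 or 1:

```python
# Posterior probability of a theory after one piece of evidence (Bayes' rule).
def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

belief = 0.20                  # initial credence in the theory
for _ in range(3):             # three independent observations that fit the theory
    belief = bayes_update(belief, p_evidence_if_true=0.8, p_evidence_if_false=0.3)
print(f"{belief:.2f}")         # ~0.83: much more likely, but still short of certainty
```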

The Hard Thing About Hard Things: Building a Business When There Are No Easy Answers
Ben Horowitz’s A16Z invested in PagerDuty, so I knew that he was a smart investor. This book focuses on Ben’s time as CEO of Loudcloud/Opsware from its founding in 1999, through the dot-com crash, until the sale to HP for $1.65B in 2007. It tells a very CEO-centric story, possibly because Horowitz managed to keep a lot of employees on board and working long after it seems they really should’ve moved on to greener pastures. After 8 years and many hundreds of millions in capital, it’s hard to imagine the rate of return for those employees was particularly exciting.

Empty Mansions: The Mysterious Life of Huguette Clark and the Spending of a Great American Fortune
I didn’t end up finishing this book. Interesting as it was to read about the early 1900’s, most of this book seems to be a painful read about 1 or 2 women spending vast amounts of money to not be happy.

What If?: Serious Scientific Answers to Absurd Hypothetical Questions
I mostly bought this book to support what-if.xkcd.com, and I haven’t regretted it for a second.

“What If? is one of my Internet must-reads, and I look forward to each new installment, and always read it with delight.” —Cory Doctorow, BoingBoing

October 26, 2014
by dave

Here’s an ignite talk & a podcast

I overstretched myself a little last week with two speaking engagements:

  • Speaking about “Bringing Advanced Analytics to DevOps”, Brian Gracely and I had a good 30-minute conversation on best practices in the ops space.
  • As the opening act for Gene Kim, I gave a 5-minute ignite talk on “4 Magic words to get DevOps into Enterprise” (Alignment, Auditable, Risk Reduction and Recruiting) for the closing ceremonies at the DevOps Enterprise conference.

Thanks to @SteveElsewhere for the picture.

September 15, 2014
by dave

2 reasons software usually takes longer than estimated

Like many people whose jobs involve getting software out the door, I think a lot about why we usually ship late (and never early). Unfortunately my theory is pretty mundane — the less accurate the estimate, the more likely it is to be late rather than early.

1. Errors don’t cancel out. A simple example: If half of the tasks in a project take twice as long as you expect, and the other half take half as long — rather than being on time, your project takes 25% longer than scheduled¹. Unfortunately, to compensate for 1 task taking twice as long as you thought, you need to come in 50% under for 2 other similarly sized tasks. The numbers are correspondingly worse if you’re off by a larger multiple.
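
A quick simulation (my own toy model, not real project data) makes the asymmetry visible: even when every task is as likely to come in at half its estimate as at double, the totals skew late:

```python
import random

# Each task's estimate is as likely to be a 2x overrun as a 0.5x underrun,
# yet the totals skew late, because overruns cost more than underruns give back.
random.seed(1)
n_projects, n_tasks = 50_000, 10
totals = [sum(random.choice([0.5, 2.0]) for _ in range(n_tasks))
          for _ in range(n_projects)]
print(sum(totals) / (n_projects * n_tasks))           # ~1.25: 125% of the estimate on average
print(sum(t > n_tasks for t in totals) / n_projects)  # ~0.83: most projects come in late
```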

2. The winner’s curse is very hard to avoid. Generally the same team working with the same tools in the same problem space is going to have very similar cost-benefit ratios for the top items on their backlog. The top projects on your backlog have costs of X1…XN and values of Y1…YN. Assuming that you’ve done your homework, you’ve ordered them so that X1/Y1 < X2/Y2 < … < XN/YN — in reality it’s likely closer to the truth that X1/Y1 = X2/Y2 = … = XN/YN. In words: you’re most likely to do the projects, or parts of a project, that you’re the most optimistic about — so half the time, you’re picking a project because you’re overly optimistic about how long it will take.
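
Here is a small Monte Carlo illustration of that selection effect, again with made-up numbers: every backlog item has the same true cost, but we only see noisy estimates and we always start the one that looks cheapest, i.e. the one we underestimated the most:

```python
import random

# Every item has the same true cost; we pick the one whose noisy estimate
# looks best, so the item we pick is systematically underestimated.
random.seed(2)
true_cost, n_items, trials = 10.0, 5, 20_000
overrun = 0.0
for _ in range(trials):
    estimates = [true_cost * random.uniform(0.7, 1.3) for _ in range(n_items)]
    chosen = min(estimates)        # the item with the best-looking cost/benefit ratio
    overrun += true_cost / chosen  # actual vs. estimate for the item we picked
print(overrun / trials)  # ~1.26: the chosen item runs roughly 25% over its own estimate
```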

What can we do instead?

One of my favourite aspects of Agile is the frequent demos with a process for re-prioritizing tasks, but even on a waterfall project there’s a simple rule of thumb that I like:

By the halfway mark² on a schedule:

  • Someone not working on the project/feature must’ve successfully used the feature
  • You have a finite, prioritized and shrinking list of remaining features and bugs
  • You have a line that separates blockers from non-blockers and most of your effort is going towards blockers

The goal is to control as much as you can — if you can’t predict when a project will be done, you can at least put yourself in a position to decide between launching on time or launching with all the bells and whistles (or somewhere in between).

Footnotes:

  1. 50% x 2 + 50% x 1/2 = 125%
  2. I remember once being asked to bid on a project that was “90% finished” and dutifully (and I strongly suspect, correctly) estimated that we’d have to basically start over and do 100% of the project.

August 28, 2014
by dave

Making a survey for tablets with Google Spreadsheets/forms

Google forms are a great tool to throw together quick feedback forms – we use an iPad with a form in the lunchroom to rate our caterer.

Great idea, poor execution. Most of them end up looking like this:

[Screenshot: a default, unstyled Google form]

We can do better. Take this form for instance. We can convince just about anyone to click on a form optimized for an iPad: Live demo

The great news is that the form is just HTML that you can muck around with, and it still submits straight into a Google spreadsheet. Open up your form and view the source. You only need 2 values: the form’s action URL and the input’s name:

[Screenshot: the form’s HTML source, with the action URL and the input’s name attribute highlighted]

Then stick them into HTML along the lines of the sketch below (and probably tweak the CSS). The text in the DIVs is what ends up in the spreadsheet.
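
This is a sketch of my own rather than the post’s original markup; the action URL and the entry.123456789 field name are placeholders you’d copy out of your own form’s source, and the styling is deliberately minimal:

```html
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
  .answer { font-size: 4em; text-align: center; padding: 1em; margin: 0.2em;
            border: 2px solid #ccc; border-radius: 0.3em; }
</style>
</head>
<body>
  <h1>How was lunch today?</h1>
  <!-- The action URL and the entry.123456789 name are placeholders:
       copy the real ones out of your own form's HTML source. -->
  <form id="survey" method="POST"
        action="https://docs.google.com/forms/d/e/PLACEHOLDER/formResponse">
    <input type="hidden" name="entry.123456789" id="rating">
  </form>
  <!-- Each DIV's text is what ends up in the spreadsheet. -->
  <div class="answer" onclick="submitRating(this)">Great</div>
  <div class="answer" onclick="submitRating(this)">OK</div>
  <div class="answer" onclick="submitRating(this)">Terrible</div>
  <script>
    function submitRating(div) {
      document.getElementById('rating').value = div.textContent.trim();
      document.getElementById('survey').submit();
    }
  </script>
</body>
</html>
```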

August 25, 2014
by dave

How much bad software development practices cost

I’m always jealous of other people’s datasets. Although Rally’s The Impact of Agile Quantified whitepaper is obviously targeted at getting people to adopt their product, it has some useful back-of-the-envelope values taken from almost 10,000 projects:

  • A dedicated team (>95% focus) is roughly twice as productive on individual tasks as one that’s sparsely focused (<50% focused on one project) (page 5)
  • The median team has 25% of its members switch teams every 3 months (page 6)
  • Team stability can increase productivity by 50%: moving from 60% of the team staying each quarter to 90% staying increases average story count from 50 to 75 (page 7)
  • Small teams (1-3) move slightly faster (+17%) but ship more defects (17% again) making it a wash. (page 12)

Worth taking a look at. Credit: Chris Gagné