eurica

numbers for people.

December 14, 2014
by dave

Certified Scrum Product Owner notes

We sent a truckload of PMs and engineers to a 2-day Certified Scrum Product Owner course from Jeff McKenna. I was a little surprised to hear that Scrum trainers aren’t allowed to change their slides once they’ve been approved — switching between the PowerPoint deck and the unsanctioned handouts was surprisingly distracting.

The class started off with an amusing bit of the Dunning-Kruger effect when we sorted ourselves in order of agile experience. Since most of us at PagerDuty had done the reading already, I found myself listening a lot to the background patter:

“I got out of coding because it’s always the same thing over and over.”

“I wanted to be the manager so I could be the one giving the orders.”

If nothing else, taking part in training is a great chance to better understand the median software management process, but needless to say, doing the same thing over and over under orders is not something that excites me. There were a few good discussions of “doing agile” vs “being agile”.

Here are my rough notes:

First up: You can’t improve what you can’t see, so we discussed a few variations of process control with 3 characteristics: you can see it, you understand what it means, and you have the power to do something about it. We mentioned the Department of Defense’s Capability Maturity Model, although in terms of military models, I prefer to think about how we can tighten John Boyd’s OODA loop.

A Scrum team should be:

  • 3-9 people working cohesively full time, empowered to say no when they don’t think that they can deliver.
  • The Product Owner is responsible for maximizing the value of the work being done.
  • The ScrumMaster is in charge of optimizing the team’s velocity; this is distinct from the Product Owner’s role — which seems to be one of the lynchpins of Scrum.
  • Part of the ScrumMaster’s role is to “manage impediments” — in my mind, that’s a job the Product Owner shares heavily in, especially when those impediments have root causes in other stakeholders.

The entire team works on one story at a time — this seemed to me like an artifact of having large stories or inefficient deploy technologies. For a team where one person can understand the whole of the team’s domain and modern deployment tools are in use, it feels like this would just add overhead — the 9-women-1-baby-1-month scenario.

The effort required to make a change should be proportional to (the original thought put in) multiplied by (the magnitude of current usage/dependencies).

When ordering projects by descending RoI, break ties in favour of doing the smaller ones first — the variance is lower, and you can release them sooner. But don’t do small things to the exclusion of big things.

“Burndown charts don’t help during a sprint”

Story Points:
Every story needs a value. 1 story point can be considered the minimum possible change, which is largely a measure of your overhead: a conversation, branching master, making a tiny change, running the tests, submitting a pull request, doing a code review. 0 points can be considered a story that can share that overhead with just about any other story — such as a text change to a part of the code that you’re already touching this sprint (conversely, a text change to a different part of the code may require its own conversations and overhead).

Targeting 40 story points per sprint is a good balance between generating too much detail and having enough stories to invoke the Law of Large Numbers — “the average of the results obtained from a large number of trials should be close to the expected value” — to generate relatively consistent sprint estimates. Personally, I prefer valuing stories 1, 2, 4, 8, 16 &c over the Fibonacci sequence, but the important thing is that the delta between story sizes increases with the sizes [1].
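To see why mixed story sizes still produce consistent sprint totals, here’s a quick simulation (not from the course; the story sizes and noise model are invented):

```javascript
// Simulate sprints of ~40 points built from stories of size 1/2/4/8,
// where each story's actual cost is 50%-150% of its estimate.
// The totals cluster near 40: the Law of Large Numbers at work.
function simulateSprint(points) {
  let actual = 0;
  let remaining = points;
  while (remaining > 0) {
    const size = Math.min(remaining, 1 << Math.floor(Math.random() * 4)); // 1, 2, 4 or 8
    remaining -= size;
    actual += size * (0.5 + Math.random());
  }
  return actual;
}

const sprints = Array.from({ length: 10 }, () => simulateSprint(40));
console.log(sprints.map(s => s.toFixed(1)).join(", "));
// Totals cluster near 40 even though individual stories vary widely.
```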

A minimal story probably involves at least one branch statement. “I can enter my first name” and “I can enter my last name” don’t make for independent stories.

Estimate the future using yesterday’s weather: you’ll get the same amount done next sprint as last sprint. This is in sharp contrast to the worst project I ever worked on, where every iteration we predicted 40 points and did 30 — so management would ask how we planned to do 50 points next sprint.

Don’t have a “done-done” state. The definition of done includes testing to the point where people can change the code in the future and not break any implied (and hence untested) assumptions — otherwise you’ve made the code more fragile. I suspect the ideal level of testing isn’t measured by percent code coverage but rather by having emotionless deploys: even new employees can make changes, and if the tests pass, you’re completely comfortable deploying. Code reviews then only serve to enforce code cleanliness, confirm that the new code works, and ensure the same level of testing going forward.

McKenna believes there’s too much tolerance of bugs in Silicon Valley today; I’m not sure how much I agree — there’s certainly a lot of crappy software being written, but when you’re writing something that benefits from network effects (Twitter, Facebook) rather than something where you are solving a particular problem that touches on revenue for businesses (PagerDuty, Salesforce.com), I can understand the urge to increase your feature velocity for 95% of your users rather than dot the i’s and cross the t’s for the remaining 5%.

The value of fixing technical debt is measured by its effect on velocity — and don’t forget to factor in when that piece of code is estimated to be end-of-lifed. Technical improvements can address user stories too — performance is a feature. Everything worthwhile can be demoed to the right audience: these tests failed and now they pass, deploys take 40% less time, 4000 lines of code comply with our style guide, &c.

Long term planning involves documenting and socializing the “Approach” rather than the “plan”. Everything needs acceptance criteria, “3 people on the team have read and understood the approach” is a good one.

Dan Pink’s Drive: The surprising truth about what motivates us raised an interesting point — paying people enough “not to think about it” is increasingly hard in San Francisco.

“10-20% of people shouldn’t do agile” (based on the description of those 10-20%, it sounds like they actually shouldn’t do software). Converting a Project Management Professional (PMP) to a ScrumMaster is a pathway to strife & conflict.

The Sprint Review involves a demo — the product owner encourages people to come.

“Buffer the promise, not the plan” — aim for accurate estimates, but don’t promise them: you’ll run over 50% of the time even in a perfect world.

Scaling scrum cross-team & remotely: have a scrum of scrums where the people most exposed to cross-team issues check that no one is blocking anyone else. Dependencies highlight problems in your team structure. Keep teams together; if the team has to be remote, leave the product owners at HQ.

If the house is on fire every week, it’s not a fire, you just live in a particularly hot house. Schedule a fire drill every week, make it a 0 point story, and track how much time is spent on it.

One pseudo-strength of the waterfall method is that you can document a huge pile of potential functionality that satisfies every stakeholder and then react with shock when most of it doesn’t get done — with agile, you re-prioritize actual functionality every sprint, which can mean a lot more contact with stakeholders.

One thing that I disagreed with was the verbiage around “Must do” vs “Should do”, “Could do” and “Won’t do”.

Ideally 100% of the time should be spent on “Should do”, since “Must do” generally arises when the product owner (or the stakeholders) missed something coming down the pipe. When the iPhone came out, RIM/Blackberry had a boatload of things that they needed to do — likely more than they could in a timeframe that mattered; it was up to the product team to tackle those things proactively. Sometimes an external actor can dump something on you without warning (the European Union could require your software to track users with actual home-baked gingerbread cookies), but those are rare — and if they aren’t rare in your sector, you should invest in lobbying and research.

“Should do” is another word for “Has a positive return on investment based on our velocity and market”. So a “Won’t do” can be promoted if the market or our velocity improves, and similarly a “Should do” can be demoted if we slow down or our market position weakens.

“Could do” is just a holding area — we make software, we could do just about anything. It’s up to the product team to understand the potential RoI, in whatever terms & accuracy your organization operates in. The “Could do” list will likely be the longest list by far. Predicting that some fraction of the coulds will become shoulds seems like a convoluted way of padding your estimates.

In the exercises we worked on, the class seemed far too eager both to accept that a feature was mandatory and to plan on doing some fixed fraction of the things that we could do. Planning on doing half of all the things that we’ve put on the backlog to satisfy stakeholders seems like asking for trouble.

Other:
Things to look into: Agile portfolio planning, JIT Budgeting & Beyond Budgeting, Waltzing With Bears: Managing Risk on Software Projects

The class didn’t cover product discovery, which is the area that I’m most interested in. It was hard for me to charge into some of the examples without doing a better job of justifying what we were trying to build.

  1. I really don’t understand the obsession with using powers of 1.618 over powers of 2

November 30, 2014
by dave

Optimizing Who’s Hiring posts on Hacker News

PagerDuty is a data-focused company and recruiting is no exception [1]. We’re growing quickly, and one tiny corner of our recruiting effort has been to post in Hacker News’ monthly Who’s Hiring thread. Over the past 6 months, I’ve been tracking our click-through on each of our posts; here’s what I’ve learned:

A typical post would get 100-200 clicks:

  • I tested the waters in May with a short post that garnered 90 clicks
  • In June, I botched the formatting, for 71 clicks [2] on the job-specific links. An additional 25 out of 135 clicks on http://pduty.me/jobshnjune came from that post.
  • In July, my post got 195 clicks [3]
  • In August, my post got 198 clicks [4] & Shack’s reply got another 80
  • In September, I dropped the ball and 2 engineers posted; click-through was low
  • An offhand comment that I made in response to a project that one of our customers posted got 206 clicks.
  • In October, our post was made by an engineer later in the day, with comparatively low clicks.

Most clicks come in the first few days
Hardly surprising, although if you want to stand out as an applicant there might be some benefit in applying to a multi-week-old posting.

Post early
Again, this shouldn’t be surprising, since the earlier you post, the more people see it. But I was surprised by the magnitude: the October posting was especially hurt by being posted later — and the thread going live at 6am Pacific time doesn’t help us on the west coast.

Everything should have an owner
I’m less concerned with the hundreds of clicks we missed by publishing late than I am with the duplication of work. We’re still very much a do-ocracy — which is great for so many reasons — but it can also mean someone re-writes the job posting rather than cutting and pasting from the wiki.

Formatting and content matter
The time spent improving the text and formatting of the post improved click-through, so I’d recommend writing your post ahead of time and recycling the best content (if you don’t have a great paragraph about why people should work for you, write that now).

Have a hook
Our Toronto job postings do very well considering the distribution of HN readers between Toronto and the Bay Area. We have a well-articulated hook for our Canadian recruiting: take the TTC to work in Silicon Valley. SV and SF are fiercely competitive places, even for great companies trying to find great people.

Conclusion: the impact was minimal
Out of the 1000 clicks that I’ve tracked through these comments and the ones on my blog, we’ve gotten 10 leads (some of which were promising) but 0 hires. In fact, we (originally) didn’t post in November’s thread, out of respect for our team’s work/life balance. (One of our engineers posted anyway; it’s still a do-ocracy.)

I suspect that the readers of HN, even the ones browsing a hiring thread, are typically happy where they are and so need more than a passive posting in a thread with 100 other companies /speculation.

Unapologetic call to action:
We’re smart people who are great to work with, join our team in SF or Toronto.

  1. It is an exception in the sense that everything I’m talking about here is coming from public data sources so I can write about it, which is kind of cool.
  2. 31+13+18+9
  3. 23+18+67+62+25
  4. 44+97+34+23

November 25, 2014
by dave

A limited defense of the 2 egg problem

There’s a lot of backlash against using problems to assess cleverness in interviews. So much so that it looks like the site that reminded me to write this article has changed its example question away from the derided “2 egg problem”.

I believe it is the prime example of everything that is wrong with engineering interview culture. Not sure if that makes it ideal material for a site like this or pointless trivia. — Top comment in a Hacker News thread

Before I was the charming well-rounded jock that I am today[citation needed], I was essentially a mathlete, so questions like this give me the same warm fuzzy feelings as reading Dr Seuss books might give to a liberal arts major [1]. But I’ve also done hundreds of interviews, and there’s one more thing that I’ve found:

How you solve an arbitrary stupid problem is a valuable thing to find out.

If you asked me this question, here’s the genre of answer that you’d get:

First call out your assumptions

  • That each floor is equally likely to be the fatal floor. This is a biggie: common sense dictates that the LD50 of a dropped egg is less than a meter, so the naive approach will work best (the egg will break on floor 1 or possibly 2, and we can all go home).
  • You aren’t looking for a mathematical proof that my solution is optimal — that would be a huge undertaking. I can definitely give you some possible bounds on the optimal solution, and we might get lucky, but it’s not a 60-minute problem. So we’re looking for the best solution I can find. This is valuable because it means we’re (correctly) focused on how I approach the problem.
  • The metric we’re optimizing is expected number of floor drops: so time spent climbing down to collect unbroken eggs doesn’t matter and buying 2-dozen eggs to speed things up is out of scope. Also we aren’t looking to minimize the worst case (even though that’s often similar to minimizing the average case).

I’m also looking to tease out any extra information; Google’s version is very careful to call out all the details that you need, but interviewers are human.

It’s also worth noting that I’ve already made a mistake: the solution space of this problem (all possible egg-dropping algorithms) seems to be small enough that we can try them all [2] — maybe I don’t belong at Google :)

Go looking for interesting angles

  • If we had only one egg, we’d need to use the naive approach (dropping from floor 1, 2, 3 and so on). And at some point we’re going to be down to one egg, so we’re looking to use the first egg to minimize the number of floors we have to use the naive approach on.
  • 100 is a square number, and we have 2 eggs. 100 is not a power of 2 (I don’t know if this is useful yet)

Prove that you can totally handle this
So now let’s show off our basic math skills and mention that the naive approach will take 50 drops on average — it’s also O(n) and wastes an egg, which are the two cardinal sins of search algorithms.

Ordinarily I’d go to a binary search next, but that feels too easy, and I’m worried that half the time we’re going to break on floor 50 and that’s a lot of naive searching already. Also, the two outcomes aren’t exactly symmetric (a broken egg means we lose resources). I wouldn’t mention it, but I also don’t want to have to keep figuring out how to round 100/2^n.

Since 100 is square, I wonder if that’s relevant, so I’d look at moving up 10 floors each time with the first egg.

Ballparking this process: for floor XY, it’s going to be roughly X+Y+1 drops to determine that it’s the fatal floor, except floors X0, where it will be X+9…

At this point, I’m going to cut off my hypothetical interview, since the cardinal rule in an interview question is to know what you’re looking to measure — and I can’t guarantee that I am looking to get the same things out of this question as the people who ask it.

My goal here isn’t to wow you with my arithmetical prowess. However, given that it’s hard in interviewing to find new but sufficiently uncorrelated things to ask a person, this genre of question can show a lot about your problem solving approach, even if you aren’t disrupting the SaaS egg-dropping space (YC15) [3].
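For the curious, the brute force from footnote 2 fits in a few lines. A minimal sketch, assuming the fatal floor is uniform over the 100 floors and that we only care about the expected number of drops:

```javascript
// E(eggs, n): minimal expected drops to locate the fatal floor among
// n consecutive candidate floors. Dropping from the k-th candidate
// breaks the egg with probability k/n (leaving k candidates and one
// fewer egg); otherwise n-k candidates remain with the same eggs.
const memo = new Map();
function E(eggs, n) {
  if (n === 1) return 0;            // one candidate left: determined
  if (eggs === 0) return Infinity;  // can't search without eggs
  const key = eggs + "," + n;
  if (memo.has(key)) return memo.get(key);
  let best = Infinity;
  for (let k = 1; k < n; k++) {
    const cost = 1 + (k / n) * E(eggs - 1, k) + ((n - k) / n) * E(eggs, n - k);
    if (cost < best) best = cost;
  }
  memo.set(key, best);
  return best;
}

console.log(E(1, 100).toFixed(2)); // the naive approach: ~50 drops
console.log(E(2, 100).toFixed(2)); // two eggs do considerably better
```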

Not convinced? You can still interview at PagerDuty, we don’t use this question.

  1. This sentence is hilarious: not only do I have a Wikipedia joke and a jab at artsies, but if you view the source, there’s still another joke — the semantic web isn’t dead, it’s just sleeping!
  2. Start of the proof: given (eggs, floors), we always move up in terms of floors if the egg doesn’t break, down if it does, so there is a bounded set of strategies. Given f(2,100), you have an especially easy problem, since f(1,n) needs to check n/2 floors on average. You can now recurse over every possible solution for f(2,m<n) and construct your algorithm from the best moves for 1 floor left, 2 floors left, etc…
  3. Yet. Email me if you want a beta invite to my new Tinder-for-eggs app

November 17, 2014
by dave

Notes from BVP’s Enterprise Forum

Bessemer Venture Partners invited me to join other product people from their portfolio companies for a mini-conference on product development for the enterprise. Here’s a rough pass at my takeaways, ranging from the obvious to the insightful:

Roadmapping:
Always ask customers “What do you not like about what you do today?”

Voting on what to do next is not a great idea. It might be salvageable to vote on “will this feature move this needle”. Pandora’s model [link] may find the “things we absolutely have to do this quarter”, but Pandora might be trapped in a competitive low-margin space where they have things they must do, instead of building a product with a defensible moat where they are in control of what they want to do next.

Idea management is what PM does; it’s different from idea generation (which anyone can and should do). It’s up to PM to define the process & evaluation that an idea has to go through before it’s built, and to put every new idea through its paces. There were several different processes discussed for how an idea gets into the product team’s funnel: some product teams actively solicit every last wisp of an idea (this is what PagerDuty does); other teams ask for business cases to be written up or presented internally to the product team. Everyone agreed that it’s important to market PM internally as the owners of the process, not the single source of ideas.

Longtime PMs were impressed with the spreading acceptance that roadmaps will have broad dates rather than exact ones and that scope can change.

It’s important to understand what your customers want from dates on a roadmap. Some PMs have found that large companies only want to understand when the next version is coming out so they can do release planning — so it’s better to release quarterly or twice a year with whatever functionality is ready. Other customers prefer the faster release cycle over predictable dates. I was surprised at how many companies had slow release cadences (only one other PM and I in one discussion pushed to production daily).

Betas are exciting: bring customer excitement forward as much as possible, and gather feedback while there’s still time to use it.

There was not a lot of love for having a public roadmap. But take some chances to bring excitement forward.

Tracking Product Development:
We spoke about how to measure product velocity; all the best techniques seem to track engineering velocity as a proxy (story points). Keep your story points representing the same type of thing (a function that does X) rather than a time period, or you’ll never be able to measure engineering acceleration or deceleration.

One PM has a script that generates a cheat sheet of what % of the user base uses each feature every week. We were all jealous of him, even if some features might be used by different personas on different schedules. It’d be awesome if that cheat sheet could be filtered by segment.
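A hypothetical version of that cheat sheet is only a few lines if you already log usage events (the event shape and names here are invented):

```javascript
// Given a week of raw usage events, print the % of the user base
// that touched each feature.
const events = [
  { user: "alice", feature: "dashboard" },
  { user: "bob", feature: "dashboard" },
  { user: "bob", feature: "reports" },
];

const allUsers = new Set(events.map(e => e.user));
const usersByFeature = new Map();
for (const { user, feature } of events) {
  if (!usersByFeature.has(feature)) usersByFeature.set(feature, new Set());
  usersByFeature.get(feature).add(user);
}
for (const [feature, users] of usersByFeature) {
  const pct = Math.round((100 * users.size) / allUsers.size);
  console.log(feature + ": " + pct + "% of users");
}
```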

If customers want to be a part of beta programs, exchange that for a commitment to do regular NPS/feature surveys.

Does anyone make an invite only UserVoice? So that only confirmed customers can see the features, not competitors?

A PM makes a quarterly report that buckets each feature release by red/yellow/green according to his feel. Execs love it.

Does it ever make sense to filter feature requests through the account owner for a multi-user product?

Absolute NPS isn’t useful, but relative segmentation is: how do large accounts like us vs small, old vs new, etc.

Figuring out price elasticity is a hard problem; experiment according to your pricing model. Sales reps love price raises to clear the sales pipe (order now or the price goes up). Try to change your pricing independent of functionality so you can measure accurately.

Understanding the budgeting process of your customers is valuable: if you sell 2 tools, one for sales and one for marketing, bundling doesn’t make sense because they have separate budgets (so a bundled product is actually harder to buy).

Great quote: “Grandfathering old customers on pricing changes is necessary for our mental health to do pricing changes and research”

Resource Allocation:
You can measure anything, even a vibe like team happiness: e.g. median time to first employee referral.

Cycle through the person dedicated to maintenance. It can follow the on-call schedule, or rotate quarterly.

The goal is overall velocity; you optimize your maintenance around that.

Some orgs spend their maintenance time on bugs that don’t need PM involvement — but how do they manage to find so many easy bugs in production? It sounds like a failure of testing.

Dealing with tech debt: it’s engineering’s time, no questions asked, but they should explain what they aim to fix before they do it.
Bad: change a boring framework to a new beta framework.
Good: optimize the slowest query/page/function, attack the bugs that alert most often, one-page tech proposals.

Bucket products and features into create/build, grow, run, and transform/deprecate, in order of which ones have the brightest futures.

“Me too” features: are they table stakes now, or can you ignore your competitor’s feature? Was it on your roadmap before, and how does this change the timeline? Is the feature real or FUD? Try it out & arm your salespeople with the weaknesses. Launching it now is less exciting for marketing.

November 15, 2014
by dave

Ask a PagerDuty API expert: #1 Dashboards

Background: I’m writing examples against the PagerDuty API for our developer evangelism team, here’s a sneak preview:

A dashboard of recent incidents

Some of our users have been using browser plugins to refresh the dashboard more frequently, and there are some great tools that work with PagerDuty to do that: StatusPage.io, Geckoboard, along with open source projects like pagerduty-dashing and Brainiac.

But I reached out to some of those accounts, and they wanted something that:

  • Can be displayed on a monitor like a status board
  • Gives at-a-glance awareness of the volume and distribution of incidents to non-responders

Let’s do that with our API directly. Here are the most recent incidents, grouped by when they happened, including the state and who’s currently assigned.

Or, if you want to get started right away, it’s also easily configurable from the URL by passing in your subdomain and an API key. Put in your own values to generate your status page.

How does it work:

Like all of my examples, this uses my JavaScript library PDJS to do the authentication and handle the nitty-gritty aspects of the API.

See the source code

If you view the source, you’ll see it’s just one function, loadIncidents, which runs once the page loads, hits the incidents API, loops through the results for display, and finally schedules itself to run again in 30 seconds (not the most efficient solution, but the easiest to explain). Here’s the important part in a little more detail:
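(What follows is a simplified sketch of that loop rather than the verbatim example; treat the exact PDJS call shape and the v1 incident fields as assumptions.)

```javascript
// Fetch recent incidents via PDJS, render them, then re-schedule.
// Assumes PDJS's api({res, params, success}) shape and the 2014 v1
// incident fields (created_on, status, assigned_to_user).
var pdjs = new PDJS({
  subdomain: subdomain, // parsed from the URL parameters (see below)
  token: apiKey,
});

function loadIncidents() {
  pdjs.api({
    res: "incidents",
    params: { sort_by: "created_on:desc" },
    success: function (data) {
      var container = $("#incidents").empty();
      data.incidents.forEach(function (incident) {
        var assignee = incident.assigned_to_user
          ? incident.assigned_to_user.name
          : "nobody";
        container.append($("<div>").text(
          moment(incident.created_on).fromNow() +
            " | " + incident.status + " | " + assignee
        ));
      });
      // Not the most efficient approach, but the easiest to explain:
      setTimeout(loadIncidents, 30000);
    },
  });
}
loadIncidents();
```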

Further information

There are some other ingredients to this example:

  • I wrote some CSS quickly; please feel free to take this code and make it much prettier — let us know what you build.
  • There is also some code to parse query and hash parameters so that anyone can use this example. I’m a little proud of that last bit, since anything after the # is not sent to the server (see the sketch after this list).
  • Moment.js is being used to format and manipulate times.
  • jQuery (as always) is being used for some semantic neatness in the JavaScript.
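Here’s roughly how the parameter parsing works; a sketch rather than the exact code from the example, and the parameter names are assumptions:

```javascript
// Read key=value pairs from both the ?query string and the #hash;
// anything after the # never reaches the server, so it's a reasonable
// place for an API key on a purely client-side page.
function getParams() {
  var params = {};
  var raw = window.location.search.slice(1) + "&" +
            window.location.hash.slice(1);
  raw.split("&").forEach(function (pair) {
    if (!pair) return;
    var kv = pair.split("=");
    params[decodeURIComponent(kv[0])] = decodeURIComponent(kv[1] || "");
  });
  return params;
}

var subdomain = getParams()["subdomain"]; // hypothetical parameter names
var apiKey = getParams()["token"];
```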

Bonus question: List everyone in an account by escalation policy

I’ve also been asked to list all users by which escalation policy they’re on. It’s not as visually appealing as the status page, but it’s a pretty easy snippet nonetheless:

http://jsfiddle.net/eurica/y9ojs7nx/
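In outline it’s just a walk over the escalation policies resource. A sketch from memory of the 2014 v1 response shape (escalation_policies, each with escalation_rules, each with a rule_object); double-check the field names against the jsfiddle above:

```javascript
// List every user (or schedule) under the escalation policy they're on.
// Field names are assumptions; the linked jsfiddle is canonical.
pdjs.api({
  res: "escalation_policies",
  success: function (data) {
    data.escalation_policies.forEach(function (policy) {
      console.log(policy.name);
      policy.escalation_rules.forEach(function (rule, i) {
        // Each rule targets a user or a schedule; print either.
        console.log("  level " + (i + 1) + ": " + rule.rule_object.name);
      });
    });
  },
});
```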

October 28, 2014
by dave

Roadmap and prioritization basic axioms

This is part of an ongoing series to document what I learned growing PagerDuty from 6 people to over 100 in 3+ years.

There are 4 necessary ingredients every roadmap needs to be successful:

  1. Benefits
  2. Costs
  3. Hopes and dreams
  4. A defined process for horse-trading

The MVP for a roadmap is a list of features. The size of each of the items on the list depends on how much your organization struggles with prioritization, but the list needs these basic characteristics:

Costs & Benefits
These are your basic ingredients for every item. There are a lot of flavors of each to choose from, but they’ll both need to be represented.

In cases of uncertainty, round costs up and benefits down. For the top items on the list, it’s up to the product team to tidy up the benefits and engineering to narrow the estimates on the cost to the point where you’re comfortable starting work.

Breaking down the benefits:
Granularity gives better estimates; in terms of breaking down benefits, I like the following:

  • Immediate customer benefit: when we launch this feature, how much of a splash it will make with current customers, and what sales we can close because of it.
  • Urgency: how much better it would be to launch this feature now vs in the future.
  • Strategic impact: how much this contributes to our long term vision.
  • Unfair advantage: why we in particular are the ones who should build this feature.

The first 3 items are very similar to User-Business Value, Time Criticality and Risk Reduction/Opportunity Enablement Value from agile development’s Weighted Shortest Job First ordering. Your unfair advantage is described well in the Lean Canvas, but can also be thought of as building vs partnering.
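Mechanically, that kind of ordering is just a ratio sort. A toy sketch (all names, scores and costs invented):

```javascript
// Weighted-Shortest-Job-First-style ordering: rank features by
// (benefit components summed) / cost, highest ratio first.
const features = [
  { name: "SSO", value: 8, urgency: 5, strategic: 3, cost: 13 },
  { name: "CSV export", value: 3, urgency: 1, strategic: 1, cost: 2 },
  { name: "New onboarding", value: 5, urgency: 2, strategic: 8, cost: 8 },
];

features
  .map(f => ({ ...f, score: (f.value + f.urgency + f.strategic) / f.cost }))
  .sort((a, b) => b.score - a.score)
  .forEach(f => console.log(f.name + ": " + f.score.toFixed(2)));
```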

I also like to call out the top requests, e.g. the #1 support request, the #1 chance to learn, the most used feature, etc., to give those features bonus points.

Costs
There are several types of costs:

  • The raw effort in terms of resources
  • Future ongoing maintenance this feature will cause
  • How critical the resources it uses are to you
  • How much institutional overhead you’ll need to navigate with this feature; this also includes how much effort changing the plan would be, especially if you’ve made external commitments to partners.

Costs aren’t always going to be fungible, so you may not be able to pursue all equally costed items interchangeably.

Hopes and Dreams
The list should have at least twice as many things as you can get to. The bar to being on the roadmap is very low — things at the bottom of the roadmap will be very vague, and cost only the effort of writing a sentence.

There is no upper limit on how many things you won’t do.

Process for re-prioritization
However you do it, there needs to be a regular and inclusive process to re-visit the roadmap. There are many valid possibilities:

  • Give each department some number of votes and do the items with the highest vote/cost ratio. This has the benefit of making explicit the relative weightings of e.g. Sales vs Customer Support in the organization.
  • Stakeholders can get into a room for planning poker
  • The product team can re-evaluate each row on a schedule
  • A benevolent dictator can look at the data and pick what they want

Engineering can be asked to generate detailed plans for features, or to approximate costs with a gut check, but engineering needs to be heavily involved in generating costs.

For one’s sanity, just accept that there’s going to be an emergency entrance at the front of the queue and work to minimize it. It should be one person’s job to protect this queue and evaluate all queue jumpers (and reject most of them).

And then everything else…
A list of features is necessary, but it may be a long way from sufficient. It doesn’t convey a strategy or a vision, but it does help to get everyone pointed in the same direction.

October 26, 2014
by dave

What I’ve been reading (Fall 2014)

Dataclysm: Who We Are (When We Think No One’s Looking)
I’m a fan of OkTrends as well as Rudder’s band Bishop Allen, but I wasn’t really wowed by this book. Although it was nice to learn that some of the stats that seemed wrong were USA-specific (the racial bias is less than half as pronounced in Canada), and it’s a good explanation of and counterweight to WEIRD studies, the book could’ve benefitted from more aggressive editing:

“In any event, when I talked about the data as a flood, way back, I perhaps didn’t emphasize it enough: the waters are still churning. Only when they start to calm can people really know the level and make good the surfeit.”

Still, it’s always great to get at new datasets even if the conclusions aren’t earth-shattering: that men find 20-year-old women consistently appealing isn’t rocket science — and 50-year-old men who are on dating sites rating profile pictures might be unrepresentative in their own way.

Interestingly enough, Rudder includes Nate Silver along with Facebook and Google as “three of the biggest forces in modern data”.

The Signal and the Noise: Why So Many Predictions Fail — but Some Don’t
This is the book that I wish I’d written — even if it’s light on actionable advice, the math is sound and consistently interesting. The premise revolves around Bayes’ Theorem:

P(A|B) = P(B|A) × P(A) / P(B)

Silver’s plan of action: start with your theory and how likely it is, and update it according to new information. It’s a subtle but important update to the scientific method that keeps all theories on the continuum between true and false. The chapters break down into real-world examples from baseball to earthquakes. I read the book in one sitting (high praise!) and completely forgot to take any notes, so I’ll have to go through it again.

The Hard Thing About Hard Things: Building a Business When There Are No Easy Answers
Ben Horowitz’s A16Z invested in PagerDuty, so I knew that he was a smart investor. This book focuses on Ben’s time as CEO of Loudcloud/Opsware from its founding in 1999, through the dot-com crash, until the sale to HP for $1.65B in 2007. It tells a very CEO-centric story, possibly because Horowitz managed to keep a lot of employees on board and working long after it seems they really should’ve moved on to greener pastures. After 8 years and many hundreds of millions in capital, it’s hard to imagine the rate of return those employees saw was particularly exciting.

Empty Mansions: The Mysterious Life of Huguette Clark and the Spending of a Great American Fortune
I didn’t end up finishing this book. Interesting as it was to read about the early 1900s, most of this book seems to be a painful read about 1 or 2 women spending vast amounts of money to not be happy.

What If?: Serious Scientific Answers to Absurd Hypothetical Questions
I mostly bought this book to support what-if.xkcd.com, and I haven’t regretted it for a second.

“What If? is one of my Internet must-reads, and I look forward to each new installment, and always read it with delight.” —Cory Doctorow, BoingBoing

October 26, 2014
by dave

Here’s an ignite talk & a podcast

I overstretched myself a little last week with two speaking engagements:

  • Speaking about “Bringing Advanced Analytics to DevOps”, Brian Gracely and I had a good 30-minute conversation on best practices in the ops space.
  • As the opening act for Gene Kim, I gave a 5-minute ignite talk on “4 Magic words to get DevOps into Enterprise” (Alignment, Auditable, Risk Reduction and Recruiting) for the closing ceremonies at the DevOps Enterprise conference.
Thanks to @SteveElsewhere for the picture.