numbers for people.

July 14, 2015
by dave

What do computers dream about?

A big shout out to the folks who did all the leg work to make Google's software for giving computers dream-like interpretations of images easy to run.

So finally computers can look at clouds and see a flock of space peacocks chasing a 5-legged squirrel armadillo away from a lighthouse.

To let the computer's mind run wild, I gave it a dozen nondescript pictures I'd taken to see what it dreamt up. Since I couldn't make heads or tails of the resulting nonsense, the clever computers at Berkeley explained them to me. Much like you or me, computers seem to dream about birds, dogs, antelopes, handkerchiefs and, you know, geological formations.

For each original photo, here's what the classifier saw in its dreamt version (score, label):

  • IMG_0330: 0.84881 flamingo, 0.74559 bird, 0.65944 aquatic bird, 0.56977 wading bird, 0.36431 prairie chicken
  • IMG_0333: 1.21175 handkerchief, 1.05661 piece of cloth, 1.03547 piece, 1.01434 part, 0.99567 fabric
  • IMG_1612: 0.88854 canine, 0.79935 dog, 0.78993 carnivore, 0.75896 domestic animal, 0.70264 hunting dog
  • IMG_3230: 1.30765 cock, 0.90564 bird, 0.38734 dog, 0.36383 canine, 0.35712 hen
  • IMG_3231: 1.66706 handkerchief, 1.42896 piece of cloth, 1.40037 piece, 1.37178 part, 1.33577 fabric
  • IMG_3237: 0.59630 antelope, 0.54578 bovid, 0.53312 ruminant, 0.51408 ungulate, 0.51125 even-toed ungulate
  • IMG_3238: 1.71119 monarch, 1.68605 danaid, 1.28717 butterfly, 1.26044 lepidopterous insect, 0.91870 insect
  • IMG_3275: 1.45040 brain coral, 1.42910 stony coral, 1.40779 coral, 1.31499 maze, 1.28531 anthozoan
  • IMG_3276: 0.91945 bird, 0.66665 aquatic bird, 0.57942 hip, 0.49358 oscine, 0.48158 passerine
  • IMG_3281: 1.08375 bird, 0.92162 parrot, 0.89063 lorikeet, 0.87755 lory, 0.46121 European gallinule
  • IMG_3480: 0.53145 vessel, 0.53038 aquatic bird, 0.50841 craft, 0.49928 geological formation, 0.44851 canoe
  • IMG_20130721_162034: 1.27505 oscine, 1.24403 passerine, 1.14763 brambling, 1.03948 finch, 0.90104 bird
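If you want to reproduce that top-5 column, a minimal sketch using Caffe's Python bindings looks something like this. It assumes a working Caffe install and an ImageNet-trained model; all file names below are placeholders, and the Berkeley demo scores labels against a semantic hierarchy, so plain softmax probabilities won't match its numbers exactly:

```python
import numpy as np
import caffe  # http://caffe.berkeleyvision.org

# Placeholder paths: any ImageNet-trained model + its label file will do.
net = caffe.Classifier(
    'deploy.prototxt', 'bvlc_reference_caffenet.caffemodel',
    mean=np.load('ilsvrc_2012_mean.npy').mean(1).mean(1),
    channel_swap=(2, 1, 0),   # the reference model expects BGR
    raw_scale=255,            # caffe.io loads images as [0, 1]
    image_dims=(256, 256))
labels = np.loadtxt('synset_words.txt', str, delimiter='\t')

img = caffe.io.load_image('IMG_0330_dreamt.jpg')  # placeholder file name
probs = net.predict([img]).flatten()              # softmax over 1000 classes
for i in probs.argsort()[::-1][:5]:               # top-5, highest first
    print('%.5f %s' % (probs[i], labels[i]))
```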

June 4, 2015
by dave

Seriously, get your FAQs from your users

Listening to Kathy Sierra at Mind the Product, I was struck by her point that your users are not happy stock photos, and that your product's make-or-break moments are often the ones where the user feels helpless.

She had a lot of good points, but when she mentioned that help & FAQs are usually written from exactly the wrong perspective,[1] it reminded me how easy this problem is to solve.

You should already have the data to make better help. Here's what we do at PagerDuty:

  • Track all the pain points you see in usability sessions. Ideally you should fix the most common ones, but if you can't, use the wording the user uses to describe what they are doing and make the help findable from the page where they get stumped.
  • Track all the searches in your help docs and use them to prioritize help articles & FAQs (see the sketch after this list). Again, it's a great source of how users word their issues.
  • Quantify UX pain with metrics. We use KISSmetrics to track how many people get stumped in a task. Some fixes are obvious: we saw too many people clicking on disabled page elements, so we made their state clearer; other times we direct people to the right documentation for their integration.
  • We have inline help with a real person, powered by Olark, on every page. This is much more expensive, but it also gives you the chance to clarify and really understand the person's question (as well as to solve it).
  • And here's a big one a lot of companies ignore: help people who aren't even asking. One of the great things about tools like Intercom (which we don't use; we do this manually) is that you can identify groups of users having a degraded experience — and while it may be the wrong product call to change the UX for everyone, you can send them the information they need to succeed. For us, one of these groups might be "people who use PagerDuty with physical pagers", "people who send hundreds of alerts to the on-call person every day" or "people who have 10 contact methods". Those are all (relatively) small groups, but they're easily identified from the data, and we reach out to them proactively to improve their experience.
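For the search-tracking bullet, the mechanics can be as simple as counting queries. A minimal sketch, assuming a hypothetical log file with one help-doc search per line:

```python
from collections import Counter

# Hypothetical format: one raw search query per line.
with open('help_searches.log') as f:
    queries = [line.strip().lower() for line in f if line.strip()]

# The top entries are your FAQ backlog, in priority order.
for query, count in Counter(queries).most_common(10):
    print('%5d  %s' % (count, query))
```

Use the top queries verbatim as article titles: that's the wording users will search for again.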

A great support team (seriously, come join us) definitely helps, but the key element is getting into the frustrated customer's mindset from the actual words they use and the actual frequency they use them.

If you want them to RTFM, make a better FM. - Kathy Sierra #mtpcon

  1. She called out Microsoft Excel, and I've experienced that pain more than once.

March 6, 2015
by dave

Quick code sample: Integrating JIRA and PagerDuty

Here is a program I threw together to create PagerDuty incidents that track urgent JIRA issues (a simplified sketch of the core handler is below). The PD incidents are updated and closed automatically if the JIRA issue changes.

Step 1, configure PagerDuty
Create a generic API service in PagerDuty, and note the service key

Step 2, Deploy to Heroku (optional)
You can test with my Heroku instance, or deploy it yourself by clicking here:
Deploy to Heroku

Configure your JIRA webhooks with the URL, where "abc123" is the service key for your PagerDuty service.

Step 3, configure JIRA

JIRA webhooks configuration

Step 3b, Choose what issues trigger PD Incidents
JIRA can send webhooks for all issues that match a piece of JQL. For instance:

project = HD AND priority in ("Needs Immediate Attention", "P1 – Major / Must have")

This will trigger on any ticket that gets created in or moved to our HelpDesk project (HD) and has its priority set to urgent or P1.

Step 3c, Configuration

  • Make sure that the webhook is configured to fire when an issue is "created, updated or deleted".
  • Do not check "Request with empty body will be sent to the URL. Leave unchecked if you want to receive JSON." (we want the JSON).
  • You can test your JQL as a search to see which issues would have triggered the webhook.
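For the curious, the core of the receiving side is small. A simplified sketch, not the deployed app (which handles more edge cases); the Resolved/Closed status names are assumptions:

```python
import requests
from flask import Flask, request

app = Flask(__name__)
PD_EVENTS = 'https://events.pagerduty.com/generic/2010-04-15/create_event.json'

@app.route('/webhook/<service_key>', methods=['POST'])
def jira_webhook(service_key):
    payload = request.get_json()
    issue = payload['issue']
    # Resolve the PD incident if the issue went away or was closed out;
    # otherwise trigger (the same incident_key de-duplicates into one incident).
    done = (payload['webhookEvent'] == 'jira:issue_deleted' or
            issue['fields']['status']['name'] in ('Resolved', 'Closed'))
    requests.post(PD_EVENTS, json={
        'service_key': service_key,
        'event_type': 'resolve' if done else 'trigger',
        'incident_key': issue['key'],  # one PD incident per JIRA issue
        'description': '%s: %s' % (issue['key'], issue['fields']['summary']),
    })
    return '', 200

if __name__ == '__main__':
    app.run(port=5000)
```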

Questions can be sent my way; as always, thanks go out to Runscope for helping me debug my webhook traffic.

February 23, 2015
by dave

Manually triggering incidents in PagerDuty

The easiest way to allow people to manually trigger incidents in PagerDuty is to set up an email service and publish the address. You may want to set up filtering to only trigger on email from inside your organization:
Configuring a manual email service

But if you want more control, you can easily use my JavaScript library PDJS to create an embeddable button on your internal wiki or intranet that triggers PagerDuty incidents.

You can see your manually created incident on the dashboard we built in the last API column.

Here's the gist of the code — it's a single POST to PagerDuty's generic events API.
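A minimal sketch of that call (shown in Python; PDJS makes the equivalent request from the browser when the button is clicked, and the service key below is a placeholder):

```python
import requests

resp = requests.post(
    'https://events.pagerduty.com/generic/2010-04-15/create_event.json',
    json={
        'service_key': 'abc123',        # placeholder: your generic API service key
        'event_type': 'trigger',
        'description': 'Manually triggered from the wiki',
        'incident_key': 'wiki-button',  # optional: de-duplicates repeat clicks
    })
print(resp.json())  # includes the incident key on success
```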

December 14, 2014
by dave

Certified Scrum Product Owner notes

We sent a truckload of PMs and engineers to a 2-day Certified Scrum Product Owner course from Jeff McKenna. I was a little surprised to hear that Scrum trainers aren't allowed to change their slides once they've been approved — switching between the PowerPoint deck and the unsanctioned handouts was surprisingly distracting.

The class started off with an amusing bit of the Dunning-Kruger effect when we sorted ourselves in order of agile experience. Since most of us at PagerDuty had done the reading already, I found myself listening a lot to the background patter:

“I got out of coding because it’s always the same thing over and over.”

“I wanted to be the manager so I could be the one giving the orders.”

If nothing else, taking part in training is a great chance to better understand the median software management process, but needless to say, doing the same thing over and over, under orders, is not something that excites me. There were a few good discussions of "doing agile" vs "being agile".

Here are my rough notes:

First up: you can't improve what you can't see, so we discussed a few variations of process control with 3 characteristics: you can see it, you understand what it means, and you have the power to do something about it. We mentioned the Department of Defense's Capability Maturity Model, although in terms of military models, I prefer to think about how we can tighten John Boyd's OODA loop.

A Scrum team:

  • Is 3-9 people working cohesively full time — and is empowered to say no when they don't think that they can deliver.
  • The Product Owner is responsible for maximizing the value of the work being done.
  • The ScrumMaster is in charge of optimizing the team's velocity; this is distinct from the product owner's role — which seems to be one of the lynchpins of Scrum.
  • Part of the ScrumMaster’s role is to “Manage impediments” — in my mind, that’s a job the product owner shares heavily in, especially when those impediments have root causes in other stakeholders.

The entire team works on one story at a time — this seemed to me like an artifact of having large stories or inefficient deploy technologies. For a team where one person can understand the whole of the team's domain and deploys are cheap, it feels like working this way would just add overhead — the 9-women-1-baby-1-month scenario.

The effort required to make a change should be proportional to (the original thought put in) multiplied by (the magnitude of current usage/dependencies).

When ordering projects by descending RoI, break ties in favour of doing the smaller ones first — the variance is lower, and you can also release them sooner. But don't do small things to the exclusion of big things.

“Burndown charts don’t help during a sprint”

Story Points:
Every story needs a value. 1 story point can be considered the minimum possible change, which is largely a measure of your overhead: a conversation, branching master, making a tiny change, running the tests, submitting a pull request, doing a code review. 0 points can be considered a story that can share that overhead with just about any other story — such as a text change to a part of the code that you're already touching this sprint (conversely, a text change to a different part of the code may require its own conversations and overhead).

Targeting 40 story points per sprint is a good balance between generating too much detail and having enough stories to invoke the Law of Large Numbers — "the average of the results obtained from a large number of trials should be close to the expected value" — and so generate relatively consistent sprint estimates; there's a toy simulation of this below. Personally, I prefer valuing stories 1, 2, 4, 8, 16 &c over the Fibonacci sequence, but the important thing is that the delta between story sizes increases with the sizes.[1]
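The toy simulation (entirely my own sketch, with made-up lognormal noise on each story's actual cost):

```python
import numpy as np

np.random.seed(0)

def sprint_totals(points, n_stories, trials=10000):
    # Pretend each story's actual cost is its estimate times lognormal
    # noise (median 1x), independent across stories.
    noise = np.random.lognormal(mean=0.0, sigma=0.5, size=(trials, n_stories))
    return (points * noise).sum(axis=1)

# Three ways to plan a 40-point sprint: a few big stories vs many small ones.
for points, n in [(8, 5), (2, 20), (1, 40)]:
    totals = sprint_totals(points, n)
    print('%2d stories x %dpt: mean %.1f, std %.1f' % (n, points, totals.mean(), totals.std()))
```

The means match, but the sprint-to-sprint spread shrinks as the same 40 points are split across more independent stories.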

A minimal story probably involves at least one branch statement. “I can enter my first name” and “I can enter my last name” don’t make for independent stories.

Estimate the future using yesterday's weather: you'll get the same amount done next sprint as last sprint. This is in sharp contrast to the worst project I ever worked on, where every iteration we predicted 40 points and did 30 — so management would ask how we planned to do 50 points next sprint.

Don't have a "done-done" state. The definition of done includes testing to the point where people can change the code in the future and not break any implied (and hence un-tested) assumptions. Otherwise you've made the code more fragile.

I suspect the ideal level of testing isn't measured by percent code coverage but rather by having emotionless deploys — where even new employees can make changes, and if the tests pass, you're completely comfortable deploying the changes. Code reviews then only serve to enforce code cleanliness, confirm that the new code works and ensure the same level of testing going forward.

McKenna believes there's too much tolerance of bugs in Silicon Valley today; I'm not sure how much I agree. There's certainly a lot of crappy software being written, but when you're writing something that benefits from network effects (Twitter, Facebook) rather than something that solves a particular revenue-touching problem for businesses (PagerDuty), I can understand the urge to increase your feature velocity for 95% of your users rather than dot the i's and cross the t's for the remaining 5%.

The value of fixing technical debt is measured by its effect on velocity — don't forget to account for the estimated time until a given piece of code is end-of-lifed. Technical improvements can address user stories too — performance is a feature. Everything worthwhile can be demoed to the right audience: these tests failed and now they pass, deploys take 40% less time, 4000 lines of code comply with our style guide, &c.

Long term planning involves documenting and socializing the "Approach" rather than the "plan". Everything needs acceptance criteria; "3 people on the team have read and understood the approach" is a good one.

Dan Pink’s Drive: The surprising truth about what motivates us raised an interesting point — paying people enough “not to think about it” is increasingly hard in San Francisco.

“10-20% of people shouldn’t do agile” (based on the description of those 10-20%, it sounds like they actually shouldn’t do software). Converting a Project Management Professional (PMP) to a ScrumMaster is a pathway to strife & conflict.

The Sprint Review involves a demo — the product owner encourages people to come.

"Buffer the promise, not the plan" — aim for accurate estimates, but don't promise them; you'll run over 50% of the time even in a perfect world.

Scaling scrum cross-team & remotely: have a scrum of scrums where the people most exposed to cross-team issues check that no-one is blocking anyone else. Dependencies highlight problems in your team structure. Keep teams together; if a team has to be remote, leave the product owners at HQ.

If the house is on fire every week, it’s not a fire, you just live in a particularly hot house. Schedule a fire drill every week, make it a 0 point story, and track how much time is spent on it.

One pseudo-strength of the waterfall method is you can document a huge pile of potential functionality that satisfies every stakeholder and then react with shock when most of it doesn’t get done — with agile, you can re-prioritize actual functionality every sprint, which can mean a lot more contact with stakeholders.

One thing that I disagreed with was the verbiage around "Must do" vs "Should do", "Could do" and "Won't do".

Ideally 100% of the time should be spent on "Should do", since "Must do" generally arises when the product owner (or the stakeholders) missed something coming down the pipe. When the iPhone came out, RIM/Blackberry had a boatload of things that they needed to do — likely more than they could in a timeframe that mattered; it was up to the product team to tackle those things proactively. Sometimes an external actor can dump something on you without warning (the European Union could require your software to track users with actual homebaked gingerbread cookies), but those cases are rare; if they aren't rare in your sector, you should invest in lobbying and research.

“Should do” is another word for “Has a positive return on investment based on our velocity and market”. So a “Won’t do” can be promoted if the market or our velocity improves, and similarly a “Should do” can be demoted if we slow down or our market position weakens.

"Could do" is just a holding area — we make software, we could do just about anything. It's up to the product team to understand the potential RoI, in whatever terms & accuracy your organization operates in. The "Could do" list will likely be the longest list by far. Predicting that some fraction of the coulds will become shoulds seems like a convoluted way of padding your estimates.

In the exercises we worked on, the class seemed far too eager both to accept that a feature was mandatory and to plan on doing some fixed fraction of the things that we could do. Planning on doing half of all the things that we've put on the backlog to satisfy stakeholders seems like asking for trouble.

Things to look into: agile portfolio planning, JIT Budgeting & Beyond Budgeting, Waltzing With Bears: Managing Risk on Software Projects.

The class didn't cover product discovery, which is the area that I'm most interested in. It was hard for me to charge into some of the examples without doing a better job of justifying what we were trying to build.

  1. I really don’t understand the obsession with using powers of 1.618 over powers of 2

November 30, 2014
by dave

Optimizing Who’s Hiring posts on Hacker News

PagerDuty is a data-focused company and recruiting is no exception.[1] We're growing quickly, and one tiny corner of our recruiting effort has been to post in Hacker News' monthly Who's Hiring thread. Over the past 6 months, I've been tracking our click-through on each of our posts; here's what I've learned:

A typical post would get 100-200 clicks:

  • I tested the waters in May with a short post that garnered 90 clicks
  • In June, I botched the formatting, for 71[2] clicks on the job-specific links. An additional 25 out of 135 clicks on my blog came from that post.
  • In July, my post got 195[3] clicks
  • In August, my post got 198[4] clicks & Shack's reply got another 80
  • In September, I dropped the ball and 2 engineers posted; click-through was low
  • An offhand comment that I made in response to a project that one of our customers posted got 206 clicks.
  • In October, our post was made by an engineer later in the day, with comparatively few clicks.

Most clicks come in the first few days
Hardly surprising, although if you want to stand out as an applicant there might be some benefit in applying to a multi-week-old posting.

Post early
Again, this shouldn't be surprising, since the earlier you post, the more people see it. But I was surprised by the magnitude: the October posting was especially hurt by going up late — the thread going live at 6am Pacific time doesn't help us on the west coast.

Everything should have an owner
I'm less concerned with the hundreds of clicks we missed by publishing late than I am with the duplication of work. We're still very much a do-ocracy — which is great for so many reasons — but it can also mean someone re-writes the job posting rather than cutting and pasting from the wiki.

Formatting and content matter
The time spent improving the text and formatting of the post improved click-through, so I’d recommend writing your post ahead of time and recycling the best content (if you don’t have a great paragraph about why people should work for you, write that now).

Have a hook
Our Toronto job postings do very well considering the distribution of HN readers between Toronto and the bay area. We have a well-articulated hook for our Canadian recruiting: take the TTC to work in Silicon Valley. SV and SF are fiercely competitive places where even great companies struggle to find great people.

Conclusion: the impact was minimal
Out of the 1000 clicks that I've tracked through these comments and the ones on my blog, we've gotten 10 leads, some of which were promising, but 0 hires. In fact, we (originally) didn't post in November's thread, out of respect for our team's work/life balance. (One of our engineers posted anyway; it's still a do-ocracy.)

I suspect that the readers of HN, even the ones browsing a hiring thread, are typically happy where they are and so need more than a passive posting in a thread with 100 other companies /speculation.

Unapologetic call to action:
We’re smart people who are great to work with, join our team in SF or Toronto.

  1. It is an exception in the sense that everything I'm talking about here is coming from public data sources so I can write about it, which is kind of cool.
  2. 31+13+18+9 = 71
  3. 23+18+67+62+25 = 195
  4. 44+97+34+23 = 198

November 25, 2014
by dave

A limited defense of the 2 egg problem

There's a lot of backlash against using puzzle problems to assess cleverness in interviews. So much so that it looks like the site that reminded me to write this article changed their example question away from the derided "2 egg problem".

I believe it is the prime example of everything that is wrong with engineering interview culture. Not sure if that makes it ideal material for a site like this or pointless trivia. — Top comment in a Hacker News thread

Before I was the charming, well-rounded jock that I am today[citation needed], I was essentially a mathlete, so questions like this give me the same warm fuzzy feelings as reading Dr Seuss books might give to a liberal arts major.[1] But I've also done hundreds of interviews, and there's one more thing that I've found:

How you solve an arbitrary stupid problem is a valuable thing to find out.

If you asked me this question, here’s the genre of answer that you’d get:

First call out your assumptions

  • That each floor is equally likely to be the fatal floor. This is a biggie: common sense dictates that the LD50 of a dropped egg is less than a meter, so the naive approach will work best (the egg will break on floor 1 or possibly 2, and we can all go home).
  • You aren't looking for a mathematical proof that my solution is optimal — that would be a huge undertaking. I can definitely give you some possible bounds on the optimal solution, and we might get lucky, but it's not a 60-minute problem. So we're looking for the best solution I can find. This is valuable because it means we're (correctly) focused on how I approach the problem.
  • The metric we're optimizing is the expected number of drops: time spent climbing down to collect unbroken eggs doesn't matter, and buying 2 dozen eggs to speed things up is out of scope. Also, we aren't looking to minimize the worst case (even though that's often similar to minimizing the average case).

I'm also looking to tease out any extra information: Google's version is very careful to call out all the details that you need, but interviewers are human.

It's also worth noting that I've already made a mistake: the solution space to this problem (all possible egg-dropping algorithms) seems to be small enough that we can try them all[2] — maybe I don't belong at Google :)

Go looking for interesting angles

  • If we had only one egg, we’d need to use the naive approach (dropping from floor 1, 2, 3 and so on). And at some point we’re going to be down to one egg, so we’re looking to use the first egg to minimize the number of floors we have to use the naive approach on.
  • 100 is a square number, and we have 2 eggs. 100 is not a power of 2 (I don’t know if this is useful yet)

Prove that you can totally handle this
So now let's show off our basic math skills and mention that the naive approach will take 50 drops on average — it's also O(n) and wastes an egg, which are the two cardinal sins of search algorithms.

Ordinarily I'd go to a binary search next, but that feels too easy, and I'm worried that half the time we're going to break on floor 50, which is a lot of naive searching already. Also, the two outcomes aren't exactly symmetric (a broken egg means we lose resources). I wouldn't mention it, but I also don't want to have to keep figuring out how to round 100/2^n.

Since 100 is square, I wonder if that’s relevant, so I’d look at moving up 10 floors each time with the first egg.

Ballparking this process: for floor XY, it's going to be roughly X+Y+1 drops to determine that it's the fatal floor, except for floors X0, where it will be X+9…

At this point, I’m going to cut off my hypothetical interview, since the cardinal rule in an interview question is to know what you’re looking to measure — and I can’t guarantee that I am looking to get the same things out of this question as the people who ask it.
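Outside the interview, though, footnote 2's brute force is only a few lines. A minimal expected-case dynamic program — my own sketch, under the assumptions above: the fatal floor is uniform over 1..100 and we stop as soon as only one candidate floor remains:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best(eggs, m):
    """(min expected drops, first split) to isolate the fatal floor
    among m equally likely consecutive candidate floors."""
    if m == 1:
        return 0.0, None            # one candidate left: we've found it
    if eggs == 0:
        return float('inf'), None   # can't distinguish without eggs
    choices = []
    for k in range(1, m):           # a break leaves k candidates, a survival m - k
        broke, _ = best(eggs - 1, k)
        survived, _ = best(eggs, m - k)
        choices.append((1 + (k * broke + (m - k) * survived) / m, k))
    return min(choices)

cost, split = best(2, 100)
print('expected drops: %.2f, first drop from floor %d' % (cost, split))
```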

My goal here isn't to wow you with my arithmetical prowess. However, given that it's hard in interviewing to find new but sufficiently uncorrelated things to ask a person, this genre of question can show a lot about your problem-solving approach, even if you aren't disrupting the SaaS egg-dropping space (YC15).[3]

Not convinced? You can still interview at PagerDuty, we don’t use this question.

  1. This sentence is hilarious: not only do I have a Wikipedia joke and a jab at artsies, but if you view the source, there's still another joke — the semantic web isn't dead, it's just sleeping!
  2. Start of the proof: given (eggs, floors) we always move up in terms of floors if the egg doesn’t break, down if it does, so there is a bounded set of strategies. Given f(2,100) you have an especially easy problem since f(1,n) needs to check n/2 floors on average. You can now recurse over every possible solution for f(2,m<n) and construct your algorithm from the best moves for 1 floor left, 2 floors left, etc…
  3. Yet. Email me if you want a beta invite to my new Tinder-for-eggs app

November 17, 2014
by dave

Notes from BVP’s Enterprise Forum

Bessemer Venture Partners invited me to join other product people from their portfolio companies for a mini-conference on product development for the enterprise. Here’s a rough pass at my takeaways, ranging from the obvious to the insightful:

Always ask customers “What do you not like about what you do today?”

Voting on what to do next is not a great idea. It might be salvageable to vote on "will this feature move this needle". Pandora's model [link] may find the "things we absolutely have to do this quarter", but Pandora might be trapped in a competitive low-margin space where they have things they must do, instead of building a product with a defensible moat where they are in control of what they want to do next.

Idea management is what PM does; it's different from idea generation (which anyone can and should do). It's up to PM to define the process & evaluation that an idea has to go through before it's built, and to put every new idea through its paces. There were several different processes discussed for how an idea gets into the product team's funnel: some product teams actively solicit every last wisp of an idea (this is what PagerDuty does), other teams ask for business cases to be written up or presented internally to the product team. Everyone agreed that it's important to market PM internally as the processors of the process, not the single source of ideas.

Longtime PMs were impressed with the spreading acceptance that roadmaps will have broad dates rather than exact ones and that scope can change.

It's important to understand what your customers want from dates on a roadmap. Some PMs have found that large companies only want to know when the next version is coming out so they can do release planning — so it's better to release quarterly or twice a year with whatever functionality is ready. Other customers prefer the faster release cycle over predictable dates. I was surprised at how many companies had slow release cadences (only one other PM and I, in one discussion, pushed to production daily).

Betas are exciting: bring customer excitement forward as much as possible, and gather feedback while there's still time to use it.

There was not a lot of love for having a public roadmap. But take some chances to bring excitement forward.

Tracking Product Development:
We spoke about how to measure product velocity; all the best techniques seem to track engineering velocity as a proxy (story points). Keep your story points representing the same type of thing (a function that does X) rather than representing a time period, or you'll never be able to measure engineering acceleration or deceleration.

One PM has a script that generates a cheat sheet of what % of the user base uses each feature every week (sketched below). We were all jealous of him, even if some features might be used by different personas on different schedules. It'd be awesome if that cheat sheet could be filtered by segment.
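That cheat sheet is only a few lines if you log feature use. A sketch, assuming a hypothetical event log with user_id, feature and timestamp columns:

```python
import pandas as pd

events = pd.read_csv('feature_events.csv', parse_dates=['timestamp'])
week = events['timestamp'].dt.to_period('W')
total_users = events['user_id'].nunique()   # crude: the all-time user base

# % of the user base touching each feature, per week
usage = (events.groupby([week, 'feature'])['user_id'].nunique()
               .div(total_users).mul(100).round(1)
               .unstack('feature'))
print(usage.tail(4))
```

Filtering by segment would then be one extra groupby key.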

If customers want to be a part of beta programs, exchange that for a commitment to do regular NPS/feature surveys.

Does anyone make an invite-only UserVoice, so that only confirmed customers can see the features, not competitors?

A PM makes a quarterly report that buckets each feature release by red/yellow/green according to his feel. Execs love it.

Does it ever make sense to filter feature requests through the account owner for a multi-user product?

Absolute NPS isn’t useful, but relative segmentation is: how do large accounts like us vs small, old vs new, etc.

Figuring out price elasticity is a hard problem; experiment according to your pricing model. Sales reps love price raises to clear the sales pipe (order now or the price goes up). Try to change your pricing independently of functionality so you can measure accurately.

Understanding the budgeting process inside your customers is valuable: if you sell 2 tools, one for sales and one for marketing, bundling doesn't make sense; they have separate budgets (so a bundled product is actually harder to buy).

Great quote: “Grandfathering old customers on pricing changes is necessary for our mental health to do pricing changes and research”

Resource Allocation:
You can measure anything. Measuring a vibe or happiness team: median time to first employee referral.

Cycle through the person dedicated to maintenance. It can follow the on-call schedule, or rotate quarterly.

The goal is overall velocity; you optimize your maintenance around that.

Some orgs spend their maintenance time on bugs that don't need PM involvement — how do they manage to find so many easy bugs in production? It sounds like a failure of testing.

Dealing with tech debt: it's engineering's time with no questions asked, but they should explain what they aim to fix before they do it.
Bad: change a boring framework to a new beta framework.
Good: optimize the slowest query/page/function, attack the bugs that alert most often, one-page tech proposals.

Bucket products and features into create/build, grow, run, and transform/deprecate, in order of which ones have the brightest futures.

"Me too" features: are they table stakes now, or can you ignore your competitor's feature? Was it on your roadmap before, and how does this change the timeline? Is the feature real or FUD? Try it out & arm your salespeople with the weaknesses. Launching it now is less exciting for marketing.