Human Skills 018 - Creating Clarity From Ambiguity
with Kristján Pétursson
I first had the pleasure of working with Kristján Pétursson in 2008, when he was already a wickedly productive software engineer. Since then he has worked his way up the ladder, becoming a manager, then a director, then temporarily head of engineering at Apartment List. He quickly learned that he did not like that, and has been working as a staff engineer ever since.
He is an incredibly clear thinker with a quick, clever wit, and in this conversation he managed to boil down almost all of engineering and product development to a single tool: checklists. With the core tool of checklists and the core goal of systematically destroying ambiguity, we covered an incredible range of important topics: personal and team productivity, deadlines, and more.
Note: I will be traveling next week, and so will not ship an interview. The next Human Skills interview will come out on August 15.
Do you have things that you have gamified for yourself?
Yes, in that everybody loves a checklist. So that's a productivity thing that is good for me and good for the people that I have to foist tasks upon. You know you love checking the box. You love the dopamine hit, and you love like, "here, we're getting closer to having zero things left to do."
And assuming you're not a project manager who derives innate joy from burndown charts... you probably do derive joy from watching the thing, or being done with the thing, or having accomplished it.
Plus, I think in checklists. People will write paragraphs and paragraphs of text that I will write in bullet points instead. But whatever form the list takes, having the list brings clarity to everybody, to yourself and everybody around you.
And actually, I was thinking about this in the shower this morning, of like, "what should I maybe bring up with Kevin?" I think it's ambiguity that destroys velocity almost faster than anything else.
Ambiguity of like, "what are we gonna build?" or "how are we supposed to build this?" Or "was it supposed to be A or B?" or "why did we make this decision?" So having the checklist of "this is the thing" is a very, very cheap way to eliminate most of the ambiguity.
And then everybody feels good, because they're chugging along at top speed instead of kind of thrashing between tasks to figure out what's happening.
Do you have other tactics you use to move from ambiguity to clarity?
The checklist is 95% of my tools. Because somebody says, "hey, how is this thing?" And the answer is this way, this way, and that way, right? Or "hey, how are we going to do this thing?" And the answer is ABC.
You do at times need to elaborate... and so that gets more into the one to many communication on how the project is going to go down or has gone down or maybe is currently going down... or in better cases up.
There's a book that I should read called The Checklist Manifesto. I forget the author's name [it's Atul Gawande]. He's a doctor, I think a surgeon, writing about contexts where checklists are important, right? In the same way that they're important for airplanes.
I should probably read this book, given my love of checklists. But I also kind of assume that I know what's inside. I should probably verify that.
So the other thing is, like in non-checklist formats, the heuristic that I try to employ whether it's a ticket or a tech spec or a project one pager or whatever, the heuristic I try for is: if I close my eyes for some substantial amount of time... If I go chill on a beach for a bit, and then I come back, has what happened been substantially similar to what I thought was going to happen?
And how can I write this document, or whatever it is, in such a way that that is likely to happen? So in a ticket, that means clear acceptance criteria. However you want to do your tickets, and in whatever ticket tracking system you're using, whether it's a checklist in a Vim file or Jira: write down, "this is what we need to do." If you know anything about where to do it, give them pointers, so whoever picks it up doesn't have to start from scratch. And then acceptance criteria.
I've been in so many retrospectives where the team was like, "we need better acceptance criteria." And then a month later, "we need better acceptance criteria." It's like, "OK, what happened? All you had to do was write it."
That works in tickets, that works in tech specs... here's what I know, here's what we still need to find out, here's, given what I know, how I think we should go about this. And very importantly in there: wait, why are we doing this again? Not from the "what is the purpose of this project" perspective, but from the "why did we choose this way" perspective.
Write down the ways you didn't choose, for God's sake. Because otherwise you're gonna run into... every approach has a problem and as soon as somebody runs into the problem they're gonna say "oh, why didn't we do it the other way?"
If you write down why you didn't do it the other way they'll look at it and be like "oh, right, right. Yeah, it sounded worse the other way" and they can move on. Otherwise the same conversation happens again.
What goes into writing useful acceptance criteria?
To sort of put it in one sentence, it should be a clear statement of "the thing that happens when." Like when I load this page and I click this button and I enter this data and I click this other button, this is the result.
There are a bunch of things implied in that of like, I didn't find any errors along the way. Those are obvious, people will find those.
But you need that simple statement of the desired fact to be clear and everybody can infer what needs to happen to get there. People use phrases like outcome-driven development or whatever the thing is. Here's the outcome. I try this and this happens.
Whether you're front-end or back-end or wherever, it's going to change maybe what that means.
I'm nearly a hundred percent back-end. So my acceptance criteria is the case analysis resident in the unit tests. You had these inputs; these are the ranges they could each cover. And so you've picked interesting cases from each, and all their combinations, and you've proved to me in a test that that is what happens. So from the back-end, I feel like the unit tests are the acceptance criteria. And the case analysis.
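The case analysis Kristján describes might look something like this minimal sketch. The `shipping_cost` function and its business rules are entirely hypothetical, just to show picking the interesting cases from each input's range and testing their combinations:

```python
import unittest

def shipping_cost(weight_kg: float, express: bool) -> float:
    """Hypothetical back-end function: flat rate plus a per-kg charge,
    doubled for express. Zero-weight orders ship free."""
    if weight_kg < 0:
        raise ValueError("weight cannot be negative")
    if weight_kg == 0:
        return 0.0
    base = 5.0 + 2.0 * weight_kg
    return base * 2 if express else base

class TestShippingCost(unittest.TestCase):
    """Each input's range yields interesting cases (zero, positive,
    negative weight; express on/off); the tests cover the combinations
    and serve as the acceptance criteria."""

    def test_zero_weight_is_free(self):
        self.assertEqual(shipping_cost(0, express=False), 0.0)
        self.assertEqual(shipping_cost(0, express=True), 0.0)

    def test_standard_rate(self):
        self.assertEqual(shipping_cost(3, express=False), 11.0)

    def test_express_doubles(self):
        self.assertEqual(shipping_cost(3, express=True), 22.0)

    def test_negative_weight_rejected(self):
        with self.assertRaises(ValueError):
            shipping_cost(-1, express=False)
```

Run with `python -m unittest` against the file; the test class names each case in the analysis, so reading the test list is reading the acceptance criteria.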
When you're writing a ticket, you don't want to be doing that. That's the job of the person building it to think through. But they need at least enough to go on of like, when somebody in the UI clicks this, this thing happens, okay, that boils down to this backend function, here are my domains and ranges, and then I've walked through all the possibilities.
That's what the one sentence in the ticket needs to be: something that can be drilled down into in detail, but also interpreted by the implementer.
But if you're a front-end person, it's going to be your Playwright tests... or your manual clicking, God forbid, or whatever. It's going to mean a different thing in a different place. But hopefully you have people on the team, or can train people on your team, to take the one-sentence outcome and figure out the interstitial pieces.
And if you can't write that sentence, you probably need to make more than one ticket.
More on the destruction of ambiguity and acceptance criteria
There's a lot of work that people don't like doing that doesn't actually take very long. It's just not very interesting. But doing it destroys ambiguity.
I'll just give you my current example. We're working through a bunch of API upgrades, and some of those APIs have needed to fan out into other tasks, because they're not actually one API. One takes in a keyword that's effectively a function call: "which of these 30 things are we actually gonna do in this API?"
So in the initial tracker it started out looking like one API upgrade. It is actually a set of 30 independent tasks. And I realized at some point that only 10 of those things were written down.
So the thing that nobody really wants to do is find the other 20 and write them down. Or in this context, very clearly ask the person who is responsible for that thing, "where are these 20 things?" In fact, not "where are they," but "please put them in the list."
Just be clear at every step about what the necessary outcome is, what the acceptance criteria are. Because acceptance criteria come up all the time, not just in what code gets written; they're in how the meta-process happens and is tracked.
So the acceptance criteria I have for you right now is all of these 30 functions need to be in the spreadsheet. Because even if you know you're about to do them by yourself, everybody else needs to know that they exist. Because otherwise they think we're done and we're not, we have 20 left to do.
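The "everything must be on the list" discipline is mechanical enough to script: diff the full set of things against what the tracker currently holds. A minimal sketch (the `op_*` keyword names are made up for illustration):

```python
# Hypothetical: the full set of keywords the API dispatches on
# (taken from the code), versus what the tracker currently lists.
ALL_KEYWORDS = {f"op_{i}" for i in range(30)}   # 30 independent tasks
TRACKED = {f"op_{i}" for i in range(10)}        # only 10 written down

# Set difference finds what's missing from the list.
missing = sorted(ALL_KEYWORDS - TRACKED)
print(f"{len(missing)} tasks missing from the tracker:")
for name in missing:
    print(" -", name)
```

The point isn't the script itself; it's that "is the list complete?" is a cheap, answerable question once both sides are written down.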
Do you have a system to evaluate how much ambiguity is remaining?
So the system is checklists. If a question exists, a list is missing. I would happily put that on a t-shirt. If a question exists, a list is missing. And I've seen this recently.
I was just giving somebody advice on... there's the end of the project, the almost-end of the project, the transition from done to done done, where everybody knows that the work is winding down, but nobody will say out loud, "we're finished."
And my question is, where's the list?
If the list is empty and we're not done, what's missing from the list? If the list is not empty, but we feel done, do we still need those things on the list?
And that's kind of it, right? Only one of those two things can be the case. And then the work that nobody particularly wants to do, and that often is unaccounted for, is making the list. Just write it down, because now everybody has one place to look to know when we are finished.
And if ever there's a mismatch between the list and the feeling of being finished or the ability to click the launch button, turn off the experiment, it's ready, then fix the list.
On deadlines and ambiguity
This is another thing that I've appropriated from my aphorisms guy. His name is Paw, by the way, P-A-W. Nobody likes deadlines. Nobody likes to pick a date.
For whatever human reasons: uncertainty that you will hit it, and the feeling of embarrassment if you fail. The idea that the date is being chosen by somebody else, but engineering has not yet had input and we don't know if we can do that... all kinds of things, whatever.
The deadline doesn't matter. Unless you have a contract with the government and your funding will be pulled if it's not June first, the deadline is usually fully under your control. Or at least in many cases.
And I haven't worked at jobs that typically have customers waiting on us. You have freelanced and contracted, so perhaps you have a different perspective on deadlines.
But, so the framework here is you don't start with a deadline. You start with a target date. Here's what you're shooting for, but everybody needs to be clear that that's a target date, that is not a deadline. And it will move if it needs to move.
The only reason it's not a deadline is there are things we don't know. We have some uncertainties or some ambiguities or whatever. That, by the way, is a checklist.
So you write up your tech spec, you make your tickets, you do whatever your process is. And then you say, OK: we've done our planning poker, and we have a hundred things, they're going to take three days each, and we have 10 people. So that comes down to whatever the math is. Given that, our target date is this. But here's our list of stuff we don't know.
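The back-of-the-envelope math in that example works out like this, assuming perfect parallelism and no coordination overhead, which real schedules never have:

```python
tasks = 100         # from planning poker
days_per_task = 3   # estimate per task
people = 10

# Ideal-case schedule: every person stays busy, nothing blocks.
total_effort_days = tasks * days_per_task      # 300 person-days
calendar_days = total_effort_days / people     # 30 working days
weeks = calendar_days / 5                      # 6 five-day weeks
print(f"{calendar_days:.0f} working days, about {weeks:.0f} weeks to target")
```

That ideal number is exactly why it's a target and not a deadline: the unknowns on the list are what stand between the arithmetic and a date you can commit to.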
Some of it is stuff that we need to go prototype to discover, like how this code works. Some of it is stuff that we need to talk to some stakeholders and decide. Some of it is user research that we need to do to see if anybody actually likes this approach. Whatever the uncertainty is, that's on the list. And that's why this is not a deadline yet, it's target.
Now you have no ambiguity. The uncertainty is there, but it's clear what the uncertainty is. And there will always be an item of like, "shit we didn't think of." Everybody just needs to be okay with that. You'll find them. If you never find them, you didn't need them.
But now your stakeholders are the people who want the deadline, right? So this is a Paw saying: "Every target date deserves to grow up into a deadline."
At some point you're supposed to launch this thing. It's going to happen on a date. If you can know that date beforehand, that's lovely, right? Because then you can talk to marketing and whoever and do your whole go-to-market thing.
So every target date deserves to graduate into a deadline. The way it graduates is by eliminating your uncertainties.
So any stakeholder is now fully within their rights to come to you and say, "how is that list?" And if the list is not getting smaller, you're not doing your job.
And so the list itself is a set of tickets to do, a set of tasks to perform. Like, "okay, I'm going to go prototype this. I'm going to figure out some things; that's going to turn into task tracker tickets, and now this uncertainty is gone." When there are no longer any uncertainties, you have a deadline.
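That graduation rule is simple enough to state as a data structure. A toy sketch of the framework, with invented example uncertainties, just to make the invariant concrete: the date is a deadline exactly when the uncertainty list is empty.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ProjectDate:
    """A target date plus the list of unknowns keeping it from
    being a deadline."""
    target: date
    uncertainties: list[str] = field(default_factory=list)

    @property
    def is_deadline(self) -> bool:
        # A target graduates to a deadline only when nothing is unknown.
        return not self.uncertainties

    def resolve(self, item: str) -> None:
        self.uncertainties.remove(item)

plan = ProjectDate(date(2025, 6, 1),
                   ["prototype the auth flow", "user research on approach"])
print(plan.is_deadline)            # False: still a target
plan.resolve("prototype the auth flow")
plan.resolve("user research on approach")
print(plan.is_deadline)            # True: it has graduated
```

The stakeholder question "how is that list?" then has a precise answer: the length of `uncertainties`, and whether it's shrinking.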
Links to Kristján