Meetings as a forcing function

Every two weeks we do a review of progress against our Objectives and Key Results. It takes between 60 and 90 minutes and involves about 20 people. We go over the key results one by one, describe their current status, assess whether we’re on track to achieving them, and identify next steps.

Well, that’s not quite right. Key result owners populate a shared document before the meeting where all of those things are described. Then they’re stated aloud in the meeting. There’s sometimes discussion, but nothing that couldn’t be handled via email or in the shared document itself. A naive observer in this meeting would probably recommend this meeting be cancelled. This naive observer would be wrong.

When each person states their update aloud, they experience it in a way they wouldn’t if it was just words on a page. That also means that, before the meeting, they anticipate what they’ll do and how they’ll experience it. They do not want to show up unprepared. There are 20 of their peers who will be listening to them and possibly asking questions. If they haven’t done their work, it’ll be obvious, and it’ll be felt very differently than if they had simply not filled in some text in a document for people to read.

A less naive observer might also recommend that this meeting be cancelled. This less naive observer would recognize that simply cancelling the meeting would be insufficient. The forcing function of the meeting would need to be replaced with some other mechanism. That could work. It could be better. But it’s not free, and it’s not automatic. The value of this meeting is not what happens in the meeting. It’s what happens before the meeting because of the meeting.

Skills are overrated

Skills matter. They definitely matter. They just don’t matter as much as people seem to think. That’s because many important and valuable abilities are not a function of knowledge. They’re a function of attitude.

We’ve all been in useless meetings. What does it take to run an effective meeting? You don’t need advanced graduate work. You don’t need decades of experience. What you need is a clear purpose and agenda and the discipline to stick to them. Easier said than done? Sure. But the part that’s hard isn’t knowledge. It’s attitude.

What about listening? Being a good listener is a valuable ability. Exactly how much is involved in doing that? Look at the person who is talking. Hear what they’re saying. Think about what they’re saying. Don’t talk. You don’t need an executive MBA for this. You don’t have to be in the fast track high potential development program to get access to a rare opportunity to grow. You just need to control yourself and pay attention.

Then there’s being accountable to stakeholders. That’s also simple. You remember what you told them. You look at what you did and didn’t do, then write an email and click Send. And you do that every week or every month or whatever the right cadence is. You don’t need an executive coach. You don’t need a license from the state. You just need to value it, make time for it, and do it.

What about producing high quality code? Surely that requires otherworldly talent, a mastery of complex algorithms, and the ability to read binary faster than most people read prose. Except… it doesn’t. I’ve known plenty of incredibly smart people who wrote bad code. I’ve known numerous moderately capable programmers, including myself, who produced high quality code. Most of the difference between producing poor code and producing excellent code is not whether you started programming at age 7 on your mom’s computer or whether you got a computer science degree at MIT. It turns out most of the difference is the same as the difference between poor work and excellent work anywhere: improving your work until it is excellent instead of letting it be anything less. Perhaps you have to gain some knowledge and experience to understand what makes code good and bad, but mostly this is not a skill. This is an attitude.
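To make the point concrete, here’s a small hypothetical sketch in Python (the function and its names are invented for illustration). Both versions produce the same result; the second is simply the first after the kind of revision that requires no rare talent, only the refusal to stop at “it works.”

```python
# First draft: correct, but opaque. Nothing here is beyond a beginner,
# and nothing here is wrong. It's just unrevised.
def f(d):
    r = []
    for k in d:
        if d[k] > 0:
            r.append(k)
    r.sort()
    return r

# After revision: same behavior, but the intent is stated, the names
# carry meaning, and the logic reads in one line.
def positive_keys(counts):
    """Return the keys whose counts are positive, in sorted order."""
    return sorted(key for key, count in counts.items() if count > 0)

counts = {"b": 2, "a": 1, "c": 0}
assert f(counts) == positive_keys(counts) == ["a", "b"]
```

The gap between the two is not knowledge of algorithms; it is the decision to go back over working code and make it excellent.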

Skills are important. They are. But what skills do is mostly establish what the potential is. They’re like a driving test to get a license. They show that you can drive well. But will you drive well when you have your license and don’t have a test proctor in the passenger seat? You can tell just by watching the roads for five minutes how weak of a guarantee that is. The skills are required. They’re not sufficient. And it turns out for a lot of things, the skills are really, really easy. When someone still doesn’t do what’s needed as well as it’s needed? The answer is attitude. They just don’t want to.

Defining different job levels

Different levels of a career track have different expectations. One obvious difference is in the scope and scale of the job. While significant, those are not the most important differences.

There are at least two more important ways that the job changes at higher levels. The first is in how the job is specified. At the lowest levels, what you specify is activities or tasks. Do this, do that, etc. Individuals at these levels will be productive, but often the purpose of these levels is learning by doing. Typically at this stage they’re executing someone else’s plan.

As you ascend to the middle levels, the job should specify the output. What should you be producing or delivering? To what standard? You are expected to know enough of the how at this point that the how doesn’t need to be given to you. You get the goal, and you’re expected to figure out the how for yourself. That’s in part to reduce the load on your manager, but it’s also because you may know how to do it better than your manager does.

Once you get to the higher levels, the job becomes about outcomes. You define what should be produced and delivered to achieve the desired outcomes. It doesn’t matter if you define “good” deliverables or execute well. If it doesn’t achieve the desired outcome, you failed.

The second way the job specification changes is in how specific it is. Lower levels will enumerate precisely what activities are to be performed and how. Definitions of the middle levels, focusing on outputs, will tend to be shorter. Briefest of all will be the descriptions of jobs at the most senior levels. At the top is the CEO’s job, which comes down to “make the company successful.”

Your flawed future self

We’ve all been there:

  • “We’ll do better next time.”
  • “We can refactor that later.”
  • “Let’s just ship it now and clean up the tests later.”
  • “We’re just using that database until we prove the concept, then we’ll switch to something solid.”
  • “We won’t make that mistake again.”

You won’t do better next time. You won’t refactor that code until it’s caused you far more pain than gain. You won’t clean up the tests. You’ll stick with the crappy database until your business is on the line. You will make that mistake again.

We all fall prey to this form of optimism bias, the baseless belief that things will work out better than all the evidence says they will. We are particularly vulnerable when it comes to beliefs about ourselves. We can barely admit we’ve made mistakes in the past, so contemplating the possibility of making mistakes in the future is nearly impossible.

When you make a mistake and suffer the consequences, there are three ways to react:

  1. Do nothing
  2. Half-assed improvements
  3. Real improvement

The problem is that a lot of half-assed improvements look whole-assed when viewed through rose-tinted glasses. You give your future self a little bit too much of the benefit of the doubt. They’ll be smarter, more knowledgeable, less easily distracted, more careful about following directions, and all the things we wish we would be but never are. For instance, you might add items to a checklist, because your future self will always look up the checklist, will always find it, will carefully follow every item in exactly the right order without taking any shortcuts, etc. This unrealistic optimism leads us to believe that we can address complex problems that we failed to solve today by adding complexity tomorrow.

Dan Milstein, then of Hubspot, famously said, “Let’s plan for a future where we’re all as stupid as we are today.” Yes, learning is important. Yes, you’re getting better every day. But the way in which you’re going to be better is unpredictable, and you’re not guaranteed to be better in all the ways you need to be exactly when you need to be. If you really want to deal with this problem better in the future, you have to accept that your future self is going to be basically just as flawed as your present self and construct your system to work in spite of those flaws.

Grant me efficiency and elegance (but not yet)

Software engineers love efficiency. They love elegance. It’s what they’re taught, and it’s what attracts a lot of them to the field. The problem is they overdo it.

These days I’m not up close to the technology. And yet often I’m able to see solutions that the hands-on people don’t. These are often wasteful or ugly solutions, but they have one big advantage: they’re doable, in some cases more so than anything the hands-on people have come up with.

The reason I see what they don’t is that software engineers often prize efficiency and elegance so much that they reject any solution that isn’t both. They may not even be able to conceive of such solutions. Unfortunately, in too many situations, that leaves them with nothing. That’s because there either isn’t an efficient and elegant solution, or nobody has thought of one.

Efficiency and elegance matter, but they matter second. What comes first is effectiveness. If you only have one candidate solution that is effective, efficiency and elegance are irrelevant. After all, if you reject your only effective solution on those grounds, you have nothing remaining. The same thing can happen if you have multiple effective solutions, none of which is efficient or elegant.

The answer is not to ignore those considerations. The answer is to apply them only if they still leave you something you can use. Use efficiency and elegance to narrow down the candidate solutions, but not if they eliminate every option. After all, solving the problem in a mediocre way is typically better than not solving it at all.

Explaining yourself for fun and profit

I have highly capable team members, but sometimes they need me to solve a problem or make a decision. Describing a solution or communicating a decision can often be brief, but I spend a lot of time describing how I arrived at the solution or decision. There are a few reasons for that.

The first is that it gives me a way to teach the other person. If I’m doing my job well, my team members are becoming steadily more capable, and I hand off more and more responsibility to them. Certainly they’ll learn from example and experience, but they’ll learn faster if I describe what I see as the key principles and how to apply them to a specific situation. That’s better than only describing principles in the abstract or only giving them answers where they have to infer the reasoning.

The second reason is that I am frequently wrong. I may be missing important facts, I may misunderstand them, or I may have a flaw in my reasoning. If I just deliver an answer, it’s difficult to tell that I went wrong. However, if I explain the facts as I understood them and how I interpreted them, then my team member can observe my mistake and share it with me. I learn something, and we end up with a better decision.

Third, and relatedly, it makes it easier to invite disagreement. If all the other person has is my conclusion, there’s a finality and opacity to it that makes it hard to engage. However, if I describe my thinking, there’s more for the other person to grab on to if they think we should go in a different direction.

Finally, I believe it shows respect. The people on my team are motivated, capable professionals. They’re not flunkies I expect to do my bidding without question. I don’t want to be given orders, and I don’t want to give them. By investing my time in explaining myself, teaching them, and inviting criticism and disagreement, I show them their opinions and perspectives matter to me. It’s not about them executing my brilliant ideas, but about the two of us putting our heads together to solve problems as partners.

The purpose of interviewing

I’ve had many discussions with many people about how an interview process should be constructed. As part of that, I try to understand the other person’s why. What, in their view, is the purpose of interviewing? Take a minute and think about your answer, then read on.

Among the answers I’ve heard and read are to:

  • decide if we should hire the candidate
  • assess the candidate’s fit
  • understand the candidate’s skills
  • assess the candidate’s abilities
  • find out if there is rapport
  • verify the resume

The poverty of these answers is depressing. What’s so bad? Some of these answers are too shallow and beg the question, e.g. finding out if there is rapport. Why do we want to find out if there’s rapport? Others go too far, establishing an expectation of an interview that it cannot achieve, e.g. deciding if you should hire a candidate. An interview can’t do that, and any advice that asserts it can is not actionable. That one in particular is tautological.

There’s ample advice, some good and some bad, about how to interview for specific roles. There’s also a lot of general advice, similarly a mix of good and bad. Almost never are there clearly stated goals that achieve meaningful outcomes while being specific enough to be useful. The advice is just too much of a cargo cult of dogmas, copying, and shallow thinking. Almost nobody seems to have thought deeply about this from a first principles perspective. Absent that clarity, you will struggle to define a good process to hire the right candidates, you will struggle to assess whether it’s working, and you will struggle to improve it.

The good news is that there is a right answer. There is only one purpose to interviewing, which is the same as any other candidate assessment activity: to gather information to help predict future job performance. No more and no less. Every single word in that purpose is necessary, and it is sufficient to determine all of the activities in the candidate assessment. In slightly altered order:

  • to gather information: interviewing is a process of discovering, refining, and verifying information. It is not a decision-making tool. It can be a sales pitch and a relationship-builder, but those are purely secondary. If they happen, good, but don’t sacrifice the primary goal to achieve them.
  • future job performance: you don’t care about someone’s past performance. You want to know what they’ll do for you. You also don’t want to turn this into a binary question of hire versus no hire. For one, the interview can’t do that; a person has to do that. For another, the expectations can be somewhat flexible; perhaps you’re willing to lower expectations for a candidate who comes more cheaply. In addition, you’re rarely considering only one candidate for a role. The question isn’t whether to hire a particular candidate but rather which of the candidates exceeding the minimum you most prefer.
  • to help predict: you don’t care about someone’s past knowledge, skills, projects, education, etc. for their own sake. These are just a means to an end: making a decent prediction about how well the candidate will do the job. No single piece of information will predict everything, nor will any prediction be perfect, hence “help predict.”

You may have found yourself reading the above and thinking, “that’s what I meant,” or “we do that,” or something else. I’m 99% sure you were sort of right but also sort of wrong. This is something where “close enough” is actually something else. Doing something vaguely like this is not at all the same as doing exactly this. It’s like saying the Louvre is a museum.

You may also have found yourself reading the above and thinking it’s obvious. I’m 99% sure it wasn’t, a certainty based on the number of times I’ve heard the right answer versus all the other ones. I’ve found that this is one of those truths that is obscure beforehand and then obvious afterward. That doesn’t mean you knew it all along.

If you cannot explain how an activity in your assessment process provides information that predicts future job performance, you should discard it. Maybe it doesn’t provide information, maybe it’s backward looking, or whatever. On the other hand, if you have an important element of job performance that cannot be predicted from the information your process collects, then you have a problem. Maybe there are elements that are impossible to predict, but I’ve never seen one. All I’ve seen are more predictive, less predictive, and useless.

None of this means you can’t have a sales pitch in your schedule. It’s just not part of assessing the candidate. Your interview process can have multiple activities in it, but it can’t do them well without you having crystal clear goals for each one. The most important one? Gathering information that helps predict future job performance.

Delegating cognitive overhead

A naive impression of delegation is that it’s about the work. Suppose you’re hiring people to landscape your front yard. You might tell them to plant a maple here, spread mulch there, build a wall around the oak, remove the hydrangea, and so forth. Then the crew digs, scrapes, plants, pulls, etc. according to your directions, while you keep your hands clean and your brow unsweated.

This is not how it goes in knowledge work. In knowledge work, what is being delegated is thinking. If someone is putting together a proposal, creating a financial model, constructing a product roadmap, drawing a wireframe, or building a new feature, the work is not the typing or the drawing. The work is the thinking. The work is taking an abstract idea and turning it into something detailed that is the best expression of that idea. If you’re frequently asking for guidance, then you’re not accomplishing the goal the delegator had when they gave you the task. The hard part is the thinking! If they still have to do a lot of the thinking, then the delegation failed. What the delegator wants is to never have to think deeply about this task again. That includes worrying about whether it’s getting done.

To be a successful delegate, you have to discover the questions and answer most of them yourself. For the ones that remain, you have to ask for guidance briefly and in a way that minimizes the cognitive load while also extracting the maximum information both for those questions and any future ones. And then you have to regularly show that you are making the expected level of progress, but with no more information than is needed. I want to be able to live in a world where 99.9% of the time this project does not exist. When I delegate something, the ideal outcome is I hear about it for about one minute every few weeks, and all that I need to say is an acknowledgment that I’ve heard and perhaps an affirmation of the delegate’s effort and talent. More than that and it’s not delegation as much as it is partnership or (ugh) supervision. I want to live with the benefits of this project and none of the costs except one, which I’ll happily pay: employing the delegate.

Agree in Principle, Disagree in Particular

Almost nobody thinks they’re perfect. We all know we sometimes misunderstand, sometimes misjudge, and otherwise make mistakes. And yet, people still have difficulty admitting they are wrong. I imagine the dialogue going something like:

Person 1: Do you ever make mistakes?
Person 2: Of course, I’m human.
Person 1: Well, what about that time you cut the budget for the redesign project?
Person 2: That was the right thing to do because …
Person 1: And when you fired Jared?
Person 2: I know people disagreed, but Jared had to go because …
Person 1: What about when you promised the client we’d deliver by November 15?
Person 2: Sure, we ended up slipping into January, but if I hadn’t done that …

Somehow what happens is that we agree in principle but disagree in particular for every single situation. That’s odd, right? How is it we can acknowledge that we make mistakes and yet never find a mistake?

The answer is ego. Admitting we make mistakes in the abstract is easy. Admitting that we made a specific mistake in a specific situation means accepting responsibility for the specific negative consequences that resulted. It means admitting specific flaws in ourselves. When that happens, it’s too real.

The fact is that the reasoning that led us to make the mistake is embedded in what we believe, and we probably believe the same things now as when we made the mistake. The problem with the mistake is that it made sense at the time, and so it will also make sense later, even when the results aren’t what we intended. The thinking that made us make that mistake is going to make it hard for us to understand the thinking that recognizes the mistake. To actually manifest our stated belief in real life, we have to accept that it’s when we feel that we’re right that we’re most likely to be wrong.

I had a meeting with a long-time colleague recently. He heard me out on an idea, and he cautioned me that the way I expressed it might be understood a different way and thus have a different effect than I intended. His perspective didn’t make sense or sound valid to me, which made me want to dismiss it. But that is exactly why I needed to hear it. If it had made sense, then I would have expressed the idea differently, and I wouldn’t need him to tell me what he did. The fact that it felt right was a strong signal I needed to hear how it might be wrong.

A snowflake becomes an avalanche

I’ve sat in on a lot of presentations: design reviews, product pitches, budget requests, etc. Usually it’s one or two people presenting to many more. This creates an asymmetry between how it feels to provide feedback and how it feels to receive it.

Suppose you’re sitting in a design review with a dozen other senior people. Someone from another team is describing how they want to implement a new system. You think it’s mostly good, but there’s one thing that you see a problem with. Sure there are more things that could be better, but there’s one thing that’s genuinely important, and you don’t want to be a bother. So you bring up your one point, have a discussion, and then things move on. That seems healthy and manageable, right?

Now imagine that everyone else attending the design review thinks the same way you do. There are thirteen people, each of whom is bringing up one criticism. They could argue about more, but they’re being pragmatic and trying to avoid being jerks. Each one of them feels like they’ve been productive and restrained. The person on the receiving end? They just got thirteen different criticisms. It feels like being in front of a firing squad. Maybe they get overwhelmed, maybe they show it. Everyone else is confused by the reaction. That’s because they’re not under the dog pile. They just feel their own weight. They don’t feel everyone else’s weight crushing them.

What’s the answer? Well… it’s not clear. Those thirteen questions are probably legitimate, even if they don’t all reflect genuinely serious problems. Clearly they should be addressed. What the critics need to remember is that they’re not coming across as separate individuals with single criticisms. Even if your point is essential and productively stated, it’s going to be received negatively and as part of a barrage. That means you have to soften your criticism and really focus on assisting with the development of the solution, far more than you would in a one-on-one conversation, even if to you it feels exactly the same.