4 Hiring Myths Common in HackerNews Discussions

Another day, another HackerNews discussion about hiring being broken. The most recent one I saw was triggered by a blog post by the formidable Aline Lerner (disclaimer: Aline is a friend and we collaborated on a hiring book last year). Now, I 100% agree that hiring is broken, and Aline’s post is really thoughtful. In fact, a lot of “hiring is broken” articles are thoughtful.

But the discussion threads are something else: they miss the point of the article. If hiring is broken, the discussion threads are even more broken. And they’re really repetitive. They always contain grains of truth, but inevitably reach conclusions that are simplistic and, in my opinion, create a pretty bad attitude in the tech industry.

Conclusion #1: “Hiring sucks for candidates, but hiring managers can do what they want”

The truth is that hiring is hard for everyone. There’s no question about it. It’s hard for candidates and hiring managers alike. Sure, FAANGs and the startup-du-jour might have a leg up, but most people who are hiring are trying to hire at a non-FAANG, non-sexy company. If you’ve never done it, you should try it at some point in your career. It’s an incredibly humbling experience. Or, at the very least, find a friend who’s spent time on hiring, and ask them for their favorite battle story. They’ve been ghosted by candidates. They’ve spent hours trying to convince people to talk to them. They’ve spent even more time getting candidates to the offer stage, only to lose out to the FAANG / startup-du-jour.

And yes, on balance, power and information asymmetry work out in favor of the companies hiring. And that asymmetry is much larger with FAANGs. But even FAANGs have to invest a tremendous amount of time and energy into hiring. It’s not really easy for anyone.

Especially if you want to do it well. Ask any successful leader (entrepreneur, manager) what they spend most of their time on, and it’ll either involve a large chunk spent on hiring (if they appreciate the problem and give it the attention it deserves) or dealing with the consequences of bad hiring (if they don’t).

Conclusion #2: “Hiring is a crap-shoot—it’s a roll of the dice”

I strongly disagree with this one. When writing the Holloway Guide to Technical Recruiting and Hiring, I got to interview dozens of really thoughtful hiring managers and recruiters. They were really good at their jobs. And there were some common themes. They were thoughtful about every step of their process. They kept their process balanced and fair, holding a high bar but respecting candidates and their time. They didn’t chase the same pool of candidates everyone else was chasing—instead, they found non-traditional ways to discover really talented and motivated people who weren’t in the pool of usual suspects. They were thoughtful about what signals they were looking for and how best to assess them. And, they deeply understood their team’s needs, and candidates’ needs, and were really good at deciding when there was or wasn’t a fit. But most of all, they were effective: they built really talented teams.

There are a handful of companies that have built amazing hiring engines, and the proof is that they’ve been able to put together really strong teams. You can generally tell that if a person worked at a certain company at a certain time, they’re probably incredibly intelligent and incredibly motivated (some examples are Google, Facebook, Stripe, and Dropbox at different points in time). There will always be noise. Even the best hiring managers will sometimes make hiring mistakes. And of course, even the best engineers may not be a fit for every role or every company.

Again, hiring is hard. But there is not a shred of doubt in my mind that if you are thoughtful about it, you can hire well. And really, you don’t need to be perfect at it. You just need to be better than the rest.

Conclusion #3: “FAANGs suck at hiring”

This one has some truth to it, but it’s a lot more subtle than “FAANGs suck at hiring”. Because let’s face it, they do hire really smart people. Some of the smartest people I know are at FAANGs right now. So let’s unpack that statement a little more.

FAANGs do suck at parts of hiring, like their candidate experience. They can be really slow at making hiring decisions. Their hiring process might be tedious and seem arbitrary. But they usually can get away with it, and you probably can’t! They’ve got a strong brand, interesting technical challenges (interesting for some people, at least), and a lot of money. In fact, one FAANG VP of Engineering told me: “our process is what we can get away with”. To the point that they can even play it off as a positive: “our process is slow and long because we are very selective”.

And look, I’m sure FAANGs lose some talented candidates who get turned off by their “you’d-be-blessed-to-work-with-us” attitude. They definitely have a lot of room for improvement. But at the end of the day, they’re operating a process that delivers large quantities of really smart people at scale. In fact, I’d argue their internal processes around strategy, performance management, promotions, etc cause far more damage than broken hiring. If you lose out on hiring one talented person when you have thousands applying to work for you, that’s one story; but if you hire someone really talented and driven, and they work for you for 6 to 12 months, don’t meet their potential, and leave in bitter frustration… well, that’s a subject for another post.

“But”, people go on, “FAANGs also don’t know how to interview!” Which brings me to conclusion #4.

Conclusion #4: “Whiteboard and algo/coding interviews suck”

Again, this one has some truth to it, but if you just stop at the above statement, you miss the point.

Algo/coding interviews are one of the primary hiring mechanisms used by FAANG companies. And they are incredibly unpopular—at least in discussion threads. But big companies have spent years looking at their hiring data and feeding that back into their hiring process (coining the term “people analytics” along the way).

The argument against them is usually a combination of:

  • they really only assess pattern-matching skills (map a problem to something you’ve seen before)
  • they only assess willingness to spend time preparing for these types of interviews

These are fair criticisms, but that doesn’t mean these interviews are actually terrible. I mean, they might be terrible for you if you’re interviewing and you don’t get the job. You’re probably a brilliant engineer, and I agree, these interviews certainly don’t fully assess your ability (or maybe you’re a shit engineer, I don’t know you personally). In any case, the leap from “this interview sucked for me” to “this interview sucks” is still pretty big.

If you’re a large tech co with a big brand and a salary scale that ranks at the top of Levels.fyi, you probably get a lot of applications. So a good interview process is one that weeds out people who wouldn’t do well at your company. To do well at a large tech company, you need (and I’m painting with a really broad brush, but this is true for 90% of roles at these companies):

  1. Some problem-solving skill that’s a mix of raw intelligence and the ability to solve problems by pattern-matching to things you’ve seen before.
  2. Ability/commitment to work on something that may not always be that intrinsically motivating, in the context of getting/maintaining a well-paying job at a large, known company.

Hopefully you can see where I’m going with this. Basically, the very criticisms thrown at these types of interviews are the reason they work well for these companies. They’re a good proxy for the work you’d be doing there and how willing you are to do it. If you’re good at pattern matching, and are willing to invest effort into practicing to get one of these jobs, you’ll probably do well at the job.

Not that there’s anything wrong with that type of work. I spent several years at big tech co’s, and the work was intellectually stimulating most of the time. But a lot of times it wasn’t. It was a lot of pattern-matching. Looking at how someone else had solved a problem in a different part of the code-base, and adapting that to my use-case.

On the other hand, if you’re an engineer (no matter how brilliant) who struggles with being told what to do or doing work that you can’t immediately connect to something intrinsically motivating to you, that FAANG interview just did both you and the company a favor by weeding you out of the process.

So the truth is, there is no single “best interview technique”. In our book, we wrote several chapters about different interviewing techniques and their pros and cons. In-person algo/coding interviews on a whiteboard, in-person interviews where you work in an existing code base, take-home interviews, pairing together, having a trial period, etc all have pros and cons. The trick is finding a technique that works for both the company and the candidate.

And that can really differ from company to company and candidate to candidate. A VP at Netflix told me about how they had a really strong candidate come in, but when asked to do a whiteboard-type interview, informed them (politely) that they might as well just reject him then. He was no good at whiteboard interviews… But if they allowed him to go home and write some code, he’d be happy to talk through it. And since then, many Netflix teams have offered candidates the choice of doing a take home.

And really, any interview format can suck. It can fail to assess a candidate for the things a company needs and it can be a negative candidate experience. Which would you rather have:

  • A whiteboard interview with heavy algorithms for a role where that knowledge (or ability to develop that knowledge) isn’t critical, delivered by an apathetic engineer who doesn’t care about their job.
  • A poorly-designed take-home, requiring skills that you don’t have and won’t need for the job, and that you spend hours thinking through and working on, send in, and get rejected without getting any feedback.

Probably neither.

At my current startup (Monarch Money), we give candidates the choice of a real-time CoderPad interview, a take-home interview, or showing us and talking through a representative code sample. Most people choose the take-home, and we like that—based on where we are as a company (seed-stage startup), how we operate (distributed even before Covid), etc, it ends up being a better proxy for the work they’d do on the job. In any case, we do our best to only do this once we and they believe there might be a strong fit, and when we do it, we try to give people feedback so that even if they don’t get the job, they at least get something out of it. Will we still do this at scale? Almost definitely not. Once we have multiple teams and hiring managers, we’ll probably have to rely on more standardization, which will probably push us towards more standard interviews (though I hope to resist that as long as we can!). And we’ll try to maintain the same principles (being respectful of people’s time, looking for a proper fit, etc).

So here’s what sucks about hiring

After all that, here’s what I think actually sucks about hiring:

  • Diversity. We really, really suck at diversity. We’re getting better, but we have a long way to go. Most of the industry chases the same candidates and assesses them in the same way.
  • Generally unfair practices. In cases where companies have power and candidates don’t, things can get really unfair. Lack of diversity is just one side-effect of this, others include poor candidate experiences, unfair compensation, and many others.
  • Short-termism. Recruiters and hiring managers that just want to fill a role at any cost, without thinking about whether there really is a fit or not. Many recruiters work on contingency, and most of them suck. The really good ones are awesome, but much of the well is poisoned. Hiring managers can be the same when they’re under pressure to hire.
  • General ineptitude. Sometimes companies don’t know what they’re looking for, or aren’t internally aligned on it. Sometimes they just have broken processes, where they can’t keep track of who they’re talking to and what stage they’re at. Sometimes the engineers doing the interviews don’t give two shits about the interview or the company they work at. And often, companies are just tremendously indecisive, which makes them really slow to decide, or leads them to reject candidates simply because they can’t make up their minds.

As a company, the best you can do is be thoughtful and fair with your process. It’s not easy, but it’s doable. And as a candidate, the best you can do is try to find and work with companies that are thoughtful and fair with their hiring processes if you have that privilege.

If you thought this post was insightful, we’ve got a book full of this type of thinking that I worked with a really awesome group of contributors on. Check out the Holloway Guide to Technical Recruiting and Hiring.

The Shape of Technical Debt

The term technical debt is so common that you’d be hard-pressed to find anyone in the software world that hasn’t heard of it. But what is technical debt? There are a few frameworks out there, so I’ll list them quickly, but then I’d like to present one I’ve found especially useful with the teams I work on or advise: inertia vs. interrupts.

Existing Frameworks

First, it’s helpful to define what technical debt is not. Bob Martin has a great article on this called A Mess is not a Technical Debt: “a mess is just a mess”. Under constraints, you can make short-term decisions that aren’t best for the long term, but you should still make them in a prudent way. Some decisions are just bad.

Martin Fowler expands on this by creating a Technical Debt Quadrant, with two dimensions: deliberate vs. inadvertent, and reckless vs. prudent.

Ideally, you’re in the right half of this two-by-two: always prudent, because there’s no excuse for being reckless. But, if you’re low on experience for the type of system you’re designing, you might be prudent and inadvertent. Martin Fowler argues that prudent / inadvertent is really common with great designers, because they are always learning.

One of my favorite frameworks is from the Mythical Man-Month’s No Silver Bullet essay. Fred Brooks breaks down complexity (which is a little different from technical debt) into two dimensions: accidental complexity and essential complexity. Essential complexity is caused by the complexity of the underlying problem that software solves—accidental complexity is introduced by the implementation approach developers take. In this world, technical debt is, essentially, accidental complexity.

A final framework defines technical debt as the gap between how software is structured and how you would structure it if you were writing it from scratch today (I don’t remember where I read this definition, please tell me if you do!). In this world, technical debt is a little more fluid, because it can increase simply by your team thinking up a better architecture or design.

A different framework—manifestations

These frameworks are all great, and in fact, you can go even deeper to define what technical debt looks like. Books like Refactoring and Clean Code have done this well. But usually, what you need is something a little more concrete that you can make decisions with.

So the framework I like to use is to look at manifestations of technical debt: what impact does technical debt have? By looking at how it actually impacts your ability to deliver product, you can make decisions in a less subjective way.

At a high-level, technical debt can manifest itself in two ways:

  • Interrupts: Interrupts are when existing systems are taxing your time through reactive work like maintenance, bugs, fire-fighting, etc, so you spend less time being able to change a system. In other words, writing code now that creates interrupts in the future means you (or someone else) will be able to spend less time on forward work in the future. Both the quantity and severity of interrupts matter. Interrupts are particularly hazardous because they usually result in costly forced context switching. So you find your team, over time, spending more and more time responding to incidents than building product.
  • Inertia: Inertia means that a system is hard to make forward changes to (because it is hard to understand or reason about, because it’s not modular and hence hard to change, etc). This makes forward work difficult for you (i.e., even when you can spend time doing forward work, it’s really slow) or for others (e.g., because the system is hard to understand, it is a tax on the time of people who need to learn more about it, as well as on people who need to explain to others how it works).
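
To make the interrupt tax concrete, here’s a rough back-of-the-envelope sketch in Python. All the numbers are invented for illustration; a real team would pull them from its incident tracker and calendar:

```python
# Illustrative only: a rough "interrupt tax" on a team's forward work.
team_hours_per_week = 5 * 40        # five engineers, 40 hours each
interrupt_hours = 45                # pages, bug triage, fire-fighting
context_switch_penalty = 0.25       # each interrupt hour costs extra focus time

effective_interrupt_cost = interrupt_hours * (1 + context_switch_penalty)
forward_hours = team_hours_per_week - effective_interrupt_cost

print(f"Capacity lost to interrupts: {effective_interrupt_cost / team_hours_per_week:.0%}")
print(f"Hours left for forward work: {forward_hours:.0f}")
```

Even with made-up numbers, the exercise moves the conversation away from “this code is bad” and toward “this code costs us over a quarter of our capacity.”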

Why look at the manifestations? Three reasons.

First, it helps identify the tangible symptoms and effects of technical debt, and helps you avoid theorizing. For instance, most teams have at least one part of their code base that they feel pretty bad about, and would love to spend time fixing. Is it worth fixing now? Sometimes, that code is well-isolated and pretty functional. It isn’t creating any interrupts. No one is changing it or will need to change it in the near future. So yes, it is technical debt, and it might have inertia, but inertia only matters if that code needs to be changed.

Second, by identifying the effects of technical debt, you can decide how best to fix it. For instance, if a piece of code is really hard to change because it suffers from change amplification, and you expect to be making a lot of changes to it, it’s probably worth refactoring. If a piece of code is creating a lot of interrupts and not creating value for users, you might want to just get rid of it entirely.
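
As a concrete (and entirely hypothetical) example of change amplification, here’s a small Python sketch. The tax-rate logic and function names are invented for illustration; the point is that when the same rule is duplicated, one business change means touching every copy:

```python
# Before: the same rule (8% tax) is duplicated, so changing the rate
# means finding and editing every copy.
def invoice_total(subtotal):
    return subtotal * 1.08

def quote_total(subtotal):
    return subtotal * 1.08

# After refactoring: the rule lives in one place, so a rate change
# touches a single line.
TAX_RATE = 0.08

def with_tax(subtotal):
    return subtotal * (1 + TAX_RATE)

def invoice_total_refactored(subtotal):
    return with_tax(subtotal)

def quote_total_refactored(subtotal):
    return with_tax(subtotal)
```

Whether that refactor is worth doing now depends on the manifestation: if no one expects to touch the tax logic again, the duplication is debt with inertia but no interrupts, and it can probably wait.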

Finally, it helps avoid ideological discussions. We’ve all worked with someone who is ideological about their code. “X is shit, Y is great”. By looking at manifestations, you’re forced to be thoughtful and justify any claim you make. “X is shit” doesn’t fly any more. You have to say “X is bad, because when we want to do Z in the future, it will be really difficult.”

So sometimes, it’s useful to start with the symptoms before the diagnoses, and look at how technical debt manifests itself before deciding what to do about it.

Hire people you believe in—believe in the people you hire

I was reading an amazing Twitter thread about Bill Grundfest, founder of The Comedy Cellar and the guy who discovered some of the most famous comedians.

In the thread, which includes stories of Jon Stewart, Bill Maher, and Ray Romano, the pattern is essentially:

  1. Bill is able to detect talent, even early on in people’s careers when they haven’t had success yet.
  2. He’s able to zoom in on what’s holding them back and give them one key piece of advice.
  3. He believes in them.

The thread focuses on the first two pieces: detecting talent and giving advice. The third piece is a little hidden, but in my mind, it’s probably the most impactful.

I’ve seen over and over in my career how having someone believe in you can be life-changing. I’ve sometimes been the recipient of that, sometimes a spectator, and most recently, I’ve tried to be a provider of that.

People can tell when you don’t believe in them, and they can tell when you do. It has an effect on their behavior, and can be self-fulfilling. This is sometimes known as the Pygmalion effect. Having someone believe in you is tremendously powerful. If someone believes in you, they will give you more support and more opportunity. They will boost your confidence—and I’ve seen lack of confidence hold back way too many brilliant people. And they will help create the type of psychological safety necessary for you to do your best work.

So here’s my rule as a manager. I only hire people I believe in, and I do my best to let them know I believe in them.

Just because someone has been successful, or has a lot of experience, doesn’t necessarily mean that I believe in them. Believing in someone means that I believe their best is yet to come.

On the other hand, believing in someone doesn’t always mean believing that they will be successful right away, or that they will be successful in the exact way or role I’ve intended for them. Believing in someone isn’t relaxing my standards or expectations of them either. That’s the opposite of believing in someone. In order to believe in someone, you need to maintain high standards for them, and have faith that they will meet those standards.

This applies beyond just managers hiring their teams. If you have the luxury of choosing where you work, you are essentially “hiring” a person as your manager, a team as your colleagues, or a company as your employer. So if you can, always choose to work with people you believe in, and people who believe in you. Work with people for whom you believe their best is yet to come. And if you lose faith in them, or you feel like they lose faith in you, do them and yourself the favor of moving on.

Code is a liability, not an asset

I loved this quote from the book Software Engineering at Google. Found it a helpful reminder.

Earlier we made the assertion that “code is a liability, not an asset.” If that is true, why have we spent most of this book discussing the most efficient way to build software systems that can live for decades? Why put all that effort into creating more code when it’s simply going to end up on the liability side of the balance sheet? Code itself doesn’t bring value: it is the functionality that it provides that brings value. That functionality is an asset if it meets a user need: the code that implements this functionality is simply a means to that end. If we could get the same functionality from a single line of maintainable, understandable code as 10,000 lines of convoluted spaghetti code, we would prefer the former. Code itself carries a cost — the simpler the code is, while maintaining the same amount of functionality, the better.

How I Started Writing More

Of all the advice I’ve gotten in my career, the one I wish I’d gotten (and heeded) earlier was to just write more.

Up until two years ago, I had barely written much at all. In the past two years, I went from writing short answers on Quora, to writing several articles featured on the front page of HackerNews (even making the top spot), to writing a book on recruiting. I don’t consider myself a talented writer (or even a writer at all), but I feel like I’ve come a long way, and I wanted to share why I did this and how I accomplished it.

Writing has vastly improved my communication skills. It has forced me to improve the clarity of my thinking. It has helped me connect with other people. And hopefully, it has helped me share ideas with people who found them useful (or at least, thought-provoking).

How to write more

If you’re anything like me, actually sitting down and writing something is tremendously difficult. There are a whole host of excuses you can come up with to avoid writing; everything from “I have nothing interesting to say” to “I have no time”.

Here’s how I overcame those barriers.

Start small

The easiest way to overcome a procrastination-prone task is to break it up into the tiniest, easiest pieces, and just get started.

Specifically, for writing, I learned a bunch of “start small” tricks from the Quora Writer Relations team when I was working at Quora. I had been trying to push some of my colleagues to write more so they could experience different angles of the product and, in turn, better understand how it worked and how we could improve it. It was also a chance to get my colleagues to show off some of the cool work they were doing. But I found it really difficult to convince them to write about their work.

My requests were usually something like: “Hey Y, you should write about that cool feature you just built” and were ineffective. But the Writer Relations team had a little trick. Their conversations would go something like this:

WR: Hey, Y, you should write more on Quora.
Y: Uhh, maybe I will at some point. I don’t have anything I’m ready to write about now.
WR: You were just talking about [some topic like powerlifting/sushi-making/Pokemon Go/growing up in ___/going to school at ___]. That was interesting. Here, I’ll ask a question about that on Quora right now, and send you a link. Just write an answer there.
Y: Ummm…
WR: Just write what you were saying earlier, it was cool. Don’t overthink it.

And lo and behold, Y would write an answer about that topic. But something amazing would happen. Within a few weeks, Y would be writing more, and they would write more confidently. Because they didn’t overthink it.

Often, people hesitate to write because they think they must write really well, with masterpieces just flowing out of them. But it’s better to start small, and work your way up.

Write for yourself first

In addition to starting small, another tip is to write for yourself. There are a lot of benefits to writing. For one, it improves your communication skills. But more importantly, it forces you to think about things more deeply, to make your thinking more structured and concrete. So writing is helpful even if no one is going to read what you write.

Some of my favorite pieces were written on this blog before I even put my name on it or shared it anywhere. It was just a domain called “somehowmanage.com”. No one knew about it, no one was reading it, and even if they were, they had no way of connecting it to me. So when I wrote here, I didn’t worry about whether someone would judge my ideas. I just wrote.

I have since added my name to this blog because I’ve found it incredibly fulfilling to connect with people who stumble across it. But when I write, I still try to write primarily for myself, and maybe for a close circle of people I might send each article to because it’s relevant to something we’ve talked about. If other people read it and find it useful, that’s just icing on the cake. So write like nobody is watching.

Find a steady source of prompts

A good writing prompt can make a world of difference. Prompts give you a starting point, a thread you can pull at to unravel the spool. Good prompts don’t just give you a starting point; they’re also an indication that at least one other person (or group of people) in the world cares about the topic.

I have a few steady sources of prompts for myself:

  • One-on-ones with my team. Sometimes, a colleague on my team will ask me a question like “how do I improve my product intuition?” or “can you tell me a bit more about window functions?”. Often, I don’t have a fully thought-out and coherent answer right then and there, so I’ll say: “Great question. I’m going to sit down for an hour this weekend and write an answer to that question as an internal company doc, or a blog post, or a Quora answer, then I’ll share it with you and we can discuss and iterate on it together.” My best prompts have come from my team asking me questions that I assumed I knew the answer to, but not well enough to write about yet.
  • Conversations with smart friends. There are some people who will spit out gems while you’re talking to them and asking them questions, or will ask you really insightful questions when you’re just engaged in random chatter. For instance, I was talking to my friend Josh Levy a few weeks ago about how I prefer pull-based urgency to push-based urgency, and he asked me what I meant. Turns out, what I meant was really fuzzy and ambiguous, but we explored the idea together and it became a pretty long discussion/debate that I then turned into a blog post and he turned into a Twitter thread.
  • Reading, especially outside my domain. I learn the most, and find the most valuable prompts, when I’m reading something I’m generally ignorant about (like history, or psychology, or how the military makes decisions).
  • Quora questions or Ask HackerNews questions.

Keep a list of prompts and drafts

I keep a Google Doc with prompts I want to write about at some point, and will often sit down and write out a few paragraphs for each prompt. These paragraphs might turn into drafts that then turn into blog posts, or they might never see the light of day. But, keeping them in an accessible Google Doc has a few benefits.

For example, let’s say I’ve got a fuzzy idea I want to write about. In my mind, it’s probably at around 25% clarity. I jot down a few ideas and sentences, and now that I’m forced to think about it, it gets to 50%. But now I’m stuck at 50% clarity. A few weeks later, I have a conversation with someone or read something somewhere, and now I can take that idea to 75 or even 100%. If I hadn’t written anything down, I’d still be stuck at 25%. But now I can build towards a point where it’s coherent enough to share with other people.

Structure your time

Writing won’t happen unless you make time for it. But different schedules work for different people. For instance, my friend Alex Allain wrote a book by forcing himself to write for ten minutes “every day, no excuses, ever”. Other people I know block off some time on a weekend morning. I write sporadically, but try to publish something (even if it’s really light/silly) once a week. The point is, since writing is a little uncomfortable, you need some forcing function to get yourself to do it.

I purposely kept this point last. A lot of advice around writing starts by asking you to find a forcing function and block off time. But I’ve found that that only works after you’ve reduced the friction of writing and convinced yourself of its benefits.

Find support

When working on my book, I partnered with an awesome editor, Rachel Jepsen. Rachel not only worked closely with me to edit what I wrote, she also gave me feedback on my overall writing style and voice. Equally important, she provided much-needed moral support. When I was stuck on a thought or wording, or when my confidence flagged, she was always there to encourage me. That helped overcome a lot of writing friction. And in addition to the help I got from Rachel, my publisher Holloway provided a lot of other support.

Unfortunately, apart from book-writing, I don’t have a dedicated editor and publisher supporting me when I write. But I’ve tried to recreate that support whenever I’m feeling stuck. I’ll reach out to friends or colleagues and ask them to brainstorm or proof-read something I’m writing. Sometimes, just finding someone to bond with over how difficult writing is can be encouraging enough to get me over a hump.


I hope this helps you write more. I’ve met and been inspired by so many incredible people with amazing thoughts and ideas. I always urge them to write more. Not only would it probably benefit them, but it would amplify their ability to benefit others.

I wouldn’t feel comfortable publishing this piece without thanking the people who helped me build my writing confidence and find my voice. In particular, the Writer Relations and Comms teams at Quora: Jonathan Brill, Alecia Li, and Jessica Shambora. And the team I worked with at Holloway: Josh Levy, Courtney Nash, and most of all Rachel Jepsen, who spent hours upon hours helping me become a better writer.

Push vs. Pull-based Urgency

I was talking to my friend Josh a couple weeks ago about the speed at which teams move and how managers can best create a sense of urgency. We pretty quickly agreed that there are two types of urgency for a team: push-based and pull-based.

I felt like this was an important delineation, and wanted to summarize our conversation a little by giving some examples.

  • Team:
    • Push-based: You hire people who will do what is asked of them, and nothing more.
    • Pull-based: You hire people who are conscientious and self-motivated, and if they aren’t being pushed, they will push themselves and everyone around them.
  • Goals:
    • Push-based: The team has “soul-less” goals like “get X done by date Y” or “increase metric M by Z%”.
    • Pull-based: The team clearly understands the value of what they are building and are excited about making it real.
  • Road-blocks:
    • Push-based: Team members constantly lose momentum because they run into obstacles that are beyond their control.
    • Pull-based: The “path is paved”, so-to-speak. Team members might face obstacles, but they are either empowered to clear those obstacles or have access to executives that can clear the obstacles for them.
  • Deadlines:
    • Push-based: Arbitrary deadlines like “our exec wants this done by Friday”.
    • Pull-based: Self-imposed deadlines like “this should take 2 weeks, and we will hold ourselves accountable to that”.

We actually discussed deadlines a bit more, and as Josh always does, he broke them down into a few dimensions:

  • Deadlines can be artificial or real. An artificial deadline is one in which there will be no repercussions if the deadline is missed, and a real one has repercussions.
  • Deadlines can be superimposed or self-imposed. A team can decide a deadline for itself, or someone (typically more senior) can decide a deadline for them.
  • Deadlines can be evident or arbitrary. An evident deadline might be “if we don’t build our product by November, we’ll miss out on the holiday season orders.” An arbitrary deadline has no such external rationale; it’s simply a date someone picked.

So a deadline that is real, superimposed, and arbitrary could be something like “If you don’t accomplish this by end-of-month, you will be fired.”

Someone once told me that managers push and leaders pull. Plenty of people, companies, and teams have gotten results with very “push-based” urgencies, but if you ask someone which type of urgency they prefer on their team, they’ll likely say “pull-based”. Which is an interesting point to ponder.

Is Revenue Model More Important than Culture?

HN Discussion Here: https://news.ycombinator.com/item?id=24543510

I always loved getting “what is the limit as x approaches infinity” problems in high school and college. You’re given a function (of the classic y = f(x) form) and asked to determine what the value of y will be as x grows to infinity.

One thing you learn pretty quickly about these types of problems is that often it doesn’t matter where the function “starts” (or where it is at small values of x). It could start at zero, or at negative infinity, but its limit might be infinity, and vice versa (it could start large but have a limit of zero or negative infinity).

In fact, many functions have one dominant term: the term that determines the limit. There might be countless other factors or parts of the equation that matter initially, but eventually the dominant term wins. This is sometimes known as the dominant term rule. We’ll get back to this in a second.
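To make the dominant-term idea concrete, here’s a quick sketch in Python (the functions are my own made-up examples, not from any particular textbook). Two functions that behave very differently at small x converge to the same behavior once the x² term takes over:

```python
# Two functions that start out very differently but share the dominant term x**2.
def f(x):
    return x**2 - 1000 * x   # deeply negative for small positive x


def g(x):
    return x**2 + 50         # positive from the start


# As x grows, the ratio f(x)/g(x) approaches 1: the x**2 term dominates,
# and where each function "started" stops mattering.
for x in [10, 10_000, 1_000_000]:
    print(x, f(x) / g(x))
```

At x = 10, f is still negative while g is positive; by x = 1,000,000 the ratio is within a fraction of a percent of 1. Where a function “starts” tells you almost nothing about its limit.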

Ads vs. Search

Google had a little press kerfuffle a few months ago. You can read a summary in the New York Times here, but the short of it is that the company launched a design change that made search results and ads look very similar. Presumably, this increased revenue for Google, since many people ignore ads when they can easily identify them, the same way you’d avoid stepping in dog crap if you could spot it in the mud (and yes, given the state of online ads and content, I picked this analogy deliberately). But there was a pretty strong backlash against this as a “dark pattern” designed to trick users. After the negative press, Google walked back the change.

If you’ve been following the news around big tech companies these past couple of years, this type of behavior is not surprising at all. These companies have grown really large, are arguably monopolistic, and hyper-focused on growth and revenue. Over and over, they have made decisions that have resulted in backlash from the press and from their users.

On the other hand, if I ignore the past ten years, and jump back to when I worked at Google as an entry-level Software Engineer, it is a little surprising to me. I worked at Google from 2006-2009. At the time, it was already a rapidly-growing public company (I think I joined when there were around 8,000 employees, and left when there were 20,000). I initially worked on the team responsible for AdWords, so I had some exposure to the culture and decisions that were made at the time (of course it wasn’t deep exposure, since I was an entry-level Software Engineer on the lowest rung of the ladder… but it was exposure nonetheless).

Note: I’m going to pick on Google a little bit here, but I do love that company. I think there’s a lot it can improve on, but it’s still one of my favorite and least “evil” large tech companies. I chose them simply because I’m more familiar with them.

At the time, Google employees might have argued against making a change because it was “evil”. The “don’t be evil” motto was still around, and as engineers who were building parts of the product and making decisions, we were pretty ideological about it. One of the company’s values was also to put users first, employees second, and shareholders third. By any of these lenses, the type of design change that Google got flak for recently would have been highly unlikely at the time.

Revenue is the Dominant Term

Let’s take a dominant term view of this problem. When a company is first built, several variables dictate its decisions:

  • The implicit values/culture of the early team. As Ben Horowitz would say, “what you do is who you are.” 
  • The explicit values/culture of the early team. Are we user-centric? Data-driven? …
  • The revenue model.

I think that over time, the revenue model is the dominant term. The limit of a product towards infinity, so to speak, is based on its revenue model. If your revenue model is ads, it doesn’t matter if your stated mission is “to organize the world’s knowledge and make it universally accessible and useful”, “to give people the power to build community and bring the world closer together”, or anything else. If your revenue model is ads, you are an ads company.

I’m not diminishing the role of culture and values. I think those are critical. Part of me would love to believe the hundreds of books written on how culture determines everything. But I don’t. At least not for companies that can hire some of the smartest people in the world, gather massive amounts of data, and build technology more sophisticated than ever, all while trying to “maximize shareholder value”.

I’ve actually agonized over whether culture or revenue is the dominant term. In fact, I agonized so much that I’ve had this article in my head for years, and in a Google Doc for months, but I couldn’t get myself to write and publish it. Because part of me believes culture always wins. Actually, all of me wants to believe culture always wins. But I’ve had my idealism crushed enough times by hard realities.

Yes, having and espousing a positive culture and set of values are important. And they may shape how and how quickly the revenue model dominates (for example, companies like Enron or pre-IPO Uber show how bad things can get if you have a terrible culture). But regardless of your mission statement, your culture, your values, and so on, if you choose the wrong revenue model, it will dominate them in a shareholder-value-driven, capitalistic society. Culture can only dominate if it’s negative. A positive culture is necessary, but it’s not sufficient.

In other words, over the long term, a company (and its product) will morph to take the shape of its revenue streams.

Charlie Munger Knew It

Charlie Munger, Warren Buffett’s business partner, gave a pretty famous speech about the power of incentives.

“Well I think I’ve been in the top 5% of my age cohort all my life in understanding the power of incentives, and all my life I’ve underestimated it. And never a year passes, but I get some surprise that pushes my limit a little farther.” —Charlie Munger

Charlie gives several examples. For instance, FedEx needed to move and sort its packages more quickly, so instead of paying employees per hour, it paid them per shift; productivity increased dramatically (employees no longer had an incentive to stretch the same work across longer hours). Charlie’s model of human behavior is pretty simple: we follow incentives. He makes people sound almost coin-operated.

Now, this isn’t entirely true—there are plenty of examples and research showing that our behavior is more complicated than simple incentives would predict. But Charlie is arguably one of the best investors in the world, and he’s onto something. Even though there might be other variables that influence our behavior, you can still simplify things down to incentives. Incentives are his dominant term.

That incentives are dominant is actually pretty obvious to a lot of people. Somehow in the tech industry, we seem to have just clouded our own judgement through some sense of moral superiority. We care about the impact we’re having on the world. We have noble missions that we rally around and try to hire people who are excited by them. So far, so good. But then we shoot ourselves in the foot by setting up business models with misaligned incentives.

A System View

If you take the view that a company is not a product, but rather a complex system that creates a product, you can take a systems view and arrive at the same conclusion about the importance of business models. In an essay quite famous among “systems thinkers”, Donella Meadows outlines the “leverage points” of a system—places you can intervene in a system to change its behavior. She ranks twelve of them, from weak interventions like tweaking parameters and numbers up to the most powerful: the goals of the system, the paradigm out of which the system arises, and the power to transcend paradigms.

Now, organizations can have cultures that “transcend paradigms”, but in the modern corporate world, these are rare. And, without going down a rabbit hole here, organizations based on cultures that transcend paradigms can create both massive good OR massive damage. But for our purposes, it’s safe to assume that most companies are not transcending paradigms.

The next two leverage points down are the paradigm (the mindset out of which a system arises) and the system’s goals. Most modern companies arise and operate in a mostly capitalistic paradigm, and have goals of maximizing shareholder value via growth and revenue. And over time, the way that revenue can be grown will dominate almost anything else, even the explicitly stated goals (aka the “mission”), because the paradigm creates implicit goals that outweigh the explicit ones.

Look for Aligned Business Models

So what does this mean in practice? Well, if you only care about making money, it doesn’t mean much. But if you do care about more than money, if you care about the impact your work has and you want to be proud of what you do, it’s worth thinking through this a little more deeply.

Whether you’re starting a company or joining one, look for a business model without perverse incentives: one that sets things up so that the better the product is, the better off both the company and its users are.

Sometimes, counterintuitively, a business model may seem aligned at first glance, but end up being quite harmful. The classic example that we’re all now aware of is free products. Free seems great at first glance. But companies have to make money somehow. So they sell ads, or data, or some mix of the two that their users don’t quite understand. And so now, success for the company means more time spent on the product (which may or may not be a good thing for users), less privacy (definitely not a good thing for users), and ultimately more ads.*

So often, paid is better than free*. At Monarch Money, my current startup, we’ve chosen a paid model, in the hope that we’ll be better aligned around creating value for our users (whom we can now call customers… notice how there’s a word for “customer service”, but no “user service”?). There will still be plenty of forks in the road where we can decide whether to help our customers or take advantage of them, and I hope our values will help us navigate those forks, but at least the revenue model is in our favor.

Another layer to consider is whether your product and revenue model help people with just short-term goals, or with a mix of short- and long-term goals. Great, helpful products help their users with both. Good products might help with one or the other. The products with the most potential for damage provide some short-term benefit at the expense of longer-term goals.

So when you consider starting or joining a company, look at the business model, and do the “limit math”. Think about what things might look like if you become massively successful, because you might be.

*This is an opinion piece. I had to draw a lot of simplifications to keep this article short. A lot of statements are definitely not universally true, but are true enough that they’re worth using as examples.

Disrespectful Design—Users aren’t stupid or lazy

It’s a common narrative in tech to design products with the assumption that users are stupid and lazy. I think that is both disrespectful and wrong.

The idea is rooted in a lot of research around product usability, but it has been bastardized. Think of it as a perversion of the Don’t Make Me Think thesis.

Don’t Make Me Think, the seminal web usability book by Steve Krug, tells us that products should be as simple as possible to use. Products shouldn’t merely be self-explanatory (i.e., understandable given a set of instructions); they should be self-evident (i.e., so obvious that they can be used without reading instructions at all). A good door has a push/pull sign to make it self-explanatory, but it still requires you to read and think. An even better door wouldn’t need that label at all—you know what to do instinctively.

But somehow, we’ve perverted that idea. Users are lazy, even stupid, we say. They just want to flick their fingers down an infinite feed, letting their eyes wander from item to item.

But in Don’t Make Me Think, Krug never refers to users in a derogatory way. He tells us how good products should work, and why basic psychology supports that. People want to reduce unneeded cognitive friction as much as possible. People skim quickly and “muddle through” products. And, most of all, people won’t undertake effort unless they believe it’s worth the cost. These are all findings backed by usability research and psychology.

In other words, he tells us what good products should look like, and how people use them. But he doesn’t pass judgment on users. That’s up to us.

And so naturally, we apply our view of the world, our values. If you view your users with contempt, then the reason behind why people don’t like complicated products is because they are stupid and lazy. If, on the other hand, you respect your users, you might view things differently.

Firstly, our brains have been wired, through millions of years of evolution, to conserve effort and time. That’s actually not being lazy, it’s being smart and protective of one of our most valuable assets. Naturally, we don’t undertake an activity unless we believe it’s worth the cost (though there are ways to trick us, more on that later). And if it takes effort to even figure out how much effort an activity will require, we’ll avoid that activity altogether. That’s the functional, practical piece of our brain at work.

Secondly, we are a complex bundle of emotions. Even if we’re smart, we don’t like feeling stupid. And complex, difficult things make us feel stupid. They strike at our very identity and self-worth. So we try to avoid them like we avoid that hard topic we were never good at in school. That part is the emotional piece of our brain at work.

So what explains the rise of products like Facebook, which have gotten a large part of humanity mindlessly scrolling through feeds of what can most easily be described as garbage content? Well, we humans aren’t perfect. If you’ve got billions of dollars, some of the brightest minds, and a lot of data at your disposal, you can get a lot of people to do what you want. If you treat users as stupid and lazy, you can turn them into stupid and lazy people in the context of your product… but that’s a subject for another post.

So here’s how I think about people and product design.

First, products should definitely be as simple as possible. Because I respect users’ time, not because I look down on their intelligence.

Second, have a theory of how people behave. I’m a big fan of Self-determination Theory, which states that people value autonomy, relatedness, and competence. And I love building products that help people improve along all three of those dimensions.

Third, have a set of principles for your product. For instance, of the three axes of self-determination, I particularly care about autonomy (control). And I’ve found that good products, ones that respect their users, give them more control. Bad products take away control. Simplicity can serve either purpose. It can give people control by abstracting away things they don’t care about and helping them focus. Or it can take away control by only letting users do the things the product’s designers want them to do. So that’s one of my principles: give people control. Help them do things they want to do, not things you want them to do.

Let’s respect our users. Technology can bring out the best or worst in us, both individually and collectively. Let’s focus on the best.

EDIT: The above article is what I wrote, in its half-formed state on a Sunday morning. It looks like it’s blowing up on HackerNews, so I wanted to just add a few points.

  • I know I can come across as idealistic. I’ve even gotten that as feedback on a formal performance review (but also, I’ve gotten that I’m cynical, so *shrug*). I’m not saying people can’t be lazy, entitled, or stupid. We can. We have that capacity. But we have the capacity for so much more than that. And we should focus our tools, our technology, on our best capacities.
  • If Self-determination Theory resonates with you, I’d urge you to think about how it applies to building teams or even parenting. Your employees and colleagues, or your children and family members, have all the human capacities as well (though obviously, for children, they are still under development). Since I’m much more experienced at managing teams (a dozen years) than at being a parent (two years), I’ll just say that companies that view employees as lazy and incompetent are a scourge. If you can afford to avoid working at companies like that, try your best. And if you’re tasked with building companies or teams, you get to choose. You still need rules, hierarchies, and processes, but if you give people autonomy and relatedness/purpose, and trust their competence, I hope you’ll be pleasantly surprised. If you treat employees as stupid and lazy, they will be.
  • On simplicity vs. control/flexibility: I’m a big fan of the Alan Kay quote that “simple things should be simple, complex things should be possible.” I think great products find a way to achieve both objectives. You keep things simple, but don’t throw out the baby with the bathwater. Like the word processor: 99% of the time, you just want to type some text, so you get a cursor and WYSIWYG typing. But sometimes, you want to style, you want to indent, you want to program macros. We apply this principle often at Monarch Money (the personal finance platform I’m working on) and so far have found it quite successful.

About me: I’m a software builder / entrepreneur. I write about software, software engineering management, and product-building. I currently manage the engineering team at Monarch Money, a personal finance platform. You can follow me here on this blog, or on Medium. I also helped write a book on hiring/recruiting in the software world with a group of really awesome people.

Data is Not a Substitute For Good Judgment

The tech industry prides itself on being “data-driven”. We’re so data-driven, in fact, that there are hundreds of startups building analytics tools (Segment alone lists over 100) and A/B testing tools (~30). We both laugh at but also secretly admire stories like Google A/B testing 40 shades of blue for its links. A typical consumer-product tech company might be running anywhere from dozens to thousands of A/B tests concurrently, and analyzing thousands of metrics to decide what to launch.

On the surface, it makes sense. “Data doesn’t lie”, we are told. Data, we are promised, will help us overcome our own numerous cognitive biases. It will cut through team and company politics. Depending on who you ask, data is the new oil, data is the new gold, or data is the new religion. Every product decision you make, every OKR you set, must be grounded in data. “You can’t argue with the data!”

I’ve worked at multiple consumer internet companies, and I’ve seen it firsthand. I joined the cult and drank the Kool-Aid. And I love data. I’m an engineer at heart. Data works. But like any religion, data can be taken to the extreme, with dangerous consequences.

So I’m not saying we should throw the baby-data-Messiah out with the bathwater. All I’m saying is that data is a tool, and you should use it as such.

Imagine you’re a product manager at a consumer internet company. Your task is to build a landing page that gets users to sign up for your product. So you put a lot of valuable information on that page. The conversion rate is low. You run an A/B test with a bunch of variations, and you realize that withholding critical information boosts the sign-up rate. You run more A/B tests. The relationship holds. Less valuable information, more signups. Before you know it, you’re a full-fledged landing page click-bait artist. Your page is shit but you nailed the conversion rate!

“Wait a minute,” you’re saying, “this is a problem that can be solved with more data.” And yes, you can start measuring downstream metrics like retention, and maybe you learn that tricking your customers into signing up by withholding information results in lower retention. But now you’ve just shifted the problem downstream. What will likely happen is that you (or another product manager) will be tasked with increasing that downstream retention, and again the data will guide you towards more dark patterns, because your entire funnel is now grounded in dark patterns. And now any time you actually try to deliver real value to users, your metrics drop.
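To see why optimizing the top of the funnel in isolation can backfire, here’s a back-of-the-envelope sketch in Python. All the numbers are made up purely for illustration:

```python
# 10,000 visitors see each landing-page variant.
visitors = 10_000

# Variant A: honest page. Fewer signups, but users know what they signed up for.
a_signup_rate, a_retention_rate = 0.04, 0.60

# Variant B: click-bait page that withholds information. Signups double,
# but disappointed users churn.
b_signup_rate, b_retention_rate = 0.08, 0.25

a_retained = visitors * a_signup_rate * a_retention_rate   # ~240 retained users
b_retained = visitors * b_signup_rate * b_retention_rate   # ~200 retained users

# B "wins" the signup A/B test (800 signups vs. 400) yet loses end to end.
print(a_retained, b_retained)
```

The signup metric alone declares B the winner; only the end-to-end view shows it destroys value. And once the whole funnel has been optimized this way, any honest change looks like a regression.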

If this example sounds cartoonish and hard to believe, I assure you I’ve seen it (or something similar) happen multiple times at very respectable companies. We need to understand that data is not a substitute for anything. It’s not a substitute for understanding your customers and their problems. Data is not a substitute for good judgment. Data can actually become a crutch that gets in the way of problem-solving. More data can lead to data hoarding and decisions to the detriment of your customers, your product, and your company.

Data also leads to large, monopolistic consumer internet companies that have lost sight of the problem they’re trying to solve and instead just want to boost their metrics. It also leads to disenchanted employees. You go out and hire the smartest, most passionate people you can find, and turn them into A/B testing monkeys. Initially, they love it—they make changes, they see numbers go up. They get promoted, because you reward them based on “impact”, and the data shows that they have had impact. But they turn off the part of their brain that cares or thinks critically. Data is not a substitute for purpose. Like any shallow gamification, the effect eventually wears off.

Use data as a tool. It is powerful. Don’t use it as a religion. Work with people and companies who understand that. Work with people who are truly focused on solving a problem. Use data to validate the problem and the solutions, but don’t let it lead you blindly.

Write a Design Doc—even if no one else will read it

I often write design documents even if no one will read them.

There are a lot of resources out there on how to write good design documents. There are also many different ways to define what constitutes a design doc—what it includes, how long it is, how formal it is, etc.

For my purposes, a design doc is any document that you write before you begin the actual implementation. It can be long or short, formal or informal, etc. The point is it’s something you do independently of the implementation.

Most of the known benefits of writing design docs center around organizational alignment. Design docs can help you plan, help you get input from others on your team or in your company, and serve as a record for the future. At larger companies, they’re also a great educational channel: while experienced engineers debate the pros and cons of different approaches, many others can watch from the stands.

I’m a big fan of design documents on large teams and at large companies, but I still find them tremendously valuable even if no one else reads them.

A good design doc includes, at some level of detail:

  • What you’re planning to do.
  • Why you’re doing it.
  • How you’re going to do it (including discussions of alternative implementations).

Being forced to write those things down (even if it’s in a few sentences or paragraphs plus a diagram or two) sets a minimum bar that can help solve a lot of software development problems.
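As a rough sketch, here’s what that minimum bar can look like for a doc only you will read (this skeleton is my own distillation of the three points above, not a standard template):

```markdown
# <Project name> design doc

## What
One or two sentences on what I’m planning to build.

## Why
The problem this solves, and why it’s worth solving now.

## How
The planned approach, at whatever level of detail is useful,
plus a diagram or two if they help.

### Alternatives considered
- Alternative A: why I’m not doing it
- Alternative B: why I’m not doing it
```

Even filling in just a sentence per section is usually enough to trigger the three benefits below.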

  1. Thinking strategically instead of tactically. Tactical thinking focuses on the details and on immediate results. Strategic thinking focuses on higher-level concepts (what we’d call “architecture”) as well as on the future. Code lends itself to tactical thinking. Design docs force strategic thinking.
  2. Creative thinking. Complementary to strategic thinking, when writing out a plan, you’ll often realize that there are alternative solutions to the problem you’re trying to solve (or in some cases, that the problem you’re trying to solve isn’t worth solving). It’s hard to do this when you’re bogged down in implementation details.
  3. Avoiding complexity and obscurity. Being forced to articulate your plan in plain English can expose complexity. Things that are complex tend to be hard to describe, so if you think your implementation is simple but find that writing out the high-level plan is hard, that’s a good indicator you’re wrong about how simple it is.

It is, of course, entirely possible to begin with the implementation first. But in that case, you should treat the implementation as a discovery exercise or a prototype to collect some “on the ground” details. Once you have those details, write your design doc before beginning the real implementation.