How I Started Writing More

Of all the advice I’ve gotten in my career, the one I wish I’d gotten (and heeded) earlier was to just write more.

Up until two years ago, I had barely written much at all. In the past two years, I went from writing short answers on Quora, to writing several articles featured on the front page of HackerNews (even making the top spot), to writing a book on recruiting. I don’t consider myself a talented writer (or even a writer at all), but I feel like I’ve come a long way, and I wanted to share why and how I did it.

Writing has vastly improved my communication skills. It has forced me to improve the clarity of my thinking. It has helped me connect with other people. And hopefully, it has helped me share ideas with people who found them useful (or at least, thought-provoking).

How to write more

If you’re anything like me, actually sitting down and writing something is tremendously difficult. There are a whole host of excuses you can come up with to avoid writing, everything from “I have nothing interesting to say” to “I have no time”.

Here’s how I overcame those barriers.

Start small

The easiest way to overcome any procrastination-prone task is to break it up into the tiniest, easiest pieces, and just get started.

Specifically, for writing, I learned a bunch of “start small” tricks from the Quora Writer Relations team when I was working at Quora. I had been trying to push some of my colleagues to write more so they could experience different angles of the product and, in turn, better understand how it worked and how we could improve it. It was also a chance for my colleagues to show off some of the cool work they were doing. But I found it really difficult to convince them to write about their work.

My requests were usually something like: “Hey Y, you should write about that cool feature you just built” and were ineffective. But the Writer Relations team had a little trick. Their conversations would go something like this:

WR: Hey, Y, you should write more on Quora.
Y: Uhh, maybe I will at some point. I don’t have anything I’m ready to write about now.
WR: You were just talking about [some topic like powerlifting/sushi-making/Pokemon Go/growing up in ___/going to school at ___]. That was interesting. Here, I’ll ask a question about that on Quora right now, and send you a link. Just write an answer there.
Y: Ummm…
WR: Just write what you were saying earlier, it was cool. Don’t overthink it.

And lo and behold, Y would write an answer about that topic. Then something amazing would happen: within a few weeks, Y would be writing more, and writing more confidently. Because they didn’t overthink it.

Often, people hesitate to write because they think they must write really well, with masterpieces just flowing out of them. But it’s better to start small, and work your way up.

Write for yourself first

In addition to starting small, another tip is to write for yourself. There are a lot of benefits to writing. For one, it improves your communication skills. But more importantly, it forces you to think about things more deeply, to make your thinking more structured and concrete. So writing is helpful even if no one is going to read what you write.

Some of my favorite pieces were written on this blog before I even put my name on it or shared it anywhere. It was just a domain called “somehowmanage.com”. No one knew about it, no one was reading it, and even if they were, they had no way of connecting it to me. So when I wrote here, I didn’t worry about whether someone would judge my ideas. I just wrote.

I’ve since added my name to this blog because I’ve found it incredibly fulfilling to connect with people who stumble across it. But when I write, I still try to write primarily for myself, and maybe for a close circle of people I might send a given article to because it’s relevant to something we’ve talked about. If other people read it and find it useful, that’s just icing on the cake. So write like nobody is watching.

Find a steady source of prompts

A good writing prompt can make a world of difference. Prompts give you a starting point, a thread you can pull to unravel the spool. Good prompts don’t just give you a starting point; they tell you that at least one other person (or group of people) in the world cares about the topic.

I have a few steady sources of prompts for myself:

  • One-on-ones with my team. Sometimes, a colleague on my team will ask me a question like “how do I improve my product intuition?” or “can you tell me a bit more about window functions?”. Often, I don’t have a fully thought-out and coherent answer right then and there, so I’ll say: “Great question. I’m going to sit down for an hour this weekend and write an answer to that question as an internal company doc, or a blog post, or a Quora answer, then I’ll share it with you and we can discuss and iterate on it together.” My best prompts have come from my team asking me questions that I assumed I knew the answer to, but not well enough to write about yet.
  • Conversations with smart friends. There are some people who will spit out gems while you’re talking to them and asking them questions, or will ask you really insightful questions when you’re just engaged in random chatter. For instance, I was talking to my friend Josh Levy a few weeks ago about how I prefer pull-based urgency to push-based urgency, and he asked me what I meant. Turns out, what I meant was really fuzzy and ambiguous, but we explored the idea together and it became a pretty long discussion/debate that I then turned into a blog post and he turned into a Twitter thread.
  • Reading, especially outside my domain. I learn the most, and find the most valuable prompts, when I’m reading something I’m generally ignorant about (like history, or psychology, or how the military makes decisions).
  • Quora questions or Ask HackerNews questions.

Keep a list of prompts and drafts

I keep a Google Doc with prompts I want to write about at some point, and will often sit down and write out a few paragraphs for each prompt. These paragraphs might turn into drafts that then turn into blog posts, or they might never see the light of day. But, keeping them in an accessible Google Doc has a few benefits.

First, let’s say I’ve got a fuzzy idea I want to write about. In my mind, it’s probably at around 25% clarity. I jot down a few ideas and sentences, and now that I’m forced to think about it, it gets to 50%. But now I’m stuck at 50% clarity. A few weeks later, I have a conversation with someone or read something somewhere, and now, I can take that idea to 75 or even 100%. If I hadn’t written anything down, I’d still be stuck at 25%. But now I can build towards a point where it’s coherent enough to share with other people.

Structure your time

Writing won’t happen unless you make time for it. But different schedules work for different people. For instance, my friend Alex Allain wrote a book by forcing himself to write for ten minutes “every day, no excuses, ever”. Other people I know block off some time on a weekend morning. I write sporadically, but try to publish something (even if it’s really light/silly) once a week. The point is, since writing is a little uncomfortable, you need some forcing function to get yourself to do it.

I purposely kept this point last. A lot of advice around writing starts by asking you to find a forcing function and block off time. But I’ve found that that only works after you’ve reduced the friction of writing and convinced yourself of its benefits.

Find support

When working on my book, I partnered with an awesome editor, Rachel Jepsen. Rachel not only worked closely with me to edit what I wrote, she also gave me feedback on my overall writing style and voice. Just as important, she provided much-needed moral support. When I was stuck on a thought or wording, or when my confidence was flagging, she was always there to encourage me. That helped overcome a lot of writing friction. And in addition to the help I got from Rachel, my publisher Holloway provided a lot of other support.

Unfortunately, apart from book-writing, I don’t have a dedicated editor and publisher supporting me when I write. But I’ve tried to recreate that support whenever I’m feeling stuck. I’ll reach out to friends or colleagues and ask them to brainstorm or proofread something I’m writing. Sometimes, just finding someone to bond with over how difficult writing is can be encouraging enough to get me over a hump.

Conclusion

I hope this helps you write more. I’ve met and been inspired by so many incredible people with amazing thoughts and ideas. I always urge them to write more. Not only would it probably benefit them, but it would amplify their ability to benefit others.

I wouldn’t feel comfortable publishing this piece without thanking the people who helped me build my writing confidence and find my voice. In particular, the Writer Relations and Comms teams at Quora: Jonathan Brill, Alecia Li, and Jessica Shambora. And the team I worked with at Holloway: Josh Levy, Courtney Nash, and most of all Rachel Jepsen, who spent hours upon hours helping me become a better writer.

Push vs. Pull-based Urgency

I was talking to my friend Josh a couple weeks ago about the speed at which teams move and how managers can best create a sense of urgency. We pretty quickly agreed that there are two types of urgency for a team: push-based and pull-based.

I felt this was an important distinction, and wanted to summarize our conversation a little by giving some examples.

  • Team:
    • Push-based: You hire people who will do what is asked of them, and nothing more.
    • Pull-based: You hire people who are conscientious and self-motivated, and if they aren’t being pushed, they will push themselves and everyone around them.
  • Goals:
    • Push-based: The team has “soul-less” goals like “get X done by date Y” or “increase metric M by Z”.
    • Pull-based: The team clearly understands the value of what they are building and are excited about making it real.
  • Road-blocks:
    • Push-based: Team members constantly lose momentum because they run into obstacles that are beyond their control.
    • Pull-based: The “path is paved”, so to speak. Team members might face obstacles, but they are either empowered to clear those obstacles themselves or have access to executives who can clear them.
  • Deadlines:
    • Push-based: Arbitrary deadlines like “our exec wants this done by Friday”.
    • Pull-based: Self-imposed deadlines like “this should take 2 weeks, and we will hold ourselves accountable to that”.

We actually discussed deadlines a little more, and as Josh always does, he broke out a few dimensions:

  • Deadlines can be artificial or real. An artificial deadline is one in which there will be no repercussions if the deadline is missed, and a real one has repercussions.
  • Deadlines can be superimposed or self-imposed. A team can decide a deadline for itself, or someone (typically more senior) can decide a deadline for them.
  • Deadlines can be evident or arbitrary. An evident deadline might be “if we don’t build our product by November, we’ll miss out on the holiday season orders”; an arbitrary one has no reason the team can see.

So a deadline that is real, superimposed, and arbitrary, could be something like “If you don’t accomplish this by end-of-month, you will be fired.”

Someone once told me that managers push and leaders pull. Plenty of people, companies, and teams have gotten results with very “push-based” urgencies, but if you ask someone which type of urgency they prefer on their team, they’ll likely say “pull-based”. Which is an interesting point to ponder.

Is Revenue Model More Important than Culture?

HN Discussion Here: https://news.ycombinator.com/item?id=24543510

I always loved getting problems of the “what is the limit as x approaches infinity” type in high school and college. You’re given an equation (of the classic y = f(x) form), and asked to derive what value y approaches as x grows to infinity.

One thing you learn pretty quickly about these types of problems is that often it doesn’t matter where the function “starts” (or where it is at small values of x). It could start at zero, or at negative infinity, but its limit might be infinity, and vice versa (it could start large but have a limit of zero or negative infinity).

In fact, for many equations, there’s one dominant term: the term that determines the limit. There might be countless other factors or parts of the equation that matter initially, but eventually the dominant term wins. This is sometimes known as the dominant term rule. We’ll get back to this in a second.
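As a quick worked example (the numbers here are arbitrary, chosen only to show the rule):

\[
f(x) = 1{,}000{,}000 + 500x - 0.001x^2 \qquad\Longrightarrow\qquad \lim_{x \to \infty} f(x) = -\infty
\]

At small values of x, the huge constant and the positive linear term make f(x) look great; in the limit, the tiny negative quadratic term is all that matters.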

Ads vs. Search

Google had a little press kerfuffle a few months ago. You can read a summary in the New York Times here, but the short of it is that the company launched a design change that made search results and ads look very similar. Presumably, this increased revenue for Google, since many people ignore ads when they can easily identify them, the same way you’d avoid stepping in dog crap when you can spot it in the mud (and yes, given the state of online ads and content, I pick this analogy deliberately). But there was a pretty strong backlash against this as a “dark pattern” designed to trick users. After the negative press, Google walked back the change.

If you’ve been following the news around big tech companies these past couple of years, this type of behavior is not surprising at all. These companies have grown really large, are arguably monopolistic, and hyper-focused on growth and revenue. Over and over, they have made decisions that have resulted in backlash from the press and from their users.

On the other hand, if I ignore the past ten years, and jump back to when I worked at Google as an entry-level Software Engineer, it is a little surprising to me. I worked at Google from 2006-2009. At the time, it was already a rapidly-growing public company (I think I joined when there were around 8,000 employees, and left when there were 20,000). I initially worked on the team responsible for AdWords, so I had some exposure to the culture and decisions that were made at the time (of course it wasn’t deep exposure, since I was an entry-level Software Engineer on the lowest rung of the ladder… but it was exposure nonetheless).

Note: I’m going to pick on Google a little bit here, but I do love that company. I think there’s a lot it can improve on, but it’s still one of my favorite and least “evil” large tech companies. I chose them simply because I’m more familiar with them.

At the time, Google employees might have argued against making a change because it was “evil”. The “don’t be evil” motto was still around, and as engineers who were building parts of the product and making decisions, we were pretty ideological about it. One of the company’s values was also to put users first, employees second, and shareholders third. By any of these lenses, the type of design change that Google got flak for recently would have been highly unlikely at the time.

Revenue is the Dominant Term

Let’s take a dominant term view of this problem. When a company is first built, several variables dictate its decisions:

  • The implicit values/culture of the early team. As Ben Horowitz would say, “what you do is who you are.” 
  • The explicit values/culture of the early team. Are we user-centric? Data-driven? …
  • The revenue model.

I think that over time, the revenue model is the dominant term. The limit of a product as it approaches infinity, so to speak, is determined by its revenue model. If your revenue model is ads, it doesn’t matter if your stated mission is “to organize the world’s information and make it universally accessible and useful”, “to give people the power to build community and bring the world closer together”, or anything else. If your revenue model is ads, you are an ads company.

I’m not diminishing the role of culture and values. I think those are critical. Part of me would love to believe the hundreds of books written on how culture determines everything. But I don’t. At least not for companies that can hire some of the smartest people in the world, gather massive amounts of data, and build technology more sophisticated than ever, all while trying to “maximize shareholder value”.

I’ve actually agonized over whether culture or revenue is the dominant term. In fact, I agonized so much that I’ve had this article in my head for years, and in a Google Doc for months, but I couldn’t get myself to write and publish it. Because part of me believes culture always wins. Actually, all of me wants to believe culture always wins. But I’ve had my idealism crushed enough times by hard realities.

Yes, having and espousing a positive culture and set of values is important. And they may shape how, and how quickly, the revenue model dominates (for example, companies like Enron or pre-IPO Uber show how bad things can get with a terrible culture). But regardless of your mission statement, your culture, your values, and so on, if you choose the wrong revenue model, it will dominate them in a shareholder-value-driven, capitalistic society. Culture seems to dominate only when it’s negative: a toxic culture can sink a company before the revenue model even gets a chance. A positive culture is necessary, but it’s not sufficient.

In other words, over the long term, a company (and its product) will morph to take the shape of its revenue streams.

Charlie Munger Knew It

Charlie Munger, Warren Buffett’s business partner, has a pretty famous speech where he talks about the power of incentives.

“Well I think I’ve been in the top 5% of my age cohort all my life in understanding the power of incentives, and all my life I’ve underestimated it. And never a year passes, but I get some surprise that pushes my limit a little farther.” —Charlie Munger

Charlie gives several examples. For instance, FedEx needed to move and sort their packages more quickly, so instead of paying employees per hour, they paid them per shift: productivity increased dramatically (employees no longer had an incentive to stretch out the hours for the same amount of work). Charlie’s model of human behavior is pretty simple: we follow incentives. He makes people sound almost coin-operated.

Now, this isn’t entirely true—there are plenty of examples and research showing that our behavior is more complicated than simple incentives would predict. But Charlie is arguably one of the best investors in the world, and he’s onto something. Even though there might be other variables that influence our behavior, you can still simplify things down to incentives. Incentives are his dominant term.

That incentives are dominant is actually pretty obvious to a lot of people. Somehow, in the tech industry, we seem to have clouded our own judgment through some sense of moral superiority. We care about the impact we’re having on the world. We have noble missions that we rally around and try to hire people who are excited by them. So far, so good. But then we shoot ourselves in the foot by setting up business models with misaligned incentives.

A System View

If you take the view that a company is not a product, but rather a complex system that creates a product, you can take a systems view and arrive at the same conclusion about the importance of business models. In an essay quite famous among “systems thinkers”, Donella Meadows outlines the “leverage points” of a system—places you can intervene in a system to change its behavior. Here’s her ranked list, in increasing order of effectiveness:

  12. Constants, parameters, numbers (such as subsidies, taxes, standards).
  11. The sizes of buffers and other stabilizing stocks, relative to their flows.
  10. The structure of material stocks and flows.
  9. The lengths of delays, relative to the rate of system change.
  8. The strength of negative feedback loops, relative to the impacts they are trying to correct against.
  7. The gain around driving positive feedback loops.
  6. The structure of information flows (who does and does not have access to information).
  5. The rules of the system (such as incentives, punishments, constraints).
  4. The power to add, change, evolve, or self-organize system structure.
  3. The goals of the system.
  2. The mindset or paradigm out of which the system (its goals, structure, rules, delays, parameters) arises.
  1. The power to transcend paradigms.

Now, organizations can have cultures that “transcend paradigms”, but in the modern corporate world, these are rare. And, without going down a rabbit hole here, organizations based on cultures that transcend paradigms can create both massive good OR massive damage. But for our purposes, it’s safe to assume that most companies are not transcending paradigms.

The next two leverage points down are the paradigm (the mindset out of which a system arises) and the goals of the system. Most modern companies arise and operate in a mostly capitalistic paradigm and have the goal of maximizing shareholder value via growth and revenue. And over time, the way that revenue can be grown will dominate almost anything else, even the explicitly stated goals (aka the “mission”), because the paradigm creates implicit goals that outweigh the explicit ones.

Look for Aligned Business Models

So what does this mean in practice? Well, if you only care about making money, it doesn’t mean much. But if you do care about more than money, if you care about the impact your work has and you want to be proud of what you do, it’s worth thinking through this a little more deeply.

Whether you’re starting a company or joining one, look for a business model without perverse incentives: a business model that sets things up so that the better the product is, the better off both the company and its users are.

Sometimes, counterintuitively, a business model may seem aligned at first glance, but end up being quite harmful. The classic example that we’re all now aware of is free products. Free seems great at first glance. But companies have to make money somehow. So they sell ads, or data, or some mix of the two that their users don’t quite understand. And so now, success for the company means more time spent on the product (which may or may not be a good thing for users), less privacy (definitely not a good thing for users), and ultimately more ads.*

So often, paid is better than free*. At Monarch Money, my current startup, we’ve chosen to go with a paid model, with a hope that we’ll be more aligned in creating value for our users (who we can now call customers… notice how there’s a word for “customer service”, but no “user service”?). There will still be plenty of forks in the road where we can decide whether we help our customers, or take advantage of them, and I hope our values will help us navigate those forks, but at least the revenue model is in our favor.

Another layer to consider is whether your product and revenue model help people with just short-term goals, or with a mix of short- and long-term goals. Products that are great and helpful serve both. Good products might help with one or the other. The products with the most potential for damage provide some short-term benefit at the expense of longer-term goals.

So when you consider starting or joining a company, look at the business model, and do the “limit math”. Think about what things might look like if you become massively successful, because you might be.


*This is an opinion piece. I had to draw a lot of simplifications to keep this article short. A lot of statements are definitely not universally true, but are true enough that they’re worth using as examples.

Disrespectful Design—Users aren’t stupid or lazy

It’s a common narrative in tech to design products with the assumption that users are stupid and lazy. I think that is both disrespectful and wrong.

The idea is rooted in a lot of research around product usability, but it has been bastardized. Think of it as a perversion of the Don’t Make Me Think thesis.

Don’t Make Me Think, the seminal web usability book by Steve Krug, tells us that products should be as simple as possible to use. Products shouldn’t merely be self-explanatory (i.e. understandable given a set of instructions); they should be self-evident (i.e. so obvious that they can be used without reading instructions at all). A good door has a push/pull sign to make it self-explanatory, but it still requires you to read and think. An even better door wouldn’t need the label at all—you’d know what to do instinctively.

But somehow, we’ve perverted that idea. Users are lazy, even stupid, we say. They just want to flick their fingers down an infinite feed, letting their eyes wander from item to item.

But in Don’t Make Me Think, Krug never refers to users in a derogatory way. He tells us how good products should work, and why basic psychology supports that. People don’t like unneeded cognitive friction and want to reduce it as much as possible. People skim quickly and “muddle through” products. And, most of all, people won’t undertake effort unless they believe it’s worth the cost. These are all findings backed by usability research and psychology.

In other words, he tells us what good products should look like, and how people use them. But he doesn’t pass judgment on users. That’s up to us.

And so, naturally, we apply our view of the world, our values. If you view your users with contempt, then the reason people don’t like complicated products is that they are stupid and lazy. If, on the other hand, you respect your users, you might view things differently.

Firstly, our brains have been wired, through millions of years of evolution, to conserve effort and time. That’s actually not being lazy, it’s being smart and protective of one of our most valuable assets. Naturally, we don’t undertake an activity unless we believe it’s worth the cost (though there are ways to trick us, more on that later). And if it takes effort to even figure out how much effort an activity will require, we’ll avoid that activity altogether. That’s the functional, practical piece of our brain at work.

Secondly, we are a complex bundle of emotions. Even if we’re smart, we don’t like feeling stupid. And complex, difficult things make us feel stupid. They strike at our very identity and self-worth. So we try to avoid them, like we avoid that hard topic we were never good at in school. That’s the emotional piece of our brain at work.

So what explains the rise of products like Facebook, which have gotten a large part of humanity mindlessly scrolling through feeds of what can most easily be described as garbage content? Well, we humans aren’t perfect. If you’ve got billions of dollars, some of the brightest minds, and a lot of data at your disposal, you can get a lot of people to do what you want. If you treat users as stupid and lazy, you can turn them into stupid and lazy people in the context of your product… but that’s a subject for another post.

So here’s how I think about people and product design.

First, products should definitely be as simple as possible. Because I respect users’ time, not because I look down on their intelligence.

Second, have a theory of how people behave. I’m a big fan of Self-determination Theory, which states that people value autonomy, relatedness, and competence. And I love building products that help people improve along all three of those dimensions.

Third, have a set of principles for your product. For instance, of the three axes of self-determination, I particularly care about autonomy (control). And I’ve found that good products, ones that respect their users, give them more control. Bad products take away control. Simplicity can serve either purpose. It can give people control by abstracting away things they don’t care about and helping them focus. Or it can take away control by only letting users do things the product’s designers want them to do. So that’s one of my principles: give people control. Help them do things they want to do, not things you want them to do.

Let’s respect our users. Technology can bring out the best or worst in us, both individually and collectively. Let’s focus on the best.


EDIT: The above article is what I wrote, in its half-formed state on a Sunday morning. It looks like it’s blowing up on HackerNews, so I wanted to just add a few points.

  • I know I can come across as idealistic. I’ve even gotten that as feedback on a formal performance review (but also, I’ve gotten that I’m cynical, so *shrug*). I’m not saying people can’t be lazy, entitled, or stupid. We can. We have that capacity. But we have the capacity for so much more than that. And we should focus our tools, our technology, on our best capacities.
  • If Self-determination Theory resonates with you, I’d urge you to think about how it applies to building teams or even parenting. Your employees and colleagues, or your children and family members, have all of these human capacities as well (though obviously, children are still developing them). Since I’m much more experienced at managing teams (a dozen years) than being a parent (two years), I’ll just say that companies that view employees as lazy and incompetent are a scourge. If you can afford to avoid working at companies like that, try your best. And if you’re tasked with building companies or teams, you get to choose. You still need rules, hierarchies, and processes, but if you give people autonomy and relatedness/purpose, and trust their competence, I hope you’ll be pleasantly surprised. If you treat employees as stupid and lazy, they will be.
  • On simplicity vs. control/flexibility: I’m a big fan of the Alan Kay quote that “simple things should be simple, complex things should be possible.” I think great products find a way of achieving both objectives. You keep things simple, but don’t throw out the baby with the bath-water. Take the word processor: 99% of the time, you just want to type some text, so you get a cursor and WYSIWYG typing. But sometimes, you want to style, you want to indent, you want to program macros. We apply this principle often at Monarch Money (the personal finance platform I’m working on) and so far have found it to be quite successful.

About me: I’m a software builder / entrepreneur. I write about software, software engineering management, and product-building. I currently manage the engineering team at Monarch Money, a personal finance platform. You can follow me here on this blog, or on Medium. I also helped write a book on hiring/recruiting in the software world with a group of really awesome people.

Data is Not a Substitute For Good Judgment

The tech industry prides itself on being “data-driven”. We’re so data-driven, in fact, that there are hundreds of startups building analytics tools (Segment alone lists over 100) and A/B testing tools (~30). We both laugh at but also secretly admire stories like Google A/B testing 40 shades of blue for its links. A typical consumer-product tech company might be running anywhere from dozens to thousands of A/B tests concurrently, and analyzing thousands of metrics to decide what to launch.

On the surface, it makes sense. “Data doesn’t lie”, we are told. Data, we are promised, will help us overcome our own numerous cognitive biases. It will cut through team and company politics. Depending on who you ask, data is the new oil, data is the new gold, or data is the new religion. Every product decision you make, every OKR you set, must be grounded in data. “You can’t argue with the data!”

I’ve worked at multiple consumer internet companies, and I’ve seen it firsthand. I joined the cult and drank the Kool-Aid. And I love data. I’m an engineer at heart. Data works. But like any religion, data can be taken to the extreme, with dangerous consequences.

So I’m not saying we should throw the baby-data-Messiah out with the bathwater. All I’m saying is that data is a tool, and you should use it as such.

Imagine you’re a product manager at a consumer internet company. Your task is to build a landing page to get users to sign up for your product. So you put a lot of valuable information on that page. The conversion rate is low. You run an A/B test with a bunch of variations, and you realize that withholding critical information boosts the sign-up rate. You run more A/B tests. The relationship holds. Less valuable information, more signups. Before you know it, you’re a full-fledged landing page click-bait artist. Your page is shit, but you nailed the conversion rate!

“Wait a minute,” you’re saying, “this is a problem that can be solved with more data.” And yes, you can start measuring downstream metrics like retention, and maybe you learn that tricking your customers into signing up by withholding information results in lower retention. But now you’ve shifted the problem downstream: you (or another product manager) will be tasked with increasing downstream retention, and again, the data will guide you towards more dark patterns, because your entire funnel is now grounded in dark patterns. And now, any time you actually try to deliver real value to users, your metrics drop.
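To make the funnel math concrete, here’s a toy calculation (the numbers are made up, purely for illustration):

```python
# Hypothetical funnel numbers: the click-bait page "wins" the signup
# A/B test, but produces fewer retained users end to end.

visitors = 10_000

signup_rate_honest, retention_honest = 0.04, 0.60  # informative page
signup_rate_bait, retention_bait = 0.08, 0.25      # withholds information

retained_honest = visitors * signup_rate_honest * retention_honest
retained_bait = visitors * signup_rate_bait * retention_bait

print(retained_honest, retained_bait)  # 240.0 vs 200.0
```

If you only watch the signup metric, the bait page doubles your conversion rate while quietly losing you users.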

If this example sounds cartoonish and hard to believe, I assure you I’ve seen it (or something similar) happen multiple times at very respectable companies. We need to understand that data is not a substitute for anything. It’s not a substitute for understanding your customers and their problems. Data is not a substitute for good judgment. Data can actually become a crutch that gets in the way of problem-solving. More data can lead to data hoarding and decisions to the detriment of your customers, your product, and your company.

Data also leads to large, monopolistic consumer internet companies that have lost sight of the problem they’re trying to solve and instead just want to boost their metrics. It also leads to disenchanted employees. You go out and hire the smartest, most passionate people you can find, and turn them into A/B testing monkeys. Initially, they love it—they make changes, they see numbers go up. They get promoted, because you reward them based on “impact”, and the data shows that they have had impact. But they turn off the part of their brain that cares or thinks critically. Data is not a substitute for purpose. Like any shallow gamification, the effect eventually wears off.

Use data as a tool. It is powerful. Don’t use it as a religion. Work with people and companies who understand that. Work with people who are truly focused on solving a problem. Use data to validate the problem and the solutions, but don’t let it lead you blindly.

Write a Design Doc—even if no one else will read it

I often write design documents even if no one will read them.

There are a lot of resources out there on how to write good design documents. There are also many different ways to define what constitutes a design doc—what it includes, how long it is, how formal it is, etc.

For my purposes, a design doc is any document that you write before you begin the actual implementation. It can be long or short, formal or informal, etc. The point is it’s something you do independently of the implementation.

Most of the known benefits of writing design docs center around organizational alignment. Design docs can help you plan, help you get input from others on your team or in your company, and serve as a record for the future. At larger companies, they’re also a great educational channel: while experienced engineers debate the pros and cons of different approaches, many others can watch from the stands.

I’m a big fan of design documents on large teams and at large companies, but I still find them tremendously valuable even if no one else reads them.

A good design doc includes, at some level of detail:

  • What you’re planning to do.
  • Why you’re doing it.
  • How you’re going to do it (including discussions of alternative implementations).

Being forced to write those things down (even if it’s in a few sentences or paragraphs plus a diagram or two) sets a minimum bar that can help solve a lot of software development problems.

  1. Thinking strategically instead of tactically. Tactical thinking focuses on the details and on immediate results. Strategic thinking focuses on higher-level concepts (what we’d call “architecture”) as well as on the future. Code lends itself to tactical thinking. Design docs force strategic thinking.
  2. Creative thinking. Complementary to strategic thinking, when writing out a plan, you’ll often realize that there are alternative solutions to the problem you’re trying to solve (or in some cases, that the problem you’re trying to solve isn’t worth solving). It’s hard to do this when you’re bogged down in implementation details.
  3. Avoiding complexity and obscurity. Being forced to articulate your plan in plain English can often expose complexity. Things that are complex tend to be hard to describe, so if you think your implementation is simple but find that writing out your high-level plan is hard, it’s a good indicator that you’re wrong about how simple it is.
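For what it’s worth, here’s the minimal skeleton I reach for (my own sketch, not a standard; adapt freely):

```
Title:  one-line summary of the change
Status: draft | in review | done

What:   The feature or system being built, in a few sentences.
Why:    The problem it solves, for whom, and why now.
How:    The proposed approach, plus a diagram or two.
        Alternatives considered, and why they were rejected.
Risks:  Open questions, unknowns, and things to validate.
```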

It is, of course, entirely possible to sometimes begin with the implementation, but in that case, you should treat the implementation as a discovery exercise or a prototype to collect some “on the ground” details. Once you have those details, write your design doc before beginning the real implementation.

Our hiring process is what we can get away with

Here’s why Google hires the way it does but you shouldn’t.

Every month or so on HackerNews, there’s a thread about how interviewing is broken which usually devolves into “Google (and the rest of the FAANGs) suck at interviewing”.

Last year, I spent a few months working on The Holloway Guide to Technical Hiring and Recruiting, over the course of which I got to spend a lot of time talking to people who have been really successful at designing hiring processes, conducting interviews and being interviewed. I learned far more about interviewing than I could have imagined, and it was a great chance to reflect on a lot of these interviewing debates.

One quote from my discussions with hiring managers stood out, and it was from a frustrated high-ranking VP at one of the FAANGs: “Our hiring process is what we can get away with”. He was making the point that many companies blindly copy the big companies’ hiring processes in general, and their interviewing processes specifically, assuming those processes are the reason behind their success. In reality, their hiring processes aren’t actually that good, and they’ve just been able to get away with them because they have a strong enough brand and can pay well enough (to quote him, “we still think we’re the only game in town”).

I think that’s partly true, and it definitely fits the “big companies suck at interviewing” narrative, but I also think that taking it at face value is a little simplistic. These big tech companies have spent years looking at their hiring data and feeding it back into their hiring processes (coining the term “people analytics” along the way). Yes, they could all probably be a little more successful if they dialed down the arrogant “you’d-be-blessed-to-work-here” attitude that’s ingrained in their hiring processes, but in reality, their interviewing processes work quite well for them.

Why? Well if you ask someone why big tech co’s do the types of algo/coding (aka “leetcode”) interviews they do, one answer you might get is that you need to have solid algorithm skills to succeed there. In fact, Dan Luu cites that as “the most common answer” for why algorithmic interviews are necessary. I actually don’t think that’s the most common answer (at least, not from the people I asked, who were hiring managers and recruiters). The most common answer is actually: we want to hire smart problem-solvers with strong analytical skills, and since we stopped asking brain-teasers because they’re irrelevant, algorithmic questions are the next best thing. In other words, algorithmic questions are just a better way of assessing analytical skills.

That’s bullshit. Anyone who has been on either side of an algo interview knows that you can totally prepare for them. Dozens of best-selling books, venture-backed startups, and cottage industry coaching practices have made money helping people improve their performance on these interviews. So it doesn’t actually assess your raw analytical skills. If you look at criticisms of algo interviews, what you’ll hear is that they actually assess:

  • willingness to spend time preparing for these types of interviews.
  • pattern-matching skills (map a problem to something you’ve seen before).

But this still works for big tech co’s.

If you’re a large tech co with a big brand and a salary scale that ranks at the top of Levels.fyi, a good interview weeds out people who wouldn’t do well at your company. To do well at a large tech company, you need (and I’m painting with a really broad brush, but this is true for 90% of roles at these companies):

  1. Some sort of problem-solving skill that’s a mix of raw intelligence and/or the ability to solve problems by pattern-matching to things you’ve seen before.
  2. The ability and commitment to work on something that may not always be that intrinsically motivating, in the context of getting/maintaining a well-paying job at a large, known company.

Hopefully you can see where I’m going with this. Basically, the very criticisms thrown at these types of interviews are the reason they work well for these companies. They’re a good proxy for the work you’d be doing there and how willing you are to do it.

Not that there’s anything wrong with that type of work. I spent several years at big tech co’s, and the work was intellectually stimulating most of the time. But a lot of times it wasn’t. It was a lot of pattern-matching. Looking at how someone else had solved a problem in a different part of the code-base, and adapting that to my use-case.

You really only need one Dan Luu per like 10 or 100 engineers at a FAANG. Most people aren’t going to be optimizing at the level he is, they’re going to be doing work that’s mostly a mix of problem-solving by pattern matching, and ideally, they’re motivated enough to have that job for as long as possible.

Now, unless you are one of those large companies, the type of people you want to hire will be a little different. You might need people who are passionate about a particular domain, or are really strong creative problem-solvers—and sometimes the very things that make someone a strong creative problem-solver can make them a weak pattern-matcher. Entrepreneurs tend to be creative problem-solvers; VCs tend to be strong pattern-matchers. With few exceptions, strong entrepreneurs are shit VCs and vice versa.

To tie this back to the original quote: you also probably don’t have the brand or money that big tech co’s do. So you might incorporate algorithms into your interview process, but you might also consider hands-on in-person interviews or take-home interviews (though those have their own pros and cons). The point is, don’t dismiss what big tech co’s do, but don’t blindly copy them either.

The Software Over-specification Death Spiral

I see a common pattern with startups and teams I’ve advised or been a part of. I call it the Software Over-specification Death Spiral, or SODS for short.

It looks like this:

  1. A Product Manager (or CEO, Engineering Manager, etc.) drafts up some sort of specifications or requirements for a new feature or product.
  2. The requirements are handed over to the engineering team to implement.
  3. The engineering team “implements” them, but gets some things wrong. One bucket of misses is things that were so obvious to the PM that they didn’t seem worth making explicit. Another bucket is corner cases or edge cases that the PM didn’t think of.
  4. The PM is surprised by this, and work has to be redone.
  5. In an effort to prevent this in the future, the PM and the engineering team agree that requirements need to be more detailed.
  6. Surprise: despite the increased detail in the requirements, the engineering team still gets things wrong.

In each iteration of this loop, everyone agrees the requirements just need more details, but every time that happens, things are still wrong. What’s going on?

Breeding Code Monkeys

You’re breeding code monkeys, is what’s happening. Software is complex and malleable, and no set of specifications or requirements will ever be complete. There will always be behavior that requires some “filling in the blanks” at implementation time. The person doing the implementation needs to be able to either:

  • Recognize when they should ask for clarification, or, preferably,
  • Be able to fill-in-the-blanks correctly.

Counterintuitively, once you have this problem, the more you try to weed ambiguity out of the requirements, the more likely you are to cause the opposite of what you want. Engineers turn off the part of their brain that they would use to think through product decisions, and become, essentially, “code monkeys” who just do what they’re told.

A Better Way

Does that mean you shouldn’t write specifications or requirements, or that you should write less? Well, let’s not throw the baby out with the bathwater just yet. Specifications are important, but if you’re missing key pieces, they can make your problem worse.

At a high-level, when you’re building software, there are three questions to answer:

  1. What the software does—requirements and behavior
  2. Why it does what it does—actual problem it solves for its users
  3. How it does what it does—the actual implementation

These three are all related. SODS occurs when engineers try to do #1 and #3 without understanding #2. As Simon Sinek would say, “you must start with why”.

It’s on both the PM and the engineering team to understand “the why”. A few suggestions for PMs (and engineers):

  • Hire or work with engineers who are inclined to understand the product, because they could be users themselves or because they have some other interest in the product space (or, as an engineer, try to help build products that you would use or that you find interesting).
  • Make sure you explain why something is worth building (or, as an engineer, make sure you understand that, and if you don’t, ask).
  • Push engineers to be involved in product decisions (or, as an engineer, try to be involved in various pre-implementation parts of the product lifecycle).
  • Undertake other activities that help engineers build product intuition. Engineers can spend time with users, spend time understanding your product’s analytics, etc.
  • If you really want to test whether an engineer is thinking about “the why”, have an engineer write the specifications for a feature of reasonable size, and then have the PM review them. If they can’t even attempt that, that’s usually a bad sign (though the worst sign of all is if they absolutely don’t want to even try).

Don’t fall victim to SODS.

Software as a Liability

On many teams I’ve advised or been a part of, code is generally viewed as an asset. Only some code, the “bad code”, is considered technical debt. The highest-performing teams, however, view things differently. For them, all code is technical debt on some level.

Programming vs. Software Engineering

Software requires two broad classes of effort: the immediate effort to write the software, and the future effort to maintain it. (Titus Winters of Google would call the former simply “programming”, and the sum total of both “software engineering”.)

Software engineering is programming integrated over time. Engineering is what happens when things need to live for longer and the influence of time starts creeping in.

— Titus Winters

It turns out that both the initial effort (programming) and the eventual effort (software engineering) are hard to estimate. If you’ve been in the software world long enough, the notion of a project’s initial implementation being delivered on time is so rare, it’s almost a joke. And of course, as you get to the “future effort” part, things become even harder to estimate and predict.

A Taxonomy of Technical Debt

Once you’ve built something, you (or someone else) will probably become responsible for maintaining it. This cost is usually referred to as “technical debt”. We can break the cost of this future work into two broad classes:

  • Interrupts: Interrupts are when existing systems are taxing your time through reactive work like fixing bugs, fire-fighting, etc. Writing code now that creates interrupts in the future means you (or someone else) will be able to spend less time on making progress on other work later. Both the quantity and severity of interrupts matter. Interrupts are particularly hazardous to engineering teams because they are hard to plan for and usually result in forced context switching.
  • Inertia: Inertia means that a system is hard to make new changes to (because it is hard to understand or reason about, because it’s brittle, not modular and hence hard to change, and so on). This makes forward work difficult for you (ie even when you can spend time doing forward work, it’s really slow) or for others (e.g. because the system is hard to understand, it is a tax on the time of people who need to learn more about it, as well as on people who need to explain to others how it works).

It’s worth noting that if you had two systems identical in quality, you’d find that the cost of interrupts increases with system usage—how many people use your product, how often they use it, and how diverse their usage patterns are. In fact, Hyrum’s law tells us that the more people use your product, the more diverse their usage will be. And under the pressure of increased and more varied usage, your system finds new and different ways to fail, and the cost of each failure (since your system is now highly used and depended upon) increases, too.
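Here’s a toy illustration of Hyrum’s law in code (the function and its caller are hypothetical):

```python
# The documented contract says nothing about ordering, but with enough
# users, someone will depend on the incidental behavior anyway.

def get_active_users(users):
    """Return the active users. Ordering is NOT part of the contract."""
    return [u for u in users if u["active"]]  # happens to preserve input order

users = [
    {"name": "ada", "active": True},
    {"name": "bob", "active": False},
    {"name": "cyn", "active": True},
]

# A caller that quietly depends on the incidental ordering. If we ever
# "optimize" get_active_users to return a set, this line breaks: an
# interrupt created by behavior we never promised.
first_active = get_active_users(users)[0]
print(first_active["name"])  # ada
```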

On the other hand, the cost of inertia increases with the quantity/scope of future changes you need to make to your product. And, of course, for poorly-designed systems, inertia and interrupts create vicious feedback loops. High inertia means you create bugs as you change your code, resulting in interrupts. And when interrupts happen and you need to fix them, it will be really costly because your system has inertia.

The point here is that all software is costly. Poorly written software is obviously more costly, but all software requires effort both now and in the future. Hence, it all creates technical debt—you will be paying interest on it, and occasionally you (or someone else) may need to pay down the principal with some refactoring or re-architecture. In other words, you are creating a liability.

A lot of people think of liability from a financial (debt) or legal perspective, but literally, a liability is simply “the state of being responsible for something”. And when you write software, you or someone else will be responsible for it.

Why Write Code?

But if all code is technical debt, why write any software at all? Well, the functionality that software enables is an asset. At some level, any valuable piece of software is solving some problem. And that’s the important distinction here. Software is the means, not the end.

Another way to frame this comes from “Uncle Bob” Martin (author of the seminal book Clean Code—though this framing is covered in another of his books, Clean Architecture). He views software as having two dimensions: behavior and structure.

Every software system provides two different values to the stakeholders: behavior and structure. Software developers are responsible for ensuring that both those values remain high.

— Bob Martin

Uncle Bob goes on to argue that structure (the ability to modify a piece of software) is more important than behavior (its current functionality). His argument is compelling: a perfectly functional but inflexible system will be impossible to change when change is required, but a poorly functioning system that is extremely flexible can easily be fixed.

I think that statement is mostly true. I use the word “mostly” because you could argue that there are some systems where existing functionality matters more than future flexibility. There are some critical contexts in which being absolutely certain that software is operating correctly outweighs any increased cost of inflexibility (e.g. a car, an airplane, medical equipment). And there are some contexts in which failing to ship some functionality really quickly means it won’t matter how flexible that software is in the future, because it will have no future (e.g. an early-stage startup). But I don’t want to go too deep on this topic, since it’s tangential to the point I’d like to get to (and we’re getting there, I promise!).

Future Scope and Likelihood

So, putting this all together: software makes sense to build if and only if the value it creates now (through its functionality), plus the value it enables in the future (through its ability to change its functionality), outweighs the cost of building it now and maintaining it in the future. But that “future” part is hard to predict.
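Schematically (my own notation, not a formal model):

\[
V_{\text{now}} + \mathbb{E}\left[V_{\text{future}}\right] \;>\; C_{\text{build}} + \mathbb{E}\left[C_{\text{maintain}}\right]
\]

where the two expectation terms hinge on exactly the things that are hard to predict: the scope and likelihood of future changes, and the interrupts and inertia the system will accumulate.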

I’ve always been amazed at how financial analysts can put together a spreadsheet to value an asset or investment. They’ll confidently forecast out a series of cash-flows, often in perpetuity. When they can’t forecast perpetuity with a straight face, they’ll slap a terminal value on it instead.

In software, it’s not that easy to predict (or pretend to predict) the future. So it’s easy to just say, like Bob Martin, that flexible software is better than inflexible software. But that’s a truism, and it doesn’t really solve the problem of how, exactly, to think about building your software.

Software purists will make the case that “good software is good software”. Practice good design patterns, use the SOLID principles, remove (or encapsulate) complexity, etc. And you should. There are things that are almost universally good or bad architectural decisions in software, and we’ve got some great literature to help guide us.

But remember, writing good software is the means, not the end. Your goal is to build software in a way where current and future functionality outweigh current and future costs. And to get that right, we have to understand the scope and likelihood of future changes. Without that, we’re flying blind.

Domains, Users, and Problems

I’m a big fan of Domain-Driven Design because it shifts focus off the code and onto the domain you’re trying to model.

The most significant complexity of many applications is not technical. It is in the domain itself, the activity or business of the user. When this domain complexity is not dealt with in the design, it won’t matter that the infrastructural technology is well-conceived.

— Domain Driven Design

The promise is simple: model the underlying domain correctly, and not only should the code and architecture fall into place now—they should be able to adapt and evolve as your requirements change.
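Here’s a minimal sketch of what “modeling the domain” can look like in code (a hypothetical personal-finance domain; the types and rules are illustrative, not anyone’s real codebase):

```python
# The point: the code speaks the domain's language (Money, Budget,
# remaining), not the database's or the UI's.

from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    cents: int
    currency: str = "USD"

    def minus(self, other: "Money") -> "Money":
        assert self.currency == other.currency, "no cross-currency arithmetic"
        return Money(self.cents - other.cents, self.currency)

@dataclass
class Budget:
    category: str
    limit: Money
    spent: Money

    def remaining(self) -> Money:
        # A domain rule lives with the domain object, not in a controller.
        return self.limit.minus(self.spent)

groceries = Budget("groceries", limit=Money(50_000), spent=Money(32_500))
print(groceries.remaining())  # Money(cents=17500, currency='USD')
```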

Deeply understanding the domain you’re working with is a great start, but focusing too much on domain modeling can be misguided. Your users don’t care about how the domain is modeled—they care about whether your software solves their problems. So you actually need to understand three things:

  1. The domain.
  2. Your users.
  3. Your users’ problems.

How to actually do that probably deserves a separate article (or several), but it basically comes down to spending time thinking about, discussing, and analyzing those three things.

Don’t Gatekeep

As a final thought, there’s a risk of taking “software as a liability” to an extreme.

You’ve probably worked with one of these developers before: the type that gatekeeps software. Asked to implement something by a Product Manager or a colleague, their go-to response is “no, that’s too complicated”. Then they walk off feeling good that they have just prevented adding a bunch of complexity into the code base, and the future stream of liabilities that would create.

Any principle can be abused with the wrong attitude. So yes, all software is a liability, and it all has costs, but by truly understanding the domain you’re building in, the users you’re building for, and the problems you’re solving, you can help manage the trade-off between the cost of software and the benefits it provides.