At many startups, culture happens organically. It’s just built around the personalities and values of the founders and early team.
But anyone who has built a company before learns a pretty vital lesson: culture is important, and when something is that important you have to be intentional about it.
We wanted to build a company that would endure. We started noticing [these types of] companies have something in common… We started to realize that we needed to have intention; culture needs to be designed.
Another way of thinking about this is that, as a company, you are not just building a product. You’re building an engine, a machine that builds a product. That machine is composed of several pieces:
The team you hire.
The culture you put in place (deliberately or accidentally). This includes things like what is rewarded/punished, what incentives are in place, etc.
The processes you put in place (again, deliberately or accidentally). Formal and informal communication. How decisions are made.
I’d like to make three arguments here:
It’s helpful to think of both the “actual” product and the machine as products. Going forward, I’ll call the former the user product and the latter the company product.
Your user product and your company product both sprout out of your values.
There is a feedback loop between those two products. Your company product (aka your team/culture/process) and your user product both impact each other. They are intertwined.
Let’s first discuss the dualism between your user product and your company product. Most people are familiar with the idea of finding product-market fit for your users (or, more generally, finding product-market-channel-model fit for your users). This will involve thinking through things like:
Identifying who your core customer is.
Understanding their core problems.
Offering a solution to their problems.
Positioning that solution as being differentiated, via some narrative/brand.
Communicating that narrative to your core customers.
In our tech-industry-lean-startup world, you do that iteratively as you learn and grow, but it’s basically the same core loop.
In reality, every company is actually running two of those loops whether they realize it or not. The obvious one is for their user product, but they’re running one for their company product too:
Identifying what they need to accomplish, and who will help them be successful.
Figuring out what they can offer them as an employer.
Positioning their company as a differentiated employer, via an employer brand.
Communicating that narrative to potential employees.
Many companies get this second loop wrong. For example, I commonly see early-stage startups trying to mimic the recruiting practices of larger companies. If you’re competing for the same candidates in the same way—against companies that can pay more and have a much more recognized brand—without some unique value proposition, you’re setting yourself up for failure. It’s basic supply and demand. So you have to find a way to differentiate yourself as an employer. My friend/co-author Aline Lerner has written about that here if this is something you’re interested in learning more about.
Your Values Define Your Two Products
OK, so if you’re building a company, you’re building two products and running two loops for each of those products. What shapes those two loops? Especially in your early days, your company’s user product and company product both grow out of your values—your beliefs about how the world is or how it should be.
Values are rarely right or wrong, but they can be substantially different. Let’s take some opposing values and see what effect they might have on a company’s culture or user product.
Let’s start with a company’s view on how decisions should be made:
Data is crucial. Everything can and should be measured, and decisions should be made based on that.
Not everything can be measured; sometimes you have to use your judgment and instincts when things are ambiguous.
You can kind of imagine what product and culture companies with each of those values might build.
The duality isn’t perfect, though, and it can break down a little. Let’s take a company’s view on relationships/transactions:
The world is generally a zero-sum game. When two parties interact and one wins, the other loses.
The world is not always zero-sum. If you think hard enough, you can come up with ways to align incentives so everyone wins.
Things can potentially diverge here. A company might have a different set of values for “in-group vs. out-group”. For example, a company might treat its employees with a lot of trust, but treat users/customers in a highly adversarial manner. This is actually a natural sociological outcome (as humans, we evolved in tribes/clans), but it might be hard to maintain as a company grows (tribes/clans tend to break down at scale).
Some companies are more interesting in that they seem to hold the opposite viewpoint: employees may not be treated very collaboratively or trusted, but when it comes to their users or customers, they are obsessive. I haven’t worked at either Amazon or Apple, but from the outside, they generally seem to be that type of company. It’s like an anti-tribe. Or maybe a cult, I don’t know.
The Two Loops Are Intertwined
OK, so you have two loops, and they sprout out of your values. But they’re also intertwined and interdependent.
I think in the early days, your company product dominates and it influences the user product you build. You start without a product, without users/customers, and without revenue, so the product you build is shaped by your values, and those values are set by the early team.
But as your company grows, the type of people you attract and the value proposition you offer them is somewhat determined by your user product. Depending on what you build and how you build it, different types of people will want to come work with you. Ultimately, and I’ve written about this before, the revenue model, which is part of your user product, will dominate.
So the lessons are:
Be intentional about both the product you build for your users and the “product” you build for your team, especially in the early days.
Pick a revenue model with care, since that will probably be the dominant term over time.
When a team or company is not functioning as it should, two types of problem-solvers often emerge. The organizational psychologist tries to debug the culture. The organizational mechanic tries to debug the process.
The mechanic asks what meetings or documentation are missing. Organizational mechanics love “reviews” (meetings that force decisions to be made). When it comes to communication, they look at the mechanics of what is said and when (how it is said is less relevant). Mechanics look at the structure and the connections. When all else fails, mechanics become surgeons. They “operate”: they pursue “reorgs” or just plain old layoffs and firings.
Organizational psychologists are more about the human part of the equation. What incentives has the organization set up? How can those incentives be changed? What is rewarded and what is punished? When it comes to communication, they look at the how and the why.
Really good psychologists can dig even deeper. In particular, they can understand how the psychology of an organization’s leaders amplifies and impacts the rest of the organization. Is the CEO a micromanager? Is the Head of HR/People generally a cynic who doesn’t trust people to do what’s right? Is the Head of Product uncomfortable with ambiguity or with quantitative analysis? What effects does that have down the chain of command?
Great leaders are able to put both the mechanics and the psychology together. They understand that teams are complex systems of humans. They understand that debugging is cyclic: the mechanics and the process affect the culture and the psychology, but the mechanics and the process are also an output of an organization’s culture and psychology. You need to look at both sides to solve most problems.
I always struggled with double-entry accounting, even after I got an MBA. I could do it and mostly memorized my way through a lot of the jargon, but it wasn’t until I took a systems view to accounting that I really understood the mechanics of how things worked. I figured I’d share some thoughts about that.
Stocks vs. Flows
A common theme in systems thinking is the idea of stock vs. flow. In accounting, everything looks like numbers, but in reality, some of those numbers are stocks, and some are flows, and understanding that distinction is critical.
So—more generally—what are stocks and flows? The classic illustrative example is a bathtub. The stock is the amount of water in the tub. Flows are water going into, or out of, the tub (via the drain and via the faucet). There are a few important relationships between stocks and flows:
Time: A stock is measured as an amount at a fixed point in time (10 gallons at 10:00AM). A flow is measured in a different unit: either a rate (ie 1 gallon / minute) or a fixed amount over a period (5 gallons between 10:00AM and 11:00AM).
Space: A stock is measured for a system with a boundary (e.g. a bathtub). Anything within the boundary is part of that stock. Flow happens between system boundaries.
There is a simple law that holds true for all systems, which is that the change in stock of a system between two time periods is equal to the sum of the flow that happened within that period. This is obvious but stating it is important. If the tub had 10 gallons at 10:00AM and 15 gallons at 10:15AM, this means the total flow was +5 gallons in that period. It could be that the faucet added 10 gallons, 8 went down the drain, 1 evaporated, and you poured in 4 using a bucket. That sum has to be 5 (15 – 10).
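The arithmetic above can be sketched directly. A minimal Python sketch of the conservation law, using the numbers from the bathtub example (the flow names are illustrative):

```python
# The change in a stock over a period equals the sum of the flows
# during that period. Positive = into the tub, negative = out of it.
flows = {
    "faucet": +10,       # gallons added by the faucet
    "drain": -8,         # gallons that went down the drain
    "evaporation": -1,   # gallons lost to evaporation
    "bucket": +4,        # gallons poured in by hand
}

stock_at_10_00 = 10  # gallons at 10:00 AM
stock_at_10_15 = stock_at_10_00 + sum(flows.values())

print(stock_at_10_15)  # 15 gallons at 10:15 AM
```

No matter how the individual flows break down, they must sum to the observed change in the stock.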
How does this relate to personal finance? Well, the first and most obvious mistake is mixing stock and flow. Your cash balance, in a checking account, is a stock. An expense is a flow. They both look like numbers with a dollar sign, and they are related (like any stock and flow might be), but they are really different things.
Confusing stock and flow can often lead to simple mistakes or underspecified statements. Like “I have a $10K balance in my account in October”. That is an underspecified statement. Does it mean you maintained a minimum balance of $10K in October? Does it mean you had $10K at midnight of October 1st?
One of the interesting pieces of systems thinking is that you can draw arbitrary boundaries around pieces of a system, and when you do, events can look very different. You can draw smaller boundaries to observe things more carefully, or larger boundaries for a broader view.
Let’s take this “system” with two cash accounts.
The green boundary encompasses only one account. That account has a balance, and it has some flows (cash flow). For the purposes of the green system, it doesn’t matter if a flow is an expense, a transfer, etc, all that matters is how big it was, whether it was in or out, and when it happened. You can call it a credit or a debit—that’s just convention.
A bank’s statement tells you the stocks and flows. For a period of time, the green “system” will have a starting balance, some flows (in and out), and an ending balance. That should all sum up.
In fact, if you know (or can predict) all the flows, you can derive the balance at any point in time. An account is a series of financial flows (magnitude, direction, datetime). The balance is always directly derivable from the cash flows. Simplistically, having that series of flows fully specifies the account. And, for the purposes of a single account, it doesn’t really matter where the money is flowing to (or from), just that it is flowing in (or out) of that single account.
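A minimal sketch of that idea, with hypothetical dates and amounts: the balance at any point in time is just the sum of all flows up to that time.

```python
from datetime import datetime

# A hypothetical account represented purely as a series of flows:
# (timestamp, signed amount). Direction is encoded in the sign.
flows = [
    (datetime(2023, 1, 1), +500.0),   # deposit
    (datetime(2023, 1, 10), -120.0),  # withdrawal
    (datetime(2023, 1, 20), +75.0),   # deposit
]

def balance_at(flows, when):
    """Derive the stock (balance) at any point in time from the flows."""
    return sum(amount for ts, amount in flows if ts <= when)

print(balance_at(flows, datetime(2023, 1, 15)))  # 380.0
print(balance_at(flows, datetime(2023, 2, 1)))   # 455.0
```

The series of flows fully specifies the account; the balance is a derived quantity, not a separate piece of state.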
Now instead, if you take the (broader) yellow system, the story is different. A transfer between two accounts isn’t an inflow or outflow, it’s neutral, and you could, if you didn’t care about individual account balances, simplify the system like this:
Let’s do a slightly more complicated example. At Monarch, we sometimes see users treat credit card payments as an expense, while others treat them as a transfer. This discrepancy is pretty easy to explain using system boundaries.
If you’re using the yellow boundary, that payment is a neutral transfer. That’s also the proper accounting solution, of course. Technically, that payment doesn’t affect your net worth, because it reduces an asset and a liability by the same amount. It is two opposing flows.
But if you’re looking at the cash account only, because you’re cash-oriented (maybe because you’re worried that if you run out of cash you won’t be able to pay your rent or buy food), you’d be looking at the green boundary. In which case, that credit card payment is an outflow, and it looks more like an expense. So, in a sense, while the yellow boundary gives you the technically correct answer, it’s hard to tell someone looking at the green boundary that their interpretation is wrong.
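That boundary-dependence is easy to sketch: the same payment is classified differently depending on which accounts sit inside the boundary (account names and amounts here are made up).

```python
# A credit card payment: $200 moves from checking to the card.
payment = {"from": "checking", "to": "credit_card", "amount": 200}

def classify(flow, boundary):
    """If both endpoints are inside the boundary, the flow is a neutral
    transfer; if only one is, it crosses the boundary as an in/outflow."""
    src_in = flow["from"] in boundary
    dst_in = flow["to"] in boundary
    if src_in and dst_in:
        return "transfer (neutral)"
    return "outflow" if src_in else "inflow"

print(classify(payment, {"checking"}))                 # outflow
print(classify(payment, {"checking", "credit_card"}))  # transfer (neutral)
```

Same flow, different label, purely because of where you drew the boundary.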
And in fact, the allocation of money within accounts does impact future flows. Things like interest create a feedback loop, where future flows depend on the balance, and that compounding is extremely powerful in ways people tend to underestimate.
Let’s do one more, where you make a payment towards your auto loan.
If you take the green boundary, then a payment to your car loan looks like an outflow of cash from your account, and it seems like it is decreasing your net worth (ie an expense). However, if you take a broader system view, you are actually paying down a liability: your cash account is losing cash, but part of that cash is going toward decreasing your liability (ie a transfer), and that part is neutral to your net worth. Then there’s interest involved too, but that interest actually is an expense. And payments may have tax implications as well (e.g. mortgage). And of course, I didn’t even factor in that your car is depreciating. Anyway, the point is, a flow or event can look very different depending on how you draw your boundaries. Not only can it get more complicated, it also just looks like a completely different event.
Double-entry accounting is the main paradigm in modern accounting. It sounds scary—this is where you run into words like “debits” and “credits”. You can memorize what each of those is, but I find it a lot easier to first intuitively understand what’s going on. And the stock and flow system model can help with that.
Double-entry means that every transaction to/from an account has to have a corresponding, opposing entry to/from a different account. It’s not entirely intuitive to most people, but we can easily map it to our model by just creating the right system of accounts and drawing the right boundaries. Basically, because a transaction is a flow, it has to flow from somewhere (one entry) to somewhere else (the other, opposing entry).
For example, a transfer between two bank accounts is easy, because the opposing transactions are clear. -$100 out of your checking account, $100 into your savings.
What about an expense? Expenses look something like this:
When you buy groceries for $10, you can think of it as being a flow of -$10 out of your cash accounts, and a flow of +$10 into a “groceries account”. Except the groceries account isn’t a “real account”, it’s just created because it helps us “balance the books”. And we shouldn’t include it inside our system boundary for net worth (or cash or anything else). But if you want, you could create a system boundary that encompasses all expenses, and it would look like this:
Businesses all use double-entry accounting. When they buy something, if that thing is an asset, they decrease their cash account and create an account for the new asset (or put it into an existing asset account). They then depreciate the asset over time (by, you guessed it, creating a “depreciation account”). The asset account is included in the business’s “net worth” (balance sheet)—it’s inside that system boundary. But depreciation is outside the boundary, so it isn’t, and it becomes an expense. (Of course, if they buy something that doesn’t become an asset, it would be an immediate expense.)
You can really create accounts for anything you want, and if you wanted a complete system, you would do exactly that.
Anyway, stocks and flows and double-entry accounting are the same thing, assuming you’re labeling things correctly, and all flows are opposing. The only difference is rather than using diagrams like we use in stocks and flows, a double-entry might look like:
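In code form, the equivalence is just this balancing rule: every transaction is a set of postings that sum to zero, and balances fall out of the postings. A minimal sketch, with made-up accounts and amounts:

```python
# Each transaction is a list of (account, signed amount) postings.
# Double-entry = the postings of every transaction sum to zero,
# because every flow comes from somewhere and goes somewhere.
transactions = [
    # Transfer: checking -> savings
    [("checking", -100), ("savings", +100)],
    # Expense: groceries paid from checking
    [("checking", -10), ("expenses:groceries", +10)],
]

for postings in transactions:
    assert sum(amount for _, amount in postings) == 0, "books don't balance"

# Balances (stocks) are derived per account from the postings (flows):
balances = {}
for postings in transactions:
    for account, amount in postings:
        balances[account] = balances.get(account, 0) + amount

print(balances)
# {'checking': -110, 'savings': 100, 'expenses:groceries': 10}
```

Which accounts count toward “net worth” is then just a question of which of these accounts you include inside your system boundary.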
I find stocks and flows more intuitive than double-entry accounting. For one, anyone I know who has studied accounting initially struggles to understand a debit vs a credit. Also, it’s less visual, and makes it less clear what the “system boundaries” are.
Going Beyond Simple Systems
Since we’re building a personal finance platform, we spend a lot of time thinking about these things.
Let’s assume you’re a person or household, and you’re trying to understand your financial picture. If you look at what a lot of current budgeting apps do, they provide:
Historical views (spending): you can break things down by account, category, etc because you have the account-level data from banks. You can tell someone what they’ve spent money on. Sure, there might be some confusion around whether paying down a credit card is an expense or a transfer, but those are easily resolvable.
Current views (balances): again, you can break things down pretty well if you have the account-level data. You can track balances and things like net worth over time, assuming you have the data and are tracking all accounts.
Forward view (budgeting): this is where things start to fall apart, because this is where system boundaries start to matter a lot more. Most budgeting views take a really broad system boundary view, where expenses are outflows, income is inflow, and transfers are neutral (since they are within the system’s boundaries). Accounts are mostly lumped in together.
You can’t really do accurate long-term forecasting or give people solid financial advice based on that. You can’t model. Sure, it lets you build a simple product, and that can work for some people, but most people we talk to end up in spreadsheets because they want to model. Let’s look at some of the information you lose when you draw such a large system boundary for a forward-looking view of finances.
Balance-Dependent Flows (e.g. Interest)
In our simple two-cash-account example above, without any interest, the allocation of cash between accounts didn’t really matter. In reality, interest complicates things because it results in flows that depend on balances. Cash in a high-interest-rate savings account increases due to interest (and vice versa for a credit card balance incurring interest). So the allocation of balances within the broader boundary does impact the actual flows, and you can’t just lump everything together without losing some information that is crucial for forecasting (compounding interest is a huge factor in the longer term). Allocation does matter.
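A quick sketch of that feedback loop (the rates and horizon are illustrative): because the flow depends on the stock, two identical starting balances diverge based purely on allocation.

```python
# Balance-dependent flows: interest makes the future flow a function
# of the current stock, so where the money sits matters.
def project(balance, annual_rate, years):
    for _ in range(years):
        balance += balance * annual_rate  # the flow depends on the stock
    return balance

savings = project(10_000, 0.04, 10)   # e.g. a high-yield savings account
checking = project(10_000, 0.00, 10)  # the same cash sitting idle

print(round(savings - checking, 2))  # the cost of the "wrong" allocation
```

Lumping both balances into one undifferentiated pool throws away exactly the information (the allocation) that drives this difference.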
Gains / Losses
Interest actually isn’t terribly complicated because it is largely predictable. Gains and losses are a lot more volatile. A stock portfolio could appreciate or depreciate, which means traditional (physical) systems analogies break down. The water in your tub never appreciates. You can’t mark its value up or down. Water has to flow in or out, whether it’s through a faucet, drain, splashing, evaporation, or your toddler deciding to pee in it while you’re giving him a bath.
Now, for many types of assets, you can’t accurately forecast gains or losses. You can make assumptions based on history or your own forecast (ie a certain growth rate for a portfolio of stocks), but it’s just a forecast. And that forecast will depend on allocation (are you investing in high-risk assets? Index funds? A mix?)
Liquidity / Hard Fungibility
Things that are “fungible” are things that can replace each other. Due to liquidity, not all dollars you own are “fungible”, and not all balances are interchangeable.
A dollar in cash is not the same as a dollar in retirement savings in your 401K which is not the same as a dollar of value in a home you own. Various balances might have restrictions about when they can be liquidated (like desperately selling a house or car), might result in losses / expenses if liquidated at the wrong time (eg, paying taxes or penalties for early withdrawal from a retirement account), or might have transaction fees associated with them. A lot of this information is important for the purposes of forecasting and planning, but the broad system view just sums these up to your net worth. It’s nice to see how your net worth is trending, but having $10K in your retirement account today doesn’t mean that money is spendable right now.
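One way to sketch this (the balances, penalties, and liquidity flags are all made-up numbers): summing everything gives you net worth, but filtering by liquidity gives a very different “spendable now” figure.

```python
# Accounts as (name, balance, liquid_now, liquidation_haircut).
# The haircut stands in for taxes, penalties, or selling costs.
accounts = [
    ("checking", 5_000, True, 0.00),
    ("401k", 10_000, False, 0.30),        # taxes + early-withdrawal penalty
    ("home_equity", 80_000, False, 0.08), # selling costs, slow to liquidate
]

net_worth = sum(balance for _, balance, _, _ in accounts)
spendable_now = sum(balance for _, balance, liquid, _ in accounts if liquid)

print(net_worth)      # 95000
print(spendable_now)  # 5000
```

The broad system view reports the first number; planning and forecasting often need the second.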
In addition to “hard fungibility”, there is also “soft fungibility” (I made this term up, I’m sure there’s a more formal one). Soft fungibility is mentally-imposed—it’s some emotional value you as a person assign to an allocation of money. Some people call that “earmarking”.
Sometimes, money with the same liquidity is treated differently, usually because its owner has earmarked it for some purpose. For instance, let’s say someone (a parent) has two checking accounts: one out of which they spend regularly, and another where they are keeping money to buy Christmas gifts for their children. Now let’s say there’s a shortfall in their spending account. Will they dip into the Christmas gift account to cover it? Many people won’t. They might, instead, do something seemingly irrational, like borrowing at high interest to cover their spending, just to avoid touching this “earmarked” money.
In fact, there is a lot of research into why some people have savings earning low interest while carrying debt with high interest, when the rational thing to do would be to use the savings to pay down the debt. The reasons are complicated, but usually it’s because they have earmarked their savings for some purpose, and a dollar that has been “earmarked” for something seems to be worth more than its intrinsic value.
Putting It Together
So based on the previous few points, you might assume that to really master finances, you need to worry about really granular, detailed data. You need a full system, with internal and external boundaries.
And to some extent, that is true. But that introduces complexity. And while you can use software and good product design to hide some of that complexity, at some point, it eventually bleeds through—and the system stops being purely mechanical, because there are humans in the mix.
Humans with goals and aspirations for the future, but also concerns and worries. Humans with baggage and a complex relationship with money. Humans that might throw their hands up and avoid problems that seem complex or stressful. So there’s a balance between creating a system that’s complete, but complicated, and one that’s tractable, but misses big parts of the picture.
And there are ways to navigate that tension, using good product design, basic psychology, and—spoiler alert—systems thinking. But that’s a topic for another post.
(After writing this, I came across this excellent piece by the awesome Martin Kleppmann, which is probably a lot more eloquent than anything I’d ever write. But I still figured it was worth sharing the systems model, since it is slightly different.)
Small differences in the productivity of software developers on a team can easily magnify themselves over time.
On many software teams, one engineer seems significantly faster than the others. Now, in some cases, it’s because that engineer is cutting corners left and right. They get stuff done, but they cause damage. They’re a tactical tornado.
Let’s assume your fastest engineer isn’t that type. They put out good code. It’s possible that they’re not actually as fast as you think. It turns out that even a marginal advantage for one engineer can translate into significant speed differences.
Let’s imagine a simple team of two engineers, Amy and Bob. All things equal, Amy is 10% faster than Bob. And by all things equal, I mean they produce the same quality of code; Amy just produces 10% more. This slight speed advantage could be because she’s naturally faster, or because she joined the team earlier and hence has more context on the code.
That 10% difference can actually translate to a pretty large difference in output. Initially, Amy produces 10% more code. Now Bob has to increase the time he spends doing code reviews of Amy’s code, and just generally keeping up with Amy’s changes so he has context and can make the changes he needs to make. Which means he has less time to write code, which frees up Amy to be even more effective, which increases her output yet again.
Over time, this is amplified. Amy is spending most of her time writing code. Bob is spending most of his reviewing and keeping up. Amy’s slight advantage turns into a much larger one. Other colleagues start to view her as the go-to person, and wonder why Bob is falling behind.
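A toy model of this loop, with made-up constants: each week, each engineer loses coding time to reviewing the other’s output, so Amy’s head start compounds into a steady-state gap larger than her raw speed advantage.

```python
# Each week, review load is proportional to the *other* engineer's
# output last week; whatever time remains goes to writing code.
def simulate(weeks=20, amy_speed=1.1, bob_speed=1.0, review_cost=0.3):
    amy_out, bob_out = amy_speed, bob_speed  # week-1 output
    for _ in range(weeks):
        # Fraction of the week each spends reviewing the other's work.
        amy_review = min(review_cost * bob_out, 0.9)
        bob_review = min(review_cost * amy_out, 0.9)
        amy_out = amy_speed * (1 - amy_review)
        bob_out = bob_speed * (1 - bob_review)
    return amy_out, bob_out

amy, bob = simulate()
print(round(amy / bob, 2))  # ends up larger than the initial 1.1x
```

The exact numbers don’t matter; the point is that the coupling (reviewing each other’s work) turns a small raw-speed difference into a larger observed one.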
A good engineering manager or senior engineer can detect when that’s happening and try to correct the balance. But often the team kind of settles into a mode where Amy is assumed to be better and more productive and everything is funneled to her.
On my product engineering teams, I under-specify product requirements by design. That is, the work that engineers are asked to do is always left a little ambiguous.
I used to have a very naive view of how militaries made decisions. You had a formal chain-of-command, and detailed instructions were passed down that chain and implemented, no-questions-asked. If you questioned those instructions (or god forbid, decided to deviate from them), you would be reprimanded: yelled at by your superior officer or court-martialed (whatever that means) or something, idk.
It turns out modern militaries don’t operate that way (or at least, they try not to). In fact, over a century ago, the Germans developed a style of military tactics called Auftragstaktik, or “mission-type tactics”. Here’s how one German officer, Von Schell, described it in Battle Leadership, a 1933 military book that is popular to this day (it’s recommended on the US Marine Corps Commandant’s reading list):
In the German army we use what we term “mission tactics”; orders are not written out in the minutest detail, a mission is merely given the commander. How it shall be carried out is his problem.
The Germans (or Prussians at the time, again idk) apparently developed this approach of avoiding detailed orders in response to being beaten by Napoleon. Napoleon’s troops couldn’t be superior to theirs, they concluded, so he had to have just managed his troops better. Their detailed orders led to rigid tactics, and in what was then modern warfare, there was no room for detailed, rigid commands. So, to give officers on the ground—who had the best knowledge of reality on the ground—the ability to adapt, they were given less detailed orders.
It turns out there is another benefit to less-detailed orders—a psychological one. Here’s our Von Schell again: “There is also a strong psychological reason for these ‘mission tactics’. The commander… feels that he is responsible for what he does. Consequently, he will accomplish more because he will act in accordance with his own psychological individuality”.
What Von Schell is describing, essentially, is what we would today call empowerment, though that term doesn’t come into existence until several decades later (and then it proceeds to get misused to death in the corporate world).
Big Waterfalls, Small Waterfalls
Now, in the type of software I’m involved in, we don’t have to defeat Napoleon and luckily, if we screw up, no one dies. But I’ve found that the idea of under-specifying works really well with my software teams.
In a traditional software development process, a product manager (or at many startups, the CEO who is the de-facto product manager) sets a high-level vision for what needs to get implemented. That product manager then works with designer(s) to translate that vision into more granular artifacts, like a product requirements document (PRD) and/or some visuals (mockups, etc). There might be user stories involved. Eventually, these get translated into “requirements” that are then given to the engineering team to build, maybe in the form of tasks in a tool like JIRA, Asana, etc.
You might recognize this as the “waterfall” method of software development, and it is the equivalent of my naive view of how militaries operate. It is rigid, and instructions flow in one direction. The software industry recognized this a couple of decades ago, and movements like Agile were born. The spirit of Agile was to break the rigidity of the process and make things more light-weight and, well, agile.
But when not implemented thoughtfully, all Agile does is break the process down into smaller waterfalls. This is a definite improvement—smaller cycles and feedback loops are better than larger ones. But it still leaves a lot of room for improvement.
This is where under-specification comes in.
A Chance to Exercise Judgment
To summarize, “mission tactics” offer two benefits. The first is tactical: the people making smaller, on-the-ground decisions can make them faster and better. The second is psychological: mission tactics create a sense of ownership, which makes people more engaged and invested in the outcome.
This carries over into the software world. No matter how hard you try to specify everything, there will almost always be uncertainty. There will be edge cases you didn’t anticipate. Sometimes, it will become clear that an interaction or feature won’t work as designed only as it’s being built. And finally, things may end up being harder or easier to build than anticipated, which changes the calculus about which things are even worth building in the first place.
Any software developer working on a product needs to be constantly making micro-decisions around what they’re building. When something is unclear or doesn’t make sense, do they:
build it as designed?
halt, and flag it to someone on the product/design team but wait to get an answer?
improvise with something that makes more sense?
do some combination of the above?
These micro-decisions require an understanding of what they are building and why. They require an understanding of the users who will use the product, and the problem space. And, they require an understanding of the scope and likelihood of possible future changes. They require thinking holistically and strategically. But most of all, they require good judgment.
Good judgment is never engaged when detailed instructions are given. Good judgment is engaged and improved when there is room for it to grow.
Does this mean that specifications should be entirely ambiguous? Of course not. Without enough direction, it’s hard to build anything at all. A good overview of what needs to be built and why, along with some user stories and some visuals can help an engineer understand the “intent” of what needs to be built. Good visuals are especially important, because they remove the burden of thinking about how something will look, and let the focus be on how it will behave and how it will be built.
Does this work perfectly? Of course not, either. Mistakes will be made, details will be missed. But, it will be the details that are missed, not the big picture. And over time, as the team builds judgment and understanding, those missed details will tend to shrink.
Unfortunately, I often see teams go the other direction: over-specifying. This can be an especially vicious loop to get stuck in, because it gets worse over time. Engineering tasks are heavily specified, so engineers don’t engage their judgment. They turn into literal “code monkeys”. They make obvious mistakes. The response to that? More specification the next time around to try and remove any opportunity for error. Which leads to less judgment, and more mindless coding. You get the point.
Does this always work? Not everywhere, and not unconditionally. It turns out that many modern militaries adopted a variation of “mission tactics”, known as Mission Command. Mission Command swings the pendulum back a little from mission tactics. Instead of just communicating intent without details, superior officers exercise judgment in deciding when to use more detailed instructions and control, and when to delegate. Officers are told that Mission Command requires “shared understanding, mutual trust, and high competence”. The literal chart given to officers to help decide how much detail to use in control looks like this:
You can map that to software development as well. For under-specification to work well:
There needs to be a lot of ambiguity and lack of predictability.
The team is competent and experienced.
There is a high-level of trust and shared purpose.
These conditions are a lot easier to achieve at earlier-stage start-ups, and harder to maintain as a team grows. Companies try to eliminate ambiguity and unpredictability as they grow. It gets harder and harder to maintain the same bar for talent (assuming that bar was there to begin with). And, of course, trust and intimacy start to break down. That typically tends to be when teams start to over-specify again.
Most architectural mistakes I’ve seen in software stem from a mistake either in the domain model or the data flow. Understanding what each of those two things is, how to do them both well, and how to balance the tensions between them is an essential skill every developer should invest in.
Let’s use an example to expand on this.
Let’s imagine we’re building a personal finance product. A user has a set of financial transactions (Transaction). Each transaction has a dollar amount, happens on a date, in a financial account (Account) and is labeled with a category (Category).
Further, we know a few other things:
The balance of an account at any point in time is always the sum of all transactions up to and including that time.
Users may want to add, remove or edit transactions at any point.
Users will want to see the balance of their accounts at any point in time, and how the balances change over time.
Users will want to slice and dice their cash flow, too. They will want to see the sum of their transaction amounts between certain dates, for certain categories, and for certain accounts, and they may want to group that data too (for instance, a user might want to see how much they’ve spent by category, each month over the past 12 months).
Sounds pretty straightforward so far. But let’s dig in.
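To make that concrete, here is a minimal sketch of the initial domain model in Python. The field names and the signed-amount convention are my own assumptions, not part of the original description:

```python
from dataclasses import dataclass
from datetime import date
from decimal import Decimal

@dataclass(frozen=True)
class Category:
    name: str

@dataclass(frozen=True)
class Account:
    id: str
    name: str

@dataclass(frozen=True)
class Transaction:
    account_id: str
    category: Category
    amount: Decimal       # assumed convention: negative for spending, positive for income
    occurred_on: date

def balance(account_id: str, transactions: list[Transaction], as_of: date) -> Decimal:
    """Invariant from the spec: an account's balance is the sum of all its
    transactions up to and including `as_of`."""
    return sum(
        (t.amount for t in transactions
         if t.account_id == account_id and t.occurred_on <= as_of),
        Decimal(0),
    )
```

Note how the balance invariant falls directly out of the model: the `balance` function is just the sum described above.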
When it comes to modeling your domain, the seminal idea is Domain-Driven Design (DDD). The fundamental idea behind DDD is to map entities in your software to entities in your “business domain”. Parts of this process are pretty natural. For instance, we’ve already started doing that above (entities for a Transaction, an Account, and a Category all naturally fell out of just describing what users want to do).
But domain-driven design doesn’t stop there. It requires technical experts and “domain experts” to constantly iterate on that model, refining their shared model and then updating the software representation of that model. This can happen naturally as you evolve your product and use-cases, but often, it’s a good idea to trigger it up front through in-depth discussion and questioning of how the model could accommodate future use-cases.
For example, here are some questions that might help us refine our model, and some possible answers.
For starters, here’s one: what if an account has a starting balance? How do we represent that? Does that violate our initial assumption that an account’s balance is the sum of all its transactions? The answer depends on how you model your domain.
For some products, it might make sense to add a starting_balance field to your Account entity. A more “pure” approach might be to keep the initial invariant (that an account’s balance is the sum of all transactions), but refine things so that starting balances are actually a special type of Transaction (with some invariants around that—for instance, an Account can only have one starting balance Transaction, and it must be on the date the Account is opened). But this is good, we’re domain-modeling now! We’re rethinking some of our assumptions, and that’s pushing us to think more deeply about our understanding of the model.
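Here is what the “pure” approach might look like as a sketch, keeping the invariant that a balance is a sum of transactions. The `is_starting_balance` flag and the validation helper are illustrative assumptions, not a prescribed design:

```python
from dataclasses import dataclass
from datetime import date
from decimal import Decimal

@dataclass(frozen=True)
class Transaction:
    account_id: str
    amount: Decimal
    occurred_on: date
    is_starting_balance: bool = False  # starting balances are just a special Transaction

def check_invariants(account_opened_on: date, txns: list[Transaction]) -> None:
    """Refined invariants: an account can have at most one starting-balance
    transaction, and it must be dated on the day the account was opened."""
    starts = [t for t in txns if t.is_starting_balance]
    if len(starts) > 1:
        raise ValueError("only one starting-balance transaction allowed per account")
    if starts and starts[0].occurred_on != account_opened_on:
        raise ValueError("starting balance must fall on the account's opening date")
```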
Here’s a trickier one: what if a transaction occurs between two accounts? In our current model, we’d actually have two transactions (one leaving the first account, and one entering the second one). That might be fine in many applications, but if you’re an accounting product, you might realize that this model can introduce some inconsistencies. What if one transaction is missing? In the real world, money flows from some place to another. Maybe every transaction requires two accounts (from_account and to_account). A domain expert on your team would now point out that you’re brushing up against double-entry accounting. We don’t need to go down that route, but you can see how a question prompted us to revisit our understanding of the model.
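A hypothetical sketch of that refinement: every transaction carries both accounts, and the per-account entries are derived from it, so the two legs can never disagree. The names here are my own, not standard accounting vocabulary:

```python
from dataclasses import dataclass
from datetime import date
from decimal import Decimal

@dataclass(frozen=True)
class Transfer:
    """Money always moves between two accounts, in the spirit of
    double-entry accounting."""
    from_account: str
    to_account: str
    amount: Decimal  # always positive; direction is given by the two accounts
    occurred_on: date

def legs(t: Transfer) -> list[tuple[str, Decimal]]:
    """Derive the two per-account entries from a single transfer. Because
    both legs come from one record, one can never go missing."""
    return [(t.from_account, -t.amount), (t.to_account, t.amount)]
```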
This is just an overview of domain-driven design. You can read a lot more about it on Wikipedia, or by reading Eric Evans’ classic book, but at a high level, in domain-driven design you create a “bounded context” for your domain model, iterate on your understanding of the domain model, come up with a “ubiquitous language” to describe that model, and constantly keep your software entities in sync with that domain model and language.
Data Flow Design
Data flow design takes a bit of a different approach. Instead of focusing on the entities, you focus on the “data”. Now, you might argue that data and entities are the same, or should be the same, and in an ideal world they would be, but software has real-world limitations set by the technology that enables it. Things like locality, speed, and consistency start to rear their heads.
Let’s apply that to our example above. Again, we had already naturally started doing some data flow design in defining our original problem: all of the “users will want to…” statements are about data flow. For example, let’s consider the balances question: “users will want to see the balance of their accounts at any point in time, and how the balances change over time.”
Our model dictates that balances are derived from transactions. How do we respond to a query like “what was the balance every day over the past year for a user’s account?” The simplest way could be to always derive, on-the-fly, the balances of an account by walking through all its transactions. That way, if anything in the underlying transactions changes, the balances are always consistent. But this is where technical limitations start to hit us. Can we do that calculation fast enough when we get the query? What if the query is something like “out of the 10 million accounts in the system, show me all accounts for which the balance exceeded $10,000 on any day in the past 5 years”?
You probably already have solutions simmering in your head. Caching for faster queries. Updating balances whenever transactions change. Some additional data store that makes it easy/fast to index and execute queries like that. But you’re no longer just thinking about the domain model. You’re thinking about the data.
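As a sketch of the trade-off (types and names are assumptions), compare deriving balances on read with maintaining a running total on write:

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date
from decimal import Decimal

@dataclass(frozen=True)
class Transaction:
    account_id: str
    amount: Decimal
    occurred_on: date

# Option 1: derive on read. Always consistent with the transactions,
# but every query walks every transaction (O(n) per read).
def balance_on(txns: list[Transaction], account_id: str, as_of: date) -> Decimal:
    return sum((t.amount for t in txns
                if t.account_id == account_id and t.occurred_on <= as_of),
               Decimal(0))

# Option 2: maintain a running total on write. Reads are O(1),
# but every write path must remember to keep the cache in sync.
class BalanceCache:
    def __init__(self) -> None:
        self._totals: dict[str, Decimal] = defaultdict(lambda: Decimal(0))

    def apply(self, txn: Transaction) -> None:
        """Called whenever a transaction is written."""
        self._totals[txn.account_id] += txn.amount

    def current(self, account_id: str) -> Decimal:
        return self._totals[account_id]
```

Neither option is “right”; which one wins depends on the read/write patterns and consistency requirements discussed next.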
To do data flow design well, you need to think through a few dimensions. The first is read vs. write data paths. Clearly, when transactions are changed, balances need to change to reflect that. Should that happen on write, when a transaction is updated? Should it happen on read (i.e., should we lazily do the work only when we know we need it)? Or should we do it asynchronously in between, so that we can have fast reads and fast writes while sacrificing some consistency?
Next, you need to think through read vs. write patterns. How frequent are writes? How frequent are reads? Are they varied or skewed? Depending on the answer, you might be OK doing more work on write, or you might be OK doing more work on read. Or you might introduce something like caching if a lot of reads are similar. Or, you might go full-on Command Query Responsibility Segregation (CQRS).
You’ll also need to think through your consistency requirements. We’ve already hinted at that above, but maybe you can offload some work if you’re OK with data you read being a little out of sync with the data you write. You can use asynchronous or batching models.
Finally, there’s a question around where invariants should live. In modeling the domain, you usually end up with some “invariant”: things that should always be true. These invariants work like constraints, giving you assumptions you can trust throughout the life cycle of any entity or the data representing it (like, the balance of an account is the sum of all its transactions, or an account can only have one starting balance transaction). But when thinking about data flow, you need to worry about how to check and enforce those constraints. Should that happen in the application layer? In the data storage layer?
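For example, the “an account can only have one starting balance transaction” invariant could be enforced in the storage layer rather than the application layer. Here is a sketch using SQLite’s partial unique indexes; the schema and column names are illustrative assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE transactions (
        id                  INTEGER PRIMARY KEY,
        account_id          TEXT NOT NULL,
        amount              TEXT NOT NULL,   -- store decimals as text, parse in app
        occurred_on         TEXT NOT NULL,
        is_starting_balance INTEGER NOT NULL DEFAULT 0
    );
    -- The invariant lives in the storage layer: at most one
    -- starting-balance row per account, enforced by a partial unique index.
    CREATE UNIQUE INDEX one_starting_balance
        ON transactions (account_id)
        WHERE is_starting_balance = 1;
""")

conn.execute(
    "INSERT INTO transactions (account_id, amount, occurred_on, is_starting_balance) "
    "VALUES ('acct-1', '100.00', '2020-01-01', 1)"
)
try:
    conn.execute(
        "INSERT INTO transactions (account_id, amount, occurred_on, is_starting_balance) "
        "VALUES ('acct-1', '50.00', '2020-02-01', 1)"
    )
except sqlite3.IntegrityError:
    print("second starting balance rejected by the database")
```

The upside is that no application code path can violate the invariant; the downside is that the rule now lives far away from the domain model that motivated it.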
A full exploration of what this means in practice is beyond our scope here, but the main point is that in addition to our nice, clean domain model, we also have all this extra logic that is not part of our domain. It’s just a function of technological limitations. That’s the tension.
I’ve found that most software engineers start their careers with a bias either towards data model or data flow. As two extremes, consider:
The data model purist: Spends an exorbitant amount of time thinking through and modeling the domain before writing a line of code. Draws a lot of diagrams, possibly of database schemas. Gets really frustrated at implementation time because the data flow reality sets in and they realize they will need to “corrupt” their model.
The data pragmatist: Thinks through the end-to-end data flow really well, quickly writes code and spins up multiple data services. Was big on “polyglot persistence” when that was a buzzword. Has figured out how to parallelize / partition things before figuring out what those things are.
Many people start off as one of those two, overlooking the other side of the equation, then learn through experience that you have to think about both from the get go.
I find that to strike a good balance, it’s best to do design in an iterative fashion. First, of course, you need a really solid understanding of the underlying problem you’re trying to solve and why it needs to be solved. Then, you take turns thinking through the domain model, and the data flow.
Write or sketch out a quick data model.
Map it to the problem space: does it represent the domain well? Does it support what the product needs to do now and do later? Fiddle with the requirements a little bit. Does the model hold up?
Now map the data flow. Look at the UI and what data needs to be shown. Think about the interactions that need to happen and what data needs to be changed. Now think about how that would work at a much larger scale.
Rinse and repeat. Pull in some colleagues, get feedback, and keep iterating. Even when you start writing code, you keep iterating.
You should start with a slight bias towards getting the data model right, and worry more about data flow as you gain confidence in your data model, and as you start to hit the performance problems that only show up once you have enough scale and once your product is complex enough. But you always keep both concepts (the data model, and the data flow design) top of mind as you’re working.
Another day, another HackerNews discussion about hiring being broken. The most recent one I saw was triggered by a blog post by the formidable Aline Lerner (disclaimer: Aline is a friend and we collaborated on a hiring book last year). Now, I 100% agree that hiring is broken, and Aline’s post is really thoughtful. In fact, a lot of “hiring is broken” articles are thoughtful.
But the discussion threads are something else—they miss the point of the article, and they’re even more broken than hiring itself. They’re also really repetitive. They always contain grains of truth, but inevitably reach conclusions that are simplistic and, in my opinion, create a pretty bad attitude in the tech industry.
Conclusion #1: “Hiring sucks for candidates, but hiring managers can do what they want“
The truth is that hiring is hard for everyone. There’s no question about it. It’s hard for both candidates and for hiring managers. Sure, FAANGs and the startup-du-jour might have a leg up, but most people who are hiring are trying to hire at a non-FAANG, non-sexy company. If you’ve never done it, you should try it at some point in your career. It’s an incredibly humbling experience. Or, at the very least, find a friend who’s spent time on hiring, and ask them for their favorite battle story. They’ve been ghosted by candidates. They’ve spent hours trying to convince people to talk to them. They’ve spent even more time getting candidates to the offer stage, only to lose out to the FAANG / startup-du-jour.
And yes, on balance, power and information asymmetry work out in favor of the companies hiring. And that asymmetry is much larger with FAANGs. But even FAANGs have to invest a tremendous amount of time and energy into hiring. It’s not really easy for anyone.
Especially if you want to do it well. Ask any successful leader (entrepreneur, manager) what they spend most of their time on, and it’ll either involve a large chunk spent on hiring (if they appreciate the problem and give it the attention it deserves) or dealing with the consequences of bad hiring (if they don’t).
Conclusion #2: “Hiring is a crap-shoot—it’s a roll of the dice“
I strongly disagree with this one. When writing the Holloway Guide to Technical Hiring and Recruiting, I got to interview dozens of really thoughtful hiring managers and recruiters. They were really good at their jobs. And there were some common themes. They were thoughtful about every step of their process. They kept their process balanced and fair, holding a high bar but respecting candidates and their time. They didn’t chase the same pool of candidates everyone else was chasing—instead, they found non-traditional ways to discover really talented and motivated people who weren’t in the pool of usual suspects. They were thoughtful about what signals they were looking for and how best to assess them. And, they deeply understood their team’s needs, and candidates’ needs, and were really good at deciding when there was or wasn’t a fit. But most of all, they were effective: they built really talented teams.
There are a handful of companies that have built amazing hiring engines, and the proof is that they’ve been able to put together really strong teams. You can generally tell that if a person worked at a certain company at a certain time, that person is probably incredibly intelligent and incredibly motivated (some examples are Google, Facebook, Stripe, Dropbox at different points in time). There will always be noise. Even the best hiring managers will sometimes make hiring mistakes. And of course, even the best engineers may not be a fit for every role or every company.
Again, hiring is hard. But there is not a shred of doubt in my mind that if you are thoughtful about it, you can hire well. And really, you don’t need to be perfect at it. You just need to be better than the rest.
Conclusion #3: “FAANGs suck at hiring”
This one has some truth to it, but it’s a lot more subtle than “FAANGs suck at hiring”. Because let’s face it, they do hire really smart people. Some of the smartest people I know are at FAANGs right now. So let’s unpack that statement a little more.
FAANGs do suck at parts of hiring, like their candidate experience. They can be really slow at making hiring decisions. Their hiring process might be tedious and seem arbitrary. But they usually can get away with it, and you probably can’t! They’ve got a strong brand, interesting technical challenges (interesting for some people, at least), and a lot of money. In fact, one FAANG VP of Engineering told me: “our process is what we can get away with”. To the point that they can even play it off as a positive: “our process is slow and long because we are very selective”.
And look, I’m sure FAANGs lose some talented candidates who get turned off by their “you’d-be-blessed-to-work-with-us” attitude. They definitely have a lot of room for improvement. But at the end of the day, they’re operating a process that delivers large quantities of really smart people at scale. In fact, I’d argue their internal processes around strategy, performance management, and promotions cause far more damage than broken hiring—if you lose out on hiring one talented person when you have thousands applying to work for you, that’s one story, but if you hire someone really talented and driven, and they work for you for 6 to 12 months but don’t meet their potential and leave in bitter frustration… well, that’s a subject for another post.
“But”, people go on, “FAANGs also don’t know how to interview!” Which brings me to conclusion #4.
Conclusion #4: “Algo/coding interviews are terrible”
Again, this one has some truth to it, but if you just stop at that statement, you miss the point.
Algo/coding interviews are one of the primary hiring mechanisms used by FAANG companies. And they are incredibly unpopular—at least in discussion threads. But big companies have spent years looking at their hiring data and feeding that back into their hiring process (coining the term “people analytics” along the way).
The argument against them is usually a combination of:
they really only assess pattern-matching skills (map a problem to something you’ve seen before)
they only assess willingness to spend time preparing for these types of interviews
These are fair criticisms, but that doesn’t mean these interviews are actually terrible. I mean, they might be terrible for you if you’re interviewing and you don’t get the job. You’re probably a brilliant engineer, and I agree, these interviews certainly don’t fully assess your ability (or maybe you’re a shit engineer, I don’t know you personally). In any case, the leap from “this interview sucked for me” to “this interview sucks” is still pretty big.
If you’re a large tech co with a big brand and a salary scale that ranks at the top of Levels.fyi, you probably get a lot of applications. So a good interview process is one that weeds out people who wouldn’t do well at your company. To do well at a large tech company, you need (and I’m painting with a really broad brush, but this is true for 90% of roles at these companies):
Some sort of problem-solving skill that’s a mix of raw intelligence and/or ability to solve problems by pattern-matching to things you’ve seen before.
Ability/commitment to work on something that may not always be that intrinsically motivating, in the context of getting/maintaining a well-paying job at a large, known company.
Hopefully you can see where I’m going with this. Basically, the very criticisms thrown at these types of interviews are the reason they work well for these companies. They’re a good proxy for the work you’d be doing there and how willing you are to do it. If you’re good at pattern matching, and are willing to invest effort into practicing to get one of these jobs, you’ll probably do well at the job.
Not that there’s anything wrong with that type of work. I spent several years at big tech co’s, and the work was intellectually stimulating most of the time. But a lot of times it wasn’t. It was a lot of pattern-matching. Looking at how someone else had solved a problem in a different part of the code-base, and adapting that to my use-case.
On the other hand, if you’re an engineer (no matter how brilliant) who struggles with being told what to do or doing work that you can’t immediately connect to something intrinsically motivating to you, that FAANG interview just did both you and the company a favor by weeding you out of the process.
So the truth is, there is no single “best interview technique”. In our book, we wrote several chapters about different interviewing techniques and their pros and cons. In-person algo/coding interviews on a whiteboard, in-person interviews where you work in an existing code base, take-home interviews, pairing together, having a trial period, etc all have pros and cons. The trick is finding a technique that works for both the company and the candidate.
And that can really differ from company to company and candidate to candidate. A VP at Netflix told me about how they had a really strong candidate come in, but when asked to do a whiteboard-type interview, informed them (politely) that they might as well just reject him then. He was no good at whiteboard interviews… But if they allowed him to go home and write some code, he’d be happy to talk through it. And since then, many Netflix teams have offered candidates the choice of doing a take home.
And really, any interview format can suck. It can fail to assess a candidate for the things a company needs and it can be a negative candidate experience. Which would you rather have:
A whiteboard interview with heavy algorithms for a role where that knowledge (or ability to develop that knowledge) isn’t critical, delivered by an apathetic engineer who doesn’t care about their job.
A poorly-designed take-home, requiring skills that you don’t have and won’t need for the job, and that you spend hours thinking through and working on, send in, and get rejected without getting any feedback.
At my current startup (Monarch Money), we give candidates the choice of a real-time CoderPad interview, a take-home interview, or showing us and talking through a representative code sample. Most people choose the take-home, and we like that—based on where we are as a company (seed-stage startup), how we operate (distributed even before Covid), etc, it ends up being a better proxy for the work they’d do on the job. In any case, we do our best to only do this once we and they believe there might be a strong fit, and when we do it, we try to give people feedback so that even if they don’t get the job, they at least get something out of it. Will we still do this at scale? Almost definitely not. Once we have multiple teams and hiring managers, we’ll probably have to rely on more standardization, which will probably push us towards more standard interviews (though I hope to resist that as long as we can!). But we’ll try to maintain the same principles (being respectful of people’s time, looking for a proper fit, etc).
So here’s what sucks about hiring
Here’s what actually sucks about hiring:
Diversity. We really, really suck at diversity. We’re getting better, but we have a long way to go. Most of the industry chases the same candidates and assesses them in the same way.
Generally unfair practices. In cases where companies have power and candidates don’t, things can get really unfair. Lack of diversity is just one side-effect of this, others include poor candidate experiences, unfair compensation, and many others.
Short-termism. Recruiters and hiring managers who just want to fill a role at any cost, without thinking about whether there really is a fit or not. Many recruiters work on contingency, and most of them suck. The really good ones are awesome, but most of the well is poisoned. Hiring managers can be the same, too, when they’re under pressure to hire.
General ineptitude. Sometimes companies don’t know what they’re looking for, or are not internally aligned on it. Sometimes they just have broken processes, where they can’t keep track of who they’re talking to and what stage they’re at. Sometimes the engineers doing the interviews couldn’t care two shits about the interview or the company they work at. And often, companies are just tremendously indecisive, which makes them really slow to decide, or leads them to reject candidates simply because they can’t make up their minds.
As a company, the best you can do is be thoughtful and fair with your process. It’s not easy, but it’s doable. And as a candidate, the best you can do is try to find and work with companies that are thoughtful and fair with their hiring processes if you have that privilege.
The term technical debt is so common that you’d be hard-pressed to find anyone in the software world that hasn’t heard of it. But what is technical debt? There are a few frameworks out there, so I’ll list them quickly, but then I’d like to present one I’ve found especially useful with the teams I work on or advise: inertia vs. interrupts.
First, it’s helpful to define what technical debt is not. Bob Martin has a great article on this called A Mess is not a Technical Debt. “A mess is just a mess”. You can make short-term decisions that may not be best for the long-term, if you’re under constraints, but you should still do it in a prudent way. Some decisions are just bad.
Martin Fowler expands on this by creating a Technical Debt Quadrant, with two dimensions: deliberate vs. inadvertent, and reckless vs. prudent.
Ideally, you’re in the right half of this two-by-two: always prudent, because there’s no excuse for being reckless. But, if you’re low on experience for the type of system you’re designing, you might be prudent and inadvertent. Martin Fowler argues that prudent / inadvertent is really common with great designers, because they are always learning.
One of my favorite frameworks is from the Mythical Man-Month’s No Silver Bullet essay. Fred Brooks breaks down complexity (which is a little different from technical debt) into two dimensions: accidental complexity and essential complexity. Essential complexity is caused by the complexity of the underlying problem the software solves—accidental complexity is introduced by the implementation choices developers make. In this world, technical debt is, essentially, all accidental complexity.
A final framework defines technical debt as the gap between how software is structured and how you would structure it if you were writing it from scratch today (I don’t remember where I read this definition, please tell me if you do!). In this world, technical debt is a little more fluid, because it can increase simply by your team thinking up a better architecture or design.
A different framework—manifestations
These frameworks are all great, and in fact, you can go even deeper to define what technical debt looks like. Books like Refactoring and Clean Code have done this well. But usually, what you need is something a little more concrete that you can make decisions with.
So the framework I like to use is to look at manifestations of technical debt; what impact does technical debt have? By looking at how technical debt actually impacts your ability to deliver product, you can make decisions in a less subjective way.
At a high-level, technical debt can manifest itself in two ways:
Interrupts: Interrupts are when existing systems tax your time through reactive work like maintenance, bugs, and fire-fighting, so you spend less time being able to change the system. In other words, writing code now that creates interrupts means you (or someone else) will be able to spend less time on forward work in the future. Both the quantity and severity of interrupts matter. Interrupts are particularly hazardous because they usually force costly context switching. So you find your team, over time, spending more and more time responding to incidents than building product.
Inertia: Inertia means that a system is hard to make forward changes to (because it is hard to understand or reason about, because it’s not modular and hence hard to change, etc). This makes forward work difficult for you (i.e., even when you can spend time on forward work, it’s really slow) or for others (e.g., because the system is hard to understand, it taxes the time of people who need to learn it, as well as the people who need to explain to others how it works).
Why look at the manifestations? Three reasons.
First, it helps identify the tangible symptoms and effects of technical debt, and helps avoid theoretical debates. For instance, most teams have at least one part of their code base that they feel pretty bad about, and would love to spend time fixing. Is it worth fixing now? Sometimes, that code is well-isolated and pretty functional. It isn’t creating any interrupts. No one is changing it or will need to change it in the near future. So yes, it is technical debt, and it might have inertia, but inertia only matters if that code needs to be changed.
Secondly, by identifying the effects of technical debt, you can decide how best to fix it. For instance, if a piece of code is really hard to change because it suffers from change amplification, and you expect to be making a lot of changes to it, it’s probably worth refactoring. If a piece of code is creating a lot of interrupts and not creating value for users, you might want to just get rid of it entirely.
Finally, it helps avoid ideological discussions. We’ve all worked with someone who is ideological about their code. “X is shit, Y is great”. By looking at manifestations, you’re forced to be thoughtful and justify any claim you make. “X is shit” doesn’t fly any more. You have to say “X is bad, because when we want to do Z in the future, it will be really difficult.”
So sometimes, it’s useful to start with the symptoms before the diagnoses, and look at how technical debt manifests itself before deciding what to do about it.
I was reading an amazing Twitter thread about Bill Grundfest, founder of The Comedy Cellar and the guy who discovered some of the most famous comedians.
In the thread, which includes stories of Jon Stewart, Bill Maher, and Ray Romano, the pattern is essentially:
Bill is able to detect talent, even early on in people’s careers when they haven’t had success yet.
He’s able to zoom in on what’s holding them back and give them one key piece of advice.
He believes in them.
The thread focuses on the first two pieces: detecting talent and giving advice. The third piece is a little hidden, but in my mind, it’s probably the most impactful.
I’ve seen over and over in my career how having someone believe in you can be life-changing. I’ve sometimes been the recipient of that, sometimes a spectator, and most recently, I’ve tried to be a provider of that.
People can tell when you don’t believe in them, and they can tell when you do. It has an effect on their behavior, and can be self-fulfilling. This is sometimes known as the Pygmalion effect. Having someone believe in you is tremendously powerful. If someone believes in you, they will give you more support and more opportunity. They will boost your confidence—and I’ve seen lack of confidence hold back way too many brilliant people. And they will help create the type of psychological safety necessary for you to do your best work.
So here’s my rule as a manager. I only hire people I believe in, and I do my best to let them know I believe in them.
Just because someone has been successful, or has a lot of experience, that doesn’t necessarily mean that I believe in them. Believing in someone means that I believe their best is yet to come.
On the other hand, believing in someone doesn’t always mean believing that they will be successful right away, or that they will be successful in the exact way or role I’ve intended for them. Believing in someone isn’t relaxing my standards or expectations of them either. That’s the opposite of believing in someone. In order to believe in someone, you need to maintain high standards for them, and have faith that they will meet those standards.
This applies beyond just managers hiring their teams. If you have the luxury of choosing where you work, you are essentially “hiring” a person as your manager, a team as your colleagues, or a company as your employer. So if you can, always choose to work with people you believe in, and people who believe in you. Work with people for whom you believe their best is yet to come. And if you lose faith in them, or you feel like they lose faith in you, do them and yourself the favor of moving on.
Earlier we made the assertion that “code is a liability, not an asset.” If that is true, why have we spent most of this book discussing the most efficient way to build software systems that can live for decades? Why put all that effort into creating more code when it’s simply going to end up on the liability side of the balance sheet? Code itself doesn’t bring value: it is the functionality that it provides that brings value. That functionality is an asset if it meets a user need: the code that implements this functionality is simply a means to that end. If we could get the same functionality from a single line of maintainable, understandable code as 10,000 lines of convoluted spaghetti code, we would prefer the former. Code itself carries a cost — the simpler the code is, while maintaining the same amount of functionality, the better.