This is a memo I published internally to my team at Monarch. I’m sharing it more publicly in case it helps other software engineering teams that are managing the crazy times we’re experiencing.
There’s no question: AI is changing how we work as Software Engineers. There’s a lot of hype, excitement, anxiety, and uncertainty around these changes.
As an Engineering org, we’ve had a strong set of Engineering Values (How We Work Together) that have served us really well as we’ve grown. I wanted to drop a few thoughts on our philosophy on AI in Engineering, grounded in these values. For more details, you can see our AI in Engineering@Monarch [internal, redacted link] doc.
Here is my ask of our team as we explore and implement AI in Engineering:
Understand and explore the bleeding edge, but adopt a dampened version of it
We definitely believe in and want to leverage AI in our work to increase productivity and quality. That said, if we try to always be on the bleeding edge, we will suffer from:
- Thrash. The bleeding edge is constantly changing (new tools come out, existing tools leapfrog each other, etc.). Setting up, learning, and utilizing new tools and workflows takes time, and we don’t want that to take away from our momentum and focus on shipping.
- Security exposure. There is a gold rush in AI. Companies are cutting corners to ship (or adopt) new tools. As evidence: every couple of days there is a new, high-profile AI-related vulnerability. We’ve built our product around trust, security and privacy. We cannot compromise here.
So as an org, we may feel one step behind the bleeding edge, only adopting things once they are a bit more mature and battle-tested (“a step behind the blood”).
That said, to know we are (only) a step behind, we must still understand the frontier. To do this, we will:
- dedicate time and resources to exploration (collectively, as an org)
- empower team members to explore in certain safe circumstances (e.g., prototypes, hackathons, or other individual initiatives).
- expect people to share what they learn: tools, workflows, prompts, tips, failure modes.
We need to understand the bleeding edge, but work at a step behind it.
Continue to own your work
Whether you use AI or not, if work has your name on it, you are accountable for it.
That means that you are responsible for the quality of the written documents or code that you put out. You should review everything before you ask others to take a look.
Likewise, work we put out collectively to our users has our company’s name on it, and we are collectively accountable for it (its functionality, its quality, its security, etc.). AI has no accountability, no pride in its craft, no shame if it gets things wrong. The human (that’s you) provides the accountability.
It’s much easier to generate code or documents, but if you generate a lot and don’t control for quality, you are shifting the burden onto your peers (who will review your work), or worse, our users (if it doesn’t get properly reviewed and tested).
As a side note, even teams at frontier AI labs don’t blindly trust their AI. When we’ve asked friends there about how they use their own tech, they have said there is always human review. Apparently, claims to the contrary are probably just one-offs (i.e., prototypes or non-critical systems) or just plain hype.
Do the deep thinking yourself (don’t get lazy)
Andy Grove argued that writing a deep report is often more important than reading it: “Their (i.e., the document’s) value stems from the discipline and the thinking the writer is forced to impose upon himself as [she] identifies and deals with trouble spots.”
If you ask AI to write a document for you, you might get 80% of the deep quality you’d get if you wrote it yourself for 5% of the effort. But, now you’ve also only done 5% of the thinking. Delegate things that require time and toil to AI, but keep things that require thought, judgment, and rigor for yourself.
You can still use AI as a thought-partner, idea generator, editor, or synthesizer. You can (should) also use AI for toil (things that are time-consuming, repetitive, and menial). But you need to do the deep thinking yourself.
Continue to leave room for inspiration
When we wrote our Engineering values and included “leave room for inspiration”, one thing we were guarding against was working so hard, with so little slack, that we wouldn’t have room for inspiration, creativity, and brilliance. AI changes that risk profile. With AI and increased productivity, you might have more time and slack, but if you’re delegating too much to AI, you may not have the deep thought, context, and connectedness to the code and product that are required for inspiration.
People often worry about AI slop, but if you’re owning your work and reviewing it (as requested above), you will catch bad ideas that look like bad ideas. You’ll need to be more careful about catching bad ideas that look like great ideas (since generative AI is notorious for producing those), but again, if you’re owning your work, you should catch those, too.
I’m most worried about missing good ideas that sound like bad ideas (at first): in other words, sins of omission. Those will never surface unless you own your work, do the deep thinking, and create space for inspiration.
Carefully design validation/verification loops
We strongly believe in systems thinking, and one of the most important parts of systems thinking is feedback loops. When using AI, think about feedback and validation loops:
- Creating ways for AI to validate its own work allows it to run more autonomously with less input from you. You can get much higher leverage if AI has a way to test the functionality and quality of its own work.
- That said, this doesn’t absolve you of owning your work, so you should also be thinking about human validation loops. Where should you be involved? Often, the template will look like: ask AI to develop a plan and review that, then ask it to do some work and review/refine that.
In other words, design that system (you + AI) while figuring out your role in it, since you will ultimately own the output.
Use AI more liberally in safe settings
We’ve found that there are a few areas where using AI more liberally (that is, more autonomous agents, less human-in-the-loop, etc.) makes a lot of sense, and we recommend you use these in your workflow:
- Conceptual prototypes. It can often be faster to build a concept (whether in the Monarch codebase or in some 0-1 tool like Replit) than to get designs into Figma or a PRD into Notion. These concepts can help showcase things internally, to users in surveys/interviews, etc.
- Internal tooling. Since these won’t be user-facing, the bar for polish is lower, and they can be built more liberally.
- 0-1 builds. New code that doesn’t rely heavily on existing code is easier to build with AI.
Each of these may require more thought, polish, or verification later, but in the early stages, they can be great areas to “build-then-think” (rather than “think-then-build”).
Frequently Asked Questions
Will AI replace my job?
If you consider your job to be “typing code into an editor”, AI will replace it (in some senses, it already has). On the other hand, if you consider your job to be “to use software to build products and/or solve problems”, your job is just going to change and get more interesting.
There is a lot that goes into building great software that AI isn’t going to replace (at least, any time soon). How we work will change, and we should be able to build faster and with better quality.
Am I falling behind if I’m not using AI constantly?
We know it can be stressful to feel like you’re not keeping up, but on the other hand, if we don’t change how we work at all, we will eventually fall behind. This has always been the case in software development, but things are moving a lot faster now.
That said, constantly worrying about falling behind only creates anxiety. Our philosophy (as described above) is to collectively explore the bleeding edge, but work an inch or two behind it. We also will walk that path together, so that no one feels like they are being left behind. You are expected to contribute to exploration and sharing learnings, but you aren’t expected to figure out our full strategy on how we use AI on your own.
Is the code AI writes actually good?
You should be the judge. With the right context and the right prompting, we’ve found that AI can write good code (at minimum, consistent with the code base it’s operating in). But since you’ll also be reviewing the code, you can and should decide when it has written good code or not.
Am I losing skills by relying on AI?
It depends on how you use it. If you abdicate your responsibility as a developer to AI, yes, your skills may atrophy. But if you do the deep work and review/validate AI’s work, your skills shouldn’t atrophy. In fact, they should improve, since you’ll constantly and instantly have access to a somewhat knowledgeable expert to learn from.
