Accidentally in Code

High Throughput, Low Completion

A dark room dominated by two large monitors displaying green Matrix-style cascading code, surrounded by a chaotic wall of wires, knobs, fairy lights, and tinkered electronics.
Credit: Flickr / MeTaMiND EvoLuTioN MeTaVoLuTioN

I have never been someone who organises things.

I have a great memory and a high tolerance for ambiguity, and I evolved to match the era of the internet where searching things by keyword was sufficient. I documented, extensively. But I did not really organize. Not myself, nor – despite my job – other people. I used to joke that I caused people to organise themselves around me: I remembered everything and asked a lot of questions; working with me effectively required those below me to have it together.

In as much as I optimised for anything, I optimised for being able to think.

Big things documented. The weekly list on a sheet of paper. How they fit together, how to reason about things, how to prioritize… that lived in my head.

Like everyone’s, my workflows have shifted with AI, and somehow I have become a person who organises things.

What broke

The first thing that broke is that Jean and I DDoSed each other. We got productive enough that relying on the other person’s reliability wasn’t enough – we needed a source of truth to know everything that was going on, and who was doing it. We threw out our system (although if I’m being honest, system is a generous description) and built a new one. Nothing that fancy – just a shared system in GitHub and clarity about the state something was in and what it was waiting on.

Then I DDoSed myself. Too many things in flight, flitting between them, nudging them along. So many terminals I needed to color code them in order to be able to find them. There’s an HBR piece on AI brain fry defining it as mental fatigue from oversight of AI tools beyond your cognitive capacity, with measurable effects on errors and decision-making. This was 100% what I had done to myself. The brain fry is the feeling. The result was too many things moving and nothing finishing. I worry in general that the push to “AI driven productivity gains” drives people into this headspace – with all the harm that brings – and misses the point of productivity, which is the outcomes it is meant to drive.

Jean suggested I look at turning Claude Code into a “chief of staff”, and so I forked the repo and started iterating aggressively. My Trello board went in the bin – the API that had seemed like a nice-to-have a week earlier was suddenly a fundamental requirement – and GitHub became the source of truth. I added in my existing skills for email and social media. I figured out how to version and symlink them.
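The post doesn’t show the layout, but the version-and-symlink trick can be sketched like this: keep each version of a skill in its own directory, and make the “active” name a symlink you can re-point. The directory names and the `activate` helper are illustrative assumptions, not the actual setup:

```python
from pathlib import Path

def activate(skills_root: Path, active_root: Path, skill: str, version: str) -> Path:
    """Point the active skill name at one versioned copy, so an
    upgrade (or rollback) is a single re-link rather than a copy."""
    target = skills_root / skill / version      # e.g. skills/inbox/v3
    link = active_root / skill                  # e.g. .claude/skills/inbox
    link.parent.mkdir(parents=True, exist_ok=True)
    if link.is_symlink() or link.exists():      # replace any previous link
        link.unlink()
    link.symlink_to(target)
    return link
```

Rolling back is then just re-running `activate` with the earlier version name.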

It was an expensive and chatty todo list. Becoming more like an actual chief of staff required a set of underlying skills – for course operations, inbox triage, analytics, social, the things I do every week – and a way to surface state. Given the starting point, it became the place where I put the things I needed, as I realised I needed them.

This is essentially context engineering, applied to my own effectiveness. As an engineering manager, my role was to notice and remove bottlenecks. To figure out the failure mode and make (or cause to be made) the process that addresses it. To build the structure that makes individual output add up to team output. Now my productivity has gone way beyond what I thought was achievable as one person, and the result looks much less like individual productivity problems and much more like the kind of problems that occur in a team.

The value of organising things became a lot more apparent. In the same way people used to organise themselves in order to manage up to me, now I create organisational systems to build skills that can help me manage myself – help me finish things, rather than starting one more thing because the green terminal isn’t currently running anything and it could be.

My prompts are short, and laden with typos. But a collection of skills makes me effective. The working-with-cate skill tells Claude how to be effective with me in return. The DRI program manager skill surfaces the things that need attention. The inbox management skill ensures loops get closed and don’t fall through the cracks; the content manager skill keeps me on track on the internet. Some are just for me. Increasingly, they are shared with the people I work with.

Skills alone are not enough; they need to be backed by data. For my non-technical colleagues, that is often Notion. For other engineers, it’s often GitHub. Or another API. Combining an API for DRI with the GitHub repo is what makes the DRI PM skill accurate – and, as such, useful.

In functional programming, we talk about pure functions, and side effects. AI alone was like a pure function where I – the lossy human – was responsible for the side effects. Increasingly, to be more effective, I want the side effects. I want the skill to write to the source of truth, and flag updates to itself when it realises it is less useful than it is supposed to be.
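To make the analogy concrete, here is a minimal sketch. The `store` dict stands in for the source of truth (GitHub, Notion), and both functions are illustrative, not from the post:

```python
def summarise(items: list[str]) -> str:
    """Pure: same input, same output, touches nothing else.
    The lossy human still has to copy the result somewhere."""
    return f"{len(items)} open items"

def summarise_and_record(items: list[str], store: dict) -> str:
    """Impure: also writes the result back to the source of truth,
    so nothing depends on a human remembering to do it."""
    summary = summarise(items)
    store["last_summary"] = summary   # the side effect
    return summary
```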

To manage the side effects, you need guardrails. Now my collection of skills has progressed to the point that I’ve added a structured rubric to run them against.

The ever increasing burden of reviewing

Which brings us to the skill of reviewing. When my job was weighted more toward reviewing other people’s work than producing my own, AI looked, frankly, like a productivity tool for everyone except me. It is much more fun to be a creator than a reviewer in this timeline. The skeptics aren’t wrong about slop.

But liberated from other people’s nonsense and trying to grapple with my own, I stopped debugging at one level of indirection, and started getting to the core of it.

The answer is not “stop reviewing.” The answer is to get structurally better at reviewing – which mostly means asking better questions and shaping the work so it’s queryable.

For example, PRs. I know there’s a debate on this, and I get it. I still believe PRs are useful. Firstly because they segment the change – if it breaks something, you know where to look. Secondly because the PR itself is a queryable unit. You can run rules over it, check whether the standards you encoded are being met. Reviewing – for me at least – looks less like looking at the code, and more like thinking through how to validate the work, ensuring that has happened, and figuring out what to learn when it has not.
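As a sketch of what “running rules over a PR” can mean – the specific rules and the diff handling here are illustrative examples, not a real standard set:

```python
def changed_files(diff: str) -> list[str]:
    """Pull the changed file paths out of a unified diff."""
    return [line.split(" b/")[-1] for line in diff.splitlines()
            if line.startswith("+++ b/")]

def check_pr(diff: str) -> list[str]:
    """Run encoded standards over one PR; each failure is a
    question for the review, not an automatic rejection."""
    files = changed_files(diff)
    problems = []
    # example rule: source changes should come with test changes
    if any(f.startswith("src/") for f in files) and \
       not any(f.startswith("tests/") for f in files):
        problems.append("source changed but no tests touched")
    # example rule: no untracked TODOs sneaking in
    added = [l for l in diff.splitlines()
             if l.startswith("+") and not l.startswith("+++")]
    if any("TODO" in l for l in added):
        problems.append("new TODO added without being tracked")
    return problems
```

Because the PR is a bounded, queryable unit, checks like these can run on every change, and the human review starts from their output.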

There’s a good chance my opinion will be different next week. The willingness to put everything I think I know in the bin weekly if not daily is currently the most useful thing I can do to accelerate my own learning. I don’t think anyone really knows what they are doing yet; the constant iteration is necessary to navigate the learning curve.

The metric isn’t output, it’s completion

The trap I fell into is that when the cost of generating drops, you can have a lot more things going. Just because you can doesn’t mean you should.

Individual productivity is not a pure function. There are side effects and interdependencies. Hooking up the waiting list to my email and to GitHub is a win – when the thing I’m waiting on comes in, the item moves from waiting to needing my action. Similarly, surfacing the things others are waiting on from me is a reminder that team movement beats individual productivity.

Part of why I wasn’t given to organising things is that I’m an ADHD-creative type. Executive function is boring, which makes it expensive, which makes me not do it. Building an operational floor for myself helps liberate me from worrying about things, from expensive “did I remember this” panic. I’ve given that work to the robot, so I can work to my actual strengths.

In the first period – before the DDoSing began – I felt the high of AI making me more capable at the things I’m good at. Now I’m leaning into how to be effective at handing off the things I’m bad at. We’re not going to be winning any prizes for organisation, but that’s okay because organisation is not the point. The point is the output. The point is finishing the thing. Shipping it.

Everyone is figuring this out

If it is this much work for one person spanning two small teams, of course large organisations are struggling. The learning curve is hard – trying to navigate new tools and a step change in productivity at the same time. I’m unconvinced anyone has it figured out, despite some people loudly claiming they do. I’m so grateful I have this time to work through this learning curve with fewer people, in a more collaborative setup.

I was heads down for a while, but lately I’ve been talking to more people, being more willing to share where I’m at, and offer suggestions. In doing so, I realised I had rewired my brain. Things seemed obvious to me that weren’t obvious before. How to reshape a problem, how to systematise it, how not just to increase the throughput, but drive the overall goal forward, too.

This rewiring happened because I had the space to mess around – to try things, abandon them, try different things, talk it through with someone who was also open that they were figuring it out. You cannot build that space by reading a blog post about it. You have to do it, badly, and give yourself permission to learn.

Fundamentally, I’m optimistic about the value of engineers, despite the narrative that we can be replaced by AI. I think learning and adapting is a core skill for us, and we have it to call on, even if the timing is not what we would choose. But sometimes that skill alone feels insufficient, and for that, Jean and I built Navigating the AI Shift. We saw the dissonance and the struggle, and designed a structured space for engineers and EMs to do some rewiring, with the support and accountability of coaching but at a more accessible price point.

The first cohort starts May 11. We’d love you to join us.

Navigating the AI Shift

The tools change. Your career continues.

Enroll Now →