My latest in Quartz…
Early on at Amazon, CEO Jeff Bezos famously issued a memo about how software was to be built at the company. Teams would share their data through service interfaces, or APIs, the same way they would share it with an outside customer. That meant a developer on one team didn't need to know anything about how another team operated in order to integrate with its product; they could simply follow the documentation and use that product as though it were an external service. Ultimately, this ease of cooperation became extremely efficient and paved the way for Amazon Web Services, a $6.7 billion business that powers huge parts of the web (including Netflix).
Georgetown University computer science professor Cal Newport recently argued that a similar idea could be applied to humans, or the way that leaders put together teams. By defining each person’s work as a collection of inputs and outputs, leaders could define communication protocols to reduce the overhead of collaboration (often measured in meetings) and allow for greater efficiency in communication across teams and more “deep work.”
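To make the analogy concrete, here is a minimal sketch (with entirely hypothetical names, not anything from Newport's argument or Amazon's actual systems) of what "defining a person's or team's work as inputs and outputs" looks like in code: other teams depend only on the documented contract, never on how the work gets done internally.

```python
from dataclasses import dataclass


# Hypothetical input the "team" accepts: a request for a translation.
@dataclass
class TranslationRequest:
    text: str
    target_language: str


# Hypothetical output the "team" promises in return.
@dataclass
class TranslationResult:
    translated_text: str


class LocalizationTeamAPI:
    """The contract other teams integrate against.

    Internally, the work might be done by people, a vendor, or software;
    callers only see the documented inputs and outputs.
    """

    def submit(self, request: TranslationRequest) -> TranslationResult:
        # Stand-in for the team's actual (hidden) process.
        return TranslationResult(
            translated_text=f"[{request.target_language}] {request.text}"
        )


# Another team integrates without knowing how the work happens:
api = LocalizationTeamAPI()
result = api.submit(TranslationRequest(text="hello", target_language="fr"))
```

The point of the Bezos mandate, and of Newport's extension of it, is that the caller's code above would not change if the team behind `LocalizationTeamAPI` reorganized completely.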
This is the kind of extreme stance Newport is known for, and part of what has made him successful as both a theoretical computer scientist and an author. I learn a lot from what he writes; I never apply it to the same extent.
2 replies on “Why you can’t manage humans like they’re software”
This was a great article Cate! I agree, you can’t manage humans the same way you manage software, and at the end of the day it really does come down to relationships and trust. I do have some food for thought, however, and am curious about your take on it. In the not-so-distant future, I imagine chunks of what humans are tasked with will be replaced by artificial intelligence. Coming up with an API for interacting with what is now managed by humans may really come in handy when that future day is upon us. How would you see these principles applied differently in a scenario where you have a team composed of both humans and A.I.? I feel like trust would still be a factor here, since the A.I. may or may not be doing things correctly; however, the relationship side of things would be much different.
This is an interesting question 🙂 There are kind of two pieces to this, I think:
– Management of tasks taken on by AI systems. This should connect to the way we manage tasks today.
– Management of the AI process itself – the kind of training data and feedback an AI gets. This is much more interesting and nuanced, and ties into the broader conversation about not replicating and amplifying the worst of human bias with machines.
So yes, I agree – the relationships change, but the trust and active involvement in the process would still be important.