Move Fast and Break Things

The Zuck-approach to policymaking

Yash Dubey
May 30, 2023

If you don’t know about ChatGPT yet (but somehow found this piece), I admire your ability to stick to Medium and nothing else. Otherwise, you know that the latest godfather/techbro/saviour-of-humanity Sam Altman has been doing the rounds in front of the flock of headless chickens politely called the United States Senate. And the sticking point, reported far and wide, is that this time it’s different. Sam wants regulation. Nay, he calls for it! Regulation of the goldmine he and his company were the first to claim. A techbro calling for regulation? What’s next, Trump voting Democrat? DeSantis at Disneyland?! Here he stands (isn’t it inevitably a “he”? Looking at you, Liz Holmes), on the cusp of immense plunder, as Zuckerberg stood before him, as Jack did. And he steps back? Or does he?

New bird, same tune?

Shrewd Operator

Sam isn’t your average techbro. He’s a battle-hardened business operator. I suppose working with the holy Musk makes you develop a certain canny attitude. And Sam oozes the self-assured killer instinct that the most successful businessmen possess. Let’s take a step back ourselves to understand why OpenAI is so keen to be regulated. Regulations are usually anathema to businesses that innovate rapidly, seen as cumbersome obstacles to the imminent delivery of utopia at the hands of “profit-maximising” entities. So why do they seem to hold a special place in Altman’s heart?

Well, look a little deeper and the reasoning is almost crystal clear. This is a page straight out of the Zuckerbook and, in some ways, the Art of War. You see, OpenAI’s ChatGPT became too popular too fast. I mean, look at this:

The insanity of instant adoption

Along with this came warnings of the dangers of AI. None other than our lord and saviour, Elon Musk, claimed that AI held incredible dangers for humanity. Global backlash was instant and loud. Would you want to be handed the controls of this ticking time bomb? Ages ago, Zuckerberg realised that running a social media platform is a lose-lose-lose battle, as this article explains. So he chose the easy way out and created a “moderation council” to handle such issues. Now he can simply shrug, point his finger at the council and chalk it all up to a decision made by “experts”.

Sam Altman, being a battle-hardened operator, instead claims to want to hand power “to the people”, so that when it all inevitably blows up in a quagmire of racism, sexism, anti-Semitism and a whole bunch of other -isms, with a healthy dose of environmental catastrophe, he can shrug his shoulders and say, “Well, this is what the people wanted.” Except the people aren’t you and me. The people are extremely skilled professionals with knowledge of the industry, and those with the financial muscle to generate the kind of feedback OpenAI wants. If you’re a freelance web developer in Bangladesh, well, too bad! The people have spoken! Your livelihood is in the hands of a dozen Stanford grads, a Russian hacker group and techbros.

The Policy

Believe you me, I hang my head in shame as I take a page out of the book of the old, young, pre-lizard-transformation Zuckerberg. We need to move fast and break things. The law is absurdly slow. The politicians are woefully inept. If you doubt me, just watch five minutes of any conversation about technology between a CEO and the U.S. Senate. I guarantee an initial reaction of dumbfounded silence followed by “This is a joke, isn’t it?”. Unfortunately, this is our shared reality. Policy no longer guides the development of groundbreaking technologies. Give me one reason why AI should not be a public good and I will give you five reasons why it should be. Yet a plethora of highly cocooned engineers control this vital transformation in human technology. (Side note: ChatGPT is relatively rudimentary, but the future of AI is more or less clear; it will be the next revolution in technology.)

So what do I propose? That policy move rapidly. That we make rules today that we break tomorrow. That we make mistakes and learn from them.

Move fast and break things

As much as I despise Zuckerberg, this statement holds weight. We must rapidly evolve our laws. We must experiment with them. Test the limits and suffer the mistakes. Because the earlier we make mistakes, the earlier we learn and the better we guide technology. I argue that early deployments of facial-recognition technology served as exactly this kind of experiment. Now, if we impose strict limits on, say, bias in the technology, we can test and refine those limits. This approach has limitations. We will make mistakes. We will suffer. But imagine a world where social media was steered to serve people, and not the other way round. Where addictive dark patterns were banned immediately. Where teens didn’t drive themselves to mental agony online. Would the mistakes have been worth it?

I think so.
