Hi! After a long hiatus, I’m back to writing on Margins. I took a writing break when I started my own company 4.5 years ago. I just left, so here we are! Missed you all. If you are in NYC, say hi.

AI makes people do weird things, and ask even weirder questions. Is it going to destroy us all? That seems insane; no. Why not just pull the plug? Okay, but what if it destroyed us before we could? Be serious; you are veering into omnipotence. That won’t happen. OK, an easy one: is it going to take away all our jobs? Probably.
Yet, the most important AI questions have nothing to do with whether it'll achieve consciousness or destroy civilization. The real questions are much more boring—and much more important. Questions like who's going to control the models we use, whether we'll ever stop relying on a handful of companies, and why some models are just... better, despite all logic suggesting they shouldn't be.
So, today, I want to pose a few of them and see what kind of answers I can come up with.
Open vs. Closed: Who Controls the Future?
Let's start with a simple one: open or closed? Here’s another way to put it. Do we expect companies like OpenAI, Google, Anthropic, and a few others to always maintain an edge? Or do we expect open‑weight (i.e., open‑source) models to ever catch up to frontier models from private companies?
This has big implications. Are we forever going to rely on third parties, or at some point do we get to deploy these on-premises on our (cloud) servers? Or is it going to be like databases, where Oracle serves a few big customers and everyone else uses open-source software?
On the closed side, we don't really have much evidence, except for the most compelling kind: past performance. Take coding models, for example. While models like Qwen are impressive, when it comes to mind‑share and pure developer productivity, most people are still using models from Anthropic (Claude) or Google (Gemini). Even Facebook, which has made a big show of keeping its models open, has recently switched to Claude.
The "open" side, however, has history on its side. At some level, all LLMs are a bunch of data and code. There's no magic, and eventually people figure out your tricks. You can even see OpenAI thinking this way too; why else would it be ramping up their own consulting arm deploying their models in enterprises?
Even the most advanced models would fit on a thumb drive, and even if you don't have access to that drive, you can extract much of the magic sauce remotely, as DeepSeek allegedly did. Why do you need OpenAI's models if you can get 90% of the way there with an Accenture contract?
Why Are Some Models Just... Better?

Yet this raises another question: why are the closed models still so far ahead? Or, to narrow it further: why do Anthropic's models seem significantly better than models from companies with access to larger datasets and more GPUs?
The answer seems to be that we don't really know what makes one model “better” than another. There are so many benchmarks that every model developer picks the one they rank best in when announcing their model’s supremacy. While many developers I talk to prefer Anthropic models, there are many others who prefer Gemini. I think Claude Opus (from Anthropic) is the best writer, but it could also be that I don't have access to o3‑pro just yet. And I haven't even mentioned Grok.
The other part is that these are such cutting‑edge technologies that the talent really matters; you can get materially different models if you have the right people on staff. Zuckerberg's $100M+ offers sound a lot less crazy once you realize those same people—and not much else—made OpenAI worth hundreds of billions. I mean, they said it themselves just a few months ago.
Should We Even Care About Safety?
This brings me to another question: should companies act with safety in mind when developing and releasing these models? Or is it at best a way to keep some trust‑and‑safety‑minded people busy and generate good press, and at worst, a way to build some regulatory barriers for new entrants?
I generally think companies should act responsibly and should try to put the well‑being of humanity somewhere in their spreadsheets. But I am also aware that most companies will do whatever makes sense for their shareholders, as they are wont to do, and it's the job of regulators to make sure the negative externalities are folded into those models.
For example, xAI, Elon Musk's AI company, released its Grok models with practically zero safety precautions, to exactly no one's surprise, drawing the ire of many in the industry.
Yet, I've not seen a lot of people care about the safety of these models when they are deploying them in their enterprises. They care about cost, of course, and care quite a bit about how these models use their companies' data.
Should We Sell AI Chips to China?

Speaking of safety, yet another question: should we be selling our most advanced AI chips to China for training and inference?
The knee‑jerk answer seems to be no. Clearly, one of the few things we still do better than anyone else is design these chips (even though we do make them in Taiwan and South Korea). At the same time, as the Huawei experiment has shown us, the more we restrict the exports, the more we encourage and enable them to build their own advanced chips.
The long‑term solution here seems to be actually attracting the talent to build chips (and the chip factories) to the US, not repelling it, as the Trump administration seems intent on doing. We need to play both the long and the short game. The short game is the talent, and the long game is supply chain resilience.
The short game is obvious. The long game, less so. But here’s an anecdote. An Apple executive once told me that the Bush administration kindly asked them to start buying precision glass lenses from US suppliers so they'd keep their spy‑satellite‑building pipelines and the talent supply intact. It turns out both the people who know how to make them and the machines take years to develop. What happens when (if?) we run out of people who know how to design chips from scratch? Has our Boeing experience not taught us well?
The Boring Futures We Can't Predict

While it's fun to consider doomsday or post‑scarcity scenarios, the future is probably going to be more boring than we expect in the short term and more unpredictable in the long term.
We first feared mass unemployment from steam engines, then electricity, but no one thought air‑conditioning—a boring technology made possible by electricity—would upend entire countries' industries. The UAE wouldn't exist as we know it without AC. Singapore wouldn't be a financial hub. The entire American South would have developed differently.
The grand questions are fun to ponder, but the future rarely announces itself with fanfare. Sometimes, it’s a proviso in a policy paper. But most often, it shows up as a line item in someone's IT budget, solving a problem we didn't even know we had.