Ranjan here. Talking about how tech companies have always been about the people who work there.
I’ve been doom-scrolling endlessly. It began on Saturday afternoon as my family and I were taking a stroll through the East Village. We saw police truck after police truck, sirens blaring, heading towards Union Square. We started to hear the whirr of helicopters. I started scrolling through Twitter and haven't really stopped since.
We took another short walk on Sunday afternoon and saw a lot of broken windows. Some restaurants and shops in the neighborhood we really like were smashed up. It was sad and it was real.
But then we sat in a park in the afternoon where everything felt serene. Flowers blooming, kids running around, and a dude playing a huge keytar. Then I checked the feeds and it looked like things were bubbling up again.
That's been one of the more disorienting parts of this: separating what I see on my feed from what’s directly around me. It’s compounded by the fact that I can hear the sirens and helicopters constantly. Trying to process what’s happening and assess danger, especially when you have little kids, feels bewildering.
This weekend that line between the interwebs and IRL felt like it collapsed. There is just so much happening in so many places that there’s no escaping what you see and what’s next door, no matter where you are. Protests in 140 cities. 40 million unemployed. 100,000 dead. Many of us were privileged with the comfort that what we read about and watched was happening somewhere else to someone else. That distance is narrowing by the day.
Which brings me to Facebook......
The Tech Apology no longer works
That distance from the risk has always let the employees at companies like Facebook ignore the impact of their leadership and work. I know I keep comparing Silicon Valley to Wall Street, but it always reminded me of a mortgage trader wheeling and dealing mortgage derivative products ten degrees and ten thousand miles removed from the family getting foreclosed on. Not facing the people you impact firsthand, or feeling the impact directly, provides a good deal of moral leeway. My co-host Can calls it abstraction, while I think of it as detachment (he is the software engineer of our duo).
Back in January, I wrote a post on how Facebook's PR felt like it was breaking. For years they had been laser-focused on delivering the classic tech apology:
Before 2019, it felt like the Facebook communications machine was a well-oiled, unstoppable juggernaut. Bad news bounced off of them, as they were frighteningly disciplined in how they responded to any inbound scandal. Margins' readers are probably familiar with the refrain:
We're sorry. We should've done better. We're working on it and improving every day. It's only a “(tiny number)” percent of overall posts. AI. Machine Learning.
The genius of this apology was that it forced us to excuse today's problems for tomorrow's promises of technology. We were fools for suggesting that they should be expected to moderate everything, but we were still dependent on the products so we just had to wait for “AI and ML” to fix it.
Using technology to combat misinformation is one of those topics I have a frustrating level of knowledge about. Frustrating because I've done a great deal of tinkering, building, and reading, but am far from being an expert. But it never made any sense to me that Facebook could solve its unique challenges with technology. The idea that computers could understand language well enough to discern irony, sarcasm, and idioms, in hundreds of languages, to create a "truthful" environment seemed an impossibility.
Furthermore, Facebook's entire business model was predicated on the salacious and tantalizing, which easily veers into the questionably truthful. Gmail and Hotmail solved the problem of email spam because they had an existential reason to: otherwise, their tools would have become unusable. Facebook exists to keep people engaged and sell ads. Misinformation is good for business.
Maybe I was missing something and there was some combination of user and content signals, manual tagging and flagging, that could somehow move us in the right direction, but it really didn’t add up.
That mentality of technology solving misinformation was certainly not unique to Facebook. I watched those Jack Dorsey talks where he pontificated about new magical metrics that would create "conversational health":
Right now, Twitter is testing two metrics. The first is used to measure the health of single tweets — what Gasca calls a “toxicity” metric — and is based on machine learning algorithms created by Google that the search giant has made public for other companies to use.
The second metric doesn’t yet have a name, though Gasca called it “healthy.” The metric is meant to measure conversational health and takes into account three factors: civility, receptivity, and constructivity.
But it was the April 2018 Zuck testimony that really started to hint that this was more stalling than strategy:
“Hate speech—I am optimistic that over a five-to-10-year period we will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate, to be flagging things to our systems, but today we’re just not there on that,” Zuckerberg said. “Until we get it automated, there’s a higher error rate than I’m happy with.”
Hello, Human
This past week showed us just how human these challenges always were. It was never about all-powerful technologies, but instead, regular people making decisions.
First, Twitter made the very editorial decision to put a "fact-check" on Trump's mail-in voter fraud tweet.
No “machine learning algorithms created by Google”. Just a human decision that something had to be done. And it was a courageous one.
Clicking through the little fact-check label brings up a CNN article refuting the claim, along with this "What You Need to Know" box that feels like a low-grade Snopes effort:
Real people made a real decision and did something real!!!
Then there was the looting and shooting tweet, which Twitter put a warning label on.
More powerful than simply hiding the tweet was that they eliminated all open engagement with it. You can’t reply; you can only quote tweet the hidden tweet. I can’t quite describe how happy this made me - it directly fulfilled points 1 and 3 in The Margins' 5 Point Plan to Fix Social Media!! It starts to drain the lifeblood of a Trumpian figure.
Real people were making real product decisions. These were isolated, purposeful actions that maybe one day could scale, but that wasn't the point right now. The ball started rolling. Trump issued an Executive Order that he most likely has already forgotten about while declaring martial law last night. Finally, the real discussion that we've all waited for since 2016, on how to deal with all this shit, began.
This was all a warmup to the news that Trump and Zuckerberg had spoken directly on the phone about the looting and shooting post. According to Axios:
Later that day, Trump phoned Facebook CEO Mark Zuckerberg. During the call, Zuckerberg "expressed concerns about the tone and the rhetoric," according to a source familiar with the call.
Zuckerberg "didn't make any specific requests," the source said. A second source familiar with the call said the Facebook boss told Trump that he personally disagreed with the president's incendiary rhetoric and that by using language like this, Trump was putting Facebook in a difficult position.
This shocked me not because Zuckerberg single-handedly played out the 'arbiter of truth' role he normally argues against, but because of just how human the moment was. Two guys talking on the phone about what to do with a Facebook post. If it wasn't so consequential, it'd feel pedestrian.
For all the talk about AI tools and machine learning and natural language understanding, this is what it boiled down to. Two dudes on the phone.
I acknowledge this is a distinct challenge from moderating billions of posts per day, but the core challenges with Facebook and problematic content have always been human. It was always about the will, ethics, and incentive structures within the company.
There was an incredible WSJ piece covering the post-2016 efforts at Facebook to tackle misinformation. We learn about an initiative to combat divisiveness called "Common Ground” that fell victim to corporate infighting and management musical chairs (I had to get one Chairs reference in).
This was always about people. It's Sandberg getting sidelined by Kaplan, and maybe Thiel as well. Cox being pushed out by Zuck. Systrom and Koum and Acton all giving up. I won't pretend to understand the exact internal dynamics, but this was never a question about the magical application of undiscovered technologies to tough content problems. It was always about the leaders, managers, and rank and file, and the decisions that they made.
Walkouts and People
I started this piece by commenting on how I've been having a harder time distinguishing between my digital feeds and my day-to-day. That convergence has, ironically, been one of Mark Zuckerberg's visions.
But that loss of digital distance will affect those to whom it previously provided comfort. Tech employees can much more directly grasp the consequences of the divisiveness that their platform exacerbates. Instead of genocides in Myanmar, we're seeing the President threatening violence against his own people, and those threats coming true. And we're seeing their leader make the human, editorial decision to use their platform to promote it.
It was always going to take the employees of Facebook pushing back, privately and publicly. This is the first week we've really seen it, and I can say, amidst all the shit out there right now, this does make me just a little bit optimistic.
Note 1: I do recognize a bit of irony that the Facebook walkout, which I am arguing was indirectly a response to the dissolution of the line between the digital and physical worlds, was held…virtually.
Note 2: My co-host Can promises that everyone who donated to Frontline Foods and emailed us in his edition will be mentioned this weekend! We seriously appreciate it.