Ranjan here. This week we’re talking about text analysis, the way we communicate, and our robotic overlords.
CB Insights was the first company I saw use Natural Language Processing (NLP) to analyze earnings call transcripts. They used algorithms to “read” thousands of pages of text and glean insights that no human analyst could feasibly extract.
In their typical style, they presented their findings as compelling, fun narratives (if you like this newsletter, you’ll probably like the one they produce). They wrote about things like whether corporate execs mention Voice products, or whether they use the term Big Data or AI more often…
They also did things like sentiment analysis on CEO language (how positively they speak) and scoring CEOs on jargon usage. While these feel like fun NLP party tricks, you can quickly see how, in financial markets, where everyone is looking for some hidden edge, this type of analysis will become increasingly valuable.
(Most fascinating to me here was the seeming correlation between being scored as a “positive” person and speaking in a lot of jargon. That says something about the models we use to analyze this stuff.)
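CB Insights hasn’t published its methodology, but the lexicon-based flavor of sentiment scoring is easy to sketch. Here’s a toy Python version; the word lists are invented for illustration, not a real financial-sentiment lexicon:

```python
# Toy lexicon-based sentiment scorer, in the spirit of tools like VADER.
# The word lists are illustrative stand-ins, not a real lexicon.
POSITIVE = {"growth", "strong", "momentum", "confident", "record", "opportunity"}
NEGATIVE = {"headwinds", "decline", "uncertainty", "weak", "challenging", "miss"}

def sentiment_score(text: str) -> float:
    """Return (positive - negative) / total matched words, in [-1, 1]."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("We saw strong growth and record momentum this quarter."))
```

A real system would use a curated domain lexicon and handle negation, intensifiers, and context; the point is just that “how positive does this CEO sound” reduces to counting and weighting words.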
A friend of mine came up with the half-joke, half-serious business idea that we should coach CEOs and CFOs on what terms to use to trigger algorithms in a positive way. If you just use the right selection of words and tone of voice, maybe you could trick automated trading systems into spitting out buy orders.
The language of CEOs
I came across a related academic paper that builds on the idea of analyzing public corporate executive speech (specifically social media feeds) to predict financial and operational performance.
Side note: the paper introduced me to the term Upper Echelon Theory, which is a great management term that you should use in conversation this weekend. It states:
Organizational outcomes are partially predicted by managerial background characteristics of the top level management team.
Essentially, who your managers are will influence how your business does. Brilliant!
Back to the paper: The researchers analyzed CEO social media feeds for "linguistic cues" to define a personality type for each leader (using the Big Five personality traits framework). They "investigate the CEO personality’s impact on both operational and financial performance."
One of the straightforward findings was:
We can see that CEO’s extraversion is positively related to cost efficiency. One feasible explanation is that extraverted CEOs are more sociable, which helps them build broad and diverse networks of social relationships. As a result, extraverted CEOs are accessible to more outer resources which are of benefit to reducing cost incurred to their own companies. Empirically, CEO’s high level of extraversion also improves employee productivity. It can be a result of they being more talkative, warm, enthusiastic and optimistic.
A slightly more intriguing finding was:
The most interesting and counter-intuitive empirical results come from the impacts of CEO’s conscientiousness and openness to experience. Conscientiousness is defined as self-disciplined, organized, strong-willed, dependable and achievement oriented, while openness to experience is defined as intellectual, curious, insightful, creative, unconventional and artistic. Intuitively, these two traits should be “good” characteristics. However, empirical results show that their impacts on business performance indicators are mostly negative. Conscientious CEOs lead to low cost efficiency.
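The paper doesn’t spell out its feature extraction, but “linguistic cues” in this literature usually means word-category counts over someone’s writing. A hypothetical sketch of what that pipeline’s first step might look like (the specific features here are my own invention, not the paper’s):

```python
import re

# Hypothetical cue features of the kind such studies feed into a
# Big Five model. The categories and word lists are illustrative only.
SOCIAL_WORDS = {"we", "team", "together", "everyone", "community"}
FIRST_PERSON = {"i", "me", "my"}

def linguistic_cues(posts):
    """Extract simple cue ratios from a list of social-media posts."""
    text = " ".join(posts).lower()
    words = re.findall(r"[a-z']+", text)
    n = max(len(words), 1)
    return {
        "exclamations_per_post": sum(p.count("!") for p in posts) / len(posts),
        "first_person_ratio": sum(w in FIRST_PERSON for w in words) / n,
        "social_word_ratio": sum(w in SOCIAL_WORDS for w in words) / n,
    }

cues = linguistic_cues(["Thrilled to see our team win together!",
                        "I love this community."])
print(cues)
```

Features like these would then be mapped (via a trained regression or classifier) onto trait scores such as extraversion; the mapping itself is where the real research lives.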
It’s not just CEOs that are being algorithmically evaluated. Startups are already creating algorithms to help hiring managers understand how a candidate could "fit" into a company:
"We are very sharply focused on helping organizations hire faster and better at this point of time. Our analysis suggests that DeepSense can provide a 20-30% improvement in recruiter productivity while also lowering 'overall time to hire' by a non-trivial, measurable margin as well. DeepSense makes it possible by letting AI do part of the evaluation (esp. that around cultural fit and personality)."
Leaving aside the ethical questions of what data is being used to train the models that evaluate "cultural fit", I would bet that the public language you use will increasingly inform both professional and personal evaluations. Maybe the "CEO earnings call language coach" idea is a good one.
Should everyone, from the CEO, down to the college student applying for their first job, start training themselves on how to better communicate to suit the algorithms that will eventually judge them?
…whatever the answer, you should definitely use Upper Echelon Theory in conversation this weekend.
We’ve always altered how we publicly present ourselves in the professional context. It’s just being sensible. But that has always been to present ourselves in specific ways to other people. Real-life humans. Do we have to start changing the way we speak to also satisfy the algorithms?
Job candidates can start figuring out how to game the systems that will analyze their body language, facial movements, and speech patterns in an interview. CEOs can finally hire me as an AI-speech training coach. But as I was thinking about the coach idea, it hit me: this is how so much of our online communication already works. We already adapt our speech to please the robots.
The entire world of SEO-driven article writing is real people sitting down and trying to figure out what combination of words will satiate Google’s crawlers. The way we post on any algorithmically-ranked social media platform (which is... all of them) is designed to please the robots that will judge the words, images, and video enough to seed them into our connections’ feeds. Real-life people will eventually have to Like or Retweet, but unless the robotic tribunal approves, it won’t end up in anyone’s feed.
It’s why everyone’s Instagram pictures look the same. The training has already begun.
Robot <> Human
So what's the logical endpoint for our discourse?
Any communication exchange that is even slightly algorithmically mediated is a natural candidate for optimization. Will the most important professional skill become the ability to speak in a way that makes the algorithm happy? Will our entire lives end up looking like a LinkedIn feed?
Usually, if you see a headline or post that has clearly been optimized for an algorithm, you find it a bit awkward. But the way we post does end up influencing the way we speak. It’s like there is a robot-to-human spectrum: while systems like OpenAI’s GPT-2 are making machine-generated text sound more human, the robots, using tools like Gmail’s Smart Compose, are quietly training us to speak more like machines.
I always thought that to pass the Turing Test, a robot would need to convincingly speak like a human. Maybe, as more of our speech becomes geared towards pleasing machines, the robots won’t have as steep a hill to climb to get out of the Uncanny Valley.
Or maybe they’re already here 🤯
I’ve frequently referenced Winners Take All by Anand Giridharadas. He had a very important thread about the Mueller hearings: whether Mueller should’ve adapted to the current state of communications and discourse, and given the soundbites needed to heighten the sense of emergency around election interference. Effectively, you need pithy soundbites geared towards virality and algorithmic satiation to push a message. That’s just how things work now.
This is something I am really torn on, especially in light of what I covered in this week’s newsletter. Was Robert Mueller abdicating responsibility by sticking to a traditionalist interpretation of how things should work? If he really believes we are under attack, should he have accepted the moment and just given a soundbite? Would it be the same as FDR going on radio, or JFK going on television, with everyone simply adapting to the medium and platform of the era?
Or, given that the platforms are algorithmic at their core, will a Trumpian figure always win these battles? Is our current information-ecosystem transition something more pernicious than those of the past, such that if this is the game being played, the robots and dictators have already won? Do you fight noise with more noise, or is silence the ultimate weapon?
I really don’t know where I stand on this one, but it could be the thing I think about the most. Maybe for another week. In the meantime, read this thread.
I’ll leave on a brighter note: