Hi folks. It’s me, Can, as your host today. Ever heard of the cloud?
I talked last week here about the arbitrary, if not blurry, line between the server and the app. Products are products, features are features. It is probably good that you don’t care whether it is the supercomputer in your pocket that enables that Instagram filter, or some server running in a data center in Nebraska. Through thousands of layers of magic, and some cool math, the internet glosses over distances.
The general trend here is that more of the computation is slowly moving towards the cloud, and more devices and apps are just becoming windows into what's running on the server (“the cloud”), as opposed to having any logic in them. In industry jargon, more and more devices are becoming thin clients.
You can view lots of businesses through this lens. The internet first killed shrink-wrapped software, and now it’s on its way to turning everything into web apps running in your browser. Google (and Nvidia, and Sony, and Microsoft) even want to kill the gaming console by streaming interactive video games to you. Whether that thin client, through which other things run, is a browser, a VNC client, or some other shell is really an implementation detail.
Physics Strikes Back
Yet, you can’t just avoid the physics entirely. Stadia might be right around the corner, but its limited rollout points to how hard it is to build not just a high-bandwidth network, but also a low-latency one. I once worked at a company whose entire premise was building a cloud-first filesystem (with local data as a mere cache), and I found it somehow poetic that the previous tenant of our office was OnLive, the original “streaming gaming company”. We did a tad better, but not much.
In fact, a good half of my time at Uber was spent dealing with this problem: it's hard to figure out whether things should live on the server or the client (i.e. “the app”). Here's some context. My team worked on the authentication system all Uber apps relied on, which meant that we had to build and support various account management lifecycle flows, such as account recovery (“Forgot Password”). You could put the entire logic on the server and serve the flow with a “webview”, an in-app browser, but you’d end up losing some of the benefits of native form handling, and possibly making too many network requests that slow things down. But if you do put things on the client, you lose the ability to change things on the fly, or at least make doing so much more complicated. This stuff is hard!
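One middle ground worth sketching is server-driven UI: the server describes the flow as plain data, and the client only knows how to render a handful of generic step types natively. Here’s a minimal, hypothetical sketch (the flow spec and `run_flow` helper are made up for illustration, not Uber’s actual system):

```python
# A hypothetical "Forgot Password" flow described as data. The
# server can reorder steps, change copy, or add a step at any
# time without shipping a new app build.
RECOVERY_FLOW = [
    {"type": "input", "field": "email", "label": "Enter your email"},
    {"type": "otp", "field": "code", "label": "Enter the code we sent you"},
    {"type": "input", "field": "new_password", "label": "Choose a new password"},
]

def run_flow(steps, render_step):
    """Walk server-defined steps, collecting answers with native UI."""
    answers = {}
    for step in steps:
        # render_step is the client's native renderer for this step type.
        answers[step["field"]] = render_step(step)
    return answers

# A toy "native renderer": a real client would map step types to
# native form components instead of a terminal prompt.
if __name__ == "__main__":
    print(run_flow(RECOVERY_FLOW, lambda step: input(step["label"] + ": ")))
```

The trade-off shows up clearly: the flow logic stays changeable on the fly, and form handling stays native; the cost is that the client can only render step types it already knows about.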
Latency isn’t the only reason, however, why you’d consider moving more of the computation to the “edge”. Just ask Apple. The one-time world’s biggest corporation has established itself as the privacy czar of the tech world, in the face of fierce competition from its data-guzzling peers. Where Google makes billions by knowing more about you, Apple tries to do the same by knowing less.
Thermonuclear War on Privacy
It’s important to remember how we arrived here. For many years, tech companies like Google pushed the scientific edge in deriving insights from huge data sets, which allowed them to build ever more engaging services that pulled in more users, who in turn provided both larger and higher-fidelity data sets: a positive feedback loop.
Apple’s response to this has been multi-faceted. The first prong of the approach is to change the narrative around privacy: to make people realize (or believe, depending on your initial stance) that privacy is a human right. Yet, in a world where many people seem to be happy with the compromise of exchanging some personal data in return for free products, that can be a tall order. How do you even communicate to millions of people that their data doesn't leave the device? For many who aren't technologists, the connection between "data some company collects on me" and "features I get to enjoy" simply isn't there.
The second prong of this approach is Apple moving more of the computation that generally happens in the cloud to the client. This started a few years ago, but seems to be picking up. Apple showcased how it can detect and identify faces solely on the device, and followed it with even more advanced recognition of objects and scenery. The data never leaves your phone.
Yet, it’s hard to shake off the feeling that you can’t build sophisticated ML models without enough data. Apple has also invested deeply in what is called differential privacy. Roughly, this means that Apple can still ship data from your device (“upload it to the cloud”), but scramble it enough beforehand to make it anonymous. In some ways, Apple still does a bunch of “machine learning” in the cloud. A good example of this is predictive text. I don’t need to know that you write “Manchester” and follow it with “United”, but if I collect enough data from enough people, even scrambled, I can tell that one follows the other often enough.
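To make “scramble it enough” concrete, here’s a minimal sketch of randomized response, one of the oldest differential privacy mechanisms (a textbook illustration, not Apple’s actual implementation):

```python
import random

def randomized_response(truth: bool) -> bool:
    """Report a boolean with plausible deniability.

    With probability 1/2, answer honestly; otherwise answer with a
    fair coin flip. Any single report could be noise (this gives
    ln(3)-differential privacy), yet the aggregate stays useful.
    """
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5

def estimate_true_rate(reports):
    """Invert the noise: the expected "yes" share is 0.5 * p + 0.25."""
    observed = sum(reports) / len(reports)
    return (observed - 0.25) / 0.5

# Say 30% of 100,000 users actually type "United" after "Manchester".
reports = [randomized_response(random.random() < 0.30) for _ in range(100_000)]
print(f"estimated rate: {estimate_true_rate(reports):.3f}")  # close to 0.300
```

No single report tells you what any one person typed, but the estimate over the whole population converges on the true rate.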
Uber is another firm that invests heavily in differential privacy, especially for sharing data internally. Most people are really interested in the patterns in data, not the individual pieces. What is the real use case for someone to look at my individual trips?
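For aggregate questions like that one, a standard building block is the Laplace mechanism: answer a counting query with calibrated noise, so the pattern survives while any individual’s presence is masked. A minimal sketch, with a made-up query and numbers (not Uber’s actual system):

```python
import math
import random

def private_count(true_count: int, epsilon: float) -> float:
    """Answer a counting query with epsilon-differential privacy.

    Adding or removing one person changes a count by at most 1
    (sensitivity = 1), so Laplace noise with scale 1/epsilon is
    enough to mask any individual's contribution.
    """
    # Inverse-CDF sampling from Laplace(0, 1/epsilon).
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# e.g. "how many trips ended in this neighborhood last week?"
# The analyst sees roughly the right number; whether my trip is
# in there or not barely moves the answer.
print(private_count(1_042, epsilon=0.5))
```

Smaller epsilon means more noise and stronger privacy; the analyst still sees the pattern, just not the pieces.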
To me, it seems like there’s room for investment in privacy-preserving, or even privacy-enhancing, research areas like differential privacy. The field is still nascent, at least compared to the years of person-hours and billions of dollars spent on building models that demand more and more data. As I’ve written various times before, it’s likely that we have not only over-estimated the value of data (by undercounting its liabilities), but we’ve also generally not been ambitious about what we can do without putting all the data in one place, waiting patiently to be hacked or used for some nefarious purpose.
On its face, the tide seems to be slowly turning, at least away from “more data is good” dogmatism. I am old enough to remember when it was not just “edgy” but generally accepted to say that “privacy is dead” in Silicon Valley. That was never really true, but it provided a good cover for those who never cared about other people’s privacy. It’s true that in America, privacy is generally considered a commercial matter, as opposed to, say, Europe, where it’s an enshrined civic right that can’t be priced. Still, as I discussed before, some of the liabilities around data are now more widely recognized.
So, which one is better? Cloud or the edge? Server or the client? Datacenter or the app?
Outlook: Cloudy?
I know this is a cop-out, as far as predictions go. But here’s the truth: we’ll probably end up somewhere in the middle, using a combination of both. And in that new world, we’ll have new questions to grapple with. For example, differential privacy is great for shipping data around, and models that let devices make predictions can be downloaded to run on-device. Yet devices, once they are shipped, are hard to update, unlike a server that can acquire truly new capabilities and increase its performance. It’d be misleading to say an edge-computing-heavy approach wouldn’t result in some casualties in terms of products and features.
On the flip side, even with technologies like 5G slowly rolling out, there’ll still be huge performance benefits to doing some computing locally. The line between the server and the client will depend on many factors. Some connection technologies will be rolled out sooner in some countries than in others. Regulations such as data residency requirements will add further friction in some places, but they will also act as encouragement in others. Fears of balkanization will do the same. Higher costs of breaches, both regulatory and reputational, and increased sensitivity towards privacy will make edge computing more attractive.