Learn, Predict, Steer: The God That Google Made

It was sometime in 2012 that I began to deeply distrust Google, because of what I saw as the trajectory of its business model with respect to Big Data:

LEARN. PREDICT. STEER.

PHASE 1: LEARN — Google would collect as much data as it could on every person who used its services and, conceivably, on every person on the planet. Contacts, locations, travel patterns, email, text messages, phone calls, consumer habits, interests, concerns, political ideologies, religious affiliations, family relationships, occupations, and so much more.

PHASE 2: PREDICT — With so much data collected, and through the use of complex algorithms and artificial intelligence, Google would anticipate the thoughts and actions of its users (and everyone else), and make convenient suggestions based on what it extrapolated their wants or needs to be in the moment.

PHASE 3: STEER — With people deferring to the almighty power of the algorithm, Google would start to influence their actions, conceivably in directions that served its own interests, motivations, ideologies, and so on.
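To make the three phases concrete, here is a deliberately toy sketch in Python of how such a loop might look in principle. Everything in it (the function names, the data, the "sponsored" nudge) is hypothetical; this is not a claim about Google's actual systems, only a minimal model of learn, predict, steer.

```python
from collections import Counter, defaultdict

# PHASE 1: LEARN -- accumulate observed events per user.
profiles = defaultdict(list)

def learn(user, event):
    """Record an observed action (a location visit, search, purchase, etc.)."""
    profiles[user].append(event)

# PHASE 2: PREDICT -- extrapolate the likeliest next action from past behavior.
def predict(user):
    """Guess the next action as the user's most frequent past action."""
    history = profiles[user]
    if not history:
        return None
    return Counter(history).most_common(1)[0][0]

# PHASE 3: STEER -- wrap the prediction in a "suggestion" that nudges the user
# toward whatever the platform prefers (a hypothetical sponsored alternative).
SPONSORED_NUDGES = {"walk to the corner store": "catch the trolley to your partner's house"}

def steer(user):
    likely = predict(user)
    return SPONSORED_NUDGES.get(likely, likely)

# Usage: a handful of observations is enough to start "suggesting".
for _ in range(3):
    learn("alice", "walk to the corner store")
print(steer("alice"))  # -> "catch the trolley to your partner's house"
```

The point of the sketch is how little is needed: a log of behavior, a crude frequency model, and a substitution table are already enough to turn prediction into a nudge.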

This idea sprang fully formed from my mind after an experience with what was then called “Google Now”, which later evolved into the Google Assistant. One day, I walked to the corner store, and my phone prompted me with transit directions to my partner’s house. That corner was indeed where I would catch a trolley, but the “suggestion” unnerved me. I didn’t like the idea of this machine, or the company behind it, knowing enough about me to predict what I would do next. Thus began a long process, with many relapses, of removing Google from my life.

It was an ambitious goal. Because Google was ubiquitous, pervasive, and in so many cases, offered the best tech solutions for various everyday “problems”. Problems, mind you, which really didn’t exist before Google made itself so indispensably convenient.

Then in May of 2018, a strange internal video leaked from Google. It was called The Selfish Ledger, made by members of the company’s experimental wing, “Google X” — afterward just X, a subsidiary of Google’s parent company, Alphabet.

There were many voices clamoring to comment on this provocative “thought experiment”, but I had a different take. As I watched the video, my anti-Google impulses went into overdrive.

“See! I knew it!”

It seemed to validate my theories about Google’s trajectory, in spite of the company’s official statement that the Selfish Ledger was a mere “thought experiment”. However, I felt something else, too. Maybe because of the music or images, or because it was talking about memetics — a major interest of mine — I was also highly engaged, even mesmerized.

As the narrator laid out his dystopian vision in a pleasant British accent, I felt vaguely placated, and something like awe. I thought ahead to the world the video envisioned, and some part of me felt at peace with the thought of relinquishing control, of allowing some gentle overseer to guide me. Much of this had to do with the vulnerability that arises out of life being so difficult at times, but I think it also evoked something deeper, something endemic to the human condition. The same impulse that pushes people toward religion, or toward the stars. Toward all things mysterious and greater than ourselves.

If I, a non-religious person who was hyper-skeptical of Google and its designs, felt such a thing, I could only imagine how easy it would be for most people to slide into the sort of complacency envisioned by the video. Because Google, without question, made life easier. At some point, it even made phone calls on our behalf. There was a slippery slope between convenience and dependency.

The video had a disingenuous — and rather pretentious — way of talking about a process that has already been taking place for the whole of human history. The process that sets humans apart from all other animals. The “Lamarckian data” which might be mapped and understood to predict and steer human behavior is exactly what Richard Dawkins called memes. These “selfish replicators”, analogous to genes, exist independently of Google’s ability to track them, and are already passed on from one generation to the next. What the video called the Selfish Ledger, we already knew under a different name.

CULTURE.

Google was proposing to act as the custodian, the mediator, the purveyor of human culture — stored, analyzed, and transmitted as “data” — and conceivably to modify it along the way. Not just on the broad, collective level, but on a deep, hyper-personal, individual level. Where humans have long sought a total understanding, and thereby a mastery, over nature, Google would be the steward of our nurture. Analogous to GMOs, perhaps we would become “CMOs” (culturally modified organisms), or maybe MMOs (memetically modified ones).

I thought about the implications of this corporate AI Overseer guiding thoughts, emotions, opinions, consumer habits, politics, and public policy, among many other things, including the very modes of human interaction and our reasons for doing anything at all.

And to me, it sounded like:

PHASE 4: CONTROL — manipulate the thoughts and emotions of users, dictate their actions, and drive their behaviors, all under the pretext of what’s best for them and the world.

It sounded like a “benevolent” dictator, but one with far more insight into people’s thoughts and emotions, on both the individual and the collective level, than any human ruler or administration could ever manage.

It sounded like a god.

Not merely the kind Neil Gaiman postulated in his novel, American Gods, competing for our attention and brain space, but an actual force that actively and constantly intervened in our lives to determine their outcomes. A god that was functionally omnipotent, with respect to its power over human lives, and one that already was omnipresent.

It seemed particularly salient given the video’s explanation of how our data would live independently of us as people, as the substrate with which this god molded the world and society. How it imagined future generations being born into a world already shaped and guided by data and algorithms and the artificial intelligence that made meaning of it all.

I briefly wandered down a philosophical rabbit hole, considering that this god had long existed in some other form, perhaps lying dormant in the collective unconscious, until humans and their technology gave it the means to manifest with agency in the physical world.

At some point, we had a choice as to how much of our personal information we offered up to the new god. Though it was neither as much choice as we would have liked, nor as much as Google led us to believe. Future generations would not have much choice at all, because the Infernal Machine had already started unwinding. Their input had little influence over the new direction of the world, driven as it was by the momentum of countless data points from billions of people over many years.

The ideas presented in the video were both awe-inspiring and terrifying, as one imagined a god must be.

The God That Google Made. Past tense. Because I am imagining a future where the Selfish Ledger has already become a reality. A future that looks back on all the omens that signaled the rise of this digital god, and our failure to do anything about it.

Or perhaps our inability to do anything, once a certain line was crossed. But where was the line? Was it when we, as a society, shrugged at any revelation that our every thought and action was being tracked by private companies and used to manipulate us? Or was it when artificial intelligence algorithms started to function in ways their programmers did not predict or anticipate? There was a chilling irony in that. Or maybe it was the advent of quantum computing, which allowed these digital minds to function at speeds and levels of efficiency that humans could barely comprehend, let alone control.

That future was not as far away as we thought, with respect to time or the ethical distance Google placed between “thought experiment” and company policy. The slide into complacency and dependency had already begun, and was accelerating at an exponential rate.

There were scattered and muted critiques from some technologists, and from anyone who had ever regarded science fiction about artificial intelligence as a set of cautionary tales. But by and large, the new god’s arrival was met with fanfare and devotion, while venom and bile were reserved for any who dared a more heretical interpretation of what the future held. A future that would inevitably see the proliferation of creepy robots*.

As I considered the many ways humans could and probably would be complicit in our own destruction, the quest to “Un-Google” my life continued…

_____________________

* Boston Dynamics, the robotics company behind “Atlas”, was owned by Google X until 2017.
