Lost without a map: propaganda prowls

If propaganda preys on our despair at the complexity of the world, as the sociologist Jacques Ellul suggests, artificial intelligence might further prime us for its messages and calls to action.

AI will allow us to solve radically more complex problems, but the way it affects how we build collective models of the world risks leaving us in the dark about the mechanisms behind the solutions. At the same time, AI's potential to design messages, target audiences, and influence opinions could unleash both collective doubt in our sources of knowledge and a means of persuasion more effective than any we have seen before.

Navigating this new world, which could be less than an election cycle away, will require new tools and understanding. Despite my current concerns about social networks, upgrades to their transparency, and to our ability to explore our networks and influencers, could equip us with the means to steer our own course.

Look upon my data and despair

“Developments are not merely beyond man’s intellectual scope; they are also beyond him in volume and intensity; he simply cannot grasp the world’s economic and political problems. Faced with such matters, he feels his weakness, his inconsistency, his lack of effectiveness. He realises that he depends on decisions over which he has no control, and that realisation drives him to despair. Man cannot stay in this situation too long. He needs an ideological veil to cover the harsh reality, some consolation, a raison d’etre, a sense of values. And only propaganda offers him a remedy”

If the world was already overwhelming in Jacques Ellul’s day (1912–1994), and that was what made us vulnerable to — or dependent on — propaganda, then in the face of the complexity and impenetrability of what’s coming we are like a child born without an immune system. The capacity of AI to outperform our sharpest minds in restricted domains is on the one hand terrifying, but on the other likely to lead to genuine advances in wellbeing.

Already, machine learning has shown the potential to spare millions of sentient creatures a year from suffering and death, while helping to create safety guidelines that protect people and aquatic ecosystems. A team at Johns Hopkins University has developed algorithms that can effectively model the toxicity of chemicals previously tested on animals. It is worth noting, though, that this works because the millions of animal tests already conducted provide the data for building the models.

While this outcome is, of course, to be welcomed, solving ever more complex problems will make the world seem more complex and harder to understand for the layperson. We could end up with high priests of AI, or with AI as oracle, upon whom responsibility for explaining the complex falls. In such a world, the advantage of those who tell the best stories will be magnified.

Even the experts in this field feel that their priesthood is overwhelmed by its own summonings. According to the computer scientist François Chollet of Google, practitioners understand so little of the internal workings of AI that, when it comes to improving their systems, they rely on “folklore and magic spells.”

In other words, even practitioners fear that they cannot keep pace with the AI they are making; the complexity is too much. While for some this seems like a problem, others see no need to slow the rate of innovation for the sake of better understanding AI’s inner workings.

New Radical Empiricism

The power of AI will bring into existence solutions to questions so complex that the mind cannot fathom them, and if we’re not careful, the impenetrability of its machinations could leave us passive bystanders. Even as it helps us manage higher orders of complexity, it may leave us no better informed about the world. Some believe it may be fundamentally changing the way we understand the world collectively.

Zavain Dar (venture capitalist at Lux Capital and lecturer at Stanford) is investing in companies that practice what he calls New Radical Empiricism (NRE). This, he claims, is a new approach to science — a new way of making collective models of the world. In his telling, AI could change the way we explore the world so much that it leaves collective models behind entirely.

Our current system is premised on there being a ground truth that is knowable to human minds and which our cognitive systems can represent. In other words, the current system is realist (there really are rules, a ground truth, out there) and reductionist (they can be reduced to our level of explanation). We generate theories about the world that we can apply across situations. Once we have arrived at the law of gravity, figuring out how one physical object (of appropriate scale) will be attracted to another is a matter of applying that law.

By contrast, NRE neither assumes nor seeks a ground truth or theory. Instead, it generates predictions about the way the world will be by crunching data — prior empirical facts. It does not assume that it is necessary to be able to explain the world, nor that the results have to be humanly knowable.

“Look at data, model data, rinse, repeat, assume nothing.”

If Newton had taken this approach to gravity, he would have built a model that, after processing a vast amount of data about falling objects, could predict the fall of any other object. There would be no Theory of Gravity to be taught in schools, simply a model into which you enter the objects you are interested in. The falling of apples would be as predictable as it is now, but still mysterious.
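To make the contrast concrete, here is a toy sketch of the "look at data, model data" move — the data, the model choice, and all the numbers are my own illustration, not anything from Dar. A flexible curve fit on simulated observations of falling objects predicts new falls accurately, yet nothing in it announces itself as a law:

```python
import numpy as np

# Synthetic "observations" of objects dropped near Earth's surface:
# time in seconds, distance fallen in metres (d = 0.5 * g * t^2, plus noise).
rng = np.random.default_rng(0)
t = rng.uniform(0.1, 5.0, 1000)
d = 0.5 * 9.81 * t**2 + rng.normal(0, 0.05, t.size)

# The NRE move: fit a flexible model to the data and predict,
# without ever writing down (or recovering) a theory of gravity.
coeffs = np.polyfit(t, d, deg=4)   # opaque coefficients, not a law
model = np.poly1d(coeffs)

# The model predicts a new fall accurately...
print(model(3.0))   # close to 0.5 * 9.81 * 3^2 = 44.145
# ...but nothing in the coefficients labels itself "g" or "t squared".
print(coeffs)
```

The prediction works; the understanding — the compact, teachable law — never materialises.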

While Chollet might call this approach a “cargo cult” (he originally said it of the field of AI itself), Google’s Director of Research Peter Norvig refers to the realist, reductionist system that builds theories as “Platonic mysticism”.

Lost without a map

NRE seems to be creating AI that improves our ability to get the results we want without simultaneously improving our understanding of the world.

Donald Norman, the director of the Design Lab at the University of California, San Diego, talks about cognitive artifacts — machines or devices that amplify our cognitive capacities. These artifacts are “a store of memory, a mechanism for search, and an instrument of calculation”.

All such devices — an abacus, a calculator, even pen and paper — play a role in our extended cognition and amplify our capacities. However, David Krakauer, professor of complex systems at the Santa Fe Institute, points out that while some of these devices improve our own understanding of the world, others merely give us the answer; our understanding is not improved, and through dependence may even be hampered.

When I had to read maps to get around, I had a spatial awareness of the city, a seemingly instinctive sense of where the river was at any moment, and an ability to pick out salient features of the world around me to assist with wayfinding. So when I lost the A to Z, I was not lost. These days, using GPS map apps, I often get to places faster than I would have otherwise, but I am being led by the nose, and I panic, lost without a map, if the phone battery dies.

It seems that the NRE Zavain Dar is observing is the latter type of cognitive artifact. In this case the artifact is not just a machine, a single gadget, but the whole process by which we collectively model the world. Under this model, we will have more complex — and potentially better — answers to an impenetrable world.

AI propaganda 2.0

As if impenetrability and complexity were not enough, AI is also creating the tools to undermine the trust we can place in our sources, in our very networks. This could permanently sever the truth-discovery element of communication. Truth discovery is the very mechanism that propaganda overrides, replacing it with the establishment of tribal allegiance as the purpose of communication.

According to Sean Gourley, speaking at CogX, the information campaigns of the Trump election and Brexit were primitive, despite the impact of the outcomes they supported. Gourley is not in the NRE camp and sees AI as able to help us better understand the world. His company Primer creates intelligence summaries, and making its algorithms interpretable matters to him so that human analysts can tweak the results to suit them.

However, he notes that AI technology could be deployed to unleash a much more effective propaganda through:

  • Mass production of natural-language content, A/B tested across social networks to refine the message. I mentioned previously how forums like 4chan’s /pol/ acted as generators of vast amounts of memetic content, whose amplification was decided by the “organic” method of how viral each piece managed to get. In an AI-enabled model, the number of messages created would be orders of magnitude higher, limited not by the number of participants in a forum but by the compute power applied. At the same time, drawing on data produced at the rate of 6,000 tweets a second, these millions of messages could be tested for virality in models of social media before even being released.
  • Modelling opinion dynamics: bringing AI modelling to how people form and change opinions. Gourley imagines people’s opinions moving through space like a murmuration of starlings, and describes using these tools to pull the edges, and then the rest of the swarm, in a chosen direction.
  • Targeted injection into networks: being very specific about where messages are inserted into networks to achieve optimum diffusion. As far as I can tell, this could include both selecting targets for optimum diffusion and, potentially, changing the shape of the network in ways that enhance the transmission of the message. See more on the spread of behaviours through human networks below.

Facebook to the rescue?

Given my position on the poison of captured identities, and their role in accelerating the spread of propaganda, I never thought I would see a role for the social network in buffering us from the excesses of AI.

I previously talked about tending to our mind and our mental models of the world as a garden. Now I wonder whether we should also tend our networks like a garden. After all, the spread of ideas and actions (good and bad) through our networks might have as strong an impact on us as our own models.

Just a snippet from Nicky Case’s awesome game on human networks

In this example we can see that a certain behaviour, say binge drinking, transmits between people if a certain proportion of their friends binge drink (50% in this example). The structure of the network plays a role in the likelihood that someone will drink. In the initial setup, the person at the bottom right does not become a binge drinker. Adding one connection to the network changes the dynamics, and everyone becomes a binge drinker.

In the positive example, there is a threshold (20%) for passing on a positive behaviour — volunteering. In this case, just one extra connection allows the behaviour to spread.
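The mechanics behind Case's demo are simple enough to sketch in a few lines. Below is a toy complex-contagion model — the graph, node names, and seed adopters are my own invented example, chosen so that one extra connection tips the whole network, in the spirit of the game:

```python
def spread(neighbours, seed_adopters, threshold):
    """A node adopts a behaviour once the fraction of its adopting
    friends meets the threshold; iterate until nothing changes."""
    adopters = set(seed_adopters)
    changed = True
    while changed:
        changed = False
        for node, friends in neighbours.items():
            if node in adopters or not friends:
                continue
            share = sum(f in adopters for f in friends) / len(friends)
            if share >= threshold:
                adopters.add(node)
                changed = True
    return adopters

# A small friendship network: node -> list of friends.
network = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"],
    "d": ["c", "e", "f"], "e": ["d"], "f": ["d"],
}

# With a 50% threshold, seeding "a" and "b" converts "c" (2 of 3 friends
# adopt) but stalls at "d" (only 1 of 3 friends adopts).
print(spread(network, {"a", "b"}, 0.5))

# One extra connection ("d"-"a") changes the dynamics: "d" now has 2 of 4
# adopting friends, and the behaviour cascades to everyone.
network["d"].append("a")
network["a"].append("d")
print(spread(network, {"a", "b"}, 0.5))
```

The same threshold rule, run on a graph that differs by a single edge, produces either a stalled or a total cascade — which is exactly why the shape of the network matters as much as the message.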

Imagine if we could tell Facebook: “Hey, I want to stop binge drinking.” Facebook already alters our feed to keep us on site, driving algorithmic addiction. Imagine if it could alter our feed to help us get the results we want. “Hey Facebook, I want to lose weight.” It could then alter my feed to prioritise stories from friends about their own successes, aims, and targets on this front.

Facebook could also tell me when new behaviours or attitudes arrived in my network and their origin. “Hey, Gustavo, you might notice more radical vegan memes because your ex-girlfriend just friended a member of ‘radical vegans are us’.” Perhaps thus informed about the residents of my network garden, I can guard against being manipulated.

Or Twitter could help me visualise where new trends are coming from, not just that they are trending. “Hey @goldengus, we noticed that a lot of people are suddenly talking about #anger; these three accounts, which are followed by these people in your network, seem to be the source.” Or: “Hey @goldengus, you follow the same accounts as X% of the people you follow; maybe it’s time you added some new perspectives to the mix.”


Credit where it’s due:

I’m going to make a point of saying: go play Nicky’s game on the wisdom and madness of crowds, and chuck some coin Nicky’s way; I did. The little glimpse above is just a taste of the crowd-system goodness in store. It rewards your time.

Hat tip to Azeem Azhar at Exponential View — they put together the Cutting Edge stage at CognitionX 2018, where both Zavain Dar and Sean Gourley spoke.

 
