The Last Long Night


Lost without a map, propaganda prowls

If propaganda preys on our despair at the complexity of the world, as sociologist Jacques Ellul suggests, artificial intelligence might further prime us for its messages and calls to action.

AI will allow us to solve radically more complex problems, but the way it affects how we build collective models of the world risks leaving us in the dark about the mechanisms behind the solutions. At the same time, AI’s potential to design messages, target audiences, and influence opinions could unleash both a collective doubt in our sources of knowledge and an effective means of persuasion the likes of which we have not seen before.

Navigating this new world, which could be less than an election cycle away, will require new tools and understanding. Despite my current concerns about social networks, upgrades to their transparency and to their tools for exploring our networks and influencers could equip us with the means to steer our own course.

Look upon my data and despair

“Developments are not merely beyond man’s intellectual scope; they are also beyond him in volume and intensity; he simply cannot grasp the world’s economic and political problems. Faced with such matters, he feels his weakness, his inconsistency, his lack of effectiveness. He realises that he depends on decisions over which he has no control, and that realisation drives him to despair. Man cannot stay in this situation too long. He needs an ideological veil to cover the harsh reality, some consolation, a raison d’etre, a sense of values. And only propaganda offers him a remedy.”

If the world was overwhelming in Jacques Ellul’s day (1914–1994), and that was what made us vulnerable to — or dependent on — propaganda, we are like a child born without an immune system in the face of the complexity and impenetrability of what’s coming. The capacity of AI in restricted domains to outperform our sharpest minds is on the one hand terrifying, but on the other likely to lead to genuine advances in wellbeing.

Already, machine learning has revealed the potential to spare millions of sentient creatures a year their lives and suffering, while assisting with safety guidelines that protect people and aquatic ecosystems. A team at Johns Hopkins University has developed algorithms that can effectively model the toxicity of chemicals previously tested on animals. It is worth noting, though, that this works because the millions of animal tests already conducted provide the data for developing the models.

While this outcome is, of course, to be welcomed, solving ever more complex problems will make the world seem more complex and harder to understand for the layperson. We could end up with high priests of AI, or with the AI as oracle, upon whom responsibility for explaining the complex falls. In this world, the premium on those who tell the best stories will be magnified.

Even the experts in this field feel that their priesthood is overwhelmed trying to understand its own summonings. According to computer scientist François Chollet of Google, practitioners’ lack of understanding of the internal workings of AI means that when it comes to improving their systems they are relying on “folklore and magic spells.”

In other words, even practitioners fear that they cannot keep pace with the AI they are making; the complexity is too much. While for some this seems like a problem, others do not see the need to slow the rate of innovation for the sake of better understanding the inner workings of AI.

New Radical Empiricism

The power of AI will bring into existence solutions to questions so complex that the mind cannot fathom them, and if we’re not careful, the impenetrability of its machinations could leave us passive bystanders. Even as it helps us manage higher orders of complexity, it may leave us no better informed about the world. Some believe it may be fundamentally changing the way we understand the world collectively.

Zavain Dar (venture capitalist at Lux Capital and lecturer at Stanford) is investing in companies that practice what he calls New Radical Empiricism (NRE). This, he claims, is a new approach to science — a new way in which we make collective models of the world. In his telling, AI could drive changes in the way we explore the world that leave collective models behind entirely.

Our current system is premised on there being a ground truth that is knowable to human minds and which our cognitive systems can represent. In other words the current system is realist (there are actually rules/ground truth out there) and reductionist (they can be reduced to our level of explanation). We generate theories about the world which we can apply across situations. Once we have arrived at the law of gravity, figuring out how a physical object (of appropriate scale) will be attracted to another is a matter of applying this law.

By contrast, NRE does not assume, or seek, a ground truth, or theory. Instead, the NRE approach creates predictions about the way the world will be based on crunching data — prior empirical facts. It also does not assume that it is necessary to be able to explain the world. And it does not assume that the results have to be humanly knowable.

“Look at data, model data, rinse, repeat, assume nothing.”

If Newton had taken this approach to gravity, he would have built a model that, after processing a vast amount of data about falling objects, could predict every other falling object. There would be no Theory of Gravity to be taught in schools, simply a model into which you enter the objects you are interested in. The falling of apples would be as predictable as it is now, but still mysterious.
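The contrast can be made concrete with a toy sketch (illustrative only, not any real pipeline): fit a generic curve to noisy observations of falling objects, with no physics assumed, and it predicts new falls as well as the law it never discovers.

```python
import numpy as np

# "Look at data, model data, rinse, repeat, assume nothing": fit a generic
# curve to observations of falling objects instead of deriving d = 1/2*g*t^2.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 3.0, 500)                        # observed fall times (s)
d = 0.5 * 9.81 * t**2 + rng.normal(0, 0.05, t.size)   # measured distances (m)

# A theory-free model: a degree-4 polynomial, chosen only because it fits.
coeffs = np.polyfit(t, d, deg=4)

# The model predicts a new fall accurately, yet encodes no "law of gravity".
predicted = np.polyval(coeffs, 2.0)
theoretical = 0.5 * 9.81 * 2.0**2                     # 19.62 m, for comparison
print(round(predicted, 2), round(theoretical, 2))
```

The fitted coefficients answer any query in the observed range, but nothing in them can be taught as a law; that is the NRE trade.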

While Chollet might call this approach a “cargo cult” (he originally said it of the field of AI), Google Director of Research Peter Norvig refers to the realist, reductionist system that builds theories as “Platonic mysticism”.

Lost without a map

The NRE seems to be creating AI that, while it improves our ability to get the results we want, need not simultaneously improve our understanding of the world.

Donald Norman, the director of the Design Lab at the University of California, San Diego, talks about cognitive artifacts — machines or devices that amplify our cognitive capacities. These artifacts are “a store of memory, a mechanism for search, and an instrument of calculation”.

All such devices — an abacus, a calculator, even pen and paper — that play a role in our extended cognition amplify our capacities. However, David Krakauer, Professor of Complex Systems at the Santa Fe Institute, points out that whereas some of those devices improve our own understanding of the world, others merely give us the answer; our understanding is not improved and, through dependence, may even be hampered.

When I had to read maps to get around, I had a spatial awareness of the city, a seemingly instinctive sense of where the river was at any time, and an ability to pick out salient features of the world around me to assist with wayfinding. So when I lost the A to Z, I was not lost. These days, using GPS map apps, I often get to places faster than I would have otherwise, but I am being led by the nose, and I panic, lost without a map, if the phone battery dies.

It seems that the NRE Zavain Dar is observing is the latter type of cognitive artifact. In this case the artifact is not just a machine, a single gadget, but the whole process by which we collectively model the world. Under this model, we will have more complex — and potentially better — answers to an impenetrable world.

AI propaganda 2.0

As if impenetrability and complexity were not enough, AI is also creating the tools to undermine the trust we can have in our sources, in our very networks. This could permanently sever the truth-discovery element of communication. Truth discovery is the very mechanism that propaganda overrides, replacing it with the establishment of tribal allegiance as the purpose of communication.

According to Sean Gourley, speaking at CogX, the information campaigns of the Trump election and Brexit were primitive, despite the impact of the outcomes they supported. Gourley is not in the NRE camp and sees AI as able to help us better understand the world. His PrimerAI creates intelligence summaries, and making his algorithms interpretable is important so that human analysts can tweak the results to suit them.

However, he notes that AI technology could be deployed to unleash a much more effective propaganda through:

  • Mass production of natural language content to be A/B tested across social networks to refine the message. I mentioned previously how forums on the internet like 4chan’s /pol/ acted as generators of vast amounts of memetic content whose amplification was decided by the “organic” method of how viral each piece managed to get. In an AI-enabled model, the number of messages created would be orders of magnitude higher, limited not by the number of participants in a forum, but by the compute power applied. At the same time, using data provided at the rate of 6,000 tweets a second, these millions of messages could be tested for virality in models of social media before even being released.
  • Modeling opinion dynamics: bringing AI modelling to how people form and change opinions. Gourley imagines people’s opinions as moving through space like a murmuration of starlings, and describes the use of these tools to pull the edges, and then the rest of the swarm, in a certain direction.
  • Targeted injection into networks. Being very specific about where messages are inserted into networks to achieve optimum diffusion. As far as I can tell, this could include both selecting targets for optimum diffusion and, potentially, changing the shape of the network in ways that enhance the transmission of the message. See more on the spread of behaviours through human networks below.
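The first of these points describes an optimisation loop. A minimal sketch of that dynamic (all numbers hypothetical, no real platform data) is an epsilon-greedy bandit that shifts impressions toward whichever message variants spread best:

```python
import random

# Illustrative only: "A/B test" many generated message variants, mostly
# showing the best performer so far, occasionally exploring a random one.
random.seed(42)
N_VARIANTS, N_IMPRESSIONS = 100, 20_000
true_share_rate = [random.uniform(0.01, 0.10) for _ in range(N_VARIANTS)]
shares = [0] * N_VARIANTS
shown = [0] * N_VARIANTS

for _ in range(N_IMPRESSIONS):
    if random.random() < 0.1:        # explore: test a random variant
        v = random.randrange(N_VARIANTS)
    else:                            # exploit: push the best performer so far
        v = max(range(N_VARIANTS),
                key=lambda i: shares[i] / shown[i] if shown[i] else 0.0)
    shown[v] += 1
    shares[v] += random.random() < true_share_rate[v]

# The adaptive campaign beats blanket (uniform) messaging on share rate.
print(sum(shares) / N_IMPRESSIONS > sum(true_share_rate) / N_VARIANTS)
```

Scaled to millions of machine-written variants, this is the refinement loop Gourley warns about: the loop needs no understanding of the message, only a signal of what spreads.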

Facebook to the rescue?

Given my position on the poison of captured identities, and their role in accelerating the spread of propaganda, I never thought I would see a role for the social network in buffering us from the excesses of AI.

I previously talked about tending to our mind and our mental models of the world as a garden. Now I wonder whether we should also tend to our networks like a garden. After all, the spread of ideas and actions (good and bad) through our networks might have as strong an impact on us as our own models.

Just a snippet from Nicky Case’s awesome game on human networks

In this example we can see that a certain behaviour, say binge drinking, transmits between people if a certain proportion of their friends binge drink (50% in this example). The structure of that network plays a role in the likelihood that someone will drink. In the initial setup, the person at the bottom right does not become a binge drinker. Adding one connection within the network changes the dynamics, and everyone becomes a binge drinker.

In the positive example, there is a threshold (20%) for passing on a positive behaviour — volunteering. In this case just one extra connection allows the behaviour to spread.
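The threshold rule driving both examples can be sketched in a few lines (a simplified toy, not Case's actual simulation): a person adopts a behaviour once the fraction of their friends who have it reaches the threshold.

```python
# Complex contagion: a node adopts a behaviour once at least `threshold`
# of its friends have adopted it.
def spread(edges, seeds, threshold):
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node in {n for e in edges for n in e}:
            if node in adopted:
                continue
            friends = [b if a == node else a for a, b in edges if node in (a, b)]
            if friends and sum(f in adopted for f in friends) / len(friends) >= threshold:
                adopted.add(node)
                changed = True
    return adopted

# A line network 0-1-2-3: with a 50% threshold the behaviour sweeps the line.
edges = [(0, 1), (1, 2), (2, 3)]
print(sorted(spread(edges, seeds={0}, threshold=0.5)))  # [0, 1, 2, 3]

# Adding edges changes the dynamics: two extra non-adopting friends dilute
# node 3's friend group below the threshold and the cascade halts at node 2.
print(sorted(spread(edges + [(3, 4), (3, 5)], seeds={0}, threshold=0.5)))  # [0, 1, 2]
```

The same mechanism runs both ways: one rewired edge can unleash a cascade or stop it, which is exactly why the shape of the network matters as much as the message.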

Imagine if we could tell Facebook: “Hey, I want to stop binge drinking.” Facebook already alters our feeds to keep us on site, driving algorithmic addiction. Imagine if it could alter our feeds to help us get the results we want. “Hey Facebook, I want to lose weight.” It could then alter my feed so that it prioritised stories from friends that relate to their own successes, aims and targets on this front.

Facebook could also tell me when new behaviours or attitudes arrived in my network and their origin. “Hey, Gustavo, you might notice more radical vegan memes because your ex-girlfriend just friended a member of ‘radical vegans are us’.” Perhaps thus informed about the residents of my network garden, I can guard against being manipulated.

Or Twitter could help me visualise where new trends are coming from, not just that they exist. “Hey @goldengus, we noticed that a lot of people are suddenly talking about #anger; these three accounts, which are followed by these people in your network, seem to be the source.” Or: “Hey @goldengus, you follow the same accounts as X% of the people you follow; maybe it’s time you added some new perspectives to the mix.”


Credit where it’s due:

I’m going to make a point of saying: go play Nicky Case’s game on the Wisdom and/or Madness of Crowds, and chuck some coin Nicky’s way (I did). The little glimpse above is just a taste of the crowd-system goodness in store. It rewards your time.

Hat tip to Azeem Azhar at Exponential View — they put together the Cutting Edge stage at CognitionX 2018 where both Zavain Dar and Sean Gourley spoke.

 


Propaganda as Adversarial Attack

At the intersection of neuroscience and computer science the metaphor of mind as machine is in full swing.

On the one hand we are conceiving of our mind as algorithms and on the other we are building intelligent machines based on the way the mind seems to work.

As part of understanding how to build intelligent machines, and of protecting themselves from those machines, people are developing ways of tricking algorithms into categorisation errors. The process involved in this deception is known as an adversarial attack, and it may tell us something about how propaganda works on us.

I predict a riot

The mind is a prediction machine, and so as we try to make our machines more mind-like we are working on their ability to predict. The mind builds models of the world to improve our predictive abilities and reduce the energetic cost of sensing everything. Our awareness at any given time is based mainly on these internal models rather than on what is “streaming” from the outside world. As David Eagleman notes:

“What we see in the world is what we think we’re seeing out there. Most of vision is an internal process happening completely within your brain and the information dribbling into your retinas is just a small part of what you’re actually perceiving. About 5% of the information of your visual stream is coming through your retinas — the rest is all internally generated given your expectations about the world.” (26:40)

One explanation for the LED incapacitator riot-control weapon could be that, because it directs randomly shifting pulses at victims, they have to exert too much energy actually deciphering what is in front of them, as their mental models are unable to provide useful predictions.

Surprise!

Another way to explain the role of the mind is that the mind is a surprise management machine.

The mechanisms of the mind operate in a way to reduce surprise while remaining alert to it. Surprise is an energy-intensive mental state but also a good guide to both danger and the need to update mental models. It is important that we do not put unnecessary energy into regular but non-threatening occurrences, but that we are alive to threatening occurrences with whatever regularity they occur.
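One common way to make “surprise” precise, which seems in keeping with this picture, is Shannon’s surprisal: the less probable your model judged an event, the more bits of surprise it carries.

```python
import math

# Shannon's surprisal, -log2(p): an event judged near-certain carries
# almost no surprise; a rare one carries a lot, and costs more to process.
def surprisal_bits(p):
    return -math.log2(p)

print(round(surprisal_bits(0.99), 3))   # an expected event: ~0.015 bits
print(round(surprisal_bits(0.01), 3))   # a rare event: ~6.644 bits
```

A well-tuned model spends little on the expected and flags the rare; propaganda, as described below, inverts that budget.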

This is not a Panda

One of the fields of AI that is rapidly developing is visual recognition. It has already been deployed in everything from driverless cars to tracking criminals through facial recognition. Several of the ways in which AIs work, including “state-of-the-art neural networks,” can be tricked into misclassifying an image that is only slightly different to an image that it would correctly classify.

In the example below, the machine correctly classified the picture on the left as a panda. The system was then shown the same image with the added “noise” shown in the middle picture, such that the first two images combined still resulted in a panda-looking image — that on the far right. I cannot tell the difference between the image on the left and the one on the right, and it seems straightforward to say they are both images of pandas.

However, that imperceptible difference was enough to make the machine confidently misclassify the picture as a gibbon.

Adversarial attack example from Explaining and Harnessing Adversarial Examples
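The paper’s fast gradient sign method can be sketched on a toy linear classifier (a minimal illustration with made-up numbers, not the panda network itself): nudge every input dimension by an imperceptible epsilon in the direction that increases the loss, and a confident prediction flips.

```python
import numpy as np

# FGSM sketch on a toy linear classifier: many tiny per-dimension nudges
# add up to a large shift in the logit, flipping a confident prediction.
rng = np.random.default_rng(1)
dim = 10_000
w = rng.choice([-0.01, 0.01], size=dim)       # toy model weights
x = rng.normal(0, 1, size=dim) + 8 * w        # an input classified "panda" (class 1)

def p_class1(v):
    return 1 / (1 + np.exp(-(w @ v)))         # sigmoid of the logit

# Gradient of the loss -log p(class 1) with respect to the input: (p - 1) * w.
grad = (p_class1(x) - 1) * w
x_adv = x + 0.15 * np.sign(grad)              # epsilon = 0.15 per dimension

print(p_class1(x) > 0.9, p_class1(x_adv) < 0.1)  # True True: tiny noise, confident flip
```

The per-dimension change is small against the input's own scale, yet summed across thousands of dimensions it moves the decision entirely; that is the panda-to-gibbon trick in miniature.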

Making errors like this in the lab is one thing, but this vulnerability of machine learning has real-life implications. Activists can take advantage of it to fool facial recognition technology. With the simple addition of black-and-white art stickers, a driverless car (below) can be fooled into thinking that a stop sign is actually indicating a 45mph speed limit.

In these cases, in order for the attack to work, the machine has to fail to detect that it is receiving a novel input, the first stage in identifying surprise.

We like to think that we would not fall for such simple tricks, that our human intelligence somehow frees us from this.

Nothin’ proper about ya propaganda

Despite the arguments over whether the word intelligence can be applied to computers, our intelligence, both individual and collective, is far from immune to adversarial attacks.

In machines, the adversarial attack leads to misclassification of something, the inability to see it for what it really is.

When humans are subject to adversarial attacks in the form of propaganda, they lose the ability to see something for what it really is. This is because propaganda distorts the image in a way that makes a correct identification hard.

Our personal psychology is the element of the machine that propaganda exploits, claims Jacques Ellul. Faced with a situation we do not understand and inhabiting a world of decisions made beyond our comprehension or ability to influence, we seek consolation from that despair, and the simple messages of propaganda provide comfort.

In the blog post Distributed Memetics, Julien Delacroix traces Ellul’s work on propaganda and highlights that at its core, propaganda is:

a pervasive tool of social control that is rooted in individual psychology... The individual psychology is the desire to be included in a group. The emotional stakes of this inclusion is substituted in for whatever substantive issue would otherwise be up for reasoning and decision.

Propaganda disrupts the role of communication as a method for relaying or discovering information. Instead, propaganda turns instances of communication from opportunities to establish the reality of the world into opportunities to establish in-group identity.

The adversarial technique at play is turning an information channel into an inclusion/identity channel. The result is that instead of recognising the information for what it is and identifying it accurately, a participant in propaganda can’t see the information for what it is and instead responds as is expected to achieve inclusion.

Propaganda, notes Delacroix, “reframes the stakes of the issue as inclusion or exclusion — regardless of whether this framing is really relevant to the problem at hand and often despite it clearly being irrelevant — and it provides clear answers to the question of what will lead to inclusion.”

Fools follow rules

Jason Stanley, author of How Propaganda Works, agrees that propaganda is not simply malicious or biased communication. It can work for good or bad causes, but it is a part of a mechanism “by which people become deceived about how best to realize their goals.”

Stanley identifies another adversarial mechanism: the stereotype, or “social script.” The social script is a part of a mental model that is so strong that novel inputs are not recognised, and are instead reduced to a previous category. This is an opposite example to the machine that sees a panda and identifies it as a gibbon. In this case, the human ignores the deviation from their existing model and carries on as if their previous model was accurate.

As an example of social scripts, he notes in an interview with Roxanne Coady that “Every German raised under National Socialism has only three pictures in their head when they hear the word ‘hero’: a racecar driver, a panzer truck driver and a storm trooper” (22:12) and that in the US there have been attempts to create a strong link between immigration and crime or blackness and violence.

Social Script: “Mexicans are rapists and animals”

Input: Mexican doing landscaping

Output: “You’re a rapist and an animal”

Said it was blue, when ya blood was red

These techniques are doubly adversarial.

Firstly, they prevent the person who is either lured into a tribal channel or has a social script running from seeing the world the way it actually is.

Secondly, because the participant in the propaganda is unaware that they are interacting in the tribal channel or are not aware that the social script distorted their experience of reality, they do not update their mental model.

Propaganda disrupts the operations of mind as a surprise management machine. This machine needs to reduce the attention/energy it gives non-threatening novel inputs, and needs to improve its ability to detect threatening novel inputs.

Propaganda does the opposite, particularly what Stanley labels authoritarian propaganda. It heightens the fear and outrage about others whose threat is disproportionately inflated, and so they become a huge attention/energy drain.

At the same time, it dulls the surprise response to incidents which should generally be alarming, which merit closer inspection, and should update our mental models accordingly. Our mental models are doing the heavy lifting of perceiving and sensemaking.

Manage Surprise

Given that we are subject to, and participants in, propaganda:

What can we do to ensure that our mental model is the best it could be? That it best tracks reality?

How do we avoid wasting energy/attention on inputs that are neither as surprising nor as threatening as they seem?

How do we avoid dulling the surprise of threatening inputs and failing to update our models?

This is a call for audience participation.

One answer is to pay attention closely to our responses. Let the idea that you are “training” a model in your brain seep in.

Observe your interactions: Did you communicate from the perspective of seeking inclusion, or from the perspective of trying to see things the way they really are? (Don’t be ashamed of seeking belonging — that’s what the game is ultimately about — just be aware of which motivation is guiding your thinking.)

Feel suddenly surprised/outraged? Assess whether the message genuinely merits it and what the source was.

Feel blasé and find yourself thinking “it’s just the way it is”? Look further.

Be curious, read across a range of perspectives. Seek the kind of surprise that updates your mental models.

Which social scripts are you running? Which people make you think “they’re all the same” — Immigrants? Experts? Politicians? Gun owners?

Be the gardener of your mind.

Inspiration for this article came from the Distributed Memetics post and my two days at CognitionX, where Alex Kaula introduced me to the notion of surprise avoidance.



Releasing Captured Identities

Despite all my fears about data ownership, privacy and algorithmic addiction, there is one reason I haven’t left Facebook: they have my identity. To the extent that who I am is a node in a network, that who I am is revealed through my relationships and connections, who I am is controlled by Facebook. My identity is captured.

If I leave, I will lose access to, and the ability to interact with, a few organisations whose only channel is Facebook, and some people with whom I only connect through the platform. I will cease to exist to them; they will cease to exist to me. If I leave, I will be unrecognisable to those sites and organisations I log into with my Facebook ID.

They’ll just be one follower less. One less friend. Out of site…out of Stalin’s playbook.

Locked-in

In his account of ‘network effects’, W. Brian Arthur (who coined the term) notes that “a technology that by chance gains an early lead in adoption may eventually ‘corner the market’ of potential adopters, with the other technologies becoming locked out.”

While would-be competitors are locked out, the user base is also locked in. In the digital realm, that lock-in is accompanied by tracking, monitoring, and being offered up for manipulation. Because competitors are locked out and users are locked in, we tolerate being treated as the product.

As the platform modifies users’ reality for its own ends, the user comes to see the world through the platform’s prism.

Visibility is a trap

In Discipline and Punish: The Birth of the Prison, Michel Foucault notes of the prisoner in Jeremy Bentham’s panopticon (a central tower that can see into all the prison cells but cannot be seen into): “He is seen, but he does not see; he is the object of information”.

Another feature of an architecture in which the subject is always visible while the patrolling power is unobservable, is that it ends up institutionalising the subject. The prisoner internalises the rules and norms of the prison guard and the power structures beyond.

“He who is subjected to a field of visibility, and who knows it, assumes responsibility for the constraints of power; he makes them play spontaneously upon himself; he inscribes in himself the power relation in which he simultaneously plays both roles; he becomes the principle of his own subjection.”

There is evidence of the platforms letting users know they are watching — an ingredient in driving internalisation of norms. Content producers from across the political spectrum claim that their posts are reaching fewer followers on Facebook, that they are being blocked from Twitter, that their YouTube accounts are being demonetised, and that Amazon is now issuing warning letters to people who don’t shop right. The threat is virtual exile. Because the platforms also hold our identity, exile is loss of digital personhood.

According to Foucault, users will internalise and enforce the norms upon themselves (if they aren’t already).

In the case of the platforms that algorithmically tweak user experience, a user’s subconscious is not the only entity self-policing. Their avatar, their virtual identity, is self-policing for them. The data points they create feed into an algorithm that allows the platform to present them with the reality most likely to lead to a sale. Both their subconscious and their virtual self are now collaborating with the power structures.

Monopolice

These platforms are behaving like monopolies, or utilities, in that they reduce user choice. By locking out competition and locking in users, they end up creating uniform experiences. There are of course differences in terms of the content of conversations, content consumption and contentedness of users, but the product is the same. The effect of lock-in is to reduce choice.

Lock-in is achieved through the rapid capture of users. By moving early, the big firms that surveil and monetise our data have starved new entrants into social platforms of the vital ingredient — users. Platforms built to connect users wither if they cannot attract enough users to generate network effects.

Exchanges as examples

In facing the platform problem, social platforms are analogous to (crypto)currency exchanges. For exchanges — where people sell to or buy from other people — the size of the user base translates into liquidity in the market. Higher liquidity invites further participation, better price discovery, and bigger transactions. A trader wouldn’t want to accumulate more of a currency than they knew they could then dispose of. Since exchanges make their money through fees as a percentage of trades, the volume and size of trades are important to their model.

Securing users, and therefore the liquidity they bring, puts the exchange ecosystem in the same spoils-to-the-victors pattern as the early winners of social media. This feedback loop strengthens the position of early winners while making it difficult for new participants to join in.

One solution to the problem of starting a new exchange without users is to use federated liquidity pools. These pools can be accessed by any new exchange, giving the upstarts instant liquidity. In other words, the first new users to arrive at the exchange are not faced with empty space, an inability to trade, or prices that are wildly off the mark. Instead they can trade across participating exchanges; their orders are matched with buyers and sellers across the exchange landscape.
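The mechanics can be sketched with a toy shared order book (purely illustrative, not the actual API of any liquidity protocol): orders posted through any participating exchange rest in one pool, so a trader on a day-one exchange can match against a counterparty who placed their order elsewhere.

```python
# A toy federated liquidity pool: every participating exchange posts its
# orders into one shared book, and matches can cross exchange boundaries.
pool = []  # shared resting orders: (side, price, exchange)

def place(side, price, exchange):
    """Match against the best resting counter-order anywhere in the pool."""
    counter = "sell" if side == "buy" else "buy"
    candidates = [o for o in pool if o[0] == counter]
    # A buy matches a sell at or below its price, and vice versa.
    candidates = [o for o in candidates
                  if (o[1] <= price if side == "buy" else o[1] >= price)]
    if candidates:
        best = (min(candidates, key=lambda o: o[1]) if side == "buy"
                else max(candidates, key=lambda o: o[1]))
        pool.remove(best)
        return best[2]            # the exchange the counterparty came from
    pool.append((side, price, exchange))
    return None

place("sell", 100, "established_exchange")
print(place("buy", 101, "day_one_startup"))  # prints "established_exchange"
```

The startup's first user trades immediately against liquidity it did not have to build, which is the whole point of federating the pool.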

Hydro Protocol and the 0x Project offer these liquidity pools to new and established exchanges. Their solution could see exchanges collaborate on the total market, while each maximises its appeal to its particular audience. This behind-the-scenes connection allows all of them to benefit from the liquidity, while each is able to bring new participants to the table using its own distinctive user experience, pricing structure, branding and outreach.

In this model new entrants can focus on their product, innovation and user experience, not simply user acquisition. The result is that user choice is allowed to flourish, which could increase total participation in the market. With user choice comes freedom from internalising platform norms, and freedom to find aligned norms instead.

Lock-out for good

There is a further feature of the Hydro Protocol that could illustrate a way to liberate identity.

Exchanges typically require the user to deposit their financial capital. The exchange becomes the custodian (or captor) of it.

However, exchanges using the Hydro Protocol are not custodians/captors of the user’s tokens. The tokens don’t leave the user’s own wallet until the orders are matched. At that point the tokens they’ve sold disappear and the corresponding amount of the tokens they’ve bought appears. The tokens are locked out of the exchange. When orders are matched, the exchange earns fees for its role in matchmaking.

The trades themselves exist independently of any single exchange; they are not locked in, because they are recorded publicly on the blockchain. Once recorded, the trade would go through even if the initial exchange disappeared. While there is still room for competition to attract users on the basis of user experience or pricing, this ecosystem acknowledges that there is a pre-competitive commons — the health of the marketplace and user choice.

Social wallets and Federated Identity Pools

Could this approach help liberate our identities and offer us genuine choice? Could future social media platforms simply be the interface I use to record a comment, post a photo, or tweak a meme, visible to whoever I choose across a whole range of platforms?

In this model, as a user I would be part of a federated identity pool, so that any of my public posts, photos and content would be visible to a user of any new platform in the ecosystem. This would allow a proliferation of user experiences, because new platforms could offer meaningful content, existing contacts wherever they are, and opportunities for interaction from day one. It would also mean that creating experience niches would not necessarily lead to ideological bubbles.

Since all my interactions sit in my personal “social wallet”, I remain in control of them. If I decide to change platform, I can still take (or port) all my interactions with me.
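A hypothetical sketch of the idea (every name and structure here is invented for illustration): posts live in the user's wallet, and a platform is just a view over a pool of wallets.

```python
# A "social wallet" sketch: the user keeps custody of their own content,
# and any platform is merely a rendering of the federated pool of wallets.
class SocialWallet:
    def __init__(self, owner):
        self.owner = owner
        self.posts = []

    def post(self, text, visible_to="public"):
        self.posts.append({"text": text, "visible_to": visible_to})

def platform_feed(wallets, interest):
    """Any brand-new niche platform can render a meaningful feed from day one."""
    return [(w.owner, p["text"]) for w in wallets for p in w.posts
            if p["visible_to"] == "public" and interest in p["text"]]

gus = SocialWallet("goldengus")
gus.post("Carving a longbow from ash this weekend")

# A new bowyer-themed platform sees existing public content immediately,
# and if gus switches platforms, the posts travel with him.
print(platform_feed([gus], interest="longbow"))
```

Because the wallet, not the platform, holds the posts, switching interfaces costs the user nothing; the platforms compete on the view, not on custody.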

If my main interest is in carving myself a longbow and I don’t care too much about seeing ads, my portal would have primitive-technology graphics, curate a list of bowyer groups and offer me adverts for shooting ranges. Alternatively you could opt for a paid news service to avoid adverts altogether and see a stream of user comments on the news of the day. But those different versions could engage with each other, say around a news story of arrow gambling in the markets of Northern India, from within their own user experience. If either of us changes user interface, we keep our relationship.

During teer betting, 50 archers take aim at a single target. Around them in the marketplace, bookies at kiosks compete to take bets on how many arrows will hit the target, winning punters over on the basis of their odds, personal rapport and design flair.

As well as freeing our identities, federated identity pools could allow each platform to perfect their aim, while we could place our social bets with the one that most took our fancy.

The “Oh the irony” update: after posting on Twitter, I saw this sponsored tweet in my timeline.

Originally published at diglife.com on June 9, 2018.



Truth: Shills and Fudders

Truth is a useful guide to action and coordination. As an accurate enough reflection of reality, truth provides us with the information necessary to make decisions and take actions that contribute to the achievement of our goals. It is also the basis upon which our cooperation and collaboration with others can take place — we need a shared understanding before we can progress jointly.

But truth, and our connection to this external feature of the physical and social world, is being redefined.

Frogs and fairytales

Our understanding of reality has never been perfect. Frogs, honed over millions of years to catch fast-moving insects at the snap of a tongue, can now be tricked into lurching at digital ants on a phone screen. Their ability to detect the truth of the world was good enough for a quarter of a billion years; better sight or processing would have come at an unnecessary cost. Equally, our own connection is good enough, but incomplete and easily deceived.

It has been good enough for us to rise as a species, adapt across habitats, improve our lives, launch missions to the moon and alter our genetics. Despite all these achievements it is lacking. If our senses were perfectly honed there would be no room for disagreements over the facts of the world, and yet disagreements over basic facts seem to be peaking. At times it feels like we are inhabiting different worlds with different ontologies. The most popular video on YouTube suggested that the student survivors from Parkland were actually crisis actors and that the whole attack was staged in an attempt to pass gun legislation.

Gospel truth

At some point, as information about the world beyond the limits of our own perceptions became increasingly relevant to our actions, we began to rely on intermediaries — the media, religions, ideologues — to provide information about the world. These extended networks became part of our distributed cognitive assembly. These centralised nodes were given a privileged seat as oracles, sources of truth we could depend on independent of our own observations and positions. Their reported observations were taken largely as true, as a guide for individual decisions and collective coordination.

But as the broadcast era makes way for the digital one, the motives of these oracles have come into question, and alternative versions of truth, and of arriving at the truth, are emerging. Instead of central oracles of truth, the media and religions have come to be seen as mouthpieces for certain interests — their owners cannot be trusted to have either interests aligned with our own or a neutral position.

Consensus truth for good or shill

Instead we are moving to a consensus version of truth. We still rely on a distributed cognitive assembly to arrive at the truth, but in this version what establishes something as a fact of the universe is the volume and amplification it receives across the whole network, not from centralised nodes. Because activation across the network plays a role in determining the truth, as individuals we now have a role in establishing it in a way not seen before. And we are not just citizen journalists or prosumers adding an observation to the network; we are also increasingly involved in advocating for its elevation to canon.

The clearest place to see this is in the communications of the cryptocurrency space. Here there are no “fundamentals” of the value of a coin, no true value — few have any live use cases, sales or users to base calculations upon. Instead, sentiment dominates these markets. Where sentiment dominates, sentiment equals truth. As sentiment builds it brings new network participants into the consensus, creating a feedback loop in which the consensus view becomes more likely to manifest.
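
The feedback loop described above can be shown in a toy simulation: price is driven purely by sentiment, rising prices recruit new participants, and new participants lift sentiment further. All the parameters and update rules below are illustrative assumptions, not a model of any real market:

```python
def simulate(steps, sentiment=0.1, participants=100, recruit_rate=0.5):
    """Iterate a sentiment-only market for a number of steps."""
    history = []
    for _ in range(steps):
        price = sentiment * participants            # sentiment *is* the value
        joiners = int(recruit_rate * sentiment * participants)
        participants += joiners                     # consensus grows...
        sentiment *= 1 + joiners / participants     # ...and feeds back
        history.append(price)
    return history

prices = simulate(10)
# With no fundamentals anchoring it, the loop only runs one way.
assert all(b >= a for a, b in zip(prices, prices[1:]))
```

Run downwards, the same loop describes FUD: a fall in sentiment shrinks the active network, which depresses sentiment further.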

In this world, every participant has a stake, every node has a coin, every node has a reason to manipulate sentiment. All participants in the crypto media space are also vested interests.

In this world coin holders shill their coin — emphasise (or fabricate) good news to boost its price. On the other hand people shorting a coin (betting on a crumbling price) will spread FUD (fear, uncertainty, doubt) in order to bring its price down. By way of acknowledging the epistemological dishonesty, many comments by highly followed individuals are appended with DYOR (Do Your Own Research).

The former gospel institutions of news are now seen as shills or fudders for their own interests. In the vacuum left behind as their authority drains away, we have all become shills and fudders: for the candidate, for the outcome, for the team, for our public self, seeking through our networks to influence not only the opinions, but the truth.

Gospel vs Consensus

In the gospel version of truth the facts of the world were the basis for action and coordination. In the consensus version of truth, there is coordination to establish the truth. The coordination is driven by the existing positions of participants and coherence forms around efforts to spread consensus.

Gospel truth: coordination is made possible by truth.

Consensus truth: truth is made possible by coordination.

One downside to this new mechanism of distributed cognition is that divergence on the “facts” leads to discoherence and loss of membership. There is a disincentive to challenge the truth: because establishing it is the cohering principle, failing to uphold the consensus is grounds for exclusion.

Artificial truth

There is a tension between network consensus truth and the sort of truth that emerges from, or is the basis for, artificial intelligence work. The AI operates from an internalised gospel state, created either as it establishes the reality of the world step by step (in reference to its utility) or based upon the “ground truth” offered up in its training.

While consensus truth is the result of a dynamic, open process in which everyone is a participant, AI truth is a private process to which even the designers of the AI do not have access. The AI takes actions which cannot be examined to establish the truth by which it seems to be operating.

Designers do have control over the “ground truth” — the “objective data” the AI is trained on — by, for example, hand-tagging objects for a visual recognition AI. But the AI may well build its own inferences and find proxies for that objective data, and there are increasingly AIs that do not train on data sets but instead work from rules.

Nothing but…

It seems we are moving from depending on distributed cognition to reveal truth, to a new relationship with truth. We are moving into an era in which we are participants in a network seeking consensus, or in which we admit into our cognition the actions of an AI as if it held the truth, even though the truth itself is kept hidden.

Yeah, I’m shilling this position for now … what do you think?

Interact with this story on Medium.


Jeremy’s Bomb

Jeremy Corbyn was attacked for not wanting to launch a nuclear attack. When would you launch one?

The trial of Jeremy

Watching Question Time with Jeremy Corbyn, there were moments when I thought he had matured immensely as a politician during his bruising stint as party leader.

Then the audience went Nuclear

Corbyn went on the back foot, trying to bat away the question. His accidental rope-a-dope revealed a bloodthirsty streak in Conservative voters, but I couldn’t help feeling there was a better answer. So I thought of a better question.

When do you drop the bomb?

Under what circumstances would I feel comfortable dropping a nuclear bomb?

This is an open list and thought experiment.

Feel free to contribute the specific conditions under which you would be OK with… mass murder of innocents, ecosystem obliteration, a chain reaction of instability across the globe and, if your enemy hadn’t already launched, putting your own people up as collateral for a direct retaliatory nuclear attack.

The #nuclearcode

We have failed with all diplomatic, economic and social engagement with the enemy.

We have lost faith in the capacity of the armed forces, including our cyber warriors to incapacitate enemy nuclear weapons.

We are certain that our use of nuclear weapons will render their nuclear weapons unusable.

We are certain that our use of nuclear weapons will degrade the chain of command such that a retaliatory nuclear strike can not be launched.

We are certain that degrading the command structure will not result in the edges of the enemy’s system launching retaliatory attacks as they have been programmed to.

We are certain that the enemy is not on our territory, making targeting them… tricky.

We have made provision for the evacuation of our people from any of our high-value targets — cities, infrastructure, bases.

We have made provision for continuity of food and water supply.

We have made provision for continuity of services away from high value targets.

We have reliable evidence that our enemy is preparing a single attack on civilians that will result in 10,000 deaths or more (where does this number come from?).

The enemy chain of command we are targeting represents or has the support of the population where it is based.

Fallout Calculator

And just in case it comes to that moment when you have to decide and you have not met these conditions, you might find it useful to figure out where to escape to when the nuclear bombs come right back at ya: Fallout Calculator

Tell me what other conditions need to be met — and take the survey below.

Interact with this story on Medium.


Hydra Campaigning

The Conservative party has learned from the last two populist upsets and is deploying a campaign designed to be as noncommittal and unaccountable as possible. And, as that backfires, it could be preparing to wriggle around democratic principle by readying a rapid leadership replacement.

The Trump and Brexit experiences deployed new kinds of campaign.

Both campaigns understood that what moves people is not numbers and facts, but sentiments, identity, story. Make America Great Again. Take Control. Those ideas resonated deeply with people who felt out of control, people who felt their country was no longer great, no longer sovereign.

While Trump did provide some ad-hoc, base-pleasing detail, the Brexit campaign effectively avoided detail and commitment entirely by presenting a hydra of spokespeople and advocates. The “official” campaign itself was fronted by various egos (at least one of whom didn’t even want Brexit but was a political opportunist). On top of that there was the unofficial campaign of Nigel Farage, and plenty of promotion by professional politicians rising to their platform despite lacking the position or power to enact what they said (Daniel Hannan). Pundits such as Louise Mensch added to the noise.

The vacuum of clear leadership allowed all the possible versions of Brexit to be scattered to the four winds, each to find the nook in which it could take root. It also meant nobody had to take responsibility for the claims they made.

Between them they issued promises (or implied them so strongly that to deny it was deliberate would be disingenuous) such as:

  • £350 million for the NHS
  • Keep free movement
  • No free movement
  • Membership of the Single Market
  • Rejecting the Single Market
  • End VAT on energy bills
  • Norwegian Model, Canadian Model, WTO rules….. the list goes on

In this general election the Tories have picked the slipperiest parts of both those campaigns.

They opted for an uncosted manifesto full of platitudes and “narrative”, and then, as soon as it was published, started changing their minds about it. That initial manifesto is no longer the blueprint for governing; it is stripped of mandate.

Equally, their pronouncements cannot be what they are held accountable for, as they contradict each other on major policy areas such as income tax.

Recently it even became clear that there are already moves within the Conservative Party to unseat Theresa May. Steve Hilton, a former head of strategy for David Cameron, said the Prime Minister should resign. That kind of statement represents more than an isolated sentiment; in all likelihood it indicates that even in victory Theresa May will find her leadership severely challenged from within.

We are voting for a party whose leadership may quickly change. The Queen’s speech could be full of non-manifesto policies to be enacted by an unelected leader.

This is troubling. The electorate is getting a bait and switch and in the process accountability is disappearing.

Edit: I just saw this clip of Amber Rudd silencing a candidate at a hustings when he started to question her role in Saudi arms deals… The other part of the slipperiness in this campaign is the refusal to answer questions — a typical political technique taken to extremes.

https://www.youtube.com/watch?v=TEcMW6RmC_w

Interact with this story on Medium


Your pension is paying for your AI / robot replacement

If your pension fund is being invested in companies working on automation or robotics, you are paying for your own workplace obsolescence whether you are a driver or a lawyer … or a fund manager!

Example:

Transport for London has 15,327 members of staff in operational roles including train drivers, drivers and network controllers.

In 2015 Transport for London chose BlackRock to manage £3.8 billion of its Pension Fund.

But… BlackRock has an ETF that tracks robotics and automation, so TfL pensions could be invested in the very technologies that will make staff redundant.

And automation is not just coming for drivers and operators. What’s your pension invested in?

While you’re looking, how do you feel about your pension funding unchecked exploitation of humans (e.g. conflict mining), further degradation of the land’s ability to support the life that feeds us (fertilisers and pesticides), or murderous munitions (arms firms)? Or really, pick your pet causes — is your pension working with or against them?

Ask your pension fund trustees how their asset allocation strategies take into account long-term environmental, social and governance (ESG) issues. Find out who they have appointed as managers and what those managers are doing to mitigate risk and exert positive influence for your benefit.

And if you’re worried that you are paying for the robot that is taking your job, start learning to do something else. Start learning to learn continuously.

Because… for almost any human skill you can think of, someone is trying to write an algorithm to do it.

Find and interact with this story on Medium

Photo by Franck V. on Unsplash

What the alt got right

There was a bizarre inversion earlier this month when the right was protesting against the overreach of the secret services, while the left found itself cheering them on.

This was just the clearest example of the left coming untethered from its own foundations, driven to that position in response to its political opponents. But despite an enmity that could drive such a profound inversion, many of the observations and complaints of the alt-right could have found a home in the left.

The alt-right rhetoric took a swing at issues that concern people on both sides — Bernie was pointing all of this out as well. In ceding the battleground to the right, the left risks having its own answers drowned out by the right’s blame and scapegoating. Is engagement on shared issues a way to protect the people and communities the right targets for scapegoating? Would engaging give the left some leverage in reshaping the situation?

The alt-right recognised that intermediaries are unreliable, unaccountable and duplicitous. Much of the value that people are, and create, is taken in a way that does not give satisfactory returns. The intermediaries are “blind platforms” that take individual and community goods without being transparent about how they put them to use to generate value.

Politicians receive public political capital at the ballot box, doing whatever they perceive they need to get the vote, to act as intermediaries between us and the state apparatus. Our political capital is then spent on our behalf. There are no guarantees that it is spent as we would want, nor that our aggregated capital is spent in defence of public goods. Instead it feels as if private political capital is able to yoke ours to its aims.

Banks take our financial capital as intermediaries between us and the markets. We give them decision-making control over our money in order to make more of it, and yet the impacts of how they make it extend well beyond what we imagine. They are free to invest in ways we disagree with, whether in weapons manufacturers or clean tech. Their investments often erode public goods like clean air, healthy land and peace. Once they have our funds, they do with them as they like. They put our financial capital at risk, but reap the rewards themselves.

Media organisations clamour for our attention capital as intermediaries with reality. But most of the mainstream media is owned by oligarchs with an interest in presenting reality in certain ways. The profit motive of the media itself drives a particular representation of reality — one in which buying the products advertised is a good idea. Because they need our attention, they pass off whatever draws it as what is important and true.

Facebook and Google offer to mediate our social and shopping experiences. But they take a lot more in exchange: identity and data capital. They are not even transparent mediators within our own networks; Facebook makes most prominent those posts of my friends it thinks will keep me on the site longer. It also decides to sell information about who I am, and who to sell it to. The alt-right is actively seeking alternatives to these internet giants and proposing wholesale migrations onto other platforms.

All these blind platforms are, or should be, of concern to the left. The opportunity to reshape them into a form that better serves humanity is exciting. Somehow the left was content with incremental improvements as long as it was in power. Now that it is no longer the bulwark of institutions that failed by its own standards, these institutions can be rebuilt.


Towards a post red/blue Collective Intelligence

Given the scale of the challenges ahead globally and locally, and the immensity of the opportunities that technologies on the rapidly approaching horizon offer, it is necessary to develop a collective intelligence that makes the best decisions.

The red insurgency was the better collective intelligence during the US electoral campaign, beating all other communication infrastructures. It was up against an incumbent lumbered with centralisation and ultimate disconnect.

The red insurgency, while outperforming rivals, is not the best possible CI.

Features of winning collective intelligence

Thomas Malone, founder of the MIT Center for Collective Intelligence, cites three characteristics that make a collective intelligence better able to resolve problems and come up with better solutions: “social perceptiveness of the people in the group, equal group member participation, and a higher number of women”.

This runs counter to the assumption that groups with simply smarter people will make better decisions. The greater educational attainment of the Blue Church is no indicator that it should perform better, despite all the surprise expressed by snobbish blues.

The Blue Church did not play to its own claimed strengths. Inclusion and empathy, elements of social perceptiveness and equal participation, are meant to be part of the “progressive” repertoire, and yet the imperious Blue Church communication infrastructure was neither socially perceptive nor encouraging of equal group member participation. It did not acknowledge its own grassroots distaste for Hillary, nor the groundswell of social support for Bernie. Once Hillary was installed, there was little it could do to glean and reflect information from a network withering below it, because the messages came from the centre, leaving little room for amplification of messages initiated, honed and elevated by the swarm.

That is not to say that the insurgent red religion performs well against these criteria.

Synthetic social perceptiveness

Social perceptiveness is the ability to discern what someone is thinking through some means of human observation.

The red insurgency (as described here) replaced social perceptiveness with “no safe space” and “attention rewards”, while dismissing expressions of values with the epithet “virtue signalling”. This served part of the role of social perceptiveness — the elimination of friction. It reduced everyone’s motivations to the same thing, while eliminating the room in which to talk about concerns. Instead of discerning other people’s meaning, it behaved as if meaning didn’t matter. The only thing that mattered was the contribution made to the whole, as measured by the attention an idea could command. Participants were treated as only a single dimension of themselves, leaving little room for acknowledging what they were thinking or feeling. Instead of social perception as a means of facilitating collective decision making, there was a denial of the internal life that needs to be perceived.

There are hints that this red insurgency approach to social perceptiveness on Twitter is responsible for some rising noise in exchanges. A few scouts have reacted incorrectly to comments, assuming their interlocutor was a blue challenger when a little investigation revealed them to be someone seeking clarification, or even agreeing with their perspective via a sarcastic impression of a blue challenger. This may be an intrinsic limitation of current online platforms and not unique to the red insurgency; in fact the red approach may have been an adaptation to operating in such a medium.

True social perceptiveness

Developing true social perceptiveness will be one of the aims of either a resurgent or resistant blue, or of a nascent post red/blue CI whose goal is to develop and deploy the best solutions to the challenges of the 21st century.

This will require new ways of organising and the deployment of new technologies. Research suggests online communities all struggle with the social perceptiveness dimension of collective intelligence, and that the MIT definition applies only to face-to-face interaction, or interaction with more visual cues (Jordan Barlow and Alan Dennis). A combination of more real-life meetups — like protest marches, presidential rallies and town halls — and VR that allows virtual face-to-face contact could help overcome the current limitations. While still a broadcast rather than a live interaction medium, the red insurgency has already made great use of live streaming and videos. Its scouts are urging followers to start their own video channels — prepare for a proliferation of new insurgency voices.

Could a resurgent/resistant blue meet the social perceptiveness requirement as a means of improving and defeating the insurgent red CI and ultimately to escape the red/blue opposition for good?

This is a difficult task. Social perceptiveness requires an awareness of the many interests and needs of participants while also finding a way to make decisions. It is a challenge to perceive every social angle in every conversation and to take account of every nuance in every decision simultaneously. As the amount of contextual knowledge required to participate in a conversation goes up, the potential for memetic spread and attention magnetism goes down. What does a socially perceptive CI look like, if not one taking into account every group’s need for recognition, acknowledgement, voice and power?

What principle can cohere a post red/blue collective intelligence? The theory in Jordan Greenhall’s Situational Assessment was that what gave the red insurgency coherence was its information infrastructure rather than any point of principle, value or feeling. I expanded a bit in Collective Intelligence and Swarms in the Red Insurgency, where I claimed that attention acted as a coherence mechanism; I now feel there is a new element to the coherence: the sense of winning.

Designing coherence?

The task now is to find the principle, or design the infrastructure from which a collective intelligence can emerge which can best meet the challenges and harness the technological opportunities coming our way.

Whatever the principle or infrastructure design looks like, it needs to facilitate:

Bringing in the widest range of inputs, maybe even weighting fringe and unorthodox inputs slightly higher.

Filtering, testing, amplifying ideas while generating the commitment required to drive action.

Any other design principles?

Find and interact with this story on Medium
