“Tomorrow I might buy myself a sweet Munchkin Kitten”

Right from the start, Firefox was designed to help people be autonomous online. Mozilla felt that the online experience would be at its best if users could freely choose who they want to be online and discover things they might not dare to do in real life. This is why Firefox has privacy built in.

However, the internet has dramatically changed the concept of privacy. As privacy is at the core of open societies, this change has raised concerns and been the subject of scientific research. From time to time, we like to inform our community about the results of the latest research in this field.

I spoke with Prof. Lutz Hachmeister – author, award-winning filmmaker and German media policy expert. In Cologne, Hachmeister recently held a colloquium on “Brain reading and social control” and I wanted to know if the technique of mind reading is something that might open up a whole new perspective on the internet.

Thoughts are free, as the old folk song goes. Is this still the case?

Only within limits. Quite apart from the fact that individual thought production has always been strongly shaped by genetic make-up and social experience, when we speak of “mind reading” there are essentially two aspects, leaving aside spiritualism and parapsychology. The first is non-invasive measurement of the brain, or even direct medical intervention in it, to decode consciousness or explore the structures of thought production. Fortunately, this has so far only been possible to a limited extent and has run into major methodological and experimental difficulties.

The second aspect has been altogether more successful, especially since the advent of the internet. I call this the “externalisation mode”: the attempt to algorithmically examine and categorise the thoughts that we enter online in the form of speech, images or even music. There is also individual tracking, because everyone connected to networks or platforms expresses thoughts at a very high frequency, whether through buying behaviour or political statements. A tendency towards totalitarian digitisation promotes these constant measurements.

These are the two sides of the coin. Data and knowledge companies have made tremendous progress with the externalised method of brain reading. The results serve to accelerate data capitalism: they are evaluated in every conceivable way and monetised, and part of the huge proceeds in turn flows back into the exploration and categorisation of streams of thought.

But what is a thought in the first place?

That puts us right in the middle of a centuries-old philosophical turmoil. In the nineteenth century, Frege and Husserl distinguished, above all, intentional acts from the constant flow of consciousness. Frege explicitly speaks of “grasping thoughts”. This brings the problem of thought very close to one of linguistic analysis.

We also constantly verbalise thoughts, even when we are not speaking. But approaching it purely from the materialistic side: as long as one does not believe in something like the Holy Spirit or spiritual entities, we can assume that thoughts are energy, and thus in principle measurable. The term “stream of thought” has a nice double meaning.

There are several category problems. For example: when does a thought begin and when does it end? We rightly speak of associations and chains of thought. Isolating a single thought and localising it in the brain hardly seems possible, given the enormously complex neuronal operations involved. In addition, the electrical energy that a thought generates, or is composed of, is so weak that it has hitherto been hard to measure.

So far, it has not been possible to measure streams of thought outside the head either, although it is theoretically plausible to assume that thoughts must exist as energy outside the head. The fact that researchers are still unable to measure the complexity of streams of thought does not mean that it never will be possible.

Mind reading has necessarily always been a technique of interpretation, because, as you say, it has so far been impossible to medically define a neurological entity that describes a thought. Do you believe that one day you will be able to neurologically decipher thoughts?

I believe that we will measure everything that can be measured. The difficulties that exist at the moment could indeed be due to the measuring instruments, not to the basic measurability. There was also a world before the microscope or telescope, and with both instruments worlds were discovered that no one knew existed: microbes and distant galaxies.

Worlds have been discovered through new measuring instruments, new world views have been formed and old ones have been overturned. Something similar can be imagined in brain research. But you also have to keep in mind that thoughts are not the same as the “mind”, but that they are always inherently linked to chemical reactions – if we do not think completely transhumanistically.

That is why Norbert Wiener’s thesis that the brain is a computer lacks complexity, and why disciples of internet euphoria dreamed of a world beyond “people made of flesh and blood”.

The lie detector exploits the connection between thought production and bodily reactions, primarily by measuring skin responses. It is not fully reliable and remains highly controversial as a methodology in criminological procedures, yet on the whole it works surprisingly well.

At the conference we recently held, the lie detector was discussed at length. It is an early attempt to get closer to the truth of statements. It is not insignificant that many subjects tell the truth out of fear of the lie detector: the measuring instrument, by its very existence, influences the statements made.

An age-old human dream, no?

In the nineteenth century, the phenomenon of “mediums” was rampant, not in the sense of today’s mass media, but rather individuals who claimed special thought-energy or telepathic abilities.

Especially in Britain in the Victorian era, there were many famous mediums – some highly paid personal mediums – and scientists were also very preoccupied with whether you can move objects using thought, for example. In Germany, this later became popular again on TV through Uri Geller, the spoonbender.

In most cases, it has turned out to be charlatanism. In Freiburg, the psychologist and doctor Hans Bender taught “frontier areas of psychology” and in 1950 also founded a corresponding institute. It is quite interesting that the Nazi SS and SD, in particular, had a strong interest in recruiting him as early as 1940 for the newly founded “Reich University of Strasbourg”. Although quite obscure, this still represents a moment in the scientification of mind reading and telepathy.

In addition to the lie detector, the invention of electroencephalography (EEG) in the 1920s by neurologist Hans Berger (not to be confused with Bender) was a decisive leap in the analysis of brain waves. Today, we are constantly reading about experiments that attempt to locate a simple mental operation, such as thinking about a colour, as a digital data packet and then re-implement it in a subject at a different location with electrodes – with moderate success so far.

At MIT, the wearable device “AlterEgo” measures neuromuscular impulses on the face, on the assumption that “inner monologues” also send electrical impulses to the facial muscles.

Of course, this is only the beginning of the beginning. It is very unlikely that we will soon be able to transfer a complicated thought such as “Tomorrow I might buy myself a sweet Munchkin Kitten” from Cologne to Australia. But who knows.

We were both together some time ago at an event where one of the publishers of German newspaper FAZ said that the nature of the internet was speed. The net has not created anything new, it just connects things faster. But that was before Big Data. Has mind reading since become the core of the internet?

So far, this still works in translation mode. In principle, we measure the connection between thought and language; you can also use facial expressions. But it is always derivative, a kind of translation.

It is not a direct measurement of a brain flow. Rather, we ourselves perform an action by entering an act of will or speech, or a potential act of purchase, through a device such as a smartphone into a network that has already collected millions and billions of other acts of volition or speech, and which gives us answers, or at least options, based on the stored data.

An important part of AI research – and you can argue for a long time about whether AI really is artificial, or a logical consequence of socio-technical evolution – started with computational linguistics, i.e. with researchers such as Joseph Weizenbaum at MIT – who later became one of the harshest computer critics – and his project “Eliza”.

This already shows that the transformation of consciousness into language is one of the central gateways to our thoughts. Of course, there are still major inaccuracies as a result of complex grammar, logic and conventions, but most of all because we need to differentiate between feelings and rational statements. You can say that you have a stomach ache, but the rational statement is something other than the feeling of the stomach ache itself.

As always in history, the military and police play a major role as drivers of new technologies. From what historical context does mind reading come and does this origin play a role in the current state of technology?

Yes, a film like “The Men Who Stare At Goats” seems bizarre at first, but these experiments did exist. Intelligence services have been involved with truth drugs, brain flow measurements and speech system recognition from the start – and, of course, this is always being perfected, with a perverse logic.

Everyone knows how close the connection is between intelligence services, the military-industrial complex and media and knowledge companies. Language analysis is and remains a cardinal task of any intelligence work, whether in Russia, China or the US – but not so much in Germany, I suppose.

The US researcher Michal Kosinski became famous for a study showing that the analysis of Facebook likes can reveal, with high probability, whether someone is gay, a smoker or a conservative. This may not be as astonishing as it seems, but Kosinski is also a strong proponent of a “no privacy” position, which holds that voluntarily disclosing data (and thus comments and thoughts) is ultimately perfectly right and good for people.

Kosinski thinks that by accumulating data we can detect diseases, epidemics, anti-social behaviour (assuming we can agree on what that is) and crime earlier – which will then help us live safer, longer, happier and healthier lives.
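The likes-based analysis described above can be sketched in a few lines of code. This is only an illustrative toy, not Kosinski’s actual method (the real study used dimensionality reduction and logistic regression over tens of thousands of real Facebook likes); all page names and training data here are invented, and the scoring is a simple smoothed log-odds count per liked page:

```python
# Toy sketch of trait prediction from "likes": each liked page contributes
# a log-odds weight toward a binary trait, learned from labelled examples.
from collections import Counter
from math import log

# Hypothetical training data: (set of liked pages, has_trait)
training = [
    ({"page_a", "page_b"}, True),
    ({"page_a", "page_c"}, True),
    ({"page_c", "page_d"}, False),
    ({"page_d", "page_e"}, False),
]

def like_log_odds(training):
    """Add-one-smoothed log-odds that each liked page contributes to the trait."""
    pos, neg = Counter(), Counter()
    n_pos = sum(1 for _, has_trait in training if has_trait)
    n_neg = len(training) - n_pos
    for likes, has_trait in training:
        (pos if has_trait else neg).update(likes)
    pages = set(pos) | set(neg)
    return {p: log((pos[p] + 1) / (n_pos + 2)) - log((neg[p] + 1) / (n_neg + 2))
            for p in pages}

def predict(likes, weights):
    """Positive score = trait more likely than not, given the training data."""
    return sum(weights.get(p, 0.0) for p in likes)

weights = like_log_odds(training)
print(predict({"page_a", "page_b"}, weights))  # positive: overlaps with the trait group
print(predict({"page_d", "page_e"}, weights))  # negative: overlaps with the other group
```

The point the sketch makes is the one Hachmeister raises: no brain is measured at all. The “thought” is inferred entirely from externalised traces, and the inference gets sharper the more traces are stored.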

Of course, this largely ignores the dialectic between digitisation and psychophysical responses, and one must always wonder who is financing such research and theses.

Traditionally, facial recognition has played a major role in mind reading. Eye movements, gestures, facial expressions. Today, big tech companies have an infinite number of models and smartphones that already recognise the moods of users. How far can this go?

It can go on forever. Facial expression is a very direct form of expressing consciousness. It is not only about language, even if language is a particularly interesting field, because it is structured in such a way that its relation to thoughts seems clearer. But, of course, in evolutionary terms, sounds and facial expressions are interwoven in the same complex.

Can you only explore the individual or are there also approaches towards a form of mass suggestion?

I am very sceptical about the term “mass suggestion”. Even the earlier “mass psychology” has turned out to be something of a dead end in propaganda and persuasion research. Certainly, any method of externalisation can also be applied to groups, parties, companies, national stereotypes – otherwise sociology and social psychology would not exist as academic disciplines.

You have already pointed out that the most progress in the world of thought has been made in language. Here, research comes primarily from the political field, such as Chomsky or Lakoff. Many of these approaches work with two political camps – left and right, 0 and 1, as in the digital world. Are the United States or Britain, with their two political camps, particularly susceptible to the abuse of propagandistic algorithms, because voters can only be pushed in two directions?

Apart from the fact that the first-past-the-post voting system in the US and the United Kingdom is, of course, generally in need of reform, we must distinguish two things. Firstly, there is the electoral system itself. A cemented two-party system actually makes it increasingly difficult for other parties to play a role on the political marketplace, thereby promoting the bipolarity of a society, as we are seeing in the US under Trump. But that has a long history.

Another question is whether a simple measuring instrument, used to gauge political allegiance, favours the emergence of simpler systems. Dividing people into just two categories, conservative or progressive, may well have a standardising effect, because everything focuses on the questions and criteria that make this classification possible: are you in favour of equal treatment of all sexual orientations, or do the concepts of nation and religion still mean something to you?

A cruder measurement may then lead to a coarsening of the political-propagandistic spectrum. However, I am not yet convinced that Russian or Iranian troll factories, or companies such as Cambridge Analytica, had a decisive influence on the election of Trump or on Brexit. Of course, very tight elections can be influenced micro-propagandistically, using digital platforms and networks. However, an analogue “rational choice” by voters still seems to me to be a decisive counterweight.

Lutz Hachmeister, thank you for your time.

