The artefact and the algorithm — a search for subjectivity and means of resistance in the pre-formulated discourse

“Life is unfair. Kill yourself or get over it.” The chorus of “Child Psychology” by Black Box Recorder, surfacing in my Spotify Discover Weekly playlist, started to irritate me. But I wasn’t really irritated at the song. I had a bone to pick with the Spotify algorithm that had the nerve to suggest this song for me to enjoy with my Sunday morning coffee.

This “old man shouting at algorithms” moment led me to wonder whether the lessons of cultural studies still hold in the algorithmic age we live in.

We know that we, as consumers, can and will negotiate the meanings we assign to different representations. According to Stuart Hall, for example, we can take an oppositional, negotiated, or dominant (accepting) reading position towards a text. But here’s the question: can we take similar stances towards the algorithms that deliver these representations (artefacts, texts, meanings) to us? Can we negotiate with or oppose the process through which a certain representation is delivered? Is it possible, or reasonable, to draw a distinction between the process (the algorithm) and the delivered content (the artefact)?

And if so, can we negotiate with algorithms while accepting the artefact?

Traditionally, the artefact belonged to the structuralists and culturalists, while the structures that provided artefacts (media companies) belonged to the academic realm of the political economy of media. Both schools of thought have something to say about algorithms (algorithms as texts or grammar; algorithms as platform business models), but the structuralist school has so far played a limited role in the discussion. Yet structuralist ideas can help us identify the inherent power, oppression, alienation, and ideology formation at work, and, more importantly, the countertactics for tackling unhealthy power structures.

The political economy of media is of course directly useful for understanding the power of new media companies such as Facebook. The old Frankfurt School wisdom is that media is controlled by groups who employ it to further their own interests. But this is less useful than understanding what a “user” can do about it. Indeed, there is plenty of current discussion about the power of algorithms, but much less about the human ability to negotiate with algorithmic outputs. The absence of this side of the conversation is harmful: it leads to the naturalisation of certain technological developments, and thus to despair.

This is not to say that such discussion does not exist, but most of it focuses on repurposing algorithms to demonstrate their limitations, or on showing how people already use algorithms in unintended ways. As an example of the former, one artist made a delightful demonstration by drawing a ritual magic circle on the ground to trap self-driving cars. For the latter, the work by Airi Lampinen and others on so-called “profile work” is interesting and helpful. Most of these examples, however fascinating and useful, demonstrate “algorithmic resistance” at the level of concrete actions and misuses, but not at the level of the everyday practices we adopt to cope in an algorithmic world.

The need for a better understanding of the everyday practices by which we cope with and resist algorithms becomes even more pressing once we realise that most algorithms are not very good. They look good in A/B tests and in platform companies’ data, but for actual people they rarely work. An algorithmic recommendation fails, except by accident. This does not matter much, because people believe that algorithms constantly succeed.

I have first-hand experience of this from my old startup, Scoopinion, which used massive behavioural data, including reading time and reading style, to aggregate long-form stories. Users loved the product and often told us, right after they started using it, that the recommendation algorithm was by far the best they had ever experienced: “It knows me so well!” Flattering, but the algorithm actually needed weeks of data before it could start working. These people had simply received a generic list of links: good stories for sure, but with no personalisation at all.
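The cold-start fallback described above can be sketched in a few lines. The names, the threshold, and the ranking rule below are illustrative assumptions, not the actual Scoopinion implementation:

```python
# A minimal sketch (assumed, not the real product logic) of a cold-start
# fallback: until a user has accumulated enough reading events, everyone
# receives the same generic list, yet perceives it as personal.

MIN_EVENTS = 5  # hypothetical threshold; in practice it took weeks of data

GENERIC_TOP_LIST = ["story-a", "story-b", "story-c"]  # identical for everyone

def personalised_picks(events):
    # Stand-in for a real model: rank stories by how long the user read them.
    ranked = sorted(events, key=lambda e: -e[1])
    return [story for story, _ in ranked][:3]

def recommend(events):
    """Recommend stories from a user's (story, reading_seconds) events."""
    if len(events) < MIN_EVENTS:
        # Cold start: a generic list that users nevertheless praised
        # as uncannily personal ("It knows me so well!").
        return GENERIC_TOP_LIST
    return personalised_picks(events)

print(recommend([("story-x", 12.0)]))  # → ['story-a', 'story-b', 'story-c']
```

The point of the sketch is the branch, not the model: for weeks, every “personalised” experience runs through the first return statement.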

The first five of Wiio’s laws of communication, rewritten for recommendation algorithms:

1. Recommendation usually fails, except by accident.

1.1 If recommendation can fail, it will.

1.2 If recommendation cannot fail, it still most usually fails.

1.3 If recommendation seems to succeed in the intended way, there’s a misunderstanding.

1.4 If you are content with your algorithm, recommendation certainly fails.

2. If a user action can be interpreted in several ways, a learning algorithm will interpret it in a manner that maximizes the damage.

3. There is always a user who knows better than your algorithm what was meant by the recommendation.

4. The more the algorithm recommends, the worse it succeeds.

4.1 The more the algorithm recommends, the faster misunderstandings propagate.

5. In big data and deep learning, the important thing is not how things are but how they seem to be.

Why, then, are we so content with recommendation algorithms? Simply put: like Santa Claus, a recommendation algorithm works flawlessly only as long as we believe in it. This belief constitutes the misunderstanding that the algorithmic output is correctly delivered to us. Further, our behaviour, driven by this belief, confirms to the algorithm that it was right.

Not only is there no way of knowing what part of the algorithm is “magic” and what is actual, data-based insight; the situation becomes even trickier with the introduction of deep learning systems. The further the recommendation algorithm moves from its programmer, the more naturalised it becomes (by “nature” I mean a realm seen as separate from human activity). These deep learning systems, which allow no discussion about their motivations or priorities and which are instead completely naturalised as parts of our everyday environment, could make opposing algorithms impossible.

Encoding and decoding in the pre-formulated discourse

Algorithms are formulated instructions, and the total set of those instructions constructs a discourse, which can be thought of as an ideology. Algorithms are, to borrow a term from Althusser, an ideological apparatus. Cultural studies was once a decisive turn towards everyday expressions and the interactions between a text and the subject. I seek the interrelation between algorithms and the subjects using them. To what extent is the media, automatically personalised and improved, the message? Should the semiotic toolkit treat algorithms as texts, or approach them the way it approaches grammar rules?

Like grammar, algorithms are becoming naturalised. According to Gramsci, power is realised only through the acceptance of its objects. But if this naturalisation is not understood, acceptance happens without further discussion, and algorithms, growing in abundance, begin to stagnate the current ideology.

Thus, algorithms are a formulation of the (current) ideology. For Marx and Engels, ideology consists of the dominant ideas and representations in a given social context. Ideologies do appear natural. Interestingly, according to Althusser, ideologies operate in everyday practices rather than as some form of externally imposed doctrine. Ideology is an effect of the structure of society, a force through which all practices are interrelated to shape social consciousness. So there is a feedback loop: on the one hand, ideologies are shaped through practices; on the other, they shape those practices by moulding social consciousness.

Enter Stuart Hall: shaping ideologies through counteraction is possible if “users” have a way to take a negotiated or oppositional stance not only towards artefacts (the products of ideology) but also towards the underlying structures of algorithms (the processes that formulate ideology).

Ideology as instructions

Ideology is visible in daily practices — behaviour. Behaviour through time is called ritual. Behaviour inspected through the collective is culture. Ideology is culture, produced.

Ideology is the process of produced collective behaviour, which leaves behind artefacts as crystallisations of past states of the process. These artefacts can acquire different meanings through time, and when meanings change, behaviours change. Meanings change through shifts of focus over time, for example as what is collectively understood as “work” changes, and through actionable insights (beliefs). Behaviour, then, changes when there is an actionable insight that demonstrates a reason to change it. Those insights can come from analytics, and analytics needs data.

All of these — behaviour, analytics, data — need instructions: algorithms. Algorithms, as naturalised and static ideology, guide the collection of data, dictate what is not collected, and further decide on the selection of analyses and the recommendation of insights.

As mentioned, according to Althusser, belief presupposes behaviour. He writes: “This ideology talks of actions: I shall talk of actions inserted into practices. And I shall point out that these practices are governed by the rituals in which these practices are inscribed, within the material existence of an ideological apparatus. […] Ideas have disappeared as such to the precise extent that it has emerged that their existence is inscribed in the actions of practices governed by rituals defined in the last instance by an ideological apparatus.”

Critique of algorithmically formulated ideologies

The aforementioned ideological apparatuses are, according to Althusser, for the most part private. Ideology, it seems, does not come true in public but in the most private sphere, through material and real actions and rituals. This brings us to the key aspect of recommendation algorithms, whose purpose is so often to “personalise”.

While social media was at first applauded for its ability to create, enhance and annotate subcultures, this is no longer true. Endless personalisation, even if it is merely an illusion of personalisation as argued earlier in this text, is in fact the most efficient process of alienation. Personalisation of imaginary collectives is a manufactured division of collectives into singulars. Algorithmic personalisation, taken to the extreme, makes resistance impossible, because resistance can only work if it is collective.

“Resistance to algorithms that stagnate the current ideology can only work if the resistance is collective.”

Furthermore, an ideology constructed via algorithms does not change dynamically; instead, it remains at the local maximum declared optimal by separating collective data into individual, naturalised insights. Yet societies, like species, need to reproduce to survive. The world changes through the constant actions of people, and also without them, and an ideology that relies on an increasing formulation of static rules is bound to face an abrupt end when the disconnect between how things actually are and how they are perceived grows too large.

A technologist might argue that this lack of dynamism (the static ideology formed through algorithms) is fixed by the new branch of algorithms, deep learning. Wrong. The outputs of these learning algorithms are very similar to those of any other algorithms, although they can be used to solve more complex problems. An easy example demonstrates how deep learning algorithms also create a static ideology: natural language processing is a typical use case for deep learning. Would you say you talk naturally to Siri or Alexa? You don’t. Instead, you mould your language to the static abilities of these ideological apparatuses to make them understand you.

As mentioned earlier, the only real “development” from previous algorithmic approaches to machine learning is that it becomes more and more difficult to find the author of these algorithms. Moreover, very large quantities of collective data typically play a role. When these methods cannot be deconstructed and parsed into individual, human-made decisions, deep learning algorithms cease to be cultural artefacts: they become naturalised culture. Indeed, if ideology is the science of ideas, as Gramsci put it, and we take this to mean the analysis of ideas and, more precisely, of the origin of ideas, and if we further acknowledge that these origins have become blurrier with the introduction of learning algorithms, then we are moving from a dynamic ideology to a static culture. In other words, these learning algorithms are the very essence of ideology, transcending their nature as agreements.

There is a belief that this might be a positive development: that through this mechanism humankind could move beyond its current chains of perception. According to this belief, the actual endgame of “evidence-based policy”, for example, is an algorithmically controlled “perfect” society. This belief is false. In such a “perfect” society there would be no difference between perceived behaviour and ideology. The truth, however, is that there will always be a difference between intention and ideology. Because the gap between intention and action cannot be measured, the algorithm monitors the measurable action instead. The example is perhaps most vividly visualised by imagining a child watching YouTube videos. The recommendation algorithm can keep finding better videos for the child to watch, most likely, given endless time, ending up with just the two most perfectly suitable videos for that child — a local maximum. But the intention of the child is not to watch YouTube videos. It is to learn and experience, to grow and to criticise.
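The drift towards a local maximum can be illustrated with a toy simulation. This is a hypothetical rich-get-richer recommender, not any real platform’s system: each watch reinforces the watched video’s weight, and the catalogue collapses onto a couple of items.

```python
# A toy rich-get-richer recommender (an illustrative assumption, not any
# real platform's algorithm). Watching a video doubles its preference
# weight relative to the rest; the loop converges on a local maximum.
import random

random.seed(0)

videos = list(range(10))       # a catalogue of ten videos
weights = [0.1] * len(videos)  # uniform initial preferences (sum to 1)

def recommend():
    # Sample a video in proportion to its learned preference weight.
    return random.choices(videos, weights=weights)[0]

for _ in range(2000):
    watched = recommend()
    weights[watched] *= 2.0                  # watching doubles the preference...
    total = sum(weights)
    weights = [w / total for w in weights]   # ...relative to everything else

top_two = sum(sorted(weights, reverse=True)[:2])
print(f"top two videos now hold {top_two:.0%} of the recommender's attention")
```

After a few thousand “watches”, almost all of the recommender’s attention sits on the top one or two videos: the system has optimised perfectly for measured behaviour while knowing nothing of intention.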

That statement is, of course, a normative counter-ideological claim that challenges the naturalised, formulated ideology of endless YouTube streams. And since ideology is constructed through actions, it is the endless stream, not the child’s intention, that more closely resembles our current reality.


As described, this time again paraphrasing Althusser, the ideological apparatuses “interpellate individuals into preconceived forms of subjectivity that leave no space for opposition or resistance”. All resistance is collective; meanings can only be changed through collective action. While the personalisation of collectives makes collective resistance impossible, this only emphasises the need for collective publics. These publics do exist. In their most banal and dadaistic form, they can be found on 4chan and in the dark corners of Reddit, where armies of collectivised individuals are ready to game algorithms to demonstrate their hollowness.

Counteraction tactics include collaboration to create collective publics, negotiation between individuals and collectives over algorithmic outputs, and, in a broader sense, accepting the duality of the artefact and the algorithm and employing critical approaches towards either one of them when needed.

In the end, the most effective countertactic for maintaining a dynamically changing ideology might be to accept that algorithms are not only instructions but also agreements. Making visible what kinds of agreements each of us individually, and all of us collectively, subscribe to might help us renegotiate them.




Johannes Koponen researches journalism platforms and works as a foresight and business model specialist.
