A Prediction: Generative AI & Compulsive Conspiracy Visualization

This is a quick article to get a prediction out before it comes true, basically an attempt at securing bragging rights (look, I saw this coming!). The idea is this: As soon as we see widely available, high-quality video generation with little or no content moderation, some people engaged in conspiracy theorizing will start compulsively generating, consuming and sharing video clips of extremely violent acts – not as a means of producing fake evidence, but to "explore" the crimes of shadowy perpetrators. Let's call that behaviour Compulsive Conspiracy Visualization.

Conspiracy Theorizing as Addiction

The concept of Compulsive Conspiracy Visualization grew out of a mental overlap between my recent article about generative AI causing an epistemic crisis and a new project about looking at conspiracy theorizing through the lens of addiction. The new project is based on the idea that buying into a conspiracy theory is a highly intense experience (the enormity of the conspiracy, the sheer evilness of the perpetrators, the scale of the consequences, the realization that the world is completely different from what one believed to be true, etc.), triggering a set of intense emotions (e.g. anger, moral outrage, fear, but also feelings of epiphany, elation, etc.). The premise is that the intensity of the experience can motivate repeated engagement with the conspiracy theory and, over time, become the kernel of addictive behaviour – in a similar way that substance use and other behaviours offer experiences that can become kernels of addiction. From this perspective, engagement with conspiracy theories would be less about truth-seeking than about the experience on offer.

I'm not yet 100% sure whether this idea is really good, or if I'll ever write a dedicated article about it, but it did lead me to the prediction this post is about. So, for the sake of argument, let's say the idea is brilliant and describes something real about at least some people engaging with and buying into conspiracy theories.

If conspiracy engagement really is about the experience it offers and can turn into a behavioural addiction, those getting addicted are likely to suffer from the same patterns other addicted people do – namely an inability to stop even though the experience has gone stale with repeated use. I think this model can add to existing explanations of why people who believe in one conspiracy theory are a lot more likely to believe a host of other theories as well. From the perspective I'm developing, they are hunting for a fresher experience, a new fix.

And this is why I think we will also see people go on conspiracy visualization binges. If there is a generative system with little to no content moderation that is capable of creating believable video clips, it is extremely tempting to create videos showing the things you believe are actually happening. And photo-realistic imagery hits differently. If systems like this are available, they can be used to intensify the experience, providing a better, harder fix. If my theory of addiction is correct, people compulsively "using" conspiracy theories are likely to incorporate generative AI into their habit.

Fakes for Truth

Of course, many generated video clips will be produced as fakes claiming to be real recordings of real events, but this prediction is NOT about that. It is about people knowingly consuming generated clips while still claiming this is in service of some kind of truth-seeking effort. I think their argument will go something like this:

We already KNOW that group A (e.g. Democratic elites, Republican super donors) perpetrates terrible acts (e.g. torture, abuse, cannibalism, mass murder) against group B (e.g. children, immigrants). Of course I know the clips I'm generating, watching and sharing are not video evidence, but they do show something we KNOW is happening. We don't need more evidence, but watching those clips allows us to feel the wrongness of those deeds on a more visceral level. We feel how wrong those heinous acts really are. They allow us to feel a moral and emotional reaction to something that is happening behind closed doors. They don't show truth, but they allow us to experience it.

Escalating Nightmares

If that step were taken, it would represent an escalation in conspiracy theory addiction – with a high risk of extremely unhealthy feedback loops. If used to intensify the experience, things can become really violent and grotesque quickly. And what starts as generating what one believes to be true can quickly change into believing everything one generates. Honestly, I shudder at the thought of what some people might do to themselves sitting alone at home, getting lost in highly personalized nightmare worlds. Or what they might share with each other online.

Timeline and p-Conspiracy Visualization

When making predictions, it seems to be good manners in the AI space to add timelines and probabilities. Here is my attempt to provide these.

Video generation is getting better fast, but even if 2024 is the year it becomes really good, I think we won't necessarily see Compulsive Conspiracy Visualization yet. This is mostly because I think implementation and spread will be somewhat slower than they were for single images (I might be wrong about that) – but also because I think early providers of those solutions will implement very strict content moderation. Everyone knows that this tech is likely going to be used for porn eventually, but it seems the companies driving the AI revolution have little appetite for that particular business model ... for now. I think the availability of models that allow you to show extreme violence, abuse, and murder is a prerequisite for the emergence of Compulsive Conspiracy Visualization. My guess is it will start with readily available and easy-to-use Open Source models. I think it is pretty likely we will see Open Source models that allow you to do a lot of sick stuff by the end of 2026 (p=0.8). Please note that this is a number I made up. I have no special insight into the pace of Open Source development. This just feels right (#truthiness).

I'd give my thoughts about conspiracy theories and addiction a better than 50% chance of being at least somewhat correct (p=0.6), and I think that IF we get the models and IF my thoughts about addiction and conspiracy are correct, Compulsive Conspiracy Visualization will happen (p=1). So: 0.8 * 0.6 * 1 = 0.48.

My prediction is that we will have heard about something like Compulsive Conspiracy Visualization by the end of 2026, with a likelihood of 0.48 – probably in the media after some kind of violent event, when everyone starts talking about that particular "hobby" of the perpetrator(s).

...

This got darker than I originally thought it would. Let's leave it here.
