multi-storied #35: Truth in a year of lies
Not even a whole day had passed after Joe Biden endorsed Kamala Harris for president before the lies began. I’ll link to just one, for illustrative purposes: a deepfake video purporting to show Harris saying, nonsensically, “Today is today. And yesterday was today yesterday. Tomorrow will be today tomorrow.” The deepfake marked the end of one tide of lies surrounding Biden (to the point that, when people circulated his note withdrawing from the race, it was difficult to know how real it was) and the start of another tide of lies in the presumptive Harris-Trump contest for the US presidency.
And that’s just the western front. We’ve had lies already in elections in India, the UK, and France. In fact, people in at least 64 countries will have voted by the end of this year, which means 64 different fora for lies. And then there are the lies surrounding Gaza, and the lies surrounding Ukraine. Not to mention the lies that politicians and armies and companies ordinarily tell, just because, you know, it’s a working day.
A few months ago, I took a train to Leicester to interview a man named Eliot Higgins. The final piece, published in WIRED, appeared in what I might call the “pure” interview format—which is to say that it was just my questions and his answers, condensed and edited. I love this format and the varying modes of its best practitioners like Isaac Chotiner (prosecutorial) and David Marchese (anecdotal banter). For my magazine articles, I’ve probably done an average of an interview a week for every one of the last 14 years, but those are usually worked into the weave of a larger narration. The skill of the “pure” interview is different: to somehow engineer an exchange that feels conversational and discursive, even though you’ve prepared and structured in advance. (Also: I won’t pretend that it isn’t a relief to come away from one of these interviews and know that the bulk of your work is done—that you don’t have to do another two dozen interviews, then dream up the structure of a piece, then sweat over its sentences.)
Anyway: Here’s Eliot, after the jump.
TEN years ago, Eliot Higgins could eat room service meals at a hotel without fear of being poisoned. He hadn’t yet been declared a foreign agent by Russia; in fact, he wasn’t even a blip on the radar of security agencies in that country or anywhere else. He was just a British guy with an unfulfilling admin job who’d been blogging under the pen name Brown Moses—after a Frank Zappa song—and was in the process of turning his blog into a full-fledged website. He was an open source intelligence analyst avant la lettre, poring over social media photos and videos and other online jetsam to investigate wartime atrocities in Libya and Syria.
In its disorganized way, the internet supplied him with so much evidence that he was beating UN investigators to their conclusions. So he figured he’d go pro. He called his website Bellingcat, after the fable of the mice that hit on a way to tell when their predator was approaching. He would be the mouse that belled the cat.
Today, Bellingcat is the world’s foremost open source intelligence agency. From his home in the UK, Higgins oversees a staff of nearly 40 employees who have used an evolving set of online forensic techniques to investigate everything from the 2014 shoot-down of Malaysia Airlines Flight 17 over Ukraine to a 2020 dognapping to the various plots to kill Russian dissident Alexei Navalny.
Bellingcat operates as an NGO headquartered in the Netherlands but is in demand everywhere: Its staffers train newsrooms and conduct workshops; they unearth war crimes; their forensic evidence is increasingly part of court trials. When I met Higgins one Saturday in April, in a pub near his house, he’d just been to the Netherlands to collect an award honoring Bellingcat’s contributions to free speech—and was soon headed back to collect another, for peace and human rights.
Bellingcat’s trajectory tells a scathing story about the nature of truth in the 21st century. When Higgins began blogging as Brown Moses, he had no illusions about the malignancies of the internet. But along with journalists all over the world, he has discovered that the court of public opinion is broken. Hard facts have been devalued; online, everyone can present, and believe in, their own narratives, even if they’re mere tissues of lies. Along with trying to find the truth, Higgins has also been searching for places where the truth has any kind of currency and respect—where it can work as it should, empowering the weak and holding the guilty accountable.
The year ahead may be the biggest of Bellingcat’s life. In addition to tracking conflicts in Ukraine and Gaza, its analysts are being flooded with falsified artifacts from elections in the US, the UK, India, and dozens of other countries. As if that weren’t enough, there’s also the specter of artificial intelligence: still too primitive to fool Bellingcat’s experts but increasingly good enough to fool everyone else. Higgins worries that governments, social media platforms, and tech companies aren’t worrying enough and that they’ll take the danger seriously only when “there’s been a big incident where AI-generated imagery causes real harm”—in other words, when it’s too late.
WIRED: You now preside over the world’s largest open source, citizen-run intelligence agency. A decade ago, when you switched from your blog to the Bellingcat website, what path did you see this taking?
ELIOT HIGGINS: At that point, I was still trying to figure out exactly how I could turn this into a proper job. I’d been blogging for a couple of years. But I had children, and it was getting more important to earn a living. When I launched Bellingcat, the goal was to have a space where people could come publish their own stuff. Because at that point, I had several people who’d asked to publish on my blog. I needed a better-looking website. I also wanted a place where people could come together. But that was the extent of my strategy. There was no grand plan beyond that. It was all, “What’s happening next week?”
Well, I launched on July 14, and then three days later MH17 was shot down. The way the community formed around MH17, it was really a massive catalyst for open source investigation—in terms of the growth of the community, the work we did developing techniques, the profile that gave it. Today our Discord server has more than 28,000 members. People can come and discuss stuff they think might be worth investigating, and we’re publishing articles based off the work of the community.
The world is never boring these days. What has it been like at Bellingcat since October 7, for example?
We’ve hired more people. We’re bringing in more editors. We’ve shifted people from other projects. We’ve already got one person who’s specifically working on archiving footage. But what’s different is that you don’t get the same kind of footage that we’ve gotten from, say, Ukraine or Syria. There’s actually a lot less coming from the ground.
Because of internet blackouts?
Yeah, and a lot of the stuff we find is actually from Israeli soldiers who’re misbehaving and doing stuff that I would say is definitely a violation of international law. But that’s coming on their social media accounts—they post it themselves.
Another issue is: Because of the lack of electricity there, you actually get a lot of stuff happening at night that you can’t really see in the videos. Like the convoy attack that Israel had the drone footage of—there’s lots of footage of that, but it’s just all at night and it’s pitch-black. But there was a good piece of analysis I saw recently where they used the audio and could actually start establishing what weapons were being used. Just the sound itself makes it very distinct …
Like audio signatures of missiles?
Yeah, and it’s not just being able to identify the type of weapon: When you fire something, you can hear the sound of the bullet going by but also the sound the barrel makes—and you can use that to measure how far away the shot came from. When the Al Jazeera journalist Shireen Abu Akleh was killed in 2022, we had the footage where she was shot. And the shot came from the direction of positions occupied by Israeli forces. [Months after the shooting, the Israel Defense Forces announced that there was “a high possibility” that the journalist was killed by one of its soldiers.]
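[The timing method Higgins describes can be sketched in a few lines: a supersonic bullet’s crack arrives before the muzzle blast, and the gap between the two grows with distance. This is an illustrative back-of-the-envelope calculation, not Bellingcat’s actual tooling; the bullet and sound speeds here are assumed values.]

```python
def shot_distance(delay_s, v_sound=343.0, v_bullet=800.0):
    """Estimate shooter distance (meters) from the gap between the
    bullet's supersonic crack and the muzzle blast.

    Simplified model: the crack arrives at roughly d / v_bullet and
    the blast at d / v_sound, so delay = d/v_sound - d/v_bullet.
    Speeds are assumed defaults (sea-level sound, a typical rifle round).
    """
    return delay_s * v_sound * v_bullet / (v_bullet - v_sound)

# A one-second crack-to-blast gap at these assumed speeds puts the
# shooter roughly 600 meters away.
print(round(shot_distance(1.0)))
```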
Are there things you haven’t seen before, coming from this conflict?
It’s certainly the first time I’ve seen AI-generated content being used as an excuse to ignore real content. When a lot of people think about AI, they think, “Oh, it’s going to fool people into believing stuff that’s not true.” But what it’s really doing is giving people permission to not believe stuff that is true. Because they can say, “Oh, that’s an AI-generated image. AI can generate anything now: video, audio, the entire war zone re-created.” They will use it as an excuse. It’s just easy for them to say.
And then they can stay in their own information silo …
Yeah, just scrolling through your feed, you can dismiss stuff easily. It reinforces your own beliefs. Because Israel-Palestine has been such an issue for so long, there is a huge audience already primed to be emotionally engaged. So you see grifters churn out misattributed imagery or AI-generated content. The quality of that discourse is really low. It means that if you’re looking for real accountability, it’s hard.
You have this entirely transparent process, where you put all your evidence and investigations online so anyone can double-check it. But it’s a feature of the world we live in that people who’re convinced of certain things will just remain convinced in the face of all the facts. Does the inability to change minds frustrate you?
I’ve gotten used to it, unfortunately. That’s why we’re moving toward legal accountability and how to use open source evidence for that. We have a team that’s just working on that. You can have the truth, but the truth is not valuable without accountability.
What do you mean by legal accountability?
Well, you have people on the ground capturing evidence of war crimes. How do you actually take that from YouTube to a courtroom? No one has actually gone to court and said, “Here’s a load of open source evidence the court has to consider.” So we’ve been doing mock trials using evidence from investigating Saudi air strikes in Yemen.
A lot of our work is educating people: Lawyers in general don’t know much about open source investigation. They need the education to understand how investigators work, what they’re looking for—and what is bad analysis.
Because there’s more and more bad analysis with open source evidence. Do you know Nexta TV? They’re this Belarusian media organization, and they did a series of tweets after the attack on the concert in Moscow. They said there’s a lot of people in this scene wearing blue jumpers. They could be FSB agents [members of Russia’s Federal Security Service]. But where’s the proof they’re FSB agents in the first place? That was terrible analysis, and it went viral and convinced people there was something going on. If you can draw colored boxes around something and say you’re doing open source investigation, some people will believe you.
There are elections this year in the US and in the UK and in India. Are you preparing to deal with these three big election events as you deal with Ukraine and Gaza?
There’s only so much we can do to prepare, because I think the scale of disinformation and AI-generated imagery will be quite significant. If you look at what’s happened already in the US with the primaries, you’ve already got fake robocalls; the DeSantis campaign used AI-generated imagery of Trump and Dr. Fauci hugging each other. So that line has already been crossed. These tools are available to ordinary members of the public as well, not just political agents.
Which makes it much worse.
Yeah, because it’s not what the campaigns decide to do, it’s what their supporters decide to do.
Given this flood of AI-generated imagery, are you wary of Bellingcat turning into just a fact-checker rather than doing these much deeper investigations where you build a case?
It’s like the Kate Middleton thing that happened recently. I really tried not to join the conversation. I thought: This is really stupid discourse. But then you start seeing, like, TikTok videos that were saying, “Oh, the color’s being photoshopped” or whatever, and they have millions and millions and millions of views. So you kind of feel: Yeah, I have to say something. It’s actually a good reflection of how disinformation starts and spreads, and the dynamics.
I will not lie. I was fascinated too, for the span of a week.
That’s why it was prime territory for disinformation! I’ve dealt with lots of communities who believe in conspiracy theories. None of them generally believe they’re conspiracy theorists. They believe they’re truth seekers fighting against some source of authority that is betraying us all. They’ve come to understand that a source of authority cannot be trusted, because of their personal experiences.
I love a phrase you used for this once: that people who believe in conspiracy theories have previously suffered some kind of “traumatic moral injury.”
I use the example of Covid. A lot of people who were driving Covid disinformation were people in the alternative health community who’ve often had bad experiences with medical professionals. Like they’ve had a treatment go wrong, or they’ve lost a loved one, or they’ve been mistreated. And some of that is legitimate. Some of that is real trauma.
Now, they found like-minded people, and within that community you have people who are anti-vaxxers. When Covid came along, suddenly those voices became a lot louder within those communities. And the distrust people had in medical professionals was kind of reinforced. It’s about feeding their anxiety—and they’re being fed every single day, every time they scroll through their groups.
In an era when AI images are going to proliferate, wouldn’t you rather that people have this heightened spidey sense about the world, where they’re alert? That they’re too skeptical rather than too trusting?
I’d argue against the frame of that question. If you have people’s spidey sense tingling all the time, they’ll just distrust everything. We’ve seen this with Israel and Gaza. A lot of people are really at that point where they do care about what’s happening, but it’s so confusing that they cannot stand to be part of this anymore. You’re losing people in the center of the conversation. This is a real threat to a democratic society where you can have a debate, right?
Is this AI-generated stuff at a stage of sophistication where even your team has to struggle to distinguish it?
Well, we explore the network of information around an image. Through the verification process, we’re looking at lots of points of data. The first thing is geo-location; you’ve got to prove where something was taken. You’re also looking at things like the shadows, for example, to tell the time of day; if you know the position of the camera, you’ve basically got a sundial. You also have metadata within the picture itself. Then images are shared online. Someone posts it on their social media page, so you look at who that person is following. They may know people in the same location who’ve seen the same incident.
You can do all that with AI-generated imagery. Like the Pentagon AI image that caused a slight dip in the stock market. [In May 2023, a picture surfaced online showing a huge plume of smoke on the US Department of Defense’s lawn.] You’d expect to see multiple sources very quickly about an incident like that. People wouldn’t miss it. But there was only one source. The picture was clearly fake.
My concern is that someone will eventually figure that out, that you’ll get a coordinated social media campaign where you have bot networks and fake news websites that have been around for a long time, kind of building a narrative. If someone were clever enough to say, “OK, let’s create a whole range of fake content” and then deliver it through these sites at the same time that claims an incident has happened somewhere, they’d create enough of a gap of confusion for an impact on the stock market, for panic to happen, for real news organizations to accidentally pick it up and make the situation much worse.
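[The shadow-as-sundial step Higgins mentions above reduces to simple trigonometry: a vertical object and its shadow give the sun’s elevation angle, which, with the date and location, constrains the time of day. A minimal sketch; the height and shadow length are hypothetical measurements read off a photo.]

```python
import math

def sun_elevation_deg(object_height_m, shadow_length_m):
    """Sun elevation angle implied by a vertical object and its shadow:
    tan(elevation) = height / shadow length."""
    return math.degrees(math.atan2(object_height_m, shadow_length_m))

# A 1 m post casting a 1 m shadow implies the sun is 45 degrees up;
# cross-referenced with a solar-position table for the claimed date
# and place, that narrows down when the photo could have been taken.
print(round(sun_elevation_deg(1.0, 1.0), 1))
```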
So how do we even begin to fix this?
Social media companies need to have the responsibility—like, legislatively—to have AI detection and flagging as part of the posting process. Not just as something that’s a fact-check layer, because that’s not going to matter at all. I don’t think a voluntary system is going to work. There need to be consequences for not doing it. I think my worry is that we’re only going to figure this out when something really terrible has happened.
Do you still do a lot of investigative work yourself now?
No. If I’ve got a gap in my day to do a quick geolocation or something like that, I’ll do it. I’m involved with a lot of the work we do on our production company side of things, so that’s keeping me busy. I do a lot around PR and comms.
Is that easy for you? Somewhere you’d said that when you were younger, you were slightly socially anxious?
I was cripplingly socially anxious. I’ve had to beat it out of me. When I first started doing this, I had loads of anxiety, really serious levels. The idea of speaking on stage was terrifying to me. The first time I did a big event on stage was at a 2013 Google Ideas summit. I don’t remember anything about that. Just dripping with anxiety. But doing this again and again, about something I really care about, has helped balance that out.
How do you spend your spare time online? What do you do on holiday?
I’ve removed Twitter from my phone, because that was one of the worst things. Arguing with people …
You don’t do that anymore, I noticed. You used to do it a lot, and in such good faith.
It was kind of like testing my own knowledge. If someone can come up to me and say, “Oh, you’re wrong because of this,” and I can’t argue against that, then I’m the one in the wrong. It used to be worthwhile having those debates, even if they were arguing in bad faith. But it got to the point where the mythology around Bellingcat that existed in these echo chambers became crystallized. When someone now says, “Oh, Bellingcat is the CIA,” it’s always the same nonsense.
OK, you’re not arguing as much. What else are you doing?
I use AI a lot for my own entertainment. Do you know Suno AI, or Udio? These are music-creation tools—and in the past six months they’ve taken huge, huge leaps.
Oh, Suno. It’s the Hindi word for “listen.”
Yeah. Have you used these at all?
No.
I’ll show you. I have a SoundCloud where I upload my music. You can put in style prompts. You can also put in custom lyrics.
This is how the founder of Bellingcat spends his spare time.
Yeah. I like it especially when the AI generator really gets weird, goes completely off the rails. I write loads of songs about things like filter bubbles online and stuff. If you can condense an idea into a lyrical form, I find that helps process it into a simpler form to explain it to people in articles and books.
When you’re giving these prompts, are you giving them influences or are you just giving them genres?
Oh, I’ve got a whole process for this now! It used to be that I’d say, “OK, let’s do an ambient song.” But then I was thinking: How do I get the exact sound of certain bands? Because you can’t put in “Make a Beastie Boys song.” It won’t let you prompt it that way; they’re clearly trying to avoid getting sued. But I go to ChatGPT and explain the scenario: I’m giving prompts for a music-generation program that requires style tags and types of music, so what are the style tags for, like, Kraftwerk? It will break down styles into separate tags, and you can take those tags and put them back in.
I’ve read elsewhere that you call any yearning for a time before the internet “cyber-miserabilism,” which is a great phrase. But it’s also true that all of us remember our minds being calmer before we started scrolling through feeds.
You’re continually wired now. What really worries me is how this is traumatizing people. We had this a lot with Ukraine in 2022, when there were so many people engaged with the content stream. Those people were saying, “I just feel horrible all the time.” We didn’t realize we were traumatizing ourselves. We’re seeing the same issue with Israel and Gaza and people streaming through this imagery that’s just reinforcing the hate they have for the other side.
In the early days of Bellingcat, you were being exposed to videos like that on a daily basis, very often including footage of dead bodies. How do you protect yourself from what you’re seeing?
For me, it felt like there was a point to it, because I had success through seeing all this stuff. It’s the powerlessness that is often part of the traumatic response. But you can learn to dissociate from that.
Can you though?
I just think I got very good at compartmentalizing stuff. It’s so, so important for this work. With MH17, I was looking at the wreckage of the site. There was a big, high-resolution photo, and I was going through it looking at the details of the shrapnel holes, and there was a doll in the wreckage, and my daughter had been given the exact same doll by her aunt when she was born. What happens then is you have a subconscious engagement with it. And you have to stop at that point. Trying to push through it is a really bad idea.
When I was looking at the victims of the 2013 sarin attacks in Syria, for example, we were trying to identify the symptoms. And one of the symptoms is the constriction of pupils. So I had to look at the eyes of these dead people to find enough screenshots to establish their cause of death. That was upsetting in itself. But then you go online, and you have all these idiots saying: “Oh, it’s fake. No one really died. The babies are acting.” That is traumatic.
What happens to a lot of people is they have this kind of compulsive witnessing, where you’re like, “I have to witness this thing.” Because, in history, people have turned their backs, right? So I have to witness this, so that these people’s suffering is being acknowledged. It’s an illusionary way of getting power back from the situation, because it really doesn’t change anything. All you’ve done is traumatized yourself.
I understand Bellingcat offers psychological support so anyone on staff can get free therapy. Do people use that counseling facility a lot?
Oh yeah, absolutely. It’s not just about the content we face but also the reaction from governments that we have to deal with. Which can be, as you know, quite aggressive.
I did wonder about that. I’ve read that you don’t eat room service meals anymore, and I wanted to know what else you do or don’t do. And also, what changed when Bellingcat was declared a foreign agent by Russia in 2021?
We have a security team, we have a lot of reviews around cybersecurity. We have a lot of discussions about our physical security. We have staff retreats, where consultants come talk to us about, like, “Here’s what to do if you’re being followed.” Fun stuff like that. Being declared undesirable and a foreign agent—in one sense, it’s a badge of honor. It’s also a problem, because we try to be transparent about who funds us, but if we’re a foreign agent and have donations from people who’re linked to Russia, that will put them at risk. We’ve had to stop publishing some of our donors’ names, which we’re not fans of. But they need to be protected.
What about this meeting, for instance? How did you know whether to agree to have a cup of coffee with me? What did you do?
Well, some research. First of all, I made sure to know what you look like. There have been incidents where people have had meetings with journalists who suddenly start asking very weird questions. They’ll start saying, “Oh, Israel are pretty awful, aren’t they?” And then you wonder, “What’s going on here?” I know people who’ve had Skype calls, and suddenly their call is on Iranian state media, selectively edited.
I found a quote online from one of your former employees in which he says, “Data is the great equalizer between an individual and the state.” But surely, at some point, governments and intelligence agencies will find ways to hide their own data better?
Russia tried to do that. After we did the first investigation of the poisoners [of Sergei Skripal, a former Russian intelligence officer, in England], we got copies of their GRU documents. The next time we tried that, they’d removed the photos from the documents of GRU officers. But that just told us they were GRU officers. When we posted about that, the photos returned, but they were of different people. They’d replaced a photo of a man with a photo of a woman. So … they’re not smart.
But they’re bound to get smarter?
Maybe. The thing is, these are doors. One door closes, we just go through the 10,000 other open doors. It’s never the end of the investigation. We just need to take another route.