While I was growing up and forming my political consciousness, one thing was repeated ad nauseam: you cannot trust politicians. Across party lines and beliefs, it was a truth universally acknowledged that people contesting for power will manipulate information, make false promises, slander their opponents, and engage in corrupt information practices. As elections drew near, various sources would also remind us to beware of rumours, gossip, and baseless information that would often spread unchecked, swaying voters towards certain decisions.
Given that this was an accepted state of things, we have to wonder why the introduction of deepfakes into our ongoing election cycle is causing so much anxiety. Some of the answers are obvious. We might be used to encountering doctored information, but we had an innate trust in our capacity to sift out the truth. There was a belief that we could see through the manipulations, and we had access to alternative sources that could verify and corroborate information we were unsure about. We also, to varying degrees, trusted media institutions and regulatory bodies to check, contain, and confirm the truth value of the information coming to us. We were familiar enough with the media to know when things had been changed, edited, or revised. Lies and fakes were part of the information ecosystem, but we felt confident about the tools, strategies, and collective experience we had to examine and evaluate the truth of these messages.
The biggest change that deepfakes have introduced is not in the nature of information but in our capacity to trust our own judgement of that information. It is important to note that when faced with deepfakes, we are not just being fooled. That would have been an easier thing to deal with, because if we were merely being fooled, we would find interventions to provide ourselves with information, data, science, and proofs that we could depend upon to verify the information. We would have built technological solutions and apps to detect the possibility of a fake and reveal its true nature. Community interventions where people give context, counter the information, and verify and fact-check it (things we are already doing) would have evened out the terrain. The current generation of algorithmic detectors would have sped up the process of fact-checking.
With deepfakes, the focus is often on managing the production, circulation, and reception of this information. But the real attention has to be given to an extraordinary condition that we have naturalised over time: a condition in which we are unable to trust our own decision on whether something is true or not. Even when we have done all the verifications and come up with an answer, the question remains: can I trust my analysis of this information? Deepfakes are, obviously, sophisticated technological wizardry that allows non-real things to claim reality. What is different in this moment from older histories of information falsification, however, is that we have lost the assurance that what we believe is true.
Over the last couple of decades, the emergence of social media as the default platform for consuming information has resulted in two things that have shaken our self-assurance that if we see it, we can believe it. One is context collapse. It is important to note that we trust information not only because of its content but because of the context within which we receive it. This is a dialogue. It is a relationship. Something that your friend tells you is more trustworthy than what a random stranger shares with you. Somebody who is a certified expert on a subject might be more credible than a person merely expressing their opinions. But the age of expertise has collapsed into the flattened context of social-media interfaces. We consume everything through the same interface, with very little attention paid to the source. Even when we know the source, we are not sure whether the information is something they have analysed or merely passed along, curated by an algorithm of digital engagement. When the context of our information collapses, we no longer know how to trust that information, and it shakes our belief that we can discern the fake from the not fake.
The second thing we have normalised with digital platforms is information overload. We are so saturated with information that we have given up trying to find it on our own terms. Information is given to us. When we look for sources to corroborate it, those sources are also given to us. We manage neither the sources nor the contexts of our media consumption. We have outsourced this to algorithms that curate, manipulate, shape, and circulate information according to pre-set logics of profit and engagement. Information overload also means that we consume information at a speed that makes thoughtful engagement difficult. Things pass by in the blink of a scroll; we work largely on intuition and fragmented impressions, and depend on algorithms serving vested interests to define in advance, through headlines and information flows, what that information is going to mean.
Thus, when those managing the Indian electoral process worry about deepfakes, they need to realise that deepfakes can be managed only as long as there is clarity about what is real, what is fake, and what is really fake. As long as we keep accepting context collapse and information overload as our default modes of social-media and digital engagement, no amount of regulation is going to stop the circulation of deepfakes. This is because we now live in an age of suspended judgement, where everything is potentially fake as long as it fools us.
The writer is professor of Global Media at the Chinese University of Hong Kong and a faculty associate at the Berkman Klein Center for Internet & Society, Harvard University, USA
© The Indian Express Pvt Ltd
First uploaded on: 11-05-2024 at 11:31 IST