You may have recently come across the viral story of Twitch streamers dealing with the circulation of fake pornographic content depicting them. Non-consensual, AI-generated sexually explicit videos of popular streamers, primarily women, were being sold online. Perhaps more recognisable is the music video for Kendrick Lamar’s song ‘The Heart Part 5’, which garnered over 40 million views and shows Lamar seemingly switching bodies, his face seamlessly morphing into those of other well-known personalities. At the heart of both stories lies deepfake technology, and these examples depict two divergent applications of it.
A deepfake is a digitally manipulated image or video that depicts a person or event in a situation that never occurred. In simpler terms, and in its most common form, it is an edited video of someone saying or doing something they never actually said or did.
Deepfake technology takes its name from the AI it utilises: deep learning. Deep learning refers to machine learning methods in which layered artificial neural networks, loosely modelled on the human brain, teach themselves patterns from large amounts of data. Deepfake technology applies this to swap faces in images, videos and other digital content, creating remarkably convincing but inauthentic media. Fake images and videos have existed for a long time, but deepfake technology takes forgery to an entirely new level, making it difficult to distinguish fact from fiction.
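For readers curious about the mechanics, below is a minimal, purely illustrative Python (PyTorch) sketch of the shared-encoder, two-decoder idea behind classic face-swap deepfakes. The layer sizes, image dimensions and placeholder tensors are assumptions chosen for brevity, not the architecture of any particular tool.

import torch
import torch.nn as nn

IMG = 64  # assume 64x64 RGB face crops (hypothetical size)

class Encoder(nn.Module):
    """Compresses a face crop into a small latent vector; shared by both identities."""
    def __init__(self, latent=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop from the latent vector; one decoder per person."""
    def __init__(self, latent=128):
        super().__init__()
        self.fc = nn.Linear(latent, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a = Decoder()  # learns to draw person A's face
decoder_b = Decoder()  # learns to draw person B's face

# Training (sketched): each decoder learns to reconstruct its own person's faces
# from the shared latent space, so the encoder captures pose and expression generically.
faces_a = torch.rand(8, 3, IMG, IMG)  # placeholder batch of person A face crops
faces_b = torch.rand(8, 3, IMG, IMG)  # placeholder batch of person B face crops
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) \
     + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)
loss.backward()  # in practice, many such steps with an optimiser and real data

# The "swap": encode person B's pose and expression, then decode it with A's decoder,
# producing a frame in which A appears to do whatever B did.
with torch.no_grad():
    swapped = decoder_a(encoder(faces_b))

The key design point is that because both decoders read from the same latent space, the encoder is forced to represent pose and expression in a person-agnostic way, which is what makes the swap look seamless once the models are trained on enough footage of each face.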
Deepfake technology initially set out as a tool for purely entertainment purposes. In some cases, it has also been used productively in films, video games and music videos like Lamar’s to make scenes more believable. However, it is increasingly being used in concerning and disturbing ways, including the creation of fake news and non-consensual, revenge-driven explicit content.
If you have encountered deepfake content and its shocking accuracy, you might wonder how one could possibly recognise a deepfake without being told it is one. As the technology improves, it will likely become increasingly difficult to do so. However, there are some identifiers you can keep an eye out for: unnatural or infrequent blinking, mismatched lighting and shadows, blurring or warping around the edges of the face, audio that does not sync with lip movements, and skin that appears unnaturally smooth or inconsistent.
While the technology is still at a nascent stage, it is advisable to learn how to detect deepfakes, given their potential for harm. Along with looking for discrepancies in the video or image itself, it is also important to apply critical thinking. Consider the source of the content: who is sharing it, where and when it was created, and whether this behaviour is typical of the person or entity depicted. Being vigilant will help you determine whether the media you are consuming is authentic.
Deepfake technology essentially blurs the distinction between what is real and what is not. This can create a constant state of doubt, mistrust and anxiety around the content people consume daily. With possible moral, political and social repercussions, the dangers of false information and deepfake technology are numerous. Many governments and organisations have already started taking notice, but in a climate where people are becoming sceptical of the content available to them, brands and social media platforms must act as well. They will need to be careful about the content they publish and assure their audiences of its genuineness. In addition to keeping awareness high, platforms and brands will also need to invest in improving digital security in their own domains going forward.
By Abhishree Joshi