Have you ever seen the Netflix series Black Mirror? This British science fiction series depicts a realistic dystopian future in each episode in order to question our modern society and its new, digital technologies. An episode from 2015, called Be Right Back, explores the relationship between artificial intelligence (AI) and humans. It tells the story of a young woman called Martha, who is struggling with the death of her boyfriend Ash. Whilst mourning him, she finds out about a new technological service that allows her to text with an AI version of Ash, created by collecting all his text messages and social media posts. Although this may sound like a prediction of the future, the opposite is true.
Some casual conversations between Replika (white) and me (grey). Photograph: Rebecca Haselhoff.
Replika is not the only example of how artificial intelligence can be used to help people with their mental health. Although the concept of deepfakes is still a rather new one, it could become very useful in the future in the field of psychotherapy and psychological support. The Dutch documentary Deepfake Therapy, for instance, shows how such synthetic media technologies are used to help people who are mourning the death of their loved ones. Whilst supervised by a qualified therapist, people can now have realistic video conversations with a deepfake of someone who has passed away, which can give them consolation and comfort. Apart from helping people with grief, deepfakes could potentially also be used to help individuals with Post-Traumatic Stress Disorder (PTSD). Patients with PTSD are often treated by means of exposure therapy, in which they are encouraged to expose themselves, within a safe environment, to the objects or situations that they fear, in order to face those fears and overcome them. Deepfake technologies could improve exposure therapy by creating synthetic content that shows the specific person or situation that the patient is scared of.
First of all, clear privacy regulations should be set for digital technologies that connect with our emotions and cognitive conditions. Moreover, even though deepfakes might prove very useful for psychological support and mental help, it is essential to evaluate the ethical risks that come with using such new technologies during therapy sessions. For example, patients might want to use deepfakes simply to keep a deceased person alive, instead of using them as a temporary aid during their grieving process. It could also be argued that using AI to digitally resurrect a deceased person will only lead to the creation of empty vessels, filled with the emotions and behaviours deemed fitting for a grief therapy session and, thereby, fake identities that do not match the people who passed away. It will, therefore, be valuable to consider which exact uses of deepfakes could be beneficial or worth implementing for mental health reasons. It will also always be important to analyse whether the use of deepfakes might worsen the mental state of a patient in therapy. In addition, as was done in the documentary Deepfake Therapy, it will be essential that patients are accompanied by a qualified professional, such as a psychologist, who can guide them sensibly during those therapy sessions and support them during their recovery process. Nevertheless, exploring the potential of deepfakes and AI-generated content in the area of mental health could be of great value, and it will, therefore, be important to have open discussions on how such technologies could be used for psychological support. In the meantime, deepfakes can be used in our daily lives as a new and fun way of improving our mental wellness, for instance with an app like Replika.
The next blog post will discuss deepfakes and content creation: In what ways are deepfakes used for entertainment content? What opportunities could deepfakes provide for the movie industry? And what legal or moral issues might arise when using deepfakes for content creation?
My name is Rebecca Haselhoff and I’m an MA student of Media Studies: Digital Cultures at Maastricht University. I’m doing a research internship at Beeld en Geluid, focusing on deepfake technologies, the different ways in which deepfakes can be used, and the impacts they can have on society.
Subscribe to the newsletter Research of Sound and Vision and stay informed about all the meetings and activities we organise to make our collections accessible for research. The newsletter is in Dutch.