It may sound like something out of a “Black Mirror” episode, but advances in artificial intelligence (AI) now allow people to interact and hold conversations with loved ones from beyond the grave.
Several companies, including California-based HereAfter AI, are using technology similar to what powers AI chatbots and voice assistants to create virtual versions of actual humans. Available as a mobile app, HereAfter AI uses photographs, video interviews, and voice recordings to create interactive avatars capable of conversation.
You, Only Virtual, based in Los Angeles, strives to “recreate the relationship dynamics between you and your loved one through conversation, enabling authentic communication upon one’s passing.”
Tel Aviv, Israel-based Hour One allows anyone to create a fully digital clone of themselves that can speak “on camera” without any input from live humans.
AI Overcoming Creativity Hurdles
Digitally cloning human thought patterns has been considered the Holy Grail of artificial intelligence since Alan Turing introduced his eponymous test in the 1950 paper “Computing Machinery and Intelligence.”
One of the most critical stumbling blocks data scientists have encountered in developing artificial intelligence involves the two modes of creative thought: convergent and divergent thinking.
Convergent thinking is figuring out concrete solutions to problems using memory and logic. Divergent thinking is the process of exploring multiple possibilities to solve a problem and is much more free-flowing and spontaneous.
Previous generations of AI performed poorly at divergent thinking, but theory and practice have advanced considerably. In 2016, an AI built at New York University and named Benjamin wrote a garbled script for “Sunspring,” a science-fiction short film.
Since then, AI technology has been used with increasing frequency and skill to create everything from games to music to journalism. In a widely covered incident in August, Capitol Music Group signed, and quickly dropped, an AI rapper named FN Meka after criticism that the avatar perpetuated racial stereotypes.
Technology Builds On Large Language Models
Building convincing digital replicas of humans relies on large AI language models (LLMs), which can decipher meaning from prompts and respond with coherent text. Two of the most popular are GPT-3 from OpenAI and Google’s LaMDA, both of which can be customized to mimic unique conversational patterns.
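To get a feel for how that customization can work, here is a minimal sketch of few-shot “persona” prompting against a GPT-3 completion model, using the pre-1.0 openai Python client. The persona examples, prompt wording, and function name are illustrative placeholders, not any vendor’s actual product code.

```python
# A minimal sketch of few-shot "persona" prompting with OpenAI's GPT-3
# completion API (openai-python < 1.0). The recorded snippets and persona
# details below are hypothetical placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder key

# A handful of the person's real messages, used as style examples.
PERSONA_EXAMPLES = """\
Grandpa: Well, back on the farm we never worried about the weather, we just got on with it.
Grandpa: Your grandmother made the best rhubarb pie in three counties, and she knew it.
"""

def ask_persona(question: str) -> str:
    """Ask the simulated persona a question and return its reply."""
    prompt = (
        "The following is a conversation with Grandpa, who speaks in the "
        "style shown in the examples.\n\n"
        f"{PERSONA_EXAMPLES}\n"
        f"Visitor: {question}\nGrandpa:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # GPT-3 family completion model
        prompt=prompt,
        max_tokens=120,
        temperature=0.7,           # some spontaneity, not pure randomness
        stop=["Visitor:"],         # stop before the model writes the next question
    )
    return response.choices[0].text.strip()

print(ask_persona("What was life like on the farm?"))
```

Commercial services layer far more data and safeguards on top of this basic pattern, but the core idea is the same: condition the model on a person’s recorded speech so its replies echo their voice.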
Paired with advances in voice cloning, these models have also led to the creation of deepfakes: increasingly realistic but completely false versions of actual events and statements.
Ethical Concerns Remain
While AI digital cloning has been proposed as a way of helping people through the grieving process, it raises issues of consent and privacy, both pre- and post-mortem.
Ownership of intellectual property and data also falls into a legal gray area: does content created by AI have the same legal rights and philosophical status as content produced by actual humans?
Some critics see the use of this technology on memories, beliefs, and personal preferences as a gross invasion of privacy. It can also be seen as exploitative, as evidenced by the furor surrounding the 1997 Dirt Devil commercial featuring Fred Astaire, in which the dancer’s likeness was licensed by his estate for commercial purposes a decade after his death.
Technology Offers Digital Marketing Opportunities
Los Angeles-headquartered StoryFile offers conversational videos, including an interactive version of William Shatner on its homepage, showcasing some of the possibilities this technology creates.
Marketing companies could run promotions featuring dead celebrities, but the more enticing use is predicting customer choices. By clustering consumer demographics and purchase history at big-data scale, marketers can build increasingly accurate consumer models.
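As a rough illustration of that kind of segmentation, the sketch below groups a handful of hypothetical customers with k-means clustering in scikit-learn; the features and figures are made up for the example.

```python
# A minimal sketch of segmenting consumers with k-means clustering using
# scikit-learn. The feature names and values are hypothetical placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Toy rows: [age, annual_spend_usd, purchases_per_year]
customers = np.array([
    [23,  400, 12],
    [25,  520, 15],
    [41, 2300,  4],
    [45, 2100,  5],
    [63,  150,  2],
    [67,  180,  1],
])

# Scale features so age and spend contribute comparably to distance.
scaled = StandardScaler().fit_transform(customers)

# Group the customers into three segments.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)

for row, segment in zip(customers, kmeans.labels_):
    print(f"age={row[0]:>2}, spend=${row[1]:>4}, purchases={row[2]:>2} -> segment {segment}")
```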
In an expanded version of the personalization already used by companies like Meta and Google, the same approach could produce a digital clone designed to mimic the choices you would make, increasing the relevancy of search results or marketing campaigns at the cost of personal privacy.
Featured Image: igor kisselev/Shutterstock