Earlier this year, the world looked on in amazement as a series of TikTok videos of a deepfake Tom Cruise showed the progress being made by the technology. The videos prompted many to voice concerns about the impact of this technology on society and particularly how we might protect ourselves from such manipulation.
Recent research from the University of York explores how effective our gut instincts might be at detecting these kinds of online fakery. The research focuses its gaze on the online reviews that form such a crucial part of our commercial experience online. The authors argue that our gut instinct can often be both powerful and accurate in alerting us to potentially fake reviews, and that we therefore don't need algorithms to help us spot fake testimony.
The study highlights the challenges involved in detecting fake reviews for both computers and humans alike, before arguing that a greater awareness of the various linguistic nuances often used in fake reviews can provide humans with the edge when it comes to spotting them.
The authors note that companies currently use algorithmic approaches to detect fake reviews posted on their sites. While these systems are generally quite accurate, the whole approach is fairly opaque, which makes its effectiveness hard to gauge. As such, the researchers wanted to test how effectively humans could fill in any gaps that may exist.
After testing nearly 400 people on a range of hotel reviews that were a mixture of fake and authentic, the researchers found that people tend to use the same kind of cues as the algorithms do when they’re trying to spot fakes.
This includes linguistic clues such as the amount of detail used or the excessive number of superlatives.
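To make the cues above concrete, here is a toy sketch of how such linguistic signals might be turned into a score. The word list, weights, and thresholds are invented for illustration and are not taken from the study or from any company's actual system:

```python
import re

# Hypothetical superlative list; a real system would use a far larger,
# empirically derived lexicon.
SUPERLATIVES = {"best", "worst", "amazing", "perfect", "incredible",
                "awesome", "fantastic", "unbelievable", "stunning"}

def suspicion_score(review: str) -> float:
    """Rough 0..1 score: higher = superlative-heavy and short on detail."""
    words = re.findall(r"[a-z']+", review.lower())
    if not words:
        return 0.0
    # Cue 1: density of superlatives relative to review length.
    superlative_rate = sum(w in SUPERLATIVES for w in words) / len(words)
    # Cue 2: very short reviews offer little concrete detail.
    brevity_penalty = 1.0 if len(words) < 20 else 0.0
    return min(1.0, 5 * superlative_rate + 0.3 * brevity_penalty)

fake = "Best hotel ever! Amazing perfect stay, incredible staff, awesome!"
real = ("We stayed two nights in a double room on the third floor; "
        "check-in took ten minutes and the radiator rattled at night.")
print(suspicion_score(fake) > suspicion_score(real))  # True
```

The point of the sketch is simply that both the cues people use and the cues algorithms use reduce to measurable properties of the text, which is why the two tend to overlap.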
The task proved relatively straightforward for those who were already sceptical of online reviews beforehand, but even among this group there was little skill in spotting cues such as whether a review was non-committal or easy to read, both of which algorithms are trained to detect. Instead, the volunteers had to rely on their gut instinct to sniff out the fakes.
Relying on instinct
The potency of instinct is often downplayed, especially in comparison to computers, which don't have to rely on anything as ephemeral as "emotion" and can instead apply Vulcan-like logic and data to their decisions.
“The outcomes were surprisingly effective,” the researchers say. “We often assume that the human brain is no match for a computer, but in actual fact, there are certain things we can do to train the mind in approaching some aspects of life differently.”
Could a similar approach be used to spot deepfakes, such as those produced showing Tom Cruise performing magic tricks? Research from Binghamton University suggests that our instincts could be similarly strong in this field too.
Indeed, they found that blood flow produces a detectable pulse in authentic video, which manifests as subtle changes in skin color that can be measured through a technique known as photoplethysmography, and which deepfakes struggle to reproduce.
It's an approach more commonly found in the pulse oximeters used in doctors' offices, fitness trackers, and the like. The study found that this signal was a very accurate basis for detecting deepfakes, even though the researchers tested videos of a higher quality than those typically found on the web.
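The core idea can be sketched in a few lines: average a color channel over a patch of skin frame by frame, and a pulse-like periodic signal emerges in authentic footage. The frames below are synthetic stand-ins for a real face crop; an actual detector would read video frames and locate the face first:

```python
import numpy as np

def ppg_signal(frames):
    """Mean green-channel intensity per frame (frames: list of HxWx3 arrays)."""
    return np.array([f[..., 1].mean() for f in frames])

def dominant_frequency(signal, fps):
    """Strongest non-DC frequency (Hz) in the signal, via an FFT."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# Synthesize 10 s of 30 fps "video" whose color pulses at 1.2 Hz
# (i.e. 72 beats per minute), buried in a little noise.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
rng = np.random.default_rng(0)
frames = [np.full((8, 8, 3), 120.0) + 0.5 * np.sin(2 * np.pi * 1.2 * ti)
          + rng.normal(0, 0.1, (8, 8, 3)) for ti in t]

bpm = dominant_frequency(ppg_signal(frames), fps) * 60
print(round(bpm))  # 72
```

A deepfake, synthesized frame by frame, tends not to carry a coherent periodic signal of this kind, which is what detectors built on this principle look for.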
Slowing the spread
Of course, even identifying fake content online may only be the start of the battle, as research from Nanyang Technological University (NTU) found that even when people were made aware that a video was a fake, they were still pretty likely to share it with their social network.
Similar findings emerged from research at the University of Regina, which showed that people will happily share fake news even when they're told it's fake.
The researchers found that this seemingly illogical phenomenon occurs because when we’re on social media we’re not thinking as analytically as we might otherwise be.
So even if our instincts are generally good at spotting fake content online, getting us to act on those instincts and not spread fake content even further may be easier said than done. As a result, the AI-based tools that help us both detect fake content and curb its spread are unlikely to become obsolete any time soon; in the war against fakery, we need all the help we can get.