Photo credit: Possessed Photography/Unsplash

Artificial intelligence improves, deepfake risks continue

More than two years after the Senate passed the Deepfake Report Act of 2019, deepfake incidents keep popping up. If you’re unfamiliar with the issue, deepfakes are fabricated audio and video created with artificial intelligence (AI), typically deep learning models that can convincingly swap faces or mimic voices.

Deepfakes are often used to make it look as though someone did something they never did, and one was recently linked to a Pennsylvania mother named Raffaela Spone. CNN reports that Spone was charged with three counts of cyber harassment of a child and three counts of harassment, all misdemeanors. Why? She allegedly used deepfake technology to cyberbully three girls from her daughter’s cheerleading team by depicting them in compromising situations: nude, drinking alcohol and vaping.

Is artificial intelligence making people’s lives worse?

AI has become its own worst enemy. While the tech and business worlds will all but certainly keep making the technology more sophisticated, hackers are usually lurking nearby and are just as tech savvy.

In all fairness, AI isn’t always linked to negativity. It’s been useful for moderating YouTube video comments, powering personal assistants (Siri and Alexa), enabling self-driving cars, and improving customer service for phone representatives. It’s also had admirable results helping medical professionals identify breast cancer.

However, the smarter it gets, the more it can be both beneficial and detrimental to businesses and technology companies. While Spone is a rare example of an overzealous mom hacker, the bigger issue is that deepfake tactics could lead viewers to believe that actual video footage is fake. Used on a larger scale, this could also affect the criminal justice system, background checks and business reputations. If technology itself has a hard time distinguishing a fake from a legitimate video, how can we trust our own eyes?

Artificial intelligence has a complicated relationship with minorities

There’s already a love-hate relationship between African Americans and facial recognition software. For many years, it’s been widely reported that African Americans are two times more likely to be targeted and arrested than members of any other race in the United States. As arrests increase, so do mug shot photos. And those mug shot databases and surveillance footage too often match melanin-rich people to photos or videos of someone else entirely. For all of AI’s advancements, reliably recognizing features on darker skin is not one of them.

And being misidentified by a human is traumatic enough. Now add in AI advancements, and it becomes twice as hard for someone to clear their name via video. So what exactly can a user do to get ahead of a risky situation like this? In Spone’s case, investigators traced the IP address linked to the account posting the deepfake videos to figure out the identity of the hacker.
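Tracing the poster was a matter of routine network forensics rather than anything AI-specific. As a loose, purely illustrative sketch of one early step, the Python snippet below runs a reverse-DNS lookup on an IP address pulled from a platform’s logs; the hostname that comes back often names the internet provider, which investigators can then approach for subscriber records. The address shown is a reserved documentation placeholder, not one tied to this case.

```python
import socket

def reverse_dns(ip_address: str) -> str:
    """Return the PTR hostname for an IP; it often names the ISP or hosting provider."""
    try:
        hostname, _aliases, _addresses = socket.gethostbyaddr(ip_address)
        return hostname
    except socket.herror:
        return "no PTR record found"

# 203.0.113.7 sits in a block reserved for documentation and is used here as a stand-in.
print(reverse_dns("203.0.113.7"))
```

In practice this only narrows things down to a network; identifying an individual, as in Spone’s case, requires legal process and the provider’s own records.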

Sharing is caring, or is that the case with AI?

What happens when users think they’re being socially conscious by sharing information to keep others informed, but end up spreading malicious content instead? It’s already happening. Facebook banned certain types of deepfake videos in early 2020.

Although the social media platform was criticized for censoring videos that “looked” fake regardless of whether the moderators had proof, it chose to do so anyway with a general rule of thumb: remove videos edited or synthesized with artificial intelligence (or machine learning) in ways “that aren’t apparent to an average person and would likely mislead” viewers. Whether the users who shared deepfake videos knew they were fake is still unclear.

Some deepfake videos get a pass, though. Satire and comedy get more leeway: Jordan Peele’s well-known deepfake of former President Barack Obama stayed put because it is obvious to the “average” person, and Peele made clear throughout the video what he was doing.

So how is the tech industry trying to get a handle on deepfake footage and photos whose creators have no intention of letting viewers know they’re fake? The National Science Foundation and the National Institute of Standards and Technology were assigned to study and accelerate the development of technology that can help detect these false videos. The Pentagon, through the Defense Advanced Research Projects Agency (DARPA), is also working with several research institutions (such as SRI International in Menlo Park, California) to develop technology to spot a deepfake.
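The agencies’ actual systems aren’t public in detail, but a common starting point in the research literature is a frame-level classifier that scores each video frame as real or synthetic. The PyTorch sketch below is a toy stand-in for that idea, with a made-up architecture, random tensors in place of real frames, and no training; it illustrates the shape of the approach, not any agency’s method.

```python
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Toy CNN that scores a single video frame: outputs near 1.0 suggest a fake."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # collapse each feature map to one value
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        x = self.features(x).flatten(1)       # (batch, 32)
        return torch.sigmoid(self.head(x))    # per-frame probability of being fake

# Random 224x224 RGB tensors stand in for decoded video frames; an untrained
# model's scores are meaningless until it learns from labeled real/fake data.
frames = torch.rand(4, 3, 224, 224)
print(FrameClassifier()(frames))
```

Real detectors go much further, looking for blending artifacts around the face, inconsistencies between frames, and audio-video mismatches, but the pipeline still comes down to scoring media against a model trained on known fakes.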
