Deep fake videos
9/3/2023

Deepfake technology has already proven itself a troublingly effective means of spreading misinformation, but a new study indicates the generative AI programs' impacts can be more complicated than initially feared. According to findings published earlier this month in PLOS One, deepfake clips can alter a viewer's memories of the past, as well as their perception of events.

To test the forgeries' efficacy, researchers at University College Cork in Ireland asked nearly 440 people to watch deepfaked clips from falsified remakes of films, such as Will Smith in The Matrix, Chris Pratt as Indiana Jones, Brad Pitt and Angelina Jolie in The Shining, and Charlize Theron replacing Brie Larson in Captain Marvel. From there, the participants watched clips from the actual remakes of movies like Charlie and the Chocolate Factory, Total Recall, and Carrie. Meanwhile, some volunteers were also provided with text descriptions of the nonexistent remakes.

Upon review, nearly 50 percent of participants claimed to remember the deepfaked remakes coming out in theaters. Of those, many believed these imaginary movies were actually better than the originals. But as disconcerting as those numbers may be, using deepfakes to misrepresent the past did not appear to be any more effective than simply reading textual recaps of the imaginary movies.

Speaking with The Daily Beast on Friday, misinformation researcher and study lead author Gillian Murphy did not believe the findings to be "especially concerning," given that they don't indicate a "uniquely powerful threat" posed by deepfakes compared to existing methods of misinformation. That said, they conceded deepfakes could be better at spreading misinformation if they manage to go viral or remain memorable over a long period of time.

A key component of these bad-faith deepfakes' potential success is what's known as motivated reasoning: the tendency for people to unintentionally allow preconceived notions and biases to influence their perceptions of reality. If a person is shown supposed evidence in support of existing beliefs, they are more likely to take that evidence at face value without much scrutiny. As such, you are more likely to believe a deepfake that favors your socio-political leanings, whereas you may be more skeptical of one that appears to "disprove" your argument. Motivated reasoning is bad enough on its own, but deepfakes could easily exacerbate this commonplace logical fallacy if people aren't aware of such issues. Improving the public's media literacy and critical reasoning skills is key to ensuring people remember a Will Smith-starring Matrix as an interesting Hollywood "What If?" instead of fact.

We already know deepfakes can be quite believable, but just how believable are they? Kaggle's Deepfake Detection Challenge (DFDC) recently sought an algorithmic answer to that question. The description on the Kaggle website explains, "AWS, Facebook, Microsoft, the Partnership on AI's Media Integrity Steering Committee, and academics have come together to build the Deepfake Detection Challenge (DFDC)." As for whether or not such a project would have been better than the original: like many deepfakes, it all comes down to how you look at it.