User:Allisonwd/sandbox

Revision as of 23:47, 27 April 2026


=== '''Video''' ===

==== Add ====
Beyond automated detection, viewers can rely on context and perceptual cues to identify deepfake content. Studies have found that viewers can note inconsistencies in body language, facial features, lighting, and audio-to-mouth synchronization. While these cues can help identify inauthentic content, they are not always reliable indicators of whether content is real. Evidence suggests that, under natural viewing conditions, viewers are no more likely to recognize deepfake videos than to correctly identify authentic ones. Content warnings have shown mixed results: the warnings did not improve detection accuracy and led some individuals to incorrectly judge authentic videos as deepfakes.{{Cite journal |last=Lewis |first=Andrew |last2=Vu |first2=Patrick |last3=Duch |first3=Raymond M. |last4=Chowdhury |first4=Areeq |date=2023-11 |title=Deepfake detection with and without content warnings |url=https://royalsocietypublishing.org/doi/10.1098/rsos.231214 |journal=Royal Society Open Science |language=en |volume=10 |issue=11 |doi=10.1098/rsos.231214 |issn=2054-5703 |pmc=10679876 |pmid=38026025}}


== Pornography ==

==== Add ====
Deepfake technology has become a tool for gender-based harassment and violence, disproportionately targeting women and marginalized groups. This has raised growing ethical and equity concerns, as such media is frequently created with the intent to intimidate and to inflict reputational harm.{{Cite journal |last=Lazard |first=Lisa |last2=Capdevila |first2=Rose |last3=Turley |first3=Emma L |last4=Gilfoyle |first4=Kathryn |last5=Stavropoulou |first5=Nelli |date=2025-11-21 |title=Deepfake Technology and Gender-Based Violence: A Scoping Review |url=https://journals.sagepub.com/doi/10.1177/15248380251384271 |journal=Trauma, Violence, & Abuse |language=en |doi=10.1177/15248380251384271 |issn=1524-8380}}