For a $600-an-hour expert witness fee, Stanford professor Jeff Hancock, whose biography claims he's "well-known for his research on how people use deception with technology," apparently used deception with technology by citing numerous academic works that do not appear to exist, the Minnesota Reformer reports.
At the behest of Minnesota Attorney General Keith Ellison, Hancock recently submitted an affidavit supporting new legislation that bans the use of so-called "deep fake" technology to influence an election. The law is being challenged in federal court by a conservative YouTuber and Republican state Rep. Mary Franson of Alexandria for violating First Amendment free speech protections.

Hancock's expert declaration in support of the deep fake law cites numerous academic works. But several of those sources do not appear to exist, and the lawyers challenging the law say they appear to have been made up by artificial intelligence software like ChatGPT.

As an example, the declaration cites a study called "The Influence of Deepfake Videos on Political Attitudes and Behavior," said to have been published in the Journal of Information Technology & Politics in 2023. However, there is no study by that name in that journal, and academic databases have no record of it existing.
The specific journal pages referenced are from two completely different articles.
"The citation bears the hallmarks of being an artificial intelligence (AI) 'hallucination,' suggesting that at least the citation was generated by a large language model like ChatGPT," wrote the plaintiffs' attorneys. "Plaintiffs do not know how this hallucination wound up in Hancock's declaration, but it calls the entire document into question."
Libertarian law professor Eugene Volokh found another apparently fake entry: a study titled "Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance," which doesn't appear to exist.
According to the Reformer, if the citations were fabricated by AI, Hancock's entire 12-page declaration may have been cooked up the same way.
According to Frank Bednarz, an attorney for the plaintiffs, supporters of the deep fake law have argued that "unlike other speech online, AI-generated content supposedly cannot be countered by fact-checks and education." However, he said, "by calling out the AI-generated fabrication to the court, we demonstrate that the best remedy for false speech remains true speech, not censorship."