06/13/2024 / By Laura Harris
Educators are raising the alarm over the illicit use of artificial intelligence (AI) in schools, where students are using the technology to generate harmful content such as deepfake sexual images of classmates and fraudulent voice recordings and videos.
The rise of deepfake technology, which can create hyper-realistic but entirely fabricated images and recordings, is now facilitating a new form of bullying that schools find difficult to address.
In February, a scandal erupted when educators discovered that students at Beverly Vista Middle School in Beverly Hills, California, were circulating AI-generated deepfake photos of nude bodies with the real faces of female students superimposed on them.
Similarly, in November, male students at Westfield High School in New Jersey used AI to produce sexually explicit images of more than 30 female classmates without their consent.
Meanwhile, in 2021, Raffaela Spone, a mother who allegedly used explicit deepfake photos and videos to frame her daughter's cheerleading rivals, was charged with multiple counts of harassment. Spone allegedly tried to get members of the Victory Vipers competitive cheerleading team in Doylestown, Pennsylvania, kicked off the team by creating deepfake images and videos of them naked, drinking and smoking marijuana, then sending the material to their coach. (Related: REPORT: Chinese operatives use AI-generated images to spread disinformation and provoke discussion on divisive political issues targeting America.)
Schools and even AI experts face significant hurdles in responding to such incidents, but they acknowledge that they have a responsibility to do so.
Claudio Cerullo, the founder of TeachAntiBullying.org, stressed the need for collaboration with law enforcement and the development of new policies to combat these AI-driven threats. Cerullo, who also serves on Vice President Kamala Harris’s task force on cyberbullying, noted the increased risk of teen suicide linked to cyberbullying and the urgent need for strong preventive measures.
“We need to keep up and we have a responsibility as folks who are supporting educators and supporting parents and families as well as the students themselves to help them understand the complexity of handling these situations so that they understand the context, they can learn to empathize and make ethical decisions about the use and application of these AI systems and tools,” said Pati Ruiz, the senior director of education technology and emerging tech at Digital Promise.
In February, the Federal Trade Commission proposed comprehensive protections against deepfakes to curb the growing threat of AI-driven impersonations. The proposed measures seek to outlaw the creation and distribution of deepfake content, amid rising concerns about the misuse of this technology.
In response, the Department of Justice appointed a dedicated AI officer to deepen its understanding and regulation of AI technologies.
Meanwhile, in May, Senate Majority Leader Chuck Schumer (D-NY) endorsed a detailed bipartisan report outlining the critical areas Congress must address concerning AI, particularly deepfakes. The bipartisan DEFIANCE Act, introduced in both the House and Senate in March, seeks to create a federal civil right of action for victims of nonconsensual AI pornography, allowing them to seek justice in court.
However, the conversation becomes particularly complex when considering the involvement of minors in creating or distributing deepfake content.
“I think treating this as a problem specific to child pornography or deepfake nudes is actually missing the forest for the trees,” said Alex Kotran, the co-founder and CEO of the AI Education Project. “I think those issues are where deepfakes really are coming to a head — it’s like the most visceral — but I think the bigger challenge is how do we build sort of like the next iteration of digital literacy and digital citizenship with a generation of students that is going to have at their disposal these really powerful tools.
“We have to try to get ahead of that challenge because I think it’s really undermined kids’ mental well-being, and I see very few organizations or people that are really focusing in on that. And I just worry that this is no longer a future state, but very much like a clear and present danger that needs to be sort of taken ahead of.”
Visit FutureScienceNews.com for more on the dangers of AI technology.
Watch this clip from Al Jazeera discussing and warning about the spread of deepfakes in politics.
This video is from the MissKitty channel on Brighteon.com.