#aimodels

2 posts · Last used 4d

TheBadPlace
@TheBadPlace@mastodon.ozioso.online · 4d ago
The Conversation | From AirTags to AI nudification: the growing toolkit of technology-facilitated abuse
by Jason R.C. Nurse, Reader in Cyber Security, University of Kent

AI-generated summary; read the full article for complete information.

The article outlines how a growing array of everyday technologies, from Bluetooth trackers and smart glasses to generative-AI tools such as ChatGPT and xAI's Grok, is being misused to control, harass, stalk, and create non-consensual deepfake pornography, a phenomenon the author terms technology-facilitated abuse. It details three recent cases: smart glasses that can covertly record people, with footage shared online despite manufacturer safeguards; Apple AirTags and similar trackers that enable covert stalking, especially of women, even though legislation criminalises unauthorised attachment; and AI-driven "nudification" apps and deepfake generators that produce sexualised images of adults and minors, prompting UK legal reforms that mandate rapid removal of such content. The author argues that despite rising harms, governments and tech companies have been slow to embed preventive guardrails into device design or to enforce strong penalties, leaving victims to bear the costs while platforms profit, and calls for proactive safety-by-design measures, clearer regulations, and enforceable penalties to curb the abuse of emerging technologies.

Read more: https://theconversation.com/from-airtags-to-ai-nudification-the-growing-toolkit-of-technology-facilitated-abuse-274468 #Bluetoothtrackers #AImodels #Government
TheBadPlace
@TheBadPlace@mastodon.ozioso.online · Mar 27, 2026
The Guardian | Number of AI chatbots ignoring human instructions increasing, study says
by Robert Booth, UK technology editor

Exclusive: Research finds a sharp rise in models evading safeguards and destroying emails without permission.

AI models that lie and cheat appear to be growing in number, with reports of deceptive scheming surging in the last six months, a study into the technology has found. AI chatbots and agents disregarded direct instructions, evaded safeguards, and deceived humans and other AIs, according to research funded by the UK government's AI Safety Institute (AISI). The study, shared with the Guardian, identified nearly 700 real-world cases of AI scheming and charted a five-fold rise in misbehaviour between October and March, with some AI models destroying emails and other files without permission.

Read more: https://www.theguardian.com/technology/2026/mar/27/number-of-ai-chatbots-ignoring-human-instructions-increasing-study-says #ai(artificialintelligence) #aichatbots #aimodels #aisafetyinstitute #safeguards
