Janine Machin, Technology correspondent, BBC East
A police force has said it is the first to work with phone companies to educate young people and their families about the dangers of artificial intelligence (AI).
Essex Police has launched an awareness video with EE and its parent company BT and will offer AI safety advice at its stores in the county.
AI has been used to manipulate videos, images and audio so that they still appear real – known as deepfakes – for the purposes of online sexual abuse and spreading disinformation.
Det Insp Emma Portfleet said while AI apps could be used for “positive reasons, they also have the potential for immeasurable harm”.
In the video, a young actor talks through the advantages of AI, describing it as “like magic”.
The boy later appears filming an older man using a walking frame on a High Street, and he uses the technology to adapt the footage, making it appear as if the man is pirouetting like a ballet dancer.
He tells the audience that AI can be used “to spread lies and invade your privacy”.
Essex Police says it deals with the results of AI videos daily.
In April, 26-year-old Brandon Taylor, a barman from Braintree, was jailed for five years for creating sexually explicit images depicting real women.
He took photos from their social media accounts and used AI to manipulate them before sharing them on websites, some of which glorified rape.
The sharing of sexually explicit deepfake images is a criminal offence under the Online Safety Act 2023, and this year the government announced the law would be toughened.
The problems caused by deepfakes are varied and widespread.
The Conservative MP for Mid Norfolk, George Freeman, reported a deepfake to police in October.
The bogus video, which looked and sounded like the MP, appeared to show him announcing he was defecting to Reform UK.
Mr Freeman is calling for changes in the law to protect victims.
Det Insp Portfleet, who leads the Essex Police online investigation team, said the online and real worlds “are merging so fast and it’s so hard for people to know what’s fake and what’s real”.
“The campaign is just one way to get ahead of the problem; we will always investigate crime, but we would far rather stop it happening in the first place,” she said.
From February, EE mobile phone stores in Essex will offer dedicated appointments to families who want to learn more about AI safety.
“We understand that growing up in an online world can be difficult,” said EE retail director Asif Aziz.
“We hope to help young people and their parents better navigate the online world with confidence and positivity.”
The Cambridge-based Internet Watch Foundation is responsible for finding, removing and blocking online images of child sexual abuse, including those generated with AI.
Chief executive Kerry Smith welcomed the campaign.
“The harm is real, and children feel the same shame and guilt as if it was a real photo,” she said.
She said children could use the IWF and Childline Report Remove tool to confidentially report sexual imagery of themselves that is online.
“We can take steps to get it removed as swiftly as possible.”