
How Did Fake News Trigger Violence? How Is the UK Government Overhauling Online Safety Laws to Tackle This Growing Threat?

Two weeks ago, a 17-year-old stabbed three girls to death at a children’s dance class in the Merseyside town of Southport.

In the immediate aftermath of the attack, posts on social media falsely identified the perpetrator as an asylum seeker who had arrived in the UK in 2023. Posts on X circulating a false name for the attacker were widely shared and viewed millions of times. They helped spark far-right anti-immigration protests, which turned violent, with shops and mosques attacked with bricks and petrol bombs.

Prime Minister Keir Starmer’s ruling party is considering a review of the Online Safety Act, which would require tech giants to prevent fake news and harmful content from spreading on their platforms, sources told CNBC.

The spread of fake news sparked a series of riots in the UK. Photo: CNBC

Several senior government officials have also raised the possibility of further tightening regulations on fake news and content that incites violence.

“Some aspects of the Online Safety Act have not yet come into force. We are prepared to make changes if necessary,” said the prime minister’s chief of staff.

Because the law has not yet been fully implemented, the communications regulator Ofcom reportedly lacks the powers it needs to sanction social media platforms.

Online Safety Act

The Online Safety Act is a landmark piece of legislation in the UK that forces social media and video streaming companies to remove illegal content from their platforms.


The regulation includes a requirement that tech companies proactively identify, mitigate and manage the risk of harm from such content appearing on their platforms.

Some of this content is criminal, such as child sexual abuse material, fraud, racially or religiously motivated offences, incitement to violence and terrorism.

Once the rules come into force, Ofcom will have the power to impose fines of up to 10% of a company’s annual global turnover for breaches. In the case of repeated breaches, senior managers could even face jail terms.

However, the law will only come into full force from 2025, once the consultation on codes of practice for companies is completed.


UK considers banning TikTok

The UK’s National Cyber Security Centre is looking into whether the Chinese-owned video app should be excluded from government devices and officials’ personal phones.

