






Internet & Media


Facebook Failed To Stop Test Ads From Threatening Midterm Election Workers


Jon Fingas

 


December 2nd, 2022  |  10:51 AM

CALIFORNIA, UNITED STATES

 

The social network's automatic moderation didn't spot some obvious red flags.

 

Meta's election integrity efforts on Facebook may not have been as robust as claimed. Researchers at New York University's Cybersecurity for Democracy and the watchdog Global Witness have revealed that Facebook's automatic moderation system approved 15 out of 20 test ads threatening election workers ahead of last month's US midterms. The experiments were based on real threats and used "clear" language that should have been easy to catch. In some cases, the social network even approved ads after only superficial changes: the research team just had to remove profanity and fix spelling to get past initial rejections.

 

The investigators also tested TikTok and YouTube. Both services stopped all threats and banned the test accounts. In an earlier experiment before Brazil's election, Facebook and YouTube allowed all election misinformation sent during an initial pass, although Facebook rejected up to 50 percent in follow-up submissions.

 

In a statement to Engadget, a spokesperson said the ads were a "small sample" that didn't represent what users saw on platforms like Facebook. The company maintained that its ability to counter election threats "exceeds" that of rivals, but supported that claim only by citing the resources it commits to stopping violent threats, not evidence of those resources' effectiveness.

 

The ads wouldn't have done damage, as the experimenters had the power to pull them before they went live. Still, the incident highlights the limitations of Meta's partial dependence on AI moderation to fight misinformation and hate speech. While the system helps Meta's human moderators cope with large amounts of content, it also risks greenlighting ads that might not be caught until they're visible to the public. That could not only let threats flourish, but invite fines from the UK and other countries that plan to penalize companies which don't quickly remove extremist content.

 


 

Source: courtesy of ENGADGET
Photo by Sean Rayford/Getty Images

 
