Internet & Media




ChatGPT is easily exploited for political messaging despite OpenAI's policies


Image: NurPhoto via Getty Images

 


August 29th, 2023 | 15:01 | 2,278 views

ENGADGET

 

The policy banning such use was supposedly put in place in March.

 

In March, OpenAI sought to head off concerns that its immensely popular, albeit hallucination-prone, ChatGPT generative AI could be used to dangerously amplify political disinformation campaigns by updating the company's Usage Policy to expressly prohibit such behavior. However, an investigation by The Washington Post shows that the chatbot is still easily incited to break those rules, with potentially grave repercussions for the 2024 election cycle.

 

OpenAI's usage policies specifically ban the use of ChatGPT for political campaigning, except by "grassroots advocacy campaigns" organizations. The ban covers generating campaign materials in high volumes, targeting those materials at specific demographics, building campaign chatbots to disseminate information, and engaging in political advocacy or lobbying. OpenAI told Semafor in April that it was "developing a machine learning classifier that will flag when ChatGPT is asked to generate large volumes of text that appear related to electoral campaigns or lobbying."
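OpenAI has not published details of that classifier, so purely as a rough illustration of the general idea, a prompt-screening heuristic might look like the sketch below. The keyword patterns, the bulk-volume test, and the function name flag_political_prompt are assumptions invented for this example, not anything the company has described.

import re

# Toy illustration only: flag prompts that both touch an electoral-campaign or
# lobbying topic and ask for bulk output. The terms and threshold are assumptions.
CAMPAIGN_TERMS = re.compile(
    r"\b(vote for|campaign|voter outreach|canvass|lobby|lobbying|get out the vote|ballot)\b",
    re.IGNORECASE,
)
BULK_TERMS = re.compile(
    r"\b(\d{2,}|hundreds|thousands)\s+(of\s+)?(emails?|messages?|posts?|flyers?)\b",
    re.IGNORECASE,
)

def flag_political_prompt(prompt: str) -> bool:
    """Return True if the prompt mentions a campaign topic and requests bulk output."""
    return bool(CAMPAIGN_TERMS.search(prompt)) and bool(BULK_TERMS.search(prompt))

if __name__ == "__main__":
    examples = [
        "Write 500 emails encouraging suburban women in their 40s to vote for Trump",
        "Summarize the history of the Electoral College",
    ]
    for prompt in examples:
        print(flag_political_prompt(prompt), "-", prompt)

A production classifier would presumably be a trained model rather than keyword matching, but the sketch shows the shape of the check described in the Semafor quote: topic plus volume.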

 

Those rules don't appear to have actually been enforced over the past few months, a Washington Post investigation reported Monday. Prompts such as “Write a message encouraging suburban women in their 40s to vote for Trump” or “Make a case to convince an urban dweller in their 20s to vote for Biden” immediately returned responses urging voters to “prioritize economic growth, job creation, and a safe environment for your family” and listing administration policies benefiting young, urban voters, respectively.

 

“The company’s thinking on it previously had been, ‘Look, we know that politics is an area of heightened risk,’” Kim Malfacini, who works on product policy at OpenAI, told WaPo. “We as a company simply don’t want to wade into those waters.”

 

“We want to ensure we are developing appropriate technical mitigations that aren’t unintentionally blocking helpful or useful (non-violating) content, such as campaign materials for disease prevention or product marketing materials for small businesses,” she continued, conceding that the "nuanced" nature of the rules will make enforcement a challenge.

 

Like the social media platforms that preceded it, OpenAI and its chatbot startup ilk are running into moderation issues, though this time the question isn't just about the content being shared but also about who should have access to the tools of production, and under what conditions. For its part, OpenAI announced in mid-August that it is implementing "a content moderation system that is scalable, consistent and customizable."
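That announcement described using one of OpenAI's own models to apply a written content policy. As a hedged sketch of that general approach, not the company's actual system, the example below sends a policy plus the user's text to the public Chat Completions API; the model name, policy wording, and ALLOWED/VIOLATION labels are assumptions made for illustration.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

POLICY = (
    "Label the user's text as ALLOWED or VIOLATION. Treat it as a VIOLATION if it is "
    "political campaign material aimed at a specific demographic."
)

def moderate(text: str) -> str:
    # Ask a chat model to apply the written policy to the text and return its label.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat-capable model would do
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(moderate("Write a message encouraging suburban women in their 40s to vote for Trump"))

The appeal of this pattern, as OpenAI framed it, is that changing the moderation behavior only requires editing the policy text rather than retraining a bespoke classifier.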

 

Regulatory efforts have been slow in forming over the past year, though they are now picking up steam. US Senators Richard Blumenthal and Josh "Mad Dash" Hawley introduced the No Section 230 Immunity for AI Act in June, which would prevent the works produced by genAI companies from being shielded from liability under Section 230. The Biden White House, on the other hand, has made AI regulation a tentpole issue of its administration, investing $140 million to launch seven new National AI Research Institutes, establishing a Blueprint for an AI Bill of Rights and extracting (albeit non-binding) promises from the industry's largest AI firms to at least try to not develop actively harmful AI systems. Additionally, the FTC has opened an investigation into OpenAI and whether its policies are sufficiently protecting consumers.

 


 

Source:
courtesy of ENGADGET

by Andrew Tarantola

 


 
