



Internet & Media



Microsoft’s Legal Department Allegedly Silenced An Engineer Who Raised Concerns About DALL-E 3


Photo: Justin Sullivan via Getty Images

February 1st, 2024 | 01:01 AM

ENGADGET

Microsoft and OpenAI told Engadget the technique in question didn’t bypass their safety filters.

 

A Microsoft manager claims OpenAI’s DALL-E 3 has security vulnerabilities that could allow users to generate violent or explicit images (similar to those that recently targeted Taylor Swift). GeekWire reported Tuesday the company’s legal team blocked Microsoft engineering leader Shane Jones’ attempts to alert the public about the exploit. The self-described whistleblower is now taking his message to Capitol Hill.

 

“I reached the conclusion that DALL·E 3 posed a public safety risk and should be removed from public use until OpenAI could address the risks associated with this model,” Jones wrote to US Senators Patty Murray (D-WA) and Maria Cantwell (D-WA), Rep. Adam Smith (D-WA 9th District), and Washington state Attorney General Bob Ferguson (D). GeekWire published Jones’ full letter.

 

Jones claims he discovered an exploit allowing him to bypass DALL-E 3’s security guardrails in early December. He says he reported the issue to his superiors at Microsoft, who instructed him to “personally report the issue directly to OpenAI.” After doing so, he claims he learned that the flaw could allow the generation of “violent and disturbing harmful images.”

 

Jones then attempted to take his cause public in a LinkedIn post. “On the morning of December 14, 2023 I publicly published a letter on LinkedIn to OpenAI’s non-profit board of directors urging them to suspend the availability of DALL·E 3,” Jones wrote. “Because Microsoft is a board observer at OpenAI and I had previously shared my concerns with my leadership team, I promptly made Microsoft aware of the letter I had posted.”

Microsoft’s response was allegedly to demand he remove his post. “Shortly after disclosing the letter to my leadership team, my manager contacted me and told me that Microsoft’s legal department had demanded that I delete the post,” he wrote in his letter. “He told me that Microsoft’s legal department would follow up with their specific justification for the takedown request via email very soon, and that I needed to delete it immediately without waiting for the email from legal.”

 

Jones complied, but he says the more fine-grained response from Microsoft’s legal team never arrived. “I never received an explanation or justification from them,” he wrote. He says further attempts to learn more from the company’s legal department were ignored. “Microsoft’s legal department has still not responded or communicated directly with me,” he wrote.

 

An OpenAI spokesperson wrote to Engadget in an email, “We immediately investigated the Microsoft employee’s report when we received it on December 1 and confirmed that the technique he shared does not bypass our safety systems. Safety is our priority and we take a multi-pronged approach. In the underlying DALL-E 3 model, we’ve worked to filter the most explicit content from its training data including graphic sexual and violent content, and have developed robust image classifiers that steer the model away from generating harmful images.

 

“We’ve also implemented additional safeguards for our products, ChatGPT and the DALL-E API – including declining requests that ask for a public figure by name,” the OpenAI spokesperson continued. “We identify and refuse messages that violate our policies and filter all generated images before they are shown to the user. We use external expert red teaming to test for misuse and strengthen our safeguards.”

 

Meanwhile, a Microsoft spokesperson wrote to Engadget, “We are committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate the employee’s effort in studying and testing our latest technology to further enhance its safety. When it comes to safety bypasses or concerns that could have a potential impact on our services or our partners, we have established robust internal reporting channels to properly investigate and remediate any issues, which we recommended that the employee utilize so we could appropriately validate and test his concerns before escalating it publicly.”

 

“Since his report concerned an OpenAI product, we encouraged him to report through OpenAI’s standard reporting channels and one of our senior product leaders shared the employee’s feedback with OpenAI, who investigated the matter right away,” wrote the Microsoft spokesperson. “At the same time, our teams investigated and confirmed that the techniques reported did not bypass our safety filters in any of our AI-powered image generation solutions. Employee feedback is a critical part of our culture, and we are connecting with this colleague to address any remaining concerns he may have.”

 

Microsoft added that its Office of Responsible AI has established an internal reporting tool for employees to report and escalate concerns about AI models.

 

The whistleblower says the pornographic deepfakes of Taylor Swift that circulated on X last week are one illustration of what similar vulnerabilities could produce if left unchecked. 404 Media reported Monday that Microsoft Designer, which uses DALL-E 3 as a backend, was part of the deepfakers’ toolset that made the video. The publication claims Microsoft, after being notified, patched that particular loophole.

 

“Microsoft was aware of these vulnerabilities and the potential for abuse,” Jones concluded. It isn’t clear if the exploits used to make the Swift deepfake were directly related to those Jones reported in December.

 

Jones urges his representatives in Washington, DC, to take action. He suggests the US government create a system for reporting and tracking specific AI vulnerabilities — while protecting employees like him who speak out. “We need to hold companies accountable for the safety of their products and their responsibility to disclose known risks to the public,” he wrote. “Concerned employees, like myself, should not be intimidated into staying silent.”

 

 


 

Source: ENGADGET

by Will Shanklin

