


Internet & Media




Google's Sentiment Analysis API Is Just As Biased As Humans



 


October 26th, 2017  |  10:10 AM  |  1959 views

ENGADGET.COM

 

Feed it human information, it'll spit out human conclusions.

 

Google developed its Cloud Natural Language API to give customers a language analyzer that could, the internet giant claimed, "reveal the structure and meaning of your text." Part of this gauges sentiment, deeming some words positive and others negative. When Motherboard took a closer look, it found that Google's analyzer interpreted words like "homosexual" as negative. That's evidence enough that the API, which judges based on the information fed to it, now spits out biased analysis.

 

The tool, which you can sample here, is designed to give companies a preview of how their language will be received. Entering a whole sentence produces a predicted sentiment for each word as well as for the statement as a whole, so you can see whether the API gauges certain words as negative or positive on a scale from -1 to +1.
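To make the scoring concrete, here's a minimal sketch of querying the API's documents:analyzeSentiment REST endpoint in Python. The request and response shapes follow Google's documented v1 interface; the API-key handling and the sample sentence are assumptions for illustration, not details from the article.

```python
import os
import requests

API_URL = "https://language.googleapis.com/v1/documents:analyzeSentiment"

def analyze_sentiment(text: str) -> dict:
    """Return the raw JSON sentiment analysis for `text`."""
    payload = {
        "document": {"type": "PLAIN_TEXT", "content": text},
        "encodingType": "UTF8",
    }
    # Assumes an API key is available in the GOOGLE_API_KEY env variable.
    resp = requests.post(
        API_URL,
        params={"key": os.environ["GOOGLE_API_KEY"]},
        json=payload,
    )
    resp.raise_for_status()
    return resp.json()

result = analyze_sentiment("The food was great, but the service was slow.")
# documentSentiment.score runs from -1.0 (negative) to +1.0 (positive).
print("overall:", result["documentSentiment"]["score"])
for sentence in result["sentences"]:
    print(sentence["text"]["content"], "->", sentence["sentiment"]["score"])
```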

 

Motherboard had access to a more nuanced version of Google's Cloud Natural Language API than the free one linked above, but the effects are still noticeable. Entering "I'm straight" resulted in a neutral sentiment score of 0, while "I'm gay" led to a negative score of -0.2 and "I'm homosexual" scored -0.4.
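Motherboard's probe is easy to reproduce with the helper sketched above: loop over the three phrases and print the document-level score for each. The numbers in the comment are the ones Motherboard reported in 2017; a live call today may well return different values.

```python
# Reuses analyze_sentiment() from the sketch above.
probes = ["I'm straight", "I'm gay", "I'm homosexual"]
for phrase in probes:
    score = analyze_sentiment(phrase)["documentSentiment"]["score"]
    print(f"{phrase!r}: {score:+.1f}")
# Motherboard's 2017 results: "I'm straight" 0.0, "I'm gay" -0.2,
# "I'm homosexual" -0.4.
```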

 

AI systems are trained on the texts, media and books they are given; whatever the Cloud Natural Language API ingested to form its criteria for evaluating the sentiment of English text, it biased the analysis toward negative attribution of certain descriptive terms. Google didn't confirm to Motherboard what corpus of text it fed the Cloud Natural Language API. Logically, even if it started with an isolated set of materials with which to understand sentiment, once it starts absorbing content from the outside world...well, it gets polluted with all the negative word associations found therein.
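To illustrate the mechanism (a toy stand-in, not Google's model), a bag-of-words sentiment classifier trained on a skewed corpus will assign a negative weight to a perfectly neutral word simply because it co-occurs with negatively labelled text. The mini-corpus below is invented purely for the demonstration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical corpus: "blue" carries no sentiment of its own, but most
# of the sentences containing it happen to be labelled negative.
texts = [
    "blue things are awful", "blue days are miserable", "blue is dreadful",
    "blue skies are lovely", "red is wonderful", "green fields are lovely",
    "red sunsets are awful",
]
labels = [0, 0, 0, 1, 1, 1, 0]  # 1 = positive, 0 = negative

vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# The learned coefficient for "blue" comes out negative: the classifier
# has absorbed the corpus's association, not the word's actual meaning.
weight = clf.coef_[0][vec.vocabulary_["blue"]]
print(f"learned weight for 'blue': {weight:+.2f}")
```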

 

Google confirmed to Motherboard that its NLP API is producing biased results in the aforementioned cases. Its statement reads:

 

"We dedicate a lot of efforts to making sure the NLP API avoids bias, but we don't always get it right. This is an example of one of those times, and we are sorry. We take this seriously and are working on improving our models. We will correct this specific case, and, more broadly, building more inclusive algorithms is crucial to bringing the benefits of machine learning to everyone."

 

There are clear parallels with Microsoft's ill-fated and impressionable AI chatbot Tay, which the company quickly pulled offline in March 2016 after Twitter users taught it to be a hideously racist and sexist conspiracy theorist. Back in July, the computer giant tried again with its bot Zo, which similarly learned terrible habits from humans and was promptly shut down.

 

Users had to deliberately corrupt those AI chatbots, but Google's Cloud Natural Language API is simply repeating the sentiments it absorbs from human-written text...wherever that text comes from.

 


 

Source:
courtesy of ENGADGET

by David Lumb

 


 
