
 


 











GPT-4 performed close to the level of expert doctors in eye assessments


Image credit: boonchai wedmakawand via Getty Images

 


 April 19th, 2024  |  00:56 AM  |   1064 views

ENGADGET

 

It scored higher than unspecialized junior doctors and trainee ophthalmologists.

 

As large language models (LLMs) continue to advance, so do questions about how they can benefit society in areas such as medicine. A recent study from the University of Cambridge's School of Clinical Medicine found that OpenAI's GPT-4 performed nearly as well in an ophthalmology assessment as experts in the field, the Financial Times first reported.

 

In the study, published in PLOS Digital Health, researchers tested the LLM, its predecessor GPT-3.5, Google's PaLM 2 and Meta's LLaMA with 87 multiple-choice questions. Five expert ophthalmologists, three trainee ophthalmologists and two unspecialized junior doctors received the same mock exam. The questions came from a textbook used to trial trainees on everything from light sensitivity to lesions. The textbook's contents aren't publicly available, so the researchers believe the LLMs couldn't have been trained on them previously. ChatGPT, equipped with GPT-4 or GPT-3.5, was given three chances to answer definitively; otherwise, its response was marked as null.
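The three-attempt grading rule described above can be sketched roughly as follows. This is a hypothetical illustration, not the study's actual code: `ask_model` is an assumed stand-in for whatever API call the researchers used, and single-letter answer choices are assumed.

```python
# Hypothetical sketch of the grading protocol: a model gets up to three
# tries to give a definitive multiple-choice answer; if it never does,
# the response is scored as null (None).
from typing import Callable, Optional

VALID_CHOICES = {"A", "B", "C", "D"}  # assumed answer format

def grade_question(ask_model: Callable[[str], str], question: str,
                   max_attempts: int = 3) -> Optional[str]:
    """Return the model's definitive choice, or None after three tries."""
    for _ in range(max_attempts):
        reply = ask_model(question).strip().upper()
        if reply in VALID_CHOICES:
            return reply
    return None  # marked as null
```

A null response counts against the model's score just like a wrong answer, which keeps the comparison with the human test-takers straightforward.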

 

GPT-4 scored higher than the trainees and junior doctors, getting 60 of the 87 questions right. While this was significantly higher than the junior doctors' average of 37 correct answers, it only just beat the three trainees' average of 59.7. Although one expert ophthalmologist answered only 56 questions accurately, the five experts averaged 66.4 correct answers, beating the machine. PaLM 2 scored 49, and GPT-3.5 scored 42. LLaMA scored the lowest at 28, falling below the junior doctors. Notably, these trials occurred in mid-2023.
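For context, the raw scores above translate into accuracy percentages as a quick back-of-the-envelope calculation, using only the figures quoted in this article:

```python
# Reported scores out of 87 multiple-choice questions (mid-2023 trials),
# as quoted in the article above.
TOTAL = 87
scores = {
    "Expert ophthalmologists (avg)": 66.4,
    "GPT-4": 60,
    "Trainee ophthalmologists (avg)": 59.7,
    "PaLM 2": 49,
    "GPT-3.5": 42,
    "Junior doctors (avg)": 37,
    "LLaMA": 28,
}

def accuracy(correct: float, total: int = TOTAL) -> float:
    """Percentage of questions answered correctly, to one decimal place."""
    return round(correct / total * 100, 1)

for name, score in scores.items():
    print(f"{name}: {accuracy(score)}%")
```

On this scale GPT-4 lands at roughly 69% accuracy, a few points below the expert average of about 76% and well clear of the junior doctors' roughly 43%.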

 

While these results have potential benefits, there are also quite a few risks and concerns. The researchers noted that the study offered a limited number of questions, especially in certain categories, so actual performance might vary. LLMs also have a tendency to "hallucinate," or make things up. That's one thing if it's an irrelevant fact, but falsely claiming there's a cataract or cancer is another story. As in many instances of LLM use, the systems also lack nuance, creating further opportunities for inaccuracy.

 


 

Source: courtesy of ENGADGET

by Sarah Fielding

 


 
