A new study reveals that real physicians still outperform online symptom-checker applications. The study is the first to compare the diagnostic performance of human doctors with that of such computer programs.

A research paper published in the journal JAMA Internal Medicine by authors from Harvard Medical School underscores the reliability of physicians' diagnoses compared with self-checking online applications. Although these applications run sophisticated algorithms designed to mimic human decision-making, the researchers found a wide gap between the applications' performance and that of human doctors.


The research team gave 45 cases to 234 doctors specializing in internal medicine for evaluation. The cases ranged from simple, common diseases to complex, uncommon ailments. The doctors were instructed to identify the most likely disease and to provide two other possible diagnoses for each patient. But here's the catch: to reduce bias between computers and doctors, the physicians could assess their "clinical vignettes" only from patient histories and symptoms, with no further examinations such as physical, blood or laboratory tests.

Comparing the two, the symptom-checker applications named the correct diagnosis first 34 percent of the time, while the physicians did so 72 percent of the time, more than twice the accuracy of the online diagnostic platforms. When the two additional diagnoses were counted as possible answers, physicians still identified the right ailment 84 percent of the time, while the applications scored only 51 percent.


However, even though physicians performed better than the programmed checkers, they still erred in roughly 15 percent of cases. In a press release from EurekAlert, Ateev Mehrotra, the study's senior researcher, said the findings point to opportunities for improving online applications.

"While the computer programs were clearly inferior to physicians in terms of diagnostic accuracy, it will be critical to study future generations of computer programs that may be more accurate," Mehrotra suggested. "Clinical diagnosis is currently as much art as it is science, but there is great promise for technology to help augment clinical diagnoses...That is the true value proposition of these tools."