Artificial intelligence system detects often-missed cancer tumors

Authored by m.digitaljournal.com and submitted by gone_his_own_way

Medical scientists and engineers have come together to develop an artificial intelligence system designed to detect often-missed cancer tumors, thereby helping to boost patient survival rates.

Florida - Researchers at the University of Central Florida developed the system by teaching a computer platform the optimal way to detect small specks of lung cancer in computerized tomography (CT) scans. Nodules of this size and appearance are the kind that radiologists sometimes have difficulty identifying. In trials, the healthcare artificial intelligence system was found to be 95 percent accurate overall, ahead of the scores achieved by human medics, which typically fall around 65 percent accuracy.

The method used to train the artificial intelligence platform was not dissimilar to the way facial-recognition algorithms are taught the key characteristics of an image. To train the platform, the researchers fed more than 1,000 CT scans (taken from the U.S. National Institutes of Health database) into the software. Over time the platform was taught to ignore other tissue, nerves, and other masses found in the CT scan images and to focus only on lung tissue and abnormal formations that could be tumors. The platform began to show success, learning to differentiate between cancerous and benign tumors.

Given that successful diagnosis and treatment of lung cancer depend heavily on early detection of lung nodules, a system that assists with this can help boost patient survival rates. Discussing how the platform was developed, one of the researchers, Rodney LaLonde, explained: "We used the brain as a model to create our system... You know how connections between neurons in the brain strengthen during development and learn? We used that blueprint, if you will, to help our system understand how to look for patterns in the CT scans and teach itself how to find these tiny tumors."

The new medical imaging research will be presented at MICCAI 2018 (the 21st International Conference on Medical Image Computing and Computer Assisted Intervention), which takes place in Granada, Spain, in September 2018. The associated conference paper is titled "S4ND: Single-Shot Single-Scale Lung Nodule Detection."
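As a rough intuition for how a detector scans a CT volume for small nodules, here is a toy Python sketch. It is entirely illustrative and makes loud simplifications: the "score" is just the mean intensity of each cubic patch of a synthetic volume, standing in for the learned 3D network the researchers actually used, and the volume itself is randomly generated with one bright blob playing the role of a nodule.

```python
import numpy as np

def score_patches(volume, patch=8, stride=4):
    """Slide a cubic window over a 3D volume and score each position.

    In a real detector the score would come from a trained 3D CNN;
    here it is simply the mean intensity of the patch (hypothetical
    stand-in for illustration only).
    """
    scores = {}
    d, h, w = volume.shape
    for z in range(0, d - patch + 1, stride):
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                cube = volume[z:z + patch, y:y + patch, x:x + patch]
                scores[(z, y, x)] = cube.mean()
    return scores

# Synthetic "CT scan": noisy background plus one bright simulated nodule
rng = np.random.default_rng(0)
vol = rng.normal(0.0, 0.1, size=(32, 32, 32))
vol[12:18, 12:18, 12:18] += 2.0  # simulated nodule

scores = score_patches(vol)
best = max(scores, key=scores.get)
print(best)  # origin of the highest-scoring patch
```

The sliding-window loop conveys the idea of exhaustively checking every location in the volume; single-shot architectures like the one in the paper instead produce all location scores in one forward pass, which is what makes them fast enough for clinical use.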

footprintx on August 27th, 2018 at 14:24 UTC »

It's my job to diagnose people every day.

It's an intricate one, where we combine most of our senses ... what the patient complains about, how they feel under our hands, what they look like, and even sometimes the smell. The tools we use expand those senses: CT scans and x-rays to see inside, ultrasound to hear inside.

At the end of the day, there are times we depend on something we call "gestalt" ... the feeling that something is more wrong than the sum of its parts might suggest. Something doesn't feel right, so we order more tests to try to pin down what it is that's wrong.

But while some physicians feel that's something that can never be replaced, it's essentially a flaw in the algorithm. Patient states something, and it should trigger the right questions to ask, and the answers to those questions should answer the problem. It's soft, and patients don't always describe things the same way the textbooks do.

I've caught pulmonary embolisms, clots that stop blood flow to the lungs, with complaints as varied as "need an antibiotic" to "follow-up ultrasound, rule out gallstones." And the trouble with these is that it causes people to apply the wrong algorithm from the outset. Some things are so subtle, some diagnoses so rare, some stories so different that we go down the wrong path, and that's when, somewhere along the line, a question doesn't get asked and things go undetected.

There will be a day when machines will do this better than we do. As with everything.

And that will be a good day.

SirT6 on August 27th, 2018 at 12:49 UTC »

Very interesting paper, gone_his_own_way - you should crosspost it to r/sciences (we allow pre-prints and conference presentations there, unlike some other science-focused subreddits).

The full paper is here - what's interesting to me is that it looks like almost all AI systems best humans (Table 1). There's probably a publication bias there (AIs that don't beat humans don't get published). Still interesting, though, that so many outperform humans.

I don’t do much radiology. I wonder what the current workflow is for radiologists when it comes to integrating AI like this.

avatarname on August 27th, 2018 at 12:45 UTC »

But wait - somebody wrote that Watson was useless at spotting cancer, therefore all so-called AI is worthless in the medical field and we are heading for an AI winter. //sarcasm