Medical image analysis: how computer vision helps diagnosticians

24.12.2019
10 min.

Medical imaging is an expanding market. According to Zion Market Research, it was worth around $34 billion in 2018 and is expected to reach about $48.6 billion by 2025. Medical images are the largest data source in healthcare, accounting for at least 90% of all medical data according to GE Healthcare. This volume is overwhelming for manual review and diagnostics: radiologists and pathologists have to sift through thousands of images across multiple modalities every day. When this flood of images meets outdated manual review processes, the chances of medical error and misdiagnosis grow.

Certainly, those engaged in healthcare software development recognize this problem and provide the industry with automated, computer vision-based methods for analyzing medical images. Computer vision spans multiple techniques for acquiring, processing, and analyzing image and video sources to produce decisions about the objects they contain. In healthcare, computer vision can complement routine diagnostics and optimize the workflows of radiologists and pathologists.

Computer vision applications in healthcare

With the exponential growth in hardware performance, computer vision is gradually becoming a common decision-making support tool in healthcare.

A computer “sees” images differently from humans. While humans use contextual information stored in their brains to identify particular elements, a computer views an image as nothing but numbers, ultimately zeros and ones. The machine’s ability to reason about those numbers depends on the data it was trained on. When training succeeds, a computer can pick up details far too subtle for the human eye, as the sketch below illustrates.
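To make this concrete, here is a minimal NumPy sketch, with a synthetic array standing in for a real scan, of what an image looks like to a machine and how a trivial rule can flag pixels for review:

    import numpy as np

    # To a machine, a grayscale scan is a 2-D array of intensities; an
    # 8-bit image maps each pixel to a value from 0 (black) to 255 (white).
    image = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)

    # "Seeing" is arithmetic over those numbers. A trivial rule: flag
    # unusually bright pixels for closer review.
    suspicious = image > 200
    print(f"{suspicious.sum()} of {image.size} pixels exceed the threshold")

Real diagnostic models learn far richer rules than a fixed threshold, but the raw material is the same: arrays of pixel intensities.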

The typical medical image datasets processed with computer vision are acquired during the following procedures:

MRI

Magnetic Resonance Imaging (MRI) works best for detecting problems in soft tissues such as joints and the circulatory system.

Arterys specializes in deciphering cardiac MRI images, handling tasks such as analyzing myocardial perfusion, assessing late gadolinium enhancement, detecting ventricular contours, and evaluating cardiac function. The company is a pioneer in blood flow visualization and quantification. Its algorithm analyzes an image in about 15 seconds, compared to 30 minutes for a human specialist.

Arterys Cardio

CT

CT scans are mostly used to detect tumors, internal brain bleeding, and other life-threatening conditions. Zebra Medical Vision applies image analysis to a number of human body systems and also enables providers to calculate coronary calcium scores automatically from basic chest CTs. Since coronary artery calcium is a biomarker for coronary artery disease, quantifying it helps predict heart attacks and strokes in high-risk patients. The standard quantification is the Agatston score, sketched below.
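Zebra's exact implementation is proprietary, but the widely used Agatston score conveys the idea. Below is a simplified single-slice sketch (NumPy and SciPy assumed; a real scorer works across the whole scan and restricts itself to the coronary arteries):

    import numpy as np
    from scipy import ndimage

    def agatston_slice_score(slice_hu, pixel_area_mm2):
        """Simplified Agatston score for one CT slice (values in Hounsfield units)."""
        mask = slice_hu >= 130                 # calcium is conventionally >= 130 HU
        labeled, n_lesions = ndimage.label(mask)
        score = 0.0
        for i in range(1, n_lesions + 1):
            lesion = labeled == i
            area_mm2 = lesion.sum() * pixel_area_mm2
            if area_mm2 < 1.0:                 # ignore specks smaller than 1 mm^2
                continue
            peak_hu = slice_hu[lesion].max()
            # Density weight: 130-199 HU -> 1, 200-299 -> 2, 300-399 -> 3, >= 400 -> 4
            weight = min(int(peak_hu // 100), 4)
            score += area_mm2 * weight
        return score

    # Synthetic slice: 0.25 mm^2 pixels with random attenuation values
    slice_hu = np.random.randint(-1000, 600, size=(256, 256))
    print(agatston_slice_score(slice_hu, pixel_area_mm2=0.25))

Per-slice scores are summed over the scan; by convention, a total above 400 indicates extensive calcification and high cardiovascular risk.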

Ultrasound

This technique is used to scan organs, checking that they function correctly and looking for anomalies. It is the least invasive imaging technique, which is why it is also used on pregnant women to follow up on fetal development.

Bay Labs creates algorithms that interpret echocardiograms, with a particular focus on applying the technology in developing countries where highly trained doctors are scarce. The system analyzes ultrasound video and identifies signs of cardiovascular disorders, such as rheumatic heart disease, allowing even junior diagnosticians to perform echocardiograms and arrive at a valid diagnosis.

X-ray

X-ray imaging is used to identify abnormalities and organ damage. Computer vision can classify the scans much like a human radiologist, and because it can flag all potential problems in a single pass, it reduces the need for repeat scans and thus limits the patient's radiation exposure.

Microsoft developed a tool called InnerEye that identifies possible anomalies, such as tumors, in radiological images. After a radiologist uploads a 3D scan into InnerEye, the software recognizes potential tumors and colors the suspicious areas for the radiologist to inspect closely.

Nuclear medicine (SPECT and PET)

Nuclear imaging is used to visualize tissue and organ structure as well as function. During the procedure, a very small amount of a radioactive substance (a radionuclide) is administered to the patient and absorbed by body tissue. By observing how the radionuclide behaves in the body, the radiologist can assess conditions such as tumors and infections. With nuclear imaging, tumors can be detected at early stages.

With its broad coverage, computer vision-powered medical image analysis allows doctors to detect malignant changes in patients' bodies, track the development of tumors, evaluate hardening of the arteries, and measure organs and blood flow more precisely than human specialists can.

Non-invasive cancer diagnostics

Google, IBM, clinical researchers at universities, and more than 100 startups are investing time and effort into leveraging computer vision to diagnose cancer from digital imaging alone, without invasive biopsies. These virtual biopsies promise to outperform invasive procedures in accuracy, cost-effectiveness, patient comfort, and time-to-result.

Because medical image analysis algorithms are highly sensitive to signs of cancer, machines can help health specialists detect the disease at the earliest stages, increasing survival rates and giving patients the best chance of a smooth recovery.

Of course, algorithms aren't perfect, and false positives are not uncommon. Nevertheless, human radiologists and pathologists are there to re-check the machine's diagnosis and confirm or reject it. The diagnostic process remains under human control; it is merely augmented with the sensitivity of computer vision to ensure the best outcome.

Breast cancer

MIT's Computer Science and Artificial Intelligence Laboratory developed a breast cancer risk prediction model based on machine vision and deep learning. The tool can predict the development of cancer up to five years in advance. It was trained on 90,000 mammograms from 60,000 patients, supplied by Massachusetts General Hospital.

MIT claims the tool works equally well for white and black patients, unlike similar projects whose training data is biased toward white women. According to MIT, black women are 42% more likely to die from breast cancer than white women, partly because existing detection techniques serve them poorly.

Lung cancer

Mindshare Medical developed RevealAI-Lung, computer-assisted diagnostic software used together with CT scans for faster and easier lung cancer detection. The product was approved for sale in Europe in July 2019.

In addition to helping with diagnostics, RevealAI-Lung can offer recommendations for individual follow-ups, and it integrates well with Picture Archiving and Communication Systems (PACS). Mindshare Medical has demonstrated that the product reduces false positives and the time needed to settle on a diagnosis, which in turn spares patients extra radiation exposure and unnecessary biopsies.

Enhancing precision medicine

Precision medicine tailors treatment to a patient's detailed profile, drawing on all available health, environmental, and socioeconomic data. Accordingly, it is a demanding field that requires substantial technical support to process and analyze enormous datasets.

Computer vision is part of the precision medicine tech stack, alongside big data analytics and AI, allowing doctors to extract quantifiable data points from each image in any modality.

Genome sequencing

Health Nucleus is a clinical research center that offers whole genome sequencing combined with MRI scanning to build a fuller picture of an individual's health and disease risks. The company promotes its proprietary approach to the prevention and early detection of neurodegenerative, cardiovascular, and metabolic disorders: examining a patient on the macro and micro levels at the same time, generating about 150 GB of data per individual.

Imaging biomarkers

Quibim is a platform, available both on-premises and in the cloud, that provides hospitals and diagnostic imaging centers with an array of imaging biomarkers for tracking how an individual's genotype interacts with the environment. In particular, the company offers insights into the human phenotype and quantifies a patient's response to treatment, genetic expression, and environmental factors. These findings can be used during clinical trials or for tracking chemotherapy results.

The platform covers multiple patient health domains, including the nervous and musculoskeletal systems, the liver, the lungs, and a broad oncology cluster.

Quibim imaging biomarkers

Decision support in emergency care

MaxQ-AI creates a set of diagnostic tools built on 3D imaging, patient-specific data, deep vision, and cognitive analytics, partnering with GE Healthcare, IBM, and Samsung. The partners focus on using real-time data in the emergency room to assess patients suspected of acute head trauma or stroke and to detect intracranial hemorrhage. Detected in time, these conditions can be treated promptly, sparing patients the long-term consequences of a chronic condition.

Another system of this kind, AIDoc, is built on deep learning technology that detects high-level abnormalities in medical images across the spine, abdomen, head, and chest. In particular, the system can detect bone hypodensity, free fluid in the abdomen, intracranial hyperdensities, free air in the chest, and more. AIDoc embeds into a doctor's workflow via PACS and widgets, prioritizing cases with suspicious findings.

Diabetic retinopathy

Diabetes is the leading cause of preventable blindness in the US. Yet diabetic retinopathy typically cannot be diagnosed by a primary care provider before vision begins to deteriorate.

Most patients with diabetes check their eyesight regularly. However, diabetic retinopathy can advance significantly before it affects vision, so a patient with perfect eyesight can be on the verge of losing their sight due to blood vessel damage. Diabetic retinopathy can be detected with special equipment, but why bother booking another appointment when you just tested your eyesight and the result was satisfactory?

The disease has no early symptoms, and by the time a patient starts to lose sight, it is most likely in its advanced stages. However, early detection and timely treatment can reduce the damage. Automated analysis of retinal fundus images allows doctors to identify risks and evaluate the severity of a complication right away.

Detecting diabetic retinopathy before vision loss

Intelligent Retinal Imaging Systems (IRIS) used Microsoft Azure to create a platform that helps identify diabetic retinopathy before patients start losing sight. This is how it works: trained medical staff (not necessarily an ophthalmologist) take an image of the retina using the IRIS system. The image is transferred via Azure Service Bus for enhancement, such as color morphing. Afterwards, it passes to the Azure Machine Learning Package for Computer Vision, which identifies and categorizes the pathology.
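IRIS has not published its internals, so the sketch below only mirrors the capture-queue-enhance-classify shape described above. RetinalImage, enhance, and classify are hypothetical stand-ins, and the Service Bus hand-off and the hosted model are inlined as plain functions:

    from dataclasses import dataclass

    @dataclass
    class RetinalImage:
        patient_id: str
        pixels: bytes          # raw capture from the retinal camera

    def enhance(image: RetinalImage) -> RetinalImage:
        # Stand-in for the enhancement stage (color morphing, contrast
        # normalization) that runs before classification.
        return image

    def classify(image: RetinalImage) -> str:
        # Stand-in for the trained model; a real system would return a
        # retinopathy grade rather than a fixed string.
        return "no retinopathy detected"

    def pipeline(image: RetinalImage) -> str:
        # Capture -> queue -> enhance -> classify. In IRIS the queueing
        # runs through Azure Service Bus and the model is Azure-hosted;
        # both are inlined here as plain function calls for brevity.
        return classify(enhance(image))

    print(pipeline(RetinalImage(patient_id="patient-001", pixels=b"...")))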

We went from zero to 300,000 patients examined in under five years—there is no way we could have done that without Azure.

Jonathan Stevenson

Dermatology

Dermatologists rely on visual inspection when examining patients and making a diagnosis. This opens the door for machine vision and AI-based applications to assist dermatologists in the early detection of skin conditions.

Detecting skin abnormalities using a mobile app

ECD-Network has developed the SkinIO app that uses computer vision and deep learning to detect skin abnormalities using a mobile device.

Patients begin by downloading the app and creating an account. They take a photo of the skin region they want checked for cancer and upload it to SkinIO. The app then either instructs the patient to submit more photos or schedules an appointment with a dermatologist right away. If there is nothing to worry about, SkinIO schedules reminders to upload follow-up photos after a predefined period to check for any skin changes or growth; the triage logic is sketched below.
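SkinIO's actual decision rules are not public; this sketch merely illustrates the three-way triage the app performs, with made-up quality and risk thresholds:

    from enum import Enum

    class Triage(Enum):
        NEED_MORE_PHOTOS = "request additional photos"
        SEE_DERMATOLOGIST = "schedule a dermatologist appointment"
        FOLLOW_UP_LATER = "schedule a follow-up reminder"

    def triage(image_quality: float, risk_score: float) -> Triage:
        # Hypothetical thresholds; SkinIO's real rules are not published.
        if image_quality < 0.5:    # blurry or poorly lit capture
            return Triage.NEED_MORE_PHOTOS
        if risk_score >= 0.7:      # lesion looks suspicious to the model
            return Triage.SEE_DERMATOLOGIST
        return Triage.FOLLOW_UP_LATER

    print(triage(image_quality=0.9, risk_score=0.2).value)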

SkinIO can detect various skin conditions, including cancerous cells and benign tumors such as lipoma.

SkinIO

Fracture identification

Computer vision coupled with AI can spot fractures, dislocations, and soft tissue injuries. These are often hard for the human eye to catch on standard imaging, yet they cause patients long-term suffering if they remain undetected.

Vertebral fracture detection using neural networks

Computer vision with deep neural networks can detect osteoporotic vertebral fractures. Osteoporosis is a disease that makes bones fragile, less dense, and prone to fracturing. The problem with osteoporosis is that it develops over a long period and is often diagnosed only after the first fracture is discovered. It is a prevalent disease in the US, affecting over 3 million people each year, and women over 50 are the most likely to suffer a spinal fracture as a consequence.

The current standard for detecting spinal fractures is CT or X-ray imaging, with the scans checked manually by a health professional.

Researchers at Dartmouth College in Hanover developed a neural network-based model that uses computer vision to detect osteoporotic vertebral fractures. The system was trained on over a thousand CT scans of the chest, abdomen, and pelvis. In testing, the model achieved a promising 89.2% accuracy, surpassing the professional radiologists' 88.4%.
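The Dartmouth architecture is not reproduced here; the PyTorch sketch below only shows what a minimal binary CNN classifier over 2D CT patches might look like, with a dummy batch standing in for real scans:

    import torch
    import torch.nn as nn

    class FractureNet(nn.Module):
        """Toy binary classifier over 64x64 CT patches (fracture vs. none).
        A sketch only; the Dartmouth model's architecture is not public here."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, 2)  # 64x64 input -> 16x16 maps

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    # One training step on a dummy batch standing in for labeled CT patches
    model = FractureNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    patches = torch.randn(8, 1, 64, 64)    # batch, channel, height, width
    labels = torch.randint(0, 2, (8,))     # 0 = intact, 1 = fracture
    loss = nn.CrossEntropyLoss()(model(patches), labels)
    loss.backward()
    optimizer.step()
    print(loss.item())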

Computer vision augments healthcare

The diagnostics field is being disrupted by technologies such as computer vision. There is ongoing debate over whether this will cost specialists their jobs or simply improve precision and help radiologists finish their work faster.

Even today, however, incorporating computer vision into medical image analysis has undeniable benefits:

  1. It improves the quality of diagnosis: while diagnosticians rely on their experience and human judgment cannot be avoided in some cases, algorithms provide consistent accuracy and can pick up details that escape the human eye.
  2. It saves time and, consequently, lives: computer vision can detect life-threatening conditions at earlier stages.
  3. It reduces costs: a misdiagnosis can waste thousands of dollars on the wrong treatment, for both the patient and the medical system. Computer vision tends to work precisely and suggest a highly likely diagnosis from the start, to be verified by a human expert.