Medical Image Analysis: How Computer Vision Helps Diagnosticians

03.07.2018
7 min.

According to IBM researchers, medical images are the major data source in healthcare, accounting for at least 90% of all medical data. This volume is overwhelming for manual review and diagnostics: radiologists and pathologists have to sift through thousands of images across multiple modalities every day. The huge flow of varied images, combined with outdated manual review processes, increases the chances of medical error and misdiagnosis.

Certainly, healthcare software development companies see this problem and provide the industry with automated methods for analyzing medical images based on computer vision. Computer vision comprises multiple techniques of acquiring, processing, and analyzing image and video sources to output certain decisions about the objects in them. In healthcare, computer vision can complement routine diagnostics and optimize workflows of radiologists and pathologists.
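To make the acquire-process-analyze idea concrete, here is a minimal Python sketch of such a pipeline: it loads a scan, normalizes it into a tensor, and feeds it to a convolutional network that outputs a decision. The file name, the two output classes, and the tiny untrained network are illustrative placeholders only; a real diagnostic model would be trained and validated on large sets of labeled clinical images.

```python
# A minimal sketch of the acquire -> process -> analyze pipeline described above.
# The input file and the tiny untrained network are placeholders; a real
# diagnostic model would be trained and validated on clinical data.
import torch
from torch import nn
from torchvision import transforms
from PIL import Image

# 1. Acquire: load a scan exported as an ordinary image file (hypothetical path).
image = Image.open("chest_xray.png").convert("L")        # grayscale X-ray

# 2. Process: resize and normalize into a tensor the network can consume.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),                               # scales pixels to [0, 1]
])
batch = preprocess(image).unsqueeze(0)                   # shape: (1, 1, 224, 224)

# 3. Analyze: a toy two-class CNN standing in for a trained diagnostic model.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),                                     # "normal" vs. "abnormal"
)
model.eval()
with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)[0]
print(f"normal: {probs[0]:.2f}, abnormal: {probs[1]:.2f}")
```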

In this article, we will review the vast landscape of computer vision implementation across the healthcare industry, covering both adopted and emerging medical image analysis methods and approaches.

Computer vision’s broad spectrum of applications

With the exponential growth in hardware performance, computer vision is gradually becoming a common decision-making support tool in healthcare. The typical medical image datasets processed with computer vision methods are acquired from:

  • MRI
  • CT
  • Ultrasound
  • Nuclear medicine (SPECT and PET)
  • X-ray
  • Optical and confocal microscopy, etc.

Given this breadth of coverage, medical image analysis with computer vision allows doctors to detect malignant changes in the body, track the development of tumors, evaluate hardening of the arteries, and measure organs and blood flow with more precision than human specialists can achieve alone.

Non-invasive cancer diagnosis

Google, IBM, university clinical researchers, and more than 100 startups invest time and effort in leveraging computer vision technologies to diagnose cancer from digital imaging alone, without invasive biopsies. These virtual biopsies promise to outperform invasive procedures in accuracy, cost-effectiveness, patient comfort, and time to result.

Because medical image analysis algorithms are highly sensitive to early signs of cancer, machines can help health specialists detect the disease at its earliest stages, increasing survival rates and giving patients the best chance of a smooth recovery.

Of course, algorithms aren't perfect; they can produce false positives too. Nevertheless, human radiologists and pathologists re-check the machine's diagnosis and confirm or reject it. The diagnostic process remains under human control, simply enhanced with the sensitivity of computer vision to ensure the best outcome.

Real-world examples

1.  Breast cancer

In March 2017, Google's team reported that it had created an algorithm for finding malignant tumors in breast tissue and adjacent lymph nodes. Based on machine learning, predictive analytics, and pattern recognition, the trained algorithm detected breast cancer with 89% accuracy, compared to 73% for a human pathologist working without time limits (the specialist spent 30 hours on 130 slides).
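The report does not spell out the mechanics, but patch-wise analysis is a common way to apply a CNN to gigapixel pathology slides: the slide is tiled, each tile is scored, and the scores form a tumor-probability heatmap for the pathologist. The sketch below illustrates that general recipe with a synthetic slide region and an untrained placeholder network, not Google's actual model.

```python
# Hedged sketch of patch-wise slide analysis. Digitized pathology slides are
# gigapixel images, so systems of this kind typically tile the slide, score
# each tile with a CNN, and assemble a tumor-probability heatmap. The tiny
# untrained network and the random "slide" below are placeholders.
import numpy as np
import torch
from torch import nn

PATCH = 256                                   # tile edge length in pixels

# Stand-in for one RGB region of a whole-slide image, values in [0, 1].
slide_region = np.random.rand(2048, 2048, 3).astype(np.float32)

patch_model = nn.Sequential(                  # placeholder "tumor vs. normal" scorer
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)
patch_model.eval()

rows, cols = slide_region.shape[0] // PATCH, slide_region.shape[1] // PATCH
heatmap = np.zeros((rows, cols), dtype=np.float32)
with torch.no_grad():
    for r in range(rows):
        for c in range(cols):
            tile = slide_region[r * PATCH:(r + 1) * PATCH, c * PATCH:(c + 1) * PATCH]
            tensor = torch.from_numpy(tile).permute(2, 0, 1).unsqueeze(0)
            heatmap[r, c] = torch.sigmoid(patch_model(tensor)).item()

# High-scoring tiles point the pathologist to regions worth a closer look.
print("most suspicious tile score:", heatmap.max())
```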

2.  Barrett’s esophagus

Researchers from the Eindhoven University of Technology in the Netherlands created a medical image analysis system specifically for identifying early neoplastic lesions in patients with Barrett's esophagus. These lesions can develop into esophageal cancer and are incredibly hard to spot without proper training.

The algorithm processes endoscopy images of Barrett's esophagus and finds even the slightest differences in texture and color, comparing its findings with previously analyzed images of Barrett's patients with and without the lesions. Notably, the system achieved a nearly perfect score and proved itself a feasible tool for automated decision support in esophageal cancer prevention.
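As a rough illustration of texture-and-color-driven detection (the Eindhoven group's exact features and classifier are not detailed here), the sketch below reduces each endoscopy frame to a color histogram plus local-binary-pattern texture statistics and trains a classical classifier on expert labels. The frames and labels are random placeholders.

```python
# Hedged sketch of lesion detection driven by texture and color. Each frame is
# reduced to a color histogram plus LBP texture statistics, and a classical
# classifier separates "lesion" from "no lesion" frames. The frames below are
# random arrays standing in for real, expert-labeled endoscopy images.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def frame_features(rgb):
    """Color + texture descriptor for one endoscopy frame (H, W, 3, uint8)."""
    color_hist, _ = np.histogramdd(rgb.reshape(-1, 3).astype(float),
                                   bins=(8, 8, 8), range=((0, 256),) * 3)
    gray = (rgb2gray(rgb) * 255).astype(np.uint8)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    texture_hist, _ = np.histogram(lbp, bins=10, range=(0, 10))
    feats = np.concatenate([color_hist.ravel(), texture_hist]).astype(float)
    return feats / feats.sum()

rng = np.random.default_rng(42)
frames = [rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8) for _ in range(6)]
labels = [0, 0, 0, 1, 1, 1]       # 0 = no lesion, 1 = early neoplasia (placeholders)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit([frame_features(f) for f in frames], labels)
new_frame = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
print("lesion probability:", clf.predict_proba([frame_features(new_frame)])[0][1])
```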

3.  Skin lesions

Stanford’s team trained a convolutional neural network (CNN) to identify skin lesions and diagnose skin cancer on biopsy-validated clinical images. Their system performed as accurately as 21 board-certified dermatologists in identifying keratinocyte carcinomas, melanomas, seborrheic keratoses, and nevi.
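The usual recipe behind such results is transfer learning: start from a network pretrained on everyday images and fine-tune it on clinical photographs. The sketch below shows that recipe with an ImageNet-pretrained Inception v3 and a hypothetical skin_images/ folder; the class list, paths, and hyperparameters are illustrative assumptions, not Stanford's published configuration.

```python
# Hedged sketch of transfer learning for skin lesion classification. The folder
# layout, class count, and hyperparameters are placeholders.
import torch
from torch import nn, optim
from torchvision import datasets, models, transforms

NUM_CLASSES = 4   # e.g., carcinoma, melanoma, seborrheic keratosis, nevus

model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new classifier head
model.aux_logits = False                                 # ignore the auxiliary head

transform = transforms.Compose([
    transforms.Resize((299, 299)),        # Inception v3 expects 299x299 inputs
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Hypothetical folder layout: skin_images/<class_name>/<image>.jpg
dataset = datasets.ImageFolder("skin_images", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

optimizer = optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:             # one pass; real training runs many epochs
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```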

Blood flow quantification and visualization

MRI allows for noninvasive quantification and visualization of blood flow in vessels, enabling a better understanding of how cardiovascular pathologies affect cardiac hemodynamics. Computer vision extends the potential of MRI, making diagnostics faster and more precise and helping predict and prevent critical events.

Real-world examples

1.  CT

Zebra Medical Vision applies image analysis to a number of body systems and, among other things, lets providers calculate coronary calcium scores automatically from standard chest CTs. Since coronary artery calcium is a biomarker for coronary artery disease, quantifying it helps predict heart attacks and strokes in high-risk patients.
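For context, the standard way to quantify coronary calcium on CT is an Agatston-style score: voxels above 130 Hounsfield units are grouped into lesions, and each lesion's area is weighted by its peak density. The sketch below implements a simplified version of that calculation on a synthetic volume; a production system like Zebra's also has to locate the coronary arteries first, which is not shown.

```python
# Hedged sketch of a simplified Agatston-style coronary calcium score. The
# input is assumed to be a Hounsfield-unit CT volume already cropped to the
# heart region, with known pixel spacing (both placeholders here).
import numpy as np
from scipy import ndimage

def agatston_weight(peak_hu):
    """Density weighting factor defined by the Agatston method."""
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    return 1

def calcium_score(ct_slices_hu, pixel_area_mm2, min_area_mm2=1.0):
    score = 0.0
    for slice_hu in ct_slices_hu:                      # iterate over axial slices
        calcified = slice_hu >= 130                    # calcium threshold in HU
        labels, n = ndimage.label(calcified)           # group voxels into lesions
        for lesion in range(1, n + 1):
            mask = labels == lesion
            area = mask.sum() * pixel_area_mm2
            if area < min_area_mm2:                    # ignore speckle noise
                continue
            score += area * agatston_weight(slice_hu[mask].max())
    return score

# Synthetic stand-in for a cropped cardiac CT: soft tissue (~40 HU) with one
# small calcified plaque (~450 HU) inserted on the middle slice.
volume = np.full((3, 64, 64), 40.0)
volume[1, 30:33, 30:33] = 450.0
print("Agatston-style score:", calcium_score(volume, pixel_area_mm2=0.25))
```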

2.  Ultrasound

Another company, Bay Labs, creates algorithms to interpret echocardiograms, with a particular focus on the technology's application in developing countries, where highly trained doctors are scarce. The system analyzes ultrasound video and identifies signs of cardiovascular disorders, such as rheumatic heart disease, allowing even minimally trained health workers to perform echocardiograms and arrive at a valid diagnosis.

3.  MRI

Arterys specializes in deciphering cardiac MRI images, undertaking tasks such as analyzing myocardial perfusion, assessing late gadolinium enhancement, detecting ventricular contours, evaluating cardiac function, and more. The company pioneered blood flow visualization and quantification: its algorithm analyzes an image in about 15 seconds, compared to roughly 30 minutes for a human specialist.
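The arithmetic behind flow quantification from velocity-encoded (phase-contrast) MRI is straightforward: flow through a vessel cross-section is the velocity integrated over the lumen area, and integrating flow over a cardiac cycle gives stroke volume. The sketch below runs that calculation on synthetic velocity maps and a synthetic vessel mask; it is not Arterys' pipeline, whose details are proprietary.

```python
# Hedged sketch of flow quantification from phase-contrast MRI. The velocity
# maps and the lumen mask are synthetic placeholders.
import numpy as np

def flow_ml_per_s(velocity_cm_s, lumen_mask, pixel_area_cm2):
    """Instantaneous flow (mL/s) through one cross-section for one cardiac phase."""
    return float(np.sum(velocity_cm_s[lumen_mask]) * pixel_area_cm2)

# Synthetic example: 20 cardiac phases, 64x64 velocity maps, a circular "aorta".
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:64, :64]
lumen = (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2            # ~3 cm^2 lumen
phases = [rng.normal(loc=60 * np.sin(np.pi * t / 20), scale=2, size=(64, 64))
          for t in range(20)]                                # peak ~60 cm/s in systole

pixel_area_cm2 = 0.1 ** 2                                    # 1 mm x 1 mm pixels
flows = [flow_ml_per_s(v, lumen, pixel_area_cm2) for v in phases]

heart_rate = 70                                              # beats per minute
dt = 60 / heart_rate / 20                                    # seconds per cardiac phase
stroke_volume_ml = sum(flows) * dt                           # integrate over one beat
print(f"cardiac output ≈ {stroke_volume_ml * heart_rate / 1000:.1f} L/min")
```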

Enhanced precision medicine

Precision medicine offers patients tailored therapies according to their detailed profiles, created with all available health, environmental, and socioeconomic data. Accordingly, precision medicine is a demanding area and requires substantial technical support to process and analyze enormous datasets.

Computer vision adds to the precision medicine tech stack alongside machine learning and AI, allowing doctors to extract quantifiable data points from each image in any modality.

Real-world examples

1.  Genome sequencing and MRI

Health Nucleus is a clinical research center that offers whole genome sequencing combined with MRI scanning to create a fuller picture of an individual's health and disease risks. The company promotes its proprietary approach to prevention and early detection of neurodegenerative, cardiovascular, and metabolic disorders: looking at a patient on the macro and micro levels at once, generating about 150 GB of data per individual.

2.  Drug discovery

Atomwise created AtomNet to accelerate drug discovery with computer vision and deep learning. The proprietary technology examines millions of 3D images of molecules to achieve unprecedented speed, accuracy, and diversity in drug discovery. The company has already screened 8.2 million molecules, identifying a protein-protein interaction inhibitor with potential to treat multiple sclerosis, as well as a possible drug candidate for the Ebola virus.
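Atomwise's exact architecture and featurization are proprietary, but published descriptions point to convolutional networks applied to 3D grid representations of protein-ligand complexes. The sketch below shows what such a 3D CNN can look like, with an illustrative channel layout and grid size, and a random voxel grid in place of a real featurized binding pocket.

```python
# Hedged sketch of a 3D CNN over voxelized molecular complexes. Channel layout,
# grid size, and architecture are illustrative assumptions, not AtomNet itself.
import torch
from torch import nn

class Molecule3DCNN(nn.Module):
    def __init__(self, in_channels=8):         # e.g., one channel per atom-type feature
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 1),                  # predicted binding score
        )

    def forward(self, voxels):                  # voxels: (batch, channels, 32, 32, 32)
        return self.head(self.features(voxels))

# A random voxel grid stands in for a real featurized binding pocket.
model = Molecule3DCNN()
fake_complex = torch.randn(1, 8, 32, 32, 32)
print("predicted binding score:", model(fake_complex).item())
```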

3.  Imaging biomarkers

Quibim is available both as an on-premise installation and as a cloud platform. It provides hospitals and diagnostic imaging centers with an array of imaging biomarkers that help track how an individual's genotype interacts with the environment. In particular, the platform offers insight into the human phenotype and quantifies a patient's response to treatment, genetic expression, and environmental factors. These findings can be used for tracking chemotherapy results or during clinical trials.

The platform offers valuable insights across multiple patient health domains, including the nervous and musculoskeletal systems, the liver, the lungs, and a vast oncology cluster.

4.  Decision support in routine care

Mindshare Medical developed an evidence-based clinical decision support system that helps providers reduce misdiagnoses and false positives. The solution offers personalized diagnostics and therapeutic guidance on treatment plans and follow-up procedures, creating a full spectrum of services for organizations oriented toward value-based care.

5.  Decision support in emergency care

MaxQ-AI builds a set of diagnostic tools with 3D imaging, patient-specific data, deep vision, and cognitive analytics at their core, partnering with GE Healthcare, IBM, and Samsung. The partnerships focus on using real-time data in the emergency room to assess patients suspected of acute head trauma or stroke and to detect intracranial hemorrhage, so that patients can receive timely treatment and avoid long-term complications.

AIDoc offers a deep learning technology that detects abnormalities in medical images across the spine, abdomen, head, and chest. In particular, the system can detect bone hypodensity, free fluid in the abdomen, intracranial hyperdensities, free air in the chest, and more. AIDoc embeds into a doctor's workflow via PACS integration and widgets, prioritizing cases with suspicious findings.

Diabetic retinopathy

According to the International Diabetes Federation, 425 million adults (20-79 years) were living with diabetes in 2017. IDF also claims that about 1 in 3 diabetes patients will develop diabetic retinopathy, which is a vision-threatening complication. Diabetic retinopathy is the leading cause of vision loss in adults (20-65 years).

However, early detection and timely treatment can reduce the harm from retinopathy. Automated medical image analysis of retinal fundus datasets allows doctors to identify risks and evaluate the severity of a complication right away.

Real-world example

A team of Google researchers reported creating an algorithm that achieves more than 90% accuracy in detecting diabetic retinopathy when measured against manual grading by specialists. The algorithm is based on a deep neural network and identifies both diabetic retinopathy and diabetic macular edema, also grading the complication's severity.
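As a schematic of what such a grader outputs, the sketch below runs a fundus photograph through a convolutional network whose head predicts one of the five standard retinopathy severity levels plus a macular edema flag. The network weights are untrained placeholders and the image path is hypothetical; Google's model was trained on a large set of expert-graded fundus photographs.

```python
# Hedged sketch of fundus-image grading: the network outputs a probability over
# five diabetic retinopathy severity levels plus a macular edema flag. The
# weights are untrained placeholders and the image path is hypothetical.
import torch
from torch import nn
from torchvision import models, transforms
from PIL import Image

DR_GRADES = ["no DR", "mild", "moderate", "severe", "proliferative"]

backbone = models.inception_v3(weights=None, aux_logits=False, init_weights=True)
backbone.fc = nn.Linear(backbone.fc.in_features, len(DR_GRADES) + 1)  # +1: edema flag
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((299, 299)),
    transforms.ToTensor(),
])
fundus = preprocess(Image.open("retinal_fundus.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    out = backbone(fundus)[0]
grade_probs = torch.softmax(out[:len(DR_GRADES)], dim=0)
edema_prob = torch.sigmoid(out[-1])
print("severity:", DR_GRADES[int(grade_probs.argmax())],
      "| macular edema probability:", round(float(edema_prob), 2))
```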

Computer vision extends healthcare

These medical image analysis applications are by no means exhaustive: vendors and researchers are making a strong, multidirectional effort to integrate computer vision into care delivery at every possible point, from initial diagnosis to treatment and follow-up.

Of course, there is debate about the disruption of radiology and how it will affect health specialists: whether it will shrink the job market or simply make radiologists and pathologists near-perfect in diagnosing patients. We are on the optimistic side here, believing that the benefits of adopting computer vision widely in healthcare are worth the investment.
