Five Radiology Artificial Intelligence Companies That Somebody Should Build and Invest In

By HUGH HARVEY

I’ve previously written comprehensively on where to invest in Radiology AI, and how to beat the hype-curve precipice the field is entering. For those who haven’t read my previous blog, my one-line summary is essentially this:

“Choose companies with a narrow focus on clinically valid use cases with large data sets, who are engaged with regulations and haven’t over-hyped themselves …”

The problem is… hardly any investment opportunities in Radiology AI like this actually exist, especially in the UK. I thought it was about time I wrote down my ideas for what I’d actually build (if I had the funding), or which companies I would advise VCs to invest in (if they existed).

Surprisingly, none of the companies actually interpret medical images – I’ll explain why at the end!

1. Radiological Ontology Modelling

OK, this one might sound a bit simple and obvious, but it’s actually the most crucial of all Radiology AI efforts.

First, I need to explain something about radiology – it’s not just the clinical specialism of interpreting medical images, it’s a skilled process of converting those expert interpretations into text. Radiologists essentially act as Fourier transforms – converting digital images into analogue words and sentences written in their own ‘radiology’ language. As a radiologist, I’ve learnt to speak ‘radiology’ – I can say things like ‘cluster of biliary hyperechoic calcifications with posterior acoustic shadowing’ with a straight face, and any other radiologist will know exactly what I have just described and on what modality. (Translation – I was talking about gallstones as seen on ultrasound.)

This language, or ontology, is unique to the field, and is fairly homogeneous across international borders too. In best practice, every single medical scan should have a report written in this ontological format. That means there is a fairly standardised radiological description of nearly every medical scan ever taken, sitting in databases across the world. (If that doesn’t get data scientists excited, then I don’t know what will.) We are talking about billions of data points here – even better, billions of data points in digitised healthcare records!

Image from Modelling Radiological Language with Bidirectional Long Short-Term Memory Networks, Cornegruta et al., 2016.

So, the first Radiology AI company I would build or invest in is one that can apply state-of-the-art Natural Language Processing (NLP) vectorisation and concept modelling to radiological reports. This goes beyond simple keyword matching: recurrent architectures such as LSTMs, Word2vec embeddings and other language conceptualisation models can be combined to build in genuine understanding of the reports. Used alongside RadLex (an official radiology ‘dictionary’) and other medical ontology databases, a company could build a powerful tool to effectively annotate and conceptually model every radiological report it has access to.
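
To make this concrete, here’s a minimal sketch of the embedding step, assuming gensim (4.x) and a toy three-report corpus – a real pipeline would train on millions of reports and normalise terms against RadLex:

```python
# A minimal sketch of report concept modelling, assuming gensim >= 4.0.
# The three toy 'reports' below are illustrative, not real training data.
from gensim.models import Word2Vec

# Each report is pre-tokenised; a real pipeline would also map tokens to RadLex terms.
reports = [
    "cluster of biliary hyperechoic calcifications with posterior acoustic shadowing".split(),
    "multiple hyperechoic foci within the gallbladder with acoustic shadowing".split(),
    "no focal hepatic lesion identified normal biliary tree".split(),
]

# Train word embeddings so terms used in similar contexts sit close together in vector space.
model = Word2Vec(reports, vector_size=50, window=3, min_count=1, epochs=50)

# Nearest neighbours in the embedding space are the raw material for concept annotation.
print(model.wv.most_similar("hyperechoic", topn=3))
```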

Whoever builds this and gets to market first with a plug-and-play API has the potential to become the foundation for most other Radiology AI research, services and medical imaging companies.

2. Radiology to Lay Translation

The main problem with ‘radiology’ as a language is that not many other people can understand it (including many doctors). Radiologists often put a summary at the end of a report to highlight the main points; however, this summary is a simplification of the body of the report, and often doesn’t cover its details and nuances. Meaning can get lost, summaries can be taken as gospel truth, and clinical errors can (and do) happen because of this lack of detail.

Secondly, there has recently been a push towards a more patient-facing, value-driven radiological service, one that allows for direct radiologist-patient interactions to explain imaging findings. Not surprisingly, uptake has been slow, largely because radiologists are too overwhelmed to take time away from reading scans.

My second company would solve these two problems by building on the first piece of ontology work to produce radiological-to-lay translations of reports. The value add is clear – non-radiological clinicians and patients alike would benefit from a more accessible report, without any loss of quality or change in the radiologist’s workflow. In essence it would be a radiological Babelfish – translating seamlessly between radiologists and non-radiologists.
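
As a deliberately naive illustration, a first pass could be little more than glossary substitution – the entries below are my own illustrative examples, and a production system would need a trained sequence-to-sequence model rather than string replacement:

```python
# A naive sketch of radiology-to-lay translation via glossary substitution.
# The glossary entries are illustrative examples, not a validated mapping.
LAY_GLOSSARY = {
    "posterior acoustic shadowing": "a dark shadow behind them, typical of stones",
    "hyperechoic calcifications": "bright, hardened deposits",
    "biliary": "in the gallbladder or bile ducts",
}

def to_lay(report: str) -> str:
    """Replace radiological terms with lay phrases, longest term first."""
    out = report.lower()
    for term in sorted(LAY_GLOSSARY, key=len, reverse=True):
        out = out.replace(term, LAY_GLOSSARY[term])
    return out

print(to_lay("Cluster of biliary hyperechoic calcifications with posterior acoustic shadowing."))
```

The clunky word order in the output is exactly why a learned translation model, rather than substitution, is the real product here.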

Improving cross-speciality communication in this manner could reduce requests for second reads and opinions, give a more robust and thorough understanding of a patient’s clinical state, and provide better insight and assurance to patients.

An added bonus of this technology is the ability to push reports through translation services, instantly globalising them and opening up the possibility of UK radiology services outsourcing to foreign teleradiology companies that employ non-English-speaking radiologists.

3. Predictive Semantics

Way back in time, radiologists used to hand-write their own reports in the patient’s clinical notes. Then came the dictaphone and the radiology secretary/typist, followed by the more recent introduction of voice recognition software (which is itself a form of AI). Progress is progress, after all. My point is that modern radiologists are very used to speaking clearly and understandably into a microphone all day long, and seeing their words appear on screen.

Radiologists also juggle multiple screens and many attention-draining activities: the actual images they are reading, the PACS system screen, the report itself, as well as books and website references. It is these references that my third Radiology AI company would focus on.

By building on the language and concept modelling above, I would aim to design (or invest in) a system that combines a concept aggregator with inferencing capabilities. Such a system could theoretically predict what the summary findings of a report are going to be, based on what the radiologist is saying. An example could be a radiologist describing a lesion, and the system suggesting a list of possible pathologies for that lesion (e.g. non-ossifying fibroma, aneurysmal bone cyst, fibrous cortical defect – these can all look very similar!). All this could be performed in real time.

This has two functionalities: 1) providing decision support to radiologists in the form of possible differential pathologies, and 2) speeding up dictation processes by obviating the need to dictate the summary. Both add value by improving workflow and reducing diagnostic errors.
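
A toy sketch of the inference step might look like the following – the feature-to-pathology map is a hypothetical stand-in for a properly learned model, with the dictated features coming from the concept extraction described earlier:

```python
# A toy sketch of real-time differential suggestion (Python 3.9+).
# FEATURE_MAP is a hypothetical stand-in for a learned inference model.
FEATURE_MAP = {
    "non-ossifying fibroma": {"lucent", "cortical", "well-defined", "metaphysis"},
    "aneurysmal bone cyst": {"lucent", "expansile", "fluid-fluid levels", "metaphysis"},
    "fibrous cortical defect": {"lucent", "cortical", "well-defined", "small"},
}

def suggest_differentials(dictated: set[str], top_n: int = 3) -> list[tuple[str, float]]:
    """Score each pathology by its overlap with the features dictated so far."""
    scores = {
        pathology: len(dictated & features) / len(features)
        for pathology, features in FEATURE_MAP.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# As dictation proceeds, extracted features accumulate and suggestions update live.
print(suggest_differentials({"lucent", "cortical", "well-defined"}))
```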

4. Content-Based Image Retrieval

Every hospital has a PACS archive – a huge data store of all the images taken over the past 10 years (unbelievably, it is common practice for older archived images to get deleted due to capacity issues!). This archive is essentially a dark pit into which billions of valuable clinical data points are thrown, never to be seen again. What a waste!

Currently it is impossible to search a PACS archive for specific clinical content. Yes, you can search for a patient name or ID, or filter by imaging modality and date, but this means you need to have an idea of when, how or on whom the scan was performed before you search – and you don’t know what pathology is in a scan until you open it. There is no functionality to search by clinical concept or pathology, or, even better, by the image itself.

There are many reasons why a hospital would want to search through its imaging archive for specific clinical pathologies: audit, research, error handling, teaching cases, cross-referencing… to name but a few. At present the only way to keep a log of certain types of clinical case (e.g. all patients with a rare bone tumour) is for the reporting radiologist to manually add the case to a file at the time of reporting. After that, it’s lost somewhere in the archive (unless someone remembers the patient’s details).

My fourth Radiology AI company would provide both text-based and image-based search of PACS archives. Text-based search would be a simple case of running our first company’s concept-modelling API over the entire archive, and connecting the results to a smart search function. Instantly, clinicians could search for ‘teenagers with metastatic osteosarcoma’ and have dozens of cases returned for them to view. Honestly, having trawled through archives of millions of studies during my academic tenure, this simple functionality would have saved months of research time!
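
The text-based half really is that simple once concept annotation exists. Here’s a minimal sketch using an inverted index, with hypothetical study IDs and concept tags standing in for the output of the first company’s API:

```python
# A minimal sketch of concept search over an annotated archive (Python 3.9+).
# The study IDs and tags are hypothetical outputs of the concept-modelling API.
from collections import defaultdict

annotated_archive = {
    "study_001": {"osteosarcoma", "metastatic", "teenager"},
    "study_002": {"gallstones", "ultrasound"},
    "study_003": {"osteosarcoma", "teenager"},
}

# Build an inverted index so queries run in time proportional to the result size.
index = defaultdict(set)
for study_id, concepts in annotated_archive.items():
    for concept in concepts:
        index[concept].add(study_id)

def search(*concepts: str) -> set[str]:
    """Return the studies tagged with all of the queried concepts."""
    sets = [index[c] for c in concepts]
    return set.intersection(*sets) if sets else set()

print(search("teenager", "metastatic", "osteosarcoma"))  # -> {'study_001'}
```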

I would like to go even further, though. By using simple non-interpretive image perception technology such as Manifold Ranking, I would aim to build a system that allows radiologists to search a PACS archive based on image content. (Google already has a similar service that allows you to upload an image and find similar ones – this is no different.)

http://bit.ly/2jfHJPS allows you to upload an image to find similar content

Imagine a radiologist looking at a complex renal tumour case with an odd vascular pattern, unsure what it is. The radiologist could use a tool within the reading PACS software to crop the area, click search, and a few seconds later get back dozens of similar studies that have previously been reported. This instant comparison based on content would transform workflow, reduce reliance on external reference sources, act as an excellent teaching aid, reduce errors, and provide a robust methodology for reviewing clinical decision-making against previous cases.
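
As a simplified stand-in for Manifold Ranking, here’s a sketch of the underlying retrieval step – plain cosine nearest-neighbour search over image feature vectors, with random vectors as placeholders for what a perceptual model would actually extract:

```python
# A simplified stand-in for content-based retrieval: cosine nearest-neighbour
# search over feature vectors, rather than full Manifold Ranking. The random
# vectors are placeholders for features from a perceptual model.
import numpy as np

rng = np.random.default_rng(0)
archive_features = rng.normal(size=(1000, 128))   # 1,000 archived studies
archive_features /= np.linalg.norm(archive_features, axis=1, keepdims=True)

def most_similar(query: np.ndarray, top_n: int = 5) -> np.ndarray:
    """Return indices of the archived studies closest to the cropped query region."""
    q = query / np.linalg.norm(query)
    scores = archive_features @ q          # cosine similarity via dot product
    return np.argsort(scores)[::-1][:top_n]

query_crop_features = rng.normal(size=128)  # features from the cropped region
print(most_similar(query_crop_features))
```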

5. Digital Red Dots

Long ago, before radiology was a digital service, radiologists worked with hard-copy plain films. These sheets of film, coated in silver halide chemicals, were processed in dark rooms and hung up wet to dry, before being carefully placed in brown paper document slips for transport to the reading room. Radiographers (technologists) were in charge of this process (not the radiologist). Over time, as radiographers grew skilled at reading the images, a system developed whereby the radiographer would place a small circular red sticker in the corner of any film they thought contained pathology requiring clinical review. This ‘red dot’ system worked well, especially under the supervision of experienced radiographers. Red dot films would be placed at the top of the reporting pile, ensuring that urgent clinical findings were seen first by the duty radiologist, and in turn that the sickest patients had their images reported to the relevant clinical team in a timely fashion.

Unfortunately, once digital PACS was introduced, this simple alerting system disappeared. Yes, radiographers can mark images digitally, but more often than not the image still appears in its normal place in the reporting queue. The radiologist has no idea which films contain pathology and which don’t before actually opening them up in their reading PACS.

Digitally annotated ‘red dot’ of a left orbital floor fracture. Note the annotation isn’t in fact red, nor a dot, but it still does the job!

My fifth, and probably most clinically valuable, Radiology AI company would develop a digital red dot system to bring back early triage. By training convolutional neural networks on imaging data sets annotated simply as ‘normal’ or ‘abnormal’, a simple triage system could be built, with high sensitivity for abnormality but low pathological specificity. The urgent films would appear at the top of the reading list, above the less important ‘normal’ studies.
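
For illustration only, a ‘red dot’ classifier might start life as something like this PyTorch sketch – the architecture and sizes are toy choices, and a clinical system would need far more capacity, calibration and regulatory validation:

```python
# A minimal sketch of a binary 'red dot' triage CNN in PyTorch.
# Architecture and sizes are illustrative, not a clinical-grade design.
import torch
import torch.nn as nn

class RedDotNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, x):
        # Output is a single logit: probability of 'abnormal' after a sigmoid.
        return self.head(self.features(x))

model = RedDotNet()
scan = torch.randn(1, 1, 256, 256)            # one greyscale scan (dummy input)
p_abnormal = torch.sigmoid(model(scan)).item()
print(f"abnormality score: {p_abnormal:.2f}")
```

Sorting the worklist by this score, with a deliberately low threshold for flagging, is what gives the high-sensitivity, low-specificity profile described above.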

The benefits speak for themselves – scans with emergent findings would be reported and acted upon as a priority over ‘normal’ studies. Patient safety would drastically improve overnight. Hospitals with reporting backlogs (i.e. every single hospital in the UK) could more effectively and confidently process the important scans first, potentially cutting cancer and other waiting lists dramatically. The cost savings are potentially immense. The most exciting part is that such a system would lay the foundations for developing pathological classifiers, allowing us to finally glimpse the start of an AI that interprets pathology in medical images.

So, there you have it. Five companies that use AI in medical imaging which absolutely need to be built, need investment, and need nurturing. AI in radiology does not have to be solely about interpreting images – that’s the remit of highly specialised humans, and arguably a far harder technological challenge. Instead, we must build tools that augment and aid radiologists, ease pain points, and improve safety and workflow. In fact, I’d argue that actual interpretation of images simply cannot happen without these five technologies existing first.

None of this is possible, however, without access to imaging data and a framework for research, development, regulatory approval, marketisation and launch. For that, we need a Radiology AI incubator – and I just so happen to have a plan for that too.

If you are as excited as I am about the future of AI in medical imaging and want to discuss these ideas, please do get in touch. I’m on Twitter @drhughharvey.

About the author:

Dr Harvey is a board-certified radiologist and clinical academic, trained in the NHS and at Europe’s leading cancer research institute, the ICR, where he was twice awarded Science Writer of the Year. He has worked at Babylon Health, heading up the regulatory affairs team and gaining a world-first CE marking for an AI-supported triage service. He is now a consultant radiologist, a member of the Royal College of Radiologists informatics committee, and an advisor to AI start-ups, including Kheiron Medical.