Though it may be surprising to some, the role of AI in pathology can be an emotive subject. We pathologists have a justifiably high degree of self-regard; it takes many years of taxing training and study to earn the FRCPath, and who wants to think they are replaceable by some algorithm?
I write this as a histopathologist, though I am sure that much of this article can be applied to other branches under our College’s oversight. But perhaps histopathology gives us the best arena in which to compare the human against the AI machine. The central diagnostic task – the conversion of images into accurate descriptive text (often a very few words of diagnosis, grade and stage) – is one where AI methods are perhaps the most highly developed and are moving most quickly.
In its usual form, the deep learning architecture used by AI to interpret digital pathology microscopy images quite deliberately mimics the cerebral cortex, with virtual neurons being taught to form strong or weak connections with their neighbours to replicate the judgements of a human expert. This is ‘supervised learning’; many of us will have been roped into providing annotations for groups or companies seeking to train expert systems.
So far, so straightforward. In essence, we are training an algorithm in much the same way as one trains a specialty trainee, by enabling a bunch of neurons to connect features of an image with semantic labels, so that in the future it (or he/she/they) can accurately describe an image it hasn’t seen before.
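To make that concrete, here is a deliberately minimal sketch of supervised learning: a single ‘virtual neuron’ (logistic regression, trained by gradient descent) learns to reproduce an expert’s benign/malignant labels from two invented image features. The features, labels and numbers are fabricated for illustration; this is the principle, not any real diagnostic system.

```python
# Toy supervised learning: one "virtual neuron" learns from expert labels.
# All data here are invented for illustration.
import math
import random

random.seed(0)

# Invented training data: each "image" is two numeric features
# (say, nuclear size and gland irregularity), labelled by an expert
# as benign (0) or malignant (1).
data = [((0.2, 0.1), 0), ((0.3, 0.2), 0), ((0.8, 0.9), 1), ((0.9, 0.7), 1)]

w = [0.0, 0.0]  # connection weights, strengthened or weakened by training
b = 0.0
lr = 0.5

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid: the neuron's "confidence"

# Gradient descent: nudge the weights towards the expert's answers.
for _ in range(2000):
    for x, y in data:
        err = predict(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# The trained neuron now classifies an unseen case.
print(round(predict((0.85, 0.8))))  # -> 1 (near the malignant examples)
```

The specialty trainee analogy holds surprisingly well: repeated exposure to labelled cases, with corrections, until the learner’s answers match the teacher’s.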
This supervised training process underlies the burgeoning industry of AI pathology. Countless expert-trained algorithms are slowly hacking their way through the regulatory jungle. As they painstakingly demonstrate their cost-effectiveness, they are starting to appear on the desktops of diagnostic departments lucky enough to be digitally equipped. They are, at present, ruthlessly focused on single tasks, like detecting or grading prostate adenocarcinoma in routine diagnostic slides, and as such are proving to be useful tools with the potential to leverage human skills and improve patient outcomes.
This is all well and good. The big companies will make and sell dozens of these diagnostic bots, each trained by dozens of pathologists who busily gave away or sold their learned expertise. But I think this phase, the era of supervised AI pathology, will be short-lived, and may already be coming to an end.
First, let’s consider why we might want this era to end. It is expensive, and tedious, to perform all those annotations. Worse, to my mind, it is inherently limited. By choosing a human task to reproduce, we are limiting even the cleverest algorithm to the confines of that task. For example, while Gleason grading of prostate adenocarcinoma is an admirable prognostic and decision-making metric, there is no doubt that there is more prognostic information in the image than a massively reductive grading system can hope to capture. There is also the risk that our experts might be teaching the AI some bad habits along with the good ones.
What, then, is the alternative? The answer, once again, comes from the computer science community, as we enter the realm of unsupervised or self-supervised AI (ssAI) learning.
ssAI is conceptually challenging; how can an AI, or anything, learn without being taught? The best analogy here is perhaps a newborn child. The infant has no knowledge of gravity, or object permanence, or even of basic visual entities like edges and textures, and no expert will ever teach them. Instead, the baby is equipped (by natural selection – or God, if you prefer) with a learning architecture that is capable of acquiring these essentials by itself. If these features are taken out of the environment altogether, they are never learned, as various unkind animal experiments have proven.
The emerging generation of hugely knowledgeable pathology AI ‘foundation models’, as they are known (the field is alive with zippy-sounding neologisms), has learned the recurrent features or building blocks of pathology images without any labels at all, using just such specialised ‘self-supervised’ neural architectures. Just as early microscopists were confronted suddenly by a world of image features and entities for which they had no names, and to which they had to bring order, these algorithms are learning the recurrent building-blocks of histopathological images without external reference.
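As a loose illustration of label-free learning, the sketch below uses simple k-means clustering to discover two recurring ‘motifs’ in invented patch descriptors without ever being told they exist. Real foundation models use far richer self-supervised objectives (masked prediction, contrastive learning), but the principle of structure discovered without a teacher is the same.

```python
# Toy label-free discovery: k-means finds recurring "building blocks"
# in patch descriptors with no annotations at all. Data are invented.
import random

random.seed(1)

# Invented patch descriptors: two recurring tissue motifs, never named.
patches = [(random.gauss(0.2, 0.05), random.gauss(0.2, 0.05)) for _ in range(20)]
patches += [(random.gauss(0.8, 0.05), random.gauss(0.8, 0.05)) for _ in range(20)]

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# Start with two arbitrary centres, then iterate assign-and-update.
centres = [patches[0], patches[-1]]
for _ in range(10):
    groups = [[], []]
    for p in patches:
        groups[0 if dist2(p, centres[0]) < dist2(p, centres[1]) else 1].append(p)
    centres = [
        (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
        for g in groups
    ]

# The algorithm has separated the two motifs without any labels.
print(sorted(len(g) for g in groups))  # -> [20, 20]
```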
The power comes when this deep understanding of the image is linked to ground truth, which need not be expert-defined at all. Definitive ground truths, like survival, or mutational status, or anything else which can be measured about the patient, can then be associated with image features, often yielding surprisingly powerful insights that would be beyond the human expert. For example, outcome prediction from biopsies is vastly more accurate than human-performed (or human-trained) tumour grading, and AIs can accurately infer, from H&E images alone, the DNA mutations that underlie individual cases of various cancers.
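A toy version of that linkage: correlate a machine-discovered feature score with a directly measured outcome (here, invented survival times), with no expert-defined grade anywhere in the loop. All numbers are fabricated purely for illustration.

```python
# Toy linkage of a label-free image feature to measured ground truth.
# Each "case" has a machine-discovered feature score and a directly
# measured outcome (months of survival). All numbers are invented.
import math

# (feature_score, survival_months) per case.
cases = [(0.1, 60), (0.2, 55), (0.3, 48), (0.5, 30), (0.7, 22), (0.9, 10)]

def pearson(pairs):
    """Pearson correlation between the two columns of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sx = math.sqrt(sum((x - mx) ** 2 for x, _ in pairs))
    sy = math.sqrt(sum((y - my) ** 2 for _, y in pairs))
    return sxy / (sx * sy)

# A strong negative correlation: the discovered feature tracks prognosis
# without any expert-defined grading system in the loop.
r = pearson(cases)
print(round(r, 2))
```

Real systems, of course, use survival models over thousands of cases rather than a single correlation coefficient, but the shape of the argument is this.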
These ‘super-human’ tasks are surely where the true power of AI pathology will lie. It is the direct connection of the unsupervised learning architecture to definitive ground truth that will yield the most startling power in the next round of algorithms. The human analogy here is at an altogether different scale; now, we are seeing the machine performing tasks equivalent to several generations of academic histopathological research, linking important biological and clinical entities to tissue morphology in a ‘oner’, without the need even to frame the question.
What does this mean for patients and for the profession? As ever, in our society (and I say this without prejudice), applications will be driven by the marketplace. As ‘big pharma’ realises the need for these superhuman abilities to protect their drug development pipelines, and grasps the potential of AI to stop the incredibly costly failures of clinical trials due to inaccurate patient selection, then we will see the technology propelled into use. It’s already begun, in fact.
This ought to yield great results for our patients. We are in the era of whole new categories of cancer therapeutics, antibody-drug conjugates, bispecifics and the like, which will all require specific predictive biomarker tests. What will these look like? Many of us believe that AI, operating on digital H&E images, or immunohistochemistry, or some multiplex spatial modality, perhaps integrating radiology and the whole biochemical and molecular history of our patients, will be the best way to make these life-and-death decisions.
The same ssAI approaches that are set to revolutionise diagnostics and biomarkers bring us to the brink of a golden age of scientific discovery. These methods are incredibly timely, arriving at the same time as a number of other technological advances (in areas including optics, microfluidics, mass spectrometry, nucleic acid biochemistry, fluorophore chemistry) that have synergised to give us ‘spatial biology’. Spatial biology (zippy neologism #2) is arguably just histopathology with more colours, a set of technical platforms that reveal the location of gene expression products (mRNA, protein) or metabolites, or other things, in an image that might have many thousands of layers, where each one shows the distribution of a single molecular entity.
In the basic science and translational research laboratories that harbour these platforms, we labour to optimise these complex assays, generating incredibly information-rich images of diseased tissues. But then, we often use them to ask incredibly focused, ‘narrow’ questions. Sometimes it feels like exploring a museum at night using a laser-pointer, as the wonders all around the object of focus are ignored, or barely perceived. More sophisticated ‘classical’ methods such as neighbourhood analysis are much better, as they can discover some of the underlying structure of the tissue, but self-supervised AI will do so with vastly greater power. We are already starting to see how these methods can support powerful novel biological inferences from spatial biology data. This combination of ssAI with ever more information-rich imaging modalities will crack open huge new areas of metazoan biology, and empower basic scientists and pharma discovery alike.
We hear a lot about the risks. I am relatively sanguine about the issues of trust, and of machine errors. I have faith in the ability of regulatory structures to overcome these. I am much more concerned about the effects upon human professionalism. If, as a trainee pathologist, I had access to a tool that would supply an accurate differential diagnosis to every case that I looked at, I doubt that I would have acquired the same degree of diagnostic skill.
It is the same doubt that makes us fear for the future of creative writing or graphic design or many other professions in the face of AI. I think perhaps there will be a lucky last generation of skilled morphologists who will be able to spot when their AI tools have made mistakes. Perhaps one generation will be all we need, though? By the time they (we) retire, maybe the machines won’t be making mistakes.
We shouldn’t forget, it’s not the first technological revolution in our profession. The mortuary had to make room for the microscope, and the microscope had to make room on the bench for molecular testing. The role of the pathologist only evolved, and became more interesting, and more intellectually satisfying. Perhaps it grew more stressful too, but I think that might just be the price of therapies that work.
For now, I think the best advice for people entering the profession is simply to master these tools. I’ll end with a cliché, but it’s a good one. You won’t be replaced by AI, but you might be replaced by a pathologist who makes better use of AI than you do.