AI and the doctor

Dr Marc Jacobs
Data Scientist & Machine Learning Engineer
Oct 2, 2024
5 min read

The application of AI in healthcare has a long history, in which we have seen some amazing advances (e.g., digital imaging), but also our fair share of disillusionments (e.g., personalized medicine). Nevertheless, it makes sense that the healthcare system serves as a breeding ground for technology, considering it is a system under heavy pressure (and already bursting at the seams) due to increasing demand, a growing shortage of professionals, and an ever-expanding bureaucracy.

In general, the use of AI to support healthcare focuses primarily on the doctor, and especially on diagnostics. For instance, neural networks are deployed at large scale to categorize objects at high speed, around the clock. In this way, AI is currently being used to support radiologists and anatomical pathologists in recognizing tumors. There are also algorithms to support the discharge of ICU patients, and there are thoughts about building a ‘proprietary’ medical ChatGPT to process information faster.

These are all applications in which AI can serve perfectly well as a personal assistant, but only if the user also knows exactly what a given answer means, how that answer should be interpreted, and how it was derived in the first place. Unfortunately, any room for a dedicated symbiosis between doctors and technology is often overshadowed by an almost intrinsic need to show who is better (AI or the doctor), rather than by a drive to discover ways to work faster and better. Together.

I believe that this tendency to compare the performance of AI with that of a flesh-and-blood doctor makes doctors unnecessarily vulnerable because it puts them on the spot. Doctors are rarely, if ever, involved in the development of AI, and any collaboration is limited to comparing the number of ‘successes’ of the doctor with that of an algorithm. Instead of pitting doctors against technology, these experts of the system should be placed at the vanguard of development, because they understand above all else what needs to be developed, for whom, and why. Apart from having a direct impact on the development of AI, doctors will also have to think carefully about how much responsibility they are willing to take for decisions that are (partly) based on the outcomes of an AI algorithm. As a result, technology developers will have to do the same.

Suppose a radiologist comes to the office and must assess 300 images for the day, each of which was taken on the suspicion of a tumor. Suppose further that the doctor first uses AI to rank the 300 suspected tumors from high probability to low probability, and only then looks at the pictures himself. He or she will start with the most difficult pictures, because these require the most attention according to the algorithm. The doctor will most likely view these images with the least amount of bias, because the AI algorithm did not provide a high probability of either a positive or a negative verdict. For the later ones, however, which were labeled either positive or negative by the AI with high probability, the doctor will already have somewhat of a fixed view.
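
To make that workflow concrete, here is a minimal sketch of such a triage step, assuming a hypothetical model that returns a tumor probability per image (the scores and image names are made up for the illustration); the most ambiguous cases, with scores nearest 0.5, come first, and the confident calls last.

    # A minimal sketch of the triage described above. The scores are
    # hypothetical model outputs, not from any real system.
    def triage(scores):
        # scores: dict mapping image id -> AI tumor probability in [0, 1].
        # Sort by distance from 0.5, ascending: most ambiguous cases first.
        return sorted(scores, key=lambda img: abs(scores[img] - 0.5))

    worklist = triage({"img_001": 0.97, "img_002": 0.52, "img_003": 0.08})
    print(worklist)  # ['img_002', 'img_003', 'img_001']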

The knowledge already provided by the algorithm simply cannot be ignored, and it becomes harder for the doctor to correct the algorithm once he places trust in it. This is because the AI algorithm shifts the doctor’s so-called ‘prior’. In short, the images labeled with a high probability will now require more evidence before the doctor will judge them to be low probability. Hence, the doctor’s final assessment is partly based on the order of the images as presented by the AI algorithm. This is just human, and it is already happening.
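
To see how strongly such a prior can anchor the final judgement, consider a small, hypothetical odds calculation; every number below is illustrative only, not taken from any study.

    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    # All numbers below are made up for the illustration.
    def posterior(prior, likelihood_ratio):
        prior_odds = prior / (1 - prior)
        post_odds = prior_odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    # The doctor weighs the same mildly reassuring evidence (LR = 0.5),
    # but starts from different priors depending on the AI label.
    print(round(posterior(0.50, 0.5), 2))  # neutral prior -> 0.33
    print(round(posterior(0.90, 0.5), 2))  # AI said 'high probability' -> 0.82

Starting from the AI-supplied prior of 0.90, the same evidence that would pull a neutral reader down to 0.33 still leaves the anchored doctor at 0.82; it takes far stronger counter-evidence to overturn the label.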

All of this has consequences. If a doctor knows that the least severe cases come at the end, that is exactly where the likelihood of false negatives becomes highest, because the prior knowledge provided by the algorithm influences the doctor’s assessment. It then becomes questionable whether the doctor has really become better at his or her work by deploying AI. Without the deployment of AI, it would be easier to view all the images from the same starting position, which is not to say that this would make the final assessment better. Fatigue imposes limits of its own.
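
As a back-of-the-envelope illustration of that false-negative risk (all numbers here are assumptions of mine, not data):

    # Toy arithmetic with made-up numbers: the tail of the worklist holds
    # 100 AI-labeled 'low probability' images, 5 of which actually contain
    # tumors. An unanchored reader catches 80% of those tumors; an anchored
    # reader, leaning on the reassuring label, catches only 40%.
    tumors_in_tail = 5
    print(round(tumors_in_tail * (1 - 0.80), 1))  # unanchored: 1.0 expected misses
    print(round(tumors_in_tail * (1 - 0.40), 1))  # anchored:   3.0 expected misses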

However, if we allow AI to influence our perceptions (and we are already doing so), it is essential that doctors understand how algorithms come to their conclusions. In most cases, the doctor probably has no idea how the AI algorithm arrives at its judgement, yet remains fully accountable. This can lead to some inevitable and very confrontational situations in which, with the benefit of hindsight, the AI algorithm should or should not have been trusted.

AI technology is here to stay, and healthcare providers, if anything, will see an increase in applications ‘powered by AI’. To ensure that these tools are used in a smart and responsible way, healthcare providers must know exactly what these tools can do, just as they know what their stethoscope can do. Only then will AI be able to live up to its promise without causing unnecessary damage and a loss of control.

About the author

Marc merges his expertise in data science and machine learning with a strong foundation in medical psychology, bringing a unique perspective to healthcare analytics and AI innovation.
