With great AI power, comes great responsibility

Dr Marc Jacobs
Data Scientist & Machine Learning Engineer
Oct 9, 2024
6 min read

In the early days of the young superhero Spiderman, shortly after a radioactive spider bite gave him his powers, he did not use them for the common good; instead, he decided to make quick money wrestling. One evening he heard an elderly lady calling for help but let the culprit escape. Once home, he found his aunt, who told him that his uncle's fight with a burglar had cost him his life. Furious with grief, Spiderman found the murderer that very night: it was the same man who had robbed the old woman earlier.

From then on, he is guided by his beloved uncle's wise words: 'With great power comes great responsibility.' A lesson hard learned.

Technology empowers. Technology can easily be used as an extension of yourself, and the impact it has as a result can be enormous. For many people, like Yuval Noah Harari or Elon Musk, the possibilities are limitless. Certainly, Musk is taking serious steps to literally merge humans and technology. However, the use of technology is never without problems, and our own technical creations often manage to impose themselves on us creators. We will have to take our own steps in defining what technology means to us and what choices we can make in its use.

Ultimately, it is up to us humans to decide how and where we want to use technology, and for what purpose. This is the responsibility I want to discuss in this opinion piece.

The fact that our technology is becoming increasingly influential means that our relationship with it is changing. Not only will we have to keep a close eye on which processes we do or do not want to leave to the influence of AI, and how much 'freedom' AI is allowed, but perhaps it is also time to discuss who is 'allowed' to use AI.

Of course, AI itself does not do anything right or wrong - it is humans who are responsible and who can use AI to do harm. Just for fun, let's ask ChatGPT what 'doing harm' means. I got the following answer:

'Doing harm' refers to performing acts that are harmful, immoral, unjust or negative. It can also refer to deliberately causing pain, suffering, distress or harm to others. Doing harm can range from small, mundane acts to serious crimes, depending on the context and severity of the actions. Doing harm often violates moral and ethical standards, and it can affect individuals, communities and society.
It includes behaviours that harm others, such as lying, stealing, cheating, committing violence or deliberately harming others. People have different views on what is considered 'evil', depending on cultural, religious and moral beliefs. However, it is generally accepted that doing harm has negative consequences for both the perpetrator and those affected by their actions.

ChatGPT's answer to my question quickly speaks of intention, and only then of people. While there is no universal definition of 'good' or 'evil', and both are strongly tied to time and culture, it does talk about 'negative consequences'. So, an algorithm can do harm, but not willingly - that honour is reserved only for us humans. Even if technology is programmed to take an action itself that has negative consequences, humans remain responsible for that consequence.

So, the relationship between technology and negative consequences lies in the intention to allow it. Since technology is not intentional, it is up to humans to determine why, how and where technology can be used. Anyone using a calculator to calculate where an atomic bomb will cause the most casualties cannot possibly defend themselves by arguing that it is the fault of the calculator that so many deaths occur.

For every action we perform, with or without the help of our technology, we are responsible.

Since technological development never stops, using it responsibly is also a continuous learning process. So, we will have to keep up with what we have created.

If I ask ChatGPT for the meaning of the words 'responsibility' and 'taking responsibility', I get the following comprehensive answer:

'Responsibility' refers to the duty or obligation to bear the consequences of your actions, decisions and behaviours. It involves being aware of the impact your actions can have on yourself, others and the environment, and being willing to accept the consequences of those actions, both positive and negative. Taking responsibility means actively recognising that you have an impact on the situations you find yourself in and being willing to take action to achieve the best outcome. It means taking responsibility not only for your successes, but also for your mistakes, and being willing to correct those mistakes and learn from them.
Taking responsibility is an important trait in personal and professional life. It means being aware of your role in different situations, feeling responsible for your choices and being willing to be accountable for your actions. It also implies the ability to support others, cooperate and understand the consequences of your behaviour. In short, responsibility and accountability are about realising that you have a role in shaping the world around you and taking action to fulfil that role in a positive and constructive way.  

The word 'human' is not mentioned. Now, I did not specifically ask for that either, but everything described above refers to humans. For instance, it talks about 'duty', 'being aware', 'recognising', 'being accountable' and 'ability'. These are all traits related to people.

While it is quite nice to ask ChatGPT these kinds of questions, there is little point or utility in questioning an algorithm about ethical issues. After all, to take a serious stance on these, you need norms and values that stem from a certain consciousness. Asking ChatGPT to explain what 'taking responsibility' means is therefore as empty as someone without children answering what it means to be a parent. There is simply no experience from which to draw knowledge or feeling, leaving only imagination. Since AI has no consciousness, we cannot expect even the latter.

The deployment of AI is always determined by humans. We can never blame AI for anything. Taking responsibility is reserved for humans, and it is up to us to take that responsibility seriously.

But how can health care professionals make themselves resilient toward governments and big tech companies? I think there are ultimately two routes by which citizens can determine what our society will look like and what place AI should have in it.

First, it is important that we start to realise how much data we generate ourselves, and how much data is generated in the environments in which we work and live.

Algorithms do their work invisibly, but this is only possible because the data being used is also collected invisibly. In that regard the smartphone is the empress of data-collection, but we seem to hand our data over quite easily. Not only because we devalue that which is invisible, but also because we really want to be able to try something new. And quite often, in the technological world, you are the product and that which seems to come for free is paid for with data. Being aware of that invisible contract would be step one.

Apart from our actual choice of who we hand our data over to, there is a second route through which we can take our own responsibility. This is the route of technological 'resilience', where health care professionals gather more knowledge about the ins and outs of algorithms. This route requires an investment that should come from government, but I don't see that happening any time soon.

Therefore, it is up to the health care professionals themselves to get acquainted with the ins and outs of the tools they are asked to use.  

Models are not the end of a conversation but rather the beginning. With that statement, I want to clarify that modelling is a human process, in which the outcomes mostly resemble an opinion in mathematical form. The responsibility in using technology lies with the user.

We cannot condemn a gun manufacturer for killing a person. Even when our technology gets to the point where guns can fire autonomously, we cannot condemn a gun. Responsibility (and ultimate conviction) will always remain with the person who gave the instruction.

It will become increasingly important for health care professionals, businesses and policymakers to engage with each other. I am not so naïve as to believe that companies and governments will open up at the first request, or even when told to do so.

Therefore, it is up to the professionals in whom patients place their trust to ask themselves: can I take responsibility for this power?

About the author

Marc merges his expertise in data science and machine learning with a strong foundation in medical psychology, bringing a unique perspective to healthcare analytics and AI innovation.
