
How ethical is AI?

by Pete Phillips, Head of Digital Theology and Director of the Centre for Digital Theology

Last November, I attended a session at an academic conference on the Ethics of AI.

We talked about the problem of human bias in the data from which machine learning systems learn. This inherent human bias tends to work against minority ethnic groups, or even majority non-white groups, in decision making: the AI learns the bias and applies the same criteria to its own decisions, so racial bias rolls on. Indeed, we see this not just in machine learning algorithms but also in facial recognition, where dark-skinned faces are often recognised less reliably than white ones.
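To see how a model can pick up bias from its training data, here is a minimal, hypothetical sketch. The "historical decisions" below are invented for illustration: equally qualified applicants from group B were approved less often by past human decision-makers, and a naive model trained on those decisions reproduces exactly that disparity.

```python
# Invented toy data: (group, qualified, approved_by_human).
# Group B applicants are just as qualified but were approved less often.
history = [
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", True, True),
    ("B", True, True), ("B", True, False), ("B", True, False), ("B", True, False),
]

def train(data):
    """'Learn' the historical approval rate per group -- a stand-in for a
    real model that picks up group membership as a predictive feature."""
    rates = {}
    for group in {g for g, _, _ in data}:
        decisions = [approved for g, _, approved in data if g == group]
        rates[group] = sum(decisions) / len(decisions)
    return rates

def predict(rates, group):
    """Approve whenever the learned approval rate for the group exceeds 50%."""
    return rates[group] > 0.5

model = train(history)
# model["A"] is 1.0 and model["B"] is 0.25: the bias has been learned.
print(predict(model, "A"))   # a qualified applicant from group A is approved
print(predict(model, "B"))   # an equally qualified applicant from group B is not
```

Nothing here is specific to any real system; the point is simply that a model optimised to match past decisions will faithfully reproduce the prejudice baked into them.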

The group also queried whether we could afford all the computing time spent on training AI. Were there more important areas of study? Yet AI excels at medical diagnosis, legal processing and climate prediction. All of these areas, alongside the many that Mark Arnold points to in this week's additional video, show the importance of AI for the good of humanity.

But is AI all that ethical?

AI and Climate Change

Paul Johnston has written a number of pieces on the problems around AI and Climate Change, pointing to further work by Maddie Stone, and I've highlighted the work of Kate Crawford in other posts. The key points that Paul raises are:

  • Data centres emit approximately 2% of global greenhouse gases; approximately, because many operators don't disclose their usage.

  • Data centres' (approximate) usage is on a par with that of the whole aviation industry!

  • Most techies assume someone will fix the climate problem sometime.

  • Techies who understand the above are keen to act

  • Climate Data is one point of action (but makes more use of Data Centres)

In fact, Paul uses Project Drawdown to highlight key alternative areas which would have a better effect on emissions:

  • refrigerant management

  • onshore wind turbines

  • reduced food waste

  • plant-rich diet

  • tropical forests

  • educating girls / family planning...

Working on such projects is much more likely to impact the Climate emergency. Indeed, simply being a human being and thinking more about how we develop our species in general would be better than focussing on burning yet more fuel.

Paul's point is that recycling and changing our way of life will be too little. We need to change the way human society works in order to make a significant difference - and to tackle the big polluters: the energy companies and the governments who protect them.

AI and Personhood

Beth Singler and Chris Cotter uploaded a great conversation on the Religious Studies Project podcast exploring some of the wider issues around Ethics and AI, but also mentioning our own discrimination against AI - we tend to treat beings which are sentient but non-human or non-white-male worse! So, if AI becomes sentient, would we be enslaving that sentience to work for us and to perform our thinking for us? Beth talks about Blade Runner, in which the replicants are used as slaves and rebel against this. But, of course, it's worse than that: female replicants are treated as sex slaves in the first Blade Runner and, in the sequel Blade Runner 2049, the same pattern is maintained, with the film including scenes of the mutilation of a naked female replicant. Beth asks why we would create sentient AI which will suffer - don't we have enough suffering already?

So Ethics and AI goes both ways...we need more ethical AI but also more ethical humans.

Beth made a film querying why we would want to create machines which could experience pain:

The 45 minute discussion between Chris Cotter and Beth Singler can be found below.

Joshua K. Smith has been looking at this subject over in the States, with his two latest books on Robotic Persons and the forthcoming Robot Theology. Joshua is going to join us for our webinar next week to help us think through some of the issues around these subjects.

Adam Graber and Chris Ridgeway have been discussing the issue of AI and Ethics over at the FaithTechHub in the US. In their white paper on this subject, they explore exactly what we mean by the term AI, but also point to some of the difficulties of getting AI to understand our moral codes. Bishop Stephen Croft wrote about the ten commandments of pro-human AI, but how does an AI machine understand a moral imperative, a command, in any case? You have to teach them morality! There are quite a few humans with Google Pixel brains who haven't mastered morality yet! Here's FaithTech founder James Kelly exploring the subject...

FaithTech argue that this is where the Church could step in:

And herein lies the great opportunity for the Global Christian Church. Sure the problem before us is deeply complex needing universal definitions and genuine unity. But that is an opportunity the Global Church should not only welcome, but play a pivotal role in shaping.

What would it mean for us, the Church, to explore more about how we might be pro-human tech? Of course, lots of organisations are already working in this direction, including:

How might you engage with this subject? Where could your church help others to think through the issues around AI and the future of computing? Could you run a tech-for-humans workshop, or educate your own church about where we are going?

Why not sign up for Premier Digital's videos and blogs to get more information on AI, Ethics and the Church?

Why not join us for our webinar on Wednesday 26th January 3pm-4.30pm with:

  • Dr Joshua K Smith - Pastor, Associate Fellow at the Kirby Laing Centre, Cambridge and author of Robot Personhood and Robot Theology

  • Anna Puzio - PhD student at Münster University working on AI and Ethics, and speaker at last year's Global Network for Digital Theology

  • Paul Johnston - Technologist, Entrepreneur, Serverless Tech Innovator

  • Nicole Kunkel - Humboldt University, Berlin, and co-organiser of the German Network for Theology and AI.

