The debate about robotics and artificial intelligence (AI) is well under way. Conferences around the world are addressing the ethical questions posed by the advent of the fourth industrial revolution.

The impact on society and the economy is still immeasurable, although it is estimated that the market for machines ‘acting as companions in the household’ will triple by 2023 to €15 billion, and that machines ‘designed to work alongside humans’ will grow into a €3 billion market within the next two years. Add to this the market for industrial robots, which is expected to reach €40 billion next year. The latter’s impact on the quality and quantity of jobs is still undetermined.

Technological development is accelerating in both speed and complexity, and this makes oversight even more difficult. AI, the most powerful of these technologies, challenges us to think all the more carefully about the ethical implications of these machines.

Technology, like money, is by nature neutral; everything depends on how society decides to use it. But we can easily be misled into believing that “reality, goodness and truth automatically flow from technological and economic power” (Laudato Si’ #105).

There are benefits that derive from AI: it is already being used in medicine and healthcare to diagnose cancer and to assist the mobility of the blind. It is used for security and crisis response, and it contributes to empowerment in education, the environment and the economy.

AI… challenges us to think carefully about the ethical implications of these machines

Pope Francis, in his message to the World Economic Forum in Davos in January 2018, called for technological development that serves humanity and protects our “common home”. Yet in practice we are experiencing the abuse of technology by governments, politicians and businesses, which control internet data to manipulate voting patterns and people’s preferences.

“The fact is that contemporary man has not been trained to use power well because our immense technological development has not been accompanied by a development in human responsibility, values and conscience” (Laudato Si’ #105).

This is why any reference to ‘electronic personality’ is dangerous. The implication is that machines, like human beings, could be held responsible for their actions. Could machines be liable for damages if they destroy property or hurt people? As a concept this resembles that of a company being a legal entity separate from the human persons who own or manage it. The underlying question is whether machines can have human rights.

It sounds ridiculous to write about the differences between machines and human beings; for us the differences are obvious. But can there be a discussion of ‘electronic legal persons’ along the same lines as a company being a legal entity? The point being made is that robots can make autonomous decisions while operating. Yet the fundamentals go beyond questions of legality. Why would one label a robot human? Even if we call them ‘electronic persons’, can robots have a conscience? Can they desire to build relationships? Do these machines have the freedom to decide and to act?

An interdisciplinary debate is being encouraged, but even this requires appropriate safeguards to prevent innovators and consumers from trusting these machines blindly. In Laudato Si’, Pope Francis writes that “we have certain superficial mechanisms, but we cannot claim to have a sound ethics, a culture and spirituality genuinely capable of setting limits and teaching clear self-restraint” (#105). Humanity must remain firmly within the loop of any such debate.
