AI ETHICIST
Animal Welfare and AI Ethics
by Kashyap Kompella
Much of AI ethics is about reducing harm to humans, but what about animals? Of course, concern for animal rights is not new, and that movement is well-established. A discussion of animal rights philosophy is beyond the scope of this column. Instead, I want to examine two ways in which AI and animal welfare intersect.
ANIMAL MODELS VS. AI MODELS
Animal experiments are used extensively in safety studies for many consumer and pharma products. According to Thomas Hartung, writing in The Scientist, toxicity information is missing for a majority of the 100,000 chemicals used in consumer products, so we resort to testing these chemicals on animals, causing them considerable pain and suffering. It is therefore good news when AI models can predict the toxicity of chemicals and reduce the need for animal testing. To be sure, there are limits to how much AI can replace animal testing, as Juan Carlos Marvizon notes in an article at Speaking of Research, but such in silico alternatives are worth exploring further.
ANIMALS AND AUTONOMOUS MACHINES
As the adoption of autonomous (and semi-autonomous) machines and robots increases, they come into contact with animals more often (for example, domestic animals and vacuuming robots, farm animals and robot harvesters, or self-driving cars and stray animals). How should such animal-machine interactions be approached? How should the AI algorithms in these machines respond? How should they be programmed to reduce animal suffering and look for ways to improve animal welfare? Of course, similar concerns exist when service robots are interacting with humans or when cobots (collaborative robots in shared spaces) are working alongside humans, and we already have safety standards defined by ISO so that humans and robots can safely share a workspace. But discussions about ensuring animal welfare are few and far between, and there aren't many industry efforts to consider animal encounters. For instance, self-driving cars have trouble recognizing animals on the road.
ANIMALS ON THE AI FARM
The trolley problem is a series of thought experiments designed to tease out ethical norms and values in difficult situations by forcing decisions about who will be harmed in a particular scenario. Such scenarios may seem simplistic, but trolley problem-like situations involving animals can quickly become real as our stock of autonomous machines and vehicles grows. I think that, for most people, a machine that prioritizes humans over animals is a no-brainer. But imagine a scenario in which the machine has to choose between damage to property and harm to an animal. There may be no simple, one-size-fits-all answer. The point is that there are choices to be made. On what ethical principles will those decisions hinge?
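To make that concrete, here is a minimal, purely hypothetical sketch in Python of what encoding such a priority ordering might look like. The harm categories, their ordering, and the least_harmful function are invented for illustration; they do not describe any real vehicle, product, or standard.

from enum import IntEnum

class HarmCategory(IntEnum):
    # Lower value = higher protection (this category is avoided first)
    HUMAN = 0
    ANIMAL = 1
    PROPERTY = 2

def least_harmful(options):
    # options: list of (maneuver_name, harm_category) pairs, where each pair
    # describes the unavoidable harm a maneuver would cause.
    # Picking the option whose harm category has the highest value means the
    # machine prefers harming property over an animal, and an animal over a
    # human. That is a value judgment someone has to make explicit before it
    # can be coded at all.
    return max(options, key=lambda option: option[1])

choice = least_harmful([
    ("swerve_into_fence", HarmCategory.PROPERTY),
    ("brake_and_hit_stray_dog", HarmCategory.ANIMAL),
])
print(choice[0])  # prints: swerve_into_fence

The sketch is trivial on purpose: the hard part is not the code but deciding, and justifying, the ordering it encodes.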
I just discussed two areas: one in which AI can clearly play a role in increasing animal welfare, and another in which we need to consider and codify animal welfare. There are several other areas that present opportunities as well as pose moral and ethical questions related to animal rights. The scenarios and decisions to be made span a spectrum, from whether your robot vacuum spares or squashes a bug it finds while cleaning to the impact of an automated drone strike on livestock.
Today, there is broad consensus that our AI applications should adhere to our ethical norms and values. However, neither the AI nor the AI ethics discipline has a major initiative to address the impact of AI on animals. It's time we expanded the scope of AI ethics to encompass animal welfare.