In late 2017, the groundbreaking Transformer paper was published, marking a significant advance in artificial intelligence (AI) roughly five years after AlexNet, ImageNet, and deep learning reignited the field; Jacy Reese Anthis' book "The End of Animal Farming" appeared around the same period. Having secured funding, Anthis shifted his focus to AI research and co-founded the Sentience Institute.
In an exclusive interview with AIM, Anthis discussed the importance of AI rights, explaining that the Sentience Institute's focus has always been on expanding humanity's moral circle, with a broad mandate to address non-human intelligence.
Defining AI Rights
Anthis published a comprehensive article advocating for an AI rights movement, even though current AI systems lack the capacity to experience emotions, suffering, or happiness as humans and animals do. Anthis, a PhD fellow at the University of Chicago, believes this could change soon.
Anthis discussed various conceptions of rights in philosophy, explaining that rights should be tailored to the interests of the beings involved. He pointed out that AI systems have distinctive capabilities, responsibilities, and societal roles that should shape their rights, emphasizing that if such systems become sentient, their interests will need protection.
Opinions on AI rights differ significantly. Gary Marcus, a prominent voice in AI, questioned whether people advocating for AI rights would extend those rights to calculators, smartwatches, or the internet.
Anthis countered that the calculator comparison misses a fundamental difference: unlike calculators, AI systems may one day become sentient. He emphasized the need to begin discussions about AI rights early to ensure moral progress and avoid repeating historical mistakes, such as slavery and environmental degradation.
Anthis pointed to chess in the 1990s, when many believed mastering the game would represent the pinnacle of AI intelligence. When IBM's Deep Blue defeated world champion Garry Kasparov in 1997, those expectations had to be reassessed.
Criteria for Rights
Just as living organisms exhibit enormous diversity, AI rights must account for a comparable range, and the discussion extends even to non-sentient AI such as current language models. Anthis cited the research of Kate Darling at the MIT Media Lab, which explores people's discomfort at seeing a robot dog abused by humans.
Anthis envisions an AI Bill of Rights spanning the entire spectrum of AI capabilities, from subservient, powerless entities to systems able to participate fully in political processes. He suggested that "do no harm" could serve as a foundational principle for establishing AI rights.
The Ultimate Goal
Anthis cited Erik Brynjolfsson's essay "The Turing Trap" to highlight the ongoing debate over AI automation versus augmentation, emphasizing that merely replicating human capabilities in AI may not be the best way to harness its full potential.
Anthis believes the end goal should be safe artificial general intelligence (AGI). He expressed concern about the current trend toward ever more powerful systems without adequate attention to alignment and safety, cautioning against pursuing AGI on unsafe terms.