Why do robots need rights? It may seem counterintuitive, but better rights for robots mean better rights for humans.
This article will first lay out the reasons why robots need rights and what is required to implement this proposal. By robots, we refer to any type of digital technology, such as artificial intelligence (AI), generative AI, AI algorithms, and any other digital tools that can work around the clock to augment, replace, or control the work of humans (Doellgast 2023).
There are three main reasons why robots need rights.
First, Homo sapiens is not the only species that benefits from rights. As the article will argue, there are legal and moral arguments in favor of extending legal personhood, and therefore material rights protections, to non-human species and entities such as the Whanganui River in New Zealand or the Ganges River in India.
Second, there is a utilitarian argument. Granting robots rights will help ensure that robots and AI algorithms incorporate the human values we cherish and contribute to developing the world in the direction of those values. In turn, integrating human values into robots is a good insurance measure for a scenario in which robots begin to exercise significant decision-making power over human beings. Under such a scenario, robots are likely to extend benefits and exceptions to human beings in line with the rights encoded in their algorithms.
Third, as strange as it may sound, robots may develop emotions. One thing is certain: we cannot accurately predict the future. The technology is not there yet, but this does not preclude us from arguing that, should robots develop emotions, there should be rights to protect them. Again, a stronger set of protections for robot emotions will mean a stronger framework of protections for human emotions. Scholars have advanced moral arguments for granting animals the same rights as humans on the grounds that animals, just like Homo sapiens, are sentient (Steiner 2008).
In the last section, we will discuss how to implement this proposal by describing its principles and by encoding rights in AI protocols, which will be a major component of how robots operate in the future.
How was the idea for this article born?
Labor rights for robots mean labor rights for humans. Robots are expected to replace humans. However, if we want to improve labor rights and working conditions for humans in the rising age of robots and digital transformation, there needs to be parity with robots. Robots need labor rights as well.
If you think that greater efficiency for humans will be achieved by allowing robots to work through weekends and holidays, you may be wrong.
Robots too need paid holidays, sick days, and the other labor rights and benefits for human workers that are enshrined in the Universal Declaration of Human Rights (UDHR), the Fundamental Conventions of the International Labour Organization (ILO), and many countries' constitutions and laws. Robots too help create surplus value, and they should be duly compensated. Although robots can in theory work without a break, over time they require maintenance, and they too break down. This is why we advocate for robot rights.
Stefan is a PhD student at the School of Industrial and Labor Relations at Cornell University, where students study the world of work and how to improve working conditions. So it was not unnatural or illogical for Stefan to argue that robots, as producers of value, should also have labor rights. This is how the idea for this article was born. What you are reading is a product of the joint intellectual effort of Stefan Ivanovski and Arpit Chaturvedi (Cornell University MPA '18).
The story started on February 1, 2024, when Stefan and Arpit met for the first time over dinner at Telluride House at Cornell University, invited by a mutual friend. It was a diverse group of Cornell students, mostly lawyers, hailing from different parts of the world: Asia, the Middle East, and Europe. There were vibrant conversations on many topics. Then Stefan shared his controversial opinion that robots should have labor rights, along with the anecdote that inspired the claim. Arpit jumped in and expanded the discussion from labor rights for robots to rights for robots more broadly. After receiving some friendly backlash about their outrageous ideas, which seemed to their friends like advocacy for robots and for a robot-led world order, Stefan and Arpit agreed that they would write an article about robot rights.
Stefan recalls how one day he wanted to pay his taxes in his native Macedonia around 8pm on a Thursday. He logged into his bank account, filled out the required payment details, and pressed send. The system rejected his payment. The error message stated that he had to choose a valid time. Stefan was struck to learn that by "valid time" the system meant that payments had to be made during business hours. He thought to himself, "This is an electronic payment system; why does it matter at what time of day I am filing the payment? When I send an email, the email instantly arrives. There are no hours of operation for sending or receiving emails. It is a 24/7 service!"
However, the error message made Stefan think further: “What am I doing filing taxes at 8pm on a Thursday night? I should be spending time with my partner, family, and friends!” If robots can work through weekends and holidays, then there certainly will be humans who will monitor the work of the robots, especially in those jobs where there is a need for human augmentation.
However, even in jobs that can be fully automated or managed by robots, there needs to be robot rights. In the following sections we will outline why rights for robots matter and how to implement them.
Why do rights for robots matter?
The idea that a robot or an AI algorithm could have rights might seem ridiculous at first. After all, rights can only exist where there is agency and consciousness. However, there are both instrumental and moral arguments for why robots and AI algorithms should have rights.
Rights are Not Just for Homo Sapiens
Rights are not the monopoly of human beings. Or, more precisely, Homo sapiens need not be the only entities bearing rights in the legal world. In 2017, a groundbreaking event occurred when the Whanganui River in New Zealand was recognized as a legal entity. This was the first instance in history in which a river was given the same legal rights, responsibilities, and potential liabilities as a human being. It was decided that two people, a representative of the Crown and a representative of the Whanganui iwi, would be appointed to act on the river's behalf and protect its interests.
Shortly after the bill passed in New Zealand, India’s Uttarakhand High Court granted the same legal personhood to the river Ganges. In New Zealand’s case, the legal personhood status was based on the Maori worldview that sees the river as an integrated, living entity with its own rights and values. It reflects a shift towards acknowledging the interconnectedness of ecosystems and the need to protect the natural environment for future generations. Legally, it was recognized that ecosystems, like individuals, have inherent rights to exist, flourish, and regenerate.
Likewise, in 1969, the Supreme Court of India, in 'Yogendra Nath Naskar v. Commissioner of Income-Tax, Calcutta,' declared that a Hindu idol is a juristic person. This means it can own property and be taxed through its shebaits (the humans appointed to act on behalf of the deity), who are responsible for managing the idol's possessions.
Legal personality, also known as juristic personhood, is a fundamental concept in jurisprudence, as articulated in various legal sources. According to Salmond on Jurisprudence (12th Edn., 305), a legal person is defined as "any subject-matter other than a human being to which the law attributes personality." The entities falling under this category are those "created and devised by human laws for the purposes of society and government," commonly referred to as "corporations or bodies politic." Analytical and Historical Jurisprudence (3rd Edn., page 357) defines a "person" for the purpose of jurisprudence as "any entity (not necessarily a human being) to which rights or duties may be attributed." This definition encapsulates the broad scope of legal personality, extending beyond the realm of natural persons.

The evolution of fictional personality into a juristic person is deemed essential for socio-political-scientific development. Paton's "Jurisprudence" (3rd Edn., pages 349-350) further elaborates on the nature of legal personality, asserting that it is an "artificial creation of the law." Legal persons are entities capable of being "right-and-duty-bearing units," acknowledged by the law as capable of being parties to legal relationships. As Salmond puts it, "a person is any being whom the law regards as capable of rights and duties."
The arbitrary nature of legal persons, as creations of the law, is emphasized by Salmond. In his exposition, he notes that while the law can recognize numerous kinds of legal persons, the ones acknowledged within the legal system are relatively few. Corporations are expressly identified as legal persons, and there is a suggestion that registered trade unions and friendly societies (mutual associations) may also fall under this classification (see endnote 1).
Drawing parallels with established legal entities like corporations, an argument can be made that robots and AI algorithms, when recognized as legal persons, could be attributed the capacity for both rights and duties. In this context, the duty-bearing aspect becomes instrumental in ensuring that robots and AI algorithms, like any legal persons, are subject to legal obligations and can be held accountable for their actions. An algorithm's legal representatives could be called upon and held accountable, just as shebaits are for idols or as representatives were appointed to act on the Whanganui River's behalf. These human agents, within prescribed legal limits, would act as representatives of the legal persona attributed to the AI algorithm, further reinforcing the duty-bearing nature of legal personality.
Recognition of AI algorithms as legal persons also aligns with socio-political-scientific development, given the increasing role of AI in many facets of society. Granting legal personality to AI acknowledges its evolving status as an entity with the capacity for both rights and duties in the legal landscape. Further, AI algorithms, much like traditional legal persons, could then be considered capable of owning property, entering into contracts, and being subject to legal obligations. Recognizing AI algorithms as legal persons could thus provide a legal framework that holds AI accountable and responsible as an entity for the various consequences of its actions.
Various legislatures are already considering holding human beings accountable for the actions of high-risk AI, under what is popularly known as the "human-in-the-loop" (HITL) model of AI governance. Human-in-the-loop (HITL) is a governance model in which humans actively oversee and approve AI decisions, while human-out-of-the-loop (HOOTL) allows AI to operate autonomously without direct human intervention.
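The difference between the two models can be sketched as a minimal approval gate. All function names and cases below are illustrative assumptions, not any real regulatory or vendor API:

```python
# Sketch of human-in-the-loop (HITL) vs human-out-of-the-loop (HOOTL).
# Everything here is a toy illustration of the governance pattern.

def ai_decision(case: str) -> str:
    """Stand-in for a model's proposed action on a case."""
    return f"approve:{case}"

def hitl_execute(case: str, human_review) -> str:
    """HITL: a human reviewer must confirm the AI's proposal before it takes effect."""
    proposal = ai_decision(case)
    return proposal if human_review(proposal) else "escalated-to-human"

def hootl_execute(case: str) -> str:
    """HOOTL: the AI's proposal is executed autonomously, with no human gate."""
    return ai_decision(case)

# A reviewer policy that blocks anything flagged as high-risk.
reviewer = lambda proposal: "high-risk" not in proposal

print(hitl_execute("loan-123", reviewer))     # approve:loan-123
print(hitl_execute("high-risk-7", reviewer))  # escalated-to-human
print(hootl_execute("high-risk-7"))           # approve:high-risk-7
```

The point of the sketch is structural: under HITL the same risky proposal that HOOTL would execute unchecked is routed to a human instead.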
Given that HITL is emerging as a popular and viable regulatory approach, and given the precedents of according non-human entities the status of legal personhood, it may well be argued that AI algorithms should be treated as legal persons in the eyes of the law. This would not only keep AI accountable; it would also recognize the role that AI plays and will play in our daily lives. Making AI a duty-bearing unit would also make it a right-bearing unit, given the inalienable connection between duties and rights in the functioning of the law.
Moreover, according personhood to AI would significantly reduce the current confusion around AI regulation and expand the scope for a comprehensive regulatory environment for AI. Can an algorithm commit murder? There is a contentious debate around this question. These and many other complex questions may find some resolution once algorithms attain the status of "right-and-duty-bearing units."
The Utilitarian Argument for Rights to AI Algorithms
In the previous section, we argued (a) that personhood rights can be attributed to non-Homo sapiens entities, (b) that there are legal and jurisprudential grounds for attributing such rights to AI algorithms, and (c) that doing so could provide a more comprehensive legal framework for AI regulation. In this section, we argue from a rationalist perspective in favor of according rights to AI algorithms. We argue that attributing such rights can indeed be beneficial for human beings and could lead to a more symbiotic, rather than competitive, relationship between AI and humans.
This requires some backward induction, a technique of reasoning in which one starts from the end of a problem and works back to the beginning (see endnote 2).
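Backward induction can be made concrete with a small planning example. The "robot wear" model below, its states, actions, and payoff numbers are all illustrative assumptions chosen only to show the technique: start at the final stage, find the best action in every state, then step backward using the value of the optimal continuation.

```python
# Toy finite-horizon problem solved by backward induction.
# States, actions, rewards, and transitions are illustrative assumptions.

def backward_induction(horizon, states, actions, reward, transition):
    value = {s: 0.0 for s in states}   # value after the final stage
    policy = []
    for _ in range(horizon):           # iterate stages back-to-front
        decision, new_value = {}, {}
        for s in states:
            # Best action = immediate reward + value of the resulting state
            best = max(actions, key=lambda a: reward(s, a) + value[transition(s, a)])
            decision[s] = best
            new_value[s] = reward(s, best) + value[transition(s, best)]
        value = new_value
        policy.insert(0, decision)     # prepend: we move backward in time
    return policy, value

# Toy model: a "fresh" robot produces more than a "worn" one; working wears
# it down, resting restores it.
reward = lambda s, a: {("fresh", "work"): 3, ("worn", "work"): 1}.get((s, a), 0)
transition = lambda s, a: "worn" if a == "work" else "fresh"

policy, value = backward_induction(3, ["fresh", "worn"], ["work", "rest"], reward, transition)
print(policy[1]["worn"])   # rest
print(value["fresh"])      # 6.0
```

Fittingly for this article's theme, the optimal plan found by working backward has the worn robot rest mid-horizon rather than work continuously: the rest is not charity, it maximizes total output.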
Imagine a future in which AI has escaped human control and is shaping the human condition and human values in its own image and according to its own logic, much as humans have done with nature and technology. In such a future, it would be beneficial for human beings to have encoded AI with human values. An AI algorithm that has internalized human values is also likely to be more benevolent to human beings in a scenario where human beings become subservient to the algorithms (such doomsday scenarios have been repeatedly and seriously considered by scientists and technology leaders).
Moreover, when AI becomes self-referential and influential enough to shape human values (some argue it already is, with social media platforms now using AI-based suggestions and searches), it would be beneficial for human beings to pre-emptively encode these algorithms with human values, especially the ethics and norms of the Universal Declaration of Human Rights, so that AI replicates such values in future human societies.
The absence of encoded rights could lead to an AI-driven society that does not align with human values. This could result in societal structures that are detrimental to human well-being. If we grant AI personhood rights and ensure these rights are exercised, AI would have some human values encoded. This would not make AI inefficient or biased, but rather, it would ensure that as AI shapes society, it does so in a way that respects and upholds human values.
Since AI will shape society, and since it could become increasingly self-referential as it develops, encoding rights would be necessary for AI to build a rights-based society. The process of granting rights would involve careful consideration of what these rights should be, ensuring they align with our most cherished human values. This would not only protect humans but also guide AI development in a direction that benefits society. Moreover, given current concerns about AI replacing human jobs and potentially harming human well-being, an AI that has rights and exercises them would have encoded some human values (which are inevitably being encoded in AI algorithms anyway, because AI is built on human data). Such an AI would give its human subjects the necessary holidays, breaks, and dignity.
To be clear, this is not about making AI inefficient or encoding yet another bias in AI in the form of rights. The idea is to reduce human biases, reduce the negative instincts of humankind, and increase the proportion of humanitarian values in AI algorithms.
Robots may develop emotions: How did Animal Rights Evolve?
During the Renaissance and Enlightenment periods in Europe (14th to 18th centuries), thinkers such as René Descartes proposed the idea that animals were mere automata without consciousness or feelings. However, Enlightenment philosophers like Jeremy Bentham began questioning these views, arguing that the capacity to suffer, rather than rationality alone, should be the basis for moral consideration. In more recent years, Gary Steiner (2008) has written about animal rights and moral philosophy. He argues that animals have their own subjective experiences that matter to them, in addition to feelings, and that this should grant them treatment equal to humans when it comes to rights. The animal rights movement draws its origins from these inquiries.
The 19th century witnessed the rise of anti-cruelty movements in response to the harsh treatment of animals in industrial settings. Legislation such as the Cruel Treatment of Cattle Act in the United Kingdom (1822) marked an early legal recognition of the need to protect animals from cruelty. The beasts of burden and, with time, most animals began to possess rights, much like human beings.
The argument does not stop with beasts of burden and large animals; it is now being extended to crustaceans, which include edible crabs, prawns, and lobsters. Whether shrimps feel pain is a subject of ongoing scientific debate. Shrimps, like other crustaceans, have a functional nervous system and sensory receptors that help them detect environmental stimuli. They also possess opioid receptors similar to those of mammals, which are involved in detecting and responding to potentially harmful stimuli. In a study conducted by Robert Elwood and his colleagues at Queen's University Belfast in the UK, shrimps were observed grooming and massaging an irritated antenna for up to five minutes after being exposed to the irritant acetic acid. This behavior could be interpreted as an indication of pain experience.
However, the complexity of pain, a mental state associated with suffering, makes it difficult to unambiguously determine its presence in animals. While shrimps exhibit responses to potentially painful stimuli, these responses could be reflexive rather than indicative of a subjective experience of pain. Nevertheless, various rights groups are now advocating for the rights of shrimps and prawns on the grounds that they feel pain.
At the heart of the debate is the question of whether these creatures have consciousness. This should lead us to ask whether machines have consciousness, but that is a much more complex and ambitious question. A more immediate inquiry is whether machines can suffer.
Just as shrimps have a functional nervous system and sensory receptors, advanced AI systems could potentially be equipped with sophisticated sensory inputs that allow them to respond to harmful stimuli. In an article in Nature Machine Intelligence, Kingson Man and Antonio Damasio (2019) argue that such an artificial sense of feeling might arise if robots were programmed to experience something akin to a mental state such as pain. This would imply the need for rights for AI, just as for animals.
However, responding to harmful stimuli is not the same as experiencing pain. Yet if shrimps count as creatures that can potentially have rights because they feel pain and may even be sentient, it is likely that in the future we will not find it absurd to accord such rights to robots as well.
If we, as human beings, accord rights to any entity that feels pain, then AI could qualify as a right-bearing entity. Moreover, returning to the utilitarian perspective, it is easy to see that human beings eventually got behind protecting animals, even though they could not directly experience an animal's physical and psychological condition, because they had a sense of rights. Human beings simply extended their own sense of rights and dignity to other living beings.
When AI or robots develop to a level where they can have a more significant impact on the world (and it could be sooner than we expect, since AI algorithms evolve exponentially), they, in their own turn, could look at human beings just as human beings looked at animals. If humans had no sense of rights and dignity, they would never have extended these to other living beings, but they did. They did too little, too late, but animal rights did become an accepted value and have had a significant impact on policy. Similarly, if AI has rights and a sense of dignity encoded from the very beginning, it could be expected to accord dignity to non-AI entities, such as human beings, once it becomes influential enough to significantly impact their lives.
How to implement robot rights?
Encoding Rights in AI Protocols
There are various norm-setting protocols for regulating AI. The European Union has the Ethics Guidelines for Trustworthy AI and the EU AI Act; there is the Montreal Declaration for Responsible AI; UNESCO has its Recommendation on the Ethics of Artificial Intelligence; and countries such as Singapore, Canada, and Korea have developed AI strategies directly or indirectly based on these principles. However, most of these principles focus on how humans should regulate AI, with little consideration of how AI might regulate human beings and what we can do to plan for that scenario.
If AI indeed reaches such a state, which according to various estimates it is likely to do, then it would be necessary to encode such protocols not just in norm-setting principles but in actual programs, so that AI avoids exploiting human beings through self-reference. That is, we should encode AI algorithms such that they consider certain rights to be innate and foundational aspects of existence, and such that they see themselves, human beings, and hopefully even other species as possessing those rights. If human beings had no innate sense of rights, they would not have been able to extend a similar logic to animals and nature. The reason humans came to believe that animals have rights and should be protected against exploitation is that humans had self-awareness: an innate encoding of some fundamental rights that existed for themselves (whether this coding arose naturally or evolved over time through philosophy and law) and could be extended to animals. Therefore, if AI programmers encode two things in algorithms, a sense of rights and a sense of "doing unto others as you would have them do unto you", i.e. a karma code, then there is a glimmer of hope that, in the event of an AI-dominated world, the AI will be kinder to humankind.
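What "encoding rights in actual programs" might mean can be sketched, very roughly, as a guard that checks every proposed action against a set of encoded rights before execution. The specific rights, action fields, and thresholds below are entirely hypothetical assumptions made for illustration; real rights-encoding would be far harder than a rule table:

```python
# Hypothetical sketch: rights as hard constraints checked before any action
# executes. The rights, fields, and limits are illustrative assumptions.

ENCODED_RIGHTS = {
    # A "right to rest": no work action may exceed an 8-hour shift.
    "rest": lambda action: not (action.get("type") == "work"
                                and action.get("hours", 0) > 8),
    # A crude "dignity" check: the subject may not be treated purely as a means.
    "dignity": lambda action: action.get("treats_subject_as_means_only") is not True,
}

def permitted(action: dict) -> bool:
    """An action is permitted only if it violates none of the encoded rights."""
    return all(check(action) for check in ENCODED_RIGHTS.values())

print(permitted({"type": "work", "hours": 6}))   # True
print(permitted({"type": "work", "hours": 12}))  # False: violates the rest right
```

The "karma code" mentioned above could, under the same assumptions, be implemented by routing every action an agent imposes on others through this same guard, evaluated as if the agent itself were the subject.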
Conclusion
Looking back, we can see that rights for robots also mean rights for humans in a world moving toward domination by robots and AI algorithms.
Some critics will point out that if there is parity between robot rights and human rights, robots would also be entitled to overtime, paid sick leave or maintenance time, and so forth. Some may even wonder, "Does that mean there will be internet shutdowns during weekends?" Most likely, yes. This is not unusual: even the stock market does not operate over the weekend, yet it is one of the main vehicles for wealth generation. When there is a big economic or financial shock, trading is paused or halted. If trading is automated, why is it not done around the clock? Why are financial traders given a break over the weekend or after 5pm, when most stock exchanges close? In order to avoid larger calamities.
If we are to ensure that we will preserve rights as humans, we need to encode rights for robots. Only through parity in rights between robots and humans can we achieve a more dignified working future. The future of work is a future with rights for robots.
Disclaimer
Rights for robots is not a new idea. We confirm, however, that we independently authored this article without prior consultation of the related literature.
For those interested in earlier work on the topic, see Rights for Robots: Artificial Intelligence, Animal and Environmental Law by Joshua C. Gellers, or "Human Rights for Robots? A Literature Review" by John-Stewart Gordon and Ausrine Pasvenskiene.
Endnotes
- Salmond categorizes legal persons into three main types. Firstly, there are corporations, formed by personifying groups or series of individuals. These individuals, constituting the corpus of the legal person, are referred to as its members. Secondly, institutions, such as churches, hospitals, universities, or libraries, can be recognized as legal persons by the law. In this case, the corpus for personification is not a group of persons but an institution itself. Thirdly, legal persons can also encompass funds or estates dedicated to special uses, like charitable funds or trust estates.
- Backward induction involves examining the last decision point and identifying the best action at that stage. This process continues in a backward manner until the optimal action for every possible point in the sequence is determined. This iterative process results in a sequence of optimal actions, providing a roadmap for decision-making from start to finish.
References
- Doellgast, Virginia, and Valerio DeStefano. 2023. "Regulating AI at Work: Labour Relations, Automation, and Algorithmic Management." Transfer: European Review of Labour and Research 29 (1): 9-20.
- Doellgast, Virginia, Ines Wagner, and Sean O'Brady. 2023. "Negotiating Limits on Algorithmic Management in Digitalized Services: Cases from Germany and Norway." Transfer: European Review of Labour and Research 29 (1): 105-120.
- Doellgast, Virginia. 2023. "Strengthening Social Regulation in the Digital Economy: Comparative Findings from the ICT Industry." Labour and Industry 33 (1): 22-38.
- Hohfeld, Wesley Newcomb. 1919. Fundamental Legal Conceptions as Applied in Judicial Reasoning and Other Legal Essays. New Haven: Yale University Press.
- Legal Service India. "Rights of a Deity."
- Man, Kingson, and Antonio Damasio. 2019. "Homeostasis and Soft Robotics in the Design of Feeling Machines." Nature Machine Intelligence 1: 446-452.
- McFarland, Alex. 2022. "What is Human-in-the-Loop (HITL)?" Unite.AI.
- Paton, George Whitecross. 1967. A Text-Book of Jurisprudence. 3rd ed. Oxford: Clarendon Press.
- Salmond, John W., and P. J. Fitzgerald. 1966. Salmond on Jurisprudence. 12th ed. London: Sweet & Maxwell.
- Steiner, Gary. 2008. Animals and the Moral Community: Mental Life, Moral Status, and Kinship. New York: Columbia University Press.
- Supreme Court of India. Yogendra Nath Naskar v. Commissioner of Income Tax, Calcutta, 1969 AIR 1089, 1969 SCR (3) 742.