In Law a company is treated as having the rights and obligations of a person. In this era of Artificial Intelligence (intelligent assistants, ‘Robo’-advisors, robots, and autonomous vehicles), ‘algorithms’ are rapidly emerging as artificial persons: legal entities that are not human beings but that, for certain purposes, are considered by virtue of statute to be natural persons. Intelligent algorithms will increasingly require formal training, testing, verification, certification, regulation, insurance and, most importantly, status in law.
For example, Regulators in financial services already require firms to demonstrate that their trading algorithms have been thoroughly tested, deliver ‘best execution’ and do not engage in market manipulation. Other notable cases are healthcare algorithms: ‘medical-assistant’ Chatbots and patient-screening systems that will increasingly dispense medical advice and treatments to patients. Regulators, who have traditionally regulated firms and individuals, are in effect raising the status of ‘algorithms’ to that of ‘persons’.
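To give non-technical readers a flavour of what such testing involves, the following is a minimal, illustrative Python sketch of one narrow ‘best execution’ check: it flags fills whose price is worse than the prevailing best bid/offer at the time of execution. The Trade record, the within_best_execution function and the numbers are hypothetical constructs for this example; real best-execution obligations (e.g., under MiFID II) weigh price alongside costs, speed and likelihood of execution.

    from dataclasses import dataclass

    @dataclass
    class Trade:
        """A hypothetical executed order; side is 'buy' or 'sell'."""
        side: str
        price: float     # price actually paid/received
        best_bid: float  # prevailing best bid at execution time
        best_ask: float  # prevailing best ask at execution time

    def within_best_execution(trade: Trade, tolerance: float = 0.0) -> bool:
        """Return True if the fill price is no worse than the prevailing
        quote by more than `tolerance` (an illustrative threshold)."""
        if trade.side == "buy":
            # A buy should not pay materially more than the best ask.
            return trade.price <= trade.best_ask + tolerance
        # A sell should not receive materially less than the best bid.
        return trade.price >= trade.best_bid - tolerance

    # Illustrative check over a trade log (invented numbers).
    trades = [
        Trade("buy",  100.02, best_bid=99.98, best_ask=100.02),
        Trade("sell",  99.90, best_bid=99.97, best_ask=100.01),  # suspect fill
    ]
    flagged = [t for t in trades if not within_best_execution(t)]
    print(f"{len(flagged)} of {len(trades)} trades flagged for review")

In practice, demonstrating compliance to a Regulator would mean running checks of this kind across complete audit trails and market-data snapshots, not the toy records above.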
We consider the emergence of ‘algorithms as artificial persons’ and the consequent need to formally verify, certify and regulate algorithms. Our aim is to start a discussion in the Legal profession regarding the legal impact of algorithms on firms, software developers, insurers and lawyers. This paper is written with the expectation that the reader is familiar with ‘Law’ but has limited knowledge of algorithm technologies.
The science fiction writer Isaac Asimov famously proposed the “Three Laws of Robotics” (Asimov, 1950): 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; and 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. He later added a fourth, the Zeroth Law: 0) A robot may not injure humanity, or, by inaction, allow humanity to come to harm.
Fast forward: in 2007 the South Korean Government proposed a Robot Ethics Charter, and in 2011 the UK Research Council EPSRC (Boden et al., 2011) published five ethical “principles for designers, builders and users of robots”. More recently, the Association for Computing Machinery (ACM, 2017) published seven principles for algorithmic transparency and accountability.
In this era of Artificial Intelligence (AI), ‘algorithms’ are rapidly emerging in Law as artificial persons. Algorithmic trading systems (Treleaven et al., 2013) already account for 70%-80% of US equity trades. Apple, Google and Amazon provide ‘intelligent’ virtual assistants and Chatbots (Virtual assistant, 2017) such as Apple Siri and Google Assistant, and ‘smart’ devices such as Amazon Echo, Google Home and Apple HomePod that interact through speech. Numerous financial firms provide ‘Robo’ investment advisors (Robo-advisor, 2017). Baidu has a medical-assistant advisor currently running in China (Baidu Research, 2017). And Google, Uber, Tesla and most car manufacturers are working on autonomous vehicles (Autonomous car, 2017). In response, governments and regulators are modifying national laws to encourage innovation, while lawyers and insurers scramble to absorb the implications.