Weekend Long Read: Why, When, and How to Regulate AI

23 Apr 2025

By Simon Chesterman

Debate should move away from abstract consideration of what rules might constrain or contain AI behavior, and get into the more practical challenges of lawmaking.

The better part of a century ago, science fiction author Isaac Asimov imagined a future in which robots have become an integral part of daily life. At the time, he later recalled, most robot stories fell into one of two genres. The first was robots-as-menace: technological innovations that rise up against their creators in the tradition of “Frankenstein,” but with echoes at least as far back as the Greek myth of Prometheus, invoked in the subtitle of Mary Shelley’s 1818 novel. Less common, a second group of tales considered robots-as-pathos — lovable creations that are treated as slaves by their cruel human masters. These produced morality tales about the danger posed not by humanity’s creations but by humanity itself.

Asimov’s contribution was to create a third category: robots as industrial products built by engineers. In this speculative world, a safety device is built into these morally neutral robots in the form of three operational directives or laws. The first is that a robot may not injure a human, or through inaction allow a human to come to harm. Second, orders given by humans must be obeyed, unless that would conflict with the first law. And third, robots must protect their own existence, unless that conflicts with the first or second laws.
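Read as an engineering specification rather than as literature, the three laws amount to a strict priority ordering: each directive yields to the ones above it. The short sketch below is purely illustrative; the attribute names and the `choose_action` helper are invented for the example and are not drawn from Asimov or from any real robotics framework, but they show how such a lexicographic ordering might be expressed in code.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action, described by invented boolean attributes."""
    description: str
    injures_human: bool         # would carrying it out injure a human?
    averts_harm_to_human: bool  # would it prevent harm that inaction allows?
    obeys_human_order: bool     # was it ordered by a human?
    preserves_self: bool        # does it protect the robot's own existence?

def choose_action(candidates: list[Action], inaction_harms_human: bool = False) -> Action:
    """Pick the candidate that best satisfies the three laws.

    Scores are compared lexicographically: first-law compliance outranks
    second-law compliance, which outranks third-law compliance -- mirroring
    the 'unless that conflicts with...' clauses in the laws themselves.
    """
    def score(a: Action) -> tuple[bool, bool, bool]:
        first = (not a.injures_human) and (a.averts_harm_to_human or not inaction_harms_human)
        second = a.obeys_human_order
        third = a.preserves_self
        return (first, second, third)  # True sorts above False, position by position

    return max(candidates, key=score)

# Illustrative use: saving a human outranks obeying an order to stand by.
stand_by = Action("follow order to wait", False, False, True, True)
rescue = Action("disobey order and rescue", False, True, False, False)
assert choose_action([stand_by, rescue], inaction_harms_human=True) is rescue
```

Even this toy version points to the gap discussed below: the hard part is not encoding the priority ordering but deciding what counts as “harm” in the first place.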

The three laws are a staple of the literature on regulating new technology, though, like the Turing Test, they are more of a cultural touchstone than a serious scientific proposal. Among other things, the laws presume the need only to address physically embodied robots with human-level intelligence — an example of the android fallacy. They have also been criticized for putting obligations on the technology itself, rather than the people creating it. Here it is worth noting that Asimov’s laws were not “law” in the sense of a command to be enforced by the state. They were, rather, encoded into the positronic brains of his fictional creations: constraining what robots could do, rather than specifying what they should do.

More importantly, for present purposes, the idea that relevant ethical principles can be reduced to a few dozen words, or that those words might be encoded in a manner interpretable by an AI system, misconceives the nature of ethics and of law. Nonetheless it was reported in 2007 that South Korea had considered using them as the basis for a proposed Robot Ethics Charter. This was one of many attempts to codify norms governing robots or AI since the turn of the century, accelerating in the wake of the First International Symposium on Roboethics in Sanremo, Italy, in 2004. The European Robotics Research Network produced its Roboethics Roadmap in 2006, while the first multidisciplinary set of principles for robotics was adopted at a Robotics Retreat held by two British Research Councils in 2010.

The years since 2016 in particular saw a proliferation of guides, frameworks, and principles focused on AI. Some were the product of conferences or industry associations, notably the Partnership on AI’s Tenets in 2016, the Future of Life Institute’s Asilomar AI Principles in 2017, the Beijing Academy of Artificial Intelligence’s Beijing AI Principles in 2019, and the Institute of Electrical and Electronics Engineers’ Ethically Aligned Design in 2019. Others were drafted by individual companies, including Microsoft’s Responsible AI Principles, IBM’s Principles for Trust and Transparency, and Google’s AI Principles — all published in the first half of 2018.

Governments have been slow to pass laws governing AI. Several have developed softer norms, however, including Singapore’s Model AI Governance Framework in 2019, Australia’s AI Ethics Principles in 2019, China’s AI Governance Principles in 2019, and New Zealand’s Algorithm Charter in 2020. At the intergovernmental level, the G7 adopted the Charlevoix Common Vision for the Future of Artificial Intelligence in 2018, the OECD issued its Recommendation of the Council on Artificial Intelligence in 2019, and the European Union published Ethics Guidelines for Trustworthy AI in 2019, a precursor to the draft AI Act circulated in 2021. Various parts of the UN system have adopted documents, most prominently UNESCO’s Recommendation on the Ethics of Artificial Intelligence in 2021. Even the Pope endorsed a set of principles in the Rome Call for AI Ethics in 2020.

What is striking about these documents is the overlapping consensus that has emerged as to the norms that should govern AI. Though the language and the emphasis may differ, virtually all those written since 2018 include variations on the following six themes:

1. Human control — AI should augment rather than reduce human potential and remain under human control.

2. Transparency — AI systems should be capable of being understood and their decisions capable of being explained.

3. Safety — AI systems should perform as intended and be resistant to hacking.

4. Accountability — Though often left undefined, calls for accountable or responsible AI assume or imply that remedies should be available when harm results.

5. Non-discrimination — AI systems should be inclusive and “fair,” avoiding impermissible bias.

6. Privacy — Given the extent to which AI relies on access to data, including personal data, privacy or personal data protection is often highlighted as a specific right to be safeguarded.

Additional concepts include the need for professional responsibility on the part of those developing and deploying AI systems, and for AI to promote human values or to be “beneficent.” At this level of generality, these amount to calls for upholding ethics generally or the human control principle in particular. Some documents call for AI to be developed sustainably and for its benefits to be distributed equitably, though these more properly address how AI is deployed rather than what it should or should not be able to do.

None of the six principles listed above seems controversial. Yet, for all the time and effort that has gone into convening workshops and retreats to draft the various documents, comparatively little attention has been paid to what they mean in practice or how they might be implemented. This is sometimes explicitly acknowledged, with the justification that a document is intended to be applicable to technologies as yet unknown and to address problems not yet foreseen.

A different question yields a more revealing answer: are any of these principles, in fact, necessary? Calls for accountability, non-discrimination, and privacy essentially amount to demands that those making or using AI systems comply with laws already in place in most jurisdictions. Safety requirements recall issues of product liability, with the additional aspect of taking reasonable cybersecurity precautions. Transparency is not an ethical principle as such but a prior condition to understanding and evaluating conduct. Together with human control, however, it could amount to a restriction on the development of AI systems above and beyond existing laws.

Rather than add to the proliferation of principles, this article will shift focus away from the question of what new rules are required for regulating AI and address three questions:

1) Why regulation is necessary.

2) When changes to regulatory structures (including rules) should be adopted.

3) How all this might be implemented.

To regulate, or not to regulate?

In theory, governments regulate activities to address market failures, or in support of social or other policies. In practice, relationships with industry and political interests may cause politicians to act—or refrain from acting—in less principled ways. Though the troubled relationship between Big Tech and government is well documented, this section will assume good faith on the part of regulators and outline considerations relevant to the choices to be made.

In the context of AI systems, market justifications for regulation include addressing information inadequacies as between producers and consumers of technology, as well as protecting third parties from externalities—harms that may arise from deploying AI. In the case of autonomous vehicles, for example, we are already seeing a shift of liability from driver to manufacturer, with a likely obligation to maintain adequate levels of insurance. This provides a model for civil liability for harm caused by some other AI systems—notably transportation more generally (including drones) and medical devices—under existing product liability laws.

Regulation is not simply intended to facilitate markets, however. It can also defend rights or promote social policies, imposing, in some cases, additional costs. Such justifications reflect the moral arguments for limiting AI. In the case of bias, for example, discrimination on the basis of race or gender is prohibited even if it is on some other measure “efficient.” Similarly, the prohibition on AI systems making kill decisions in armed conflict cannot easily be defended on utilitarian grounds, since such systems might one day be more compliant with the law of armed conflict than humans. The prohibition stems, instead, from a determination that morality requires that a human being take responsibility for such choices.

Other grounds for restricting the outsourcing of functions to AI concern public decisions, where legitimacy depends on the process as much as on the outcome. Even if an AI system were believed to make better determinations than politicians and judges, governmental functions that affect the rights and obligations of individuals should nonetheless be undertaken by office-holders who can be held accountable through political or constitutional mechanisms.

A further reason for regulating AI is more procedural in nature. Transparency, for example, is a necessary precursor to effective regulation. Though transparency is no panacea and brings additional costs, requirements for minimum levels of transparency and for the ability to explain decisions can make oversight and accountability possible.

Against all this, governments may also have good reasons not to regulate a particular sector if it would constrain innovation, impose unnecessary burdens, or otherwise distort the market. Different political communities will weigh these considerations differently, though it is interesting that regulation of AI appears to track the adoption of data protection laws in many jurisdictions. The United States, for instance, has largely followed a market-based approach, with relatively light-touch sectoral regulation and experimentation across its 50 states. That is true also of data protection, where a general federal law is lacking but particular interests and sectors, such as children’s privacy or financial institutions, are governed by statute. In the case of AI, the U.S. National Science and Technology Council argued against broad regulation of AI research or practice. Where regulatory responses threatened to increase the cost of compliance or slow innovation, the Council called for softening them, if that could be done without adversely impacting safety or market fairness.

The document advancing these positions was finalized six months after the European Union enacted the General Data Protection Regulation (GDPR), with sweeping new powers covering both data protection and automated processing of that data. The EU approach has long been characterized by a privileging of human rights, with privacy enshrined as a right after the Second World War, laying the foundation for the 1995 Data Protection Directive and later the GDPR. Human rights are also a dominant theme in EU considerations of AI, though there are occasional murmurings that this makes the continent less competitive.

China offers a different model again, embracing a strong role for the state and less concern about the market or human rights. As with data protection, a driving motivation has been sovereignty. In the context of data protection, this is expressed through calls for data localization — ensuring that personal data is accessible by Chinese state authorities. As for AI, Beijing identified it as an important developmental goal in 2006 and a national priority in 2016. The State Council’s New Generation AI Development Plan, released the following year, nodded at the role of markets but set a target of 2025 for China to achieve major breakthroughs in AI research with “world-leading” applications — the same year forecast for “the initial establishment of AI laws and regulations.”

Many were cynical about China’s lack of regulation, and its relaxed approach to personal data has often been credited with giving the country’s AI sector a tremendous advantage. Yet laws adopted in 2021 and 2022 incorporated norms closely tracking principles embraced in the European Union and by international organizations. More generally, such projections about future regulation show that, for emerging technologies, the true underlying question is not whether to regulate but when.

The Collingridge dilemma

Writing in 1980 at Aston University in Birmingham, England, David Collingridge observed that any effort to control new technology faces a double bind. During the early stages, when control would be possible, not enough is known about the technology’s harmful social consequences to warrant slowing its development. By the time those consequences are apparent, however, control has become costly and slow.

The climate emergency offers a timely illustration. Before automobiles entered widespread use, a 1906 Royal Commission studied the potential risks of the new machines plying Britain’s roads; chief among these was thought to be the dust that the vehicles threw up behind them. Today, transportation produces about a quarter of all energy-related CO2 emissions, and its continued growth could outweigh all other mitigation measures. Though the Covid-19 pandemic had a discernible effect on emissions in 2020 and 2021, regulatory efforts to reduce those emissions face economic and political hurdles.

Many efforts to address technological innovation focus on the first horn of the dilemma: predicting and averting harms. That has been the approach taken in most of the principles discussed at the start of this article. In addition to conferences and workshops, research institutes have been established to evaluate the risks of AI, with some warning apocalyptically about the threat of general AI. If general AI truly poses an existential threat to humanity, this could justify a ban on research, comparable to restrictions on biological and chemical weapons. No major jurisdiction has imposed a ban, however, either because the threat does not seem immediate or due to concerns that it would merely drive that research elsewhere. When the United States imposed limits on stem cell research in 2001, for example, one of the main consequences was that U.S. researchers in the field fell behind their international counterparts. A different challenge is that if regulation targets near-term threats, the pace of technological innovation can leave regulators playing an endless game of catch-up. Technology can change exponentially, while social, economic, and legal systems tend to change incrementally. For these reasons, the principles discussed at the start of this article aim to be future-proof and technology-neutral. This has the advantage of being broad enough to adapt to changing circumstances, albeit at the risk of being so vague as to offer no meaningful guidance in specific cases.

Collingridge himself argued that instead of trying to anticipate the risks, more promise lies in laying the groundwork to address the second aspect of the dilemma: ensuring that decisions about technology are flexible or reversible. This is also not easy, presenting what some describe as the “barn door” problem of attempting to shut it after the horse has bolted. The following two sections consider two approaches to the timing of regulation that may provide a way to address or mitigate the Collingridge dilemma: the precautionary principle and masterly inactivity.

An ounce of prevention

A natural response to uncertainty is caution. The precautionary principle holds that if the consequences of an activity could be serious but are subject to scientific uncertainties, then precautionary measures should be taken or the activity should not be carried out at all. This principle features in many domestic laws concerning the environment and has played a key role in most international instruments addressing the topic. The 1992 Rio Declaration, for example, states that “[w]here there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.” In some implementations, the principle amounts to a reversal of the burden of proof: those who claim an activity is safe must prove it to be so.

Critics argue that the principle is vague, incoherent, or both. A weak interpretation amounts to a truism, as few would argue that scientific certainty is required for precautions to be taken; a strong interpretation is self-defeating, since precautionary measures can themselves have harmful effects. In a book-length treatment denouncing it as “European,” Cass Sunstein outlines the predictably irrational ways in which fears play out in deliberative democracies, notably the over-valuation of loss and the reactive nature of public opinion with regard to risk. That said, most accept the notion that there are at least some risks against which precautionary steps should be taken before they materialize or can even be quantified.

In the context of AI, the precautionary principle is routinely invoked with regard to autonomous vehicles, lethal autonomous weapons, the use of algorithms processing personal data in judicial systems, and the possibility of general AI turning on its human creators. Only the last one is a proper application of the principle, however, in that there is genuine uncertainty about the nature and the probability of the risk. The precise failure rate of autonomous vehicles may be unknown, for example, but the harm itself is well understood and capable of being balanced against the existing threat posed by human drivers. As for lethal autonomous weapons, opponents explicitly reject a cost-benefit analysis in favor of a bright moral line with regard to decisions concerning human life. Though there are ongoing debates about the appropriate degree of human control, the “risk” itself is not in question. Similarly, wariness of outsourcing public sector decisions to machines is not founded — or, at least, not only founded — on uncertainty as to the consequences that might follow. Rather, it is tied to the view that such decisions should be made by humans within a system of political accountability.

Nevertheless, as indicated earlier, it is telling that, despite the risks of general AI, there has thus far been no concerted effort to restrict pure or applied research in the area. More promising are calls that implicitly focus on the second horn of Collingridge’s dilemma: requirements to incorporate measures such as a kill switch, or attempts to align the values of any future superintelligence with our own. These can be seen as applications of the principle that human control should be prioritized. If a path to general AI becomes clearer, they should become mandatory.

Masterly inactivity

Another response to uncertainty is to do nothing. Refraining from action may be appropriate to avoid distorting the market through pre-emptive rulemaking or delaying its evolution through lengthy adjudication. The term sometimes used to describe this is “masterly inactivity.” With origins in nineteenth-century British policy on Afghanistan, it suggests a watchful restraint in the face of undesirable alternatives. Britain’s involvement in Afghanistan, it should be noted, ended in humiliating defeat.

In the context of AI, this amounts to a “wait and see” approach. Yet there is a difference between passively allowing events to play out and actively monitoring and engaging with an emerging market and its actors. Government engagement in the processes that led to the principles described at the start of this article is an example, as is the encouragement of industry associations to develop standards and research addressing governance possibilities.

Inactivity may also amount to a passing-the-buck exercise. Even if governments choose not to regulate, decisions with legal consequences will be made — most prominently by judges within the common law tradition, whose rulings themselves perform a law-making function. Such decisions are already influencing norms in areas ranging from contracts formed between computer programs, to the use of algorithms in sentencing, to the ownership of intellectual property created by AI. This can be problematic if the law is nudged in an unhelpful direction because of the vagaries of how specific cases make it to court. It is also limited to applying legal principles after the fact — “when something untoward has already happened,” as the British House of Commons Science and Technology Committee warned. Masterly inactivity, then, is not a strategy. Properly used, however, it may buy time to develop one.

Regulatory approaches

Regulation is a contested concept and embraces more than mere “rules.” A leading text distinguishes three modalities of regulation that are useful in considering the options available. First, regulation can mean a specific set of commands, i.e. binding obligations applied by a body devoted to this purpose. Second, it can refer to state influence more broadly, including financial and other incentives. Finally, and broader still, regulation is sometimes used to denote all forms of social or economic suasion, including market forces. The concept of “smart regulation” extends to regulatory functions carried out not only by institutions of the state but also by professional associations, standard-setting bodies, and advocacy groups. In most circumstances, multiple instruments and a range of regulatory actors will produce better outcomes than a narrow focus on a single regulator. And these different modalities of regulation can interact and affect each other. An industry may invest in self-regulation, for example, due to concerns that failure to do so will lead to more coercive regulation at the hands of the state.

Regulation is not limited to restricting or prohibiting undesirable conduct; it may also enable or facilitate positive activities, i.e. “green light” as opposed to “red light” regulation. “Responsive regulation,” for instance, argues in favor of a more cooperative relationship, encouraging regulated parties to internalize the goals of the law rather than engage in mere rule compliance. Other approaches emphasize efficiency. Risk-based and problem-centered regulatory techniques, for example, seek to prioritize the most important issues — though the identification, selection, and prioritization of future risks and current problems involve uncertainty as well as normative and political choices.

The tools available to regulatory bodies may be thought of in three categories also: traditional rulemaking, adjudication by courts or tribunals, and informal guidance, comprising standards, interpretive guides, and public and private communications concerning the regulated activity. Tim Wu once provocatively suggested that regulators of industries undergoing rapid change should consider linking the third with the first two by issuing “threats,” i.e. informally requesting compliance, but under the shadow of possible formalization and enforcement.

Many discussions of AI regulation recount the options available (e.g. a sliding scale, a pyramid, a toolbox, and so on), but the application tends to be either too general or too specific. It is, self-evidently, inappropriate to apply one regulatory approach to all of the activities impacted by AI. Yet it is also impractical to adopt specific laws for every one of those activities. A degree of clarity may, however, be achieved by distinguishing between three classes of problems associated with AI: managing risks, proscribing certain conduct outright, and ensuring that proper processes are followed.

Managing risks

Civil liability provides a means by which to allocate responsibility for risk — particularly in areas that can be examined on a cost-benefit basis. This will cover the majority, perhaps the vast majority, of AI activities in the private sector: from transportation to medical devices, from smart home applications to cognitive enhancements and implants. The issue here is not new rules but how to apply or adapt existing rules to technology that operates at unprecedented speeds, with increased autonomy, and with varying degrees of opacity. Minimum transparency requirements may be needed to ensure that AI systems are identified as such and that harmful conduct can be attributed to the appropriate owner, operator, or manufacturer. Mandatory insurance may be able to distribute those risks more efficiently, but the fundamental principles remain.

For situations in which cost-benefit analysis is appropriate but the potential risks are difficult to determine, regulatory “sandboxes” allow new technologies to be tested in controlled environments. Though some jurisdictions have applied this to embodied technology, such as designated areas for autonomous vehicles, the approach is particularly suited to AI systems that operate online. Originating in computer science, a virtual sandbox lets software run in a manner that limits the potential damage if there are errors or vulnerabilities. Though not amounting to the immunity that Ryan Calo once argued was essential to research into robotics, sandboxes offer “safe spaces” to test-drive innovative products without immediately incurring all the normal regulatory consequences. The technique has been most commonly used with finance technology (or “fintech”), enabling entrepreneurs to test their products with real customers, fewer regulatory constraints, reduced risk of enforcement action, and ongoing guidance from regulators. Pioneered by Britain in 2016, it is credited with giving London a first-mover advantage in fintech and has since been copied in other jurisdictions around the world.
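The computer-science sense of the term can be made concrete with a minimal, Unix-only sketch: an untrusted script is run in a child process with hard caps on CPU time and memory, so that bugs or runaway behavior are contained. The limits, the file name, and the `run_sandboxed` helper are invented for the example; a production sandbox would also restrict filesystem and network access (for instance via containers or seccomp).

```python
import resource
import subprocess
import sys

def run_sandboxed(script_path: str, cpu_seconds: int = 2,
                  memory_bytes: int = 256 * 1024 * 1024):
    """Run an untrusted Python script in a child process with hard resource caps.

    Unix-only: the limits are applied in the child just before it starts executing.
    """
    def apply_limits():
        # Cap CPU time; the kernel terminates the process if it exceeds the hard limit.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        # Cap total address space to bound memory use.
        resource.setrlimit(resource.RLIMIT_AS, (memory_bytes, memory_bytes))

    try:
        result = subprocess.run(
            [sys.executable, "-I", script_path],  # -I: isolated mode
            preexec_fn=apply_limits,
            capture_output=True,
            text=True,
            timeout=cpu_seconds + 5,  # wall-clock backstop in addition to the CPU cap
        )
        return result.returncode, result.stdout, result.stderr
    except subprocess.TimeoutExpired:
        return None, "", "terminated: wall-clock timeout exceeded"

# Example (hypothetical file name):
# code, out, err = run_sandboxed("untrusted_model_update.py")
```

A regulatory sandbox works analogously at the level of institutions: the “resource limits” are typically caps on customer numbers, disclosure requirements, and time-boxed authorizations rather than CPU and memory ceilings, with the regulator standing by to intervene if testing goes wrong.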

Drawing red lines

In some cases, however, lines will need to be drawn as to what is permissible and what is not. These red lines will, in some cases, go beyond merely applying existing rules to AI. Linked with the ethical principle of maintaining human control, an obvious candidate is prohibiting AI from making decisions in the use of lethal force.

Yet even that apparently clear prohibition becomes blurred under closer analysis. If machines are able to make every choice up to that point — scanning and navigating an environment, identifying and selecting a target, proposing an angle and mode of attack — the final decision may be an artificial one. Automation bias makes the default choice significantly more likely to be accepted in such circumstances. That is not an argument against the prohibition, but in favor of ensuring not only that a human is at least “in” or “over” the loop but also that he or she knows that accountability for decisions taken will follow them. This is the link between the principles of human control and accountability, i.e. not that humans will remain in control and machines will be kept accountable, but that humans (and other legal persons) will continue to be accountable for their conduct, even if perpetrated by or through a machine.

The draft AI Act of the European Union also seeks to prohibit certain applications of AI, notably real-time biometric surveillance, technologies that manipulate or exploit individuals, and social scoring.

A discrete area in which new rules will be needed concerns human interaction with AI systems. The real challenge here, however, is not new laws to protect us from them but laws to protect them from us. Anodyne examples include rules adopted in Singapore in early 2017 making it an offense to interfere with autonomous vehicle trials. These are more properly considered an extension of the management of risk associated with such technologies. More problematic will be laws concerning human action perpetrated against machines. At present, for example, it is a crime to torture a chimpanzee but not a computer. As social robots become more prevalent, in industries from eldercare to sex work, it may be necessary to regulate what can be created and how those creations may or may not be used and abused.

In 2014, for example, Ronald Arkin ignited controversy by proposing that child sex robots be used to treat pedophiles in the same way that methadone is used to treat heroin addiction. Though simulated pornography is treated differently across jurisdictions, many have since moved to prohibit the manufacture and use of such devices, whether through creative interpretations of existing laws or through new legislation, such as the proposed Curbing Realistic Exploitative Electronic Pedophilic Robots (CREEPER) Act in the United States.

As lifelike embodied robots become more common, and as they play more active roles in society, it will be necessary to protect them not merely to reduce the risk of malfunction but because the act of harming them may be regarded as a wrong in itself. The closest analogy will, initially, be animal cruelty laws. This is, arguably, another manifestation of the android fallacy: purchasing a lifelike robot and setting it on fire will cause more distress than deleting its operating system. Moving forward, however, the possibility that AI systems might perceive pain and comprehend the prospect of non-existence may change that calculation.

This raises the question of whether red lines should be established for AI research that might bring about self-awareness or the kind of superintelligence sometimes posited as a potential existential threat to humanity. Though many experts have advocated caution about the prospect of artificial general intelligence (AGI), few had called for a halt to research in the area until March 2023, when the Future of Life Institute issued an open letter — signed by Elon Musk among others — calling for a six-month pause on the development of generative AI in the form of large language models “more powerful than GPT-4,” referring to the generative pre-trained transformer (GPT) model developed by OpenAI. The letter received much attention but did not appear likely to result in an actual suspension of research. Tellingly, no government has instituted such a pause — though Italy did temporarily ban ChatGPT due to concerns about its use of personal data, and China announced restrictions on similar technology if it risked upsetting the social and political order.

As Bostrom and others have warned, there is a non-trivial risk that attempts to contain or hobble AGI may in fact bring about the threat they are intended to avert. A precautionary approach might seek to halt development short of such capabilities. Yet AGI seems far enough beyond our present capacities that this would be an excessive response if implemented today. In any case, a ban in one jurisdiction would not bind another. Short of an international treaty, with a body competent to administer it, unilateral prohibition would be ineffective.

Limits on outsourcing

Limiting the decisions that can be outsourced to AI is an area in which new rules are both necessary and possible. One approach is to restrict the use of AI for inherently governmental functions. There have been occasional calls for a ban on government use of algorithms, typically in response to actual or perceived failures in public sector decision-making. These include scandals over automated programs that purported to identify benefit fraud in Australia and the Netherlands, and the Covid-19 university admissions debacle in Britain.

Other jurisdictions have prohibited public agencies from using specific applications, such as facial recognition. San Francisco made headlines by prohibiting its use by police and other agencies in 2019, a move that was replicated in various U.S. cities and the state of California but not at the federal level. As in the case of data protection, Washington has thus far failed to enact broad legislation (despite several attempts). Europe approached the same question initially as an application of the GDPR and then incorporated a ban on real-time remote biometric identification in publicly accessible spaces into the draft AI Act. China, for its part, has far fewer restrictions on facial recognition, though the government has acknowledged the need for greater guidance and there has been at least one (unsuccessful) lawsuit.

Banning algorithms completely is unnecessary, not least because any definition might include arithmetic and other basic functions that exercise no discretion. More importantly, it misidentifies the problem. The issue is not that machines are making decisions but that humans are abdicating responsibility for them. Public sector decisions exercising inherently governmental functions are legitimate not because they are correct, but because they are capable of being held to account through political or other processes.

Such concerns activate the first two of the principles discussed at the start of this article: human control and transparency. A more realistic and generalizable approach is to adopt escalating provisions for both in public sector decision-making. An early example was Canada’s provisions on transparency of administrative decisions. A similar approach was taken in New Zealand’s Algorithm Charter. Signed by two dozen government agencies, the Charter included a matrix that moves from optional to mandatory application based on the probability and the severity of the impact on the “wellbeing of people.” Among other provisions, mandatory application of the Charter requires “human oversight,” comprising a point of contact for public inquiries, an avenue for appeals against a decision, and “clearly explaining the role of humans in decisions informed by algorithms.” It also includes provisions on transparency that go beyond notions of explainability, such as requirements for plain English documentation of algorithms and for publishing information about how data are collected, secured, and stored.
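The Charter’s sliding scale can be pictured as a simple two-dimensional matrix in which the likelihood and severity of an impact jointly determine how demanding the obligations become. The sketch below is purely illustrative: the `Level` scale, thresholds, and tier labels are invented for the example and are not taken from the Charter’s actual text.

```python
from enum import IntEnum

class Level(IntEnum):
    """Illustrative three-point scale for likelihood or severity of impact."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def charter_tier(probability: Level, severity: Level) -> str:
    """Map the likelihood and severity of an algorithm's impact on people's
    wellbeing to an application tier (labels are illustrative, not the Charter's)."""
    score = probability * severity  # ranges from 1 to 9
    if score >= 6:
        return "mandatory"    # e.g. human oversight, an appeal route, plain-English documentation
    if score >= 3:
        return "recommended"
    return "optional"

# Illustrative use: a medium-probability, high-severity impact triggers mandatory application.
assert charter_tier(Level.MEDIUM, Level.HIGH) == "mandatory"
```

The point of such a matrix is less the arithmetic than the commitment it embodies: the higher the stakes for people’s wellbeing, the less optional human oversight and documentation become.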

Provisions of this kind are important but ultimately insufficient. For such public sector decisions, it is not simply a question of striking “the right balance,” as the Charter states, between accessing the power of algorithms and maintaining the trust and confidence of citizens. A more basic commitment would guarantee the means by which to challenge those decisions, not just legally, as in the case of decisions that violate the law, but also politically, by identifying human decision-makers in positions of public trust who can be held to account through democratic processes for their actions or inaction.

One of the most ambitious attempts at regulation of this space — still being debated at the time of writing — is the EU’s draft AI Act. As written, it adopts an expansive definition of AI and applies to all sectors except for the military. Intended to be horizontal legislation, it would provide baseline rules applicable to all use-cases, with stricter obligations being possible in sensitive areas (such as the medical sector). It also classifies AI applications by risk: low-risk applications are not regulated at all, while escalating requirements for assessment prior to release on the market apply to medium- and high-risk applications. As indicated earlier, certain applications would be prohibited completely.

Optimists hope that the AI Act may enjoy the “Brussels effect” and shape global AI policy, in the way that the EU’s GDPR shaped data protection laws in many jurisdictions. Critics have highlighted the extremely broad potential reach of the legislation across a wide range of technologies, as well as the vagueness of some of its key proscriptions, such as whether recommendation algorithms and social media feeds might be considered “manipulative.” Others have pointed to the risks of AGI and the need to regulate it, as well as to the concerns about large language models discussed earlier.

Conclusion

If Asimov’s three laws had avoided or resolved all the ethical dilemmas of machine intelligence, his literary career would have been brief. In fact, the very story in which the laws were introduced focuses on a robot that is paralyzed by a contradiction between the second and third laws, resolved only by a human putting himself in harm’s way, thus invoking the first. (The robot initially tries to comply with a weakly-phrased order that would entail its own certain destruction and ends up stuck in an “equilibrium” — quoting Gilbert and Sullivan, for reasons that are never explained — until the need to save a human life breaks it free.)

A blanket rule not to harm humans is obviously inadequate when forced to choose between the lesser of two evils. Asimov himself later added a zeroth law, which provided that a robot’s highest duty was to humanity as a whole. In one of his last novels, a robot is asked how it could ever determine what was injurious to humanity as a whole. “Precisely, sir,” the robot replies. “In theory, the Zeroth Law was the answer to our problems. In practice, we could never decide.”

The demand for new rules to deal with AI is often overstated. Ryan Abbott, for example, has argued that the guiding principle for regulatory change should be AI legal neutrality, meaning that the law should not discriminate at all between human and AI behavior. The rule is provocatively simple, but its full implications are quickly abandoned: personality is not sought for AI systems, nor are the standards of AI (the “reasonable robots” in the title of Abbott’s book) to be applied to human conduct. Rather, Abbott’s thesis boils down to a case-by-case examination of different areas of AI activity to determine whether specific sectors warrant change or not.

This is a sensible enough approach, but some new rules of general application will be required, primarily to ensure that the first two principles quoted at the start of this article — human control and transparency — can be achieved. Human control requires limits on the kinds of AI systems that can be developed. The precautionary principle offers a means of thinking about such risks, though the clearest decisions can be made in bright-line moral cases like lethal autonomous weapons. More nuanced limitations are required in the public sector, where it is less important to constrain the behavior of AI systems than it is to limit the ability of public officials to outsource decisions to those systems. On the question of transparency, accountability of government officials also requires a limit on the use of opaque processes. Above and beyond that, measures such as impact assessments, audits, and an AI ombudsperson could mitigate some harms and help ensure that others can be traced back and attributed to legal persons capable of being held to account.

As AI becomes more sophisticated and pervasive — and as harms associated with AI systems become more common — demand for restrictions on AI will increase. This article has sought to move debate away from abstract consideration of what rules might constrain or contain AI behavior, to the more practical challenges of why, when, and how regulators may choose to move from ethics to laws. The precise nature of those laws will vary from jurisdiction to jurisdiction. The only safe bet is that there are likely to be more than three.

Simon Chesterman is David Marshall professor and vice provost (educational innovation) at the National University of Singapore, where he is also the founding dean of NUS College. He serves as senior director of AI governance at AI Singapore and editor of the Asian Journal of International Law. Previously, he was dean of NUS Law from 2012 to 2022 and co-president of the Law Schools Global League from 2021 to 2023.

This article was first published in Comparative Studies in February.

caixinglobal.com is the English-language online news portal of Chinese financial and business news media group Caixin. Global Neighbours is authorized to reprint this article.
