Dr Emma Carmel illustrates the paradoxical realities of the ethics of AI development, with recommendations for forming effective public policy

The number of ethical guidelines, frameworks and principles on AI just keeps growing and growing. International organisations, national regulators, parliamentary committees, even the corporate sector, are overflowing with documents proclaiming the need for AI to be ethical.

Of course, AI technologies (AITs) cannot themselves be ethical. AITs are, whatever their sophistication, speed and processing power, still just technologies. So, AI ethics, at least for public policy, really refers to the ethics of AI development and use by humans, in specific social, institutional and political situations. And importantly, we do not all agree on what counts as ethical behaviour; our ethics are also shaped by our religious, professional and social backgrounds.

What is driving the adoption of such frameworks and guidelines is the need to make sense of the implications of these rapidly developing technologies, and of their accelerating use, across wildly different applications and settings.

We need guidelines, even though – or rather, because – we do not really yet know what the implications of these technologies are. There is profound uncertainty about what AITs can do, how they work, and what their consequences will be over time and in different contexts. So it is not surprising that governments and international authorities are seeking to provide a set of directions to help legislators, policy officials and decision-makers cut a clear path through the thickets of hype and horror surrounding AIT development and use.

So what does it really mean for AITs to be ethical? And is ‘AI ethics’ enough, when we are addressing the development and use of AI in public policy and government?

First: ‘AI ethics’

A quick review of the plethora of well-meaning documents produced over the last two to three years shows that ‘AI ethics’ is, paradoxically, both impossibly general and much too narrow to be helpful to the policymaker. There are some common but very abstract concerns: transparency, accountability, bias and (sometimes) privacy. The problem with these demands is that there is in fact very little agreement on what constitutes ‘transparency, accountability, bias or privacy’. And what each of these terms means in a different policy area, or to a different policy stakeholder, can look quite different; the terms can even conflict with one another. AI ethics needs to address concrete contexts and situations, including guidance for when ethical principles conflict in practice.

Second: ‘Ethics of AI in government’

Any system to provide ethical standards in the development and use of AI in public policy and services must meet three practical needs. All three must be in place, with political buy-in and a willingness to prioritise and resource them, for ‘ethics of AI in government’ to become a reality:

  • Scientific/technical authority and expertise, to be able to understand and respond to the speed and sophistication of technological developments.
  • Policy knowledge and ethical sophistication, to practically assess, and respond to, the specific challenges of using AITs in public policy and services, including the special legal and ethical responsibilities for just and inclusive government.
  • Political authority and capacity, to understand and oversee the full range of policies and public bodies affected.

Once these are in place, the resources are available to build a workable system of ethical AIT development and use in public policy.

The policy focus should be on four areas:

  • ‘Decision to adopt’ ethical frameworks that set out the minimum requirements AIT systems must meet for ethical standards to be satisfied, both in general and in specific policy domains. These should include the weighting to be given to concerns about objectivity and transparency, and staged public procurement processes that facilitate ethical review and provide exit points should the ethical requirements not be met.
  • Decision / inference model and source code guidelines, including external validation of the quality, compatibility and appropriateness of originating decision-model, source code and/or learning system.
  • Data use, compilation, sharing, cleaning and storage guidelines, including requirements for external validation of the quality, compatibility and appropriateness of training data.
  • Functioning, application and audit guidelines, including procedural requirements for decision reviews and systems audits of AIT recommendations, their use by human decision-makers, and decision outcomes for individuals and groups (a brief illustrative sketch of such an outcomes audit follows this list).
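
The final bullet refers to reviewing decision outcomes for individuals and groups. As a purely illustrative sketch, the following Python snippet shows one way such an outcomes audit could begin: comparing the rate of positive decisions across groups and flagging marked disparities. The record format, the field names and the 0.8 threshold are hypothetical assumptions made for illustration; they are not drawn from any existing framework or from the recommendations above.

```python
# Illustrative sketch only: a minimal group-level outcomes audit.
# The field names ("group", "outcome") and the 0.8 threshold are
# hypothetical assumptions, not part of any official guideline.
from collections import defaultdict

def outcome_rates_by_group(records):
    """Return the share of positive ('granted') decisions for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        if record["outcome"] == "granted":
            positives[record["group"]] += 1
    return {group: positives[group] / totals[group] for group in totals}

def flag_disparities(rates, threshold=0.8):
    """Flag groups whose positive-outcome rate falls below a chosen
    fraction of the best-served group's rate (a 'four-fifths'-style heuristic)."""
    best = max(rates.values())
    return {group: rate for group, rate in rates.items() if rate < threshold * best}

if __name__ == "__main__":
    decisions = [
        {"group": "A", "outcome": "granted"},
        {"group": "A", "outcome": "refused"},
        {"group": "B", "outcome": "granted"},
        {"group": "B", "outcome": "refused"},
        {"group": "B", "outcome": "refused"},
    ]
    rates = outcome_rates_by_group(decisions)
    print(rates)                    # e.g. {'A': 0.5, 'B': 0.333...}
    print(flag_disparities(rates))  # groups falling below the threshold
```

A check of this kind is only a starting point: as the bullet above notes, a full audit would also review how human decision-makers act on AIT recommendations, not just the aggregate outcomes.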

It is vital that any general system of protocols and regulation for the use of AITs in public policy and services has enough flexibility to account for the specific risks and attributes of particular policy areas. AITs are used differently, and present diverse risks of harm, depending on whether they are deployed in defence and security; social welfare and care; health; criminal justice and policing; or immigration. But to be effective, these sector-based protocols still have to be consistent with any general guidelines. And to ensure that these are not just ‘policies on paper’, responsibilities for guideline development and oversight must be unambiguous and well understood by all affected staff, whether technicians, data scientists or decision-makers.

Guidelines and protocols also need enough flexibility to respond robustly, rapidly and appropriately to new technological developments as they arise. Legislating ethics risks producing rigid and ineffective rules, but very general frameworks pose grave risks to the quality, or even legality, of decision-making (for example, by procuring AITs that are systematically biased).

Useful ways of maintaining a coherent approach across sectors include well-publicised and well-resourced digital hubs and ‘digital playbooks’ that offer a central repository for all frameworks, protocols and procedures. These repositories must also name the roles and individuals responsible for developing and overseeing them. And if something goes wrong for citizens, or for the staff who are using AITs, an ombudsman offers a way of identifying problems in process with a lower regulatory burden.

There is no doubt that putting together such a practical system of ‘AI ethics for government’ is challenging. However, it is vital that such a system is not seen as restricting the innovations that AITs might make possible in public policy. Governments have special responsibilities to protect, care for and serve all their citizens. The ethics of AI are central to this responsibility. When made practical in this way, they can be used by governments to shape more innovation: towards better, more useful and appropriate AITs, for the benefit of all citizens.

*Please note: This is a commercial profile

Contributor Profile

Dr Emma Carmel
Associate Professor and Director, MSc in Public Policy
Faculty of Humanities and Social Sciences, University of Bath
Phone: +44 (0)122 538 4685
