Autonomous vehicles (AVs) have gone from fiction to reality within a few years, with the emergence of assisted and autonomous driving functionalities. They offer the promise of a better life: they could transform lifestyles and offer Mobility as a Service (MaaS) to all. They could save lives, cut the number of accidents, and reduce traffic congestion and pollution through real-time optimization. They could also reduce bottlenecks in law enforcement. Smart cars can make, and help humans make, more informed, reliable and swifter decisions, freeing people to focus on more value-added activities. Using big data along with machine-learning algorithms, artificial intelligence (AI) developers can now create computer programs and build systems that mimic human driving.
Any vehicle driver or pilot makes thousands of conscious and unconscious decisions with potential impact on themselves, others, and the environment. With AVs, decision-making is placed partially or fully in the hands of AI systems. Besides their potential for more efficient decisions, algorithms fed by big data can also lead to undesirable or dramatic outcomes. AVs and AI entail risks first and foremost to human rights: physical integrity, privacy, safety, security, non-discrimination and self-determination. AVs can indeed kill and hurt, discriminate on the basis of gender or disability, and process massive amounts of information about users’ locations, conversations and habits. Blind, opaque or premature reliance on AI systems poses a societal threat. Society and governments have to make choices to ensure AVs are fair, ethical and transparent. How can the AV promise be achieved while mitigating these risks? Only through adequate accountability requirements and processes.
Accountability, responsibility and liability are three distinct concepts in law, philosophy and ethics. Accountability is a legal and ethical duty to account and be answerable for one’s decisions and actions. It implies explainability as to how and why decisions are taken and how algorithms and code are programmed, in light of given standards, rules, benchmarks, values and stakeholders. Responsibility refers to someone being in charge of performing a task properly by the nature of their position, function or commitment. It implies an obligation, control or authority, and can also call on feeling responsible on moral grounds. Liability is relevant only when things go wrong: it is a legal and court concept determining who shall pay for the adverse effects of certain decisions or actions, such as road accidents. For example, an AV manufacturer may be (i) responsible for building safe AVs by hiring the best engineers, (ii) accountable vis-à-vis users, auditors and society, and (iii) liable towards victims in case of accidents.
What laws say
France: The French term responsabilité encompasses accountability, responsibility and liability. The most recent legislation adopted in France in the AV field focuses on facilitating testing and adapting liability regimes. The Action Plan for Business Growth and Transformation (“PACTE” Law) of 22 May 2019 provides, for example, that the driver is exempted from criminal liability for traffic-regulation infringements occurring while the autonomous system is activated. In addition, the draft general Law on Mobility (“LOM”), adopted on 18 June 2019 by the National Assembly, authorizes the government to amend liability rules to take into account the delegation of decision-making to AV systems. France’s Presidency has set up a Mission (2019), a Strategy (2018) and a High Representative (2017) for the Development of AVs, identifying three priorities: safety, progressivity and acceptability. Success will require accountability benchmarks and processes ahead of liability issues.
EU: The EU brings the benefit of setting common standards for all member states, standards that sometimes also have extraterritorial reach (like the GDPR, the General Data Protection Regulation). The EU identified that fragmented regulatory approaches would hinder the development of AVs, and it has undertaken various initiatives on AVs and AI. Accountability and transparency are ethical principles in themselves under the 2018 AI for Europe Communication. Accountability is a key driver of technological trustworthiness and justice under the 2019 Guidelines for Trustworthy AI, which require that mechanisms be put in place both before and after AI development, including assessment by internal and external auditors. Algorithmic opacity is a challenge for accountability from technical, societal and legal standpoints. To address this challenge, the EU proposed in April 2019 a Governance Framework for Algorithmic Accountability, with policy options to promote governance through awareness, whistle-blowing and oversight.
Questions & reflections to foster and address accountability
The “trolley problem” is dead. Long live the legitimacy question. Should the AV, or trolley, be coded to kill its passengers or pedestrians when saving both is impossible? To date, this question is left for developers and engineers to address. But accountability calls for a more fundamental question: who is legitimate to make that decision in the first place? Should it be left to companies, or should society and lawmakers draw up decision-making fundamentals, especially when human rights and integrity are at stake?
Who’s behind the AV wheel? Who should be accountable? Who exactly makes decisions, and who should be accountable, has no straightforward answer. There is a need to address the multiplicity, complexity, inter-dependency and indivisibility of decisions and actions. The Society of Automotive Engineers identifies six levels of driving automation, from zero (no automated functions, active human driver) to five (driverless cars), from which accountability grids may be derived. The decision-making process is complex, at times unpredictable (deep learning), and involves multiple players: manufacturers and suppliers of cameras, electronic and machine-learning systems and physical parts; the company selling or leasing the AV; CEOs; and the authorities that certified or authorized the vehicle. Players may also be unidentifiable where open-source software is used.
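As a minimal, purely illustrative sketch: the six SAE levels could be encoded in a simple grid pairing each level with the party primarily performing the driving task. The level names follow the SAE taxonomy; the “primary driver” column is our assumption about where operational responsibility might sit at each level, not a legal conclusion.

```python
# Illustrative accountability grid derived from the SAE driving-automation
# levels. The "primary driver" assignments are assumptions for discussion,
# not legal determinations.

SAE_LEVELS = {
    0: ("No automation", "human driver"),
    1: ("Driver assistance", "human driver"),
    2: ("Partial automation", "human driver (supervising the system)"),
    3: ("Conditional automation", "system, with human fallback"),
    4: ("High automation", "system, within its operational domain"),
    5: ("Full automation", "system"),
}

def primary_driver(level: int) -> str:
    """Return who primarily performs the driving task at a given SAE level."""
    if level not in SAE_LEVELS:
        raise ValueError(f"SAE levels range from 0 to 5, got {level}")
    return SAE_LEVELS[level][1]
```

Any real accountability grid would of course need far more dimensions (manufacturer, supplier, certifier, operator), but even this toy mapping shows how responsibility migrates from the human to the system as the automation level rises.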
One novel duty would enable accountability: creating a universal duty on AV input contributors (i) to map their respective involvement and governance and (ii) to map actual and potential risks.
Accountability and risks in light of what? Initiatives, principles, guidelines and laws face similar challenges as AV and AI themselves: multiplicity and complexity. To date, every country, region or international organisation adopts its own sets of principles. How viable is it? Accountability innovation lies first in streamlining and consolidating the requirements, values, rules and standards in light of which accountability shall be assessed. There will be no risk mapping and upfront care for accountability absent viability and intelligibility. For example, non-discrimination, pollution minimization and cyber-security could become common standards in light of which any developer of AV should be accountable, worldwide.
Consolidation and viability of accountability requirements foster effective and sustainable innovation investments, interoperability, level playing fields and better governance of accountability exposure.
Who is best placed to hold AVs accountable? AVs are by definition mobile, and the AI and data flows they rely on cross borders easily and automatically. Which jurisdiction, authority, country or organisation is entitled, competent or best placed to hold AV players accountable? How do we minimize accountability evasion? Oversight, audit and evaluation are crucial, but by whom? Effective accountability calls for enforcement governance and the creation of a specialized, multi-jurisdictional body or a mutual-recognition mechanism.
Once oversight has reached its limits, should accidents or harm occur, liability steps in. Whether the bodies competent to hold players accountable (e.g. an international body) should be the same as those competent to hold them liable (e.g. courts) is an essential question, where integrity, independence, prevention of conflicts of interest and AI acumen will be key success factors.
Accountability is good for everyone
Ethics are not bad for innovation, and neither is accountability. On the developer side, accountability stimulates ecosystem intelligence, better risk management, and sustainable investment and growth. On the user side, it ensures that non-human cars do not become inhuman, and it builds trust and, in turn, adoption. On the government side, it ensures that technological development also means socio-economic development. Accountability promotes a feeling-responsible attitude and value-based thinking, which present the most important opportunity for AI and AVs to be human by design.
Mona Caroline Chammas, lawyer at the Brussels and New York bars, GOVERN&LAW founding partner
Juliette Goyer, lawyer at the Paris bar, GOVERN&LAW partner
A car with two front-seat passengers is about to have a fatal accident that could take a pedestrian’s life. Who should survive: the AV’s passengers or the pedestrian? The Moral Machine test, developed by MIT, is very enlightening in helping us understand how people assign responsibility. Nevertheless, it is not where the real question lies: who is legitimate to make that decision? http://moralmachine.mit.edu/