
Responsible AI: A Global Policy Framework
2021 Update

In May 2019, almost a year to the day after we had commenced our initial collective efforts, ITechLaw published the first edition of Responsible AI: A Global Policy Framework.

We had the great honour to work with a remarkable multi-disciplinary team of 54 technology legal experts, researchers and industry representatives from 16 countries to produce a richly researched policy guide to the responsible deployment of AI systems.

This Update continues to fulfill the promise and potential of ITechLaw as a global association promoting networking and thought-leadership amongst leading technology lawyers worldwide.


As noted in the first edition of Responsible AI, the policy framework that we published in 2019 was necessarily embryonic. Artificial intelligence is still in its infancy, and its potential societal impact is difficult to grasp fully, particularly in a field whose rate of change remains almost exponential. These factors place a great weight of responsibility on all those engaged in the development and deployment of AI systems. It is not surprising, therefore, that policymakers, industry representatives and AI researchers alike are looking for solid legal and ethical guideposts. We are, collectively, participating in an ongoing dialogue.

It is in this context that I am pleased to welcome the publication of the 2021 Update to Responsible AI: A Global Policy Framework. As we undertook to carry on the dialogue, we could not have been better served than by the two editors of this update, John Buyers of Osborne Clarke LLP, UK, and Susan Barty of CMS LLP. Together with a team of 38 specialists from 17 countries, John and Susan have not only produced a substantive update to each of the eight principal chapters of Responsible AI and a comprehensive update to the original Global Policy Framework, but have also developed a practical "Responsible AI Impact Assessment" template that we hope will be of significant value to AI experts and industry leaders.

– Charles Morgan of McCarthy Tétrault LLP
President, International Technology Law Association


It is no exaggeration to say that the world has changed beyond recognition since the publication of the first edition of Responsible AI. We have all been placed in the grip of a global pandemic, which has dramatically changed our working and personal lives, forced distance between us and our loved ones, and transformed innocent gestures of social interaction, such as shaking hands and hugging, into potentially deadly encounters. Where once we might have flown or driven to a meeting or conference, we now use video conferencing.

Isolation has made us even more dependent upon technology: to work, to interact socially, to inform, to educate and to entertain. Social media and predictive technologies have become ever-present in ways we could not even have imagined: driving and manipulating opinions, influencing behaviours, and inevitably powering news cycles. Indeed, as we bring this update to publication, we are witnessing first-hand the impact of these technologies on a very unconventional US Presidential election.

A growing consensus holds that, rather than enriching us as human beings, exposure to too much technology diminishes us. This is perhaps not surprising: forced isolation has driven many to the conclusion that we need real social relationships and interaction to thrive as human beings.

It is in this environment that we bring you our 2021 update to Responsible AI. In a fast-moving world, artificial intelligence moves at light speed. We are now seeing the first nascent global steps towards regulation: the collective governmental realisation of the enormous harm that this technology can wield if left untrammelled. The EU appears to be "first out of the blocks", with a proposal that would subject machine learning to a regulatory environment not too dissimilar to the one Europeans already face for data protection. The EU's compliance-driven thinking is inevitably tempered by the more entrepreneurial and enterprise-friendly approaches advocated by the United States and China. Time will tell which vision prevails.

In the meantime, it has become ever more critical to measure and gauge the impact of artificial intelligence "on the ground", away from the academic debate. We are inevitably "wising up" to the consequences of ill-thought-through development and use – whether that is physical harm, exclusion or erosion of personal liberty. It is in this environment that we launch our Responsible AI Impact Assessment tool (RAIIA for short), which is designed to help measure, in quantifiable and real terms, the impact of a proposed AI solution. We hope you find it a valuable and practical tool.


Edited by John Buyers of Osborne Clarke LLP, UK and Susan Barty of CMS LLP, and written together with a team of 38 specialists from 17 countries.