International human rights law as a framework for algorithmic accountability

Existing approaches to 'algorithmic accountability', such as transparency, provide an important baseline, but are insufficient to address the (potential) harm to human rights caused by the use of algorithms in decision-making. In order to address the impact on human rights effectively, we argue that a framework is needed that sets out a shared understanding and means of assessing harm; is capable of dealing with multiple actors and different forms of responsibility; and applies across the full algorithmic life cycle, from conception to deployment. Although generally overlooked in debates on algorithmic accountability, we suggest in this article that international human rights law already provides this framework. We apply this framework to illustrate the effect it has on the initial choice to employ algorithms in decision-making and on the safeguards required. While our analysis indicates that the use of algorithms may be restricted in some circumstances, we argue that these findings are not 'anti-innovation' but rather appropriate checks and balances to ensure that algorithms contribute to society while safeguarding against risks.
McGregor, L., Murray, D., & Ng, V. (2019). International human rights law as a framework for algorithmic accountability. International and Comparative Law Quarterly, 68(2), 309–343.