Human Rights and Artificial Intelligence

The details
Essex Law School
Colchester Campus
Postgraduate: Level 7
Monday 15 January 2024
Friday 22 March 2024
20 October 2023


Requisites for this module



Key module for


Module description

Artificial intelligence never seems to be far from the headlines. Sarah O'Connor of the Financial Times was quoted in a recent House of Lords report as stating that 'if you ever write an article that has robots or artificial intelligence in the headline, you are guaranteed that it will have twice as many people clicking on it'.

The advent of big data and more advanced and cheaper computational power has meant that machine learning has become much more accessible and available to a wide range of actors. AI is no longer the preserve of science fiction but is already a reality with many forms of machine learning and robotics already being used today. Within the last few years, governments and major technology companies have started to release AI strategies and major investments are being made in innovation and in understanding how AI will benefit and present risks to society.

AI is already offering many opportunities for the better protection of human rights while at the same time presenting serious risks. While many actors describe these risks narrowly (focusing on the right to privacy), the threats (as well as the opportunities) affect the entire human rights spectrum. For example, AI applications may be used to document human rights violations; implement the Sustainable Development Goals; respond more effectively to the refugee 'crisis'; and manage the impacts of climate change. They potentially offer innovative ways to enhance access to education; enable persons with disabilities and older persons to live more autonomously; advance the right to the highest attainable standard of health; and provide ways to tackle human trafficking and forced labour.

At the same time, the use of big data and AI can present significant risks to human rights, even in contexts where they are used with the intention of advancing them. They can introduce new threats and aggravate and amplify existing challenges to human rights, for example by reducing accountability for rights violations due to opaque decision-making processes, or by widening inequality. This could be due to factors such as the uneven distribution of benefits, discriminatory impacts and biased datasets. Big data and AI have wide-ranging effects across society and individuals' lives, including collective impacts, many of which are not yet fully understood. They can put the full spectrum of human rights – civil, cultural, economic, political and social – at risk.

Many actors involved in the governance and regulation of AI as well as key international, regional and national human rights institutions and NGOs are starting to work on the human rights impact of AI and to develop responses, although this remains at an embryonic stage.

This module is designed to enable you to learn about the different technologies that fall under the broad and popular heading of 'AI' and to understand and analyse how their use in different contexts affects human rights. The module is offered by the ESRC Human Rights, Big Data and Technology project based at the University of Essex.

We are a major ESRC investment examining the risks and opportunities for human rights posed by big data and technology and developing effective policy, governance and regulatory responses. We work internationally on these issues and will integrate our ongoing research and practical experience on these issues into the module. This will enable you to engage with technological and policy developments as they happen in a rapidly changing field.

As many international organisations, governments, technology companies and NGOs are starting to grapple with the impact of AI on human rights and the way in which they work, this module will provide preparation for study and employment after the LLM and MA across a range of domains and institutions.

Module aims

1. To enable students to develop a working knowledge of the different types of AI applications;

2. To enable students to critically analyse and assess the human rights impact of the use of AI applications in a range of areas of life;

3. To enable students to develop critical and strategic analysis of the types of regulatory and governance responses available that will effectively protect human rights;

4. To enable students to apply existing human rights and human rights law principles to new technologies, and to work through how these rights can apply in practice to inform decision-making.

Module learning outcomes

At the end of this module, students will be able to:
1.) Demonstrate a strong understanding of the human rights issues arising in relation to the use of new technologies, with a particular focus on artificial intelligence/machine learning;
2.) Demonstrate a good understanding of the logic underpinning artificial intelligence in order to facilitate an understanding of the human rights issues that arise when such technologies are deployed;
3.) Demonstrate an appreciation of how new technology can be used to advance human rights, and the human rights harms arising in this regard;
4.) Demonstrate in-depth knowledge regarding the human rights issues relating to specific applications of artificial intelligence (for instance in law enforcement, the health sector, etc.);
5.) Demonstrate conversance with existing approaches to algorithmic accountability/human rights compliance;
6.) Apply a human rights-based approach to new and emerging technologies;
7.) Apply existing understanding of the international human rights framework, its strengths and the challenges it faces, to this new and emerging area, including the role and responsibilities of businesses within international human rights law and the ongoing challenges with the implementation and operationalisation of human rights.

Module information

This module will be delivered through nine two-hour seminars:

1. Introduction to the module and primer on big data and artificial intelligence
This seminar will introduce the learning objectives associated with the module, discuss the motivation underpinning its development, and address issues relating to the assessment. The main part of the seminar will then discuss the logic and science underpinning big data and artificial intelligence technologies. Particular emphasis will be placed on statistical accuracy, the relationship between correlation and causation, and the distinction between group-level and individually focused decisions.

2. The potential opportunities and harms of big data and artificial intelligence to human rights
This seminar examines the real-world application of big data and artificial intelligence technologies with a particular emphasis on how such technologies can be used both to advance and to undermine human rights protections. A key message underpinning this seminar is that technological advancement is central to the effective promotion and protection of human rights, but that technological developments should serve society rather than constitute an end in themselves.

3. Current regulatory responses in the 'artificial intelligence' sector
This seminar will discuss existing proposals regarding the regulation of artificial intelligence technologies and tech-companies. The discussion is intended to provide students with a contextual understanding of the current scope of debate, and the various advantages and disadvantages of existing proposals, both regulatory and technological.

4. Tech companies' approaches to the governance and 'regulation' of artificial intelligence
This seminar will discuss the proposals presented by tech companies in relation to the deployment and 'regulation' of artificial intelligence and big data technologies. Relevant concepts in this regard include ethics-based approaches, the promulgation of 'community standards', and privacy-based models. Key issues arising in relation to the regulation of tech companies themselves will also be discussed in this seminar, with a particular focus on the difficulties inherent in regulating globalised tech companies by means of national legislation.

5. Case Study: AI and law enforcement
This seminar examines the use of artificial intelligence and machine learning-based technologies in the law enforcement sector. Of particular interest are topics such as hotspot policing, predictive policing vis-à-vis individuals, the use of risk assessment algorithms in a criminal justice context, and the use of live facial recognition technologies.

6. Case Study: AI and health
This seminar examines the use of artificial intelligence in the health sector. Topics to be addressed include the potential for big data and artificial intelligence to identify inequity in the health sector, issues arising in relation to preventive interventions, and the risks that health technologies may exacerbate existing inequalities.

7. Case Study: AI and Assistive Technology
This seminar examines the ways in which AI and assistive technology can enable persons with disabilities and older persons to live more independent and autonomous lives. It also considers the risks that such technology could be used to undermine non-technological approaches to addressing barriers to living in the community, could increase social isolation, and could present a range of risks to human rights including, but extending beyond, privacy.

8. Case Study: AI and the humanitarian sector
This seminar examines how big data and artificial intelligence technologies are used, or can be used, by different actors in order to facilitate humanitarian responses. Of particular interest is the use of artificial intelligence to predict IDP or refugee flows, and to determine particular groups' protection needs.

9. Accountability and remedies
This seminar brings together the previous discussions, focusing on how algorithmic accountability can be achieved – with a particular focus on the development and refinement of a human rights-based approach – and how the right to an effective remedy can be ensured. In this context, the right to an effective remedy includes correcting issues relating to how AI applications and algorithms work, and providing remedy to affected individuals.

Learning and teaching methods

This module will be taught via weekly two-hour seminars. The module teaching team will upload all relevant teaching materials to Moodle, where you will find reading lists, the textbook, and weekly handouts or presentation (PPS) notes. These materials are designed both to help you navigate the material to be covered in the seminars and to equip you to analyse the required readings. You will be expected to have completed the required readings in advance of your seminars.


This module does not appear to have a published bibliography for this year.

Assessment items, weightings and deadlines

Coursework / exam    Description      Deadline    Weighting
Coursework           LW937 - Essay                100%

Exam format definitions

  • Remote, open book: Your exam will take place remotely via an online learning platform. You may refer to any physical or electronic materials during the exam.
  • In-person, open book: Your exam will take place on campus under invigilation. You may refer to any physical materials such as paper study notes or a textbook during the exam. Electronic devices may not be used in the exam.
  • In-person, open book (restricted): The exam will take place on campus under invigilation. You may refer only to specific physical materials such as a named textbook during the exam. Permitted materials will be specified by your department. Electronic devices may not be used in the exam.
  • In-person, closed book: The exam will take place on campus under invigilation. You may not refer to any physical materials or electronic devices during the exam. There may be times when a paper dictionary, for example, may be permitted in an otherwise closed book exam. Any exceptions will be specified by your department.

Your department will provide further guidance before your exams.

Overall assessment

Coursework Exam
100% 0%


Module supervisor and teaching staff
Prof Lorna McGregor, email: lmcgreg@essex.ac.uk.
Law Education Office, pgtlawqueries@essex.ac.uk



External examiner

Dr Titilayo Adebola
University of Aberdeen
Lecturer in Law
Available via Moodle
Of 18 teaching hours, 16 hours (88.9%) are available to students:
0 hours not recorded due to service coverage or fault;
2 hours not recorded due to opt-out by lecturer(s), module, or event type.


Further information
Essex Law School

Disclaimer: The University makes every effort to ensure that this information on its Module Directory is accurate and up-to-date. Exceptionally it can be necessary to make changes, for example to programmes, modules, facilities or fees. Examples of such reasons might include a change of law or regulatory requirements, industrial action, lack of demand, departure of key personnel, change in government policy, or withdrawal/reduction of funding. Changes to modules may for example consist of variations to the content and method of delivery or assessment of modules and other services, to discontinue modules and other services and to merge or combine modules. The University will endeavour to keep such changes to a minimum, and will also keep students informed appropriately by updating our programme specifications and module directory.

The full Procedures, Rules and Regulations of the University governing how it operates are set out in the Charter, Statutes and Ordinances and in the University Regulations, Policy and Procedures.