Artificial intelligence never seems to be far from the headlines. Sarah O'Connor of the Financial Times was quoted in a recent House of Lords report as stating that 'if you ever write an article that has robots or artificial intelligence in the headline, you are guaranteed that it will have twice as many people clicking on it'.
The advent of big data and of more advanced, cheaper computational power has made machine learning far more accessible to a wide range of actors. AI is no longer the preserve of science fiction: many forms of machine learning and robotics are already in use today. Within the last few years, governments and major technology companies have started to release AI strategies, and major investments are being made in innovation and in understanding how AI will benefit, and present risks to, society.
AI already offers many opportunities for the better protection of human rights, while at the same time presenting serious risks. While many actors describe these risks narrowly (focusing on the right to privacy), the threats, as well as the opportunities, affect the entire human rights spectrum. For example, AI applications may be used to document human rights violations; implement the Sustainable Development Goals; respond more effectively to the refugee 'crisis'; and manage the impacts of climate change. They potentially offer innovative ways to enhance access to education; enable persons with disabilities and older persons to live more autonomously; advance the right to the highest attainable standard of health; and provide ways to tackle human trafficking and forced labour.
At the same time, the use of big data and AI can present significant risks to human rights, even in contexts where they are deployed with the intention of advancing them. They can introduce new threats and aggravate and amplify existing challenges to human rights, for example by reducing accountability for rights violations through opaque decision-making processes, or by widening inequality. This could be due to factors such as the uneven distribution of benefits, discriminatory impacts and biased datasets. Big data and AI have wide-ranging effects across society and individuals' lives, including collective impacts, many of which are not yet fully understood. They can put the full spectrum of human rights – civil, cultural, economic, political and social – at risk.
Many actors involved in the governance and regulation of AI, as well as key international, regional and national human rights institutions and NGOs, are starting to work on the human rights impact of AI and to develop responses, although this work remains at an embryonic stage.
This module is designed to enable you to learn about the different technologies that fall under the broad and popular heading of 'AI' and to understand and analyse how their use in different contexts affects human rights. The module is offered by the ESRC Human Rights, Big Data and Technology project based at the University of Essex.
We are a major ESRC investment examining the risks and opportunities that big data and technology pose for human rights, and developing effective policy, governance and regulatory responses. We work internationally on these issues and will integrate our ongoing research and practical experience into the module, enabling you to engage with technological and policy developments as they happen in a rapidly changing field.
As international organisations, governments, technology companies and NGOs grapple with the impact of AI on human rights and on the way they work, this module will prepare you for study and employment after the LLM and MA across a range of domains and institutions.