Regulation of Artificial Intelligence is the need of the hour. The release of the Blueprint for an AI Bill of Rights (AIBoR) by the US White House Office of Science and Technology Policy is therefore fitting. The Blueprint adds to global debates on AI governance by seeking to guide the design, use, and deployment of automated systems to protect the American people in the age of AI. Although it lacks legal enforceability, the Blueprint signals a deliberate move towards a US model of AI governance that acknowledges the role played by automated systems in widening existing patterns of discrimination and inequality. Like the proposed AI Act of the European Union, the AIBoR adopts a rights-based framework and sets out guidance to ensure the protection of rights through practices such as data minimization, privacy by design, and seeking consent from data subjects, as well as an express deterrence of continuous surveillance in spaces such as work and housing where its use limits rights, opportunities, and access. By arguing for policy guardrails to limit the perpetuation of such harms, the Blueprint is a major step forward by the US towards rights-based regulation of AI.
So what are the implications of the Blueprint for Africans? If implemented, it could greatly benefit the African continent. The embrace of AI in Africa has not been without its challenges, and the consequences of this technology have been felt through human rights violations, systemic discrimination, and deepened inequalities. This piece discusses the potential implications of the Blueprint for AI governance in Africa and for Africans, highlighting some of the challenges posed by AI and the ways in which the Blueprint could help address them.
Highlights of the AI Bill of Rights
The Blueprint promotes five principles for automated systems. These are that:
- Automated systems should be safe and effective. Diverse communities should be consulted during development, and systems should undergo testing, risk identification, and mitigation before deployment;
- Automated systems should be designed and used equitably. Developers should protect users from algorithmic discrimination by conducting equity assessments, using representative data, and ensuring ongoing disparity testing and mitigation;
- Users should be protected from abusive data practices and should have agency over how their data is used;
- Users should be notified in plain and clear language when an automated system is being used, and told how and why it determined an outcome impacting them; and
- Users should be able to opt out of an automated system and have access to an accessible, equitable, and effective human alternative and fallback.
Effects of the use of automated systems on Africans
Arguably, the African continent has often been neglected in conversations about AI and automated systems. The effect of this neglect has been the importation of these technologies without adequate assurance that they fit local circumstances and conditions. By pointing to the need for open testing and consideration of social impact before deployment, the US Blueprint provides language and principles that others can leverage to push for similarly safe and effective systems in their own contexts. Similar provisions in African jurisdictions could limit Africans' exposure to automated systems that may not advance their interests.
Discriminatory Practices
Africans have experienced systemic discriminatory practices the world over, and these may now be further entrenched by the rollout of automated systems. For example, Africans have faced high visa rejection rates when seeking to travel outside the continent as a result of bias in the decision-making technologies used in various immigration systems. The use of CCTV cameras with embedded facial recognition technologies has also become more prevalent in Africa. The danger of these systems is the indiscriminate collection of footage of people. In Johannesburg, CCTV provides a powerful tool to monitor and segregate historically disadvantaged individuals under the guise of neutral security provision.
The Blueprint serves as a guide to addressing discriminatory AI practices by providing principles for social protection against algorithmic discrimination. Additionally, the Blueprint's opt-out option for automated systems intended for use within sensitive domains, which it describes as ‘those in which activities being conducted can cause material harms, including significant adverse effects on human rights such as autonomy and dignity, as well as civil rights’, enables individuals to resist severe discriminatory practices and protects access to essential opportunities and services through human alternatives, consideration, and fallback.
Violation of human rights
The Blueprint recognizes the impact of AI technologies on the enjoyment of human rights. AI systems typically rely on data acquired in ethically dubious ways, whether through breaches of privacy or routine surveillance that compromises basic freedoms of movement, association, and expression. A notable feature of the AIBoR is its concern not just for individual rights but also for the protection of communities against group harms. The Blueprint notes that AI and other data-driven automated systems harm individuals, but that the greater magnitude of their impacts is often most readily visible at the community level. It broadly defines communities as including neighbourhoods, social network connections, families, and people connected by identity, among others. This provision advances the rights of African communities that have been subjected to surveillance practices by foreign and multinational companies.
Implications of the AI Bill of Rights for AI governance in Africa
The outcome that policy frameworks such as the AIBoR seek to achieve is a balance between mitigating potential harms and encouraging innovation in AI technologies. The Blueprint can inspire the African continent to consider the principles it lays out when developing regulatory responses for responsible AI. However, it is vital that we, as a continent, do not blindly adopt such principles. Rather, we should adapt them to the development priorities and lived experiences of Africans, while keenly noting that Africa is not homogenous and that a collective policy would not be an effective rule-making mechanism. Africa is a continent of different cultures, ethnicities, and religions, which means that each African country, with its own peculiarities, is tasked with policy-making centered on its specific values. We should also avoid the danger of blanket regulation that fails to contextualize the continent’s needs and problems and instead limits African AI policy to advancing policies developed elsewhere that are inapplicable to Africa’s unique circumstances. Instead, sector-specific ethical and responsible AI principles should be enacted, especially in strategic sectors of the African continent such as agriculture, fintech, and healthcare, among others.
Though the Blueprint is a welcome step, its non-binding nature provides little assurance with regard to implementation or sanctions for non-compliance. What will be most interesting to see is the action taken as a result of the Blueprint: instances of recall, redress, and the ability to opt out of these systems in practice.
Research ICT Africa is working with partners at D4d.net to develop a rights-based Global Index on Responsible AI, which will measure commitments to and progress on responsible AI in countries around the world. The Blueprint will be an important instrument in assessing US activities in support of rights-based AI governance and in setting standards that can be considered and reproduced in other parts of the world.