Thank you for your continued support and commitment to our newsletter. Your subscriptions, shares, and reads are noticed and appreciated.
As the year swiftly goes by, the topic of AI in Africa remains constant and ever so vibrant. As such, we would like to keep you up to speed with some of these ongoing discussions. In this issue, we feature two pieces: a blog article by Research ICT Africa’s researcher, Hanani Hlomani, on ‘Why South Africa needs a more holistic and contextual approach to AI regulation’, and an article by The MIT Sloan Management Review and Boston Consulting Group (BCG) sharing the perspectives of AI experts on the governance of generative AI, titled ‘Are Responsible AI Programs Ready for Generative AI? Experts Are Doubtful.’
We also highlight some opportunities on AI and the digital environment from our networks.
Finally, we share our current reads: Ensuring Generative AI is Responsible AI: The Case For Allowing Time To Find The Evidence, a blog piece by Matthew Smith with inputs from the AI4D community, and a report prepared by The Centre for Intellectual Property and Information Technology Law (CIPIT) at Strathmore University: State of AI in Africa Report, 2023.
Happy Reading!
The African Observatory on Responsible AI Team
FROM THE RESEARCH DIRECTORY
Why South Africa needs a more holistic and contextual approach to AI regulation
This article, authored by Hanani Hlomani from Research ICT Africa, calls for a contextual, holistic approach to AI regulation for South Africa. For Hlomani, such an approach will promote the responsible development and use of AI, while also protecting people’s rights and interests and levelling the participatory field in the AI revolution. The article argues that this approach takes into account the broader societal and ethical implications of AI technologies and helps to understand and optimise their potential benefits. It suggests focusing on a regulatory framework suited to a wide range of AI applications and use cases, as well as promoting responsible innovation in the AI sector. Read the article here.
Are Responsible AI Programs Ready for Generative AI? Experts Are Doubtful
The MIT Sloan Management Review and Boston Consulting Group (BCG) assembled an international panel of AI experts to understand how responsible AI is being implemented across organizations worldwide. This article shares the panel’s perspectives on whether responsible AI programs can effectively govern generative AI solutions such as ChatGPT and offers recommendations to address the risks. You can read more on this here.
NETWORKING, SOCIAL & OPPORTUNITIES
Global Grand Challenges: Catalyzing Equitable Artificial Intelligence (AI) Use
The Bill & Melinda Gates Foundation and Grand Challenges (GC) Partners (including GC South Africa and GC Brazil, with others to be confirmed) have jointly discussed the need for an equitable and responsible approach to the use of AI, specifically Large Language Models (LLMs), in LMICs. This call for proposals seeks innovative and safe approaches to the use of Large Language Models (ChatGPT-4, or other credible sources with equivalent capability) to build an evidence base across the Bill & Melinda Gates Foundation Divisions of Global Health, Gender Equality, Global Growth and Opportunity (including Global Education), and Global Development contexts in LMICs.
Read more about this opportunity and the application process here.
Call for Submissions: Global AI & Regulation Emerging Scholars Workshop and Paper Award
The AI + Society Initiative at the University of Ottawa is inviting emerging scholars in the field of artificial intelligence (AI) and regulation to participate in a workshop where they will have the opportunity to discuss their draft paper with leading scholars in the field of AI and the law. The workshop aims to foster collaboration and knowledge exchange among scholars in the field of artificial intelligence and regulation, and provide emerging researchers with valuable feedback on draft papers on the development of legal frameworks and regulatory measures for the safe and responsible use of AI. Find out more about this opportunity here.
Call for Consultants towards protecting and promoting the diversity of cultural expressions in the digital environment in Zimbabwe
UNESCO launched a pilot technical assistance programme to further the implementation of the 2005 Convention in the digital environment. The pilot phase assists in elaborating and/or implementing context-based measures related to the cultural and creative industries in the digital environment. As such, the National Arts Council of Zimbabwe intends to recruit:
- A consultant to conduct a baseline study for the project. See this opportunity here.
- A communications consultant to develop communications materials for the project. Read more on this here.
OUR CURRENT READS
Ensuring Generative AI is Responsible AI: The Case For Allowing Time To Find The Evidence
This blog by Matthew Smith from the International Development Research Centre of Canada addresses the emergence of generative AI applications and the critical questions this technology raises for the African continent, with inputs from the AI4D community. Questions include: How can we manage the risks and benefits of AI technologies when their advances are moving faster than our ability to understand their impact? Who is responsible for developing AI and for whom is it being developed? Those who create and refine AI models inevitably embed their values into them, but when technologies have the potential for global impact, whose values should be prioritized?
Read this blog here to find out more about the proposed solutions to these fundamental questions.
State of AI in Africa Report, 2023
We are also reading the State of AI in Africa Report 2023 prepared by the Centre for Intellectual Property and Information Technology Law (CIPIT) at Strathmore University in Nairobi, Kenya. The report highlights the potential of AI technologies to transform various sectors in Africa, such as business operations, healthcare, education, legal and judicial services, and transportation. However, the report also identifies significant gaps in access to knowledge/information, data, education, training, and human resources necessary for AI development and adoption. Read this report here to find out more about the current state of AI in Africa.
Our Social Media Handles
We are now on social media: follow us on our Twitter, Instagram and LinkedIn accounts to stay up to date with our latest publications, events and news.
Have Something to Share?
This year we plan to send out mailers to our network more frequently. Please get in contact with us via email if you have any ideas for collaboration, or any blog posts you would like us to promote through our platform. We are also interested in any events, opportunities, research or research projects you would like us to share within our growing African Network on Responsible AI.