An Introduction to AI Unit 9: AI Reproducibility, Safety and Fairness
This unit covers the practical use of AI and the traps we can fall into. We will explore why AI often gets things wrong and can potentially harm society, and look at the quest to build trustworthy AI systems. We will discuss how bias enters algorithms and what causes it in both people and machines. We will explore why safety is a major concern when designing artificial intelligence systems, such as autonomous cars or drones, that could cause harm if they malfunctioned. We will also look at the ways an AI system might fail: through external attacks such as hacking, through internal programming bugs and glitches, or through errors introduced by researchers who do not understand the limitations of their data and systems well enough. Finally, we will explore how companies should organise before implementing AI, how to invest in AI, and careers in AI.
Dr. Ronjon Nag
Dr. Ronjon Nag has an Engineering PhD (Cambridge), an SM in Management Science (MIT) and a B.Sc. in Electrical Engineering (Birmingham). He is president of the R42 Institute, and in 2016 he became a Stanford University Interdisciplinary Distinguished Careers Institute Fellow at the Center for the Study of Language and Information. He works on the Boundaries of Humanity Project, examining intelligence in humans, animals and machines in the age of biotechnology and artificial intelligence, and teaches at Stanford Medical School. He is an active advisor and board member to some 70 AI and biotech companies, and has been awarded the IET Mountbatten Medal for contributions to the mobile phone industry.
• Understand why AI systems sometimes exhibit bias.
• Understand what AI safety is.
• Understand how AI can be attacked.
• Understand how companies should organise for AI.
• Understand how investors should evaluate AI companies.
• Understand what careers exist in AI.