E4: #EUAIRegs Focus on standardization to mitigate bias

ATGO AI is a short-form podcast from ForHumanity. This series is about the recent draft EU regulations for AI: ForHumanity fellows, leading international experts on AI, are interviewed by international hosts and share their thoughts on the regulations. The draft #EUAIRegs mandate the classification of high-risk AI and require specific approaches to ensure that such AI systems do not harm people. The regulation proposes a penalty of up to 6% of global revenues or EUR 30 million for violations.

Dr. Shea Brown is a researcher, lecturer, speaker, and consultant in AI Ethics, Machine Learning, and Astrophysics. He earned his Ph.D. in Astrophysics from the University of Minnesota and is the founder and CEO of BABL AI. He is a current ForHumanity fellow focusing on algorithmic auditing and AI governance.

In this episode, Shea discusses his perspectives on the EU AI regulations. He argues that a focus on standardization is crucial to mitigating bias, and that these regulations will push other countries, such as the USA, and the rest of the world to continue developing and evolving their own AI regulations.

Visit us at https://forhumanity.center/ to learn more --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

About the Podcast

ATGO AI is a podcast channel from ForHumanity. This podcast brings multiple series of insights on topics of pressing importance, specifically in the space of Ethics and Accountability of emerging technology. You will hear from game changers in this field who have spearheaded accountability, transparency, governance, and oversight in developing and deploying emerging technology (including Artificial Intelligence).