Global AI Safety Summit highlights Validate AI’s role in driving AI Assurance

The first-ever Global AI Safety Summit will take place tomorrow at historic Bletchley Park, renowned for its World War Two codebreaking achievements.

The Summit will bring together 100 global leaders, tech executives, academics, and AI researchers to discuss the emerging risks and opportunities of advanced AI.

UK Prime Minister Rishi Sunak has highlighted both the risks and the transformational power of AI and its potential to drive economic growth, enhance human capabilities, and address previously insurmountable challenges.

One of the summit’s key goals is the establishment of an international consensus on the nature of AI-related risks.

One organisation already working on AI assurance and speaking at tomorrow’s fringe event aligned to the AI Safety Summit is Validate AI, an independent community interest company formed by representatives from government, Imperial College, The University of Oxford, The OR Society, the Office for Statistics Regulation, UCL, and The University of Nottingham Ningbo China.

Validate AI’s mission is to enhance the validation of AI systems, thereby building trust in AI and promoting a cross-sectoral approach involving academia, government, industry, and charitable sectors.

Shakeel Khan, Co-Founder and CEO of Validate AI, will take part in a panel discussion “Delivering on the UK AI Vision as a Global Leader for AI Assurance: An Expert Community-Driven Approach.” He will discuss Validate AI’s pioneering work in developing an AI Assurance Toolkit.

Shakeel Khan states, “AI safety is a global priority, and the central question is: How do we assure the trustworthiness of AI? Validate AI’s mission is to provide assurance for AI solutions. We are creating practitioner-centric standards for validating AI systems and driving innovation by facilitating collaboration between government, industry, and academia. We are researching and evaluating AI safety best practice, and our goal is also to broaden education and increase understanding of AI’s potential and opportunities, all while minimising risks.”

He will discuss how Validate AI promotes trusted AI through its three tenets: relevance to the problem being addressed, ethical alignment with societal values, and technical soundness to ensure robustness. These tenets are supported by a series of invaluable checks produced by practitioners to ensure the delivery of safe, trusted and responsible AI.

To enhance awareness and understanding of AI, The OR Society is partnering with Validate AI on a series of roadshows on AI safety and assurance at leading UK universities including Imperial College, The University of Edinburgh, and The University of Sheffield.

Seb Hargreaves, Executive Director of The OR Society, states, “The Global AI Safety Summit is a significant event uniting global leaders to address AI assurance and safety. We are pleased that our partner Validate AI will be discussing the pioneering work it is doing to bring new assurance standards to AI. Validate AI’s mission aligns with The OR Society’s commitment to solving complex challenges. Our roadshows are focused on spreading awareness and knowledge about AI safety and assurance that will help drive the safe and responsible use of AI.”

For more information on The OR Society, please visit www.theorsociety.com.
