Just last month (22 January 2026), the Singapore Infocomm Media Development Authority (IMDA) launched the Model AI Governance Framework for Agentic AI, a world-first framework consolidating the principles for safe and reliable agentic AI deployment. In this webinar, we will unpack the Framework and discuss:
- What is agentic AI? What are its key features, and how does it differ from "automation"?
- What are the risks of adopting agentic AI?
- What are recommended steps for organisations adopting agentic AI to manage the risks?
- What happens when things go wrong?
As agentic AI also incorporates large language models and other generative AI systems, we will examine recent legal developments, including whether in-house teams, boards or employees lose privilege when using AI to generate documents (e.g. instructions to external counsel, material for court), following the recent United States v Heppner ruling. We'll also discuss agentic AI against the backdrop of Singapore laws and cases, such as Quoine v B2C2, which concerned cryptocurrency trades executed by algorithms on both sides, where it was argued that a trade concluded at 250 times the going rate should be set aside on grounds of mistake. So what happens if your AI agent makes a mistake? Join us to find out more!
If you would like to explore courses from our Academy on Data Protection, Cyber Regulation & AI Governance, please click here.
For courses on Privacy Engineering, Cybersecurity & AI Technology, please click here.