What risks does a company face if it deploys a generative AI system (e.g. as a chatbot on its website to answer customer questions, or internally to generate both internal- and external-facing documents) and the output is (1) factually incorrect, (2) defamatory, (3) toxic, or (4) infringing of a third party's IP rights? We will discuss how liability may arise, whom it can fall on (developer, deployer, or user), and some current measures that regulators and industry have adopted to reduce instances of such output. We will also touch on the indemnities that generative AI companies offer to customers using their products, so that you know what to look out for in indemnity clauses if your company is procuring a third-party AI solution.