
Mitigating Generative AI Risks: A Framework for Responsible Deployment

Date

Tue, Oct 15, 2024

Details

As generative AI systems become increasingly integrated into critical business processes, the need for responsible deployment has never been greater. This webinar will explore a comprehensive framework for mitigating the key risks associated with generative AI, including hallucinations, unpredictable outputs, and ethical concerns. Panelists will dive into strategies for ensuring AI systems remain aligned with organizational goals and user expectations while maintaining safety and accuracy. Attendees will gain practical insights into managing the complexities of AI deployment, from development to execution, ensuring innovation proceeds responsibly and securely in today’s rapidly evolving technological landscape.

Speakers

  • David Nadeau, VP of Data Science, Innodata
  • Heather Frase, Lead Scientist, MLCommons
  • Panel Moderator: Rahul Singhal, Chief Product and Marketing Officer, Innodata
  • EDM Council Moderator: Mike Meriton, Co-Founder, EDM Council

Post-event summary

The webinar, “Mitigating Generative AI Risks: A Framework for Responsible Deployment,” hosted by EDM Council and Innodata, addressed the growing need for organizations to deploy generative AI systems responsibly. The conversation was led by industry experts:

  • David Nadeau, VP of Data Science, Innodata
  • Heather Frase, Lead Scientist, MLCommons
  • Panel Moderator: Rahul Singhal, Chief Product and Marketing Officer, Innodata
  • EDM Council Moderator: Mike Meriton, Co-Founder, EDM Council

Speakers emphasized the critical importance of governance frameworks as AI becomes increasingly integrated into business processes. They discussed the risks associated with AI, such as data privacy breaches, hallucinations, and bias. Panelists highlighted the significance of safeguards and frameworks, including the MIT AI Risk Repository, which catalogs key risks and strategies for mitigating them.

David emphasized the need for continuous benchmarking and evaluation to address security risks, noting, “Larger language models are good at evaluating their own outputs, but organizations must ensure they are systematically testing and improving these models to reduce hallucinations and inaccuracies.” Heather highlighted the importance of assurance and operational testing to identify AI vulnerabilities early in the deployment process. She also raised concerns about the impact of cultural homogenization in AI-generated outputs due to training on predominantly Western datasets.
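The benchmark-and-evaluate loop David describes can be illustrated with a minimal sketch. Everything here is hypothetical: the toy `judge_consistency` function is a stand-in for an actual model scoring another model's output, and the threshold and benchmark cases are invented for demonstration only.

```python
# Minimal sketch of a hallucination benchmark loop. In production, the
# "judge" would be a larger model evaluating outputs (as discussed in the
# webinar); here a toy token-overlap check stands in so the sketch runs
# without any model calls.

def judge_consistency(answer: str, source: str) -> float:
    """Toy judge: fraction of answer tokens also present in the source.
    A real deployment would replace this with an LLM-based evaluator."""
    answer_tokens = answer.lower().split()
    if not answer_tokens:
        return 0.0
    source_tokens = set(source.lower().split())
    hits = sum(1 for token in answer_tokens if token in source_tokens)
    return hits / len(answer_tokens)

def flag_hallucinations(cases: list[dict], threshold: float = 0.5) -> list[tuple]:
    """Score every benchmark case; flag answers below the threshold
    as likely hallucinations needing review."""
    flagged = []
    for case in cases:
        score = judge_consistency(case["answer"], case["source"])
        if score < threshold:
            flagged.append((case["id"], round(score, 2)))
    return flagged

# Hypothetical benchmark cases: one grounded answer, one fabricated one.
benchmark = [
    {"id": "q1",
     "source": "the eu ai act mandates registration and transparency",
     "answer": "the act mandates registration and transparency"},
    {"id": "q2",
     "source": "the eu ai act mandates registration and transparency",
     "answer": "penguins wrote the act on the moon"},
]

print(flag_hallucinations(benchmark))  # only q2 is flagged
```

Running such a loop on every model revision, rather than once at launch, is the "continuous" part of the benchmarking David advocates: regressions in accuracy surface as newly flagged cases.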

The webinar further explored regulatory developments, such as the EU AI Act, which mandates registration, transparency, and accountability for AI systems, and the critical role of cloud data management frameworks in securing sensitive data. The discussion concluded with predictions for the future, including the rise of agent-based AI and the increasing integration of AI in consumer-facing applications, underscoring the need for continuous red-teaming, monitoring, and testing to ensure AI’s safe and responsible deployment in various sectors.