Zoomtopia is here. Unlock the transformative power of generative AI, helping you connect, collaborate, and Work Happy with AI Companion.
Register nowEmpowering you to increase productivity, improve team effectiveness, and enhance skills.
Learn moreKeep your Zoom app up to date to access the latest features.
Download Center Download the Zoom appDownload hi-res images and animations to elevate your next Zoom meeting.
Browse Backgrounds Zoom Virtual BackgroundsEmpowering you to increase productivity, improve team effectiveness, and enhance skills.
Zoom AI Companion2024-09-17 07:24 PM
Artificial intelligence has entered the mainstream and helped us achieve new heights in efficiency through technology. This makes the advantages of AI seem practically limitless, giving our imaginations plenty of runway to reimagine what’s possible.
While it’s fun to dream up the next great idea, implementing a new AI solution requires a strong commitment to safety and securing the data that drives it. We’re kicking off a new series on the Zoom blog, where we’ll discuss how and why you should implement generative AI safely and what Zoom is doing to create a safe and secure AI environment for our customers.
AI can serve many different purposes, and generative AI gives you tools to generate new content including images, words, sounds, videos, and data through multiple inputs and outputs to AI models. Sometimes referred to as GenAI, generative AI goes beyond what’s humanly possible and uses various AI and machine learning algorithms to deliver instant results when prompted. In return, people can accelerate their work and save valuable time with generative AI tools for tasks such as drafting meeting summaries, sourcing images, or overcoming writer’s block with copywriting assistance.
Generative AI solutions can be invaluable to the end user, freeing up time to focus on more meaningful work. But before you choose which AI tools to implement in your workflows, it’s important to consider a few things.
In addition to these questions, it’s important to research how a vendor handles AI safety and security, and their privacy measures for implementing and using generative AI. We also recommend organizations and their end-users explore how data is collected and used to power the AI tools they want to implement.
To begin with, it’s important to know how AI safety compares to AI security. AI safety and security are fundamental yet distinct aspects of the deployment and protection of AI systems, but specifically:
Our commitment to AI security is also integrated throughout the entire Zoom Secure Development Lifecycle (ZSDLC), encompassing secure supply chain management, model training, secure design, secure development, secure operation, and employee training. We’re incorporating AI considerations into our GRC (Governance, Risk, and Compliance) policies and risk framework, and also including security testing and research conducted by our Security Assurance team.
Our approach to AI safety starts with the models and data we use to build our services. For Zoom-hosted models, we validate and manage our training data, and when selecting third-party vendors, we evaluate their safety procedures to ensure they align with our mission. Our evaluations include testing the models against standard safety metrics to validate common issues that can occur through model training.
Account owners and admins have controls to manage the availability of AI features for their accounts, including user and group level controls that provide options for deployment. These options include, when appropriate, allowing for human review of outputs before being shared more broadly. Additionally, when using in-meeting features within Zoom Workplace (our open collaboration platform with AI Companion), the sparkle icon notifies you that AI is enabled and in use to help provide transparency for customers and participants.
Here are three different ways we approach AI security and safety at Zoom:
At Zoom, we take a federated approach to AI, which means we apply the best large-language models for a specific task, including third-party AI models that customers are already familiar with. Customers can choose which features they use and whether they want to use Zoom-hosted models only, which is available for select features. This gives administrators more control over what’s available within their organization.
In line with our commitment to responsible AI, Zoom does not use any customer audio, video, chat, screen sharing, attachments, or other communications like customer content (such as poll results, whiteboard, and reactions) to train Zoom’s or third-party artificial intelligence models. For more information about how Zoom AI Companion handles customer data, visit our support page.
While this initial discussion of AI safety and security just begins to scratch the surface, in the coming months, we’ll share more details about how we’re maximizing our efforts during the global shift to AI. We believe that AI is an incredible way to improve the way we work and that this is just the beginning. As we continue to release new features for AI Companion and Zoom Workplace, rest assured, AI safety and security are at the forefront of our development process.
If you want to learn more about Zoom’s approach to privacy and security, join us for our upcoming webinar, titled Zoom’s Approach to AI Privacy and Security, on September 26, 2024.
2024-09-18 12:02 PM
Thank you for highlighting this, @rome810! ✨