Robust Distortion-free Watermarks for Language Models

Offered By: Google TechTalks via YouTube

Tags

Language Models, Machine Learning, Cryptography, Generative AI, Statistical Analysis, Text Generation, Autoregressive Models

Course Description

Overview

Explore a Google TechTalk presented by John Thickstun on robust distortion-free watermarks for language models. Delve into a protocol for planting watermarks in text generated by autoregressive language models that remain robust to edits while leaving the distribution of generated text unchanged. Learn how the watermarking process controls the source of randomness with a secret key during the language model's decoding phase. Discover the statistical correlations used to detect the watermark and its provable undetectability to anyone without the key. Examine the two alternative decoders: inverse transform sampling and Gumbel argmax sampling. Gain insights from experimental validation on the OPT-1.3B, LLaMA 7B, and Alpaca 7B language models, demonstrating statistical power and robustness against paraphrasing attacks. Learn about the speaker's background as a postdoctoral researcher at Stanford University, his previous work, and his recognition in the field of generative models and controllability.
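To make the idea concrete, the sketch below is a minimal Python illustration of keyed Gumbel argmax decoding and a simple detection statistic, not the authors' implementation. The function names and the `next_token_log_probs` callable (standing in for the language model) are placeholders, and the sketch omits the edit-robust alignment step that the full protocol uses to survive paraphrasing and other edits.

```python
import numpy as np

def keyed_uniforms(key: int, length: int, vocab_size: int) -> np.ndarray:
    """Derive a fixed sequence of uniform random vectors from the secret key.
    The detector regenerates the same sequence from the same key."""
    rng = np.random.default_rng(key)
    return rng.random((length, vocab_size))

def gumbel_argmax_decode(next_token_log_probs, key: int, max_len: int, vocab_size: int) -> list:
    """Watermarked decoding via the Gumbel-max trick: at step i, emit
    argmax(log p + g_i), where g_i is Gumbel noise derived from the keyed
    uniforms. Each token is still an exact sample from the model's
    next-token distribution, so the text distribution is unchanged."""
    u = keyed_uniforms(key, max_len, vocab_size)
    tokens: list[int] = []
    for i in range(max_len):
        log_p = next_token_log_probs(tokens)   # model call (placeholder)
        gumbel = -np.log(-np.log(u[i]))        # Gumbel(0,1) noise from keyed uniforms
        tokens.append(int(np.argmax(log_p + gumbel)))
    return tokens

def detection_score(tokens: list, key: int, vocab_size: int) -> float:
    """Detection statistic: with the key, the uniform aligned to each emitted
    token is biased toward 1, so -log(1 - u) is unusually large; without the
    key the score looks no different from unrelated text."""
    u = keyed_uniforms(key, len(tokens), vocab_size)
    return float(sum(-np.log(1.0 - u[i][t]) for i, t in enumerate(tokens)))
```

Because the Gumbel-max trick produces an exact sample from the model's distribution, decoding with keyed noise leaves the output distribution untouched, which is what "distortion-free" refers to; detection then only requires correlating the generated tokens against the key-derived randomness.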

Syllabus

Robust Distortion-free Watermarks for Language Models


Taught by

Google TechTalks

Related Courses

Building and Managing Superior Skills
State University of New York via Coursera
ChatGPT et IA : mode d'emploi pour managers et RH
CNAM via France Université Numérique
Digital Skills: Artificial Intelligence
Accenture via FutureLearn
AI Foundations for Everyone
IBM via Coursera
Design a Feminist Chatbot
Institute of Coding via FutureLearn