YoVDO

Language Models Could Learn Semantics - No Matter How You Define It

Offered By: Santa Fe Institute via YouTube

Tags

Semantics Courses Artificial Intelligence Courses Machine Learning Courses Computational Linguistics Courses Language Models Courses

Course Description

Overview

Explore a thought-provoking talk by Tal Linzen on the potential of language models to learn semantics, no matter how it is defined. Delve into the concept of learning meaning from form, examining an idealized language model and entailment semantics. Investigate Gricean speakers and the assumptions needed to prove the central theorem. Review experiments in toy settings and on MNLI, along with limitations such as the overly strong no-redundancy assumption. Evaluate how close practical language models can get to the ideal, and consider whether language models can refer. Gain valuable insights into the intersection of linguistics, semantics, and artificial intelligence in this engaging 26-minute presentation from the Santa Fe Institute.

Syllabus

Intro
What I do
Learning meaning from form
Overview
An ideal language model
Entailment semantics
Gricean speakers
Example
Assumptions we need to prove this theorem
Experiment in toy settings
Experiment: MNLI
Limitation 1: the no-redundancy assumption is too strong
How close can we get to the ideal language model in practice?
Interim discussion
Back to reference
Can language models refer?
Conclusions


Taught by

Santa Fe Institute

Related Courses

中级汉语语法 | Intermediate Chinese Grammar
Peking University via edX
Miracles of Human Language: An Introduction to Linguistics
Leiden University via Coursera
Introduction to Natural Language Processing
University of Michigan via Coursera
Linguaggio, identità di genere e lingua italiana | Language, Gender Identity, and the Italian Language
Ca' Foscari University of Venice via EduOpen
Natural Language Processing
Indian Institute of Technology, Kharagpur via Swayam