YoVDO

Evaluating LLM-Synthesized Julia Code

Offered By: The Julia Programming Language via YouTube

Tags

Julia Courses, Code Generation Courses, Performance Evaluation Courses, Benchmarking Courses

Course Description

Overview

Explore the performance of state-of-the-art code language models in generating Julia code through this insightful 11-minute conference talk from JuliaCon 2024. Delve into Jun Tian's analysis of how large language models (LLMs) fare when tasked with Julia programming, using benchmarks adapted from popular Python-focused evaluations like HumanEval and MBPP. Gain valuable insights into the capabilities and limitations of AI-generated Julia code, and learn about the ongoing research efforts documented in the HumanEval.jl GitHub repository. Discover the implications of this research for the future of automated code generation in the Julia ecosystem.
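The talk itself is not code-heavy, but a rough idea of what a HumanEval-style evaluation does may help: each benchmark problem pairs a prompt with hidden unit tests, the model's completion is appended to the prompt, the result is evaluated in an isolated namespace, and the problem counts as solved only if every test passes. The sketch below illustrates that loop for a single problem in Julia; the problem, prompt, and helper names are hypothetical and do not reflect the actual HumanEval.jl API.

```julia
# Minimal sketch of how a HumanEval-style benchmark checks one problem,
# assuming the model returns the remainder of a Julia function as a string.
# The problem, prompt, and helper names are illustrative only and are not
# taken from HumanEval.jl.

# Hypothetical problem: the prompt shown to the model ...
prompt = """
# Return the sum of the squares of the elements of `xs`.
function sum_of_squares(xs)
"""

# ... and a completion as it might come back from an LLM.
completion = """
    return sum(x^2 for x in xs)
end
"""

# Hidden unit tests the synthesized function must satisfy.
check(f) = f([1, 2, 3]) == 14 && f(1:4) == 30 && f([0.5]) ≈ 0.25

# Evaluate prompt * completion in a throwaway module so a broken completion
# cannot clobber the surrounding namespace, then run the hidden tests.
function passes(prompt, completion)
    m = Module()
    try
        Base.include_string(m, prompt * completion)
        return check(m.sum_of_squares)
    catch
        return false   # parse errors or runtime failures count as a miss
    end
end

@show passes(prompt, completion)
# Averaging this boolean over all problems (one sample per problem) gives
# pass@1, the metric typically reported for HumanEval-style benchmarks.
```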

Syllabus

Evaluate LLM synthesized Julia code | Tian | JuliaCon 2024


Taught by

The Julia Programming Language
