Evaluating LLM-Synthesized Julia Code
Offered By: The Julia Programming Language via YouTube
Course Description
Overview
Explore the performance of state-of-the-art code language models in generating Julia code through this insightful 11-minute conference talk from JuliaCon 2024. Delve into Jun Tian's analysis of how large language models (LLMs) fare when tasked with Julia programming, using benchmarks adapted from popular Python-focused evaluations like HumanEval and MBPP. Gain valuable insights into the capabilities and limitations of AI-generated Julia code, and learn about the ongoing research efforts documented in the HumanEval.jl GitHub repository. Discover the implications of this research for the future of automated code generation in the Julia ecosystem.
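For readers unfamiliar with how HumanEval-style benchmarks score generated code, the sketch below shows the basic loop in Julia: a task's prompt is concatenated with the model's completion, the result is evaluated in a throwaway module, and the task's unit tests decide whether that sample passes. This is a minimal illustration under assumed names only; the Task struct, passes function, and toy task here are not the actual HumanEval.jl API.

```julia
# Minimal sketch (illustrative, not HumanEval.jl's real interface) of checking
# one LLM-generated completion against a HumanEval-style task's unit tests.

# One benchmark task: the prompt shown to the model plus hidden unit tests.
struct Task
    prompt::String   # e.g. function header / docstring given to the LLM
    tests::String    # Julia code that throws if the implementation is wrong
end

# Append the completion to the prompt, then run the tests on the result.
function passes(task::Task, completion::String)::Bool
    program = task.prompt * completion * "\n" * task.tests
    try
        # Evaluate in a fresh anonymous module so samples cannot interfere.
        m = Module()
        Base.include_string(m, program)
        return true      # all asserts in the tests passed
    catch
        return false     # parse error, runtime error, or failed assert
    end
end

# Toy usage with a hypothetical model completion.
task = Task(
    "function add_two(x, y)\n",
    "@assert add_two(1, 2) == 3\n@assert add_two(-1, 1) == 0\n",
)
completion = "    return x + y\nend\n"
@show passes(task, completion)   # true for this correct completion
```

In benchmarks of this family, a pass@k score is then estimated from how often at least one of k sampled completions passes its task's tests.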
Syllabus
Evaluate LLM synthesized Julia code | Tian | JuliaCon 2024
Taught by
The Julia Programming Language
Related Courses
Compilers - Stanford University via Coursera
Build a Modern Computer from First Principles: Nand to Tetris Part II (project-centered course) - Hebrew University of Jerusalem via Coursera
Web Services Development in Go: Language Basics - Moscow Institute of Physics and Technology via Coursera
Complete Guide to Protocol Buffers 3 [Java, Golang, Python] - Udemy
Angular tooling: Generating code with schematics - Coursera Project Network via Coursera