Generative AI and LLMs on AWS
Offered By: Pragmatic AI Labs via edX
Course Description
Overview
Master deploying generative AI models like GPT on AWS through hands-on labs. Learn architecture selection, cost optimization, monitoring, CI/CD pipelines, and compliance best practices. Gain skills in operationalizing LLMs using Amazon Bedrock, auto-scaling, spot instances, and differential privacy techniques. Ideal for ML engineers, data scientists, and technical leaders.
Course Highlights:
- Choose optimal LLM architectures for your applications
- Optimize cost, performance and scalability with auto-scaling and orchestration
- Monitor LLM metrics and continuously improve model quality
- Build secure CI/CD pipelines to train, deploy and update LLMs
- Ensure regulatory compliance via differential privacy and controlled rollouts
- Real-world, hands-on training for production-ready generative AI
Unlock the power of large language models on AWS. Master operationalization using cloud-native services through this comprehensive, practical training program.
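The differential privacy technique named in the highlights can be illustrated with the Laplace mechanism, a standard way to release a numeric statistic under an epsilon-differential-privacy guarantee. This is a minimal sketch for intuition, not course material; the function names are illustrative.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Release a count with epsilon-differential privacy.

    Noise scale is sensitivity / epsilon: smaller epsilon (stronger
    privacy) means more noise added to the true value.
    """
    return true_count + laplace_noise(sensitivity / epsilon)
```

In practice the noisy statistic (not the raw count) is what leaves the trust boundary, which is how a controlled rollout can expose aggregate model metrics without leaking individual records.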
Syllabus
Week 1: Getting Started with Developing on AWS for AI
- Introduction to AWS Cloud Computing for AI, including the AWS Cloud Adoption Framework
- Setting up AI-focused development environments using AWS services like Cloud9, SageMaker, and Lightsail
- Developing serverless solutions for data, ML, and AI using Amazon Bedrock and Rust
Week 2: AI Pair Programming from CodeWhisperer to Prompt Engineering
- Learning prompt engineering techniques to guide large language models
- Using Amazon CodeWhisperer as an AI pair-programming assistant
- Leveraging the CodeWhisperer CLI to automate tasks and build efficient Bash scripts
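The prompt-engineering pattern Week 2 covers, separating instructions, context, and output constraints, can be sketched as a small helper. The function name and prompt layout here are illustrative, not from any AWS API.

```python
def build_prompt(task, context_snippets, output_format="JSON"):
    """Assemble a structured prompt for a large language model.

    A common prompt-engineering pattern: keep the role instruction,
    the supporting context, the task, and the expected output format
    in clearly separated sections so the model can follow each one.
    """
    context_block = "\n".join(f"- {s}" for s in context_snippets)
    return (
        "You are a careful assistant. Use only the context below.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Task: {task}\n\n"
        f"Respond in {output_format} only."
    )
```

Templates like this make prompts reviewable and testable like any other code artifact, which matters once they feed a CI/CD pipeline.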
Week 3: Amazon Bedrock
- Key capabilities and components of Amazon Bedrock
- Accessing and invoking Bedrock foundation models using the AWS CLI, the Boto3 Python SDK, and the Rust SDK
- Prompt engineering and model evaluation to optimize Bedrock model performance
- Customizing models with fine-tuning and knowledge bases
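Invoking a Bedrock foundation model with the Boto3 SDK, as covered in Week 3, roughly follows this shape. This is a sketch assuming the Anthropic Messages request schema on Bedrock; the model ID is only an example, and the client is injectable so the code can be exercised without AWS credentials.

```python
import json

def build_claude_body(prompt, max_tokens=256):
    """Build a request body using the Anthropic Messages schema on Bedrock."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def invoke(prompt, client=None,
           model_id="anthropic.claude-3-haiku-20240307-v1:0"):  # example ID
    """Invoke a Bedrock foundation model and return its text output.

    `client` is injectable for local testing; in real use it would be
    boto3.client("bedrock-runtime").
    """
    if client is None:
        import boto3  # deferred so the module imports without AWS installed
        client = boto3.client("bedrock-runtime")
    resp = client.invoke_model(modelId=model_id, body=build_claude_body(prompt))
    payload = json.loads(resp["body"].read())
    # The Messages schema returns a list of content blocks; join the text ones.
    return "".join(b["text"] for b in payload["content"] if b["type"] == "text")
```

Separating body construction from the network call keeps the payload unit-testable, and the same `invoke_model` call maps directly onto the AWS CLI's `aws bedrock-runtime invoke-model` command.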
Week 4: Project Challenges
- Applying course concepts to build an end-to-end AI workflow
- Developing Rust functions for Bedrock agents and integrating them into an orchestration flow
- Debugging, benchmarking, and prompt engineering to optimize a deployed AI application on AWS
By the end of this course, you will have gained hands-on experience with cutting-edge AI/ML tools on AWS such as Bedrock, CodeWhisperer, and the Rust SDK, and you will be able to build and deploy efficient, serverless AI applications in production.
Taught by
Noah Gift and Alfredo Deza