Intro to Mistral AI
Offered By: Scrimba
Course Description
Overview
Learn how to use Mistral AI to build intelligent apps, from simple chat completions to advanced use cases like RAG and function calling. Created in collaboration between Mistral AI and Scrimba.
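As a rough orientation, the simplest case the course starts from, a chat completion, boils down to one authenticated request to La Plateforme. The sketch below uses plain `fetch` in TypeScript rather than the official client; the endpoint path, the `mistral-small-latest` model name, and the response shape are assumptions drawn from Mistral's public REST API, not from the course material.

```ts
// Minimal sketch of a chat completion against Mistral's HTTP API.
// Assumes MISTRAL_API_KEY is set and that the endpoint path, model name,
// and response shape below are still current; check Mistral's docs.

const API_URL = "https://api.mistral.ai/v1/chat/completions";

async function chat(userMessage: string): Promise<string> {
  const response = await fetch(API_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.MISTRAL_API_KEY}`,
    },
    body: JSON.stringify({
      model: "mistral-small-latest",
      messages: [{ role: "user", content: userMessage }],
    }),
  });
  const data = await response.json();
  // The completion text lives on the first choice's message.
  return data.choices[0].message.content;
}

chat("What is RAG in one sentence?").then(console.log);
```

The course covers the following topics: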
- La Plateforme
- Chat Completions API
- Streaming
- Mistral 7B
- Mixtral 8x7B
- Mistral's commercial models
- Embeddings
- Vectors
- Setting up a vector database
- Semantic search (see the sketch after this list)
- Chunking with LangChain
- RAG
- AI agents
- Function calling
- Ollama
- Running models locally
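To illustrate the embeddings, vectors, and semantic search topics above, the sketch below embeds a set of text chunks plus a query and ranks the chunks by cosine similarity in memory. The `/v1/embeddings` endpoint, the `mistral-embed` model name, and the response shape are assumptions based on Mistral's public API; the course itself stores the vectors in a vector database rather than in memory.

```ts
// Sketch of semantic search: embed chunks and a query with Mistral's
// embeddings endpoint (assumed path and model name), then rank chunks by
// cosine similarity.

const EMBED_URL = "https://api.mistral.ai/v1/embeddings";

async function embed(texts: string[]): Promise<number[][]> {
  const response = await fetch(EMBED_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.MISTRAL_API_KEY}`,
    },
    body: JSON.stringify({ model: "mistral-embed", input: texts }),
  });
  const data = await response.json();
  return data.data.map((item: { embedding: number[] }) => item.embedding);
}

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

async function search(query: string, chunks: string[], topK = 3): Promise<string[]> {
  const [queryVec, ...chunkVecs] = await embed([query, ...chunks]);
  return chunks
    .map((chunk, i) => ({ chunk, score: cosine(queryVec, chunkVecs[i]) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((result) => result.chunk);
}
```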
Syllabus
- Intro to Mistral AI
- 1. Welcome to the course
- 2. Intro to Mistral by Sophia Yang
- 3. Sign up for La Plateforme
- 4. Mistral's Chat Completion API
- 5. Mistral's Chat Completion API - part 2
- 6. Mistral's models
- 7. What is RAG?
- 8. What are embeddings?
- 9. RAG - Chunking text with LangChain
- 10. RAG - Completing the splitDocument function
- 11. RAG - Creating our very first embedding
- 12. RAG - Challenge: embedding all chunks and preparing them for the vector db
- 13. Set up your vector database
- 14. Vector databases
- 15. RAG - Uploading data to the vector db
- 16. RAG - Query and Create completion
- 17. RAG - Improve the retrieval and complete the generation
- 18. Function calling (see the sketch after this syllabus)
- 19. Function calling - Adding a second function
- 20. Function calling - Unpacking the function and arguments
- 21. Function calling - Making the call
- 22. Function calling - Updating the messages array
- 23. Function calling - Creating the loop
- 24. Running Mistral locally
- 25. Outro & recap - Mistral AI
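The function calling lessons (18-23) build up a loop: define tools, let the model request a call, unpack the function name and arguments, run the function locally, feed the result back into the messages array, and repeat until the model answers in plain text. The sketch below is one way that loop can look; the tool schema and response fields follow Mistral's OpenAI-style tool-calling format and are assumptions, and `getOrderStatus` is a hypothetical helper, not a function from the course.

```ts
// Sketch of a function-calling loop: send the conversation plus tool
// definitions, execute any tool call the model requests, append the result
// as a "tool" message, and repeat until the model replies in plain text.
// Field names are assumptions; getOrderStatus is a hypothetical function.

const CHAT_URL = "https://api.mistral.ai/v1/chat/completions";

const tools = [
  {
    type: "function",
    function: {
      name: "getOrderStatus",
      description: "Look up the status of an order by its id",
      parameters: {
        type: "object",
        properties: { orderId: { type: "string" } },
        required: ["orderId"],
      },
    },
  },
];

// Hypothetical local implementation the model can ask us to run.
const availableFunctions: Record<string, (args: any) => string> = {
  getOrderStatus: ({ orderId }) => JSON.stringify({ orderId, status: "shipped" }),
};

async function agent(userMessage: string): Promise<string> {
  const messages: any[] = [{ role: "user", content: userMessage }];

  // Cap the loop so a confused model cannot request tool calls forever.
  for (let i = 0; i < 5; i++) {
    const response = await fetch(CHAT_URL, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.MISTRAL_API_KEY}`,
      },
      body: JSON.stringify({
        model: "mistral-small-latest",
        messages,
        tools,
        tool_choice: "auto",
      }),
    });
    const data = await response.json();
    const message = data.choices[0].message;
    messages.push(message);

    // No tool calls means the model produced its final answer.
    if (!message.tool_calls?.length) return message.content;

    // Unpack the requested function and arguments, run it, report back.
    for (const call of message.tool_calls) {
      const fn = availableFunctions[call.function.name];
      const args = JSON.parse(call.function.arguments);
      messages.push({
        role: "tool",
        name: call.function.name,
        content: fn(args),
        tool_call_id: call.id,
      });
    }
  }
  return "Stopped after too many tool calls.";
}
```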
Related Courses
- Advanced Retrieval for AI with Chroma (DeepLearning.AI via Coursera)
- Building Applications with Vector Databases (DeepLearning.AI via Coursera)
- Embedding Models: From Architecture to Implementation (DeepLearning.AI via Coursera)
- Large Language Models with Semantic Search (DeepLearning.AI via Coursera)
- Vector Databases: from Embeddings to Applications (DeepLearning.AI via Coursera)