
Using Open Source LLMs
Leverage community-supported Generative AI models in your technology stack.
Pillar
Technology – Platforms, Tools, Infrastructure & Productivity
Overview
This course introduces open-source large language models (LLMs) such as LLaMA, Mistral, and Falcon, exploring how to integrate and customize these models for enterprise applications. Participants will gain practical skills to deploy, fine-tune, and maintain open-source GenAI solutions that offer flexibility and control.
Learning Objectives
Participants will be able to:
- Understand the architecture and capabilities of popular open-source LLMs
- Deploy open-source models within cloud or on-premise environments
- Fine-tune and customize models for specific business needs
- Manage infrastructure and resource considerations for LLM workloads
- Evaluate performance and scalability of open-source GenAI solutions
Target Audience
- AI engineers and data scientists
- Machine learning operations (MLOps) professionals
- Cloud architects and infrastructure engineers
- Technical leads overseeing AI deployments
Duration
20 hours over 4 days (5 hours per day)
Delivery Format
- Hands-on deployment labs and model customization sessions
- Technical deep dives and architecture discussions
- Collaborative troubleshooting and optimization exercises
Materials Provided
- Deployment guides and scripts for popular open-source LLMs
- Sample datasets and fine-tuning configurations
- Access to test environments for practice
- Certificate of completion
Outcomes
- Confidently deploy and manage open-source LLMs in production
- Customize models to better fit domain-specific tasks
- Optimize infrastructure for cost and performance efficiency
- Maintain and update GenAI models responsibly and securely
Outline / Content
Day 1: Introduction to Open Source LLMs
- Overview of LLaMA, Mistral, Falcon, and others
- Comparing architectures and use cases
- Licensing and community support considerations
Day 2: Deployment and Integration
- Setting up models on cloud and on-premise environments
- Managing dependencies and infrastructure requirements
- Integrating models with business applications via APIs
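The API-integration topic above can be sketched as building a request for an OpenAI-compatible chat-completions endpoint, the interface that servers such as vLLM and llama.cpp's server expose for locally hosted open-source models. This is a minimal sketch: the model name and endpoint URL in the comments are placeholders, not values the course prescribes.

```python
import json

def build_chat_request(model: str, user_prompt: str,
                       system_prompt: str = "You are a helpful assistant.",
                       temperature: float = 0.2,
                       max_tokens: int = 256) -> dict:
    """Build a request body for an OpenAI-compatible chat-completions API.

    Self-hosted serving stacks (e.g. vLLM, llama.cpp's server) accept this
    shape, which lets business applications swap in an open-source model
    without changing their client code.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

if __name__ == "__main__":
    # "mistral-7b-instruct" is a placeholder model name; the payload would
    # be POSTed to e.g. http://localhost:8000/v1/chat/completions.
    body = build_chat_request("mistral-7b-instruct", "Summarize our Q3 report.")
    print(json.dumps(body, indent=2))
```

Keeping the request construction separate from the HTTP call makes it easy to point the same client at a cloud endpoint or an on-premise server.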
Day 3: Fine-Tuning and Customization
- Preparing datasets for domain adaptation
- Techniques for fine-tuning and transfer learning
- Evaluating model performance after customization
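The dataset-preparation step above can be illustrated with a small conversion of raw question/answer pairs into instruction-style JSONL, a format commonly accepted by open-source fine-tuning tooling. The field names (`instruction`/`input`/`output`) follow one widespread convention, not a fixed standard, and the sample pair is invented for illustration.

```python
import json

def to_jsonl(pairs: list[tuple[str, str]]) -> str:
    """Convert raw (question, answer) pairs into instruction-style JSONL.

    Each output line is one JSON record; the field names follow a common
    convention for supervised fine-tuning datasets.
    """
    lines = []
    for question, answer in pairs:
        record = {"instruction": question, "input": "", "output": answer}
        lines.append(json.dumps(record, ensure_ascii=False))
    return "\n".join(lines)

if __name__ == "__main__":
    # Hypothetical domain data for a customer-support adaptation task.
    raw = [("What is our refund window?",
            "Refunds are accepted within 30 days of purchase.")]
    print(to_jsonl(raw))
```

Normalizing data into one record shape early makes it straightforward to hold out a slice for the post-customization evaluation covered later in the day.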
Day 4: Operations and Best Practices
- Monitoring model health and performance
- Scaling models to meet demand
- Security and ethical considerations in open-source GenAI
- Workshop: Deploy and fine-tune an open-source LLM for a sample use case
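The monitoring topic above can be sketched with a latency-percentile calculation: p50 and p95 request latency are typical health metrics for a model-serving endpoint. This uses the nearest-rank percentile method on a sorted sample; the latency values are made up for illustration.

```python
def percentile(samples: list[float], pct: float) -> float:
    """Return the nearest-rank percentile of a sample.

    Nearest-rank: take ceil(pct/100 * n) as a 1-based rank into the
    sorted sample. p50/p95 latency are common health metrics for an
    LLM serving endpoint.
    """
    ordered = sorted(samples)
    # Ceiling division via negation; clamp to at least rank 1.
    rank = max(1, -(-len(ordered) * pct // 100))
    return ordered[int(rank) - 1]

if __name__ == "__main__":
    # Hypothetical per-request latencies (ms) scraped from serving logs.
    latencies_ms = [120, 95, 400, 150, 110, 980, 130, 105, 160, 140]
    print(f"p50 = {percentile(latencies_ms, 50)} ms")
    print(f"p95 = {percentile(latencies_ms, 95)} ms")
```

A large gap between p50 and p95, as in this sample, is a common signal to investigate queueing or batching behavior before scaling out.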
