Intel: Unlocking the Power of Local LLMs for RAG-based AI
About this Event
Join us for an immersive, hands-on session where you'll delve into the world of local Large Language Models (LLMs) and discover how to harness their power for RAG (Retrieval-Augmented Generation) based AI applications. This interactive session is designed to equip you with the skills and knowledge to design and implement RAG-based AI systems using local LLMs, eliminating the need for cloud-based services and ensuring data privacy and security. In this workshop, you'll learn how to:
- Understand the fundamentals of local LLMs and RAG-based AI
- Gain hands-on experience with integrating local LLMs with RAG-based AI systems
- Deploy local LLMs: Using popular frameworks and tools (e.g., Hugging Face Transformers, PyTorch), you'll learn how to deploy local LLMs on your own hardware, ensuring data privacy and security.
- Integrate local LLMs with RAG: You'll discover how to integrate local LLMs with RAG-based AI systems, enabling you to retrieve relevant information, augment it with contextual knowledge, and generate human-like text.
- Explore real-world use cases and case studies of local LLM-powered RAG-based AI applications
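The retrieve-augment-generate loop described above can be sketched in a few lines of Python. This is an illustrative toy only: it uses keyword overlap as a stand-in retriever and a placeholder `generate` function, where a real system (as covered in the workshop) would use a vector store and a local model loaded via Hugging Face Transformers or PyTorch. All names and the scoring logic here are assumptions for illustration, not the workshop's actual code.

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by keyword overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(corpus, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def augment(query: str, docs: list[str]) -> str:
    """Build a prompt that grounds the model in the retrieved context."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

def generate(prompt: str) -> str:
    """Placeholder for a local LLM call (e.g., a Transformers text-generation
    pipeline running on your own hardware)."""
    return f"[local model output for a {len(prompt)}-char prompt]"

corpus = [
    "Local LLMs run entirely on your own hardware.",
    "RAG augments prompts with retrieved documents.",
    "Cloud APIs send your data to third-party servers.",
]
docs = retrieve("local hardware LLMs", corpus)
print(generate(augment("Why run LLMs on local hardware?", docs)))
```

Because every step runs locally, the query and the retrieved documents never leave your machine, which is the privacy benefit the session emphasizes.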
Registration is limited to 80 attendees.
Presenter:
- Praveen Kundurthy, AI Solutions Engineer, Intel
Accommodation requests related to a disability should be made to Jared Haddock at jared.haddock@oregonstate.edu or 541-737-2367 at least five (5) business days prior to the event.