12-08, 13:30–14:00 (UTC), Data Track
This presentation explores the challenges faced when developing a new Large Language Model (LLM) app, such as cost, latency, and security, and presents solutions to these obstacles. You will learn how to build your own AI-enabled real-time data pipeline without the complex and fragmented components of a typical LLM stack, such as vector databases, frameworks, or caches. We will leverage the open-source LLM App library in Python to implement real-time in-memory data indexing, reading data directly from any compatible storage, then processing, analyzing, and sending it to output streams.
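To make "real-time in-memory data indexing" concrete, here is a minimal, self-contained sketch in plain Python. It is not the LLM App library's actual API; the bag-of-words "embedding" and the `InMemoryIndex` class are toy stand-ins for a real embedding model and index, illustrating the idea of documents being (re)indexed as they arrive and queried without an external vector database.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class InMemoryIndex:
    """Minimal in-memory index: documents are (re)indexed on arrival."""
    def __init__(self):
        self.docs = {}      # doc_id -> raw text
        self.vectors = {}   # doc_id -> embedding

    def upsert(self, doc_id: str, text: str) -> None:
        """Insert or update a document, re-embedding it immediately."""
        self.docs[doc_id] = text
        self.vectors[doc_id] = embed(text)

    def query(self, question: str, k: int = 1) -> list:
        """Return the k most similar documents to the question."""
        q = embed(question)
        ranked = sorted(self.vectors,
                        key=lambda d: cosine(q, self.vectors[d]),
                        reverse=True)
        return [self.docs[d] for d in ranked[:k]]

index = InMemoryIndex()
index.upsert("a", "pathway builds real-time data pipelines")
index.upsert("b", "streamlit renders the user interface")
print(index.query("how is the user interface built?")[0])
# → streamlit renders the user interface
```

In a production pipeline the same upsert/query loop would be driven by a streaming framework and a real embedding model, but the shape of the data flow is the same.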
ChatGPT and similar AI chatbots have limitations: they cannot address events after September 2021, non-public documents, or past conversations. They struggle with real-time, frequently changing data, cannot handle extensive content, and do not retain data long-term. Although LLMs make prototyping easy, making them production-ready is tough.
In this talk, developers will learn how to build full-stack LLM apps using Pathway's open-source framework in Python, with Streamlit for the UI. This approach opens up the potential to build solid systems for retrieving information, recommending content, or making chatbots that answer user questions based on ever-changing streaming data.
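The key property of such apps is that answers track the underlying stream. The following sketch, again a hypothetical stand-in (it uses no Pathway or Streamlit API, and `on_event` / `answer` are names invented for illustration), shows the same question yielding different answers before and after a stream update:

```python
# Hypothetical sketch: answers follow a changing stream of documents.
docs = {}

def on_event(doc_id: str, text: str) -> None:
    """Apply one stream event: insert or update a document in place."""
    docs[doc_id] = text

def answer(question: str) -> str:
    """Naive retrieval: return the doc sharing the most words with the question."""
    q = set(question.lower().split())
    return max(docs.values(),
               key=lambda t: len(q & set(t.lower().split())),
               default="no data yet")

on_event("status", "the pipeline is starting up")
first = answer("what is the pipeline status?")

# A new event arrives on the stream; the next answer reflects it immediately.
on_event("status", "the pipeline is processing live events")
second = answer("what is the pipeline status?")

print(first)   # → the pipeline is starting up
print(second)  # → the pipeline is processing live events
```

A Streamlit front end would simply call a function like `answer` on each user query; the pipeline behind it keeps the index fresh, so no manual re-deployment is needed when the data changes.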
No previous knowledge expected
Bobur is a developer advocate and speaker specializing in software and data engineering. With over 10 years of experience in IT, he blogs about open-source technologies and the communities around them. Nowadays he contributes to Pathway's LLM App for the future of AI app development.