A curated collection of Python scripts that demonstrate essential features and workflows of the LangChain library for conversational AI, chaining, structured output, embeddings, retrieval, and tools. Minimal code sketches illustrating the main workflows follow the feature list.
- Basic usage of generative chat models to answer user queries in conversational style, enabling rapid interaction with LLMs.
- Implementation of a simple chatbot that preserves conversation history across exchanges, supporting persistent and context-aware interactions.
- Utilization of prompt templates for creating adaptive prompts, allowing variables and context to be injected into model requests.
- Storage and management of dialogue history using message placeholders in prompt templates, with the ability to save sessions to files for continuity and debugging.
- Usage of both `TypedDict` and Pydantic schemas to enforce structure in LLM outputs. This approach yields reliable, validated data ideal for downstream processing, form filling, and APIs.
- Construction of sequential chains where outputs from one step serve as inputs for the next. Includes:
- Consistent instructions via prompt templates
- Clean output parsing
- Modular chaining for multi-stage workflows
- Parsing of model outputs into Pydantic objects for stricter schema enforcement and better error handling, supporting robust data pipelines.
- Demonstration of ordered pipelines and conditional branching. Chains can adapt their logic based on model output (e.g., branching on detected sentiment), ideal for conversational routing and scenario handling.
- Running multiple prompts or inference steps in parallel, allowing high-throughput, concurrent analysis that benefits batch operations and ensemble tasks.
- Generation of text embeddings and calculation of cosine similarity for documents, forming the basis for semantic search, clustering, and knowledge retrieval.
- End-to-end pipeline featuring document loading, splitting, embedding, efficient vector storage (with FAISS), and retrieval chains. Optimized for building systems that combine external knowledge stores with generative models.
- Integration and demonstration of various file loaders for ingesting text, PDFs, directories, web pages, and CSVs. Supports robust preprocessing and document management.
- Efficient document indexing and similarity search using Chroma, coupled with retriever objects for responsive and relevant query results.
- Wrapping Python functions as structured tools in LangChain, enabling LLM workflows to call external logic and utilities and expanding automation capabilities.
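
The sketches below illustrate these workflows in minimal form. They are illustrative rather than excerpts from the scripts: they assume the `langchain-core`, `langchain-openai`, and `langchain-community` packages, an `OPENAI_API_KEY` in the environment, and use placeholder model names, file paths, and schema names.

A minimal sketch of basic chat model usage plus a chatbot loop that preserves conversation history across turns:

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage

model = ChatOpenAI(model="gpt-4o-mini")  # assumes OPENAI_API_KEY is set

# One-off conversational query
print(model.invoke("Explain embeddings in one sentence.").content)

# Simple chatbot: the full message history is sent on every turn
history = [SystemMessage(content="You are a concise assistant.")]
for user_input in ["Hi, I'm Dana.", "What is my name?"]:
    history.append(HumanMessage(content=user_input))
    reply = model.invoke(history)
    history.append(AIMessage(content=reply.content))
    print(reply.content)
```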
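
A sketch of a prompt template with a message-history placeholder, plus saving the session to a JSON file (the file name is illustrative):

```python
import json
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import HumanMessage, AIMessage

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful {domain} assistant."),
    MessagesPlaceholder(variable_name="history"),  # prior turns are injected here
    ("human", "{question}"),
])
chain = prompt | ChatOpenAI(model="gpt-4o-mini")

history = []
question = "What is a decorator?"
answer = chain.invoke({"domain": "Python", "history": history, "question": question})
history += [HumanMessage(content=question), AIMessage(content=answer.content)]

# Persist the session so it can be reloaded or inspected later
with open("chat_history.json", "w") as f:
    json.dump([{"type": m.type, "content": m.content} for m in history], f, indent=2)
```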
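
A sketch of structured output with both a `TypedDict` and a Pydantic schema via `with_structured_output`; the movie schemas are illustrative:

```python
from typing_extensions import TypedDict, Annotated
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI

class MovieDict(TypedDict):
    """A movie recommendation."""
    title: Annotated[str, ..., "Title of the movie"]
    year: Annotated[int, ..., "Release year"]

class MovieModel(BaseModel):
    """A movie recommendation."""
    title: str = Field(description="Title of the movie")
    year: int = Field(description="Release year")

model = ChatOpenAI(model="gpt-4o-mini")
as_dict = model.with_structured_output(MovieDict).invoke("Recommend a sci-fi movie.")
as_obj = model.with_structured_output(MovieModel).invoke("Recommend a sci-fi movie.")
print(as_dict["title"], as_obj.year)  # plain dict vs. validated Pydantic object
```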
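
A sketch of a two-stage sequential chain in which the first step's parsed output feeds the second prompt:

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

model = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

outline_prompt = ChatPromptTemplate.from_template("Write a three-point outline about {topic}.")
summary_prompt = ChatPromptTemplate.from_template(
    "Summarize this outline in one sentence:\n{outline}"
)

# Stage 1: generate an outline; Stage 2: summarize the generated outline
outline_chain = outline_prompt | model | parser
full_chain = {"outline": outline_chain} | summary_prompt | model | parser

print(full_chain.invoke({"topic": "vector databases"}))
```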
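
A sketch of parsing model output into a validated Pydantic object with `PydanticOutputParser`; the `Person` schema is illustrative:

```python
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import PydanticOutputParser

class Person(BaseModel):
    name: str = Field(description="Full name")
    age: int = Field(description="Age in years")

parser = PydanticOutputParser(pydantic_object=Person)
prompt = ChatPromptTemplate.from_template(
    "Extract the person mentioned in the text.\n{format_instructions}\nText: {text}"
).partial(format_instructions=parser.get_format_instructions())

chain = prompt | ChatOpenAI(model="gpt-4o-mini") | parser
person = chain.invoke({"text": "Ada Lovelace was 36 when she died."})
print(person.name, person.age)  # a validated Person instance
```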
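
A sketch of conditional branching with `RunnableBranch`, routing on detected sentiment; the prompts are illustrative:

```python
from operator import itemgetter
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableBranch

model = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

classify = (
    ChatPromptTemplate.from_template(
        "Classify the sentiment of this review as positive or negative:\n{review}"
    )
    | model
    | parser
)
thank = ChatPromptTemplate.from_template("Write a short thank-you reply to: {review}") | model | parser
apologize = ChatPromptTemplate.from_template("Write a short apologetic reply to: {review}") | model | parser

branch = RunnableBranch(
    (lambda x: "positive" in x["sentiment"].lower(), thank),
    apologize,  # default branch when no condition matches
)
chain = {"sentiment": classify, "review": itemgetter("review")} | branch
print(chain.invoke({"review": "The package arrived late and damaged."}))
```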
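
A sketch of running two prompts concurrently with `RunnableParallel`, plus batched execution of a single chain:

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel

model = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

summary = ChatPromptTemplate.from_template("Summarize in one sentence:\n{text}") | model | parser
keywords = ChatPromptTemplate.from_template("List five keywords for:\n{text}") | model | parser

# Both sub-chains run concurrently over the same input
parallel = RunnableParallel(summary=summary, keywords=keywords)
result = parallel.invoke({"text": "LangChain is a framework for building LLM applications."})
print(result["summary"])
print(result["keywords"])

# .batch() runs one chain concurrently over many inputs
summaries = summary.batch([{"text": t} for t in ["First document.", "Second document."]])
```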
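
A sketch of embedding documents and scoring cosine similarity against a query embedding with NumPy:

```python
import numpy as np
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

docs = ["Cats are small domesticated felines.", "The stock market fell sharply today."]
doc_vectors = np.array(embeddings.embed_documents(docs))
query_vector = np.array(embeddings.embed_query("Tell me about pets"))

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: dot product over the product of norms
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for doc, vec in zip(docs, doc_vectors):
    print(f"{cosine_similarity(query_vector, vec):.3f}  {doc}")
```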
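
A sketch of an end-to-end retrieval pipeline: load, split, embed, index in FAISS, then answer from retrieved context (the input file is illustrative; FAISS requires the `faiss-cpu` package):

```python
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

docs = TextLoader("notes.txt").load()  # illustrative source document
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)

def format_docs(documents):
    return "\n\n".join(d.page_content for d in documents)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)
print(rag_chain.invoke("What are the key takeaways?"))
```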
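
A sketch of several document loaders; the paths and URL are illustrative, and the PDF and web loaders need their optional dependencies (`pypdf`, `beautifulsoup4`):

```python
from langchain_community.document_loaders import (
    TextLoader, PyPDFLoader, CSVLoader, WebBaseLoader, DirectoryLoader,
)

loaders = [
    TextLoader("notes.txt"),
    PyPDFLoader("report.pdf"),
    CSVLoader("data.csv"),
    WebBaseLoader("https://example.com"),
    DirectoryLoader("docs/", glob="**/*.md", loader_cls=TextLoader),
]
for loader in loaders:
    documents = loader.load()
    print(type(loader).__name__, len(documents), "document(s)")
```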
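
A sketch of indexing and similarity search with Chroma, plus wrapping the store as a retriever (requires the `langchain-chroma` package; the sample documents are illustrative):

```python
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings
from langchain_core.documents import Document

docs = [
    Document(page_content="FAISS is a library for efficient similarity search."),
    Document(page_content="Chroma is an open-source embedding database."),
    Document(page_content="Bananas are rich in potassium."),
]
vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings(), persist_directory="chroma_db")

# Direct similarity search against the index
for hit in vectorstore.similarity_search("vector database", k=2):
    print(hit.page_content)

# Or expose the store as a retriever for use inside chains
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})
print(retriever.invoke("vector database"))
```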
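
A sketch of wrapping a Python function as a structured tool and binding it to a chat model; the `get_word_length` tool is illustrative:

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def get_word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)

model = ChatOpenAI(model="gpt-4o-mini").bind_tools([get_word_length])

response = model.invoke("How many letters are in 'extraordinary'?")
for call in response.tool_calls:                 # the model requests a tool invocation
    print(call["name"], call["args"])
    print(get_word_length.invoke(call["args"]))  # run the tool with the model's arguments
```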