Langchain Workflows

A curated collection of Python scripts demonstrating the essential features and workflows of the LangChain library: conversational AI, chaining, structured output, embeddings, retrieval, and tools.

Workflow Demonstrations

Conversational Models

  • Basic usage of generative chat models to answer user queries in conversational style, enabling rapid interaction with LLMs.
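
A minimal sketch of this pattern, assuming an OpenAI chat model and an OPENAI_API_KEY in the environment (the repository's scripts may use a different provider or model name):

```python
from langchain_openai import ChatOpenAI

# Instantiate a chat model (model name is an assumption; any chat model works).
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)

# Single-turn conversational query; invoke() returns an AIMessage.
response = llm.invoke("Explain what LangChain is in one sentence.")
print(response.content)
```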

Command-Line Chatbot

  • Implementation of a simple chatbot that preserves conversation history across exchanges, supporting persistent and context-aware interactions.
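
A rough sketch of such a loop, keeping the conversation history as a plain list of messages (the system prompt and exit commands are illustrative choices):

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage

llm = ChatOpenAI(model="gpt-4o-mini")

# History grows with each exchange, so every turn is answered
# with the full context of the prior turns.
history = [SystemMessage(content="You are a helpful assistant.")]

while True:
    user_input = input("You: ")
    if user_input.lower() in {"exit", "quit"}:
        break
    history.append(HumanMessage(content=user_input))
    reply = llm.invoke(history)  # the model sees the whole history
    history.append(AIMessage(content=reply.content))
    print(f"AI: {reply.content}")
```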

Dynamic Prompt Generation

  • Utilization of templates for creating adaptive prompts, allowing context-driven and variable-rich responses from language models.
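
A small sketch using a chat prompt template with variables (the tutor scenario is an invented example, not necessarily the repository's):

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# A template with placeholders; values are injected at invocation time.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an expert {domain} tutor. Answer at a {level} level."),
    ("human", "{question}"),
])

llm = ChatOpenAI(model="gpt-4o-mini")
chain = prompt | llm  # LCEL: the filled prompt is piped into the model

result = chain.invoke({
    "domain": "linear algebra",
    "level": "beginner",
    "question": "What is an eigenvector?",
})
print(result.content)
```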

Managing Chat History

  • Storage and management of dialogue history using specialized placeholders, with capabilities for saving sessions to files for continuity and debugging.
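
A sketch of this pattern, assuming a MessagesPlaceholder for injecting history into the prompt and a JSON file for persistence (the file name and questions are arbitrary):

```python
import json
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import HumanMessage, AIMessage

llm = ChatOpenAI(model="gpt-4o-mini")

# MessagesPlaceholder splices the stored history into the prompt each turn.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}"),
])
chain = prompt | llm

history = []
for question in ["Who wrote Hamlet?", "When was he born?"]:
    answer = chain.invoke({"history": history, "input": question})
    history.append(HumanMessage(content=question))
    history.append(AIMessage(content=answer.content))

# Persist the session to a file for continuity or debugging.
with open("chat_history.json", "w") as f:
    json.dump([{"type": m.type, "content": m.content} for m in history], f, indent=2)
```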

Structured Output Parsing

  • Usage of both TypedDict and Pydantic schemas to enforce structure in LLM outputs. This approach yields reliable, validated data ideal for downstream processing, form filling, and APIs.
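
A sketch using a Pydantic schema with `with_structured_output` (a TypedDict class can be passed in the same place; the review schema is illustrative):

```python
from typing import Optional
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI

# Pydantic schema the model output must conform to.
class MovieReview(BaseModel):
    title: str = Field(description="Movie title")
    sentiment: str = Field(description="positive, negative, or neutral")
    rating: Optional[int] = Field(default=None, description="Score out of 10")

llm = ChatOpenAI(model="gpt-4o-mini")

# Returns validated MovieReview instances instead of free-form text.
structured_llm = llm.with_structured_output(MovieReview)
review = structured_llm.invoke("Review: 'Dune was breathtaking, easily a 9/10.'")
print(review.title, review.sentiment, review.rating)
```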

Multi-Step Chaining

  • Construction of sequential chains where outputs from one step serve as inputs for the next (see the sketch after this list). Includes:
    • Consistent instructions via prompt templates
    • Clean output parsing
    • Modular chaining for multi-stage workflows
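
A sketch of a two-step chain where the first step's report feeds the second step's summary prompt (the topic and prompt wording are illustrative):

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

# Step 1: generate a detailed report on a topic.
report_prompt = ChatPromptTemplate.from_template("Write a short report on {topic}.")
# Step 2: summarize the report produced by step 1.
summary_prompt = ChatPromptTemplate.from_template("Summarize in two sentences:\n\n{report}")

# The parsed output of the first step is remapped to the `report` variable
# expected by the second prompt.
chain = (
    report_prompt
    | llm
    | parser
    | (lambda report: {"report": report})
    | summary_prompt
    | llm
    | parser
)

print(chain.invoke({"topic": "vector databases"}))
```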

Pydantic-Based Output Parsing

  • Parsing model outputs into Pydantic objects adds validation, clearer error handling, and strict schema enforcement, supporting robust data pipelines.
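
A sketch of this approach with `PydanticOutputParser` (the Person schema and example text are invented):

```python
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import PydanticOutputParser

class Person(BaseModel):
    name: str = Field(description="Full name")
    age: int = Field(description="Age in years")

parser = PydanticOutputParser(pydantic_object=Person)

# The parser supplies format instructions for the prompt and validates
# the model's reply against the Person schema, raising on mismatch.
prompt = ChatPromptTemplate.from_template(
    "Extract the person described below.\n{format_instructions}\n\nText: {text}"
).partial(format_instructions=parser.get_format_instructions())

chain = prompt | ChatOpenAI(model="gpt-4o-mini") | parser
person = chain.invoke({"text": "Ada Lovelace was 36 when she died in 1852."})
print(person.name, person.age)
```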

Sequential and Conditional Chains

  • Demonstration of ordered pipelines and conditional branching. Chains can adapt their logic based on model output (e.g., branching on detected sentiment), ideal for conversational routing and scenario handling.
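
A sketch of sentiment-based branching with `RunnableBranch` (the classification wording and reply prompts are illustrative):

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableBranch

llm = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

# Step 1: classify the sentiment of the feedback.
classify = (
    ChatPromptTemplate.from_template(
        "Classify the sentiment of this feedback as 'positive' or 'negative'. "
        "Reply with one word.\n\n{feedback}"
    )
    | llm
    | parser
)

# Step 2: branch on the classification result.
positive_chain = ChatPromptTemplate.from_template(
    "Write a thank-you reply to this feedback: {feedback}") | llm | parser
negative_chain = ChatPromptTemplate.from_template(
    "Write an apologetic reply to this feedback: {feedback}") | llm | parser

branch = RunnableBranch(
    (lambda x: "positive" in x["sentiment"].lower(), positive_chain),
    negative_chain,  # default branch
)

# The sentiment is computed first, then merged with the original input.
chain = {"sentiment": classify, "feedback": lambda x: x["feedback"]} | branch
print(chain.invoke({"feedback": "The product stopped working after two days."}))
```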

Parallel Processing

  • Running multiple prompts or inference steps in parallel, allowing high-throughput and concurrent analysis—beneficial for batch operations and ensemble tasks.
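
A sketch running three prompts concurrently on the same input with `RunnableParallel` (the prompts and input text are illustrative):

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel

llm = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

summary = ChatPromptTemplate.from_template("Summarize: {text}") | llm | parser
keywords = ChatPromptTemplate.from_template("List 5 keywords for: {text}") | llm | parser
questions = ChatPromptTemplate.from_template("Write 3 quiz questions about: {text}") | llm | parser

# The three branches run concurrently on the same input dict.
parallel = RunnableParallel(summary=summary, keywords=keywords, questions=questions)

result = parallel.invoke({"text": "Photosynthesis converts light energy into chemical energy."})
print(result["summary"])
print(result["keywords"])
print(result["questions"])
```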

Embedding and Similarity

  • Generation of text embeddings and calculation of cosine similarity for documents, forming the basis for semantic search, clustering, and knowledge retrieval.
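
A sketch generating embeddings and scoring cosine similarity with NumPy (the embedding model name and sample texts are assumptions):

```python
import numpy as np
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

docs = [
    "The cat sat on the mat.",
    "A dog chased the ball in the park.",
    "Quantum computers use qubits instead of bits.",
]
query = "Where did the kitten sit?"

doc_vectors = np.array(embeddings.embed_documents(docs))
query_vector = np.array(embeddings.embed_query(query))

# Cosine similarity: dot product divided by the product of the norms.
def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine_sim(query_vector, v) for v in doc_vectors]
best = int(np.argmax(scores))
print(f"Most similar document: {docs[best]!r} (score={scores[best]:.3f})")
```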

Retrieval-Augmented Generation (RAG)

  • End-to-end pipeline featuring document loading, splitting, embedding, efficient vector storage (with FAISS), and retrieval chains. Optimized for building systems that combine external knowledge stores with generative models.
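
A sketch of the end-to-end pipeline; the file name, chunk sizes, and model names are assumptions, and `faiss-cpu` must be installed:

```python
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain

# 1. Load and split the source document (file name is a placeholder).
docs = TextLoader("knowledge_base.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# 2. Embed the chunks and index them in a FAISS vector store.
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# 3. Build the retrieval chain: retrieved chunks are stuffed into the prompt.
prompt = ChatPromptTemplate.from_template(
    "Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {input}"
)
qa_chain = create_retrieval_chain(
    retriever, create_stuff_documents_chain(ChatOpenAI(model="gpt-4o-mini"), prompt)
)

print(qa_chain.invoke({"input": "What does the document say about pricing?"})["answer"])
```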

Document Loaders

  • Integration and demonstration of various file loaders for ingesting text, PDFs, directories, web pages, and CSVs. Supports robust preprocessing and document management.
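
A sketch of the main loader classes (all file names and the URL are placeholders; the PDF and web loaders need `pypdf` and `beautifulsoup4` installed):

```python
from langchain_community.document_loaders import (
    TextLoader,
    PyPDFLoader,
    DirectoryLoader,
    WebBaseLoader,
    CSVLoader,
)

# Each loader returns a list of Document objects with page_content and metadata.
text_docs = TextLoader("notes.txt").load()
pdf_docs = PyPDFLoader("report.pdf").load()             # one Document per page
csv_docs = CSVLoader("data.csv").load()                  # one Document per row
web_docs = WebBaseLoader("https://example.com").load()
dir_docs = DirectoryLoader("./docs", glob="**/*.txt", loader_cls=TextLoader).load()

print(len(pdf_docs), pdf_docs[0].metadata)
```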

Vector Stores and Retrieval Systems

  • Efficient document indexing and similarity search using Chroma, coupled with retriever objects for responsive and relevant query results.
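
A sketch of indexing and querying with Chroma (recent releases import `Chroma` from the `langchain_chroma` package; older code imports it from `langchain_community.vectorstores`; the sample documents are invented):

```python
from langchain_openai import OpenAIEmbeddings
from langchain_chroma import Chroma
from langchain_core.documents import Document

docs = [
    Document(page_content="LangChain composes LLM calls into chains.", metadata={"source": "intro"}),
    Document(page_content="FAISS and Chroma store dense embeddings for search.", metadata={"source": "stores"}),
    Document(page_content="Retrievers return the most relevant documents for a query.", metadata={"source": "retrieval"}),
]

# Index the documents in a persistent Chroma collection.
vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings(), persist_directory="./chroma_db")

# Wrap the store as a retriever and query it.
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})
for doc in retriever.invoke("How do I store embeddings?"):
    print(doc.metadata["source"], "->", doc.page_content)
```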

Python Tools Integration

  • Wrapping Python functions as structured tools in Langchain, enabling LLM workflows to call external logic and utilities, and expanding automation capabilities.
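
A sketch of wrapping a plain Python function as a structured tool and letting the model call it (the currency function is an invented example):

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def convert_currency(amount: float, rate: float) -> float:
    """Convert an amount of money using the given exchange rate."""
    return round(amount * rate, 2)

# The decorated function is now a structured tool with a name, description,
# and argument schema inferred from its signature and docstring.
print(convert_currency.name, convert_currency.args)

# Bind the tool so the model can decide to call it.
llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([convert_currency])
msg = llm.invoke("Convert 100 USD to EUR at a rate of 0.92.")
for call in msg.tool_calls:
    print(call["name"], call["args"], "->", convert_currency.invoke(call["args"]))
```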
