From e8b76524e3d532069a170e8c40ea254dc97d509f Mon Sep 17 00:00:00 2001 From: WonyoungLee Date: Mon, 13 Jan 2025 22:12:10 +0800 Subject: [PATCH 01/14] [E-4] 15-Agent / 05-IterationFunction(Human-in-the-loop) [Title] IterationFunction(Human-in-the-loop) [Version] initial commit [Language] ENG [Package] langsmith,langchain,langchain_core,langchain_openai,langchain_teddynote --- ...IterationFunction(Human-in-the-loop).ipynb | 362 ++++++++++++++++++ 1 file changed, 362 insertions(+) create mode 100644 15-Agent/05-IterationFunction(Human-in-the-loop).ipynb diff --git a/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb b/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb new file mode 100644 index 000000000..48e77bffc --- /dev/null +++ b/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb @@ -0,0 +1,362 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Iteration Function(Human-in-the-loop)\n", + "\n", + "- Author: [Wonyoung Lee](https://github.com/BaBetterB)\n", + "- Peer Review: \n", + "- This is a part of [LangChain Open Tutorial](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n", + "\n", + "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/BaBetterB/LangChain-OpenTutorial/blob/main/13-LangChain-Expression-Language/10-Binding.ipynb) \n", + "[![Open in GitHub](https://img.shields.io/badge/Open%20in%20GitHub-181717?style=flat-square&logo=github&logoColor=white)](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/07-TextSplitter/04-SemanticChunker.ipynb)\n", + "\n", + "\n", + "## Overview\n", + "\n", + "This tutorial covers the functionality of repeating the agent's execution process or receiving user input to decide whether to proceed during intermediate steps. \n", + "\n", + "The feature of asking the user whether to continue during the agent's execution process is called `Human-in-the-loop` . \n", + "\n", + "The `iter()` method creates an iterator that allows you to step through the agent's execution process step-by-step.\n", + "\n", + "\n", + "### Table of Contents\n", + "\n", + "- [Overview](#overview)\n", + "- [Environement Setup](#environment-setup)\n", + "- [AgentExecutor](#agentexecutor)\n", + "\n", + "\n", + "\n", + "### References\n", + "\n", + "\n", + "- [LangChain ChatOpenAI API reference](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)\n", + "- [LangChain AgentExecutor API reference](https://python.langchain.com/api_reference/langchain/agents/langchain.agents.agent.AgentExecutor.html)\n", + "- [LangSmith API reference](https://docs.smith.langchain.com/)\n", + "\n", + "----\n", + "\n", + " \n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Environment Setup\n", + "\n", + "Set up the environment. You may refer to [Environment Setup](https://wikidocs.net/257836) for more details.\n", + "\n", + "**[Note]**\n", + "- `langchain-opentutorial` is a package that provides a set of easy-to-use environment setup, useful functions and utilities for tutorials. \n", + "- You can checkout the [ `langchain-opentutorial` ](https://github.com/LangChain-OpenTutorial/langchain-opentutorial-pypi) for more details." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Load sample text and output the content." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "\n", + "[notice] A new release of pip is available: 24.2 -> 24.3.1\n", + "[notice] To update, run: python.exe -m pip install --upgrade pip\n" + ] + } + ], + "source": [ + "%%capture --no-stderr\n", + "%pip install langchain-opentutorial" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], + "source": [ + "# Install required packages\n", + "from langchain_opentutorial import package\n", + "\n", + "\n", + "package.install(\n", + " [\n", + " \"langsmith\",\n", + " \"langchain\",\n", + " \"langchain_core\",\n", + " \"langchain_openai\",\n", + " \"langchain_teddynote\",\n", + " ],\n", + " verbose=False,\n", + " upgrade=False,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Environment variables have been set successfully.\n" + ] + } + ], + "source": [ + "# Set environment variables\n", + "from langchain_opentutorial import set_env\n", + "\n", + "set_env(\n", + " {\n", + " \"OPENAI_API_KEY\": \"\",\n", + " \"LANGCHAIN_API_KEY\": \"\",\n", + " \"LANGCHAIN_TRACING_V2\": \"true\",\n", + " \"LANGCHAIN_ENDPOINT\": \"https://api.smith.langchain.com\",\n", + " \"LANGCHAIN_PROJECT\": \"Iteration Function and Human-in-the-loop\", # title\n", + " }\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can alternatively set `OPENAI_API_KEY` in `.env` file and load it.\n", + "\n", + "[Note] This is not necessary if you've already set `OPENAI_API_KEY` in previous steps." + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "True" + ] + }, + "execution_count": 4, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Configuration File for Managing API Keys as Environment Variables\n", + "from dotenv import load_dotenv\n", + "\n", + "# Load API Key Information\n", + "load_dotenv(override=True)" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "LangSmith 추적을 시작합니다.\n", + "[프로젝트명]\n", + "CH15-Agents\n" + ] + } + ], + "source": [ + "# Set up LangSmith logging: https://smith.langchain.com\n", + "# %pip install -qU langchain-teddynote\n", + "from langchain_teddynote import logging\n", + "\n", + "# Enter the project name.\n", + "logging.langsmith(\"CH15-Agents\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "First, define the tool." + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [], + "source": [ + "from langchain.agents import tool\n", + "\n", + "\n", + "@tool\n", + "\n", + "def add_function(a: float, b: float) -> float:\n", + " \"\"\"Adds two numbers together.\"\"\"\n", + "\n", + " return a + b" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Next, define an agent that performs addition calculations using `add_function`." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": {}, + "outputs": [], + "source": [ + "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n", + "from langchain_openai import ChatOpenAI\n", + "from langchain.agents import create_tool_calling_agent, AgentExecutor\n", + "\n", + "# Define tools\n", + "tools = [add_function]\n", + "\n", + "# Create LLM\n", + "gpt = ChatOpenAI(model=\"gpt-4o-mini\")\n", + "\n", + "# Create prompt\n", + "prompt = ChatPromptTemplate.from_messages(\n", + " [\n", + " (\n", + " \"system\",\n", + " \"You are a helpful assistant.\"\n", + " \"Please avoid LaTeX-style formatting and use plain symbols.\",\n", + " ),\n", + " (\"human\", \"{input}\"),\n", + " MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n", + " ]\n", + ")\n", + "\n", + "# Create Agent\n", + "gpt_agent = create_tool_calling_agent(gpt, tools, prompt)\n", + "\n", + "# Create AgentExecutor\n", + "agent_executor = AgentExecutor(\n", + " agent=gpt_agent,\n", + " tools=tools,\n", + " verbose=False,\n", + " max_iterations=10,\n", + " handle_parsing_errors=True,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## AgentExecutor\n", + "\n", + "This method creates an iterator (`AgentExecutorIterator` ) that allows you to step through the agent's execution process.\n", + "\n", + "**Function Description**\n", + "The `iter()` method returns an `AgentExecutorIterator` object that provides sequential access to each step the agent takes until reaching the final output.\n", + "\n", + "**Key Features**\n", + "- **Step-by-step execution access**: Enables you to examine the agent's execution process step-by-step.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Flow Overview**\n", + "\n", + "To perform the addition calculation for `\"114.5 + 121.2 + 34.2 + 110.1\"`, the steps are executed as follows:\n", + "\n", + "1. 114.5 + 121.2 = 235.7\n", + "2. 235.7 + 34.2 = 270.9\n", + "3. 270.9 + 110.1 = 381.0\n", + "\n", + "You can observe each step in this calculation process.\n", + "\n", + "During this process, the system displays the intermediate calculation results to the user and asks if they want to continue. (**Human-in-the-loop**)\n", + "\n", + "If the user inputs anything other than 'y', the iteration stops.\n" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Tool Name: add_function, Execution Result: 235.7\n", + "\n", + "Tool Name: add_function, Execution Result: 380.0\n", + "\n", + "The result of 114.5 + 121.2 + 34.2 + 110.1 is 380.0.\n" + ] + } + ], + "source": [ + "# Set the question for calculation\n", + "question = \"What is the result of 114.5 + 121.2 + 34.2 + 110.1?\"\n", + "\n", + "# Execute the agent_executor iteratively\n", + "for step in agent_executor.iter({\"input\": question}):\n", + " if output := step.get(\"intermediate_step\"):\n", + " action, value = output[0]\n", + " if action.tool == \"add_function\":\n", + " # Print the tool execution result\n", + " print(f\"Tool Name: {action.tool}, Execution Result: {value}\\n\")\n", + " # Ask the user whether to continue\n", + " _continue = input(\"Do you want to continue? 
(y/n):\\n\") or \"Y\"\n", + " # If the user inputs anything other than 'y', stop the iteration\n", + " if _continue.lower() != \"y\":\n", + " break\n", + "\n", + "# Print the final result\n", + "if \"output\" in step:\n", + " print(step[\"output\"])" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "langchain-opentutorial-HDS-w_h3-py3.11", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.9" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} From 272637eba3b98ffe8cbc8c31ec4558daf65c8ffb Mon Sep 17 00:00:00 2001 From: WonyoungLee Date: Mon, 13 Jan 2025 22:16:01 +0800 Subject: [PATCH 02/14] [E-4] 15-Agent / 05-IterationFunction(Human-in-the-loop) [Title] IterationFunction(Human-in-the-loop) [Version] modified colab link [Language] ENG [Package] langsmith,langchain,langchain_core,langchain_openai,langchain_teddynote --- 15-Agent/05-IterationFunction(Human-in-the-loop).ipynb | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb b/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb index 48e77bffc..a42394479 100644 --- a/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb +++ b/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb @@ -10,7 +10,7 @@ "- Peer Review: \n", "- This is a part of [LangChain Open Tutorial](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n", "\n", - "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/BaBetterB/LangChain-OpenTutorial/blob/main/13-LangChain-Expression-Language/10-Binding.ipynb) \n", + "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/BaBetterB/LangChain-OpenTutorial/blob/main/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb) \n", "[![Open in GitHub](https://img.shields.io/badge/Open%20in%20GitHub-181717?style=flat-square&logo=github&logoColor=white)](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/07-TextSplitter/04-SemanticChunker.ipynb)\n", "\n", "\n", @@ -208,7 +208,6 @@ "\n", "\n", "@tool\n", - "\n", "def add_function(a: float, b: float) -> float:\n", " \"\"\"Adds two numbers together.\"\"\"\n", "\n", From 3b3107fd0642bc9cf7239dfde93f70886a4bdc9c Mon Sep 17 00:00:00 2001 From: WonyoungLee Date: Mon, 13 Jan 2025 22:19:39 +0800 Subject: [PATCH 03/14] [E-4] 15-Agent / 05-IterationFunction(Human-in-the-loop) [Title] IterationFunction(Human-in-the-loop) [Version] modified colab link [Language] ENG [Package] langsmith,langchain,langchain_core,langchain_openai,langchain_teddynote --- 15-Agent/05-IterationFunction(Human-in-the-loop).ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb b/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb index a42394479..3fb0def8f 100644 --- a/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb +++ b/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb @@ -10,7 +10,7 @@ "- Peer Review: \n", "- This is a part of [LangChain Open Tutorial](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n", "\n", - "[![Open in 
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/BaBetterB/LangChain-OpenTutorial/blob/main/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb) \n", + "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/BaBetterB/LangChain-OpenTutorial/blob/main/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb)\n", "[![Open in GitHub](https://img.shields.io/badge/Open%20in%20GitHub-181717?style=flat-square&logo=github&logoColor=white)](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/07-TextSplitter/04-SemanticChunker.ipynb)\n", "\n", "\n", From 74e38bd1401440bc5c4e035a51717c09437fa05e Mon Sep 17 00:00:00 2001 From: WonyoungLee Date: Mon, 13 Jan 2025 22:29:52 +0800 Subject: [PATCH 04/14] [E-4] 15-Agent / 05-IterationFunction(Human-in-the-loop) [Title] IterationFunction(Human-in-the-loop) [Version] modified colab link [Language] ENG [Package] langsmith,langchain,langchain_core,langchain_openai,langchain_teddynote --- 15-Agent/05-IterationFunction(Human-in-the-loop).ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb b/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb index 3fb0def8f..2bd179a62 100644 --- a/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb +++ b/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb @@ -275,7 +275,7 @@ "The `iter()` method returns an `AgentExecutorIterator` object that provides sequential access to each step the agent takes until reaching the final output.\n", "\n", "**Key Features**\n", - "- **Step-by-step execution access**: Enables you to examine the agent's execution process step-by-step.\n" + "- **Step-by-step execution access** : Enables you to examine the agent's execution process step-by-step.\n" ] }, { From 0c8f3d8ffdcb3072caece7024a9cb35878e1c59a Mon Sep 17 00:00:00 2001 From: WonyoungLee Date: Mon, 13 Jan 2025 22:35:10 +0800 Subject: [PATCH 05/14] [E-4] 15-Agent / 05-IterationFunction(Human-in-the-loop) [Title] IterationFunction(Human-in-the-loop) [Version] modified colab link [Language] ENG [Package] langsmith,langchain,langchain_core,langchain_openai,langchain_teddynote --- 15-Agent/05-IterationFunction(Human-in-the-loop).ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb b/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb index 2bd179a62..96f18d1f2 100644 --- a/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb +++ b/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb @@ -10,7 +10,7 @@ "- Peer Review: \n", "- This is a part of [LangChain Open Tutorial](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n", "\n", - "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/BaBetterB/LangChain-OpenTutorial/blob/main/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb)\n", + "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://github.com/BaBetterB/LangChain-OpenTutorial/blob/main/15-Agent/05-IterationFunction(Human-in-the-loop\\).ipynb)\n", "[![Open in GitHub](https://img.shields.io/badge/Open%20in%20GitHub-181717?style=flat-square&logo=github&logoColor=white)](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/07-TextSplitter/04-SemanticChunker.ipynb)\n", "\n", "\n", From 
f79fc6545554046152a86e7e8a7c3c6bd90c40ca Mon Sep 17 00:00:00 2001 From: WonyoungLee Date: Mon, 13 Jan 2025 22:41:42 +0800 Subject: [PATCH 06/14] [E-4] 15-Agent / 05-Iteration-HumanInTheLoop [Title] Iteration-HumanInTheLoop [Version] initial commit [Language] ENG [Package] langsmith,langchain,langchain_core,langchain_openai,langchain_teddynote --- 15-Agent/05-Iteration-HumanInTheLoop.ipynb | 361 +++++++++++++++++++++ 1 file changed, 361 insertions(+) create mode 100644 15-Agent/05-Iteration-HumanInTheLoop.ipynb diff --git a/15-Agent/05-Iteration-HumanInTheLoop.ipynb b/15-Agent/05-Iteration-HumanInTheLoop.ipynb new file mode 100644 index 000000000..d81955999 --- /dev/null +++ b/15-Agent/05-Iteration-HumanInTheLoop.ipynb @@ -0,0 +1,361 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Iteration-HumanInTheLoop\n", + "\n", + "- Author: [Wonyoung Lee](https://github.com/BaBetterB)\n", + "- Peer Review: \n", + "- This is a part of [LangChain Open Tutorial](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n", + "\n", + "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://github.com/BaBetterB/LangChain-OpenTutorial/blob/main/15-Agent/05-Iteration-HumanInTheLoop.ipynb)\n", + "[![Open in GitHub](https://img.shields.io/badge/Open%20in%20GitHub-181717?style=flat-square&logo=github&logoColor=white)](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/07-TextSplitter/04-SemanticChunker.ipynb)\n", + "\n", + "\n", + "## Overview\n", + "\n", + "This tutorial covers the functionality of repeating the agent's execution process or receiving user input to decide whether to proceed during intermediate steps. \n", + "\n", + "The feature of asking the user whether to continue during the agent's execution process is called `Human-in-the-loop` . \n", + "\n", + "The `iter()` method creates an iterator that allows you to step through the agent's execution process step-by-step.\n", + "\n", + "\n", + "### Table of Contents\n", + "\n", + "- [Overview](#overview)\n", + "- [Environement Setup](#environment-setup)\n", + "- [AgentExecutor](#agentexecutor)\n", + "\n", + "\n", + "\n", + "### References\n", + "\n", + "\n", + "- [LangChain ChatOpenAI API reference](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)\n", + "- [LangChain AgentExecutor API reference](https://python.langchain.com/api_reference/langchain/agents/langchain.agents.agent.AgentExecutor.html)\n", + "- [LangSmith API reference](https://docs.smith.langchain.com/)\n", + "\n", + "----\n", + "\n", + " \n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Environment Setup\n", + "\n", + "Set up the environment. You may refer to [Environment Setup](https://wikidocs.net/257836) for more details.\n", + "\n", + "**[Note]**\n", + "- `langchain-opentutorial` is a package that provides a set of easy-to-use environment setup, useful functions and utilities for tutorials. \n", + "- You can checkout the [ `langchain-opentutorial` ](https://github.com/LangChain-OpenTutorial/langchain-opentutorial-pypi) for more details." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Load sample text and output the content." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "\n", + "[notice] A new release of pip is available: 24.2 -> 24.3.1\n", + "[notice] To update, run: python.exe -m pip install --upgrade pip\n" + ] + } + ], + "source": [ + "%%capture --no-stderr\n", + "%pip install langchain-opentutorial" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], + "source": [ + "# Install required packages\n", + "from langchain_opentutorial import package\n", + "\n", + "\n", + "package.install(\n", + " [\n", + " \"langsmith\",\n", + " \"langchain\",\n", + " \"langchain_core\",\n", + " \"langchain_openai\",\n", + " \"langchain_teddynote\",\n", + " ],\n", + " verbose=False,\n", + " upgrade=False,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Environment variables have been set successfully.\n" + ] + } + ], + "source": [ + "# Set environment variables\n", + "from langchain_opentutorial import set_env\n", + "\n", + "set_env(\n", + " {\n", + " \"OPENAI_API_KEY\": \"\",\n", + " \"LANGCHAIN_API_KEY\": \"\",\n", + " \"LANGCHAIN_TRACING_V2\": \"true\",\n", + " \"LANGCHAIN_ENDPOINT\": \"https://api.smith.langchain.com\",\n", + " \"LANGCHAIN_PROJECT\": \"Iteration Function and Human-in-the-loop\", # title\n", + " }\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can alternatively set `OPENAI_API_KEY` in `.env` file and load it.\n", + "\n", + "[Note] This is not necessary if you've already set `OPENAI_API_KEY` in previous steps." + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "True" + ] + }, + "execution_count": 4, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Configuration File for Managing API Keys as Environment Variables\n", + "from dotenv import load_dotenv\n", + "\n", + "# Load API Key Information\n", + "load_dotenv(override=True)" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "LangSmith 추적을 시작합니다.\n", + "[프로젝트명]\n", + "CH15-Agents\n" + ] + } + ], + "source": [ + "# Set up LangSmith logging: https://smith.langchain.com\n", + "# %pip install -qU langchain-teddynote\n", + "from langchain_teddynote import logging\n", + "\n", + "# Enter the project name.\n", + "logging.langsmith(\"CH15-Agents\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "First, define the tool." + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [], + "source": [ + "from langchain.agents import tool\n", + "\n", + "\n", + "@tool\n", + "def add_function(a: float, b: float) -> float:\n", + " \"\"\"Adds two numbers together.\"\"\"\n", + "\n", + " return a + b" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Next, define an agent that performs addition calculations using `add_function`." 
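+ "\n",
+ "\n",
+ "Before doing so, you can optionally sanity-check the tool by invoking it directly. The snippet below is a minimal illustrative sketch (it is not part of the original notebook and assumes the standard `.invoke()` interface that LangChain tools expose):\n",
+ "\n",
+ "```python\n",
+ "# Illustrative check (not in the original notebook): call the tool on its own\n",
+ "result = add_function.invoke({\"a\": 114.5, \"b\": 121.2})\n",
+ "print(result)  # expected: 235.7\n",
+ "```\n",
+ "\n",
+ "If this prints 235.7, the tool behaves as expected and can be handed to the agent defined below."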
+ ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": {}, + "outputs": [], + "source": [ + "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n", + "from langchain_openai import ChatOpenAI\n", + "from langchain.agents import create_tool_calling_agent, AgentExecutor\n", + "\n", + "# Define tools\n", + "tools = [add_function]\n", + "\n", + "# Create LLM\n", + "gpt = ChatOpenAI(model=\"gpt-4o-mini\")\n", + "\n", + "# Create prompt\n", + "prompt = ChatPromptTemplate.from_messages(\n", + " [\n", + " (\n", + " \"system\",\n", + " \"You are a helpful assistant.\"\n", + " \"Please avoid LaTeX-style formatting and use plain symbols.\",\n", + " ),\n", + " (\"human\", \"{input}\"),\n", + " MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n", + " ]\n", + ")\n", + "\n", + "# Create Agent\n", + "gpt_agent = create_tool_calling_agent(gpt, tools, prompt)\n", + "\n", + "# Create AgentExecutor\n", + "agent_executor = AgentExecutor(\n", + " agent=gpt_agent,\n", + " tools=tools,\n", + " verbose=False,\n", + " max_iterations=10,\n", + " handle_parsing_errors=True,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## AgentExecutor\n", + "\n", + "This method creates an iterator (`AgentExecutorIterator` ) that allows you to step through the agent's execution process.\n", + "\n", + "**Function Description**\n", + "The `iter()` method returns an `AgentExecutorIterator` object that provides sequential access to each step the agent takes until reaching the final output.\n", + "\n", + "**Key Features**\n", + "- **Step-by-step execution access** : Enables you to examine the agent's execution process step-by-step.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Flow Overview**\n", + "\n", + "To perform the addition calculation for `\"114.5 + 121.2 + 34.2 + 110.1\"`, the steps are executed as follows:\n", + "\n", + "1. 114.5 + 121.2 = 235.7\n", + "2. 235.7 + 34.2 = 270.9\n", + "3. 270.9 + 110.1 = 381.0\n", + "\n", + "You can observe each step in this calculation process.\n", + "\n", + "During this process, the system displays the intermediate calculation results to the user and asks if they want to continue. (**Human-in-the-loop**)\n", + "\n", + "If the user inputs anything other than 'y', the iteration stops.\n" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Tool Name: add_function, Execution Result: 235.7\n", + "\n", + "Tool Name: add_function, Execution Result: 380.0\n", + "\n", + "The result of 114.5 + 121.2 + 34.2 + 110.1 is 380.0.\n" + ] + } + ], + "source": [ + "# Set the question for calculation\n", + "question = \"What is the result of 114.5 + 121.2 + 34.2 + 110.1?\"\n", + "\n", + "# Execute the agent_executor iteratively\n", + "for step in agent_executor.iter({\"input\": question}):\n", + " if output := step.get(\"intermediate_step\"):\n", + " action, value = output[0]\n", + " if action.tool == \"add_function\":\n", + " # Print the tool execution result\n", + " print(f\"Tool Name: {action.tool}, Execution Result: {value}\\n\")\n", + " # Ask the user whether to continue\n", + " _continue = input(\"Do you want to continue? 
(y/n):\\n\") or \"Y\"\n", + " # If the user inputs anything other than 'y', stop the iteration\n", + " if _continue.lower() != \"y\":\n", + " break\n", + "\n", + "# Print the final result\n", + "if \"output\" in step:\n", + " print(step[\"output\"])" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "langchain-opentutorial-HDS-w_h3-py3.11", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.9" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} From e8fbb0669293d7b176e33db97c53878fb3a0bb3d Mon Sep 17 00:00:00 2001 From: WonyoungLee Date: Mon, 13 Jan 2025 22:50:27 +0800 Subject: [PATCH 07/14] [E-4] 15-Agent / 05-Iteration-HumanInTheLoop [Title] Iteration-HumanInTheLoop [Version] modified colab link [Language] ENG [Package] langsmith,langchain,langchain_core,langchain_openai,langchain_teddynote --- 15-Agent/05-Iteration-HumanInTheLoop.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/15-Agent/05-Iteration-HumanInTheLoop.ipynb b/15-Agent/05-Iteration-HumanInTheLoop.ipynb index d81955999..5afacd8e5 100644 --- a/15-Agent/05-Iteration-HumanInTheLoop.ipynb +++ b/15-Agent/05-Iteration-HumanInTheLoop.ipynb @@ -10,7 +10,7 @@ "- Peer Review: \n", "- This is a part of [LangChain Open Tutorial](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n", "\n", - "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://github.com/BaBetterB/LangChain-OpenTutorial/blob/main/15-Agent/05-Iteration-HumanInTheLoop.ipynb)\n", + "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/BaBetterB/LangChain-OpenTutorial/blob/main/15-Agent/05-Iteration-HumanInTheLoop.ipynb)\n", "[![Open in GitHub](https://img.shields.io/badge/Open%20in%20GitHub-181717?style=flat-square&logo=github&logoColor=white)](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/07-TextSplitter/04-SemanticChunker.ipynb)\n", "\n", "\n", From d9f5bf3fc9a1e6b000a3b8365712d1dd5a448cd6 Mon Sep 17 00:00:00 2001 From: WonyoungLee Date: Mon, 13 Jan 2025 22:54:31 +0800 Subject: [PATCH 08/14] file delete --- ...IterationFunction(Human-in-the-loop).ipynb | 361 ------------------ 1 file changed, 361 deletions(-) delete mode 100644 15-Agent/05-IterationFunction(Human-in-the-loop).ipynb diff --git a/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb b/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb deleted file mode 100644 index 96f18d1f2..000000000 --- a/15-Agent/05-IterationFunction(Human-in-the-loop).ipynb +++ /dev/null @@ -1,361 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Iteration Function(Human-in-the-loop)\n", - "\n", - "- Author: [Wonyoung Lee](https://github.com/BaBetterB)\n", - "- Peer Review: \n", - "- This is a part of [LangChain Open Tutorial](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n", - "\n", - "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://github.com/BaBetterB/LangChain-OpenTutorial/blob/main/15-Agent/05-IterationFunction(Human-in-the-loop\\).ipynb)\n", - "[![Open in 
GitHub](https://img.shields.io/badge/Open%20in%20GitHub-181717?style=flat-square&logo=github&logoColor=white)](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/07-TextSplitter/04-SemanticChunker.ipynb)\n", - "\n", - "\n", - "## Overview\n", - "\n", - "This tutorial covers the functionality of repeating the agent's execution process or receiving user input to decide whether to proceed during intermediate steps. \n", - "\n", - "The feature of asking the user whether to continue during the agent's execution process is called `Human-in-the-loop` . \n", - "\n", - "The `iter()` method creates an iterator that allows you to step through the agent's execution process step-by-step.\n", - "\n", - "\n", - "### Table of Contents\n", - "\n", - "- [Overview](#overview)\n", - "- [Environement Setup](#environment-setup)\n", - "- [AgentExecutor](#agentexecutor)\n", - "\n", - "\n", - "\n", - "### References\n", - "\n", - "\n", - "- [LangChain ChatOpenAI API reference](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)\n", - "- [LangChain AgentExecutor API reference](https://python.langchain.com/api_reference/langchain/agents/langchain.agents.agent.AgentExecutor.html)\n", - "- [LangSmith API reference](https://docs.smith.langchain.com/)\n", - "\n", - "----\n", - "\n", - " \n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Environment Setup\n", - "\n", - "Set up the environment. You may refer to [Environment Setup](https://wikidocs.net/257836) for more details.\n", - "\n", - "**[Note]**\n", - "- `langchain-opentutorial` is a package that provides a set of easy-to-use environment setup, useful functions and utilities for tutorials. \n", - "- You can checkout the [ `langchain-opentutorial` ](https://github.com/LangChain-OpenTutorial/langchain-opentutorial-pypi) for more details." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Load sample text and output the content." 
- ] - }, - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "\n", - "[notice] A new release of pip is available: 24.2 -> 24.3.1\n", - "[notice] To update, run: python.exe -m pip install --upgrade pip\n" - ] - } - ], - "source": [ - "%%capture --no-stderr\n", - "%pip install langchain-opentutorial" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": {}, - "outputs": [], - "source": [ - "# Install required packages\n", - "from langchain_opentutorial import package\n", - "\n", - "\n", - "package.install(\n", - " [\n", - " \"langsmith\",\n", - " \"langchain\",\n", - " \"langchain_core\",\n", - " \"langchain_openai\",\n", - " \"langchain_teddynote\",\n", - " ],\n", - " verbose=False,\n", - " upgrade=False,\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Environment variables have been set successfully.\n" - ] - } - ], - "source": [ - "# Set environment variables\n", - "from langchain_opentutorial import set_env\n", - "\n", - "set_env(\n", - " {\n", - " \"OPENAI_API_KEY\": \"\",\n", - " \"LANGCHAIN_API_KEY\": \"\",\n", - " \"LANGCHAIN_TRACING_V2\": \"true\",\n", - " \"LANGCHAIN_ENDPOINT\": \"https://api.smith.langchain.com\",\n", - " \"LANGCHAIN_PROJECT\": \"Iteration Function and Human-in-the-loop\", # title\n", - " }\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "You can alternatively set `OPENAI_API_KEY` in `.env` file and load it.\n", - "\n", - "[Note] This is not necessary if you've already set `OPENAI_API_KEY` in previous steps." - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "True" - ] - }, - "execution_count": 4, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "# Configuration File for Managing API Keys as Environment Variables\n", - "from dotenv import load_dotenv\n", - "\n", - "# Load API Key Information\n", - "load_dotenv(override=True)" - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "LangSmith 추적을 시작합니다.\n", - "[프로젝트명]\n", - "CH15-Agents\n" - ] - } - ], - "source": [ - "# Set up LangSmith logging: https://smith.langchain.com\n", - "# %pip install -qU langchain-teddynote\n", - "from langchain_teddynote import logging\n", - "\n", - "# Enter the project name.\n", - "logging.langsmith(\"CH15-Agents\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "First, define the tool." - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "metadata": {}, - "outputs": [], - "source": [ - "from langchain.agents import tool\n", - "\n", - "\n", - "@tool\n", - "def add_function(a: float, b: float) -> float:\n", - " \"\"\"Adds two numbers together.\"\"\"\n", - "\n", - " return a + b" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Next, define an agent that performs addition calculations using `add_function`." 
- ] - }, - { - "cell_type": "code", - "execution_count": 12, - "metadata": {}, - "outputs": [], - "source": [ - "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n", - "from langchain_openai import ChatOpenAI\n", - "from langchain.agents import create_tool_calling_agent, AgentExecutor\n", - "\n", - "# Define tools\n", - "tools = [add_function]\n", - "\n", - "# Create LLM\n", - "gpt = ChatOpenAI(model=\"gpt-4o-mini\")\n", - "\n", - "# Create prompt\n", - "prompt = ChatPromptTemplate.from_messages(\n", - " [\n", - " (\n", - " \"system\",\n", - " \"You are a helpful assistant.\"\n", - " \"Please avoid LaTeX-style formatting and use plain symbols.\",\n", - " ),\n", - " (\"human\", \"{input}\"),\n", - " MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n", - " ]\n", - ")\n", - "\n", - "# Create Agent\n", - "gpt_agent = create_tool_calling_agent(gpt, tools, prompt)\n", - "\n", - "# Create AgentExecutor\n", - "agent_executor = AgentExecutor(\n", - " agent=gpt_agent,\n", - " tools=tools,\n", - " verbose=False,\n", - " max_iterations=10,\n", - " handle_parsing_errors=True,\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## AgentExecutor\n", - "\n", - "This method creates an iterator (`AgentExecutorIterator` ) that allows you to step through the agent's execution process.\n", - "\n", - "**Function Description**\n", - "The `iter()` method returns an `AgentExecutorIterator` object that provides sequential access to each step the agent takes until reaching the final output.\n", - "\n", - "**Key Features**\n", - "- **Step-by-step execution access** : Enables you to examine the agent's execution process step-by-step.\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "**Flow Overview**\n", - "\n", - "To perform the addition calculation for `\"114.5 + 121.2 + 34.2 + 110.1\"`, the steps are executed as follows:\n", - "\n", - "1. 114.5 + 121.2 = 235.7\n", - "2. 235.7 + 34.2 = 270.9\n", - "3. 270.9 + 110.1 = 381.0\n", - "\n", - "You can observe each step in this calculation process.\n", - "\n", - "During this process, the system displays the intermediate calculation results to the user and asks if they want to continue. (**Human-in-the-loop**)\n", - "\n", - "If the user inputs anything other than 'y', the iteration stops.\n" - ] - }, - { - "cell_type": "code", - "execution_count": 15, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Tool Name: add_function, Execution Result: 235.7\n", - "\n", - "Tool Name: add_function, Execution Result: 380.0\n", - "\n", - "The result of 114.5 + 121.2 + 34.2 + 110.1 is 380.0.\n" - ] - } - ], - "source": [ - "# Set the question for calculation\n", - "question = \"What is the result of 114.5 + 121.2 + 34.2 + 110.1?\"\n", - "\n", - "# Execute the agent_executor iteratively\n", - "for step in agent_executor.iter({\"input\": question}):\n", - " if output := step.get(\"intermediate_step\"):\n", - " action, value = output[0]\n", - " if action.tool == \"add_function\":\n", - " # Print the tool execution result\n", - " print(f\"Tool Name: {action.tool}, Execution Result: {value}\\n\")\n", - " # Ask the user whether to continue\n", - " _continue = input(\"Do you want to continue? 
(y/n):\\n\") or \"Y\"\n", - " # If the user inputs anything other than 'y', stop the iteration\n", - " if _continue.lower() != \"y\":\n", - " break\n", - "\n", - "# Print the final result\n", - "if \"output\" in step:\n", - " print(step[\"output\"])" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "langchain-opentutorial-HDS-w_h3-py3.11", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.11.9" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} From 85854ff7e074f3db6eca805bbaba6fa25aea5532 Mon Sep 17 00:00:00 2001 From: WonyoungLee Date: Wed, 15 Jan 2025 17:26:55 +0800 Subject: [PATCH 09/14] [E-4] 15-Agent / 05-Iteration-HumanInTheLoop [Title] Iteration-HumanInTheLoop [Version] delete teddy lib [Language] ENG [Package] langsmith,langchain,langchain_core,langchain_openai --- 15-Agent/05-Iteration-HumanInTheLoop.ipynb | 25 ---------------------- 1 file changed, 25 deletions(-) diff --git a/15-Agent/05-Iteration-HumanInTheLoop.ipynb b/15-Agent/05-Iteration-HumanInTheLoop.ipynb index 5afacd8e5..ae0ac2e93 100644 --- a/15-Agent/05-Iteration-HumanInTheLoop.ipynb +++ b/15-Agent/05-Iteration-HumanInTheLoop.ipynb @@ -99,7 +99,6 @@ " \"langchain\",\n", " \"langchain_core\",\n", " \"langchain_openai\",\n", - " \"langchain_teddynote\",\n", " ],\n", " verbose=False,\n", " upgrade=False,\n", @@ -167,30 +166,6 @@ "load_dotenv(override=True)" ] }, - { - "cell_type": "code", - "execution_count": 5, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "LangSmith 추적을 시작합니다.\n", - "[프로젝트명]\n", - "CH15-Agents\n" - ] - } - ], - "source": [ - "# Set up LangSmith logging: https://smith.langchain.com\n", - "# %pip install -qU langchain-teddynote\n", - "from langchain_teddynote import logging\n", - "\n", - "# Enter the project name.\n", - "logging.langsmith(\"CH15-Agents\")" - ] - }, { "cell_type": "markdown", "metadata": {}, From 46a06004bcc615a03bd286e021cdcf1e66d381a7 Mon Sep 17 00:00:00 2001 From: WonyoungLee Date: Thu, 16 Jan 2025 17:32:01 +0800 Subject: [PATCH 10/14] [E-4] 13-LCEL / 10-Binding [Title] Binding [Version] Proofread and revised version [Language] ENG [Package] langsmith,langchain,langchain_core,langchain_openai --- .../10-Binding.ipynb | 51 +++++++++++-------- 1 file changed, 30 insertions(+), 21 deletions(-) diff --git a/13-LangChain-Expression-Language/10-Binding.ipynb b/13-LangChain-Expression-Language/10-Binding.ipynb index 5373b7ef4..3ae38104d 100644 --- a/13-LangChain-Expression-Language/10-Binding.ipynb +++ b/13-LangChain-Expression-Language/10-Binding.ipynb @@ -5,7 +5,7 @@ "id": "8f9cbe9d", "metadata": {}, "source": [ - "# Runtime Arguments Binding\n", + "# Binding\n", "\n", "- Author: [Wonyoung Lee](https://github.com/BaBetterB)\n", "- Peer Review: \n", @@ -17,18 +17,21 @@ "\n", "## Overview\n", "\n", - "This tutorial covers a scenario where, when calling a Runnable inside a Runnable sequence, we need to pass constant arguments that are not included in the output of the previous Runnable or user input. \n", - "In such cases, `Runnable.bind()` can be used to easily pass these arguments.\n", + "This tutorial covers a scenario where you need to pass constant arguments(not included in the output of the previous Runnable or user input) when calling a Runnable inside a Runnable sequence. 
In such cases, `Runnable.bind()` is a convenient way to pass these arguments\n", + "\n", "\n", "### Table of Contents\n", "\n", "- [Overview](#overview)\n", "- [Environement Setup](#environment-setup)\n", + "- [Runtime Arguments Binding](#runtime-arguments-binding)\n", "- [Connecting OpenAI Functions](#connecting-openai-functions)\n", "- [Connecting OpenAI Tools](#connecting-openai-tools)\n", "\n", "### References\n", "\n", + "- [LangChain RunnablePassthrough API reference](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html)\n", + "- [LangChain ChatPromptTemplate API reference](https://python.langchain.com/api_reference/core/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)\n", "\n", "----\n", "\n", @@ -159,9 +162,15 @@ "id": "3fb790a1", "metadata": {}, "source": [ - "Use `RunnablePassthrough` to pass the `{equation_statement}` variable to the prompt, and use `StrOutputParser` to parse the model's output into a string, creating a `runnable` object.\n", + "## Runtime Arguments Binding\n", + "\n", + "This section explains how to use `Runnable.bind()` to pass constant arguments to a `Runnable` within a sequence, especially when those arguments aren't part of the previous Runnable's output or use input.\n", "\n", - "The `runnable.invoke()` method is called to pass the equation statement \"x raised to the third plus seven equals 12\" and output the result." + "**Passing variables to prompts:**\n", + "\n", + "1. Use `RunnablePassthrough` to pass the `{equation_statement}` variable to the prompt.\n", + "2. Use `StrOutputParser` to parse the model's output into a string, creating a `runnable` object.\n", + "3. Call the `runnable.invoke()` method to pass the equation statement (e.g., \\\"x raised to the third plus seven equals 12\\\") get the result." ] }, { @@ -220,9 +229,9 @@ "id": "ed4ced2f", "metadata": {}, "source": [ - "Using bind() Method with Stop Word.\n", - "You may want to call the model using a specific `stop` word. \n", - "`model.bind()` can be used to call the language model and stop the generation at the \"SOLUTION\" token." + "**Using bind() method with stop words**\n", + "\n", + "For controlling the end of the model's output using a specific stop word, you can use `model.bind()` to instruct the model to halt its generation upon encountering the stop token like `SOLUTION`." ] }, { @@ -261,9 +270,9 @@ "source": [ "## Connecting OpenAI Functions\n", "\n", - "One particularly useful way to use bind() is to connect OpenAI Functions with compatible OpenAI models.\n", + "`bind()` is particularly useful for connecting OpenAI Functions with compatible OpenAI models.\n", "\n", - "Below is the code that defines `OpenAI Functions` according to a schema.\n" + "Let's define `openai_function` according to a schema." ] }, { @@ -302,8 +311,9 @@ "id": "b3294828", "metadata": {}, "source": [ - "Binding the solver Function.\n", - "We use the `bind()` method to bind the function call named `solver` to the model." + "**Binding a solver function.**\n", + "\n", + "We can then use the `bind()` method to associate a function call (like `solver`) with the language model." ] }, { @@ -358,11 +368,9 @@ "source": [ "## Connecting OpenAI Tools\n", "\n", - "Here’s how you can connect and use OpenAI tools.\n", - "\n", - "The tools object helps you use various OpenAI features easily.\n", - "\n", - "For example, by calling the `tool.run` method with a natural language question, the model can generate an answer to that question." 
+ "This section explains how to connect and use OpenAI tools within your LangChain applications.\n", + "The `tools` object simplifies using various OpenAI features.\n", + "For example, calling the `tool.run` method with a natural language query allows the model to utilize the spcified tool to generate a response." ] }, { @@ -400,9 +408,10 @@ "id": "8a51880d", "metadata": {}, "source": [ - "Binding Tools and Invoking the Model\n", - "- Use `bind()` to bind `tools` to the model.\n", - "- Call the `invoke()` method with a question like \"Tell me the current weather in San Francisco, New York, and Los Angeles?\"" + "**Binding tools and invoking the model:**\n", + "\n", + "1. Use `bind()` to associate `tools` with the language model.\n", + "2. Call the `invoke()` method on the bound model, providing a natural language question as input.\n" ] }, { @@ -434,7 +443,7 @@ ], "metadata": { "kernelspec": { - "display_name": "langchain-opentutorial-EWknDWEP-py3.11", + "display_name": "langchain-opentutorial-HDS-w_h3-py3.11", "language": "python", "name": "python3" }, From 97791c812129bd569e8517ca04ed49cb6cea0b6b Mon Sep 17 00:00:00 2001 From: WonyoungLee Date: Fri, 17 Jan 2025 13:48:43 +0800 Subject: [PATCH 11/14] Revert "[E-4] 13-LCEL / 10-Binding [Title] Binding [Version] Proofread and revised version [Language] ENG [Package] langsmith,langchain,langchain_core,langchain_openai" This reverts commit 46a06004bcc615a03bd286e021cdcf1e66d381a7. --- .../10-Binding.ipynb | 51 ++++++++----------- 1 file changed, 21 insertions(+), 30 deletions(-) diff --git a/13-LangChain-Expression-Language/10-Binding.ipynb b/13-LangChain-Expression-Language/10-Binding.ipynb index 3ae38104d..5373b7ef4 100644 --- a/13-LangChain-Expression-Language/10-Binding.ipynb +++ b/13-LangChain-Expression-Language/10-Binding.ipynb @@ -5,7 +5,7 @@ "id": "8f9cbe9d", "metadata": {}, "source": [ - "# Binding\n", + "# Runtime Arguments Binding\n", "\n", "- Author: [Wonyoung Lee](https://github.com/BaBetterB)\n", "- Peer Review: \n", @@ -17,21 +17,18 @@ "\n", "## Overview\n", "\n", - "This tutorial covers a scenario where you need to pass constant arguments(not included in the output of the previous Runnable or user input) when calling a Runnable inside a Runnable sequence. In such cases, `Runnable.bind()` is a convenient way to pass these arguments\n", - "\n", + "This tutorial covers a scenario where, when calling a Runnable inside a Runnable sequence, we need to pass constant arguments that are not included in the output of the previous Runnable or user input. 
\n", + "In such cases, `Runnable.bind()` can be used to easily pass these arguments.\n", "\n", "### Table of Contents\n", "\n", "- [Overview](#overview)\n", "- [Environement Setup](#environment-setup)\n", - "- [Runtime Arguments Binding](#runtime-arguments-binding)\n", "- [Connecting OpenAI Functions](#connecting-openai-functions)\n", "- [Connecting OpenAI Tools](#connecting-openai-tools)\n", "\n", "### References\n", "\n", - "- [LangChain RunnablePassthrough API reference](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html)\n", - "- [LangChain ChatPromptTemplate API reference](https://python.langchain.com/api_reference/core/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)\n", "\n", "----\n", "\n", @@ -162,15 +159,9 @@ "id": "3fb790a1", "metadata": {}, "source": [ - "## Runtime Arguments Binding\n", - "\n", - "This section explains how to use `Runnable.bind()` to pass constant arguments to a `Runnable` within a sequence, especially when those arguments aren't part of the previous Runnable's output or use input.\n", + "Use `RunnablePassthrough` to pass the `{equation_statement}` variable to the prompt, and use `StrOutputParser` to parse the model's output into a string, creating a `runnable` object.\n", "\n", - "**Passing variables to prompts:**\n", - "\n", - "1. Use `RunnablePassthrough` to pass the `{equation_statement}` variable to the prompt.\n", - "2. Use `StrOutputParser` to parse the model's output into a string, creating a `runnable` object.\n", - "3. Call the `runnable.invoke()` method to pass the equation statement (e.g., \\\"x raised to the third plus seven equals 12\\\") get the result." + "The `runnable.invoke()` method is called to pass the equation statement \"x raised to the third plus seven equals 12\" and output the result." ] }, { @@ -229,9 +220,9 @@ "id": "ed4ced2f", "metadata": {}, "source": [ - "**Using bind() method with stop words**\n", - "\n", - "For controlling the end of the model's output using a specific stop word, you can use `model.bind()` to instruct the model to halt its generation upon encountering the stop token like `SOLUTION`." + "Using bind() Method with Stop Word.\n", + "You may want to call the model using a specific `stop` word. \n", + "`model.bind()` can be used to call the language model and stop the generation at the \"SOLUTION\" token." ] }, { @@ -270,9 +261,9 @@ "source": [ "## Connecting OpenAI Functions\n", "\n", - "`bind()` is particularly useful for connecting OpenAI Functions with compatible OpenAI models.\n", + "One particularly useful way to use bind() is to connect OpenAI Functions with compatible OpenAI models.\n", "\n", - "Let's define `openai_function` according to a schema." + "Below is the code that defines `OpenAI Functions` according to a schema.\n" ] }, { @@ -311,9 +302,8 @@ "id": "b3294828", "metadata": {}, "source": [ - "**Binding a solver function.**\n", - "\n", - "We can then use the `bind()` method to associate a function call (like `solver`) with the language model." + "Binding the solver Function.\n", + "We use the `bind()` method to bind the function call named `solver` to the model." 
] }, { @@ -368,9 +358,11 @@ "source": [ "## Connecting OpenAI Tools\n", "\n", - "This section explains how to connect and use OpenAI tools within your LangChain applications.\n", - "The `tools` object simplifies using various OpenAI features.\n", - "For example, calling the `tool.run` method with a natural language query allows the model to utilize the spcified tool to generate a response." + "Here’s how you can connect and use OpenAI tools.\n", + "\n", + "The tools object helps you use various OpenAI features easily.\n", + "\n", + "For example, by calling the `tool.run` method with a natural language question, the model can generate an answer to that question." ] }, { @@ -408,10 +400,9 @@ "id": "8a51880d", "metadata": {}, "source": [ - "**Binding tools and invoking the model:**\n", - "\n", - "1. Use `bind()` to associate `tools` with the language model.\n", - "2. Call the `invoke()` method on the bound model, providing a natural language question as input.\n" + "Binding Tools and Invoking the Model\n", + "- Use `bind()` to bind `tools` to the model.\n", + "- Call the `invoke()` method with a question like \"Tell me the current weather in San Francisco, New York, and Los Angeles?\"" ] }, { @@ -443,7 +434,7 @@ ], "metadata": { "kernelspec": { - "display_name": "langchain-opentutorial-HDS-w_h3-py3.11", + "display_name": "langchain-opentutorial-EWknDWEP-py3.11", "language": "python", "name": "python3" }, From 8bc9bc2dbfbc9030e180a8543b1c78e025d4d14e Mon Sep 17 00:00:00 2001 From: WonyoungLee Date: Fri, 17 Jan 2025 15:36:17 +0800 Subject: [PATCH 12/14] [E-4] 15-Agent / 05-Iteration-HumanInTheLoop [Title] Iteration-human-in-the-loop [Version] bug fix [Language] ENG [Package] langsmith,langchain,langchain_core,langchain_openai --- 15-Agent/05-Iteration-HumanInTheLoop.ipynb | 54 ++++++++++++++++------ 1 file changed, 40 insertions(+), 14 deletions(-) diff --git a/15-Agent/05-Iteration-HumanInTheLoop.ipynb b/15-Agent/05-Iteration-HumanInTheLoop.ipynb index ae0ac2e93..046c155f9 100644 --- a/15-Agent/05-Iteration-HumanInTheLoop.ipynb +++ b/15-Agent/05-Iteration-HumanInTheLoop.ipynb @@ -4,7 +4,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Iteration-HumanInTheLoop\n", + "# Iteration-human-in-the-loop\n", "\n", "- Author: [Wonyoung Lee](https://github.com/BaBetterB)\n", "- Peer Review: \n", @@ -18,7 +18,7 @@ "\n", "This tutorial covers the functionality of repeating the agent's execution process or receiving user input to decide whether to proceed during intermediate steps. \n", "\n", - "The feature of asking the user whether to continue during the agent's execution process is called `Human-in-the-loop` . \n", + "The feature of asking the user whether to continue during the agent's execution process is called human-in-the-loop . \n", "\n", "The `iter()` method creates an iterator that allows you to step through the agent's execution process step-by-step.\n", "\n", @@ -262,14 +262,21 @@ "To perform the addition calculation for `\"114.5 + 121.2 + 34.2 + 110.1\"`, the steps are executed as follows:\n", "\n", "1. 114.5 + 121.2 = 235.7\n", - "2. 235.7 + 34.2 = 270.9\n", - "3. 270.9 + 110.1 = 381.0\n", + "2. 235.7 + 34.2 = 269.9\n", + "3. 269.9 + 110.1 = 380.0\n", "\n", "You can observe each step in this calculation process.\n", "\n", "During this process, the system displays the intermediate calculation results to the user and asks if they want to continue. 
(**Human-in-the-loop**)\n", "\n", - "If the user inputs anything other than 'y', the iteration stops.\n" + "If the user inputs anything other than 'y', the iteration stops.\n", + "\n", + "In practice, while calculating 114.5 + 121.2 = 235.7, 34.2 + 110.1 = 144.3 is also calculated simultaneously.\n", + "\n", + "Then, the result of 235.7 + 144.3 = 380.0 is calculated as the second step.\n", + "\n", + "This process can be observed when `verbose=True` is set in the `AgentExecutor`.\n", + "\n" ] }, { @@ -290,25 +297,44 @@ } ], "source": [ - "# Set the question for calculation\n", + "# Define the user input question\n", "question = \"What is the result of 114.5 + 121.2 + 34.2 + 110.1?\"\n", "\n", - "# Execute the agent_executor iteratively\n", + "\n", + "# Flag to track if the calculation is stopped\n", + "calculation_stopped = False\n", + "\n", + "# Use AgentExecutor's iter() method to run step-by-step execution\n", "for step in agent_executor.iter({\"input\": question}):\n", + " # Access each calculation step through intermediate_step\n", " if output := step.get(\"intermediate_step\"):\n", " action, value = output[0]\n", + "\n", + " # Print the result of each calculation step\n", " if action.tool == \"add_function\":\n", - " # Print the tool execution result\n", - " print(f\"Tool Name: {action.tool}, Execution Result: {value}\\n\")\n", + " print(f\"Tool Name: {action.tool}, Execution Result: {value}\")\n", + "\n", " # Ask the user whether to continue\n", - " _continue = input(\"Do you want to continue? (y/n):\\n\") or \"Y\"\n", - " # If the user inputs anything other than 'y', stop the iteration\n", - " if _continue.lower() != \"y\":\n", - " break\n", + " while True:\n", + " _continue = input(\"Do you want to continue? (y/n):\").strip().lower()\n", + " if _continue in [\"y\", \"n\"]:\n", + " if _continue == \"n\":\n", + " print(f\"Calculation stopped. Last computed result: {value}\")\n", + " calculation_stopped = True # Set flag to indicate calculation stop\n", + " break # Break from the loop to stop calculation\n", + " break # Break the inner while loop after valid input\n", + " else:\n", + " print(\"Invalid input! 
Please enter 'y' or 'n'.\")\n", + "\n", + " # Exit the iteration if the calculation is stopped\n", + " if calculation_stopped:\n", + " break\n", "\n", "# Print the final result\n", "if \"output\" in step:\n", - " print(step[\"output\"])" + " print(f\"Final result: {step['output']}\")\n", + "else:\n", + " print(f\"Final result (from last computation): {value}\")" ] } ], From e19117fddaaa12cb8bb53e0683b41c3a2a512eba Mon Sep 17 00:00:00 2001 From: WonyoungLee Date: Fri, 17 Jan 2025 15:45:08 +0800 Subject: [PATCH 13/14] [E-4] 15-Agent / 05-Iteration-HumanInTheLoop [Title] Iteration-human-in-the-loop [Version] bug fix [Language] ENG [Package] langsmith,langchain,langchain_core,langchain_openai --- 15-Agent/05-Iteration-HumanInTheLoop.ipynb | 22 ++++++++++------------ 1 file changed, 10 insertions(+), 12 deletions(-) diff --git a/15-Agent/05-Iteration-HumanInTheLoop.ipynb b/15-Agent/05-Iteration-HumanInTheLoop.ipynb index 046c155f9..27ee1846a 100644 --- a/15-Agent/05-Iteration-HumanInTheLoop.ipynb +++ b/15-Agent/05-Iteration-HumanInTheLoop.ipynb @@ -65,7 +65,7 @@ }, { "cell_type": "code", - "execution_count": 1, + "execution_count": 18, "metadata": {}, "outputs": [ { @@ -85,7 +85,7 @@ }, { "cell_type": "code", - "execution_count": 2, + "execution_count": 19, "metadata": {}, "outputs": [], "source": [ @@ -107,7 +107,7 @@ }, { "cell_type": "code", - "execution_count": 3, + "execution_count": 20, "metadata": {}, "outputs": [ { @@ -128,7 +128,7 @@ " \"LANGCHAIN_API_KEY\": \"\",\n", " \"LANGCHAIN_TRACING_V2\": \"true\",\n", " \"LANGCHAIN_ENDPOINT\": \"https://api.smith.langchain.com\",\n", - " \"LANGCHAIN_PROJECT\": \"Iteration Function and Human-in-the-loop\", # title\n", + " \"LANGCHAIN_PROJECT\": \"Iteration-human-in-the-loop\", # title\n", " }\n", ")" ] @@ -144,7 +144,7 @@ }, { "cell_type": "code", - "execution_count": 4, + "execution_count": 21, "metadata": {}, "outputs": [ { @@ -153,7 +153,7 @@ "True" ] }, - "execution_count": 4, + "execution_count": 21, "metadata": {}, "output_type": "execute_result" } @@ -175,7 +175,7 @@ }, { "cell_type": "code", - "execution_count": 6, + "execution_count": 22, "metadata": {}, "outputs": [], "source": [ @@ -198,7 +198,7 @@ }, { "cell_type": "code", - "execution_count": 12, + "execution_count": 23, "metadata": {}, "outputs": [], "source": [ @@ -281,7 +281,7 @@ }, { "cell_type": "code", - "execution_count": 15, + "execution_count": 24, "metadata": {}, "outputs": [ { @@ -289,10 +289,8 @@ "output_type": "stream", "text": [ "Tool Name: add_function, Execution Result: 235.7\n", - "\n", "Tool Name: add_function, Execution Result: 380.0\n", - "\n", - "The result of 114.5 + 121.2 + 34.2 + 110.1 is 380.0.\n" + "Final result: The result of 114.5 + 121.2 + 34.2 + 110.1 is 380.0.\n" ] } ], From ed0c21aebb4b07abc2198105645babe4a5eb9da8 Mon Sep 17 00:00:00 2001 From: WonyoungLee Date: Sat, 18 Jan 2025 14:05:48 +0800 Subject: [PATCH 14/14] [E-4] 15-Agent / 05-Iteration-HumanInTheLoop [Title] Iteration-human-in-the-loop [Version] bug fix-add import lib [Language] ENG [Package] langsmith,langchain,langchain_core,langchain_community,load_dotenv,langchain_openai --- 15-Agent/05-Iteration-HumanInTheLoop.ipynb | 2 ++ 1 file changed, 2 insertions(+) diff --git a/15-Agent/05-Iteration-HumanInTheLoop.ipynb b/15-Agent/05-Iteration-HumanInTheLoop.ipynb index 27ee1846a..c0e3eb36f 100644 --- a/15-Agent/05-Iteration-HumanInTheLoop.ipynb +++ b/15-Agent/05-Iteration-HumanInTheLoop.ipynb @@ -98,6 +98,8 @@ " \"langsmith\",\n", " \"langchain\",\n", " \"langchain_core\",\n", + " 
\"langchain_community\",\n", + " \"load_dotenv\",\n", " \"langchain_openai\",\n", " ],\n", " verbose=False,\n",