diff --git a/13-LangChain-Expression-Language/10-Binding.ipynb b/13-LangChain-Expression-Language/10-Binding.ipynb new file mode 100644 index 000000000..5373b7ef4 --- /dev/null +++ b/13-LangChain-Expression-Language/10-Binding.ipynb @@ -0,0 +1,456 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "8f9cbe9d", + "metadata": {}, + "source": [ + "# Runtime Arguments Binding\n", + "\n", + "- Author: [Wonyoung Lee](https://github.com/BaBetterB)\n", + "- Peer Review: \n", + "- This is a part of [LangChain Open Tutorial](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n", + "\n", + "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/BaBetterB/LangChain-OpenTutorial/blob/main/13-LangChain-Expression-Language/10-Binding.ipynb) \n", + "[![Open in GitHub](https://img.shields.io/badge/Open%20in%20GitHub-181717?style=flat-square&logo=github&logoColor=white)](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/13-LangChain-Expression-Language/10-Binding.ipynb)\n", + "\n", + "\n", + "## Overview\n", + "\n", + "This tutorial covers the scenario where, when calling a Runnable inside a Runnable sequence, we need to pass constant arguments that are not included in the previous Runnable's output or in the user input. \n", + "In such cases, `Runnable.bind()` can be used to easily pass these arguments.\n", + "\n", + "### Table of Contents\n", + "\n", + "- [Overview](#overview)\n", + "- [Environment Setup](#environment-setup)\n", + "- [Connecting OpenAI Functions](#connecting-openai-functions)\n", + "- [Connecting OpenAI Tools](#connecting-openai-tools)\n", + "\n", + "### References\n", + "\n", + "\n", + "----\n", + "\n", + " \n" + ] + }, + { + "cell_type": "markdown", + "id": "89446018", + "metadata": {}, + "source": [ + "## Environment Setup\n", + "\n", + "Set up the environment. You may refer to [Environment Setup](https://wikidocs.net/257836) for more details.\n", + "\n", + "**[Note]**\n", + "- `langchain-opentutorial` is a package that provides a set of easy-to-use environment setup, useful functions, and utilities for tutorials. \n", + "- You can check out [ `langchain-opentutorial` ](https://github.com/LangChain-OpenTutorial/langchain-opentutorial-pypi) for more details." + ] + }, + { + "cell_type": "markdown", + "id": "499a5cf5", + "metadata": {}, + "source": [ + "Install the required packages and set the environment variables used in this tutorial."
+ ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "13991fe2", + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "\n", + "[notice] A new release of pip is available: 24.2 -> 24.3.1\n", + "[notice] To update, run: python.exe -m pip install --upgrade pip\n" + ] + } + ], + "source": [ + "%%capture --no-stderr\n", + "%pip install langchain-opentutorial" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "9f8267d6", + "metadata": {}, + "outputs": [], + "source": [ + "# Install required packages\n", + "from langchain_opentutorial import package\n", + "\n", + "\n", + "package.install(\n", + " [\n", + " \"langsmith\",\n", + " \"langchain\",\n", + " \"langchain_core\",\n", + " \"langchain_openai\",\n", + " ],\n", + " verbose=False,\n", + " upgrade=False,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "c38e67c0", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Environment variables have been set successfully.\n" + ] + } + ], + "source": [ + "# Set environment variables\n", + "from langchain_opentutorial import set_env\n", + "\n", + "set_env(\n", + " {\n", + " \"OPENAI_API_KEY\": \"\",\n", + " \"LANGCHAIN_API_KEY\": \"\",\n", + " \"LANGCHAIN_TRACING_V2\": \"true\",\n", + " \"LANGCHAIN_ENDPOINT\": \"https://api.smith.langchain.com\",\n", + " \"LANGCHAIN_PROJECT\": \"Runtime Arguments Binding\", # title\n", + " }\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "7d1632bb", + "metadata": {}, + "source": [ + "You can alternatively set `OPENAI_API_KEY` in `.env` file and load it.\n", + "\n", + "[Note] This is not necessary if you've already set `OPENAI_API_KEY` in previous steps." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5bc25432", + "metadata": {}, + "outputs": [], + "source": [ + "# Configuration File for Managing API Keys as Environment Variables\n", + "from dotenv import load_dotenv\n", + "\n", + "# Load API Key Information\n", + "load_dotenv(override=True)" + ] + }, + { + "cell_type": "markdown", + "id": "3fb790a1", + "metadata": {}, + "source": [ + "Use `RunnablePassthrough` to pass the `{equation_statement}` variable to the prompt, and use `StrOutputParser` to parse the model's output into a string, creating a `runnable` object.\n", + "\n", + "The `runnable.invoke()` method is called to pass the equation statement \"x raised to the third plus seven equals 12\" and output the result." + ] + }, + { + "cell_type": "code", + "execution_count": 24, + "id": "649b24d7", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "EQUATION: x^3 + 7 = 12\n", + "SOLUTION: x^3 = 12 - 7\n", + "x^3 = 5\n", + "x = ∛5\n" + ] + } + ], + "source": [ + "from langchain_core.output_parsers import StrOutputParser\n", + "from langchain_core.prompts import ChatPromptTemplate\n", + "from langchain_core.runnables import RunnablePassthrough\n", + "from langchain_openai import ChatOpenAI\n", + "\n", + "prompt = ChatPromptTemplate.from_messages(\n", + " [\n", + " (\n", + " \"system\",\n", + " # Write the following equation using algebraic symbols and then solve it.\n", + " \"Write out the following equation using algebraic symbols then solve it. 
\"\n", + " \"Please avoid LaTeX-style formatting and use plain symbols.\"\n", + " \"Use the format:\\n\\nEQUATION:...\\nSOLUTION:...\\n\",\n", + " ),\n", + " (\n", + " \"human\",\n", + " \"{equation_statement}\", # Accepts the equation statement from the user as a variable.\n", + " ),\n", + " ]\n", + ")\n", + "# Initialize the ChatOpenAI model and set temperature to 0.\n", + "model = ChatOpenAI(model=\"gpt-4o\", temperature=0)\n", + "\n", + "# Pass the equation statement to the prompt and parse the model's output as a string.\n", + "runnable = (\n", + " {\"equation_statement\": RunnablePassthrough()} | prompt | model | StrOutputParser()\n", + ")\n", + "\n", + "# Input an example equation statement and print the result.\n", + "result = runnable.invoke(\"x raised to the third plus seven equals 12\")\n", + "print(result)" + ] + }, + { + "cell_type": "markdown", + "id": "ed4ced2f", + "metadata": {}, + "source": [ + "Using bind() Method with Stop Word.\n", + "You may want to call the model using a specific `stop` word. \n", + "`model.bind()` can be used to call the language model and stop the generation at the \"SOLUTION\" token." + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "id": "9d94b8e3", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "EQUATION: x^3 + 7 = 12\n", + "\n" + ] + } + ], + "source": [ + "runnable = (\n", + " # Create a runnable passthrough object and assign it to the \"equation_statement\" key.\n", + " {\"equation_statement\": RunnablePassthrough()}\n", + " | prompt # Add the prompt to the pipeline.\n", + " | model.bind(\n", + " stop=\"SOLUTION\"\n", + " ) # Bind the model and set it to stop generating at the \"SOLUTION\" token.\n", + " | StrOutputParser() # Add the string output parser to the pipeline.\n", + ")\n", + "# Execute the pipeline with the input \"x raised to the third plus seven equals 12\" and print the result.\n", + "print(runnable.invoke(\"x raised to the third plus seven equals 12\"))" + ] + }, + { + "cell_type": "markdown", + "id": "9be5d575", + "metadata": {}, + "source": [ + "## Connecting OpenAI Functions\n", + "\n", + "One particularly useful way to use bind() is to connect OpenAI Functions with compatible OpenAI models.\n", + "\n", + "Below is the code that defines `OpenAI Functions` according to a schema.\n" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "id": "0b7cd50c", + "metadata": {}, + "outputs": [], + "source": [ + "openai_function = {\n", + " \"name\": \"solver\", # Function name\n", + " # Function description: Formulate and solve an equation.\n", + " \"description\": \"Formulates and solves an equation\",\n", + " \"parameters\": { # Function parameters\n", + " \"type\": \"object\", # Parameter type: object\n", + " \"properties\": { # Parameter properties\n", + " \"equation\": { # Equation property\n", + " \"type\": \"string\", # Type: string\n", + " \"description\": \"The algebraic expression of the equation\", # Description\n", + " },\n", + " \"solution\": { # Solution property\n", + " \"type\": \"string\", # Type: string\n", + " \"description\": \"The solution to the equation\", # Description\n", + " },\n", + " },\n", + " \"required\": [\n", + " \"equation\",\n", + " \"solution\",\n", + " ], # Required parameters: equation and solution\n", + " },\n", + "}" + ] + }, + { + "cell_type": "markdown", + "id": "b3294828", + "metadata": {}, + "source": [ + "Binding the solver Function.\n", + "We use the `bind()` method to bind the function call named `solver` to the 
model." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "edb2c5a4", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{\\n\"equation\": \"x^3 + 7 = 12\",\\n\"solution\": \"x = ∛5\"\\n}', 'name': 'solver'}, 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 28, 'prompt_tokens': 95, 'total_tokens': 123, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4-0613', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-6fa11a89-41ec-4316-8a75-beab094d7803-0', usage_metadata={'input_tokens': 95, 'output_tokens': 28, 'total_tokens': 123, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})" + ] + }, + "execution_count": 17, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Write the following equation using algebraic symbols and then solve it\n", + "prompt = ChatPromptTemplate.from_messages(\n", + " [\n", + " (\n", + " \"system\",\n", + " \"Write out the following equation using algebraic symbols then solve it.\",\n", + " ),\n", + " (\"human\", \"{equation_statement}\"),\n", + " ]\n", + ")\n", + "\n", + "\n", + "model = ChatOpenAI(model=\"gpt-4o\", temperature=0).bind(\n", + " function_call={\"name\": \"solver\"}, # Bind the OpenAI function schema\n", + " functions=[openai_function],\n", + ")\n", + "\n", + "\n", + "runnable = {\"equation_statement\": RunnablePassthrough()} | prompt | model\n", + "\n", + "\n", + "# Equation: x raised to the third plus seven equals 12\n", + "\n", + "\n", + "runnable.invoke(\"x raised to the third plus seven equals 12\")" + ] + }, + { + "cell_type": "markdown", + "id": "6c4904a4", + "metadata": {}, + "source": [ + "## Connecting OpenAI Tools\n", + "\n", + "Here’s how you can connect and use OpenAI tools.\n", + "\n", + "The tools object helps you use various OpenAI features easily.\n", + "\n", + "For example, by calling the `tool.run` method with a natural language question, the model can generate an answer to that question." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 18, + "id": "2fce5808", + "metadata": {}, + "outputs": [], + "source": [ + "tools = [\n", + " {\n", + " \"type\": \"function\",\n", + " \"function\": {\n", + " \"name\": \"get_current_weather\", # Function name to get current weather\n", + " \"description\": \"Fetches the current weather for a given location\", # Description\n", + " \"parameters\": {\n", + " \"type\": \"object\",\n", + " \"properties\": {\n", + " \"location\": {\n", + " \"type\": \"string\",\n", + " \"description\": \"City and state, e.g.: San Francisco, CA\", # Location description\n", + " },\n", + " # Temperature unit\n", + " \"unit\": {\"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"]},\n", + " },\n", + " \"required\": [\"location\"], # Required parameter: location\n", + " },\n", + " },\n", + " }\n", + "]" + ] + }, + { + "cell_type": "markdown", + "id": "8a51880d", + "metadata": {}, + "source": [ + "Binding Tools and Invoking the Model\n", + "- Use `bind()` to bind `tools` to the model.\n", + "- Call the `invoke()` method with a question like \"Tell me the current weather in San Francisco, New York, and Los Angeles?\"" + ] + }, + { + "cell_type": "code", + "execution_count": 25, + "id": "3b5c0149", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_q8WYveGpmjfckyAU3TENgNMi', 'function': {'arguments': '{\\n \"location\": \"San Francisco, CA\"\\n}', 'name': 'get_current_weather'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 20, 'prompt_tokens': 92, 'total_tokens': 112, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4-0613', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-9c4e4e91-244c-49d1-a54b-b0922a3bc228-0', tool_calls=[{'name': 'get_current_weather', 'args': {'location': 'San Francisco, CA'}, 'id': 'call_q8WYveGpmjfckyAU3TENgNMi', 'type': 'tool_call'}], usage_metadata={'input_tokens': 92, 'output_tokens': 20, 'total_tokens': 112, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})" + ] + }, + "execution_count": 25, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Initialize the ChatOpenAI model and bind the tools.\n", + "model = ChatOpenAI(model=\"gpt-4o\").bind(tools=tools)\n", + "# Invoke the model to ask about the weather in San Francisco, New York, and Los Angeles.\n", + "model.invoke(\n", + " \"Can you tell me the current weather in San Francisco, New York, and Los Angeles?\"\n", + ")" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "langchain-opentutorial-EWknDWEP-py3.11", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.9" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +}