diff --git a/05-Memory/02-ConversationBufferWindowMemory.ipynb b/05-Memory/02-ConversationBufferWindowMemory.ipynb new file mode 100644 index 000000000..6e6296ffb --- /dev/null +++ b/05-Memory/02-ConversationBufferWindowMemory.ipynb @@ -0,0 +1,262 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# ConversationBufferWindowMemory\n", + "\n", + "- Author: [Kenny Jung](https://www.linkedin.com/in/kwang-yong-jung)\n", + "- Design: [Kenny Jung](https://www.linkedin.com/in/kwang-yong-jung)\n", + "- Peer Review: [Teddy](https://github.com/teddylee777)\n", + "- This is a part of [LangChain Open Tutorial](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n", + "\n", + "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/gyjong/LangChain-OpenTutorial/blob/main/05-Memory/02-ConversationBufferWindowMemory.ipynb) [![Open in LangChain Academy](https://cdn.prod.website-files.com/65b8cd72835ceeacd4449a53/66e9eba12c7b7688aa3dbb5e_LCA-badge-green.svg)](https://academy.langchain.com/courses/take/intro-to-langgraph/lessons/58239937-lesson-2-sub-graphs)\n", + "\n", + "## Overview\n", + "\n", + "`ConversationBufferWindowMemory` maintains a list of conversation interactions over time.\n", + "\n", + "Rather than retaining the entire conversation history, it uses only the **most recent K** interactions.\n", + "\n", + "This makes it useful for maintaining a sliding window of the most recent interactions, preventing the buffer from growing too large.\n", + "\n", + "\n", + "### Table of Contents\n", + "\n", + "- [Overview](#overview)\n", + "- [Environment Setup](#environment-setup)\n", + "- [Online Bank Account Opening Conversation Example](#online-bank-account-opening-conversation-example)\n", + "- [Retrieving Conversation History](#retrieving-conversation-history)\n", + "\n", + "### References\n", + "\n", + "- [LangChain Python API Reference 
> langchain: 0.3.13 > memory > ConversationBufferWindowMemory](https://python.langchain.com/api_reference/langchain/memory/langchain.memory.buffer_window.ConversationBufferWindowMemory.html)\n", + "----" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Environment Setup\n", + "\n", + "Set up the environment. You may refer to [Environment Setup](https://wikidocs.net/257836) for more details.\n", + "\n", + "**[Note]**\n", + "- `langchain-opentutorial` is a package that provides easy-to-use environment setup, along with useful functions and utilities for tutorials.\n", + "- You can check out [`langchain-opentutorial`](https://github.com/LangChain-OpenTutorial/langchain-opentutorial-pypi) for more details." + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [], + "source": [ + "%%capture --no-stderr\n", + "!pip install langchain-opentutorial" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], + "source": [ + "# Install required packages\n", + "from langchain_opentutorial import package\n", + "\n", + "package.install(\n", + " [\n", + " \"langsmith\",\n", + " \"langchain\",\n", + " \"langchain_core\",\n", + " \"langchain-anthropic\",\n", + " \"langchain_community\",\n", + " \"langchain_text_splitters\",\n", + " \"langchain_openai\",\n", + " ],\n", + " verbose=False,\n", + " upgrade=False,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Environment variables have been set successfully.\n" + ] + } + ], + "source": [ + "# Set environment variables\n", + "from langchain_opentutorial import set_env\n", + "\n", + "set_env(\n", + " {\n", + " \"OPENAI_API_KEY\": \"\",\n", + " \"LANGCHAIN_API_KEY\": \"\",\n", + " \"LANGCHAIN_TRACING_V2\": \"true\",\n", + " \"LANGCHAIN_ENDPOINT\": \"https://api.smith.langchain.com\",\n", + " \"LANGCHAIN_PROJECT\": 
\"ConversationBufferWindowMemory\",\n", + " }\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can alternatively set `OPENAI_API_KEY` in `.env` file and load it.\n", + "\n", + "[Note] This is not necessary if you've already set `OPENAI_API_KEY` in previous steps." + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "True" + ] + }, + "execution_count": 4, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "from dotenv import load_dotenv\n", + "\n", + "load_dotenv(override=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Online Bank Account Opening Conversation Example\n", + "\n", + "This example demonstrates how to use `ConversationBufferWindowMemory` to simulate a virtual banking assistant conversation. The conversation flow shows a typical online bank account opening process, from initial greeting to account creation confirmation, while maintaining only the most recent interactions in memory." + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [], + "source": [ + "from langchain.memory import ConversationBufferWindowMemory\n", + "\n", + "memory = ConversationBufferWindowMemory(k=2, return_messages=True)\n", + "\n", + "memory.save_context(\n", + " inputs={\n", + " \"human\": \"Hello, I'd like to open a bank account online. How do I get started?\"\n", + " },\n", + " outputs={\n", + " \"ai\": \"Hello! I'm glad you want to open an account. First, could you please prepare your ID for verification?\"\n", + " },\n", + ")\n", + "memory.save_context(\n", + " inputs={\"human\": \"Yes, I have my ID ready. What should I do next?\"},\n", + " outputs={\n", + " \"ai\": \"Thank you. Please take clear photos of both sides of your ID and upload them. 
Then we'll proceed with the identity verification process.\"\n", + " },\n", + ")\n", + "memory.save_context(\n", + " inputs={\n", + " \"human\": \"I've uploaded the photos. How does the identity verification work?\"\n", + " },\n", + " outputs={\n", + " \"ai\": \"I've confirmed your uploaded photos. Now, please proceed with phone verification. You'll receive a verification code via text message - please enter that code.\"\n", + " },\n", + ")\n", + "memory.save_context(\n", + " inputs={\n", + " \"human\": \"I've entered the verification code. How do I proceed with opening the account?\"\n", + " },\n", + " outputs={\n", + " \"ai\": \"Your identity has been verified. Now, please select the type of account you want and enter the required information. You can choose the type of deposit, currency, and other options.\"\n", + " },\n", + ")\n", + "memory.save_context(\n", + " inputs={\"human\": \"I've entered all the information. What's the next step?\"},\n", + " outputs={\n", + " \"ai\": \"I've confirmed your information. The account opening process is almost complete. Finally, please agree to the terms of service and confirm the account opening.\"\n", + " },\n", + ")\n", + "memory.save_context(\n", + " inputs={\"human\": \"I've completed all the steps. Has the account been opened?\"},\n", + " outputs={\n", + " \"ai\": \"Yes, your account has been successfully opened. Your account number and related information have been sent to your registered email. Please let us know if you need any further assistance. Thank you!\"\n", + " },\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Retrieving Conversation History\n", + "\n", + "Let's examine the conversation history stored in memory using the `load_memory_variables()` method to verify our window-based memory retention." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[HumanMessage(content=\"I've entered all the information. What's the next step?\", additional_kwargs={}, response_metadata={}),\n", + " AIMessage(content=\"I've confirmed your information. The account opening process is almost complete. Finally, please agree to the terms of service and confirm the account opening.\", additional_kwargs={}, response_metadata={}),\n", + " HumanMessage(content=\"I've completed all the steps. Has the account been opened?\", additional_kwargs={}, response_metadata={}),\n", + " AIMessage(content='Yes, your account has been successfully opened. Your account number and related information have been sent to your registered email. Please let us know if you need any further assistance. Thank you!', additional_kwargs={}, response_metadata={})]" + ] + }, + "execution_count": 8, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Check the conversation history\n", + "memory.load_memory_variables({})[\"history\"]" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "py-test", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/05-Memory/03-ConversationTokenBufferMemory.ipynb b/05-Memory/03-ConversationTokenBufferMemory.ipynb new file mode 100644 index 000000000..655aad64b --- /dev/null +++ b/05-Memory/03-ConversationTokenBufferMemory.ipynb @@ -0,0 +1,370 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# ConversationTokenBufferMemory\n", + "\n", + "- Author: [Kenny Jung](https://www.linkedin.com/in/kwang-yong-jung)\n", + "- Design: 
[Kenny Jung](https://www.linkedin.com/in/kwang-yong-jung)\n", + "- Peer Review: [Teddy](https://github.com/teddylee777)\n", + "- This is a part of [LangChain Open Tutorial](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n", + "\n", + "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/gyjong/LangChain-OpenTutorial/blob/main/05-Memory/03-ConversationTokenBufferMemory.ipynb) [![Open in LangChain Academy](https://cdn.prod.website-files.com/65b8cd72835ceeacd4449a53/66e9eba12c7b7688aa3dbb5e_LCA-badge-green.svg)](https://academy.langchain.com/courses/take/intro-to-langgraph/lessons/58239937-lesson-2-sub-graphs)\n", + "\n", + "## Overview\n", + "\n", + "`ConversationTokenBufferMemory` stores recent conversation history in a memory buffer and decides when to flush older content based on **token length** rather than the number of interactions.\n", + "\n", + "- Key parameters:\n", + " - `max_token_limit`: Sets the maximum token length for storing conversation content\n", + " - `return_messages`: When True, returns the messages in chat format. 
When False, returns a string.\n", + " - `human_prefix`: Prefix to add before human messages (default: \"Human\")\n", + " - `ai_prefix`: Prefix to add before AI messages (default: \"AI\")\n", + "\n", + "\n", + "### Table of Contents\n", + "\n", + "- [Overview](#overview)\n", + "- [Environment Setup](#environment-setup)\n", + "- [Limit Maximum Token Length to 50](#limit-maximum-token-length-to-50)\n", + "- [Setting Maximum Token Length to 150](#setting-maximum-token-length-to-150)\n", + "\n", + "### References\n", + "\n", + "- [LangChain Python API Reference > langchain: 0.3.13 > memory > ConversationTokenBufferMemory](https://python.langchain.com/api_reference/langchain/memory/langchain.memory.token_buffer.ConversationTokenBufferMemory.html)\n", + "----" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Environment Setup\n", + "\n", + "Set up the environment. You may refer to [Environment Setup](https://wikidocs.net/257836) for more details.\n", + "\n", + "**[Note]**\n", + "- `langchain-opentutorial` is a package that provides easy-to-use environment setup, along with useful functions and utilities for tutorials.\n", + "- You can check out [`langchain-opentutorial`](https://github.com/LangChain-OpenTutorial/langchain-opentutorial-pypi) for more details." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [], + "source": [ + "%%capture --no-stderr\n", + "!pip install langchain-opentutorial" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], + "source": [ + "# Install required packages\n", + "from langchain_opentutorial import package\n", + "\n", + "package.install(\n", + " [\n", + " \"langsmith\",\n", + " \"langchain\",\n", + " \"langchain_core\",\n", + " \"langchain-anthropic\",\n", + " \"langchain_community\",\n", + " \"langchain_text_splitters\",\n", + " \"langchain_openai\",\n", + " ],\n", + " verbose=False,\n", + " upgrade=False,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Environment variables have been set successfully.\n" + ] + } + ], + "source": [ + "# Set environment variables\n", + "from langchain_opentutorial import set_env\n", + "\n", + "set_env(\n", + " {\n", + " \"OPENAI_API_KEY\": \"\",\n", + " \"LANGCHAIN_API_KEY\": \"\",\n", + " \"LANGCHAIN_TRACING_V2\": \"true\",\n", + " \"LANGCHAIN_ENDPOINT\": \"https://api.smith.langchain.com\",\n", + " \"LANGCHAIN_PROJECT\": \"ConversationTokenBufferMemory\",\n", + " }\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can alternatively set `OPENAI_API_KEY` in a `.env` file and load it.\n", + "\n", + "[Note] This is not necessary if you've already set `OPENAI_API_KEY` in previous steps." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "True" + ] + }, + "execution_count": 4, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "from dotenv import load_dotenv\n", + "\n", + "load_dotenv(override=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Limit Maximum Token Length to 50\n", + "\n", + "This section demonstrates how to limit the conversation memory to 50 tokens." + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/var/folders/wn/fz0vz0td5bn8wl7wkrv30x100000gn/T/ipykernel_39143/361976439.py:9: LangChainDeprecationWarning: Please see the migration guide at: https://python.langchain.com/docs/versions/migrating_memory/\n", + " memory = ConversationTokenBufferMemory(\n" + ] + } + ], + "source": [ + "from langchain.memory import ConversationTokenBufferMemory\n", + "from langchain_openai import ChatOpenAI\n", + "\n", + "\n", + "# Create LLM model\n", + "llm = ChatOpenAI(model_name=\"gpt-4o-mini\")\n", + "\n", + "# Configure memory\n", + "memory = ConversationTokenBufferMemory(\n", + " llm=llm,\n", + " max_token_limit=50,  # Limit maximum token length to 50\n", + " return_messages=True,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [], + "source": [ + "# Add arbitrary conversations\n", + "memory.save_context(\n", + " inputs={\n", + " \"human\": \"Hello, I recently purchased a machine tool from your company. Could you tell me how to install it?\"\n", + " },\n", + " outputs={\n", + " \"ai\": \"Hello! Thank you for your purchase. Could you please tell me the machine model number?\"\n", + " },\n", + ")\n", + "memory.save_context(\n", + " inputs={\"human\": \"Yes, the model number is XG-200.\"},\n", + " outputs={\n", + " \"ai\": \"Thank you. 
I'll help you with the installation guide for the XG-200 model. First, please check the power supply status at the installation site. The machine requires 220V power.\"\n", + " },\n", + ")\n", + "memory.save_context(\n", + " inputs={\"human\": \"I've checked the power. What's the next step?\"},\n", + " outputs={\n", + " \"ai\": \"Good. Next, please place the machine on a flat and stable surface. Then, proceed with cable connections according to the provided user manual.\"\n", + " },\n", + ")\n", + "memory.save_context(\n", + " inputs={\"human\": \"How do I make the connections?\"},\n", + " outputs={\n", + " \"ai\": \"Please refer to page 5 of the manual. There are detailed instructions for cable connections. If you have any difficulties with this process, I'll be happy to help further.\"\n", + " },\n", + ")\n", + "memory.save_context(\n", + " inputs={\"human\": \"What should I do after the installation is complete?\"},\n", + " outputs={\n", + " \"ai\": \"Once the installation is complete, please turn on the power and perform the initial operation test. The test procedure is explained on page 10 of the manual. If there are any issues with the machine or if you need additional support, please don't hesitate to contact us.\"\n", + " },\n", + ")\n", + "memory.save_context(\n", + " inputs={\"human\": \"Thank you, this has been very helpful!\"},\n", + " outputs={\n", + " \"ai\": \"We're always ready to help. If you have any additional questions or need support, please feel free to ask. Have a great day!\"\n", + " },\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[HumanMessage(content='Thank you, this has been very helpful!', additional_kwargs={}, response_metadata={}),\n", + " AIMessage(content=\"We're always ready to help. If you have any additional questions or need support, please feel free to ask. 
Have a great day!\", additional_kwargs={}, response_metadata={})]" + ] + }, + "execution_count": 7, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Check the conversation history\n", + "memory.load_memory_variables({})[\"history\"]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Setting Maximum Token Length to 150\n", + "\n", + "Let's check how the conversation is stored when we set the maximum token length to **150**." + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [], + "source": [ + "# Memory configuration\n", + "memory = ConversationTokenBufferMemory(\n", + " llm=llm,\n", + " max_token_limit=150,  # Limit maximum token length to 150\n", + " return_messages=True,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [], + "source": [ + "# Add arbitrary conversations\n", + "memory.save_context(\n", + " inputs={\n", + " \"human\": \"Hello, I recently purchased a machine tool from your company. Could you tell me how to install it?\"\n", + " },\n", + " outputs={\n", + " \"ai\": \"Hello! Thank you for your purchase. Could you please tell me the machine model number?\"\n", + " },\n", + ")\n", + "memory.save_context(\n", + " inputs={\"human\": \"Yes, the model number is XG-200.\"},\n", + " outputs={\n", + " \"ai\": \"Thank you. I'll help you with the installation guide for the XG-200 model. First, please check the power supply status at the installation site. The machine requires 220V power.\"\n", + " },\n", + ")\n", + "memory.save_context(\n", + " inputs={\"human\": \"I've checked the power. What's the next step?\"},\n", + " outputs={\n", + " \"ai\": \"Good. Next, please place the machine on a flat and stable surface. 
Then, proceed with cable connections according to the provided user manual.\"\n", + " },\n", + ")\n", + "memory.save_context(\n", + " inputs={\"human\": \"How do I make the connections?\"},\n", + " outputs={\n", + " \"ai\": \"Please refer to page 5 of the manual. There are detailed instructions for cable connections. If you have any difficulties with this process, I'll be happy to help further.\"\n", + " },\n", + ")\n", + "memory.save_context(\n", + " inputs={\"human\": \"What should I do after the installation is complete?\"},\n", + " outputs={\n", + " \"ai\": \"Once the installation is complete, please turn on the power and perform the initial operation test. The test procedure is explained on page 10 of the manual. If there are any issues with the machine or if you need additional support, please don't hesitate to contact us.\"\n", + " },\n", + ")\n", + "memory.save_context(\n", + " inputs={\"human\": \"Thank you, this has been very helpful!\"},\n", + " outputs={\n", + " \"ai\": \"We're always ready to help. If you have any additional questions or need support, please feel free to ask. Have a great day!\"\n", + " },\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[HumanMessage(content='What should I do after the installation is complete?', additional_kwargs={}, response_metadata={}),\n", + " AIMessage(content=\"Once the installation is complete, please turn on the power and perform the initial operation test. The test procedure is explained on page 10 of the manual. If there are any issues with the machine or if you need additional support, please don't hesitate to contact us.\", additional_kwargs={}, response_metadata={}),\n", + " HumanMessage(content='Thank you, this has been very helpful!', additional_kwargs={}, response_metadata={}),\n", + " AIMessage(content=\"We're always ready to help. If you have any additional questions or need support, please feel free to ask. 
Have a great day!\", additional_kwargs={}, response_metadata={})]" + ] + }, + "execution_count": 10, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Check the conversation history\n", + "memory.load_memory_variables({})[\"history\"]" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "py-test", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +}