diff --git a/01-BASIC/01. Follow the Installation Video_Mac.ipynb b/01-BASIC/01. Follow the Installation Video_Mac.ipynb new file mode 100644 index 000000000..6f95f1bc2 --- /dev/null +++ b/01-BASIC/01. Follow the Installation Video_Mac.ipynb @@ -0,0 +1,311 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "635d8ebb", + "metadata": {}, + "source": [ + "# Environment Setup (Mac)\n", + "\n", + "- Author: [Jeongho Shin](https://github.com/ThePurpleCollar)\n", + "- Design:\n", + "- Peer Review: [Jeongki Park](https://github.com/jeongkpa), [Wooseok-Jeong](https://github.com/jeong-wooseok)\n", + "- This is a part of [LangChain Open Tutorial](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n", + "\n", + "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain-academy/blob/main/module-4/sub-graph.ipynb) [![Open in LangChain Academy](https://cdn.prod.website-files.com/65b8cd72835ceeacd4449a53/66e9eba12c7b7688aa3dbb5e_LCA-badge-green.svg)](https://academy.langchain.com/courses/take/intro-to-langgraph/lessons/58239937-lesson-2-sub-graphs)\n", + "\n", + "## Overview\n", + "\n", + "This guide provides a comprehensive setup process tailored for developing with LangChain on a Mac. 
LangChain is a framework for building applications powered by large language models (LLMs), and this guide ensures your environment is fully optimized for seamless integration and development.\n", + "\n", + "## Table of Contents\n", + "\n", + "- [Overview](#overview)\n", + "- [Opening Terminal](#opening-terminal)\n", + "- [Installing Homebrew](#installing-homebrew)\n", + "- [Verifying Xcode Installation](#verifying-xcode-installation)\n", + "- [Downloading Practice Code](#downloading-practice-code)\n", + "- [Installing Pyenv](#installing-pyenv)\n", + "- [Installing Python](#installing-python)\n", + "- [Installing Poetry](#installing-poetry)\n", + "- [Installing Visual Studio Code](#installing-visual-studio-code)\n", + "\n", + "----" + ] + }, + { + "cell_type": "markdown", + "id": "0a630f46", + "metadata": {}, + "source": [ + "## Opening Terminal\n", + "- Open Spotlight Search by pressing `Command + Space`.\n", + "\n", + "- Search for **`terminal`** and press Enter to open the Terminal." + ] + }, + { + "cell_type": "markdown", + "id": "edd88e38", + "metadata": {}, + "source": [ + "## Installing Homebrew\n", + "\n", + "### Running the Homebrew Installation Command\n", + "- Run the following command in the Terminal to install Homebrew:\n", + " ```bash\n", + " /bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\"\n", + " ```\n", + "\n", + "- Enter your account password when prompted.\n", + "
\n", + " ![Password](assets/01-Follow-the-Installation-Video_Mac-01.png)\n", + " \n", + "- Press ENTER to proceed with the installation." + ] + }, + { + "cell_type": "markdown", + "id": "913b58a6", + "metadata": {}, + "source": [ + "### Configuring Homebrew Environment\n", + "\n", + "- Run the following command to check your username:\n", + " ```bash\n", + " whoami\n", + " ```\n", + "\n", + " ![Username Check](assets/01-Follow-the-Installation-Video_Mac-02.png)\n", + "\n", + "- Check the installation path of Homebrew:\n", + " ```bash\n", + " which brew\n", + " ```\n", + "\n", + "- Verify the installation path of Homebrew, replacing `<username>` below with the username printed by `whoami`:\n", + " - **Case 1** : If the output is `/opt/homebrew/bin/brew`, use the following command to configure the environment:\n", + " ```bash\n", + " echo 'eval \"$(/opt/homebrew/bin/brew shellenv)\"' >> /Users/<username>/.zprofile\n", + " ```\n", + "\n", + " - **Case 2** : If the output is `/usr/local/bin/brew`, use the following command:\n", + " ```bash\n", + " echo 'eval \"$(/usr/local/bin/brew shellenv)\"' >> /Users/<username>/.zprofile\n", + " ```" + ] + }, + { + "cell_type": "markdown", + "id": "05b6427c", + "metadata": {}, + "source": [ + "## Verifying Xcode Installation\n", + "\n", + "To install the Xcode Command Line Tools (the command will report if they are already installed), run the following in your terminal:\n", + "\n", + "```bash\n", + "xcode-select --install\n", + "```\n" + ] + }, + { + "cell_type": "markdown", + "id": "070b418b", + "metadata": {}, + "source": [ + "## Downloading Practice Code\n", + "\n", + "[Reference] Practice code repository: [LangChain Practice Code](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n", + "\n", + "\n", + "### Verifying Git Installation\n", + "\n", + "- Check if Git is installed by running the following command in your terminal:\n", + " ```bash\n", + " git --version\n", + " ```\n", + "\n", + "- If the command outputs the Git version, you already have Git installed, and no further action is required.\n", + "\n", + "- If Git is not installed, you can install it using Homebrew:\n", + " ```bash\n", + 
" brew install git\n", + " ```\n", + "\n", + "- After installation, verify Git again:\n", + " ```bash\n", + " git --version\n", + " ```\n" + ] + }, + { + "cell_type": "markdown", + "id": "3c41318e", + "metadata": {}, + "source": [ + "### Downloading Practice Code with Git\n", + "- Navigate to the `Documents` folder (or any other folder where you want to download the practice code). Use the following command:\n", + " ```bash\n", + " cd Documents\n", + " ```\n", + "- If you want to use a different directory, replace `Documents` with your desired path.\n", + "\n", + "- Use the `git` command to download the practice code from the repository. Run the following command in your terminal:\n", + " ```bash\n", + " git clone https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial.git\n", + " ```\n", + "\n", + " ![Git Clone](assets/01-Follow-the-Installation-Video_Mac-03.png)\n", + "\n", + "- The repository will be cloned into a folder named `LangChain-OpenTutorial` within the selected directory.\n" + ] + }, + { + "cell_type": "markdown", + "id": "b9df2082", + "metadata": {}, + "source": [ + "## Installing Pyenv\n", + "\n", + "#### Reference\n", + "For detailed documentation, refer to the [Pyenv GitHub Page](https://github.com/pyenv/pyenv?tab=readme-ov-file#understanding-python-version-selection).\n", + "\n", + "---\n", + "\n", + "#### Steps to Install Pyenv\n", + "\n", + "1. Update Homebrew and install `pyenv` using the following commands:\n", + " ```bash\n", + " brew update\n", + " brew install pyenv\n", + " ```\n", + "\n", + "2. Add the following lines to your `~/.zshrc` file. Copy and paste the commands into your terminal:\n", + " ```bash\n", + " echo 'export PYENV_ROOT=\"$HOME/.pyenv\"' >> ~/.zshrc\n", + " echo '[[ -d $PYENV_ROOT/bin ]] && export PATH=\"$PYENV_ROOT/bin:$PATH\"' >> ~/.zshrc\n", + " echo 'eval \"$(pyenv init -)\"' >> ~/.zshrc\n", + " ```\n", + "\n", + "3. 
If you encounter a permissions error, resolve it by running these commands:\n", + " ```bash\n", + " sudo chown $USER ~/.zshrc\n", + " echo 'export PYENV_ROOT=\"$HOME/.pyenv\"' >> ~/.zshrc\n", + " echo '[[ -d $PYENV_ROOT/bin ]] && export PATH=\"$PYENV_ROOT/bin:$PATH\"' >> ~/.zshrc\n", + " echo 'eval \"$(pyenv init -)\"' >> ~/.zshrc\n", + " ```\n", + "\n", + "4. Restart the terminal shell to apply the changes:\n", + " ```bash\n", + " exec \"$SHELL\"\n", + " ```\n" + ] + }, + { + "cell_type": "markdown", + "id": "25f630fc", + "metadata": {}, + "source": [ + "## Installing Python\n", + "\n", + "- Use `pyenv` to install Python 3.11:\n", + " ```bash\n", + " pyenv install 3.11\n", + " ```\n", + "\n", + "- Set Python 3.11 as the global Python version:\n", + " ```bash\n", + " pyenv global 3.11\n", + " ```\n", + "\n", + "- Restart the shell to ensure the changes take effect:\n", + " ```bash\n", + " exec zsh\n", + " ```\n", + "\n", + "- Verify the installed Python version:\n", + " ```bash\n", + " python --version\n", + " ```\n", + "\n", + "- Ensure the output shows Python 3.11.x." 
+ ] + }, + { + "cell_type": "markdown", + "id": "8ad5c336", + "metadata": {}, + "source": [ + "## Installing Poetry\n", + "\n", + "#### Reference\n", + "For detailed documentation, refer to the [Poetry Official Documentation](https://python-poetry.org/docs/#installing-with-the-official-installer).\n", + "\n", + "---\n", + "\n", + "#### Steps to Install and Configure Poetry\n", + "\n", + "- Install Poetry using `pip3`:\n", + " ```bash\n", + " pip3 install poetry\n", + " ```\n", + "\n", + "- Set up a Python virtual environment using Poetry. Run this inside the cloned `LangChain-OpenTutorial` folder so Poetry can find the project's `pyproject.toml`:\n", + " ```bash\n", + " poetry shell\n", + " ```\n", + "\n", + "- Update all Python dependencies in the project:\n", + " ```bash\n", + " poetry update\n", + " ```\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "id": "d539f569", + "metadata": {}, + "source": [ + "## Installing Visual Studio Code\n", + "\n", + "- **Download Visual Studio Code**:\n", + " - Visit the [Visual Studio Code Download Page](https://code.visualstudio.com/download).\n", + " - Download the installer for your operating system.\n", + "\n", + "- **Install Visual Studio Code**:\n", + " - Follow the installation instructions for your system.\n", + " - On macOS, drag the application to the `Applications` folder.\n", + "\n", + "- **Install Extensions**:\n", + " - Open Visual Studio Code.\n", + " - Click on the **Extensions** icon on the left sidebar.\n", + "\n", + " ![Extensions Icon](assets/01-Follow-the-Installation-Video_Mac-04.png)\n", + "\n", + " - Search for **\"python\"** in the Extensions Marketplace and install it.\n", + "\n", + " ![Python Extension](assets/01-Follow-the-Installation-Video_Mac-05.png)\n", + "\n", + " - Search for **\"jupyter\"** in the Extensions Marketplace and install it.\n", + "\n", + " ![Jupyter Extension](assets/01-Follow-the-Installation-Video_Mac-06.png)\n", + "\n", + "- **Restart Visual Studio Code**:\n", + " - After installing the extensions, restart Visual Studio Code to apply the changes.\n", + "\n", + "- **Select Python Environment**:\n", + " - Click 
on **\"Select Kernel\"** in the top-right corner of Visual Studio Code.\n", + " - Choose the Python virtual environment you set up earlier.\n", + "\n", + " - **Note**: If your environment does not appear in the list, restart Visual Studio Code again.\n", + "\n", + "---\n", + "\n", + "Now, Visual Studio Code is fully set up and ready for development with Python and Jupyter support.\n" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "langchain-opentutorial-i-KKkGhc-py3.11", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/01-BASIC/assets/01-Follow-the-Installation-Video_Mac-01.png b/01-BASIC/assets/01-Follow-the-Installation-Video_Mac-01.png new file mode 100644 index 000000000..bdb86f9d0 Binary files /dev/null and b/01-BASIC/assets/01-Follow-the-Installation-Video_Mac-01.png differ diff --git a/01-BASIC/assets/01-Follow-the-Installation-Video_Mac-02.png b/01-BASIC/assets/01-Follow-the-Installation-Video_Mac-02.png new file mode 100644 index 000000000..1eddc26b9 Binary files /dev/null and b/01-BASIC/assets/01-Follow-the-Installation-Video_Mac-02.png differ diff --git a/01-BASIC/assets/01-Follow-the-Installation-Video_Mac-03.png b/01-BASIC/assets/01-Follow-the-Installation-Video_Mac-03.png new file mode 100644 index 000000000..a19ea18c2 Binary files /dev/null and b/01-BASIC/assets/01-Follow-the-Installation-Video_Mac-03.png differ diff --git a/01-BASIC/assets/01-Follow-the-Installation-Video_Mac-04.png b/01-BASIC/assets/01-Follow-the-Installation-Video_Mac-04.png new file mode 100644 index 000000000..5e3fe0fb0 Binary files /dev/null and b/01-BASIC/assets/01-Follow-the-Installation-Video_Mac-04.png differ diff --git 
a/01-BASIC/assets/01-Follow-the-Installation-Video_Mac-05.png b/01-BASIC/assets/01-Follow-the-Installation-Video_Mac-05.png new file mode 100644 index 000000000..760a78b7f Binary files /dev/null and b/01-BASIC/assets/01-Follow-the-Installation-Video_Mac-05.png differ diff --git a/01-BASIC/assets/01-Follow-the-Installation-Video_Mac-06.png b/01-BASIC/assets/01-Follow-the-Installation-Video_Mac-06.png new file mode 100644 index 000000000..ac7a91c62 Binary files /dev/null and b/01-BASIC/assets/01-Follow-the-Installation-Video_Mac-06.png differ diff --git a/05-Memory/02-ConversationBufferWindowMemory.ipynb b/05-Memory/02-ConversationBufferWindowMemory.ipynb deleted file mode 100644 index 6e6296ffb..000000000 --- a/05-Memory/02-ConversationBufferWindowMemory.ipynb +++ /dev/null @@ -1,262 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# ConversationBufferWindowMemory\n", - "\n", - "- Author: [Kenny Jung](https://www.linkedin.com/in/kwang-yong-jung)\n", - "- Design: [Kenny Jung](https://www.linkedin.com/in/kwang-yong-jung)\n", - "- Peer Review: [Teddy](https://github.com/teddylee777)\n", - "- This is a part of [LangChain Open Tutorial](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n", - "\n", - "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/gyjong/LangChain-OpenTutorial/blob/main/05-Memory/02-ConversationBufferWindowMemory.ipynb) [![Open in LangChain Academy](https://cdn.prod.website-files.com/65b8cd72835ceeacd4449a53/66e9eba12c7b7688aa3dbb5e_LCA-badge-green.svg)](https://academy.langchain.com/courses/take/intro-to-langgraph/lessons/58239937-lesson-2-sub-graphs)\n", - "\n", - "## Overview\n", - "\n", - "`ConversationBufferWindowMemory` maintains a list of conversation interactions over time.\n", - "\n", - "In this case, `ConversationBufferWindowMemory` uses only the **most recent K** interactions instead of utilizing all conversation 
content.\n", - "\n", - "This can be useful for maintaining a sliding window of the most recent interactions to prevent the buffer from becoming too large.\n", - "\n", - "\n", - "### Table of Contents\n", - "\n", - "- [Overview](#overview)\n", - "- [Environement Setup](#environment-setup)\n", - "- [Online Bank Account Opening Conversation Example](#online-bank-account-opening-conversation-example)\n", - "- [Retrieving Conversation History](#retrieving-conversation-history)\n", - "\n", - "### References\n", - "\n", - "- [LangChain Python API Reference > langchain: 0.3.13 > memory > ConversationBufferWindowMemory](https://python.langchain.com/api_reference/langchain/memory/langchain.memory.buffer_window.ConversationBufferWindowMemory.html)\n", - "----" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Environment Setup\n", - "\n", - "Set up the environment. You may refer to [Environment Setup](https://wikidocs.net/257836) for more details.\n", - "\n", - "**[Note]**\n", - "- `langchain-opentutorial` is a package that provides a set of easy-to-use environment setup, useful functions and utilities for tutorials. \n", - "- You can checkout the [`langchain-opentutorial`](https://github.com/LangChain-OpenTutorial/langchain-opentutorial-pypi) for more details." 
- ] - }, - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [], - "source": [ - "%%capture --no-stderr\n", - "!pip install langchain-opentutorial" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": {}, - "outputs": [], - "source": [ - "# Install required packages\n", - "from langchain_opentutorial import package\n", - "\n", - "package.install(\n", - " [\n", - " \"langsmith\",\n", - " \"langchain\",\n", - " \"langchain_core\",\n", - " \"langchain-anthropic\",\n", - " \"langchain_community\",\n", - " \"langchain_text_splitters\",\n", - " \"langchain_openai\",\n", - " ],\n", - " verbose=False,\n", - " upgrade=False,\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Environment variables have been set successfully.\n" - ] - } - ], - "source": [ - "# Set environment variables\n", - "from langchain_opentutorial import set_env\n", - "\n", - "set_env(\n", - " {\n", - " \"OPENAI_API_KEY\": \"\",\n", - " \"LANGCHAIN_API_KEY\": \"\",\n", - " \"LANGCHAIN_TRACING_V2\": \"true\",\n", - " \"LANGCHAIN_ENDPOINT\": \"https://api.smith.langchain.com\",\n", - " \"LANGCHAIN_PROJECT\": \"ConversationBufferWindowMemory\",\n", - " }\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "You can alternatively set `OPENAI_API_KEY` in `.env` file and load it.\n", - "\n", - "[Note] This is not necessary if you've already set `OPENAI_API_KEY` in previous steps." 
- ] - }, - { - "cell_type": "code", - "execution_count": 4, - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "True" - ] - }, - "execution_count": 4, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "from dotenv import load_dotenv\n", - "\n", - "load_dotenv(override=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Online Bank Account Opening Conversation Example\n", - "\n", - "This example demonstrates how to use `ConversationBufferWindowMemory` to simulate a virtual banking assistant conversation. The conversation flow shows a typical online bank account opening process, from initial greeting to account creation confirmation, while maintaining only the most recent interactions in memory." - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "metadata": {}, - "outputs": [], - "source": [ - "from langchain.memory import ConversationBufferWindowMemory\n", - "\n", - "memory = ConversationBufferWindowMemory(k=2, return_messages=True)\n", - "\n", - "memory.save_context(\n", - " inputs={\n", - " \"human\": \"Hello, I'd like to open a bank account online. How do I get started?\"\n", - " },\n", - " outputs={\n", - " \"ai\": \"Hello! I'm glad you want to open an account. First, could you please prepare your ID for verification?\"\n", - " },\n", - ")\n", - "memory.save_context(\n", - " inputs={\"human\": \"Yes, I have my ID ready. What should I do next?\"},\n", - " outputs={\n", - " \"ai\": \"Thank you. Please take clear photos of both sides of your ID and upload them. Then we'll proceed with the identity verification process.\"\n", - " },\n", - ")\n", - "memory.save_context(\n", - " inputs={\n", - " \"human\": \"I've uploaded the photos. How does the identity verification work?\"\n", - " },\n", - " outputs={\n", - " \"ai\": \"I've confirmed your uploaded photos. Now, please proceed with phone verification. 
You'll receive a verification code via text message - please enter that code.\"\n", - " },\n", - ")\n", - "memory.save_context(\n", - " inputs={\n", - " \"human\": \"I've entered the verification code. How do I proceed with opening the account?\"\n", - " },\n", - " outputs={\n", - " \"ai\": \"Your identity has been verified. Now, please select the type of account you want and enter the required information. You can choose the type of deposit, currency, and other options.\"\n", - " },\n", - ")\n", - "memory.save_context(\n", - " inputs={\"human\": \"I've entered all the information. What's the next step?\"},\n", - " outputs={\n", - " \"ai\": \"I've confirmed your information. The account opening process is almost complete. Finally, please agree to the terms of service and confirm the account opening.\"\n", - " },\n", - ")\n", - "memory.save_context(\n", - " inputs={\"human\": \"I've completed all the steps. Has the account been opened?\"},\n", - " outputs={\n", - " \"ai\": \"Yes, your account has been successfully opened. Your account number and related information have been sent to your registered email. Please let us know if you need any further assistance. Thank you!\"\n", - " },\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Retrieving Conversation History\n", - "\n", - "Let's examine the conversation history stored in memory using the `load_memory_variables()` method to verify our window-based memory retention." - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "[HumanMessage(content=\"I've entered all the information. What's the next step?\", additional_kwargs={}, response_metadata={}),\n", - " AIMessage(content=\"I've confirmed your information. The account opening process is almost complete. 
Finally, please agree to the terms of service and confirm the account opening.\", additional_kwargs={}, response_metadata={}),\n", - " HumanMessage(content=\"I've completed all the steps. Has the account been opened?\", additional_kwargs={}, response_metadata={}),\n", - " AIMessage(content='Yes, your account has been successfully opened. Your account number and related information have been sent to your registered email. Please let us know if you need any further assistance. Thank you!', additional_kwargs={}, response_metadata={})]" - ] - }, - "execution_count": 8, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "# Check the conversation history\n", - "memory.load_memory_variables({})[\"history\"]" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "py-test", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.11.11" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/05-Memory/03-ConversationTokenBufferMemory.ipynb b/05-Memory/03-ConversationTokenBufferMemory.ipynb deleted file mode 100644 index 733bc8107..000000000 --- a/05-Memory/03-ConversationTokenBufferMemory.ipynb +++ /dev/null @@ -1,377 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# ConversationTokenBufferMemory\n", - "\n", - "- Author: [Kenny Jung](https://www.linkedin.com/in/kwang-yong-jung)\n", - "- Design: [Kenny Jung](https://www.linkedin.com/in/kwang-yong-jung)\n", - "- Peer Review: [Teddy](https://github.com/teddylee777)\n", - "- This is a part of [LangChain Open Tutorial](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n", - "\n", - "[![Open in 
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/gyjong/LangChain-OpenTutorial/blob/main/05-Memory/03-ConversationTokenBufferMemory.ipynb) [![Open in LangChain Academy](https://cdn.prod.website-files.com/65b8cd72835ceeacd4449a53/66e9eba12c7b7688aa3dbb5e_LCA-badge-green.svg)](https://academy.langchain.com/courses/take/intro-to-langgraph/lessons/58239937-lesson-2-sub-graphs)\n", - "\n", - "## Overview\n", - "\n", - "`ConversationTokenBufferMemory` stores recent conversation history in a buffer memory and determines when to flush conversation content based on **token length** rather than the number of conversations.\n", - "\n", - "Key parameters:\n", - "- `max_token_limit`: Sets the maximum token length for storing conversation content\n", - "- `return_messages`: When True, returns the messages in chat format. When False, returns a string\n", - "- `human_prefix`: Prefix to add before human messages (default: \"Human\")\n", - "- `ai_prefix`: Prefix to add before AI messages (default: \"AI\")\n", - "\n", - "\n", - "### Table of Contents\n", - "\n", - "- [Overview](#overview)\n", - "- [Environement Setup](#environment-setup)\n", - "- [Limit Maximum Token Length to 50](#limit-maximum-token-length-to-50)\n", - "- [Setting Maximum Token Length to 150](#setting-maximum-token-length-to-150)\n", - "\n", - "### References\n", - "\n", - "- [LangChain Python API Reference > langchain: 0.3.13 > memory > ConversationTokenBufferMemory](https://python.langchain.com/api_reference/langchain/memory/langchain.memory.token_buffer.ConversationTokenBufferMemory.html)\n", - "----" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Environment Setup\n", - "\n", - "Set up the environment. 
You may refer to [Environment Setup](https://wikidocs.net/257836) for more details.\n", - "\n", - "**[Note]**\n", - "- `langchain-opentutorial` is a package that provides a set of easy-to-use environment setup, useful functions and utilities for tutorials. \n", - "- You can checkout the [`langchain-opentutorial`](https://github.com/LangChain-OpenTutorial/langchain-opentutorial-pypi) for more details." - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [], - "source": [ - "%%capture --no-stderr\n", - "!pip install langchain-opentutorial" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": {}, - "outputs": [], - "source": [ - "# Install required packages\n", - "from langchain_opentutorial import package\n", - "\n", - "package.install(\n", - " [\n", - " \"langsmith\",\n", - " \"langchain\",\n", - " \"langchain_core\",\n", - " \"langchain-anthropic\",\n", - " \"langchain_community\",\n", - " \"langchain_text_splitters\",\n", - " \"langchain_openai\",\n", - " ],\n", - " verbose=False,\n", - " upgrade=False,\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Environment variables have been set successfully.\n" - ] - } - ], - "source": [ - "# Set environment variables\n", - "from langchain_opentutorial import set_env\n", - "\n", - "set_env(\n", - " {\n", - " \"OPENAI_API_KEY\": \"\",\n", - " \"LANGCHAIN_API_KEY\": \"\",\n", - " \"LANGCHAIN_TRACING_V2\": \"true\",\n", - " \"LANGCHAIN_ENDPOINT\": \"https://api.smith.langchain.com\",\n", - " \"LANGCHAIN_PROJECT\": \"ConversationTokenBufferMemory\",\n", - " }\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "You can alternatively set `OPENAI_API_KEY` in `.env` file and load it.\n", - "\n", - "[Note] This is not necessary if you've already set `OPENAI_API_KEY` in previous steps." 
- ] - }, - { - "cell_type": "code", - "execution_count": 4, - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "True" - ] - }, - "execution_count": 4, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "from dotenv import load_dotenv\n", - "\n", - "load_dotenv(override=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Limit Maximum Token Length to 50\n", - "\n", - "This section demonstrates how to limit the conversation memory to 50 tokens" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [ - { - "ename": "OpenAIError", - "evalue": "The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable", - "output_type": "error", - "traceback": [ - "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", - "\u001b[0;31mOpenAIError\u001b[0m Traceback (most recent call last)", - "Cell \u001b[0;32mIn[1], line 6\u001b[0m\n\u001b[1;32m 2\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_openai\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m ChatOpenAI\n\u001b[1;32m 5\u001b[0m \u001b[38;5;66;03m# Create LLM model\u001b[39;00m\n\u001b[0;32m----> 6\u001b[0m llm \u001b[38;5;241m=\u001b[39m \u001b[43mChatOpenAI\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmodel_name\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mgpt-4o-mini\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m 8\u001b[0m \u001b[38;5;66;03m# Configure memory\u001b[39;00m\n\u001b[1;32m 9\u001b[0m memory \u001b[38;5;241m=\u001b[39m ConversationTokenBufferMemory(\n\u001b[1;32m 10\u001b[0m llm\u001b[38;5;241m=\u001b[39mllm,\n\u001b[1;32m 11\u001b[0m max_token_limit\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m50\u001b[39m,\n\u001b[1;32m 12\u001b[0m 
return_messages\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mTrue\u001b[39;00m, \u001b[38;5;66;03m# Limit maximum token length to 50\u001b[39;00m\n\u001b[1;32m 13\u001b[0m )\n", - "File \u001b[0;32m~/Library/Caches/pypoetry/virtualenvs/langchain-opentutorial-0Vwtx6mm-py3.11/lib/python3.11/site-packages/langchain_core/load/serializable.py:125\u001b[0m, in \u001b[0;36mSerializable.__init__\u001b[0;34m(self, *args, **kwargs)\u001b[0m\n\u001b[1;32m 123\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m__init__\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;241m*\u001b[39margs: Any, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs: Any) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m 124\u001b[0m \u001b[38;5;250m \u001b[39m\u001b[38;5;124;03m\"\"\"\"\"\"\u001b[39;00m\n\u001b[0;32m--> 125\u001b[0m \u001b[38;5;28;43msuper\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[38;5;21;43m__init__\u001b[39;49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n", - " \u001b[0;31m[... 
skipping hidden 1 frame]\u001b[0m\n", - "File \u001b[0;32m~/Library/Caches/pypoetry/virtualenvs/langchain-opentutorial-0Vwtx6mm-py3.11/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:578\u001b[0m, in \u001b[0;36mBaseChatOpenAI.validate_environment\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 576\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mhttp_client \u001b[38;5;241m=\u001b[39m httpx\u001b[38;5;241m.\u001b[39mClient(proxy\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mopenai_proxy)\n\u001b[1;32m 577\u001b[0m sync_specific \u001b[38;5;241m=\u001b[39m {\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mhttp_client\u001b[39m\u001b[38;5;124m\"\u001b[39m: \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mhttp_client}\n\u001b[0;32m--> 578\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mroot_client \u001b[38;5;241m=\u001b[39m \u001b[43mopenai\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mOpenAI\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mclient_params\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43msync_specific\u001b[49m\u001b[43m)\u001b[49m \u001b[38;5;66;03m# type: ignore[arg-type]\u001b[39;00m\n\u001b[1;32m 579\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mclient \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mroot_client\u001b[38;5;241m.\u001b[39mchat\u001b[38;5;241m.\u001b[39mcompletions\n\u001b[1;32m 580\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39masync_client:\n", - "File \u001b[0;32m~/Library/Caches/pypoetry/virtualenvs/langchain-opentutorial-0Vwtx6mm-py3.11/lib/python3.11/site-packages/openai/_client.py:110\u001b[0m, in \u001b[0;36mOpenAI.__init__\u001b[0;34m(self, api_key, organization, 
project, base_url, websocket_base_url, timeout, max_retries, default_headers, default_query, http_client, _strict_response_validation)\u001b[0m\n\u001b[1;32m 108\u001b[0m api_key \u001b[38;5;241m=\u001b[39m os\u001b[38;5;241m.\u001b[39menviron\u001b[38;5;241m.\u001b[39mget(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mOPENAI_API_KEY\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[1;32m 109\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m api_key \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[0;32m--> 110\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m OpenAIError(\n\u001b[1;32m 111\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mThe api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 112\u001b[0m )\n\u001b[1;32m 113\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mapi_key \u001b[38;5;241m=\u001b[39m api_key\n\u001b[1;32m 115\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m organization \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n", - "\u001b[0;31mOpenAIError\u001b[0m: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable" - ] - } - ], - "source": [ - "from langchain.memory import ConversationTokenBufferMemory\n", - "from langchain_openai import ChatOpenAI\n", - "\n", - "\n", - "# Create LLM model\n", - "llm = ChatOpenAI(model_name=\"gpt-4o-mini\")\n", - "\n", - "# Configure memory\n", - "memory = ConversationTokenBufferMemory(\n", - " llm=llm,\n", - " max_token_limit=50,\n", - " return_messages=True, # Limit maximum token length to 50\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "metadata": {}, - "outputs": [], - "source": [ - "# Add arbitrary conversations\n", - "memory.save_context(\n", - " inputs={\n", - " \"human\": \"Hello, I recently purchased a machine tool from 
your company. Could you tell me how to install it?\"\n", - " },\n", - " outputs={\n", - " \"ai\": \"Hello! Thank you for your purchase. Could you please tell me the machine model number?\"\n", - " },\n", - ")\n", - "memory.save_context(\n", - " inputs={\"human\": \"Yes, the model number is XG-200.\"},\n", - " outputs={\n", - " \"ai\": \"Thank you. I'll help you with the installation guide for the XG-200 model. First, please check the power supply status at the installation site. The machine requires 220V power.\"\n", - " },\n", - ")\n", - "memory.save_context(\n", - " inputs={\"human\": \"I've checked the power. What's the next step?\"},\n", - " outputs={\n", - " \"ai\": \"Good. Next, please place the machine on a flat and stable surface. Then, proceed with cable connections according to the provided user manual.\"\n", - " },\n", - ")\n", - "memory.save_context(\n", - " inputs={\"human\": \"How do I make the connections?\"},\n", - " outputs={\n", - " \"ai\": \"Please refer to page 5 of the manual. There are detailed instructions for cable connections. If you have any difficulties with this process, I'll be happy to help further.\"\n", - " },\n", - ")\n", - "memory.save_context(\n", - " inputs={\"human\": \"What should I do after the installation is complete?\"},\n", - " outputs={\n", - " \"ai\": \"Once the installation is complete, please turn on the power and perform the initial operation test. The test procedure is explained on page 10 of the manual. If there are any issues with the machine or if you need additional support, please don't hesitate to contact us.\"\n", - " },\n", - ")\n", - "memory.save_context(\n", - " inputs={\"human\": \"Thank you, this has been very helpful!\"},\n", - " outputs={\n", - " \"ai\": \"We're always ready to help. If you have any additional questions or need support, please feel free to ask. 
Have a great day!\"\n", - " },\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "[HumanMessage(content='Thank you, this has been very helpful!', additional_kwargs={}, response_metadata={}),\n", - " AIMessage(content=\"We're always ready to help. If you have any additional questions or need support, please feel free to ask. Have a great day!\", additional_kwargs={}, response_metadata={})]" - ] - }, - "execution_count": 7, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "# Check the conversation history\n", - "memory.load_memory_variables({})[\"history\"]" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Setting Maximum Token Length to 150\n", - "\n", - "Let's check how the conversation is stored when we set the maximum token length to **150**." - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "metadata": {}, - "outputs": [], - "source": [ - "# Memory configuration\n", - "memory = ConversationTokenBufferMemory(\n", - " llm=llm,\n", - " max_token_limit=150,  # Limit maximum token length to 150\n", - " return_messages=True,  # Return history as message objects\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": 9, - "metadata": {}, - "outputs": [], - "source": [ - "# Add arbitrary conversations\n", - "memory.save_context(\n", - " inputs={\n", - " \"human\": \"Hello, I recently purchased a machine tool from your company. Could you tell me how to install it?\"\n", - " },\n", - " outputs={\n", - " \"ai\": \"Hello! Thank you for your purchase. Could you please tell me the machine model number?\"\n", - " },\n", - ")\n", - "memory.save_context(\n", - " inputs={\"human\": \"Yes, the model number is XG-200.\"},\n", - " outputs={\n", - " \"ai\": \"Thank you. I'll help you with the installation guide for the XG-200 model. First, please check the power supply status at the installation site.
The machine requires 220V power.\"\n", - " },\n", - ")\n", - "memory.save_context(\n", - " inputs={\"human\": \"I've checked the power. What's the next step?\"},\n", - " outputs={\n", - " \"ai\": \"Good. Next, please place the machine on a flat and stable surface. Then, proceed with cable connections according to the provided user manual.\"\n", - " },\n", - ")\n", - "memory.save_context(\n", - " inputs={\"human\": \"How do I make the connections?\"},\n", - " outputs={\n", - " \"ai\": \"Please refer to page 5 of the manual. There are detailed instructions for cable connections. If you have any difficulties with this process, I'll be happy to help further.\"\n", - " },\n", - ")\n", - "memory.save_context(\n", - " inputs={\"human\": \"What should I do after the installation is complete?\"},\n", - " outputs={\n", - " \"ai\": \"Once the installation is complete, please turn on the power and perform the initial operation test. The test procedure is explained on page 10 of the manual. If there are any issues with the machine or if you need additional support, please don't hesitate to contact us.\"\n", - " },\n", - ")\n", - "memory.save_context(\n", - " inputs={\"human\": \"Thank you, this has been very helpful!\"},\n", - " outputs={\n", - " \"ai\": \"We're always ready to help. If you have any additional questions or need support, please feel free to ask. Have a great day!\"\n", - " },\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": 10, - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "[HumanMessage(content='What should I do after the installation is complete?', additional_kwargs={}, response_metadata={}),\n", - " AIMessage(content=\"Once the installation is complete, please turn on the power and perform the initial operation test. The test procedure is explained on page 10 of the manual. 
If there are any issues with the machine or if you need additional support, please don't hesitate to contact us.\", additional_kwargs={}, response_metadata={}),\n", - " HumanMessage(content='Thank you, this has been very helpful!', additional_kwargs={}, response_metadata={}),\n", - " AIMessage(content=\"We're always ready to help. If you have any additional questions or need support, please feel free to ask. Have a great day!\", additional_kwargs={}, response_metadata={})]" - ] - }, - "execution_count": 10, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "# Check the conversation history\n", - "memory.load_memory_variables({})[\"history\"]" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "py-test", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.11.10" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -}