ConversationSummaryMemory #19
Conversation
…ryMemory

[Version] initial
[Language] ENG
[Package] langchain, langchain-openai

ConversationSummaryMemory & ConversationSummaryBufferMemory
* Implemented ConversationSummaryMemory and ConversationSummaryBufferMemory to manage conversation history efficiently.
* Demonstrated memory behavior before and after exceeding the token threshold (e.g., 200 tokens).
* Showed how recent conversations remain unsummarized, while older conversations are compiled into a summary.
* Highlighted the token-based flushing mechanism, rather than interaction count, for deciding when to summarize conversations.
* Provided examples of storing and retrieving conversation history with LangChain's memory modules.

Force remove reference to missing file from Git index
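The token-based flushing described above can be sketched in plain Python. This is a conceptual stand-in, not LangChain's actual implementation: the class name `SummaryBufferSketch` is hypothetical, the tokenizer is a naive whitespace count (a real setup would use something like tiktoken), and `_summarize` stands in for an LLM summarization call.

```python
class SummaryBufferSketch:
    """Conceptual sketch of summary-buffer memory: recent turns stay
    verbatim; once the buffer exceeds max_token_limit, the oldest turns
    are flushed into a running summary."""

    def __init__(self, max_token_limit=200):
        self.max_token_limit = max_token_limit
        self.buffer = []   # recent, unsummarized turns
        self.summary = ""  # compiled summary of flushed turns

    def _tokens(self, text):
        # naive stand-in for a real tokenizer (e.g., tiktoken)
        return len(text.split())

    def _summarize(self, turns):
        # stand-in for an LLM summarization call
        return (self.summary + " " + " | ".join(t[:20] for t in turns)).strip()

    def save_context(self, turn):
        self.buffer.append(turn)
        self.prune()

    def prune(self):
        # flush the oldest turns until the buffer fits the token budget
        total = sum(self._tokens(t) for t in self.buffer)
        flushed = []
        while total > self.max_token_limit and self.buffer:
            oldest = self.buffer.pop(0)
            flushed.append(oldest)
            total -= self._tokens(oldest)
        if flushed:
            self.summary = self._summarize(flushed)

mem = SummaryBufferSketch(max_token_limit=10)
mem.save_context("hello there how are you doing today my friend")  # 8 tokens, fits
mem.save_context("fine thanks and you")  # total 12 > 10: oldest turn flushed
print(len(mem.buffer), bool(mem.summary))  # recent turn kept, summary non-empty
```

Note the flush is driven purely by the token total, not by how many interactions have occurred, which mirrors the behavior the notebook demonstrates with a 200-token threshold.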
1. Given the current cell order, how about configuring it like this so the initial value is fetched via os.getenv? 2. I'm also curious whether the following log appears only for me!
When I actually ran the code, I did it as you suggested. I had changed it to "" because I wondered whether the template had to be used as a fixed value, but if switching to os.getenv is fine, I'll change it that way!
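The suggestion in this thread is to read the initial value from the environment rather than hard-coding `""`. A minimal sketch of that pattern (the variable name `OPENAI_API_KEY` is an assumption here, chosen because the notebook uses langchain-openai):

```python
import os

# Hypothetical sketch: instead of a hard-coded "" default, read the
# initial value from the environment, falling back to "" if it is unset.
openai_api_key = os.getenv("OPENAI_API_KEY", "")

# os.getenv never raises for a missing key; it returns the default instead.
missing = os.getenv("SOME_UNSET_VARIABLE_FOR_DEMO", "fallback")
print(missing)  # → fallback (assuming the variable is not set)
```

This keeps the notebook runnable for users who export the key in their shell, while still allowing an empty placeholder when it is absent.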
That warning also appeared for me when running it, and I'm still wondering how it should be handled.
For now, I moved load_dotenv below the set_env cell and changed it to from dotenv import load_dotenv / load_dotenv(override=True).
@jinucho Please position load_dotenv so that it always comes after … ^^ The deprecation warning seems to be one that will start appearing going forward; it should be fine to move past it for now!
- Review OS : Win
- Was the Template Rule guide followed?
- Did you confirm that the Table of Contents links work properly?
- If images are included, do the image filenames follow the guide? (N/A)
- Do the import statements follow the current style rather than the legacy style?
- Does all of the code run without errors?
- Other comments: Confirmed that everything runs correctly on Windows. Great work :)
[Review]
- Review OS : Mac
- Was the Template Rule guide followed? : YES
- Did you confirm that the Table of Contents links work properly? : YES
- If images are included, do the image filenames follow the guide? : N/A (no images included)
- Do the import statements follow the current style rather than the legacy style? (YES/NO) : YES
- Does all of the code run without errors? (If any warnings occur, please note them in a comment) : YES
- Other comments : Great work 👍