
[Bug]: ReasoningAgent's _process_prompt doesn't handle multiple messages. #755

@giorgossideris

Description


Describe the bug

When ReasoningAgent performs an intermediate step (for example, by calling the UserProxyAgent to run a Python script), the _process_prompt function returns the output of that step as the prompt instead of the question to be answered. This happens because the function returns only the last message in the conversation, ignoring all of the previous ones.
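In other words, the failure mode reduces to a pattern like the following. This is a simplified, hypothetical sketch of the relevant logic, not autogen's actual source:

def process_prompt_sketch(messages: list[dict]) -> str:
    # Only the final message is inspected; every earlier message,
    # including the user's original question, is discarded.
    return messages[-1].get("content", "")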

Steps to reproduce

An example is the following script:

from autogen import ReasoningAgent, UserProxyAgent

config_list = [{<configs>}]

reasoning_agent = ReasoningAgent(
    name="reason_agent",
    llm_config={"config_list": config_list},
    verbose=False,
    max_depth=3,
    beam_size=1,
)

question = "What is (44232 + 13312 / (232 - 32)) * 5? Use python script"

user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    is_termination_msg=lambda x: "TERMINATE" in (x.get("content", "") or "").upper(),
    code_execution_config={"use_docker": False},
    max_consecutive_auto_reply=3,
)

user_proxy.initiate_chat(reasoning_agent, message=question)

After the script executes, the prompt value set in this line of generate_forest_response:

prompt, ground_truth = self._process_prompt(messages, sender)

will be the output of the script (the last message), without ever mentioning the question.
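To make that concrete, after the script runs the conversation roughly looks like this (a hedged reconstruction; the exact message contents depend on the model, but the shape is the point):

messages = [
    {"role": "user", "content": "What is (44232 + 13312 / (232 - 32)) * 5? Use python script"},
    {"role": "assistant", "content": "print((44232 + 13312 / (232 - 32)) * 5)"},
    {"role": "user", "content": "exitcode: 0 (execution succeeded)\nCode output: 221492.8"},
]

# _process_prompt effectively returns only messages[-1]["content"],
# i.e. the script output; the question never reaches the reasoning tree.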

Model Used

No response

Expected Behavior

The original question should be included in the prompt so that the full reasoning process can run correctly.
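One possible direction, as a hedged sketch (the function name and message structure below are assumptions, not autogen's actual implementation): anchor the prompt on the first user message, which carries the question, and attach the latest message as intermediate context:

def process_prompt_fixed(messages: list[dict]) -> str:
    # Hypothetical fix sketch -- not the actual autogen implementation.
    # The first user message is assumed to carry the original question.
    question = next(
        (m.get("content", "") for m in messages if m.get("role") == "user"),
        "",
    )
    latest = messages[-1].get("content", "")
    if latest == question:  # no intermediate steps have happened yet
        return question
    # Keep the question and append the intermediate result as context.
    return f"{question}\n\nIntermediate result:\n{latest}"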

Screenshots and logs

No response

Additional Information

No response

Metadata

Labels: bug (Something isn't working)

Status: Waiting for merge