Feat: Introduce StreamRolePlaying for Event Streaming of Single Interactions and Suggestion for Future ChatAgent Streaming #466
base: main
Conversation
Adds the StreamRolePlaying class, inheriting from RolePlaying. This class supports step-by-step execution of agent interactions and generates an event stream for single interaction steps, used to track workflow state (e.g., Agent Start/End, LLM Start/End, Message, Tool Call).

Key Features:
- Synchronous (`step_stream`) and asynchronous (`astep_stream`) event stream generation, currently limited to single agent interactions.
- Generates customized system messages for the assistant and user agents.

Note: Due to limitations in the underlying ChatAgent (lack of full support for streaming tool calls and streaming message output), this implementation provides event streaming for individual interaction steps, not true end-to-end streaming.
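For context, consuming the stream would look roughly like the sketch below. The constructor arguments and the exact event dict keys (`event`, `data`) are assumptions based on this description and the diff, not the final API.

```python
from owl.utils.enhanced_role_playing import StreamRolePlaying

# Hypothetical setup; constructor arguments follow the base RolePlaying class,
# so the exact keyword names used here are assumptions.
society = StreamRolePlaying(
    task_prompt="Write a short summary of the attached report",
    assistant_role_name="assistant",
    user_role_name="user",
)
input_msg = society.init_chat()

# step_stream yields event dicts for a single interaction step; the
# "event"/"data" keys mirror the payloads shown in the diff.
for event in society.step_stream(input_msg):
    if event["event"] == "message":
        print("message:", event["data"])
    elif event["event"] == "tool_call":
        print("tool call:", event["data"])
    else:
        print("status:", event["event"])
```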
Thanks @RonaldJEN, and sorry for the delayed review. I left some comments. Additionally, the current implementation uses the ChatAgent.step() method, which is typically non-streaming. If you're interested, we'd welcome your contribution to address this in CAMEL by working on issue #2069:
camel-ai/camel#2069
Please don't hesitate to reach out if you need any support from our team.
self.user_sys_msg = self.user_agent.system_message

def create_custom_system_messages(self, task_prompt: str):
    """创建自定义系统消息 ("Create custom system messages")
We prefer to use English comments.
)
self.user_sys_msg = self.user_agent.system_message

def create_custom_system_messages(self, task_prompt: str):
Maybe we can change this to a private method (using _prefix)
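For illustration, the suggested rename would look like this (body elided); the class and method names just mirror the diff:

```python
from camel.societies import RolePlaying


class StreamRolePlaying(RolePlaying):
    def _create_custom_system_messages(self, task_prompt: str):
        """Create custom system messages for the assistant and user agents."""
        ...
```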
Unless told the task is complete, always structure your responses as:
Solution:
[Your detailed solution here in Chinese, including code, explanations, and step-by-step instructions]
The language here should be defined by {output_language}
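As a sketch of the suggestion, the hard-coded language could become a `{output_language}` placeholder that is filled in when the system messages are built. The prompt text below is abbreviated, and the template name is an assumption mirroring CUSTOM_USER_SYSTEM_PROMPT.

```python
# Abbreviated prompt; only the language placeholder changes.
CUSTOM_ASSISTANT_SYSTEM_PROMPT = """===== ASSISTANT GUIDELINES =====
Unless told the task is complete, always structure your responses as:
Solution:
[Your detailed solution here in {output_language}, including code,
explanations, and step-by-step instructions]
Always end with "Next request."
"""

# Filled in wherever the system messages are created, e.g. from a
# hypothetical self.output_language attribute instead of a fixed "Chinese".
assistant_prompt = CUSTOM_ASSISTANT_SYSTEM_PROMPT.format(output_language="English")
```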
Always end with "Next request."
"""

CUSTOM_USER_SYSTEM_PROMPT = """===== USER GUIDELINES =====
Maybe we can define the output_language in the system prompt as well.
END_OF_AGENT = "end_of_agent"
MESSAGE = "message"
TOOL_CALL = "tool_call"
ERROR = "error"  # New event for handling issues
This event doesn't seem to be used later - could we potentially utilize it during error handling? We could yield an ERROR event within a try...except block.
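A minimal sketch of that idea, assuming the event payloads keep the event/data shape used elsewhere in the diff (the EventType container name and the wrapper function are illustrative, not part of the PR):

```python
from typing import Any, Dict, Iterator


class EventType:
    # Mirrors the string constants from the diff.
    MESSAGE = "message"
    TOOL_CALL = "tool_call"
    ERROR = "error"


def emit_error_on_failure(events: Iterator[Dict[str, Any]]) -> Iterator[Dict[str, Any]]:
    """Wrap an event generator so exceptions become ERROR events
    instead of silently terminating the stream."""
    try:
        yield from events
    except Exception as exc:
        yield {"event": EventType.ERROR, "data": {"error": str(exc)}}
```

step_stream and astep_stream could either wrap their internal generators this way or yield the ERROR event directly from their own try...except blocks.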
| "data": {"agent_id": assistant_agent_id} | ||
| } | ||
|
|
||
| def get_last_step_result(self) -> Tuple[ChatAgentResponse, ChatAgentResponse]: |
The current implementation might confuse users since they may not know how to use this function. Perhaps we could add an example for the new streaming output?
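For instance, a usage snippet along these lines could go into the docstring or README. The handle callback and the construction of society/input_msg are placeholders; msgs and terminated are the standard ChatAgentResponse fields.

```python
# Drain the event stream for one interaction step, then inspect the
# buffered responses for that step.
for event in society.step_stream(input_msg):
    handle(event)  # e.g. forward to a websocket or append to a log

assistant_response, user_response = society.get_last_step_result()
print(assistant_response.msgs[0].content)
print("terminated:", assistant_response.terminated)
```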
Description
This PR introduces the `StreamRolePlaying` class in `owl/utils/enhanced_role_playing.py`. This class extends the base `RolePlaying` society to support step-by-step agent interactions while generating an event stream for single interaction steps.

Key Features:
- Defines event types (`START_OF_WORKFLOW`, `END_OF_WORKFLOW`, `START_OF_AGENT`, `END_OF_AGENT`, `MESSAGE`, `TOOL_CALL`, `START_OF_LLM`, `END_OF_LLM`, `ERROR`) reflecting the status of a single interaction step.
- Provides `step_stream` and `astep_stream` methods for processing a single step and generating events.
- Attaches identifiers (`workflow_id`, agent ID, message ID, tool call ID) for better tracking within the event stream.

Important Limitations & Future Outlook:
- Due to the underlying `ChatAgent` not yet fully supporting streaming Tool Calling and streaming message content output, `StreamRolePlaying` currently implements event streaming for single agent interaction steps, not a complete end-to-end streaming experience. The entire agent response (including thought process and final message) is generated at once after the step completes.
- Finer-grained streaming will require adding streaming support to `ChatAgent` in the future.
- The suggestion is to implement streaming Tool Calling and streaming message output at the `ChatAgent` level. This would significantly enhance user experience and the real-time responsiveness of interactions.
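For the async path, consumption would look roughly like the sketch below; `astep_stream` is assumed to be an async generator yielding the same event dicts as `step_stream`, and the "end_of_workflow" string matches the `END_OF_WORKFLOW` event named above.

```python
import asyncio


async def run_one_step(society, input_msg):
    # astep_stream is assumed to mirror step_stream as an async generator.
    async for event in society.astep_stream(input_msg):
        print(event["event"], event.get("data"))
        if event["event"] == "end_of_workflow":
            break


# With a StreamRolePlaying instance and an initial message:
# asyncio.run(run_one_step(society, input_msg))
```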