Description
Hi,
I would like to stream my crew output asynchronously. Streaming works fine when no tool is registered to my agent, but as soon as I register a tool it fails as shown below. As far as I can tell, the tool call never finishes: instead of streaming each argument delta once, the cumulative argument string is re-emitted from the start, over and over.
```
[Tool: calculator]
[Args: ]
[Tool: calculator]
[Args: {"]
[Tool: calculator]
[Args: {"expression]
[Tool: calculator]
[Args: {"expression":"]
[Tool: calculator]
[Args: {"expression":"25]
[Tool: calculator]
[Args: {"expression":"25 *]
[Tool: calculator]
[Args: {"expression":"25 * ]
[Tool: calculator]
[Args: {"expression":"25 * 4]
[Tool: calculator]
[Args: {"expression":"25 * 4 +]
[Tool: calculator]
[Args: {"expression":"25 * 4 + ]
[Tool: calculator]
[Args: {"expression":"25 * 4 + 10]
[Tool: calculator]
[Args: {"expression":"25 * 4 + 10"}]
[Tool: calculator]
[Args: ]
[Tool: calculator]
[Args: {"]
[Tool: calculator]
[Args: {"expression]
[Tool: calculator]
[Args: {"expression":"]
[Tool: calculator]
[Args: {"expression":"25]
[Tool: calculator]
[Args: {"expression":"25 *]
[Tool: calculator]
[Args: {"expression":"25 * ]
[Tool: calculator]
[Args: {"expression":"25 * 4]
[Tool: calculator]
[Args: {"expression":"25 * 4 +]
[Tool: calculator]
[Args: {"expression":"25 * 4 + ]
[Tool: calculator]
[Args: {"expression":"25 * 4 + 10]
[Tool: calculator]
[Args: {"expression":"25 * 4 + 10"}]
[CrewAIEventsBus] Warning: Event pairing mismatch. 'task_failed' closed 'agent_execution_started' (expected 'task_started')
[CrewAIEventsBus] Warning: Event pairing mismatch. 'crew_kickoff_failed' closed 'agent_execution_started' (expected 'crew_kickoff_started')
[Tool: calculator]
[Args: ]
[Tool: calculator]
[Args: {"]
[Tool: calculator]
[Args: {"expression]
[Tool: calculator]
[Args: {"expression":"]
[Tool: calculator]
[Args: {"expression":"25]
[Tool: calculator]
[Args: {"expression":"25 *]
[Tool: calculator]
[Args: {"expression":"25 * ]
[Tool: calculator]
[Args: {"expression":"25 * 4]
[Tool: calculator]
[Args: {"expression":"25 * 4 +]
[Tool: calculator]
[Args: {"expression":"25 * 4 + ]
[Tool: calculator]
[Args: {"expression":"25 * 4 + 10]
[Tool: calculator]
[Args: {"expression":"25 * 4 + 10"}]
```
Finally, I get this error:
```
File "/home/x/x/x/x/.venv/lib/python3.10/site-packages/crewai/agents/crew_agent_executor.py", line 1212, in _ainvoke_loop_native_tools
    answer = await aget_llm_response(
File "/home/x/x/x/x/.venv/lib/python3.10/site-packages/crewai/utilities/agent_utils.py", line 443, in aget_llm_response
    raise ValueError("Invalid response from LLM call - None or empty.")
ValueError: Invalid response from LLM call - None or empty.
```
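For what it's worth, each chunk in the log above appears to carry the cumulative argument string so far, not just the new suffix. Here is a small client-side sketch (plain Python, not a CrewAI API) that buffers such cumulative fragments and only emits the tool call once the accumulated arguments parse as complete JSON:

```python
import json


def emit_completed_tool_calls(chunks):
    """Given (tool_name, cumulative_args) fragments, yield the parsed
    arguments only once the accumulated string is valid JSON."""
    seen = set()  # avoid re-emitting the same completed call
    for tool_name, args_fragment in chunks:
        try:
            args = json.loads(args_fragment)
        except json.JSONDecodeError:
            continue  # arguments are still incomplete
        key = (tool_name, args_fragment)
        if key not in seen:
            seen.add(key)
            yield tool_name, args


# Fragments mirroring the log output above: each chunk repeats the
# cumulative argument string from the beginning.
fragments = [
    ("calculator", ""),
    ("calculator", '{"'),
    ("calculator", '{"expression'),
    ("calculator", '{"expression":"25 * 4 + 10"}'),
]
print(list(emit_completed_tool_calls(fragments)))
```

With this filter, each completed tool call is printed exactly once instead of once per partial fragment.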
Steps to Reproduce
Run this code:
```python
import asyncio

from crewai import Agent, Crew, Task
from crewai.tools import tool
from crewai.types.streaming import StreamChunkType


@tool("Calculator")
def calculator(expression: str) -> str:
    """Evaluate a mathematical expression and return the result."""
    try:
        result = eval(expression)
        return f"The result of {expression} is {result}"
    except Exception as e:
        return f"Error: {e}"


async def stream_crew():
    # Create the agent
    researcher = Agent(
        role="Math Assistant",
        goal="Help users with calculations",
        backstory="You are an expert mathematician.",
        tools=[calculator],  # Assign the custom tool
    )

    expression_to_calculate = "25 * 4 + 10"
    task = Task(
        description=f"Calculate {expression_to_calculate} using the Calculator tool and explain the result",
        agent=researcher,
        expected_output="The calculation result with explanation.",
    )
    crew = Crew(
        agents=[researcher],
        tasks=[task],
        stream=True,
    )

    streaming = await crew.akickoff(inputs={})
    async for chunk in streaming:
        if chunk.chunk_type == StreamChunkType.TEXT:
            print(chunk.content, end="", flush=True)
        elif chunk.chunk_type == StreamChunkType.TOOL_CALL and chunk.tool_call:
            print(f"\n[Tool: {chunk.tool_call.tool_name}]")
            print(f"[Args: {chunk.tool_call.arguments}]")

    print(f"\n\nFinal: {streaming.result.raw}")


asyncio.run(stream_crew())
```
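As an aside, `eval` on model-generated input can execute arbitrary code. A hedged sketch of a safer calculator (not part of the reproduction, just an alternative tool body) restricts the parse tree to arithmetic nodes:

```python
import ast
import operator

# Whitelist of allowed AST operator types mapped to their implementations.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}


def safe_eval(expression: str) -> float:
    """Evaluate +, -, *, / and unary minus over numeric literals only."""

    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")

    return _eval(ast.parse(expression, mode="eval"))


print(safe_eval("25 * 4 + 10"))  # 110
```

Anything outside the whitelist (function calls, attribute access, names) raises `ValueError` instead of being executed.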
Run it and observe the failure. If you comment out `tools=[calculator]`, the same script works.
Expected behavior
The tool name and arguments should stream incrementally (each delta emitted once), and the crew should finish and return the final result.
Screenshots/Code snippets
```python
import asyncio

from crewai import Agent, Crew, Task
from crewai.tools import tool
from crewai.types.streaming import StreamChunkType


@tool("Calculator")
def calculator(expression: str) -> str:
    """Evaluate a mathematical expression and return the result."""
    try:
        result = eval(expression)
        return f"The result of {expression} is {result}"
    except Exception as e:
        return f"Error: {e}"


async def stream_crew():
    # Create the agent
    researcher = Agent(
        role="Math Assistant",
        goal="Help users with calculations",
        backstory="You are an expert mathematician.",
        # tools=[calculator]  # Assign the custom tool
    )

    # Expression to calculate
    expression_to_calculate = "25 * 4 + 10"

    # Create the task, passing the expression directly
    task = Task(
        description=f"Calculate {expression_to_calculate} using the Calculator tool and explain the result",
        agent=researcher,
        expected_output="The calculation result with explanation.",
    )

    # Create the crew
    crew = Crew(
        agents=[researcher],
        tasks=[task],
        stream=True,
    )

    # Start streaming the task
    streaming = await crew.akickoff(inputs={})
    async for chunk in streaming:
        if chunk.chunk_type == StreamChunkType.TEXT:
            print(chunk.content, end="", flush=True)
        elif chunk.chunk_type == StreamChunkType.TOOL_CALL and chunk.tool_call:
            print(f"\n[Tool: {chunk.tool_call.tool_name}]")
            print(f"[Args: {chunk.tool_call.arguments}]")

    print(f"\n\nFinal: {streaming.result.raw}")


# Run the async function
asyncio.run(stream_crew())
```
Operating System
Ubuntu 20.04
Python Version
3.11
crewAI Version
1.9.3
crewAI Tools Version
1.9.3
Virtual Environment
Venv
Evidence
```
File "/home/x/x/x/x/.venv/lib/python3.10/site-packages/crewai/agents/crew_agent_executor.py", line 1212, in _ainvoke_loop_native_tools
    answer = await aget_llm_response(
File "/home/x/x/x/x/.venv/lib/python3.10/site-packages/crewai/utilities/agent_utils.py", line 443, in aget_llm_response
    raise ValueError("Invalid response from LLM call - None or empty.")
ValueError: Invalid response from LLM call - None or empty.
```
Possible Solution
Maybe there is a way to stop re-emitting the tool call once it has finished streaming, so each argument delta is sent only once?
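Until then, a possible client-side workaround (plain Python, not a CrewAI API; assumes each chunk carries the cumulative argument string, as the log suggests) is to convert cumulative fragments back into deltas before printing:

```python
def stream_arg_deltas(fragments):
    """Turn cumulative argument strings into incremental deltas,
    so each piece of the arguments is printed exactly once."""
    previous = ""
    for current in fragments:
        if current.startswith(previous):
            delta = current[len(previous):]
        else:
            # Stream restarted from scratch; treat it as a new call.
            previous = ""
            delta = current
        if delta:
            yield delta
        previous = current


cumulative = ["", '{"', '{"expression', '{"expression":"25 * 4 + 10"}']
print("".join(stream_arg_deltas(cumulative)))
```

In the consumer loop, `chunk.tool_call.arguments` would be fed through this generator instead of being printed directly.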
Additional context
It works when you comment out `tools=[calculator]`.