
LiteLLM error when I run the interpreter. Could you kindly share the solution for this? #469

@RohitNansen

Description

(virt_env) C:\Users\Rohit>interpreter --model gpt-3.5-turbo

▌ Model set to GPT-3.5-TURBO

Open Interpreter will require approval before running code. Use interpreter -y to bypass this.

Press CTRL-C to exit.

can you activate dark theme in my system?

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new

Traceback (most recent call last):
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\site-packages\litellm\main.py", line 250, in completion
model, custom_llm_provider = get_llm_provider(model=model, custom_llm_provider=custom_llm_provider)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\site-packages\litellm\utils.py", line 1202, in get_llm_provider
raise e
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\site-packages\litellm\utils.py", line 1199, in get_llm_provider
raise ValueError(f"LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/{model}',..) Learn more: https://docs.litellm.ai/docs/providers")
ValueError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/gpt-3.5-turbo',..) Learn more: https://docs.litellm.ai/docs/providers

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\Rohit\anaconda3\envs\virt_env\Scripts\interpreter.exe\__main__.py", line 7, in <module>
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\site-packages\interpreter\core\core.py", line 21, in cli
cli(self)
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\site-packages\interpreter\cli\cli.py", line 146, in cli
interpreter.chat()
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\site-packages\interpreter\core\core.py", line 65, in chat
for _ in self._streaming_chat(message=message, display=display):
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\site-packages\interpreter\core\core.py", line 86, in _streaming_chat
yield from terminal_interface(self, message)
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\site-packages\interpreter\terminal_interface\terminal_interface.py", line 50, in terminal_interface
for chunk in interpreter.chat(message, display=False, stream=True):
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\site-packages\interpreter\core\core.py", line 94, in _streaming_chat
yield from self._respond()
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\site-packages\interpreter\core\core.py", line 120, in _respond
yield from respond(self)
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\site-packages\interpreter\core\respond.py", line 56, in respond
for chunk in interpreter._llm(messages_for_llm):
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\site-packages\interpreter\llm\convert_to_coding_llm.py", line 19, in coding_llm
for chunk in text_llm(messages):
^^^^^^^^^^^^^^^^^^
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\site-packages\interpreter\llm\setup_text_llm.py", line 119, in base_llm
return litellm.completion(**params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\site-packages\litellm\utils.py", line 671, in wrapper
raise e
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\site-packages\litellm\utils.py", line 630, in wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\site-packages\litellm\timeout.py", line 53, in wrapper
result = future.result(timeout=local_timeout_duration)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\concurrent\futures\_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\site-packages\litellm\timeout.py", line 42, in async_func
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\site-packages\litellm\main.py", line 1192, in completion
raise exception_type(
^^^^^^^^^^^^^^^
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\site-packages\litellm\utils.py", line 2700, in exception_type
raise e
File "C:\Users\Rohit\anaconda3\envs\virt_env\Lib\site-packages\litellm\utils.py", line 2682, in exception_type
raise APIError(status_code=500, message=str(original_exception), llm_provider=custom_llm_provider, model=model)
litellm.exceptions.APIError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/gpt-3.5-turbo',..) Learn more: https://docs.litellm.ai/docs/providers
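The error means litellm could not map the bare model name `gpt-3.5-turbo` to a provider, and its own message points at the fix: pass the provider explicitly using the `provider/model` prefix syntax (e.g. `openai/gpt-3.5-turbo` or `huggingface/gpt-3.5-turbo`), per https://docs.litellm.ai/docs/providers. As a rough illustration of what that resolution step does, here is a minimal sketch of prefix-based provider resolution — this is NOT litellm's actual code, and the provider/model sets are illustrative placeholders:

```python
# Sketch of "provider/model" resolution, in the spirit of litellm's
# get_llm_provider(). Illustrative only -- not litellm's implementation.

KNOWN_PROVIDERS = {"openai", "huggingface", "anthropic", "azure"}  # illustrative subset
OPENAI_CHAT_MODELS = {"gpt-3.5-turbo", "gpt-4"}                    # illustrative subset

def resolve_provider(model: str) -> tuple[str, str]:
    """Return (provider, bare_model), or raise if no provider can be determined."""
    # Explicit "provider/model" prefix wins, e.g. "huggingface/gpt-3.5-turbo"
    if "/" in model:
        provider, bare_model = model.split("/", 1)
        if provider in KNOWN_PROVIDERS:
            return provider, bare_model
    # Otherwise fall back to inferring the provider from well-known model names
    if model in OPENAI_CHAT_MODELS:
        return "openai", model
    raise ValueError(
        f"LLM Provider NOT provided for {model!r}. "
        "Use 'provider/model', e.g. 'huggingface/gpt-3.5-turbo'."
    )
```

In practice this suggests trying `interpreter --model openai/gpt-3.5-turbo` (if the installed Open Interpreter passes the prefixed name through to litellm), and upgrading `litellm` / `open-interpreter`, since newer litellm versions infer the provider for well-known OpenAI model names automatically.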
