
Commit 87fe5b0

robince authored and copybara-github committed

Copybara import of the project:

-- a029f6c by Robin Ince <robince@gmail.com>: fix: tool use example in README.md
-- 15dc2e0 by Robin Ince <robince@gmail.com>: fix: tool use example in docs

COPYBARA_INTEGRATE_REVIEW=#86 from robince:main b3c1aaa
PiperOrigin-RevId: 715502403

1 parent 2aab09d · commit 87fe5b0

2 files changed: 38 additions & 14 deletions


README.md

Lines changed: 17 additions & 6 deletions

````diff
@@ -179,7 +179,12 @@ the model.
 The following example shows how to do it for a simple function invocation.
 
 ``` python
-function_call_part = response.candidates[0].content.parts[0]
+user_prompt_content = types.Content(
+    role="user", parts=[types.Part.from_text("What is the weather like in Boston?")]
+)
+function_call_content = response.candidates[0].content
+function_call_part = function_call_content.parts[0]
 
 try:
     function_result = get_current_weather(**function_call_part.function_call.args)
@@ -192,16 +197,22 @@ function_response_part = types.Part.from_function_response(
     name=function_call_part.function_call.name,
     response=function_response,
 )
+function_response_content = types.Content(role="tool", parts=[function_response_part])
 
 response = client.models.generate_content(
     model='gemini-2.0-flash-exp',
     contents=[
-        types.Part.from_text("What is the weather like in Boston?"),
-        function_call_part,
-        function_response_part,
-    ])
+        user_prompt_content,
+        function_call_content,
+        function_response_content,
+    ],
+    config=types.GenerateContentConfig(
+        tools=[tool],
+    ),
+)
 
-response
+print(response.text)
 ```
 
 ### JSON Response Schema
````
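The error-handling pattern this commit introduces can be exercised on its own, without the google-genai SDK. A minimal sketch, using a stand-in `get_current_weather` and a hypothetical `call_tool_safely` helper (neither is part of the SDK): the tool call is wrapped in try/except so a failure is packaged as an `{'error': ...}` dict the model can reason about, instead of raising in the client.

```python
# Stand-in for the README's tool function; only for illustration.
def get_current_weather(location: str) -> str:
    if not location:
        raise ValueError("location must be non-empty")
    return f"Sunny in {location}"


def call_tool_safely(args: dict) -> dict:
    """Run a tool call, returning either a result or an error payload."""
    try:
        return {"result": get_current_weather(**args)}
    except Exception as e:
        # Instead of raising, hand the error back so the model can handle it.
        return {"error": str(e)}


print(call_tool_safely({"location": "Boston"}))
print(call_tool_safely({"location": ""}))
```

The returned dict maps directly onto the `response=` argument of `types.Part.from_function_response` in the diff above.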

docs/_sources/index.rst.txt

Lines changed: 21 additions & 8 deletions

```diff
@@ -186,25 +186,38 @@ called and responded.
 
 .. code:: python
 
-    function_call_part = response.candidates[0].content.parts[0]
+    user_prompt_content = types.Content(
+        role="user", parts=[types.Part.from_text("What is the weather like in Boston?")]
+    )
+    function_call_content = response.candidates[0].content
+    function_call_part = function_call_content.parts[0]
 
-    function_reponse = get_current_weather(**function_call_part.function_call.args)
+    try:
+        function_result = get_current_weather(**function_call_part.function_call.args)
+        function_response = {'result': function_result}
+    except Exception as e:  # instead of raising the exception, you can let the model handle it
+        function_response = {'error': str(e)}
 
 
     function_response_part = types.Part.from_function_response(
         name=function_call_part.function_call.name,
-        response={'result': function_response}
+        response=function_response,
     )
+    function_response_content = types.Content(role="tool", parts=[function_response_part])
 
     response = client.models.generate_content(
         model='gemini-2.0-flash-exp',
         contents=[
-            types.Part.from_text("What is the weather like in Boston?"),
-            function_call_part,
-            function_response_part,
-        ])
+            user_prompt_content,
+            function_call_content,
+            function_response_content,
+        ],
+        config=types.GenerateContentConfig( tools=[tool],)
+    )
 
-    response
+    print(response.text)
 
 JSON Response Schema
 --------------------
```
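Both diffs make the same structural fix: the follow-up `generate_content` call must carry whole `Content` turns in user / model / tool order, not bare `Part`s. A sketch of that ordering, with plain dicts standing in for `types.Content` (an illustrative shape, not the SDK's actual types):

```python
# Plain-dict stand-ins for types.Content, illustrating the turn order
# the commit enforces: user prompt, model's function call, tool response.
user_prompt_content = {
    "role": "user",
    "parts": ["What is the weather like in Boston?"],
}
function_call_content = {
    "role": "model",
    "parts": [{"function_call": {"name": "get_current_weather"}}],
}
function_response_content = {
    "role": "tool",
    "parts": [{"function_response": {"result": "sunny"}}],
}

# The order of this list is the fix: each turn keeps its role, so the
# model sees a coherent conversation rather than three loose parts.
contents = [user_prompt_content, function_call_content, function_response_content]
print([turn["role"] for turn in contents])
```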
