
Commit 03cd19c

feat(aiplatform): update the api
#### aiplatform:v1

The following keys were added:
- resources.projects.resources.locations.resources.reasoningEngines.methods.getIamPolicy (Total Keys: 14)
- resources.projects.resources.locations.resources.reasoningEngines.methods.setIamPolicy (Total Keys: 12)
- resources.projects.resources.locations.resources.reasoningEngines.methods.testIamPermissions (Total Keys: 14)
- schemas.GoogleCloudAiplatformV1ComputationBasedMetricSpec (Total Keys: 5)
- schemas.GoogleCloudAiplatformV1EvaluationInstance.properties.agentData.deprecated (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1EvaluationInstance.properties.agentEvalData.$ref (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1EvaluationInstanceAgentConfig.deprecated (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1EvaluationInstanceAgentConfig.properties.agentId.type (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1EvaluationInstanceAgentConfig.properties.agentType.type (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1EvaluationInstanceAgentConfig.properties.subAgents (Total Keys: 2)
- schemas.GoogleCloudAiplatformV1EvaluationInstanceAgentData.deprecated (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1EvaluationInstanceAgentData.properties.agents (Total Keys: 2)
- schemas.GoogleCloudAiplatformV1EvaluationInstanceAgentData.properties.turns (Total Keys: 2)
- schemas.GoogleCloudAiplatformV1EvaluationInstanceAgentDataAgentEvent (Total Keys: 10)
- schemas.GoogleCloudAiplatformV1EvaluationInstanceAgentDataConversationTurn (Total Keys: 7)
- schemas.GoogleCloudAiplatformV1EvaluationRunMetric.properties.computationBasedMetricSpec.$ref (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1EvaluationRunMetricComputationBasedMetricSpec (Total Keys: 5)
- schemas.GoogleCloudAiplatformV1Metric.properties.computationBasedMetricSpec.$ref (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1PurgeMemoriesRequest.properties.filterGroups (Total Keys: 2)
- schemas.GoogleCloudAiplatformV1RagManagedDbConfig.properties.basic.deprecated (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1RagManagedDbConfig.properties.scaled.deprecated (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1RagManagedDbConfig.properties.serverless.$ref (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1RagManagedDbConfig.properties.spanner.$ref (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1RagManagedDbConfig.properties.unprovisioned.deprecated (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1RagManagedDbConfigServerless (Total Keys: 2)
- schemas.GoogleCloudAiplatformV1RagManagedDbConfigSpanner (Total Keys: 5)

The following keys were changed:
- endpoints (Total Keys: 1)

#### aiplatform:v1beta1

The following keys were added:
- resources.projects.resources.locations.resources.ragCorpora.methods.delete.parameters.forceDelete (Total Keys: 2)
- resources.projects.resources.locations.resources.reasoningEngines.methods.getIamPolicy (Total Keys: 14)
- resources.projects.resources.locations.resources.reasoningEngines.methods.setIamPolicy (Total Keys: 12)
- resources.projects.resources.locations.resources.reasoningEngines.methods.testIamPermissions (Total Keys: 14)
- schemas.GoogleCloudAiplatformV1beta1AgentConfig (Total Keys: 9)
- schemas.GoogleCloudAiplatformV1beta1AgentData (Total Keys: 6)
- schemas.GoogleCloudAiplatformV1beta1AgentEvent (Total Keys: 10)
- schemas.GoogleCloudAiplatformV1beta1CandidateResponse.properties.agentData.$ref (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1beta1ComputationBasedMetricSpec (Total Keys: 5)
- schemas.GoogleCloudAiplatformV1beta1ConversationTurn (Total Keys: 7)
- schemas.GoogleCloudAiplatformV1beta1EvaluationInstance.properties.agentData.deprecated (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1beta1EvaluationInstance.properties.agentEvalData.$ref (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1beta1EvaluationInstanceAgentConfig.deprecated (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1beta1EvaluationInstanceAgentConfig.properties.agentId.type (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1beta1EvaluationInstanceAgentConfig.properties.agentType.type (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1beta1EvaluationInstanceAgentConfig.properties.subAgents (Total Keys: 2)
- schemas.GoogleCloudAiplatformV1beta1EvaluationInstanceAgentData.deprecated (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1beta1EvaluationInstanceAgentData.properties.agents (Total Keys: 2)
- schemas.GoogleCloudAiplatformV1beta1EvaluationInstanceAgentData.properties.turns (Total Keys: 2)
- schemas.GoogleCloudAiplatformV1beta1EvaluationInstanceAgentDataAgentEvent (Total Keys: 10)
- schemas.GoogleCloudAiplatformV1beta1EvaluationInstanceAgentDataConversationTurn (Total Keys: 7)
- schemas.GoogleCloudAiplatformV1beta1EvaluationPrompt.properties.agentData.$ref (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1beta1EvaluationRunMetric.properties.computationBasedMetricSpec.$ref (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1beta1EvaluationRunMetricComputationBasedMetricSpec (Total Keys: 5)
- schemas.GoogleCloudAiplatformV1beta1Metric.properties.computationBasedMetricSpec.$ref (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1beta1PurgeMemoriesRequest.properties.filterGroups (Total Keys: 2)
- schemas.GoogleCloudAiplatformV1beta1RagManagedDbConfig.properties.basic.deprecated (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1beta1RagManagedDbConfig.properties.scaled.deprecated (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1beta1RagManagedDbConfig.properties.serverless.$ref (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1beta1RagManagedDbConfig.properties.spanner.$ref (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1beta1RagManagedDbConfig.properties.unprovisioned.deprecated (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1beta1RagManagedDbConfigServerless (Total Keys: 2)
- schemas.GoogleCloudAiplatformV1beta1RagManagedDbConfigSpanner (Total Keys: 5)

The following keys were changed:
- endpoints (Total Keys: 1)
1 parent 4571e1c commit 03cd19c

18 files changed: +22413 −614 lines changed
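
The most user-visible additions here are the new IAM methods on reasoningEngines (in both v1 and v1beta1) and the forceDelete parameter on v1beta1 ragCorpora.delete. A minimal sketch of how these would be called through the discovery-based client — the project, location, resource IDs, and permission string are placeholders, credentials are assumed to come from Application Default Credentials, and the `forceDelete` keyword is inferred from the discovery key name in the commit message:

```python
from googleapiclient import discovery

# Build the v1 surface; ADC supplies credentials. A regional api_endpoint
# can be set via client_options if the default endpoint is not appropriate.
service = discovery.build("aiplatform", "v1")

engine = "projects/my-project/locations/us-central1/reasoningEngines/my-engine"

# New in this commit: getIamPolicy on reasoningEngines.
policy = (
    service.projects().locations().reasoningEngines()
    .getIamPolicy(resource=engine)
    .execute()
)

# New in this commit: testIamPermissions. The permission string below is a
# placeholder, not a value documented by this commit.
perms = (
    service.projects().locations().reasoningEngines()
    .testIamPermissions(
        resource=engine,
        body={"permissions": ["aiplatform.reasoningEngines.get"]},
    )
    .execute()
)

# v1beta1 only: ragCorpora.delete gains a forceDelete query parameter.
beta = discovery.build("aiplatform", "v1beta1")
beta.projects().locations().ragCorpora().delete(
    name="projects/my-project/locations/us-central1/ragCorpora/123",
    forceDelete=True,
).execute()
```

setIamPolicy follows the same shape as getIamPolicy, taking the policy to write (including the etag returned by getIamPolicy) in the request body under a "policy" key.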

docs/dyn/aiplatform_v1.projects.locations.evaluationItems.html

Lines changed: 16 additions & 16 deletions
Large diffs are not rendered by default.

docs/dyn/aiplatform_v1.projects.locations.evaluationRuns.html

Lines changed: 48 additions & 0 deletions
@@ -282,6 +282,12 @@ <h3>Method Details</h3>
       },
       "metrics": [ # Required. The metrics to be calculated in the evaluation run.
         { # The metric used for evaluation runs.
+          "computationBasedMetricSpec": { # Specification for a computation based metric. # Spec for a computation based metric.
+            "parameters": { # Optional. A map of parameters for the metric, e.g. {"rouge_type": "rougeL"}.
+              "a_key": "", # Properties of the object.
+            },
+            "type": "A String", # Required. The type of the computation based metric.
+          },
           "llmBasedMetricSpec": { # Specification for an LLM based metric. # Spec for an LLM based metric.
             "additionalConfig": { # Optional. Optional additional configuration for the metric.
               "a_key": "", # Properties of the object.
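
Read together, the added lines say an evaluation-run metric can now carry a computationBasedMetricSpec with a required type and an optional parameters map. A hedged sketch of one `metrics` entry in a request body — the "rouge" type string is an assumption extrapolated from the doc comment's rouge_type example, not a value this commit documents:

```python
# Hypothetical metrics entry; only the field names come from the diff above.
metric = {
    "computationBasedMetricSpec": {
        "type": "rouge",  # assumed value; the docs only say "A String"
        "parameters": {"rouge_type": "rougeL"},  # example from the doc comment
    },
}
```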
@@ -543,6 +549,12 @@ <h3>Method Details</h3>
       "bleuSpec": { # Spec for bleu score metric - calculates the precision of n-grams in the prediction as compared to reference - returns a score ranging between 0 to 1. # Spec for bleu metric.
         "useEffectiveOrder": True or False, # Optional. Whether to use_effective_order to compute bleu score.
       },
+      "computationBasedMetricSpec": { # Specification for a computation based metric. # Spec for a computation based metric.
+        "parameters": { # Optional. A map of parameters for the metric, e.g. {"rouge_type": "rougeL"}.
+          "a_key": "", # Properties of the object.
+        },
+        "type": "A String", # Required. The type of the computation based metric.
+      },
       "customCodeExecutionSpec": { # Specificies a metric that is populated by evaluating user-defined Python code. # Spec for Custom Code Execution metric.
         "evaluationFunction": "A String", # Required. Python function. Expected user to define the following function, e.g.: def evaluate(instance: dict[str, Any]) -> float: Please include this function signature in the code snippet. Instance is the evaluation instance, any fields populated in the instance are available to the function as instance[field_name]. Example: Example input: ``` instance= EvaluationInstance( response=EvaluationInstance.InstanceData(text="The answer is 4."), reference=EvaluationInstance.InstanceData(text="4") ) ``` Example converted input: ``` { 'response': {'text': 'The answer is 4.'}, 'reference': {'text': '4'} } ``` Example python function: ``` def evaluate(instance: dict[str, Any]) -> float: if instance['response'] == instance['reference']: return 1.0 return 0.0 ``` CustomCodeExecutionSpec is also supported in Batch Evaluation (EvalDataset RPC) and Tuning Evaluation. Each line in the input jsonl file will be converted to dict[str, Any] and passed to the evaluation function.
       },
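
The customCodeExecutionSpec comment above embeds its example function inline; written out as runnable Python, the exact-match scorer from that doc comment is:

```python
from typing import Any


def evaluate(instance: dict[str, Any]) -> float:
    # The converted instance is a plain dict, e.g.
    # {"response": {"text": "The answer is 4."}, "reference": {"text": "4"}}.
    if instance["response"] == instance["reference"]:
        return 1.0
    return 0.0
```

On the doc's own example instance this returns 0.0, since {'text': 'The answer is 4.'} differs from {'text': '4'}.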
@@ -1535,6 +1547,12 @@ <h3>Method Details</h3>
       },
       "metrics": [ # Required. The metrics to be calculated in the evaluation run.
         { # The metric used for evaluation runs.
+          "computationBasedMetricSpec": { # Specification for a computation based metric. # Spec for a computation based metric.
+            "parameters": { # Optional. A map of parameters for the metric, e.g. {"rouge_type": "rougeL"}.
+              "a_key": "", # Properties of the object.
+            },
+            "type": "A String", # Required. The type of the computation based metric.
+          },
           "llmBasedMetricSpec": { # Specification for an LLM based metric. # Spec for an LLM based metric.
             "additionalConfig": { # Optional. Optional additional configuration for the metric.
               "a_key": "", # Properties of the object.
@@ -1796,6 +1814,12 @@ <h3>Method Details</h3>
       "bleuSpec": { # Spec for bleu score metric - calculates the precision of n-grams in the prediction as compared to reference - returns a score ranging between 0 to 1. # Spec for bleu metric.
         "useEffectiveOrder": True or False, # Optional. Whether to use_effective_order to compute bleu score.
       },
+      "computationBasedMetricSpec": { # Specification for a computation based metric. # Spec for a computation based metric.
+        "parameters": { # Optional. A map of parameters for the metric, e.g. {"rouge_type": "rougeL"}.
+          "a_key": "", # Properties of the object.
+        },
+        "type": "A String", # Required. The type of the computation based metric.
+      },
       "customCodeExecutionSpec": { # Specificies a metric that is populated by evaluating user-defined Python code. # Spec for Custom Code Execution metric.
         "evaluationFunction": "A String", # Required. Python function. Expected user to define the following function, e.g.: def evaluate(instance: dict[str, Any]) -> float: Please include this function signature in the code snippet. Instance is the evaluation instance, any fields populated in the instance are available to the function as instance[field_name]. Example: Example input: ``` instance= EvaluationInstance( response=EvaluationInstance.InstanceData(text="The answer is 4."), reference=EvaluationInstance.InstanceData(text="4") ) ``` Example converted input: ``` { 'response': {'text': 'The answer is 4.'}, 'reference': {'text': '4'} } ``` Example python function: ``` def evaluate(instance: dict[str, Any]) -> float: if instance['response'] == instance['reference']: return 1.0 return 0.0 ``` CustomCodeExecutionSpec is also supported in Batch Evaluation (EvalDataset RPC) and Tuning Evaluation. Each line in the input jsonl file will be converted to dict[str, Any] and passed to the evaluation function.
       },
@@ -2830,6 +2854,12 @@ <h3>Method Details</h3>
       },
       "metrics": [ # Required. The metrics to be calculated in the evaluation run.
         { # The metric used for evaluation runs.
+          "computationBasedMetricSpec": { # Specification for a computation based metric. # Spec for a computation based metric.
+            "parameters": { # Optional. A map of parameters for the metric, e.g. {"rouge_type": "rougeL"}.
+              "a_key": "", # Properties of the object.
+            },
+            "type": "A String", # Required. The type of the computation based metric.
+          },
           "llmBasedMetricSpec": { # Specification for an LLM based metric. # Spec for an LLM based metric.
             "additionalConfig": { # Optional. Optional additional configuration for the metric.
               "a_key": "", # Properties of the object.
@@ -3091,6 +3121,12 @@ <h3>Method Details</h3>
       "bleuSpec": { # Spec for bleu score metric - calculates the precision of n-grams in the prediction as compared to reference - returns a score ranging between 0 to 1. # Spec for bleu metric.
         "useEffectiveOrder": True or False, # Optional. Whether to use_effective_order to compute bleu score.
       },
+      "computationBasedMetricSpec": { # Specification for a computation based metric. # Spec for a computation based metric.
+        "parameters": { # Optional. A map of parameters for the metric, e.g. {"rouge_type": "rougeL"}.
+          "a_key": "", # Properties of the object.
+        },
+        "type": "A String", # Required. The type of the computation based metric.
+      },
       "customCodeExecutionSpec": { # Specificies a metric that is populated by evaluating user-defined Python code. # Spec for Custom Code Execution metric.
         "evaluationFunction": "A String", # Required. Python function. Expected user to define the following function, e.g.: def evaluate(instance: dict[str, Any]) -> float: Please include this function signature in the code snippet. Instance is the evaluation instance, any fields populated in the instance are available to the function as instance[field_name]. Example: Example input: ``` instance= EvaluationInstance( response=EvaluationInstance.InstanceData(text="The answer is 4."), reference=EvaluationInstance.InstanceData(text="4") ) ``` Example converted input: ``` { 'response': {'text': 'The answer is 4.'}, 'reference': {'text': '4'} } ``` Example python function: ``` def evaluate(instance: dict[str, Any]) -> float: if instance['response'] == instance['reference']: return 1.0 return 0.0 ``` CustomCodeExecutionSpec is also supported in Batch Evaluation (EvalDataset RPC) and Tuning Evaluation. Each line in the input jsonl file will be converted to dict[str, Any] and passed to the evaluation function.
       },
@@ -4096,6 +4132,12 @@ <h3>Method Details</h3>
       },
       "metrics": [ # Required. The metrics to be calculated in the evaluation run.
         { # The metric used for evaluation runs.
+          "computationBasedMetricSpec": { # Specification for a computation based metric. # Spec for a computation based metric.
+            "parameters": { # Optional. A map of parameters for the metric, e.g. {"rouge_type": "rougeL"}.
+              "a_key": "", # Properties of the object.
+            },
+            "type": "A String", # Required. The type of the computation based metric.
+          },
           "llmBasedMetricSpec": { # Specification for an LLM based metric. # Spec for an LLM based metric.
             "additionalConfig": { # Optional. Optional additional configuration for the metric.
               "a_key": "", # Properties of the object.
@@ -4357,6 +4399,12 @@ <h3>Method Details</h3>
       "bleuSpec": { # Spec for bleu score metric - calculates the precision of n-grams in the prediction as compared to reference - returns a score ranging between 0 to 1. # Spec for bleu metric.
         "useEffectiveOrder": True or False, # Optional. Whether to use_effective_order to compute bleu score.
       },
+      "computationBasedMetricSpec": { # Specification for a computation based metric. # Spec for a computation based metric.
+        "parameters": { # Optional. A map of parameters for the metric, e.g. {"rouge_type": "rougeL"}.
+          "a_key": "", # Properties of the object.
+        },
+        "type": "A String", # Required. The type of the computation based metric.
+      },
       "customCodeExecutionSpec": { # Specificies a metric that is populated by evaluating user-defined Python code. # Spec for Custom Code Execution metric.
         "evaluationFunction": "A String", # Required. Python function. Expected user to define the following function, e.g.: def evaluate(instance: dict[str, Any]) -> float: Please include this function signature in the code snippet. Instance is the evaluation instance, any fields populated in the instance are available to the function as instance[field_name]. Example: Example input: ``` instance= EvaluationInstance( response=EvaluationInstance.InstanceData(text="The answer is 4."), reference=EvaluationInstance.InstanceData(text="4") ) ``` Example converted input: ``` { 'response': {'text': 'The answer is 4.'}, 'reference': {'text': '4'} } ``` Example python function: ``` def evaluate(instance: dict[str, Any]) -> float: if instance['response'] == instance['reference']: return 1.0 return 0.0 ``` CustomCodeExecutionSpec is also supported in Batch Evaluation (EvalDataset RPC) and Tuning Evaluation. Each line in the input jsonl file will be converted to dict[str, Any] and passed to the evaluation function.
       },
