Add DI get GPUModelConfig#968

Merged
DO-rrao merged 2 commits into digitalocean:main from
Rachana888:feature/dedicated-inference-gpu-model-config
Mar 13, 2026
Conversation

@Rachana888
Contributor

Summary

  • Adds support for retrieving supported GPU model configurations for Dedicated Inference
  • Implements GET /v2/dedicated-inferences/gpu-model-config endpoint

Changes

  • Added GetGPUModelConfig method to DedicatedInferenceService interface
  • Added response types:
    • DedicatedInferenceGPUModelConfigResponse – top-level response containing gpu_model_configs
    • DedicatedInferenceGPUModelConfig – configuration including gpu_slugs, model_slug, model_name, and is_model_gated
  • Implemented GetGPUModelConfig method in DedicatedInferenceServiceOp
  • Added unit test TestDedicatedInference_GetGPUModelConfig

API Specification

Method: GET
Endpoint: /v2/dedicated-inferences/gpu-model-config
Response: 200 OK with GPU model configs

Testing

Command: TestDedicatedInference_GetGPUModelConfig

Results

=== RUN   TestDedicatedInference_GetGPUModelConfig
--- PASS: TestDedicatedInference_GetGPUModelConfig (0.00s)
PASS
ok      github.com/digitalocean/godo    1.454s

Unit Tests Added

  • TestDedicatedInference_GetGPUModelConfig – Verifies correct parsing of all response fields

@DO-rrao DO-rrao merged commit 6b3456d into digitalocean:main Mar 13, 2026
9 checks passed

