fix: support custom model protocols (e.g., zai-org/GLM-4.7) #664
jedi108 wants to merge 1 commit into sipeed:main
Conversation
nikolasdehor left a comment
Reasonable approach — falling back to OpenAI-compatible for unknown protocols is better than a hard error.
Two observations:
- Validation gap: The check `cfg.APIKey == "" && cfg.APIBase == ""` uses AND, meaning that if a user provides `api_base` but no `api_key`, it will pass through. That is probably fine (some local endpoints are unauthenticated), but worth noting that it is intentional.
- Missing test: There is no test for the new `default` branch. At minimum, tests covering (a) an unknown protocol with `api_base` set, (b) an unknown protocol with neither key nor base (expecting the error), and (c) verifying that `cfg.Model` is returned as the model ID would be good additions. Since this changes error behavior for all unknown prefixes, test coverage is important.
Otherwise, the change is backward-compatible and solves the reported issue.
Let's merge this PR, we need this support.
🔍 Deep Code Review
Thanks for the PR! Here's a detailed review of the changes.
🚨 Critical Issues (MUST FIX)
1. Test Will Fail - Test Case Not Updated
```go
// factory_provider_test.go:223-234
func TestCreateProviderFromConfig_UnknownProtocol(t *testing.T) {
	cfg := &config.ModelConfig{
		ModelName: "test-unknown",
		Model:     "unknown-protocol/model",
		APIKey:    "test-key", // <-- APIKey is provided
	}
	_, _, err := CreateProviderFromConfig(cfg)
	if err == nil {
		t.Fatal("CreateProviderFromConfig() expected error for unknown protocol") // <-- This will fail!
	}
}
```
With this PR's change, when `APIKey` is provided for an unknown protocol, the fallback succeeds and no error is returned, so this test fails.
2. Wrong Constructor Used - Missing `RequestTimeout` Support
| Type | Count |
|---|---|
| 🚨 Critical Issues | 2 |
| | 3 |
| 💡 Suggestions | 1 |
Recommendation: Not ready to merge. Please address:
- Update or remove the `TestCreateProviderFromConfig_UnknownProtocol` test
- Use `NewHTTPProviderWithMaxTokensFieldAndRequestTimeout` to support `RequestTimeout`
- Consider whether returning the full model name is necessary; if so, update the function documentation
- Add a warning log for unknown protocols
- Consider requiring `api_base` for unknown protocols
Force-pushed from d98b9d2 to 9776ab4
Treat unknown protocol prefixes (e.g., zai-org/GLM-4.7) as OpenAI-compatible HTTP providers instead of returning an error. The full model string is passed to the API for unknown protocols. Added tests for unknown protocol with API key, API base, and missing auth scenarios. Fixes sipeed#661 Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Force-pushed from 9776ab4 to dca3233
Thanks for the detailed review @nikolasdehor! All points have been addressed.
Also updated to use `NewHTTPProviderWithMaxTokensFieldAndRequestTimeout`.
@yinwm would appreciate your thoughts since you reviewed the provider area recently; happy to adjust anything.
@jedi108 Hi! This PR has had no activity for over 2 weeks, so I'm closing it for now to keep things organized. Feel free to reopen anytime if you'd like to continue.
Problem
When using custom models with unknown protocols (e.g., `zai-org/GLM-4.7`), the system returns an error:
```
Error creating provider: failed to create provider for model "zai-org/GLM-4.7": unknown protocol "zai-org" in model "zai-org/GLM-4.7"
```
Root Cause
In `pkg/providers/factory_provider.go`, the `ExtractProtocol()` function splits the model string at the first `/` character. For `"zai-org/GLM-4.7"`, this yields the protocol `"zai-org"` and the model `"GLM-4.7"`.
Since `"zai-org"` is not in the list of supported protocols, the system returns an error.
Solution
Added a `default` case in `CreateProviderFromConfig()` that treats unknown protocols as OpenAI-compatible HTTP providers instead of returning an error:
```go
default:
// Treat unknown protocols as OpenAI-compatible
logger.WarnF("unknown protocol, falling back to OpenAI-compatible HTTP provider",
map[string]any{"protocol": protocol, "model": cfg.Model})
if cfg.APIKey == "" && cfg.APIBase == "" {
return nil, "", fmt.Errorf("api_key or api_base is required for HTTP-based protocol %q", protocol)
}
apiBase := cfg.APIBase
if apiBase == "" {
apiBase = getDefaultAPIBase("openai")
}
return NewHTTPProviderWithMaxTokensFieldAndRequestTimeout(
cfg.APIKey, apiBase, cfg.Proxy, cfg.MaxTokensField, cfg.RequestTimeout,
), cfg.Model, nil
```
Key points:
- Unknown protocols now log a warning and fall back to the OpenAI-compatible HTTP provider.
- An error is still returned if neither `api_key` nor `api_base` is set.
- If `api_base` is empty, the OpenAI default base URL is used.
- The full model string (e.g., `zai-org/GLM-4.7`) is passed through as the model ID.
Configuration
```json
{
"model_name": "zai-org/GLM-4.7",
"model": "zai-org/GLM-4.7",
"api_base": "https://your-custom-api-endpoint.com/v1",
"api_key": "..."
}
```
Testing
Added unit tests covering all three scenarios:
- unknown protocol with `api_key` set (provider created, full model name returned)
- unknown protocol with `api_base` set (falls back without a key)
- unknown protocol with neither key nor base (returns the required-auth error)
Backward Compatibility
All previously supported protocols are handled exactly as before; only the unknown-protocol path changes, from a hard error to the OpenAI-compatible fallback.
Fixes #661