feat: fetch provider models dynamically and fix isOpenSource pollution #12046
Merged

sestinj merged 1 commit into continuedev:main on Apr 8, 2026
Conversation
Contributor
3 issues found across 12 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="gui/src/forms/AddModelForm.tsx">
<violation number="1" location="gui/src/forms/AddModelForm.tsx:73">
P2: Late async model-fetch responses can overwrite state after provider switch, showing stale models from the previous provider.</violation>
</file>
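A common fix for this class of race (a hedged sketch; names like `selectProvider` and `fetchModelsForProvider` are illustrative stand-ins, not the actual `AddModelForm.tsx` code) is to tag each fetch with a request token and ignore any response whose token is no longer current:

```typescript
type Model = { id: string };

let latestRequest = 0;
let models: Model[] = []; // stand-in for the form's React state

async function selectProvider(
  provider: string,
  fetchModelsForProvider: (p: string) => Promise<Model[]>,
): Promise<void> {
  const requestId = ++latestRequest; // token identifying this fetch
  const result = await fetchModelsForProvider(provider);
  // The user switched providers again while we were waiting:
  // drop this stale response instead of overwriting newer state.
  if (requestId !== latestRequest) return;
  models = result;
}
```

In React this is usually expressed as an `ignore` flag flipped in the `useEffect` cleanup; the token version above is the same idea without the framework.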
<file name="core/llm/fetchModels.ts">
<violation number="1" location="core/llm/fetchModels.ts:100">
P2: Ollama model discovery relies on brittle scraping of `x-test-*` HTML markers, which can silently return empty/partial model lists when upstream markup changes.</violation>
</file>
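One way to make scrape-based discovery fail loudly instead of silently (an illustrative sketch; `extractModelNames` and its regex are hypothetical, not the PR's parser) is to treat non-empty markup that yields zero marker matches as an upstream-markup change:

```typescript
// Sketch: fail loudly when markup changes instead of returning [].
function extractModelNames(html: string): string[] {
  // Hypothetical marker-based extraction, mirroring the x-test-* approach.
  const matches = [...html.matchAll(/x-test-model-title[^>]*title="([^"]+)"/g)];
  const names = matches.map((m) => m[1]);
  if (html.length > 0 && names.length === 0) {
    // Markup was present but no markers matched: the upstream HTML
    // has likely changed, so surface an error rather than an empty list.
    throw new Error("Ollama library markup changed; no x-test-* markers found");
  }
  return names;
}
```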
<file name="gui/src/pages/AddNewModel/configs/providers.ts">
<violation number="1" location="gui/src/pages/AddNewModel/configs/providers.ts:77">
P2: Ollama catch path resets `packages` but not `popularPackages`, allowing stale popular-model state to persist across re-initializations.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review, or fix all with cubic.
Contributor
You're iterating quickly on this pull request. To help protect your rate limits, cubic has paused automatic reviews on new pushes for now; when you're ready for another review, comment
Force-pushed from ebb3457 to a623767
Force-pushed from 5a414af to bcd29eb
Summary
`isOpenSource` is used to detect whether a model can be used by Ollama. This PR replaces it with explicit `providers` listings and providers' `listModels` methods (or custom fetchers for Gemini/Anthropic). A small refresh icon ("update models list using my API key") appears once an API key has been entered. Does some sensible filtering and model-parameter detection for context length, max tokens, and tool use. Fetched results are deduplicated against hardcoded models (hardcoded take precedence) and show the proper provider icons etc.

Summary by cubic
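The deduplication rule from the summary above (hardcoded models take precedence over fetched ones) can be sketched as follows; `mergeModels` is an illustrative helper, not the PR's actual code:

```typescript
// Sketch of the described dedup: fetched models are merged with
// hardcoded ones, and hardcoded entries win on ID collisions.
interface ModelInfo {
  id: string;
  contextLength?: number;
}

function mergeModels(hardcoded: ModelInfo[], fetched: ModelInfo[]): ModelInfo[] {
  const byId = new Map<string, ModelInfo>();
  // Insert fetched entries first so hardcoded entries overwrite duplicates.
  for (const m of fetched) byId.set(m.id, m);
  for (const m of hardcoded) byId.set(m.id, m);
  return [...byId.values()];
}
```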
Unifies dynamic model fetching across providers via a core `models/fetch` protocol, auto-loading `ollama` and `openrouter`, and letting users refresh other providers with their API key. Persists fetched context, max tokens, and capabilities to `config.yaml`, adds GPT-5.4 (pro/mini) and Gemma 4, and removes `isOpenSource`-based routing.

New Features

- `models/fetch`: `ollama` scrapes the library (icons), `openrouter` hits its public API on load, `anthropic` calls `/v1/models`, `gemini` calls `v1beta/models` (chat-only; excludes 2.0, Gemma, embeddings/TTS/robotics/etc.), and other providers use `listModels()` with optional `apiKey`/`apiBase`; returns `contextLength`, `maxTokens`, and `supportsTools`; errors toast and return `[]`.
- `ollama`/`openrouter` load automatically; a refresh button appears after an API key is entered to fetch that provider's models; results are merged and de-duplicated into "Additional models" and sorted.
- Persists `contextLength`, `completionOptions.maxTokens`, and `capabilities` (`tool_use`, `image_input`) to `config.yaml` when adding a model.
- Adds `gpt-5.4` (`pro`, `mini`) and `gemma4`; improves image/tool detection for Gemma 4 and future GPT 5+ models.

Bug Fixes
- `OpenAI.listModels()`: supports providers returning either a plain array or `{ data: [] }`; error text now references `config.yaml`.
- Fixes behavior around `RequiresApiKey` and after adding a model; selects the new model by posting directly to core.
- Replaces `isOpenSource` logic with explicit `providerOptions` and `popularPackages`.
- Updates model detection (`gpt-[5-9]`, Claude 4+, Grok 4.x+, GLM 4+); OpenAI checks use `isOSeriesOrGpt5PlusModel`.

Written for commit bcd29eb. Summary will update on new commits.
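The response-shape fix for `OpenAI.listModels()` described above can be sketched as a small normalization step (hedged; `normalizeModelList` is an illustrative name, and the PR's real implementation may differ):

```typescript
interface ModelEntry {
  id: string;
}

// Accept either a bare array or an OpenAI-style `{ data: [...] }` envelope.
function normalizeModelList(body: unknown): ModelEntry[] {
  if (Array.isArray(body)) return body as ModelEntry[];
  if (body && typeof body === "object" && Array.isArray((body as any).data)) {
    return (body as any).data as ModelEntry[];
  }
  // Neither shape matched: surface an error pointing at config.yaml,
  // as the updated error text does.
  throw new Error("Unexpected /models response shape; check provider settings in config.yaml");
}
```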