GPT-OSS models were receiving the `image_detail` parameter even though they don't support it, causing errors like:

```json
{ "id": "1766681048491-1", "status": "error", "error": "property 'image_detail' is unsupported" }
```
The code applied transformations to every parameter without first checking whether the target model supports them. If `imageDetail` was set in the options (for example, left over from a previous model selection or from default settings), it was included in the transformation and sent to models that reject it.
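For contrast, a minimal sketch of the pre-fix flow; the function name and body shape are illustrative, not the actual `groq.ts` code:

```typescript
type Params = Record<string, string | undefined>;

// Hypothetical pre-fix behavior: transformed parameters were copied
// into the request body without checking model support.
function buildRequestBodyOld(modelId: string, transformed: Params): Params {
  const body: Params = { model: modelId };
  if (transformed.imageDetail !== undefined) {
    // BUG: added even for models that reject image_detail
    body.image_detail = transformed.imageDetail;
  }
  return body;
}

// GPT-OSS receives image_detail, and the API responds with
// "property 'image_detail' is unsupported"
const body = buildRequestBodyOld("openai/gpt-oss-120b", { imageDetail: "auto" });
console.log("image_detail" in body); // true (this is the bug)
```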
The fix ensures that parameters are only added to the request if the model explicitly supports them:
```typescript
// Transform parameters for this specific model
const transformedParams = transformParametersForAPI(model, {
  reasoningEffort: options?.reasoningEffort,
  imageDetail: options?.imageDetail,
  webSearch: options?.webSearch,
  codeExecution: options?.codeExecution,
});

// Add parameters ONLY if the model supports them
if (transformedParams.reasoningEffort && modelSupportsParameter(model, 'reasoningEffort')) {
  requestBody.reasoning_effort = transformedParams.reasoningEffort;
}

// Image detail ONLY for vision models
if (transformedParams.imageDetail && modelSupportsParameter(model, 'imageDetail')) {
  requestBody.image_detail = transformedParams.imageDetail;
}
```
Only Llama 4 Vision models have the `imageDetail` parameter:

✅ Support `imageDetail`:
- meta-llama/llama-4-maverick-17b-128e-instruct
- meta-llama/llama-4-scout-17b-16e-instruct

❌ DON'T support `imageDetail`:
- openai/gpt-oss-120b
- openai/gpt-oss-20b
- llama-3.3-70b-versatile
- qwen/qwen3-32b

The `modelSupportsParameter()` function checks whether a parameter exists in the model's configuration:
```typescript
function modelSupportsParameter(modelId: string, parameterKey: string): boolean {
  const config = getModelConfig(modelId);
  if (!config) return false;
  return config.parameters.some(p => p.key === parameterKey);
}
```
GPT-OSS Model (no `imageDetail`):
1. User options include `imageDetail: 'auto'`
2. Transform: `imageDetail: 'auto'` (no mapping needed)
3. Check: `modelSupportsParameter('openai/gpt-oss-120b', 'imageDetail')` → `false`
4. Result: `image_detail` NOT added to request ✓

Llama 4 Vision Model (has `imageDetail`):
1. User options include `imageDetail: 'auto'`
2. Transform: `imageDetail: 'auto'`
3. Check: `modelSupportsParameter('meta-llama/llama-4-maverick-17b-128e-instruct', 'imageDetail')` → `true`
4. Result: `image_detail: 'auto'` added to request ✓
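The two walkthroughs above can be condensed into a runnable sketch; the support table and the `buildBody` helper are stand-ins for the actual `groq.ts` code:

```typescript
// Stand-in support table taken from the model list in this writeup.
const SUPPORTS_IMAGE_DETAIL = new Set([
  "meta-llama/llama-4-maverick-17b-128e-instruct",
  "meta-llama/llama-4-scout-17b-16e-instruct",
]);

function supportsImageDetail(modelId: string): boolean {
  return SUPPORTS_IMAGE_DETAIL.has(modelId);
}

// Guarded add: image_detail only reaches the body for vision models.
function buildBody(modelId: string, imageDetail?: string): Record<string, string> {
  const body: Record<string, string> = { model: modelId };
  if (imageDetail && supportsImageDetail(modelId)) {
    body.image_detail = imageDetail;
  }
  return body;
}

console.log("image_detail" in buildBody("openai/gpt-oss-120b", "auto")); // false
console.log("image_detail" in buildBody("meta-llama/llama-4-maverick-17b-128e-instruct", "auto")); // true
```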
Changed: `backend/providers/groq.ts` (`complete()` and `stream()` methods)

Run verification:

```shell
deno run backend/providers/test-parameter-validation.ts
```
All tests pass ✅
This same pattern fixes:
- `reasoning_effort` mapping (Qwen: `medium` → `default`)
- `image_detail` support check (only vision models)
- `web_search` support check (only compound/thinking models)
- `code_execution` support check (only compound/thinking models)

The parameter validation system now handles all model-specific quirks declaratively through the model configuration.
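As one illustration, the Qwen `reasoning_effort` quirk noted above could be handled by a mapping like the following; the function name and the prefix check are assumptions, not the actual `transformParametersForAPI` implementation:

```typescript
// Hedged sketch: per-model value mapping for reasoning effort.
// Qwen models use "default" where other models accept "medium".
function mapReasoningEffort(modelId: string, effort?: string): string | undefined {
  if (effort === undefined) return undefined;
  if (modelId.startsWith("qwen/") && effort === "medium") return "default";
  return effort;
}

console.log(mapReasoningEffort("qwen/qwen3-32b", "medium"));     // "default"
console.log(mapReasoningEffort("openai/gpt-oss-120b", "medium")); // "medium"
```

Keeping such mappings next to the model configuration (rather than scattered through request-building code) is what makes the system declarative: adding a new model quirk means editing data, not control flow.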