GPT-OSS models were receiving the `image_detail` parameter even though they don't support it, causing:
{ "id": "1766681048491-1", "status": "error", "error": "property 'image_detail' is unsupported" }
The code applied transformations to ALL parameters and copied the results into the request without first checking whether the target model supports them. So if `imageDetail` was set in the options (perhaps left over from a previous model selection or from default settings), it was included in the transformation and sent to models that don't support it.
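A minimal sketch of the buggy shape, reconstructed from this description (the helper names match the fixed snippet below; the exact original code may have differed):

```typescript
// BEFORE (reconstructed): transformed parameters were copied into the
// request body with no per-model support check.
const transformedParams = transformParametersForAPI(model, {
  reasoningEffort: options?.reasoningEffort,
  imageDetail: options?.imageDetail,
  webSearch: options?.webSearch,
  codeExecution: options?.codeExecution,
});

// A stale imageDetail (e.g. from a previous vision-model selection)
// still reached GPT-OSS here, triggering the error above.
if (transformedParams.imageDetail) {
  requestBody.image_detail = transformedParams.imageDetail;
}
```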
The fix ensures that parameters are only added to the request if the model explicitly supports them:
```typescript
// Transform parameters for this specific model
const transformedParams = transformParametersForAPI(model, {
  reasoningEffort: options?.reasoningEffort,
  imageDetail: options?.imageDetail,
  webSearch: options?.webSearch,
  codeExecution: options?.codeExecution,
});

// Add parameters ONLY if the model supports them
if (transformedParams.reasoningEffort && modelSupportsParameter(model, 'reasoningEffort')) {
  requestBody.reasoning_effort = transformedParams.reasoningEffort;
}

// Image detail ONLY for vision models
if (transformedParams.imageDetail && modelSupportsParameter(model, 'imageDetail')) {
  requestBody.image_detail = transformedParams.imageDetail;
}
```
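`transformParametersForAPI` itself isn't shown in this section; a hedged sketch of the kind of mapping it performs, assuming the Qwen `medium` → `default` rename mentioned at the end of this section:

```typescript
// Hypothetical sketch: normalize UI-level option values to what a
// specific model's API expects. Names and mappings are assumptions.
interface ParameterOptions {
  reasoningEffort?: string;
  imageDetail?: string;
  webSearch?: boolean;
  codeExecution?: boolean;
}

function transformParametersForAPI(modelId: string, opts: ParameterOptions): ParameterOptions {
  const out: ParameterOptions = { ...opts };
  // Qwen models accept 'default' where other models accept 'medium'
  // (the reasoning_effort mapping listed under "This same pattern fixes").
  if (modelId.startsWith('qwen/') && out.reasoningEffort === 'medium') {
    out.reasoningEffort = 'default';
  }
  return out;
}
```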
Only the Llama 4 Vision models support the `imageDetail` parameter:
✅ Support `imageDetail`:
- `meta-llama/llama-4-maverick-17b-128e-instruct`
- `meta-llama/llama-4-scout-17b-16e-instruct`

❌ DON'T support `imageDetail`:
- `openai/gpt-oss-120b`
- `openai/gpt-oss-20b`
- `llama-3.3-70b-versatile`
- `qwen/qwen3-32b`
- All other non-vision models
The `modelSupportsParameter()` function checks if a parameter exists in the model's configuration:

```typescript
function modelSupportsParameter(modelId: string, parameterKey: string): boolean {
  const config = getModelConfig(modelId);
  if (!config) return false;
  return config.parameters.some(p => p.key === parameterKey);
}
```
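The config shape this reads from isn't shown here; a minimal sketch consistent with the check (model IDs come from the lists above, but the per-model parameter entries are assumptions):

```typescript
// Hypothetical config shape: each model declares the parameters it supports,
// so support checks stay declarative and data-driven.
interface ModelConfig {
  id: string;
  parameters: { key: string }[]; // e.g. 'imageDetail', 'reasoningEffort'
}

const MODEL_CONFIGS: Record<string, ModelConfig> = {
  'meta-llama/llama-4-maverick-17b-128e-instruct': {
    id: 'meta-llama/llama-4-maverick-17b-128e-instruct',
    parameters: [{ key: 'imageDetail' }, { key: 'reasoningEffort' }],
  },
  'openai/gpt-oss-120b': {
    id: 'openai/gpt-oss-120b',
    parameters: [{ key: 'reasoningEffort' }], // no imageDetail entry
  },
};

function getModelConfig(modelId: string): ModelConfig | undefined {
  return MODEL_CONFIGS[modelId];
}
```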
GPT-OSS Model (no `imageDetail`):
1. User options include `imageDetail: 'auto'`
2. Transform: `imageDetail: 'auto'` (no mapping needed)
3. Check: `modelSupportsParameter('openai/gpt-oss-120b', 'imageDetail')` → `false`
4. Result: `image_detail` NOT added to the request ✓
Llama 4 Vision Model (has `imageDetail`):
1. User options include `imageDetail: 'auto'`
2. Transform: `imageDetail: 'auto'`
3. Check: `modelSupportsParameter('meta-llama/llama-4-maverick-17b-128e-instruct', 'imageDetail')` → `true`
4. Result: `image_detail: 'auto'` added to the request ✓
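The two walkthroughs translate directly into assertions; a minimal sketch (the actual contents of `test-parameter-validation.ts` aren't shown here, and the import path for `modelSupportsParameter` is assumed):

```typescript
// Hypothetical assertions mirroring the walkthroughs above.
import { assertEquals } from 'jsr:@std/assert';
import { modelSupportsParameter } from './groq.ts'; // assumed export

// GPT-OSS: imageDetail unsupported → image_detail never added
assertEquals(modelSupportsParameter('openai/gpt-oss-120b', 'imageDetail'), false);

// Llama 4 Vision: imageDetail supported → image_detail: 'auto' passes through
assertEquals(
  modelSupportsParameter('meta-llama/llama-4-maverick-17b-128e-instruct', 'imageDetail'),
  true,
);

console.log('parameter gating checks pass ✅');
```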
- No More Unsupported Parameter Errors: Parameters only sent when supported
- Type-Safe Configuration: Model configs define what's supported
- No Code Changes Needed: Works automatically for all models
- Clear Separation: Vision parameters vs thinking parameters vs base parameters
`backend/providers/groq.ts`:
- Fixed parameter checking logic
- Added clear comments explaining when each parameter is used
- Applied the same pattern to both `complete()` and `stream()` methods
Run verification:

```bash
deno run backend/providers/test-parameter-validation.ts
```

All tests pass ✅
This same pattern fixes:
- ✅ `reasoning_effort` mapping (Qwen: `medium` → `default`)
- ✅ `image_detail` support check (only vision models)
- ✅ `web_search` support check (only compound/thinking models)
- ✅ `code_execution` support check (only compound/thinking models)
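Applied uniformly, the remaining checks follow the same gating shape as the fixed snippet above (the wire-format names come from the list; the `webSearch`/`codeExecution` config keys are assumptions):

```typescript
// Same gating pattern for the compound/thinking-model-only parameters.
if (transformedParams.webSearch && modelSupportsParameter(model, 'webSearch')) {
  requestBody.web_search = transformedParams.webSearch;
}
if (transformedParams.codeExecution && modelSupportsParameter(model, 'codeExecution')) {
  requestBody.code_execution = transformedParams.codeExecution;
}
```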
The parameter validation system now handles all model-specific quirks declaratively through the model configuration.
