Mirror of https://github.com/danny-avila/LibreChat.git, synced 2025-12-17 00:40:14 +01:00
✨ feat: Add subfolder support for model specs (#9165)
- Add optional `folder` field to TModelSpec type and schema
- Create ModelSpecFolder component for hierarchical display
- Implement expand/collapse functionality for folders
- Update search results to show folder context
- Sort folders alphabetically and specs by order/label
- Maintain backward compatibility (specs without folders appear at root)

This enhancement allows organizing model specs into categories/folders for better organization and an improved user experience when dealing with many model configurations.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
parent 543b617e1c
commit 4fe600990c

7 changed files with 439 additions and 3 deletions

MODEL_SPEC_FOLDERS.md (new file, 147 lines)
@@ -0,0 +1,147 @@
# Model Spec Subfolder Support

This enhancement adds the ability to organize model specs into subfolders/categories for better organization and user experience.

## Feature Overview

Model specs can now be grouped into folders by adding an optional `folder` field to each spec. This helps organize related models together, making it easier for users to find and select the appropriate model for their needs.

## Configuration

### Basic Usage

Add a `folder` field to any model spec in your `librechat.yaml`:

```yaml
modelSpecs:
  list:
    - name: "gpt4_turbo"
      label: "GPT-4 Turbo"
      folder: "OpenAI Models" # This spec will appear under "OpenAI Models" folder
      preset:
        endpoint: "openAI"
        model: "gpt-4-turbo-preview"
```

### Folder Structure

- **With Folder**: Model specs with the `folder` field will be grouped under that folder name
- **Without Folder**: Model specs without the `folder` field appear at the root level
- **Multiple Folders**: You can create as many folders as needed to organize your models
- **Alphabetical Sorting**: Folders are sorted alphabetically, and specs within folders are sorted by their `order` field or label (see the sketch below)
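To make the sorting rule concrete, here is a minimal TypeScript sketch of the comparator this implies; `compareSpecs` is an illustrative name, and the actual logic lives in the `ModelSpecFolder` component shown later in this commit:

```typescript
import type { TModelSpec } from 'librechat-data-provider';

// Specs that both define a numeric `order` are compared by it;
// otherwise the comparison falls back to a case-insensitive label sort.
function compareSpecs(a: TModelSpec, b: TModelSpec): number {
  if (a.order !== undefined && b.order !== undefined) {
    return a.order - b.order;
  }
  return a.label.toLowerCase().localeCompare(b.label.toLowerCase());
}

// Folder names themselves are sorted alphabetically, case-insensitively:
const folders = ['OpenAI Models', 'Anthropic Models'].sort((a, b) =>
  a.toLowerCase().localeCompare(b.toLowerCase()),
); // => ['Anthropic Models', 'OpenAI Models']
```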
### Example Configuration

```yaml
modelSpecs:
  list:
    # OpenAI Models Category
    - name: "gpt4_turbo"
      label: "GPT-4 Turbo"
      folder: "OpenAI Models"
      preset:
        endpoint: "openAI"
        model: "gpt-4-turbo-preview"

    - name: "gpt35_turbo"
      label: "GPT-3.5 Turbo"
      folder: "OpenAI Models"
      preset:
        endpoint: "openAI"
        model: "gpt-3.5-turbo"

    # Anthropic Models Category
    - name: "claude3_opus"
      label: "Claude 3 Opus"
      folder: "Anthropic Models"
      preset:
        endpoint: "anthropic"
        model: "claude-3-opus-20240229"

    # Root level model (no folder)
    - name: "quick_chat"
      label: "Quick Chat"
      preset:
        endpoint: "openAI"
        model: "gpt-3.5-turbo"
```

## UI Features

### Folder Display

- Folders are displayed with expand/collapse functionality
- Folder icons change between open/closed states
- Indentation shows the hierarchy clearly

### Search Integration

- When searching for models, the folder path is shown for context
- Search works across all models, regardless of folder structure (see the sketch below)
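As a rough illustration of that behavior, a search filter of this shape would match specs everywhere and prefix matches with their folder; `searchSpecs` is a hypothetical helper, not the component's actual function:

```typescript
import type { TModelSpec } from 'librechat-data-provider';

// Hypothetical helper: match on label or name across every spec, ignoring
// folder boundaries; the folder is only used to prefix the displayed label.
function searchSpecs(specs: TModelSpec[], query: string) {
  const q = query.toLowerCase();
  return specs
    .filter((spec) => spec.label.toLowerCase().includes(q) || spec.name.toLowerCase().includes(q))
    .map((spec) => ({
      spec,
      display: spec.folder ? `${spec.folder} / ${spec.label}` : spec.label,
    }));
}
```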
### User Experience

- Folders start expanded by default for easy access
- Click on the folder header to expand or collapse it
- Selected model is highlighted with a checkmark
- Folder state is preserved during the session

## Benefits

1. **Better Organization**: Group related models together (e.g., by provider, capability, or use case)
2. **Improved Navigation**: Users can quickly find models in organized categories
3. **Scalability**: Handles large numbers of model specs without overwhelming the UI
4. **Backward Compatible**: Existing configurations without folders continue to work
5. **Flexible Structure**: Mix foldered and non-foldered specs as needed

## Use Cases

### By Provider

```yaml
folder: "OpenAI Models"
folder: "Anthropic Models"
folder: "Google Models"
```

### By Capability

```yaml
folder: "Vision Models"
folder: "Code Models"
folder: "Creative Writing"
```

### By Performance Tier

```yaml
folder: "Premium Models"
folder: "Standard Models"
folder: "Budget Models"
```

### By Department/Team

```yaml
folder: "Engineering Team"
folder: "Marketing Team"
folder: "Research Team"
```

## Implementation Details

### Type Changes

- Added an optional `folder?: string` field to the `TModelSpec` type
- Updated `tModelSpecSchema` to validate the new `folder` field (sketched below)
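A minimal runnable sketch of just the new pieces (the real `TModelSpec` type and `tModelSpecSchema` carry many more fields; see the diff at the end of this commit):

```typescript
import { z } from 'zod';

// Only the added field is shown here; everything else on TModelSpec is unchanged.
type ModelSpecFolderField = {
  folder?: string; // Optional folder/category for grouping model specs
};

const folderFieldSchema = z.object({
  folder: z.string().optional(),
});

// Specs without `folder` still validate, so existing configs keep working.
const withFolder: ModelSpecFolderField = { folder: 'OpenAI Models' };
console.log(folderFieldSchema.parse(withFolder)); // { folder: 'OpenAI Models' }
console.log(folderFieldSchema.parse({}));         // {}
```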
### Components

- Created the `ModelSpecFolder` component for rendering the folder structure
- Updated `ModelSelector` to use folder-aware rendering
- Enhanced search results to show folder context

### Behavior

- Folders are collapsible, with expand/collapse state tracked per folder
- Models are sorted within folders by `order` or label
- Root-level models appear after all folders (see the grouping sketch below)
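For the data shape only, the grouping step works roughly like this (a sketch; the actual implementation is `renderModelSpecsWithFolders` in the component file below):

```typescript
import type { TModelSpec } from 'librechat-data-provider';

// Split specs into folder buckets plus a root-level list; folders render
// first (alphabetically), and root-level specs render after them.
function groupByFolder(specs: TModelSpec[]) {
  const folders = new Map<string, TModelSpec[]>();
  const rootSpecs: TModelSpec[] = [];
  for (const spec of specs) {
    if (spec.folder) {
      const bucket = folders.get(spec.folder) ?? [];
      bucket.push(spec);
      folders.set(spec.folder, bucket);
    } else {
      rootSpecs.push(spec);
    }
  }
  return { folders, rootSpecs };
}
```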
## Migration

No migration is needed: the feature is fully backward compatible. Existing model specs without the `folder` field will continue to work and appear at the root level.

## See Also

- `librechat.example.subfolder.yaml` - Complete example configuration
- GitHub Issue #9165 - Original feature request
@@ -2,7 +2,7 @@ import React, { useMemo } from 'react';
 import type { ModelSelectorProps } from '~/common';
 import { ModelSelectorProvider, useModelSelectorContext } from './ModelSelectorContext';
 import { ModelSelectorChatProvider } from './ModelSelectorChatContext';
-import { renderModelSpecs, renderEndpoints, renderSearchResults } from './components';
+import { renderModelSpecsWithFolders, renderEndpoints, renderSearchResults } from './components';
 import { getSelectedIcon, getDisplayValue } from './utils';
 import { CustomMenu as Menu } from './CustomMenu';
 import DialogManager from './DialogManager';

@@ -86,7 +86,7 @@ function ModelSelectorContent() {
         renderSearchResults(searchResults, localize, searchValue)
       ) : (
         <>
-          {renderModelSpecs(modelSpecs, selectedValues.modelSpec || '')}
+          {renderModelSpecsWithFolders(modelSpecs, selectedValues.modelSpec || '')}
           {renderEndpoints(mappedEndpoints ?? [])}
         </>
       )}
@@ -0,0 +1,132 @@
import React, { useState } from 'react';
import { ChevronDown, ChevronRight, Folder, FolderOpen } from 'lucide-react';
import type { TModelSpec } from 'librechat-data-provider';
import { ModelSpecItem } from './ModelSpecItem';
import { cn } from '~/utils';

interface ModelSpecFolderProps {
  folderName: string;
  specs: TModelSpec[];
  selectedSpec: string;
  level?: number;
}

export function ModelSpecFolder({
  folderName,
  specs,
  selectedSpec,
  level = 0,
}: ModelSpecFolderProps) {
  const [isExpanded, setIsExpanded] = useState(true);

  const handleToggle = (e: React.MouseEvent) => {
    e.stopPropagation();
    setIsExpanded(!isExpanded);
  };

  const indent = level * 16;

  return (
    <div className="w-full">
      <button
        onClick={handleToggle}
        className={cn(
          'flex w-full items-center gap-1 rounded-lg px-2 py-1.5 text-sm hover:bg-surface-hover',
          'text-text-secondary transition-colors',
        )}
        style={{ paddingLeft: `${8 + indent}px` }}
      >
        <span className="flex-shrink-0">
          {isExpanded ? (
            <ChevronDown className="h-3 w-3" />
          ) : (
            <ChevronRight className="h-3 w-3" />
          )}
        </span>
        <span className="flex-shrink-0">
          {isExpanded ? (
            <FolderOpen className="h-3.5 w-3.5" />
          ) : (
            <Folder className="h-3.5 w-3.5" />
          )}
        </span>
        <span className="truncate text-left font-medium">{folderName}</span>
      </button>
      {isExpanded && (
        <div className="mt-0.5">
          {specs.map((spec) => (
            <div key={spec.name} style={{ paddingLeft: `${indent}px` }}>
              <ModelSpecItem spec={spec} isSelected={selectedSpec === spec.name} />
            </div>
          ))}
        </div>
      )}
    </div>
  );
}

interface GroupedSpecs {
  [folder: string]: TModelSpec[];
}

export function renderModelSpecsWithFolders(specs: TModelSpec[], selectedSpec: string) {
  if (!specs || specs.length === 0) {
    return null;
  }

  // Group specs by folder
  const grouped: GroupedSpecs = {};
  const rootSpecs: TModelSpec[] = [];

  specs.forEach((spec) => {
    if (spec.folder) {
      if (!grouped[spec.folder]) {
        grouped[spec.folder] = [];
      }
      grouped[spec.folder].push(spec);
    } else {
      rootSpecs.push(spec);
    }
  });

  // Sort folders alphabetically
  const sortedFolders = Object.keys(grouped).sort((a, b) =>
    a.toLowerCase().localeCompare(b.toLowerCase()),
  );

  // Sort specs within each folder by order or label
  sortedFolders.forEach((folder) => {
    grouped[folder].sort((a, b) => {
      if (a.order !== undefined && b.order !== undefined) {
        return a.order - b.order;
      }
      return a.label.toLowerCase().localeCompare(b.label.toLowerCase());
    });
  });

  // Sort root specs
  rootSpecs.sort((a, b) => {
    if (a.order !== undefined && b.order !== undefined) {
      return a.order - b.order;
    }
    return a.label.toLowerCase().localeCompare(b.label.toLowerCase());
  });

  return (
    <>
      {/* Render folders first */}
      {sortedFolders.map((folder) => (
        <ModelSpecFolder
          key={folder}
          folderName={folder}
          specs={grouped[folder]}
          selectedSpec={selectedSpec}
        />
      ))}
      {/* Render root level specs */}
      {rootSpecs.map((spec) => (
        <ModelSpecItem key={spec.name} spec={spec} isSelected={selectedSpec === spec.name} />
      ))}
    </>
  );
}
@@ -67,7 +67,12 @@ export function SearchResults({ results, localize, searchValue }: SearchResultsP
         </div>
       )}
       <div className="flex min-w-0 flex-col gap-1">
-        <span className="truncate text-left">{spec.label}</span>
+        <span className="truncate text-left">
+          {spec.folder && (
+            <span className="text-xs text-text-tertiary">{spec.folder} / </span>
+          )}
+          {spec.label}
+        </span>
         {spec.description && (
           <span className="break-words text-xs font-normal">{spec.description}</span>
         )}
@@ -1,4 +1,5 @@
 export * from './ModelSpecItem';
+export * from './ModelSpecFolder';
 export * from './EndpointModelItem';
 export * from './EndpointItem';
 export * from './SearchResults';
librechat.example.subfolder.yaml (new file, 149 lines)
@@ -0,0 +1,149 @@
# Example configuration demonstrating model spec subfolder/category support
# This shows how to organize model specs into folders for better organization

version: 1.1.7

modelSpecs:
  enforce: false
  prioritize: true
  list:
    # OpenAI Models Category
    - name: "gpt4_turbo"
      label: "GPT-4 Turbo"
      folder: "OpenAI Models" # This spec will appear under "OpenAI Models" folder
      preset:
        endpoint: "openAI"
        model: "gpt-4-turbo-preview"
        temperature: 0.7
      description: "Latest GPT-4 Turbo model with enhanced capabilities"
      iconURL: "openAI"
      order: 1

    - name: "gpt4_vision"
      label: "GPT-4 Vision"
      folder: "OpenAI Models" # Same folder as above
      preset:
        endpoint: "openAI"
        model: "gpt-4-vision-preview"
        temperature: 0.7
      description: "GPT-4 with vision capabilities"
      iconURL: "openAI"
      order: 2

    - name: "gpt35_turbo"
      label: "GPT-3.5 Turbo"
      folder: "OpenAI Models"
      preset:
        endpoint: "openAI"
        model: "gpt-3.5-turbo"
        temperature: 0.7
      description: "Fast and efficient model for most tasks"
      iconURL: "openAI"
      order: 3

    # Anthropic Models Category
    - name: "claude3_opus"
      label: "Claude 3 Opus"
      folder: "Anthropic Models" # Different folder
      preset:
        endpoint: "anthropic"
        model: "claude-3-opus-20240229"
        temperature: 0.7
      description: "Most capable Claude model"
      iconURL: "anthropic"
      order: 1

    - name: "claude3_sonnet"
      label: "Claude 3 Sonnet"
      folder: "Anthropic Models"
      preset:
        endpoint: "anthropic"
        model: "claude-3-sonnet-20240229"
        temperature: 0.7
      description: "Balanced performance and cost"
      iconURL: "anthropic"
      order: 2

    - name: "claude3_haiku"
      label: "Claude 3 Haiku"
      folder: "Anthropic Models"
      preset:
        endpoint: "anthropic"
        model: "claude-3-haiku-20240307"
        temperature: 0.7
      description: "Fast and affordable"
      iconURL: "anthropic"
      order: 3

    # Specialized Models Category
    - name: "code_assistant"
      label: "Code Assistant"
      folder: "Specialized Models"
      preset:
        endpoint: "openAI"
        model: "gpt-4-turbo-preview"
        temperature: 0.2
        systemMessage: "You are an expert programmer. Provide clear, well-commented code solutions."
      description: "Optimized for coding tasks"
      iconURL: "openAI"

    - name: "creative_writer"
      label: "Creative Writer"
      folder: "Specialized Models"
      preset:
        endpoint: "anthropic"
        model: "claude-3-opus-20240229"
        temperature: 0.9
        systemMessage: "You are a creative writer. Generate engaging and imaginative content."
      description: "For creative writing tasks"
      iconURL: "anthropic"

    - name: "research_analyst"
      label: "Research Analyst"
      folder: "Specialized Models"
      preset:
        endpoint: "openAI"
        model: "gpt-4-turbo-preview"
        temperature: 0.3
        systemMessage: "You are a research analyst. Provide thorough, fact-based analysis."
      description: "For research and analysis"
      iconURL: "openAI"

    # Models without folders (appear at root level)
    - name: "quick_chat"
      label: "Quick Chat"
      preset:
        endpoint: "openAI"
        model: "gpt-3.5-turbo"
        temperature: 0.7
      description: "Fast general-purpose chat"
      iconURL: "openAI"
      default: true # This is the default model

    - name: "advanced_reasoning"
      label: "Advanced Reasoning"
      preset:
        endpoint: "anthropic"
        model: "claude-3-opus-20240229"
        temperature: 0.5
      description: "For complex reasoning tasks"
      iconURL: "anthropic"

# Interface configuration
interface:
  endpointsMenu: false # Hide endpoints menu when using model specs
  modelSelect: false # Hide traditional model selector
  parameters: false # Hide parameter controls (using presets)
  presets: false # Hide preset selector (using model specs)

# Endpoints configuration (required for the model specs to work)
endpoints:
  openAI:
    apiKey: "${OPENAI_API_KEY}"
    models:
      default: ["gpt-3.5-turbo", "gpt-4-turbo-preview", "gpt-4-vision-preview"]

  anthropic:
    apiKey: "${ANTHROPIC_API_KEY}"
    models:
      default: ["claude-3-opus-20240229", "claude-3-sonnet-20240229", "claude-3-haiku-20240307"]
@@ -19,6 +19,7 @@ export type TModelSpec = {
   showIconInHeader?: boolean;
   iconURL?: string | EModelEndpoint; // Allow using project-included icons
   authType?: AuthType;
+  folder?: string; // Optional folder/category for grouping model specs
 };
 
 export const tModelSpecSchema = z.object({

@@ -32,6 +33,7 @@ export const tModelSpecSchema = z.object({
   showIconInHeader: z.boolean().optional(),
   iconURL: z.union([z.string(), eModelEndpointSchema]).optional(),
   authType: authTypeSchema.optional(),
+  folder: z.string().optional(),
 });
 
 export const specsConfigSchema = z.object({