
add featherless provider (access to 7900+ open source models) #1138

Merged: 5 commits into Portkey-AI:main on Jun 24, 2025

Conversation

@DarinVerheijke (Contributor) commented on Jun 13, 2025

Description

Adds Featherless.ai as an LLM provider, giving access to more than 7,900 models (and counting) on Hugging Face.

Motivation

Featherless.ai was recently announced as Hugging Face's largest LLM inference provider; this integration gives Portkey-AI users access to most LLMs on Hugging Face.

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update
  • Refactoring (no functional changes)

How Has This Been Tested?

  • Unit Tests
  • Integration Tests
  • Manual Testing

Screenshots (if applicable)

Checklist

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

Related Issues

matter-code-review bot commented Jun 13, 2025

Labels: Code Quality, new feature

Description

Summary by MatterAI

🔄 What Changed

This Pull Request introduces Featherless.ai as a new Large Language Model (LLM) provider.

  • src/globals.ts: Added the FEATHERLESS_AI constant and included it in the VALID_PROVIDERS list (see the sketch after this list).
  • src/providers/featherless-ai/api.ts (New File): Defined featherlessAIAPIConfig to handle Featherless.ai API interactions, including getBaseURL (https://api.featherless.ai/v1), headers (for Authorization: Bearer apiKey), and getEndpoint for chatComplete (/chat/completions) and complete (/completions).
  • src/providers/featherless-ai/index.ts (New File): Configured FeatherlessAIConfig for the new provider, utilizing chatCompleteParams and completeParams with a default model of mistralai/Magistral-Small-2506, integrating the featherlessAIAPIConfig, and setting up response transformers.
  • src/providers/index.ts: Imported FeatherlessAIConfig and added it to the main Providers object, making Featherless.ai accessible as an LLM provider.
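
For reference, a minimal sketch of what the src/globals.ts addition likely looks like. The exact provider slug 'featherless-ai' is an assumption based on the provider's directory name, not copied from the diff:

```typescript
// src/globals.ts (sketch, not the merged diff)
// The provider slug string is assumed; only the constant name and its
// inclusion in VALID_PROVIDERS are described in this PR.
export const FEATHERLESS_AI = 'featherless-ai';

export const VALID_PROVIDERS = [
  // ...existing providers...
  FEATHERLESS_AI,
];
```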

🔍 Impact of the Change

This change significantly expands the application's LLM capabilities by integrating Featherless.ai, which provides access to more than 7,900 open-source models from Hugging Face. This enhances the flexibility and choice of models available for chat and text completion tasks within the application. The integration follows existing architectural patterns for adding new providers, ensuring maintainability and scalability.
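
To illustrate what this looks like from a caller's perspective, here is a hedged TypeScript sketch of a chat completion routed through the gateway to the new provider. The local gateway URL, the x-portkey-provider header convention, and the provider slug are assumptions based on how existing providers are typically selected, not details taken from this PR:

```typescript
// Sketch only: URL, headers, and provider slug are illustrative assumptions.
const res = await fetch('http://localhost:8787/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-portkey-provider': 'featherless-ai', // assumed provider slug
    Authorization: 'Bearer <FEATHERLESS_API_KEY>', // Featherless.ai API key
  },
  body: JSON.stringify({
    model: 'mistralai/Magistral-Small-2506',
    messages: [{ role: 'user', content: 'Say hello in one sentence.' }],
  }),
});
console.log(await res.json());
```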

📁 Total Files Changed

4 files were changed in this pull request:

  • src/globals.ts: 2 additions, 0 deletions
  • src/providers/featherless-ai/api.ts: 19 additions, 0 deletions
  • src/providers/featherless-ai/index.ts: 20 additions, 0 deletions
  • src/providers/index.ts: 2 additions, 0 deletions

🧪 Test Added

No specific tests were mentioned or added in the provided patch. It is highly recommended to add unit and integration tests to verify the correct functionality of the new Featherless.ai provider, including API call correctness, response parsing, and robust error handling for various scenarios.
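
As a starting point, here is a minimal Jest-style unit-test sketch for the new API config; the test-runner choice is an assumption, and the assertions only exercise the pure functions shown later in this review:

```typescript
import { featherlessAIAPIConfig } from './api';

describe('featherlessAIAPIConfig', () => {
  it('returns the Featherless.ai base URL', () => {
    expect(featherlessAIAPIConfig.getBaseURL({} as any)).toBe(
      'https://api.featherless.ai/v1'
    );
  });

  it('builds a Bearer Authorization header from the API key', () => {
    const headers = featherlessAIAPIConfig.headers({
      providerOptions: { apiKey: 'test-key' },
    } as any);
    expect(headers).toEqual({ Authorization: 'Bearer test-key' });
  });

  it('maps chatComplete and complete to the expected endpoints', () => {
    expect(
      featherlessAIAPIConfig.getEndpoint({ fn: 'chatComplete' } as any)
    ).toBe('/chat/completions');
    expect(
      featherlessAIAPIConfig.getEndpoint({ fn: 'complete' } as any)
    ).toBe('/completions');
  });
});
```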

🔒 Security Vulnerabilities

None detected in the provided patch. The API key handling via a Bearer token is a standard and secure practice.

Tip

Quality Recommendations

  1. Add comprehensive unit and integration tests for the new Featherless.ai provider to ensure correct API interaction, response parsing, and error handling. This includes testing different model inputs and expected outputs.

  2. Consider making the default model (mistralai/Magistral-Small-2506) configurable, perhaps via environment variables or provider options, so other Featherless.ai-supported models can be used or tested without code changes (see the sketch after this list).

  3. Implement explicit error handling and logging for API calls within the Featherless.ai provider to gracefully manage network issues, invalid API keys, or service-side errors, providing informative messages to the user or system.
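
A hedged sketch of recommendation 2: reading the default model from a hypothetical FEATHERLESS_DEFAULT_MODEL environment variable and falling back to the current hardcoded value. Neither the variable name nor this pattern is part of the merged PR, and process.env may need to be swapped for the runtime's own configuration mechanism on edge deployments:

```typescript
import { FEATHERLESS_AI } from '../../globals';
import {
  chatCompleteParams,
  completeParams,
  responseTransformers,
} from '../open-ai-base';
import { ProviderConfigs } from '../types';
import { featherlessAIAPIConfig } from './api';

// FEATHERLESS_DEFAULT_MODEL is a hypothetical override, shown only to
// illustrate making the default model configurable without code changes.
const DEFAULT_FEATHERLESS_MODEL =
  process.env.FEATHERLESS_DEFAULT_MODEL ?? 'mistralai/Magistral-Small-2506';

export const FeatherlessAIConfig: ProviderConfigs = {
  chatComplete: chatCompleteParams([], { model: DEFAULT_FEATHERLESS_MODEL }),
  complete: completeParams([], { model: DEFAULT_FEATHERLESS_MODEL }),
  api: featherlessAIAPIConfig,
  responseTransforms: responseTransformers(FEATHERLESS_AI, {
    chatComplete: true,
    complete: true,
  }),
};
```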

Sequence Diagram

sequenceDiagram
    participant App as Application
    participant ProvidersModule as src/providers/index.ts
    participant FeatherlessAIConfigModule as src/providers/featherless-ai/index.ts
    participant FeatherlessAIAPI as src/providers/featherless-ai/api.ts
    participant FeatherlessAIService as Featherless.ai API

    App->>ProvidersModule: Request LLM service (e.g., chatComplete, complete)
    ProvidersModule->>FeatherlessAIConfigModule: Route request to FeatherlessAIConfig
    FeatherlessAIConfigModule->>FeatherlessAIAPI: Call API configuration (featherlessAIAPIConfig)
    FeatherlessAIAPI->>FeatherlessAIAPI: Get Base URL (https://api.featherless.ai/v1)
    FeatherlessAIAPI->>FeatherlessAIAPI: Get Headers (Authorization: Bearer apiKey)
    FeatherlessAIAPI->>FeatherlessAIAPI: Get Endpoint (fn: chatComplete -> /chat/completions; fn: complete -> /completions)
    FeatherlessAIAPI->>FeatherlessAIService: HTTP Request (POST /chat/completions or /completions)
    FeatherlessAIService-->>FeatherlessAIAPI: API Response
    FeatherlessAIAPI-->>FeatherlessAIConfigModule: Processed API Response
    FeatherlessAIConfigModule-->>ProvidersModule: Transformed Response
    ProvidersModule-->>App: Final LLM Response

@matter-code-review bot left a comment

This PR adds support for the Featherless AI provider, giving access to 7900+ open source models. The implementation follows the existing provider pattern and is well-structured. I have a few suggestions to improve the implementation.

        return '';
    }
  },
};

🛠️ Code Refactor

Issue: The API implementation is missing a newline at the end of the file.
Fix: Add a newline at the end of the file to follow best practices.
Impact: Improves code consistency and prevents potential issues with some tools that expect files to end with a newline.

Suggested change: keep the final "};" and add a trailing newline after it.

    chatComplete: true,
    complete: true,
  }),
};

🛠️ Code Refactor

Issue: The file is missing a newline at the end of the file.
Fix: Add a newline at the end of the file to follow best practices.
Impact: Improves code consistency and prevents potential issues with some tools that expect files to end with a newline.

Suggested change: keep the final "};" and add a trailing newline after it.

Comment on lines 1 to 19
import { ProviderAPIConfig } from '../types';

export const featherlessAIAPIConfig: ProviderAPIConfig = {
  getBaseURL: () => 'https://api.featherless.ai/v1',
  headers({ providerOptions }) {
    const { apiKey } = providerOptions;
    return { Authorization: `Bearer ${apiKey}` };
  },
  getEndpoint({ fn }) {
    switch (fn) {
      case 'chatComplete':
        return `/chat/completions`;
      case 'complete':
        return '/completions';
      default:
        return '';
    }
  },
};

💡 Optional Recommendation

Issue: Error handling for API failures is not explicitly implemented.
Fix: Consider adding error handling for API failures.
Impact: Improves robustness of the application by properly handling API errors.

Suggested change (adds an explicit API key check in headers; the rest of the file is unchanged):

import { ProviderAPIConfig } from '../types';

export const featherlessAIAPIConfig: ProviderAPIConfig = {
  getBaseURL: () => 'https://api.featherless.ai/v1',
  headers({ providerOptions }) {
    const { apiKey } = providerOptions;
    if (!apiKey) {
      throw new Error('Featherless AI API key is required');
    }
    return { Authorization: `Bearer ${apiKey}` };
  },
  getEndpoint({ fn }) {
    switch (fn) {
      case 'chatComplete':
        return `/chat/completions`;
      case 'complete':
        return '/completions';
      default:
        return '';
    }
  },
};

Comment on lines 1 to 20
import { FEATHERLESS_AI } from '../../globals';
import {
  chatCompleteParams,
  completeParams,
  responseTransformers,
} from '../open-ai-base';
import { ProviderConfigs } from '../types';
import { featherlessAIAPIConfig } from './api';

export const FeatherlessAIConfig: ProviderConfigs = {
  chatComplete: chatCompleteParams([], {
    model: 'mistralai/Magistral-Small-2506',
  }),
  complete: completeParams([], { model: 'mistralai/Magistral-Small-2506' }),
  api: featherlessAIAPIConfig,
  responseTransforms: responseTransformers(FEATHERLESS_AI, {
    chatComplete: true,
    complete: true,
  }),
};

💡 Optional Recommendation

Issue: The default model is hardcoded without any documentation about available models.
Fix: Add a comment explaining the default model choice and possibly other available models.
Impact: Improves code documentation and helps users understand model options.

Suggested change (documents the default model choice above the config; the rest of the file is unchanged):

import { FEATHERLESS_AI } from '../../globals';
import {
  chatCompleteParams,
  completeParams,
  responseTransformers,
} from '../open-ai-base';
import { ProviderConfigs } from '../types';
import { featherlessAIAPIConfig } from './api';

// Featherless AI provides access to 7900+ open source models
// Default model is set to Magistral-Small-2506, but many others are available
// See https://featherless.ai for the full list of supported models
export const FeatherlessAIConfig: ProviderConfigs = {
  chatComplete: chatCompleteParams([], {
    model: 'mistralai/Magistral-Small-2506',
  }),
  complete: completeParams([], { model: 'mistralai/Magistral-Small-2506' }),
  api: featherlessAIAPIConfig,
  responseTransforms: responseTransformers(FEATHERLESS_AI, {
    chatComplete: true,
    complete: true,
  }),
};

Important

PR Review Skipped

PR review skipped as per the configuration setting. Run a manual review by commenting /matter review

💡 Tips to use Matter AI

Command List

  • /matter summary: Generate AI Summary for the PR
  • /matter review: Generate AI Reviews for the latest commit in the PR
  • /matter review-full: Generate AI Reviews for the complete PR
  • /matter release-notes: Generate AI release-notes for the PR
  • /matter : Chat with your PR with Matter AI Agent
  • /matter remember : Generate AI memories for the PR
  • /matter explain: Get an explanation of the PR
  • /matter help: Show the list of available commands and documentation
  • Need help? Join our Discord server: https://discord.gg/fJU5DvanU3

@narengogi (Collaborator) previously approved these changes on Jun 17, 2025 and left a comment:

LGTM 🚀🚀

@VisargD (Collaborator) commented on Jun 24, 2025

Hi @DarinVerheijke, can you please fix the formatting with npm run format? I will merge the PR after that. Thanks!

@DarinVerheijke (Contributor, Author) replied:

@VisargD done, thank you!

@VisargD merged commit aa7bc42 into Portkey-AI:main on Jun 24, 2025
2 checks passed