Use token count instead of character count in vertex embeddings response #658

@VisargD

Description

The Vertex embeddings transformer currently reports the character count instead of the token count in usage.total_tokens. To keep the response OpenAI-schema compliant and avoid confusion, it should report the token count.

If a user wants the character count, it's easy to calculate from the length of each input.
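For illustration, here is a minimal sketch of the mapping, assuming the Vertex embeddings response exposes per-input token counts at `predictions[].embeddings.statistics.token_count` (the interface names and helper below are hypothetical, not the transformer's actual code):

```typescript
// Hypothetical shapes, assuming Vertex returns per-input statistics.
interface VertexEmbeddingPrediction {
  embeddings: {
    values: number[];
    statistics?: { token_count: number; truncated: boolean };
  };
}

interface OpenAIUsage {
  prompt_tokens: number;
  total_tokens: number;
}

// Sum the token counts reported by Vertex instead of character lengths.
function toOpenAIUsage(predictions: VertexEmbeddingPrediction[]): OpenAIUsage {
  const totalTokens = predictions.reduce(
    (sum, p) => sum + (p.embeddings.statistics?.token_count ?? 0),
    0
  );
  return { prompt_tokens: totalTokens, total_tokens: totalTokens };
}
```

A caller that still needs character counts can compute `input.reduce((n, s) => n + s.length, 0)` over the original inputs, independent of what usage reports.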
