How slow/fast is this method of calling generate() #11

@RevanthRameshkumar

Description

I noticed that one of the core parts of the strategy is to call generate() one token at a time, and I was wondering how slow/fast this is compared to using constrained beam search or something similar from HF.
Also curious what the speedup might be of implementing this in C++ vs. via a Python wrapper. ggml-org/llama.cpp#1773

I actually think your approach is better for my use case, since there are many tweaks you can make even at the grammar-sampling level (as evidenced by the discussion in the above PR) ... but I am curious what the performance impact is.
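For context, the token-at-a-time approach in question can be sketched roughly as below. This is a toy illustration, not code from this repo: the vocab, the `ALLOWED_NEXT` grammar table, and the `dummy_logits` stand-in for a model forward pass are all assumptions. The point it shows is where the overhead lives: one full forward pass plus a Python-level masking step per emitted token, versus a single batched generate() call.

```python
import math

# Toy vocabulary and a toy "grammar": which tokens may follow the
# last emitted token. Both are illustrative assumptions.
VOCAB = ["{", "}", '"key"', ":", '"value"', "<eos>"]
ALLOWED_NEXT = {
    None: {"{"},
    "{": {'"key"'},
    '"key"': {":"},
    ":": {'"value"'},
    '"value"': {"}"},
    "}": {"<eos>"},
}

def dummy_logits(prefix):
    # Stand-in for model(prefix): a real model would run a full
    # forward pass here; we return uniform scores over the vocab.
    return [0.0] * len(VOCAB)

def constrained_generate(max_tokens=10):
    out = []
    last = None
    for _ in range(max_tokens):
        logits = dummy_logits(out)  # one forward pass per token
        allowed = ALLOWED_NEXT[last]
        # Mask disallowed tokens to -inf before picking the next token.
        masked = [
            score if tok in allowed else -math.inf
            for tok, score in zip(VOCAB, logits)
        ]
        nxt = VOCAB[masked.index(max(masked))]
        if nxt == "<eos>":
            break
        out.append(nxt)
        last = nxt
    return out

print(constrained_generate())  # → ['{', '"key"', ':', '"value"', '}']
```

With a real model, the per-token Python loop and the lack of batching are the main costs; a C++ implementation (as in the llama.cpp PR above) moves the masking into the sampler and avoids the interpreter overhead on every step.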
