
Support for runtime voice input to showcase █████ #110

@jbilcke-hf

Description


(Note: this feature is unrelated to the existing AI assistant and to the existing assistant voice-control feature.)

  1. Add a microphone input to the embedded player (either push-to-speak or continuous).
  2. Inject the user recordings (mp3 or wav) as ClapSegmentCategory.DIALOGUE segments into the current timeline loop.
  3. Edit the sample OpenClap file (see Create a demo OpenClap project to showcase █████ #109) to add a ClapSegmentCategory.PHENOMENON segment (optionally tied to a "user" entity), which will process user dialogue by emitting new ClapSegmentCategory.DIALOGUE segments (tied to the avatar entity).

This ClapSegmentCategory.PHENOMENON must use the █████* workflow.
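Step 2 could be sketched as follows. This is a minimal, self-contained illustration: the segment shape and the `injectUserRecording` helper are hypothetical stand-ins, not the actual @aitube/clap types or the player's real API, and the `"user"` entity id is the optional entity mentioned in step 3.

```typescript
// Hypothetical Clap-like segment shape; the real ClapSegment type
// in @aitube/clap has more fields and may differ.
type SegmentCategory = "DIALOGUE" | "PHENOMENON";

interface TimelineSegment {
  id: string;
  category: SegmentCategory;
  startTimeInMs: number;
  endTimeInMs: number;
  assetUrl: string;  // e.g. a blob: or data: URL for the mp3/wav recording
  entityId?: string; // e.g. the optional "user" entity
}

let nextId = 0;

// Wrap a finished user recording as a DIALOGUE segment, anchored at
// the current playhead position inside the timeline loop.
function injectUserRecording(
  timeline: TimelineSegment[],
  playheadMs: number,
  recordingDurationMs: number,
  assetUrl: string
): TimelineSegment {
  const segment: TimelineSegment = {
    id: `user-dialogue-${nextId++}`,
    category: "DIALOGUE",
    startTimeInMs: playheadMs,
    endTimeInMs: playheadMs + recordingDurationMs,
    assetUrl,
    entityId: "user", // hypothetical "user" entity id
  };
  timeline.push(segment);
  return segment;
}
```

In the real player, `assetUrl` would come from the microphone capture (e.g. a `MediaRecorder` blob), and the playhead position from the player's clock.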

  4. Update the Simulator service to process ClapSegmentCategory.PHENOMENON segments in a loop.

Note: I don't think we need to use ClapSegmentCategory.EVENT here, since our events already nicely fit the concept of ClapSegmentCategory.DIALOGUE
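The Simulator loop in step 4 could look roughly like this. The types and the `simulatorTick` / `runWorkflow` names are illustrative assumptions (not the actual Simulator service API), and `runWorkflow` stands in for the redacted █████ workflow that turns user dialogue into avatar dialogue.

```typescript
type Category = "DIALOGUE" | "PHENOMENON";

// Hypothetical segment shape for this sketch.
interface Segment {
  id: string;
  category: Category;
  startTimeInMs: number;
  endTimeInMs: number;
  entityId?: string;       // "user", an avatar entity id, etc.
  processedIds?: string[]; // user dialogue ids this phenomenon already handled
}

// One Simulator tick: for each PHENOMENON segment, find the user
// DIALOGUE segments it overlaps and has not handled yet, then emit
// avatar DIALOGUE replies via the workflow (stubbed as `runWorkflow`).
function simulatorTick(
  timeline: Segment[],
  runWorkflow: (userDialogue: Segment) => Segment
): Segment[] {
  const emitted: Segment[] = [];
  for (const phenomenon of timeline.filter(s => s.category === "PHENOMENON")) {
    phenomenon.processedIds ??= [];
    const pending = timeline.filter(
      s =>
        s.category === "DIALOGUE" &&
        s.entityId === "user" &&
        !phenomenon.processedIds!.includes(s.id) &&
        s.startTimeInMs < phenomenon.endTimeInMs &&
        s.endTimeInMs > phenomenon.startTimeInMs
    );
    for (const userSeg of pending) {
      emitted.push(runWorkflow(userSeg));
      phenomenon.processedIds.push(userSeg.id);
    }
  }
  timeline.push(...emitted);
  return emitted;
}
```

Tracking already-processed dialogue ids on the phenomenon keeps the tick idempotent, so the Simulator can call it on every loop iteration without re-emitting replies.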

  • redacted

Metadata

Labels: huggingface (see the "Hugging Face" milestone for more info)
