I'm writing ComfyUI workflows for consistent character generation with speech: multi-scene, looped generation.
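For context, the outer loop I'm running is nothing exotic, just ComfyUI's standard HTTP API: export the workflow in API format, patch the per-scene inputs, and POST it to `/prompt` once per scene. A minimal sketch of that loop is below; the workflow filename and the node ID `"6"` are placeholders from my own export, not anything standard.

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default local ComfyUI server address

def queue_workflow(workflow: dict) -> dict:
    """Queue one workflow run via ComfyUI's /prompt endpoint."""
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Workflow exported from ComfyUI in "API format" (hypothetical filename).
with open("character_scene_api.json", "r", encoding="utf-8") as f:
    base_workflow = json.load(f)

scenes = [
    "the character greets the viewer",
    "the character explains the plan",
]

for i, scene_prompt in enumerate(scenes):
    wf = json.loads(json.dumps(base_workflow))  # cheap per-scene deep copy
    # "6" is the prompt node ID in my export; yours will differ.
    wf["6"]["inputs"]["text"] = scene_prompt
    print(f"scene {i}:", queue_workflow(wf))
```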
I have tried the non-audio custom nodes, and the quality so far in ComfyUI is shockingly unusable, which may or may not be down to the implementation. I really want to try the audio-driven version, because so far I have been unable to drive character video lipsync through Phantom Wan in a consistent way. But until there are nodes in ComfyUI that properly use the model's capabilities, we will never know, and by the time they do exist we will all have moved on to the next model...
If you really want to drive adoption, you must ensure ComfyUI nodes access...