
gianlourbano/demucs-onnx


Demucs inference on ONNX Runtime

This repository contains a simple example of how to run inference on the HTDemucs model using ONNX Runtime Web with the WebGPU execution provider. The model is first converted to ONNX format with the new dynamo-based torch.export (as described here) and then loaded into ONNX Runtime for inference. htdemucs_optimized.onnx is produced by this script.

Usage

Run `npm run dev` to start the Vite development server, then run the model and check the browser console for errors.
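For reference, a minimal sketch of what browser-side inference with onnxruntime-web on WebGPU looks like. The model path, input/output tensor names, and shapes below are assumptions, not taken from this repository's code — check `session.inputNames` and `session.outputNames` against the actual `htdemucs_optimized.onnx` graph before relying on them.

```javascript
// Interleaved stereo [L, R, L, R, ...] -> planar [L..., R...], matching
// the (batch, channels, samples) layout a stereo Demucs input expects.
function interleavedToPlanar(interleaved) {
  const n = interleaved.length / 2;
  const planar = new Float32Array(interleaved.length);
  for (let i = 0; i < n; i++) {
    planar[i] = interleaved[2 * i];         // left channel
    planar[n + i] = interleaved[2 * i + 1]; // right channel
  }
  return planar;
}

async function separate(interleaved) {
  // Dynamic import keeps this sketch self-contained; in a Vite app a
  // top-level `import * as ort from 'onnxruntime-web/webgpu'` is typical.
  const ort = await import('onnxruntime-web/webgpu');
  const session = await ort.InferenceSession.create('htdemucs_optimized.onnx', {
    executionProviders: ['webgpu'],
  });
  const samples = interleaved.length / 2;
  const mix = new ort.Tensor('float32', interleavedToPlanar(interleaved), [1, 2, samples]);
  // Feed the mix under the graph's first input name (an assumption here).
  const outputs = await session.run({ [session.inputNames[0]]: mix });
  // HTDemucs typically emits 4 stems (drums, bass, other, vocals),
  // e.g. a tensor of shape [1, 4, 2, samples].
  return outputs[session.outputNames[0]];
}
```

Audio from a decoded `AudioBuffer` can be fed in after interleaving or by copying each channel directly into the planar buffer.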
