Description
Purpose/Motivation
WebNN lacks an API to import video frames or run custom ML ops. ML use-cases like semantic segmentation or real-time video processing could benefit from avoiding JavaScript copies. WebGPU also lacks NPU support, which means use-cases like Super Resolution cannot be further accelerated by WebNN ML ops. This is a sub-issue of #482.
Proposed Solution: direct buffer sharing
Export WebNN's MLBuffer data type and import it into WebGPU as a standard GPUBuffer, which can be directly bound in a WGSL compute shader. Any conversions and synchronization needed for the shared buffer are performed by the WebNN runtime.
- After the MLBuffer is imported, it is considered "neutered": a validation error is generated if it is used by the WebNN context. The GPUBuffer created upon import could be a copy of the MLBuffer's contents.
- After GPUBuffer.destroy(), the MLBuffer is no longer "neutered" and can be re-used by WebNN.
JS example
wgpuDevice = /* create GPU device from adapter */
mlContext = ML.createContext(wgpuDevice);
mlBuffer = mlContext.createBuffer({/* MLOperandDescriptor members */ usage: MLBufferUsage.WEBGPU_INTEROP});
// Import buffer to WebGPU (name TBD)
// Assumed WGPU usages = GPUBufferUsageFlags.STORAGE | GPUBufferUsageFlags.COPY_SRC.
gpuBuffer = wgpuDevice.importExternalBuffer(mlBuffer);
// ... compute using `gpuBuffer`...
wgpuDevice.queue.submit([commandEncoder.finish()]);
// Export buffer to WebNN
gpuBuffer.destroy();
// Re-use MLBuffer in WebNN
mlContext.dispatch(inputs, {output: mlBuffer});
// Re-import the buffer to use it again.
gpuBuffer = wgpuDevice.importExternalBuffer(mlBuffer);
// ... render using `gpuBuffer`...

FAQ
What happens if the web developer never calls GPUBuffer.destroy()?
If an imported MLBuffer is dropped without being destroyed, the imported GPUBuffer object stays alive until it is also dropped.
Why is there explicit handoff between WebGPU and WebNN?
Explicit handoff ensures the MLBuffer cannot be modified by WebNN once imported (see WebGPU's resource-usage rules: https://www.w3.org/TR/webgpu/#programming-model-resource-usages) and gives the runtime a point at which to perform any necessary copies of its contents.
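The handoff rules above can be sketched as a small state machine. This is a toy model, not the real WebNN/WebGPU API: the class and method names below are invented for illustration, mirroring the proposed importExternalBuffer()/destroy() lifecycle.

```javascript
// Toy model of the proposed handoff rules -- NOT the real API.
class SharedBuffer {
  constructor() {
    this.neutered = false; // true while the buffer is owned by "WebGPU"
  }
  // Models wgpuDevice.importExternalBuffer(mlBuffer).
  importToWebGPU() {
    if (this.neutered) throw new Error("validation error: already imported");
    this.neutered = true;
    // Models the returned GPUBuffer; destroy() hands ownership back to WebNN.
    return { destroy: () => { this.neutered = false; } };
  }
  // Models mlContext.dispatch(...) touching this buffer.
  dispatch() {
    if (this.neutered) throw new Error("validation error: buffer is neutered");
    return "dispatched";
  }
}
```

Dispatching while the buffer is imported throws the modeled validation error; after destroy(), dispatch succeeds again, matching the "un-neuter" behavior described above.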
What are the synchronization guarantees between WebGPU's command queue and MLGraph?
WebNN guarantees mutually exclusive access to an MLBuffer: simultaneous access is never allowed. MLGraph operations that use the MLBuffer do not begin executing until WebNN has waited for completion of any prior work submitted to WebGPU's queues. Similarly, WebGPU queue operations that use the imported buffer must wait until WebNN operations have completed.
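One way to picture this guarantee is as a single timeline that both sides submit to. The sketch below is an illustrative model under invented names (SharedBufferTimeline, submit), not the real API: it serializes work through one promise chain, so a later submission never begins before an earlier one finishes.

```javascript
// Toy model of the mutual-exclusion guarantee -- NOT the real API.
class SharedBufferTimeline {
  constructor() {
    this.tail = Promise.resolve(); // completion of the last submitted work
    this.log = [];                 // records begin/end order for inspection
  }
  // Both "WebGPU" and "WebNN" submissions funnel through here; each one
  // waits for all previously submitted work to finish before it starts.
  submit(owner, work) {
    this.tail = this.tail.then(async () => {
      this.log.push(owner + ":begin");
      await work();
      this.log.push(owner + ":end");
    });
    return this.tail;
  }
}
```

Submitting "webgpu" work followed by "webnn" work yields a strictly non-overlapping begin/end log, which is the observable behavior the spec text above requires.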
Why not import MLBuffer as GPUExternalTexture?
Unlike textures, an MLBuffer cannot be sampled by the GPU, which prevents WebGPU shaders from performing color-space mapping and may require tensor-to-video format conversion. Since an imported MLBuffer's layout matches a GPUBuffer's linear layout, the web developer can simply use it as a GPUBuffer.
Can you interop with mixed WebNN and WebGPU devices?
Currently, this is out of scope for v1: only the same GPUDevice used to create the MLContext can be used. In the future, an explicit importExternalBuffer() could be added to MLContext, or mixed-device interop could be allowed if zero-copy is disallowed.