By default, this installs the oneflow backend. You can add other backends if needed; please refer to the OneDiff GitHub repository [here](https://github.com/siliconflow/onediff?tab=readme-ov-file#install-a-compiler-backend).
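For example, if you also want the nexfort backend, it is typically added with a single pip command; treat the exact package name below as an assumption and check the linked README for the command that matches your environment:

```shell
# Sketch only: add the nexfort compiler backend alongside the default oneflow backend.
# The exact, up-to-date install command is listed in the OneDiff README linked above.
pip install -U nexfort
```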
</details>
<details close>
<summary> Option 2: Installing via GitHub </summary>

First, install and set up [ComfyUI](https://github.com/comfyanonymous/ComfyUI), and then follow these steps:

4. **Install a Compiler Backend**

   For instructions on installing a compiler backend for OneDiff, please refer to the OneDiff GitHub repository [here](https://github.com/siliconflow/onediff?tab=readme-ov-file#install-a-compiler-backend).
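As a rough sketch, installing the default OneFlow backend usually comes down to a single pip command; the wheel index URL depends on your CUDA version and is listed in the linked README, so the placeholder below is an assumption you need to fill in:

```shell
# Sketch only: install the OneFlow compiler backend with pip.
# Replace the placeholder with the wheel index URL for your CUDA version
# from the OneDiff README linked above.
pip install --pre oneflow -f <oneflow-wheel-index-url-for-your-cuda-version>
```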
</details>

### Basic Node Usage
**Note**: All the images in this section can be loaded directly into ComfyUI to get the full workflow.
#### Load Checkpoint - OneDiff
"Load Checkpoint - OneDiff" is the optimized version of "LoadCheckpoint". It accelerates inference without requiring any changes to your workflow and keeps the same inputs and outputs as the original node.

Set `vae_speedup` to `enable` on the "Load Checkpoint - OneDiff" node to enable VAE acceleration.

### Quantization
**Note**: The quantization feature is only supported by **OneDiff Enterprise**.

OneDiff Enterprise offers a quantization method that reduces memory usage and increases speed while preserving output quality.

If you possess a OneDiff Enterprise license key, you can access instructions on OneDiff quantization and related models by visiting [Online Quantization for ComfyUI](./docs/ComfyUI_Online_Quantization.md). Alternatively, you can [contact](#contact) us to inquire about purchasing the OneDiff Enterprise license.

### Compiler Cache
#### Avoid compilation time for online serving
The `"Load Checkpoint - OneDiff"` node automatically caches compiled results locally in the default directory `ComfyUI/input/graphs`. To save graphs in a custom directory, use `export COMFYUI_ONEDIFF_SAVE_GRAPH_DIR="/path/to/save/graphs"`.
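For example, set the variable in the shell session (or service environment) that launches ComfyUI:

```shell
# Set custom directory for saving graphs in ComfyUI with OneFlow backend
export COMFYUI_ONEDIFF_SAVE_GRAPH_DIR="/path/to/save/graphs"

# Then start ComfyUI as usual from its repository root, e.g.:
python main.py
```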
## OneDiff Community Examples
### IPAdapter
> doc link: [Accelerating cubiq/ComfyUI_IPAdapter_plus with OneDiff](./modules/oneflow/hijack_ipadapter_plus/README.md)
### LoRA
This example demonstrates how to use LoRAs. You can switch LoRA models or adjust their strength without triggering a recompilation.

![Lora Speedup](workflows/model-speedup-lora.png)
### ControlNet
While the example here demonstrates an OpenPose ControlNet, OneDiff seamlessly supports a wide range of ControlNet types, including depth mapping, canny, and more.
### SVD

This example illustrates how OneDiff can be used to enhance the performance of a video model, specifically in the context of text-to-video generation using SVD. Furthermore, it is compatible with [SVD 1.1](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt-1-1).

### DeepCache

DeepCache is an innovative algorithm that substantially boosts the speed of diffusion models, achieving an approximate 2x improvement. When used in conjunction with OneDiff, it further accelerates the diffusion model to approximately 3x.

![Module DeepCache SpeedUp on SVD](workflows/svd-deepcache.png)

[Module DeepCache SpeedUp on LoRA](workflows/lora_deepcache/README.md)
### InstantID
> doc link: [Accelerating cubiq/ComfyUI_InstantID with OneDiff](./modules/oneflow/hijack_comfyui_instantid/README.md)

> doc link: [Accelerating ZHO-ZHO-ZHO/ComfyUI-InstantID with OneDiff](./workflows/ComfyUI_InstantID_OneDiff.md)
## Tutorials
- [Accelerate SD3 with onediff](./docs/sd3/README.md)