
Commit c930a0f

docstring update
Signed-off-by: Brian Dellabetta <[email protected]>
1 parent 889230c commit c930a0f

File tree

1 file changed (+1, −1 lines)
  • src/llmcompressor/modifiers/awq


src/llmcompressor/modifiers/awq/base.py

Lines changed: 1 addition & 1 deletion
@@ -113,7 +113,7 @@ class AWQModifier(Modifier, QuantizationMixin):
     :param offload_device: offload cached args to this device, which reduces memory
         requirements but requires more time to move data between cpu and execution
         device. Defaults to None, so cached args are not offloaded. Consider setting
-        to "cpu" if you are encountering OOM errors
+        to torch.device("cpu") if you are encountering OOM errors
     :param max_chunk_memory: maximum memory to use for each chunk of input activations
     :param duo_scaling: whether to use duo scaling, which uses both input activations
         and weights to determine the scaling factor
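The docstring now recommends passing a `torch.device` object rather than the plain string `"cpu"`. A minimal sketch of what that looks like, assuming the `AWQModifier` import path implied by the file path above (the instantiation itself is shown as a commented hypothetical, since the modifier's other required arguments are not part of this diff):

```python
import torch

# The updated docstring suggests offload_device=torch.device("cpu")
# when encountering OOM errors. torch.device normalizes a device spec:
offload_device = torch.device("cpu")
print(offload_device.type)  # cpu

# Hypothetical usage, assuming the signature documented in
# src/llmcompressor/modifiers/awq/base.py:
# from llmcompressor.modifiers.awq import AWQModifier
# modifier = AWQModifier(offload_device=offload_device)
```

By default `offload_device` is `None`, so cached args stay on the execution device; offloading to CPU trades extra host-device transfer time for lower memory use.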

0 commit comments
