This repository was archived by the owner on Jun 4, 2025. It is now read-only.
forked from huggingface/transformers
[Temporary] Add compressed-tensors HFQuantizer implementation #101
Closed
Changes from 4 commits
Commits
936 commits
19e6e80
support qwen2-vl (#32318)
simonJJJ 93e0e1a
CI: add torchvision to the consistency image (#32941)
gante 894d421
Test: add higher `atol` in `test_forward_with_num_logits_to_keep` (#3…
gante 72d4a3f
mps: add `isin_mps_friendly`, a wrapper function for `torch.isin` (#3…
gante a378a54
Add changes for uroman package to handle non-Roman characters (#32404)
nandwalritik 3562772
fix: Fixed `pydantic` required version in dockerfiles to make it comp…
Sai-Suraj-27 26f043b
quickfix documentation (#32566)
molbap 9578c25
Fixup py 38 type hints for mps friendly (#33128)
muellerzr 3bf6dd8
fix: Fixed CodeGenTokenizationTest::test_truncation failing test (#32…
Sai-Suraj-27 7562366
fix: multilingual midel convert to tflite get wrong token (#32079)
Ayaa17 3806faa
disable scheduled daily CI temporarily (#33136)
ydshieh ab0ac3b
CI: fix `efficientnet` pipeline timeout and prevent future similar is…
gante 746e114
Bump torch from 1.13.1 to 2.2.0 in /examples/research_projects/jax-pr…
dependabot[bot] 892d51c
Log additional test metrics with the CometCallback (#33124)
Lothiraldan 6f0ecf1
[docs] add quick usage snippet to Whisper. (#31289)
Vaibhavs10 d1f39c4
Update stateful_callbacks state before saving checkpoint (#32115)
brs-pt 834ec7b
fix Idefics2VisionConfig type annotation (#33103)
chenzizhao 9956c2b
Add a fix for custom code tokenizers in pipelines (#32300)
Rocketknight1 c6b23fd
Llama: make slow tests green 🟢 (#33138)
gante d47a9e8
fix redundant checkpointing in example training scripts (#33131)
eminorhan 7ee4363
update torch req for 4-bit optimizer (#33144)
SunMarc 6101d93
🌐 [i18n-KO] Translated `conversations.md` to Korean (#32468)
newfull5 27903de
Very small change to one of the function parameters (#32548)
alisalamatian1 7591ca5
🚨 Add Blip2ForImageTextRetrieval (#29261)
jpizarrom c35d2cc
Granite language models (#31502)
mayank31398 386931d
fix model name and copyright (#33152)
mayank31398 3bfd3e4
Fix: Jamba batched generation (#32914)
vasqu e0b87b0
[whisper] pass attention_mask to generate_with_fallback() (#33145)
benniekiss f1a385b
[RoBERTa-based] Add support for sdpa (#30510)
hackyon f9ed05d
Fix import paths for test_module (#32888)
rasmi f4c86d0
Zero-shot pipelines: minor doc changes (#33127)
pcuenca 5c84682
Customise the separator used for splicing in DataCollatorWithFlatteni…
beep-bebop 74e19e8
Fix spell mistakes (#33149)
matsuo1234567 3d79dcb
update push CI workflow files for security (#33142)
ydshieh 5c1027b
added quick clarification (#33166)
DuyguA 39bfb2f
pass module to Params4bit.from_prequantized to ensure quant_state (#3…
winglian 92a75ff
Mamba2 conversion script for original models (#32580)
vasqu 5129671
Add a static cache that offloads to the CPU or other device (#32161)
gerbenvv c409cd8
use a single for loop (#33148)
ArthurZucker b127fb8
Pipeline: fix bad generation kwargs docs (#33205)
gante 4987463
Bump torch from 1.13.1 to 2.2.0 in /examples/research_projects/codepa…
dependabot[bot] 9a6956b
Bump torch from 1.13.1 to 2.2.0 in /examples/research_projects/decisi…
dependabot[bot] e259d6d
Add missing quotes in modeling_llava_next_video.py (#33214)
juliendenize fbff276
Add warning for stop string edge case (#33169)
Rocketknight1 38d58a4
Fix local repos with remote code not registering for pipelines (#33100)
Rocketknight1 b017a9e
Refactor CI: more explicit (#30674)
ArthurZucker c79bfc7
Create local Transformers Engine (#33218)
aymeric-roucher db70426
🌐 [i18n-KO] Translated `llm_optims.md` to Korean (#32325)
yijun-lee 51e6526
Fix red amin (#33220)
ArthurZucker ea9e927
run_compressed compatability
746104b
Test fetcher: missing return on filtered tests; don't write empty fil…
gante eb5b968
Generate: throw warning when `return_dict_in_generate` is False but s…
gante 2e3f8f7
Add video text to text docs (#33164)
merveenoyan b9bc691
Add GraniteRMSNorm (#33177)
NielsRogge 1ca9ff5
Add duckduckgo search tool (#32882)
aymeric-roucher 409fcfd
Fix: Suppressed 'use_reentrant=False' warning (#33208)
ankush13r 963ed98
docs: Replace package abbreviations with full name(`bitsandbytes`) in…
rapsealk 2d37085
Bump opencv-python from 4.4.0.42 to 4.8.1.78 in /examples/research_pr…
dependabot[bot] 52a0213
Add assistant prefill for chat templates and TextGenerationPipeline (…
Rocketknight1 97c0f45
Generate: fix assistant in different device (#33257)
gante 9ea1eac
remove to restriction for 4-bit model (#33122)
SunMarc 2895224
Fixed typo repeated word in DETR docs (#33250)
sergiopaniego cff06aa
Fix: use `torch.from_numpy()` to create tensors for np.ndarrays (#33201)
shinyano 5663026
remove torch input dependant control flow (#33245)
ArthurZucker 7ed9789
Fix: `num_logits_to_keep` in composite models (#33168)
zucchini-nlp 979f477
Fix Bark saving (#33266)
ylacombe edeca43
🚨 Support dequantization for most GGML types (#32625)
Isotr0py 0d86727
Update chat template docs to remove Blenderbot (#33254)
Rocketknight1 e969d88
Bump opencv-python from 4.4.0.42 to 4.8.1.78 in /examples/research_pr…
dependabot[bot] 03c12d0
Add sdpa support for Albert (#32092)
OmarManzoor 6b7d64a
Only disallow DeepSpeed Zero-3 for auto bs finder (#31731)
muellerzr 979d24e
fix the parallel number of CI nodes when it is smaller than number of…
ArthurZucker d6534f9
Repo checks: check documented methods exist (#32320)
gante ecd61c6
Add OLMoE (#32406)
Muennighoff 1c3ad5c
revert changes not needed for compression
aa1a4f9
no longer need unexpected keys fn
81a13dd
unexpected keys not needed either
35f72eb
Fix: multigpu training (#33271)
zucchini-nlp ebbe8d8
Cache docs: update (#32929)
zucchini-nlp d750b50
Config: unified logic to retrieve text config (#33219)
gante d703477
[fix] LlavaNextProcessor '_get_unpadded_features' method (#33263)
laurentd-lunit 178cb6b
wait 15m before SSH into runner workflow stops (#33300)
ydshieh 122ded0
Bugfix/alexsherstinsky/fix none check for attention factor in rope sc…
alexsherstinsky 5731dc8
Bump cryptography from 42.0.0 to 43.0.1 in /examples/research_project…
dependabot[bot] d2dcff9
[InstructBLIP] qformer_tokenizer is required input (#33222)
amyeroberts 2cb543d
Multi agents with manager (#32687)
aymeric-roucher 01c8c6c
Add a warning to the chat template docs about the tool_calls format (…
Rocketknight1 cfd92c6
Add new documentation page for advanced agent usage (#33265)
aymeric-roucher a1faf22
[BUG] fix upper nltk version (#33301)
ylacombe b390998
Fix excessive CPU memory usage with FSDP and cpu_ram_efficient_loadin…
matthewdouglas 9230d78
Add validate images and text inputs order util for processors and tes…
yonigozlan 43df47d
Llava Onevision: add model (#32673)
zucchini-nlp 47b0964
Fix: Fix `FalconMamba` training issues due to incompatible kernels (#…
younesbelkada 03164ba
Add paper link (#33305)
Muennighoff c6d2848
🚨 Fix `torch.jit.trace` for `interpolate_pos_encoding` in all vision …
xenova 132e875
Update SECURITY.md (#32680)
Michellehbn 5d11de4
Add Qwen2Moe GGUF loading support (#33264)
VladOS95-cyber 21fac7a
simple align qwen2vl kv_seq_len calculation with qwen2 (#33161)
simonJJJ 5792c45
Add a community notebook for fine-tuning with QLoRA, PEFT, and MLflow…
daniellok-db 1759bb9
Fix: StaticCache & `inputs_embeds` (#32932)
zucchini-nlp 2b789f2
Docs: add more cross-references to the KV cache docs (#33323)
gante 51d15eb
[whisper] alternative fix for long-form timestamps (#32131)
sanchit-gandhi 1bd9d1c
fix qwen2vl vision eager-attention (#33213)
simonJJJ e1c2b69
Load dynamic module (remote code) only once if code isn't change (#33…
XuehaiPan 363301f
support loading model without config.json file (#32356)
itazap 3314fe1
Add validation for maximum sequence length in modeling_whisper.py (#3…
AmirMohammadFakhimi 2b18354
add self.head_dim for VisionAttention in Qwen2-VL (#33211)
GeLee-Q 342e800
support 3D attention mask in bert (#32105)
gathierry e48e5f1
Support reading tiktoken tokenizer.model file (#31656)
itazap 2d75700
red-ci on main, fix copies (#33356)
ArthurZucker 6ff6069
RoPE: fix BC warning (#33331)
gante d7b04ea
Fix Prefill docs (#33352)
Rocketknight1 a70286f
Update author for QLorA/PEFT community notebook (#33338)
daniellok-db 66bc4de
add sdpa mbart (#32033)
nbroad1881 60226fd
Fix quantized cache tests (#33351)
zucchini-nlp 62aecd8
schedulefree optimizers (#30079)
winglian 489cbfd
Add visit webpage tool (#33353)
aymeric-roucher eedd21b
Fixed Majority of the Typos in `transformers[en]` Documentation (#33350)
nnilayy 65bb284
Compile compatibilty for decoder-only models (#32617)
zucchini-nlp 0574fa6
Adjust templates (#33384)
LysandreJik f745e7d
Remove repeated prepare_images in processor tests (#33163)
amyeroberts f53d7b9
Apply suggestions from code review
Satrat d8f7073
add to_diff_dict
7f112ca
Fix import of `FalconMambaForCausalLM` (#33381)
younesbelkada f24f084
Import structure & first three model refactors (#31329)
LysandreJik 7d2d6ce
VLM: fixes after refactor (#32907)
zucchini-nlp 8e8e7d8
fixed Mask2Former image processor segmentation maps handling (#33364)
maciej-adamiak 96429e7
Add support for GGUF Phi-3 (#31844)
a8nova 6ed2b10
Bug Fix: Update hub.py to fix NoneType error (#33315)
rishiraj dfee4f2
Update WhisperTokenizer Doc: Timestamps and Previous Tokens Behaviour…
bruno-hays f38590d
Make StaticCache configurable at model construct time (#32830)
guangy10 781bbc4
use diff internal model in tests (#33387)
itazap e719b65
Fix `FbgemmFp8Linear` not preserving tensor shape (#33239)
vgel 91f19a5
Fix failing windows (#33436)
LysandreJik 42babe8
Remove deprecated task in load_dataset (#33433)
albertvillanova 7a51cbc
Dynamic number of speculative tokens in order to accelerate speculati…
jmamou ecf7024
Fix: Cast prefetch_bucket_size to integer for deepspeed >= 0.15 (#33402)
kiddj c403441
[docs] add the missing huggingface hub username (#33431)
faaany cea9ec0
[docs] add the missing tokenizer when pushing models to huggingface h…
faaany c4fbf70
update docs and expand testing
d7a553b
Update stale.yml (#33434)
LysandreJik e0ff432
Docs - update formatting of llama3 model card (#33438)
MichaelCurrin 516ee6a
Fix incomplete sentence in `Zero-shot object detection` documentation…
sergiopaniego 8ed6352
Fix flax whisper tokenizer bug (#33151)
hannan72 c8ea675
Clean-up deprecated code (#33446)
zucchini-nlp d71d6cb
Fix default revision for pipelines (#33395)
ankane 5334b61
Revive AMD scheduled CI (#33448)
ydshieh e688996
Allow send `SSH into runner` info. to DM (#33346)
ydshieh 8f8af0f
Correct Whisper's beam search scores computation (#32336)
ylacombe 2f611d3
Qwen2-VL: clean-up and add more tests (#33354)
zucchini-nlp 5c6257d
[whisper] Clarify error message when setting max_new_tokens (#33324)
benniekiss a05ce55
[docs] refine the doc for `train with a script` (#33423)
faaany 9c4639b
Return image hidden states (#33426)
zucchini-nlp 1027a53
add a callback hook right before the optimizer step (#33444)
winglian 4b0418d
Enable `padding_side` as call time kwargs (#33385)
zucchini-nlp 7a56598
Mitigate a conflict when using sentencepiece (#33327)
tengomucho dfd3115
[Phi-3] Bug on stale kv cache (#33129)
garg-amit 6cc4dfe
Fix the initialization of the cache when we have multi gpu (#33303)
SunMarc 0963229
Enable finetuning with torchao quantized model (#33361)
SunMarc e39b6c1
Corrected `Agents and tools` documentation links typos (#33471)
sergiopaniego 7bb1c99
chore: fix typo in comment in tokenization_utils_base.py (#33466)
DavidLemayian 8bd2b1e
Add support for Pixtral (#33449)
ArthurZucker 95e816f
Cohere: update RoPE structure (#33408)
gante 5ce0a11
Fix SSH workflow (#33451)
ydshieh ce62a41
Add keypoint-detection task guide (#33274)
merveenoyan 2f62146
Uniformize kwargs for LLaVa processor and update docs (#32858)
yonigozlan c7a91f5
`Agents, supercharged - Multi-agents, External tools, and more` docs …
sergiopaniego c2d0589
[i18n-ar] Add File : `docs/source/ar/_toctree.yml` (#32696)
AhmedAlmaghz 98adf24
[Whisper test] Fix some failing tests (#33450)
ylacombe 4ba531c
Fix: Qwen2-VL training on video datasets (#33307)
hiyouga ba1f1dc
Updated Trainer's liger-kernel integration to call correct patching A…
shimizust 9f196ef
Replace `accelerator.use_fp16` in examples (#33513)
hlky 18e1a9c
Fix parametrization-based weight norm (#33275)
ylacombe bcf8946
Fix number of patch check for different vision feature select strateg…
insujang 642256d
chore: migrate coverage cfg to pyproject.toml (#32650)
SauravMaheshkar 74026b4
idefics2 enable_input_require_grads not aligned with disable_input_re…
sywangyi ac5a055
Update chameleon.md — fix runtime type error (#33494)
maxwbuckley 7635484
Add explicit example for RAG chat templating (#33503)
A-Duss 3476c19
CI Build image - move runners (#33530)
glegendre01 46c2757
fix to jamba config, asserting attention and expert offset (#33316)
ErezSC42 c29a869
Fix missing `sequences_scores` in the Whisper beam search output (#3…
Nik-Kras d8500cd
Uniformize kwargs for Pixtral processor (#33521)
yonigozlan 6c051b4
Add revision to trainer push_to_hub (#33482)
teamclouday 1992a88
Merge remote-tracking branch 'upstream/main' into compressed-tensors-…
454a0f2
fix patch_attention_mask incorrect setting which leads to the differe…
sywangyi fee8651
Support LLaVa-OV-Chat (#33532)
zucchini-nlp e6d9f39
Decorator for easier tool building (#33439)
aymeric-roucher 52e22cb
Fix for slow the bug tokenizer adding spaces to single id decodes (#3…
DuyguA db72894
Chat template: save and load correctly for processors (#33462)
zucchini-nlp 298a638
Update _toctree.yml with compressed-tensors
Satrat 9f2b8cc
Fix missing head_dim in llama config from gguf model (#33526)
Isotr0py 5427eaa
[i18n-ur] Added README_ur.md file (#33461)
akkefa 4f1e9ba
fix the wandb logging issue (#33464)
ZIYU-DEEP f883827
Fix tests in ASR pipeline (#33545)
ylacombe fc83a4d
Added support for bfloat16 to zero-shot classification pipeline (#33554)
umarbutler 7542fac
Pipeline: no side-effects on `model.config` and `model.generation_con…
gante 8efc06e
Return attention mask in ASR pipeline to avoid warnings (#33509)
Rocketknight1 9db963a
enforce original size to be a list (#33564)
dom-dziela 7b1ce63
Improve compiled RT-DETR inference speed (#33412)
yonigozlan 6019f3f
Fix bnb dequantization (#33546)
SunMarc 5af7d41
Codec integration (#33565)
ylacombe e40bb48
Load and save video-processor from separate folder (#33562)
zucchini-nlp d7975a5
VLMs: enable generation tests (#33533)
zucchini-nlp f3b3810
rag: fix CI (#33578)
gante 80b774e
Cache: don't show warning in forward passes when `past_key_values` is…
gante 4f0246e
fix tests with main revision and read token (#33560)
molbap 413008c
add uniform processors for altclip + chinese_clip (#31198)
molbap d9d59e7
Generate: check that `attention_mask` is 2D (#33575)
gante 162056a
change sequence_bias type of SequenceBiasLogitsProcessor to list, add…
VladOS95-cyber b50ff59
[`Mamba2`] Move dt calculations to kernel (#33520)
vasqu 52920b5
Cache: don't throw warnings on `gemma2` when instantiating a new cach…
gante f111d5b
Uniformize kwargs for Paligemma processor and update docs (#33571)
yonigozlan b87755a
[tests] skip tests for xpu (#33553)
faaany 4d8908d
[tests] enable GemmaIntegrationTest on XPU (#33555)
faaany 0c718f1
Fix Llama 3 TikToken conversion (#33538)
pcuenca bdf4649
Docs: add the ability to manually trigger jobs (#33598)
gante 6dc3646
Fix CircleCI nightly run (#33558)
ydshieh 31650a5
Allow CI could be run on private forked repositories (e.g. new model …
ydshieh 8bd1f2f
[tests] make more tests device-agnostic (#33580)
faaany ec1424c
Update modeling_mamba2.py, fix pad size (#32599)
klae01 266d0a6
Generate: remove flakyness in `test_generate_from_inputs_embeds_decod…
gante f9b4409
Remove unnecessary CPM model tests (#33621)
amyeroberts 653eb40
Add sdpa for BioGpt (#33592)
OmarManzoor 2fdb5e7
VLM generate: tests can't generate image/video tokens (#33623)
gante 31caf0b
Fix missing test in `torch_job` (#33593)
ydshieh c0c6815
Add support for args to ProcessorMixin for backward compatibility (#3…
yonigozlan dc8b6ea
Fix contrastive search to correctly handle input with padding (#33507)
ducviet00 77c5d59
Generate: assistant should sample when the main model samples (#33534)
gante 077b552
Fix some missing tests in circleci (#33559)
ydshieh 75c878d
Update daily ci to use new cluster (#33627)
ydshieh e9356a4
Fix qwen2vl float16 inference bug (#33312)
GeLee-Q 7b2b536
Fix typos (#33583)
litianjian 49a0bef
enable low-precision pipeline (#31625)
jiqing-feng e472e07
Granitemoe (#33207)
mayank31398 e71bf70
Pixtral update example checkpoint (#33633)
amyeroberts 78b2929
Sdpa dino v2 (#33403)
avishaiElmakies 3cb4415
Update src/transformers/utils/quantization_config.py
Satrat 9eb9385
Clean up Unpack imports (#33631)
molbap b7c381f
Fix DPT /Dinov2 sdpa regression on main (#33660)
molbap 6d02968
handle dependency errors in check_imports (#33622)
molbap 214db9e
add back self.max_position_embeddings = config.max_position_embedding…
chengchengpei be9cf07
Fix Llava conversion for LlavaQwen2ForCausalLM with Clip vision tower…
Isotr0py 1456120
Uniformize kwargs for Udop processor and update docs (#33628)
yonigozlan e15687f
Generation: deprecate `PreTrainedModel` inheriting from `GenerationMi…
gante 11c27dd
Enable BNB multi-backend support (#31098)
jiqing-feng 01aec8c
Fix error string after refactoring into get_chat_template (#33652)
tibor-reiss 75b7485
uniformize git processor (#33668)
yonigozlan a943157
Merge branch 'main' into compressed-tensors-quantizer
dsikka 64f475a
update doc
dsikka fabe8a3
add note about saving a loaded model
dsikka
src/transformers/quantizers/quantizer_compressed_tensors.py (71 additions, 0 deletions)
@@ -0,0 +1,71 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ..utils import is_torch_available, logging
from ..utils.quantization_config import QuantizationConfigMixin
from .base import HfQuantizer


if is_torch_available():
    import torch

logger = logging.get_logger(__name__)


class CompressedTensorsHfQuantizer(HfQuantizer):
    """
    Quantizer for the compressed_tensors package. Loads and restores models to a
    quantized state with compressed_tensors.
    """

    requires_calibration = False
    # requires_parameters_quantization = True
    required_packages = ["compressed_tensors"]

    def __init__(self, quantization_config: QuantizationConfigMixin, **kwargs):
        super().__init__(quantization_config, **kwargs)

        from compressed_tensors.compressors import ModelCompressor

        self.compressor = ModelCompressor.from_compression_config(quantization_config)

    def validate_environment(self, *args, **kwargs):
        # check that torch and compressed_tensors are available; let ImportError raise otherwise
        import torch  # noqa
        from compressed_tensors.compressors import ModelCompressor  # noqa

    def update_torch_dtype(self, torch_dtype: "torch.dtype") -> "torch.dtype":
        if torch_dtype is None:
            torch_dtype = torch.float16
        elif torch_dtype != torch.float16:
            logger.info(
                "We suggest setting `torch_dtype=torch.float16` for better efficiency with compressed_tensors."
            )
        return torch_dtype

    def _process_model_before_weight_loading(self, model, **kwargs):
        if self.quantization_config.quantization_config is not None:
            from compressed_tensors.quantization import apply_quantization_config

            apply_quantization_config(model, self.quantization_config.quantization_config)

    def _process_model_after_weight_loading(self, model, resolved_archive_file, **kwargs):
        self.compressor.decompress(model_path=resolved_archive_file, model=model)

    @property
    def is_trainable(self):
        return False

    @property
    def is_serializable(self):
        return False
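To illustrate where these hooks fit, here is a minimal, self-contained sketch of the lifecycle a quantizer like the one above goes through during model loading. The `DummyQuantizer` and `load_model` names are hypothetical stand-ins, not part of transformers: the real flow is driven internally by `from_pretrained`, which calls `update_torch_dtype` when resolving the dtype, `_process_model_before_weight_loading` after instantiating the (empty) model, and `_process_model_after_weight_loading` once the checkpoint weights are in place — which is where `CompressedTensorsHfQuantizer` decompresses the weights.

```python
class DummyQuantizer:
    """Hypothetical stand-in for CompressedTensorsHfQuantizer; records hook calls."""

    def __init__(self):
        self.calls = []

    def update_torch_dtype(self, torch_dtype):
        # The real quantizer defaults to torch.float16 when no dtype is given.
        self.calls.append("update_torch_dtype")
        return "float16" if torch_dtype is None else torch_dtype

    def _process_model_before_weight_loading(self, model, **kwargs):
        # The real quantizer applies the compressed_tensors quantization config here.
        self.calls.append("before_weight_loading")

    def _process_model_after_weight_loading(self, model, resolved_archive_file, **kwargs):
        # The real quantizer decompresses weights from the resolved checkpoint here.
        self.calls.append("after_weight_loading")


def load_model(quantizer, torch_dtype=None):
    """Toy loader mirroring the order of quantizer hooks in from_pretrained."""
    dtype = quantizer.update_torch_dtype(torch_dtype)
    model = object()  # placeholder for the instantiated (empty) model
    quantizer._process_model_before_weight_loading(model)
    # ... checkpoint weights would be loaded into `model` here ...
    quantizer._process_model_after_weight_loading(model, "model.safetensors")
    return model, dtype


q = DummyQuantizer()
_, dtype = load_model(q)
print(dtype)   # float16
print(q.calls)  # ['update_torch_dtype', 'before_weight_loading', 'after_weight_loading']
```

Note that because `is_serializable` is `False`, a model restored this way cannot be re-saved in quantized form by this version of the quantizer — consistent with the later commit adding a note about saving a loaded model.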