Commit 1297cc8

🐛 Reverted logic to fix build; potential GGUF-related issues.
Signed-off-by: Jefferson Fialho <[email protected]>
Parent: baeec70

File tree: 1 file changed (+4, -1 lines)


vllm/model_executor/models/llama.py

Lines changed: 4 additions & 1 deletion
@@ -512,7 +512,10 @@ def __init__(
             quant_config=quant_config,
         )
         if config.tie_word_embeddings:
-            self.lm_head = self.model.embed_tokens
+            # Reverted logic to fix build issues;
+            # this may introduce GGUF-related bugs.
+            # self.lm_head = self.model.embed_tokens
+            self.lm_head.weight = self.model.embed_tokens.weight
 
         logit_scale = getattr(config, "logit_scale", 1.0)
         self.logits_processor = LogitsProcessor(self.unpadded_vocab_size,
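The diff above contrasts two ways of tying the output head to the input embeddings: replacing the `lm_head` module with `embed_tokens` outright (the reverted line), versus sharing only the weight attribute (the restored line). A minimal dependency-free sketch of the difference, with illustrative class names that are not vLLM's actual API:

```python
# Hypothetical stand-ins for an embedding layer and a head layer that
# carries its own machinery (e.g. GGUF/quantization handling in vLLM).
class Embedding:
    def __init__(self):
        self.weight = [0.1, 0.2]  # stands in for a parameter tensor


class QuantizedHead:
    """Head layer whose class-specific logic we want to preserve."""
    def __init__(self):
        self.weight = [0.0, 0.0]


embed_tokens = Embedding()
lm_head = QuantizedHead()

# Strategy A (the reverted line): replace the module object itself.
# The head's own class, and anything keyed on it, is discarded.
head_a = embed_tokens
assert not isinstance(head_a, QuantizedHead)

# Strategy B (restored by this commit): share only the weight.
# The head keeps its class; both layers reference one parameter.
lm_head.weight = embed_tokens.weight
assert isinstance(lm_head, QuantizedHead)
assert lm_head.weight is embed_tokens.weight
```

Under this reading, strategy B keeps whatever type-specific behavior the head module has while still tying parameters, which may be why the commit message flags strategy A as a build fix with possible GGUF side effects.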
