
Commit 2496a2e

SevenSamonMangodadada authored and committed
[LLM] Add tensor parallel for chatglmv2 (PaddlePaddle#9014)
* fix_chatglmv2_8k
1 parent 9eec512 commit 2496a2e

File tree

2 files changed: +467 −65 lines


paddlenlp/experimental/transformers/chatglm_v2/modeling.py

1 addition & 1 deletion

@@ -185,7 +185,7 @@ def __init__(self, config: ChatGLMv2Config, empty_init=True):
         if self.post_layer_norm:
            LayerNormFunc = RMSNorm if config.rmsnorm else nn.LayerNorm
            # Final layer norm before output.
-           self.final_layernorm = LayerNormFunc(config.hidden_size, epsilon=config.layernorm_epsilon)
+           self.final_layernorm = LayerNormFunc(config.hidden_size, epsilon=config.layernorm_epsilon, config=config)

     def get_input_embeddings(self):
         return self.embedding.word_embeddings
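The diff above only adds a `config=config` keyword to the norm constructor; a plausible reason, given the commit title, is that the norm layer needs access to model-wide settings (such as the tensor-parallel degree) from the config. The sketch below illustrates the pattern with a minimal NumPy RMSNorm; the `Config` class, its field names, and the use made of `tensor_parallel_degree` are assumptions for illustration, not PaddleNLP's actual API.

```python
import numpy as np

class Config:
    """Hypothetical stand-in for ChatGLMv2Config: only the fields this sketch reads."""
    def __init__(self, hidden_size=8, layernorm_epsilon=1e-5, tensor_parallel_degree=1):
        self.hidden_size = hidden_size
        self.layernorm_epsilon = layernorm_epsilon
        self.tensor_parallel_degree = tensor_parallel_degree

class RMSNorm:
    """RMSNorm that also accepts the model config, mirroring the added `config=config` argument."""
    def __init__(self, hidden_size, epsilon=1e-5, config=None):
        self.epsilon = epsilon
        self.weight = np.ones(hidden_size)  # learnable scale, initialized to 1
        # Assumed motivation: the config lets the norm read global settings
        # such as the tensor-parallel degree.
        self.tp_degree = config.tensor_parallel_degree if config is not None else 1

    def __call__(self, x):
        # x has shape (..., hidden_size); normalize by the RMS over the last axis.
        variance = np.mean(np.square(x), axis=-1, keepdims=True)
        return x / np.sqrt(variance + self.epsilon) * self.weight

cfg = Config()
norm = RMSNorm(cfg.hidden_size, epsilon=cfg.layernorm_epsilon, config=cfg)
out = norm(np.ones((2, cfg.hidden_size)))  # all-ones input has RMS 1, so output stays ~1
```

Passing the whole config (rather than individual flags) keeps the constructor signature stable as new parallelism options are added, which matches how the one-line diff introduces the dependency.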
