@ymcui ymcui commented Aug 4, 2023

Description

This PR adds new results for the FP16/Q8/Q4 versions of Chinese-LLaMA-2-7B, including their model sizes, perplexity (PPL), and C-Eval validation-set results.

Background

Many users wonder how the quantization level affects overall performance. Here we present relevant benchmarks so that community members can better understand the quality loss introduced by quantization.
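For context on the reported PPL numbers, here is a minimal sketch (not the repo's actual evaluation script) of how perplexity is derived from per-token log-probabilities — the metric used to compare the FP16, Q8, and Q4 variants:

```python
import math

def perplexity(log_probs):
    """Perplexity = exp of the mean negative log-likelihood per token.

    `log_probs` is a list of natural-log probabilities the model
    assigned to each token of the evaluation text.
    """
    return math.exp(-sum(log_probs) / len(log_probs))

# Toy example: a model assigning uniform probability 0.25 to each of
# 4 tokens yields a perplexity of exactly 4.0.
print(perplexity([math.log(0.25)] * 4))
```

Lower perplexity means the model predicts the evaluation text better, so a quantized model whose PPL stays close to the FP16 baseline has lost little modeling quality.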

Related Issue

None.

@ymcui ymcui requested a review from airaria August 4, 2023 01:33
@ymcui ymcui merged commit 1ae577e into main Aug 4, 2023
@ymcui ymcui deleted the quant-eval branch August 4, 2023 01:43