Fp16 windows (depends on #3429) #3458
Closed
Conversation
Moved quantize functions from GGML to nntrainer Signed-off-by: p-debski2 <[email protected]>
Added q8_0 row quantization, and multiple dequantization functions Signed-off-by: p-debski2 <[email protected]>
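As an aside, a minimal sketch of what q8_0 row quantization typically looks like; the block layout and names follow the common GGML convention and the fp32_to_fp16 helper is assumed, so this is an illustration rather than the exact code moved in this PR:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// One q8_0 block: a per-block FP16 scale (stored as uint16_t) plus 32 int8 quants.
constexpr int QK8_0 = 32;
struct block_q8_0 {
  uint16_t d;       // scale, FP16 bits
  int8_t qs[QK8_0]; // quantized values
};

uint16_t fp32_to_fp16(float f); // assumed helper from the shared impl header

// Reference (non-SIMD) quantization of one row of k floats into q8_0 blocks.
void quantize_row_q8_0_ref(const float *x, block_q8_0 *y, int64_t k) {
  const int64_t nb = k / QK8_0; // k is expected to be a multiple of QK8_0
  for (int64_t i = 0; i < nb; ++i) {
    float amax = 0.0f; // absolute max within the block
    for (int j = 0; j < QK8_0; ++j)
      amax = std::max(amax, std::fabs(x[i * QK8_0 + j]));

    const float d = amax / 127.0f; // one scale per block
    const float id = d != 0.0f ? 1.0f / d : 0.0f;

    y[i].d = fp32_to_fp16(d);
    for (int j = 0; j < QK8_0; ++j)
      y[i].qs[j] = static_cast<int8_t>(std::round(x[i * QK8_0 + j] * id));
  }
}
```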
Added includes for intrinsic functions Signed-off-by: p-debski2 <[email protected]>
Moved nntr_ggml_impl to a separate directory and added a shared header with structure definitions Signed-off-by: p-debski2 <[email protected]>
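For readers unfamiliar with the layout, a rough illustration of how such a shared nntr_ggml_impl header might be organized; the file name, macro names, and the selection of block types below are assumptions, not the actual contents:

```cpp
// nntr_ggml_impl_common.h -- hypothetical name for the shared header.
// The redefinition guard mentioned a few commits below would wrap definitions like these.
#ifndef NNTR_GGML_IMPL_COMMON_H
#define NNTR_GGML_IMPL_COMMON_H

#include <cstdint>

// Block size shared by the q4_0 quantization / dequantization routines.
#define QK4_0 32

// q4_0 block: FP16 scale (stored as uint16_t) + 32 4-bit quants, two per byte.
typedef struct {
  uint16_t d;
  uint8_t qs[QK4_0 / 2];
} block_q4_0;

// ... further block types (q8_0, q4_0x8, ...) and helper macros would follow.

#endif // NNTR_GGML_IMPL_COMMON_H
```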
Added some util macros and functions to fix building on Linux Signed-off-by: p-debski2 <[email protected]>
Moved more ggml functions and removed the includes from interface files so that they use only the nntr_ggml implementation Signed-off-by: p-debski2 <[email protected]>
Added more GGML type definitions for AVX operations Signed-off-by: p-debski2 <[email protected]>
Removed ggml includes, renamed some functions, moved some declarations to common nntr_ggml_impl headers Signed-off-by: p-debski2 <[email protected]>
Added a define guard to stop function redefinition Signed-off-by: p-debski2 <[email protected]>
Added some comments for generating docs on helper GGML structures Signed-off-by: p-debski2 <[email protected]>
Removed ggml from meson & tizen specification, leaving the submodule for now Signed-off-by: p-debski2 <[email protected]>
Deleted ggml as a dependency from the project Signed-off-by: p-debski2 <[email protected]>
This commit introduces mapping logic to the RMS Norm OpenCL kernel following kernel execution. Additionally, this patch updates the unit tests for the OpenCL BLAS kernels. **Self-evaluation:** 1. Build test: [X]Passed [ ]Failed [ ]Skipped 2. Run test: [X]Passed [ ]Failed [ ]Skipped Signed-off-by: Donghyeon Jeong <[email protected]>
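As an illustration only: assuming the kernel output lives in SVM memory, "mapping after kernel execution" usually boils down to a blocking clEnqueueSVMMap before the host reads and an unmap afterwards (OpenCL 2.0). The function and variable names below are placeholders, not the kernel wrapper from this commit:

```cpp
#include <CL/cl.h>

// Hypothetical helper: make the kernel's SVM output visible to the host.
// Error handling is trimmed for brevity.
void map_and_read_result(cl_command_queue queue, float *svm_result, size_t bytes) {
  // Blocking map: returns once the device writes are visible on the host.
  cl_int err = clEnqueueSVMMap(queue, CL_TRUE, CL_MAP_READ, svm_result, bytes,
                               0, nullptr, nullptr);
  if (err != CL_SUCCESS)
    return;

  // ... host-side reads of svm_result[] happen here ...

  clEnqueueSVMUnmap(queue, svm_result, 0, nullptr, nullptr);
}
```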
This commit adds helper functions for SVM allocation. This patch also fixes issues where allocated memory is not destroyed. **Self-evaluation:** 1. Build test: [X]Passed [ ]Failed [ ]Skipped 2. Run test: [X]Passed [ ]Failed [ ]Skipped Signed-off-by: Donghyeon Jeong <[email protected]>
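A minimal sketch of what such helpers usually look like with coarse-grained OpenCL 2.0 SVM; the names and flags are assumptions, and the point is that every clSVMAlloc is paired with a clSVMFree so allocations are actually destroyed:

```cpp
#include <CL/cl.h>
#include <cstddef>

// Hypothetical SVM allocation helpers.
void *svm_alloc(cl_context context, size_t bytes, cl_uint alignment = 0) {
  // Coarse-grained SVM buffer readable and writable by both host and device.
  return clSVMAlloc(context, CL_MEM_READ_WRITE, bytes, alignment);
}

void svm_free(cl_context context, void *ptr) {
  if (ptr != nullptr)
    clSVMFree(context, ptr); // must be released on the same context
}
```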
This commit introduces a fallback implementation for the fused unpack q4_0x8 and 16-bit transpose operation, which preprocesses q4_0x8 data. **Self-evaluation:** 1. Build test: [X]Passed [ ]Failed [ ]Skipped 2. Run test: [X]Passed [ ]Failed [ ]Skipped Signed-off-by: Donghyeon Jeong <[email protected]>
This commit adds AVX2 implementation of a fused unpack q4_0x8 and 16-bit transpose operation. **Self-evaluation:** 1. Build test: [X]Passed [ ]Failed [ ]Skipped 2. Run test: [X]Passed [ ]Failed [ ]Skipped Signed-off-by: Donghyeon Jeong <[email protected]>
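To give a feel for the fallback and AVX2 variants described in the two commits above, here is a heavily simplified sketch of just the nibble-splitting step of a q4_0 unpack; the interleaved q4_0x8 layout and the fused 16-bit transpose are not shown, and the names are placeholders:

```cpp
#include <cstdint>
#if defined(__AVX2__)
#include <immintrin.h>
#endif

// Scalar fallback: each packed byte holds two 4-bit quants.
static inline void unpack_nibbles_scalar(const uint8_t *src, uint8_t *lo,
                                         uint8_t *hi, int n_bytes) {
  for (int i = 0; i < n_bytes; ++i) {
    lo[i] = src[i] & 0x0F;        // lower nibble
    hi[i] = (src[i] >> 4) & 0x0F; // upper nibble
  }
}

#if defined(__AVX2__)
// AVX2: split 32 packed bytes (64 nibbles) per call.
static inline void unpack_nibbles_avx2(const uint8_t *src, uint8_t *lo,
                                       uint8_t *hi) {
  const __m256i mask = _mm256_set1_epi8(0x0F);
  const __m256i packed =
    _mm256_loadu_si256(reinterpret_cast<const __m256i *>(src));
  const __m256i low = _mm256_and_si256(packed, mask);
  const __m256i high = _mm256_and_si256(_mm256_srli_epi16(packed, 4), mask);
  _mm256_storeu_si256(reinterpret_cast<__m256i *>(lo), low);
  _mm256_storeu_si256(reinterpret_cast<__m256i *>(hi), high);
}
#endif
```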
This commit removes pre-allocated SVM for transposed data. **Self-evaluation:** 1. Build test: [X]Passed [ ]Failed [ ]Skipped 2. Run test: [X]Passed [ ]Failed [ ]Skipped Signed-off-by: Donghyeon Jeong <[email protected]>
Force-pushed from fc7a4aa to ae63fde (Compare)
This PR makes it possible to perform a Windows build with FP16 enabled; uint16_t is used as an FP16 storage-only type. **Self-evaluation:** 1. Build test: [X]Passed [ ]Failed [ ]Skipped 2. Run test: [X]Passed [ ]Failed [ ]Skipped Signed-off-by: Grzegorz Kisala <[email protected]>
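Since this is the core idea of the PR, a hedged sketch of the storage-only approach: values live in memory as uint16_t and are widened to float for arithmetic, which sidesteps the lack of a native FP16 type on MSVC. Only the decode direction is shown, names are placeholders, and the encode direction (with rounding) is omitted; this is not the PR's actual code:

```cpp
#include <cstdint>
#include <cstring>

using fp16_storage = uint16_t; // FP16 bits, storage only

// Decode IEEE 754 binary16 bits into a float for computation.
static inline float fp16_bits_to_fp32(fp16_storage h) {
  const uint32_t sign = static_cast<uint32_t>(h & 0x8000u) << 16;
  const uint32_t exp = (h >> 10) & 0x1Fu;
  uint32_t mant = h & 0x3FFu;
  uint32_t bits;

  if (exp == 0) {
    if (mant == 0) {
      bits = sign; // signed zero
    } else {
      // Subnormal half: renormalize into a normal float.
      uint32_t shift = 0;
      while (!(mant & 0x400u)) {
        mant <<= 1;
        ++shift;
      }
      mant &= 0x3FFu;
      bits = sign | ((113u - shift) << 23) | (mant << 13);
    }
  } else if (exp == 0x1Fu) {
    bits = sign | 0x7F800000u | (mant << 13); // infinity or NaN
  } else {
    bits = sign | ((exp + 112u) << 23) | (mant << 13); // rebias 15 -> 127
  }

  float out;
  std::memcpy(&out, &bits, sizeof(out));
  return out;
}
```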
Force-pushed from ae63fde to 798727d (Compare)
PR replaced by #3468