[CIR] Added missing argument handling to clang toolchain #1781
Open
tommymcm wants to merge 2,638 commits into llvm:main from tommymcm:missing-clang-arguments
+175,217 −5,967
Conversation
…1487) This PR resolves an assertion failure in `CIRGenTypes::isFuncParamTypeConvertible`, which is involved when emitting a vtable entry for a virtual function whose type includes a pointer-to-member-function.
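A minimal, hypothetical C++ sketch of the shape of code that could hit this path (names are mine, not from the PR):
```cpp
// The virtual function's signature contains a pointer-to-member-function
// parameter, so its vtable entry's function type includes a member-pointer
// type, the case described above.
struct Widget;
using MemberFn = void (Widget::*)(int);

struct Base {
  virtual void setHandler(MemberFn fn) {}
  virtual ~Base() = default;
};

// Force vtable emission in this translation unit.
Base b;
```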
Lower neon vabsd_s64
…lvm#1431) Implements `::verify` for the operations `cir.atomic.xchg` and `cir.atomic.cmp_xchg`.

I believe the existing regression tests don't get to the CIR-level type check failure, and I was not able to construct a case that does. Most attempts at reproducing the `cir.atomic.xchg` type check failure were along the lines of:
```cpp
int a;
long long b, c;
__atomic_exchange(&a, &b, &c, memory_order_seq_cst);
```
They never seem to trigger the failure in `::verify`, because they fail earlier in function parameter checking:
```
exmp.cpp:7:27: error: cannot initialize a parameter of type 'int *' with an rvalue of type 'long long *'
    7 |   __atomic_exchange(&a, &b, &c, memory_order_seq_cst);
      |                         ^~
```
Closes llvm#1378.
Lower neon vcaled_f64
This PR adds a new boolean flag to the `cir.load` and `cir.store` operations that distinguishes nontemporal loads and stores. It also adds support for the `__builtin_nontemporal_load` and `__builtin_nontemporal_store` intrinsic functions.
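As a quick illustration (my own sketch, not from the PR), the builtins map onto the new flag roughly like this:
```cpp
// __builtin_nontemporal_load/__builtin_nontemporal_store are the Clang
// builtins the new nontemporal flag on cir.load/cir.store models.
void stream_copy(int *__restrict dst, const int *__restrict src, int n) {
  for (int i = 0; i < n; ++i) {
    int v = __builtin_nontemporal_load(&src[i]); // nontemporal cir.load
    __builtin_nontemporal_store(v, &dst[i]);     // nontemporal cir.store
  }
}
```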
Lower vcales_f32
This PR adds an insertion guard for the try body scope in try-catch. Currently, the following code snippet fails during CodeGen:
```cpp
void foo() {
  int r = 1;
  try {
    ++r;
    return;
  } catch (...) {
  }
}
```
The insertion point doesn't get reset properly, and the cleanup is run on a wrong/deleted block, causing a segmentation fault. I also added a test.
The comments suggested that we should use TableGen to generate the recognizing functions. However, I think templates might be more suitable for generating them, and I can't find any existing TableGen backend that lets us generate arbitrary functions. My design choice is to offer a template for matching standard library functions:
```cpp
// matches std::find with 3 arguments, and raises it into StdFindOp
StdRecognizer<3, StdFindOp, StdFuncsID::Find>
```
I have to use a TableGen'd enum to map names to IDs, since we can't easily pass string literals as template arguments in C++17. This also constrains the design of future `StdXXXOp`s: they must take operands the same way as `StdFindOp`, where the first operand is the original function and the rest are the function arguments. I'm not sure this approach is the best one. Please tell me if you have concerns or any alternative ideas.
…was set explicitly (llvm#1482) This is backported from a change made in llvm/llvm-project#131181.

Co-authored-by: Morris Hafner
…R attribute. (llvm#1467) Started decorating CUDA shadow variables with the `shadow_name` CIR attribute, which will be used for registering the globals.
Lower neon vcaltd_f64
… target was set explicitly" (llvm#1509) Reverts llvm#1482. @mmha this is crashing on macOS with an asserts build:
```
FAIL: Clang :: CIR/Tools/cir-translate/warn-default-triple.cir (472 of 552)
******************** TEST 'Clang :: CIR/Tools/cir-translate/warn-default-triple.cir' FAILED ********************
Exit Code: 134

Command Output (stdout):
--
Assertion failed: (!DataLayoutString.empty() && "Uninitialized DataLayout!"), function getDataLayoutString, file TargetInfo.h, line 1282.
```
Perhaps, besides picking a default, some missing data layout initialization is needed?
Sub-issue of llvm#1192. Adds CIR_ASinOp and support for __builtin_elementwise_asin.
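A hedged sketch (my own example) of the builtin being wired up; it accepts scalars and vectors alike:
```cpp
// __builtin_elementwise_asin applies asin lane-wise to a vector (and also
// works on scalars); the new CIR_ASinOp is its CIR counterpart.
typedef float float4 __attribute__((ext_vector_type(4)));

float4 arcsine(float4 v) { return __builtin_elementwise_asin(v); }
float arcsine(float x) { return __builtin_elementwise_asin(x); }
```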
This un-xfails the 6 files in llvm#1497 related to variadic calls.
Lower vcalts_f32
Sub-issue of llvm#1192. Adds CIR_ATanOp and support for __builtin_elementwise_atan.
Part of llvm#258.
1. Added `AddressPointAttr`
2. Changed all occurrences of `VTableAddrPointOp` to use the attribute
3. Updated tests

Co-authored-by: Sirui Mu
Co-authored-by: Morris Hafner
Co-authored-by: Sharp-Edged
Co-authored-by: Amr Hesham
Co-authored-by: Bruno Cardoso Lopes
Co-authored-by: Letu Ren
Clang relies on `llvm::Intrinsic::getOrInsertDeclaration` to handle functions marked as `ClangBuiltin` in TableGen. That function receives a `CodeGenModule*` so CIR can't use that. We need to re-implement parts of it.
Closes llvm#1367

Co-authored-by: Sirui Mu
Co-authored-by: Amr Hesham
Co-authored-by: Chibuoyim (Wilson) Ogbonna
Co-authored-by: Yue Huang
Co-authored-by: Morris Hafner
Co-authored-by: Sharp-Edged
Co-authored-by: Bruno Cardoso Lopes
Co-authored-by: Letu Ren
The LoweringPrepare pass was generating the wrong condition for loops when lowering the ArrayCtor op, causing only one element in an array of objects to be constructed. This fixes that problem.
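A hypothetical reproducer sketch of the affected pattern (my own example, not from the PR):
```cpp
// Constructing an array of objects goes through the ArrayCtor lowering in
// LoweringPrepare; with the wrong loop condition only arr[0] was constructed.
struct Tracked {
  Tracked();
};

void build() {
  Tracked arr[4]; // the generated loop must run the constructor four times
}
```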
Backport: pass the enum kind directly to the complex cast helpers.
…ent (llvm#1748)

## Overview
Currently, getting the pointer to an element of an array requires a pointer decay and a (possible) pointer stride. A similar pattern for records has been eliminated with the `cir.get_member` operation. This PR provides a similar level of abstraction for arrays with the `get_element` operation.

`get_element` replaces the above pattern with a single operation, which takes a pointer to an array and an index, and produces a pointer to the element at that index. There are many places in CIR analysis and lowering where the `ptr_stride(array_to_ptrdecay(x), i)` pattern is handled as a special case. By subsuming the special-case pattern with an explicit operation, we make these analyses and lowerings more robust.

## Changes
- Adds the `cir.get_element` operation.
- Extends CIRGen to emit `cir.get_element` for array subscript expressions.
- Updates LifetimeCheck to handle the `get_element` operation, subsuming the special-case analysis of the `cir.ptr_stride` operation (the special case was not removed).
- Extends CIR-to-LLVM lowering to lower `cir.get_element` to `llvm.getelementptr`.
- Extends CIR-to-MLIR lowering to lower `cir.get_element` to `memref` operations, matching the existing special-case `cir.ptr_stride` lowering.

## Additional Notes
Currently, 47.6% of `cir.ptr_stride` operations in the llvm-test-suite (SingleSource and MultiSource) can be replaced by `cir.get_element` operations.

### Operator Breakdown (current)
name | count | %
-- | -- | --
cir.load | 825221 | 22.27%
cir.br | 429822 | 11.60%
cir.const | 380381 | 10.26%
cir.cast | 325646 | 8.79%
cir.store | 309586 | 8.35%
cir.get_member | 226895 | 6.12%
cir.get_global | 186851 | 5.04%
cir.ptr_stride | 158094 | 4.27%
cir.call | 144522 | 3.90%
cir.binop | 141142 | 3.81%
cir.alloca | 134346 | 3.63%
cir.brcond | 112864 | 3.05%
cir.cmp | 83532 | 2.25%

### Operator Breakdown (with `get_element`)
name | count | %
-- | -- | --
cir.load | 825221 | 22.74%
cir.br | 429822 | 11.84%
cir.const | 380381 | 10.48%
cir.store | 309586 | 8.53%
cir.cast | 248645 | 6.85%
cir.get_member | 226895 | 6.25%
cir.get_global | 186851 | 5.15%
cir.call | 144522 | 3.98%
cir.binop | 141142 | 3.89%
cir.alloca | 134346 | 3.70%
cir.brcond | 112864 | 3.11%
cir.cmp | 83532 | 2.30%
cir.ptr_stride | 81093 | 2.23%
cir.get_elem | 77001 | 2.12%

Co-authored-by: Andy Kaylor
Co-authored-by: Henrich Lauko
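For context, a simple C++ illustration (my own example) of the source pattern involved:
```cpp
// An ordinary array subscript like this was previously emitted as
// ptr_stride(array_to_ptrdecay(arr), i); with this PR, CIRGen emits a single
// cir.get_element for the subscript instead.
int pick(int i) {
  int arr[8] = {1, 2, 3, 4, 5, 6, 7, 8};
  return arr[i];
}
```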
Implemented `noexcept` expression handling in CIR generation. Added a `noexcept.cpp` test based on cppreference. There was no OG test to base it off of, so I used the example code from [cppreference](https://en.cppreference.com/w/cpp/language/noexcept.html).
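A small illustrative sketch (my own example, not the added test) of the expressions involved:
```cpp
// noexcept(...) queries whether an expression can throw; the result is a
// compile-time bool that CIRGen must now evaluate.
void may_throw();
void no_throw() noexcept;

bool a = noexcept(may_throw()); // false
bool b = noexcept(no_throw());  // true
```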
I think this one is self-explanatory, so I will not write much 🙂 Adding this attribute helps with optimizations like llvm#1653, and with the attribute it is easy to create operations like `cir.std.vector.ctor`/`cir.std.vector.dtor` by just modifying `IdiomRecognizer` a bit. I believe it will also be useful for future optimizations. Finally, I updated quite a number of tests so they now reflect this attribute. Please let me know if you see any issues.
Implemented opportunistic vtable emission, which marks vtables as `available_externally` to enable inlining when optimizations are enabled. Added `GlobalOp` verifier support for the `available_externally` linkage type; all cases are now covered, so I removed the `default` case. Added the `vtable-available-externally` CIRGen test.
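A rough sketch of the situation this targets (my own example; the exact conditions for opportunistic emission are more involved):
```cpp
// B's key function, and therefore its vtable definition, lives in another
// translation unit. Under optimization, this TU may still emit B's vtable
// with available_externally linkage so the optimizer can see the slots and
// potentially devirtualize/inline the call in use().
struct B {
  virtual void f(); // defined in another TU
  virtual ~B();
};

void use(B &b) { b.f(); }
```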
Fix lowering Complex to Complex cast, backported from llvm/llvm-project#149717
…r` (llvm#1753) Implemented CIR code generation for `CXXPseudoDestructorExpr`. Added a pseudo destructor test to `CIR/CodeGen/dtors.cpp`.
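A brief illustration (my own example) of the expression kind:
```cpp
// A pseudo-destructor call: for a scalar type T, p->~T() does nothing at
// run time but is still represented and code-generated as a
// CXXPseudoDestructorExpr.
template <typename T>
void destroy_at(T *p) {
  p->~T();
}

void use(int *p) { destroy_at(p); } // T = int: the pseudo-destructor form
```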
…trdecay` to `get_element` when possible (llvm#1761) Extended the `CIRCanonicalizePass` with new rewrite rules:
- Rewrite `ptr_stride (cast array_to_ptrdecay %base), %index` to `get_element %base[%index]`
- Rewrite `ptr_stride (get_element %base[%index]), %stride` to `get_element %base[%index + %stride]`
- Rewrite `cast array_to_ptrdecay %base, ptr<T>` to `get_element %base[0], ptr<T>` if it is only used by `load %ptr : T`, `store %val : T, %ptr`, or `get_member %ptr[field] : ptr<T> -> U`

Updated the CodeGen tests and extended the CIR-to-CIR test.

Co-authored-by: Henrich Lauko
) `cir::PointerType` was not included in the applicability guard for `cir::VAArg` lowering during `LoweringPrepare`. Since we don't have a generic LLVM lowering for `cir::VAArgOp` (see [more info](llvm#1088 (comment))), this causes an NYI error during lowering that doesn't need to happen. To fix this, I added the missing `cir::PointerType` to the `isa`. There is probably a more comprehensive fix if someone is interested: this check should be removed so that the (possible) error occurs at the actual NYI site.
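A heavily hedged sketch of the shape of the fix; the wrapper function and the other types listed in the guard are illustrative, not the actual LoweringPrepare code:
```cpp
#include "clang/CIR/Dialect/IR/CIRTypes.h"
#include "mlir/IR/Types.h"

// Illustrative guard only: the point is that cir::PointerType now belongs to
// the set of result types for which the target-specific cir.va_arg expansion
// is attempted, instead of falling into the NYI path.
static bool canLowerVAArgResult(mlir::Type resTy) {
  return mlir::isa<cir::IntType, cir::SingleType, cir::DoubleType,
                   cir::PointerType>(resTy);
}
```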
- Replaces `dyn_cast<cir::ConstantOp>(v.getDefiningOp())` and similar with `v.getDefiningOp<cir::ConstantOp>()`
- Adds `getValueAttr`, `getIntValue` and `getBoolValue` methods to `ConstantOp`
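A hedged before/after sketch (the helper function and include paths are my own):
```cpp
#include "clang/CIR/Dialect/IR/CIRDialect.h"
#include "mlir/IR/Value.h"

// Returns the defining cir.const of v, or null if v is not produced by one.
cir::ConstantOp getConstantDefiner(mlir::Value v) {
  // Before: dyn_cast<cir::ConstantOp>(v.getDefiningOp()) asserts when v is a
  // block argument, because getDefiningOp() returns null there.
  // After: Value::getDefiningOp<OpTy>() folds the null check and the cast.
  return v.getDefiningOp<cir::ConstantOp>();
}
```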
…Op()) (NFC) (llvm#1765) This applies similar changes to llvm/llvm-project#150428
…1747) (Copied from my question on Discord) I've been working on the vector-to-bit-mask related intrinsics for X86. I've been stuck specifically on `X86::BI__builtin_ia32_cvtb2mask128` (`_mm256_movepi16_mask`) and its variations with different vector/mask sizes. In this case, we perform a vector comparison of `vector<16xi16>` and bitcast the resulting `vector<16xi1>` directly into a scalar integer mask (i16). I'm able to lower to CIR successfully:
```
...
%5 = cir.vec.cmp(lt, %3, %4) : !cir.vector<!s16i x 16>, !cir.vector<!cir.int<u, 1> x 16>
%6 = cir.cast(bitcast, %5 : !cir.vector<!cir.int<u, 1> x 16>), !u16i
...
```
An issue arises when lowering this to LLVM; the error message I'm getting is:
```
error: integer width of the output type is smaller or equal to the integer width of the input type
```
Looking at the test cases for the LLVM dialect, this is related to the sext/zext instructions. This is the CIR-to-LLVM-dialect lowering for the latter:
```
...
%14 = "llvm.icmp"(%12, %13) <{predicate = 2 : i64}> : (vector<16xi16>, vector<16xi16>) -> vector<16xi1>
%15 = "llvm.sext"(%14) : (vector<16xi1>) -> vector<16xi1>
%16 = "llvm.bitcast"(%15) : (vector<16xi1>) -> i16
...
```
This seems to be the cause:
```
%15 = "llvm.sext"(%14) : (vector<16xi1>) -> vector<16xi1>
```
**The fix**: added a type check; if the result type does not differ from the expected type, we don't insert a `sext` op.
Implemented `CXXDeleteExpr` for concrete and virtual destructors. Global delete, i.e. `::delete`, is still NYI. Added tests for both destructor types.
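A short sketch (my own example) of the two forms covered:
```cpp
// Ordinary `delete` on a concrete type calls the destructor directly;
// on a polymorphic type it dispatches through the virtual destructor.
// Global ::delete is still NYI per the PR.
struct Concrete {
  ~Concrete();
};
struct Poly {
  virtual ~Poly();
};

void destroyConcrete(Concrete *p) { delete p; } // direct destructor call
void destroyPoly(Poly *p) { delete p; }         // virtual destructor dispatch
```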
For these intrinsics there seems to be only one function where the emitted IR diverges: `_mm_load_sbh` loads a single 16-bit bfloat (`__bf16`) value from memory into the lowest element of a 128-bit bfloat vector (`__m128bh`), leaving the remaining lanes unchanged or filled with a passthrough value. It is implemented using a masked load with only the first lane enabled. [Source for intrinsics with similar behaviour](https://gist.github.com/leopck/86799fee6ceb9649d0ebe32c1c6e5b85).

In the CIR lowering of `_mm_load_sbh`, we currently emit the mask operand of the intrinsic (`llvm.masked.load`) as an explicit constant vector:
```llvm
<8 x i1> <true, false, false, false, false, false, false, false>
```
whereas OG lowers it to:
```llvm
<8 x i1> bitcast (<1 x i8> splat (i8 1) to <8 x i1>)
```
I believe both are semantically equal, so: is it acceptable for CIR and OG to diverge in this way for masked loads, or should we aim for parity in how the mask is represented, even if that reduces readability in CIR?
Implement support for `CK_LValueToRValueBitCast` for `ComplexType`.
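For illustration (my own example), `__builtin_bit_cast` on an lvalue is the usual way to produce this cast kind:
```cpp
// Bit-casting a _Complex float lvalue to a plain struct goes through
// CK_LValueToRValueBitCast, the cast kind this change supports for
// complex types.
struct FloatPair {
  float re, im;
};

FloatPair split(_Complex float c) {
  return __builtin_bit_cast(FloatPair, c);
}
```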
Adding tests, converted to draft until they're done.
Tests are all added, ready for review.
We were missing argument handling for `-fclangir-idiom-recognizer` on its own, and were not propagating `-fclangir-lib-opt` for the clang toolchain.