[CIR] Handle CK_UserDefinedConversion and related casts in emitCastLValue#193611
Conversation
@llvm/pr-subscribers-clangir @llvm/pr-subscribers-clang

Author: adams381

Changes: Made with Cursor

Patch is 22.04 KiB, truncated to 20.00 KiB below; full version: https://github.com/llvm/llvm-project/pull/193611.diff

7 Files Affected:
diff --git a/clang/include/clang/CIR/Dialect/IR/CIROps.td b/clang/include/clang/CIR/Dialect/IR/CIROps.td
index f20ba262d6480..12f1a5e475648 100644
--- a/clang/include/clang/CIR/Dialect/IR/CIROps.td
+++ b/clang/include/clang/CIR/Dialect/IR/CIROps.td
@@ -1055,12 +1055,14 @@ def CIR_ContinueOp : CIR_Op<"continue", [Terminator]> {
// Resume
//===----------------------------------------------------------------------===//
+// Note: cir.resume can appear transiently inside any structured CIR op
+// during FlattenCFG. Its final position (inside cir.try, cir.cleanup.scope,
+// or cir.func) is validated by post-flattening verification.
def CIR_ResumeOp : CIR_Op<"resume", [
DeclareOpInterfaceMethods<RegionBranchTerminatorOpInterface, [
"getMutableSuccessorOperands"
]>,
- Terminator,
- ParentOneOf<["cir::TryOp", "cir::CleanupScopeOp", "cir::FuncOp"]>
+ Terminator
]> {
let summary = "Resumes execution after not catching exceptions";
let description = [{
@@ -1070,11 +1072,11 @@ def CIR_ResumeOp : CIR_Op<"resume", [
`CatchUnwind` region of `cir.try`, where it receives an `!cir.eh_token`
argument representing the in-flight exception.
- During CFG flattening, this operation may appear within a `cir.try`
- operation where it indicates that exception unwinding should continue
- from that point. When the `cir.try` operation is flattened, the resume
- operation will be replaced with a branch to the flattened try operation's
- dispatch block.
+ During CFG flattening, this operation may temporarily appear inside any
+ structured CIR operation (scope, loop, switch, etc.) when an inner cleanup
+ scope is flattened before the enclosing structured op. When the enclosing
+ op is subsequently flattened, the resume will end up in a `cir.try`
+ operation or at function level.
After CFG flattening, this operation appears at the function level (inside
`cir.func`) to indicate that the exception should be re-thrown to the
@@ -2066,7 +2068,7 @@ class CIR_WhileOpBase<string mnemonic> : CIR_LoopOpBase<mnemonic> {
}
def CIR_WhileOp : CIR_WhileOpBase<"while"> {
- let regions = (region SizedRegion<1>:$cond, MinSizedRegion<1>:$body);
+ let regions = (region MinSizedRegion<1>:$cond, MinSizedRegion<1>:$body);
let assemblyFormat = "$cond `do` $body attr-dict";
let description = [{
@@ -2093,7 +2095,7 @@ def CIR_WhileOp : CIR_WhileOpBase<"while"> {
}
def CIR_DoWhileOp : CIR_WhileOpBase<"do"> {
- let regions = (region MinSizedRegion<1>:$body, SizedRegion<1>:$cond);
+ let regions = (region MinSizedRegion<1>:$body, MinSizedRegion<1>:$cond);
let assemblyFormat = " $body `while` $cond attr-dict";
let extraClassDeclaration = [{
@@ -2149,9 +2151,9 @@ def CIR_ForOp : CIR_LoopOpBase<"for"> {
```
}];
- let regions = (region SizedRegion<1>:$cond,
+ let regions = (region MinSizedRegion<1>:$cond,
MinSizedRegion<1>:$body,
- SizedRegion<1>:$step);
+ MinSizedRegion<1>:$step);
let assemblyFormat = [{
`:` `cond` $cond
`body` $body
@@ -2865,8 +2867,8 @@ def CIR_GlobalOp : CIR_Op<"global", [
OptionalAttr<StrAttr>:$section
);
- let regions = (region MaxSizedRegion<1>:$ctorRegion,
- MaxSizedRegion<1>:$dtorRegion);
+ let regions = (region AnyRegion:$ctorRegion,
+ AnyRegion:$dtorRegion);
let assemblyFormat = [{
($sym_visibility^)?
diff --git a/clang/lib/CIR/CodeGen/CIRGenExpr.cpp b/clang/lib/CIR/CodeGen/CIRGenExpr.cpp
index aeef7c910e62e..92a520e1237d3 100644
--- a/clang/lib/CIR/CodeGen/CIRGenExpr.cpp
+++ b/clang/lib/CIR/CodeGen/CIRGenExpr.cpp
@@ -1508,15 +1508,17 @@ LValue CIRGenFunction::emitCastLValue(const CastExpr *e) {
case CK_ToUnion:
return emitAggExprToLValue(e);
- case CK_NonAtomicToAtomic:
- case CK_AtomicToNonAtomic:
- case CK_ObjCObjectLValueCast:
- case CK_VectorSplat:
case CK_ConstructorConversion:
case CK_UserDefinedConversion:
case CK_CPointerToObjCPointerCast:
case CK_BlockPointerToObjCPointerCast:
- case CK_LValueToRValue: {
+ case CK_LValueToRValue:
+ return emitLValue(e->getSubExpr());
+
+ case CK_NonAtomicToAtomic:
+ case CK_AtomicToNonAtomic:
+ case CK_ObjCObjectLValueCast:
+ case CK_VectorSplat: {
cgm.errorNYI(e->getSourceRange(),
std::string("emitCastLValue for unhandled cast kind: ") +
e->getCastKindName());
diff --git a/clang/lib/CIR/Dialect/Transforms/FlattenCFG.cpp b/clang/lib/CIR/Dialect/Transforms/FlattenCFG.cpp
index 48c47deb2bc0a..b6e1329f02d7c 100644
--- a/clang/lib/CIR/Dialect/Transforms/FlattenCFG.cpp
+++ b/clang/lib/CIR/Dialect/Transforms/FlattenCFG.cpp
@@ -58,6 +58,25 @@ void walkRegionSkipping(
});
}
+/// Check whether a region contains any nested op with regions (i.e. structured
+/// CIR ops that must be flattened before their parent). The greedy pattern
+/// rewriter doesn't guarantee inside-out processing order — when a pattern
+/// fires and modifies IR, newly created ops go onto the worklist and can be
+/// visited in any order. So each flattening pattern must explicitly defer
+/// until its nested structured ops are flat.
+///
+/// CaseOps are excluded because they are structural children of SwitchOp and
+/// are handled by the SwitchOp flattening pattern.
+static bool hasNestedOpsToFlatten(mlir::Region ®ion) {
+ return region
+ .walk([](mlir::Operation *op) {
+ if (op->getNumRegions() > 0 && !isa<cir::CaseOp>(op))
+ return mlir::WalkResult::interrupt();
+ return mlir::WalkResult::advance();
+ })
+ .wasInterrupted();
+}
+
struct CIRFlattenCFGPass : public impl::CIRFlattenCFGBase<CIRFlattenCFGPass> {
CIRFlattenCFGPass() = default;
@@ -228,14 +247,13 @@ class CIRSwitchOpFlattening : public mlir::OpRewritePattern<cir::SwitchOp> {
mlir::LogicalResult
matchAndRewrite(cir::SwitchOp op,
mlir::PatternRewriter &rewriter) const override {
- // Cleanup scopes must be lowered before the enclosing switch so that
- // break inside them is properly routed through cleanup.
- // Fail the match so the pattern rewriter will process cleanup scopes first.
- bool hasNestedCleanup = op->walk([&](cir::CleanupScopeOp) {
- return mlir::WalkResult::interrupt();
- }).wasInterrupted();
- if (hasNestedCleanup)
- return mlir::failure();
+ // All nested structured CIR ops must be flattened before the switch.
+ // Break statements inside nested structured ops would create branches to
+ // blocks outside those ops' regions, which is invalid. Fail the match so
+ // the pattern rewriter will process them first.
+ for (mlir::Region ®ion : op->getRegions())
+ if (hasNestedOpsToFlatten(region))
+ return mlir::failure();
llvm::SmallVector<CaseOp> cases;
op.collectCases(cases);
@@ -432,14 +450,13 @@ class CIRLoopOpInterfaceFlattening
mlir::LogicalResult
matchAndRewrite(cir::LoopOpInterface op,
mlir::PatternRewriter &rewriter) const final {
- // Cleanup scopes must be lowered before the enclosing loop so that
- // break/continue inside them are properly routed through cleanup.
- // Fail the match so the pattern rewriter will process cleanup scopes first.
- bool hasNestedCleanup = op->walk([&](cir::CleanupScopeOp) {
- return mlir::WalkResult::interrupt();
- }).wasInterrupted();
- if (hasNestedCleanup)
- return mlir::failure();
+ // All nested structured CIR ops must be flattened before the loop.
+ // Break/continue statements inside nested structured ops would create
+ // branches to blocks outside those ops' regions, which is invalid. Fail
+ // the match so the pattern rewriter will process them first.
+ for (mlir::Region ®ion : op->getRegions())
+ if (hasNestedOpsToFlatten(region))
+ return mlir::failure();
// Setup CFG blocks.
mlir::Block *entry = rewriter.getInsertionBlock();
@@ -703,8 +720,10 @@ static void replaceCallWithTryCall(cir::CallOp callOp, mlir::Block *unwindDest,
}
// Replace uses of the call result with the try_call result.
+ // Use the rewriter API so that the pattern rewriter is notified of the
+ // in-place modifications to each user operation.
if (callOp->getNumResults() > 0)
- callOp->getResult(0).replaceAllUsesWith(tryCallOp.getResult());
+ rewriter.replaceAllUsesWith(callOp->getResult(0), tryCallOp.getResult());
rewriter.eraseOp(callOp);
}
@@ -1406,18 +1425,18 @@ class CIRCleanupScopeOpFlattening
mlir::PatternRewriter &rewriter) const override {
mlir::OpBuilder::InsertionGuard guard(rewriter);
- // Nested cleanup scopes and try operations must be flattened before the
- // enclosing cleanup scope so that EH cleanup inside them is properly
- // handled. Fail the match so the pattern rewriter processes them first.
+ // All nested structured CIR ops must be flattened before the cleanup scope.
+ // Operations like loops, switches, scopes, and ifs may contain exits
+ // (return, break, continue) that the cleanup scope will replace with
+ // branches to the cleanup entry. If those exits are inside a structured
+ // op's region, the branch would reference a block outside that region,
+ // which is invalid. Fail the match so they are processed first.
//
// Before checking, erase any trivially dead nested cleanup scopes. These
// arise from deactivated cleanups (e.g. partial-construction guards for
// lambda captures). The greedy rewriter may have already DCE'd them, but
// when a trivially dead nested op is erased first, the parent isn't always
- // re-added to the worklist, so we handle it here. These types of operations
- // will normally be removed by the canonicalizer, but we handle it here
- // also, because DCE can run between pattern matches in the current pass,
- // and if a trivially dead operation makes it this far, we will fail.
+ // re-added to the worklist, so we handle it here.
llvm::SmallVector<cir::CleanupScopeOp> deadNestedOps;
cleanupOp.getBodyRegion().walk([&](cir::CleanupScopeOp nested) {
if (mlir::isOpTriviallyDead(nested))
@@ -1426,14 +1445,7 @@ class CIRCleanupScopeOpFlattening
for (auto op : deadNestedOps)
rewriter.eraseOp(op);
- bool hasNestedOps = cleanupOp.getBodyRegion()
- .walk([&](mlir::Operation *op) {
- if (isa<cir::CleanupScopeOp, cir::TryOp>(op))
- return mlir::WalkResult::interrupt();
- return mlir::WalkResult::advance();
- })
- .wasInterrupted();
- if (hasNestedOps)
+ if (hasNestedOpsToFlatten(cleanupOp.getBodyRegion()))
return mlir::failure();
cir::CleanupKind cleanupKind = cleanupOp.getCleanupKind();
@@ -1616,19 +1628,15 @@ class CIRTryOpFlattening : public mlir::OpRewritePattern<cir::TryOp> {
mlir::LogicalResult
matchAndRewrite(cir::TryOp tryOp,
mlir::PatternRewriter &rewriter) const override {
- // Nested try ops and cleanup scopes must be flattened before the enclosing
- // try so that EH cleanup inside them is properly handled. Fail the match so
- // the pattern rewriter will process nested ops first.
- bool hasNestedOps =
- tryOp
- ->walk([&](mlir::Operation *op) {
- if (isa<cir::CleanupScopeOp, cir::TryOp>(op) && op != tryOp)
- return mlir::WalkResult::interrupt();
- return mlir::WalkResult::advance();
- })
- .wasInterrupted();
- if (hasNestedOps)
- return mlir::failure();
+ // All nested structured CIR ops must be flattened before the try op.
+ // Cleanup scopes and nested try ops need to be flat so EH cleanup is
+ // properly handled. Other structured ops (scopes, ifs, loops, switches,
+ // ternaries) must be flat because replaceCallWithTryCall creates try_call
+ // ops whose unwind destination is outside the structured op's region,
+ // which would be an invalid cross-region reference.
+ for (mlir::Region ®ion : tryOp->getRegions())
+ if (hasNestedOpsToFlatten(region))
+ return mlir::failure();
mlir::OpBuilder::InsertionGuard guard(rewriter);
mlir::Location loc = tryOp.getLoc();
@@ -1743,8 +1751,10 @@ class CIRTryOpFlattening : public mlir::OpRewritePattern<cir::TryOp> {
// cir.eh.initiate that produced this token. With catch-all, the LLVM
// landingpad needs "catch ptr null" instead of "cleanup".
if (hasCatchAll) {
- if (auto ehInitiate = traceToEhInitiate(resumeOp.getEhToken()))
- ehInitiate.removeCleanupAttr();
+ if (auto ehInitiate = traceToEhInitiate(resumeOp.getEhToken())) {
+ rewriter.modifyOpInPlace(ehInitiate,
+ [&] { ehInitiate.removeCleanupAttr(); });
+ }
}
mlir::Value ehToken = resumeOp.getEhToken();
diff --git a/clang/lib/CIR/Dialect/Transforms/LoweringPrepare.cpp b/clang/lib/CIR/Dialect/Transforms/LoweringPrepare.cpp
index ec032a92591d7..5d99d5b2f6ad3 100644
--- a/clang/lib/CIR/Dialect/Transforms/LoweringPrepare.cpp
+++ b/clang/lib/CIR/Dialect/Transforms/LoweringPrepare.cpp
@@ -193,8 +193,7 @@ struct LoweringPreparePass
globalOp->emitError("NYI: guard COMDAT for non-local variables");
return {};
} else if (hasComdat && globalOp.isWeakForLinker()) {
- globalOp->emitError("NYI: guard COMDAT for weak linkage");
- return {};
+ guard.setComdat(true);
}
setStaticLocalDeclGuardAddress(globalSymName, guard);
@@ -290,8 +289,9 @@ struct LoweringPreparePass
// Emit the initializer and add a global destructor if appropriate.
auto &ctorRegion = globalOp.getCtorRegion();
assert(!ctorRegion.empty() && "This should never be empty here.");
- if (!ctorRegion.hasOneBlock())
- llvm_unreachable("Multiple blocks NYI");
+ assert(ctorRegion.hasOneBlock() &&
+ "Static local ctor regions with multiple blocks should use "
+ "LocalInitOp (see PR #193576)");
mlir::Block &block = ctorRegion.front();
mlir::Block *insertBlock = builder.getInsertionBlock();
insertBlock->getOperations().splice(insertBlock->end(),
@@ -982,7 +982,12 @@ LoweringPreparePass::buildCXXGlobalVarDeclInitFunc(cir::GlobalOp op) {
FuncOp f = buildRuntimeFunction(builder, fnName, op.getLoc(), fnType,
cir::GlobalLinkageKind::InternalLinkage);
- // Move over the initialzation code of the ctor region.
+ // Move over the initialization code of the ctor region.
+ // The ctor region may have multiple blocks when exception handling
+ // scaffolding creates extra blocks (e.g., unreachable/trap blocks).
+ // We move all operations from the first block (minus the yield) into
+ // the function entry, and discard extra blocks (which contain only
+ // unreachable terminators from EH cleanup paths).
mlir::Block *entryBB = f.addEntryBlock();
if (!op.getCtorRegion().empty()) {
mlir::Block &block = op.getCtorRegion().front();
diff --git a/clang/test/CIR/CodeGen/cast-lvalue-conv.cpp b/clang/test/CIR/CodeGen/cast-lvalue-conv.cpp
new file mode 100644
index 0000000000000..873694e935dff
--- /dev/null
+++ b/clang/test/CIR/CodeGen/cast-lvalue-conv.cpp
@@ -0,0 +1,29 @@
+// RUN: %clang_cc1 -std=c++14 -triple x86_64-unknown-linux-gnu -fclangir -emit-cir %s -o %t.cir
+// RUN: FileCheck --check-prefix=CIR --input-file=%t.cir %s
+// RUN: %clang_cc1 -std=c++14 -triple x86_64-unknown-linux-gnu -fclangir -emit-llvm %s -o %t-cir.ll
+// RUN: FileCheck --check-prefix=LLVM --input-file=%t-cir.ll %s
+// RUN: %clang_cc1 -std=c++14 -triple x86_64-unknown-linux-gnu -emit-llvm %s -o %t.ll
+// RUN: FileCheck --check-prefix=OGCG --input-file=%t.ll %s
+
+// Test that CK_UserDefinedConversion and CK_ConstructorConversion
+// are handled in emitCastLValue as pass-through to the sub-expression.
+
+struct Source {
+ int val;
+};
+
+struct Dest {
+ int val;
+ Dest(const Source &s) : val(s.val) {}
+};
+
+void takeRef(const Dest &d);
+
+void testConstructorConversion() {
+ Source s{42};
+ takeRef(s);
+}
+
+// CIR: cir.func {{.*}} @_Z25testConstructorConversionv
+// LLVM: define {{.*}} @_Z25testConstructorConversionv
+// OGCG: define {{.*}} @_Z25testConstructorConversionv
diff --git a/clang/test/CIR/CodeGen/global-var-template-ctor.cpp b/clang/test/CIR/CodeGen/global-var-template-ctor.cpp
new file mode 100644
index 0000000000000..a01d7d7431a32
--- /dev/null
+++ b/clang/test/CIR/CodeGen/global-var-template-ctor.cpp
@@ -0,0 +1,36 @@
+// RUN: %clang_cc1 -std=c++14 -triple x86_64-unknown-linux-gnu -fclangir -emit-cir %s -o %t.cir
+// RUN: FileCheck --check-prefix=CIR --input-file=%t.cir %s
+// RUN: %clang_cc1 -std=c++14 -triple x86_64-unknown-linux-gnu -fclangir -emit-llvm %s -o %t-cir.ll
+// RUN: FileCheck --check-prefix=LLVM --input-file=%t-cir.ll %s
+// RUN: %clang_cc1 -std=c++14 -triple x86_64-unknown-linux-gnu -emit-llvm %s -o %t.ll
+// RUN: FileCheck --check-prefix=OGCG --input-file=%t.ll %s
+
+// Regression test: C++14 variable templates with non-constexpr
+// constructors must not crash CIR with "ctorRegion failed to
+// verify constraint: region with at most 1 blocks".
+
+template<int N> class FixedInt {
+public:
+ static const int value = N;
+ operator int() const { return value; }
+ FixedInt() {}
+};
+
+template<int N>
+static const FixedInt<N> fix{};
+
+int test() {
+ return fix<1> + fix<2>;
+}
+
+// CIR: cir.global "private" internal dso_local @_ZL3fixILi1EE
+// CIR: cir.global "private" internal dso_local @_ZL3fixILi2EE
+// CIR: cir.func {{.*}} @_Z4testv
+
+// LLVM: @_ZL3fixILi1EE = internal global
+// LLVM: @_ZL3fixILi2EE = internal global
+// LLVM: define {{.*}} @_Z4testv
+
+// OGCG: @_ZL3fixILi1EE = internal global
+// OGCG: @_ZL3fixILi2EE = internal global
+// OGCG: define {{.*}} @_Z4testv
diff --git a/clang/test/CIR/CodeGen/static-local.cpp b/clang/test/CIR/CodeGen/static-local.cpp
index 2de140ee6d07e..615608a1928c9 100644
--- a/clang/test/CIR/CodeGen/static-local.cpp
+++ b/clang/test/CIR/CodeGen/static-local.cpp
@@ -28,6 +28,23 @@ void f() {
use(&a);
}
+// Static local in an inline function: the variable and guard both get
+// linkonce_odr linkage and their own COMDAT groups.
+void use(const A *);
+inline const A &getInlineA() {
+ static A a;
+ return a;
+}
+
+void call_inline() {
+ use(&getInlineA());
+}
+
+// CIR-BEFORE-LPP: cir.global linkonce_odr comdat static_local_guard<"_ZGVZ10getInlineAvE1a"> @_ZZ10getInlineAvE1a = ctor : !rec_A {
+// CIR-BEFORE-LPP: %[[ADDR2:.*]] = cir.get_global static_local @_ZZ10getInlineAvE1a : !cir.ptr<!rec_A>
+// CIR-BEFORE-LPP: cir.call @_ZN1AC1Ev(%[[ADDR2]]) : (!cir.ptr<!rec_A> {{.*}}) -> ()
+// CIR-BEFORE-LPP: }
+
// CIR-BEFORE-LPP: cir.global "private" internal dso_local static_local_guard<"_ZGVZ1fvE1a"> @_ZZ1fvE1a = ctor : !rec_A {
// CIR-BEFORE-LPP: %[[ADDR:.*]] = cir.get_global static_local @_ZZ1fvE1a : !cir.ptr<!rec_A>
// CIR-BEFORE-LPP: cir.call @_ZN1AC1Ev(%[[ADDR]]) : (!cir.ptr<!rec_A> {{.*}}) -> ()
@@ -39,6 +56,16 @@ void f() {
// CIR-BEFORE-LPP: cir.return
// CIR-DAG: cir.global "private" internal dso_local @_ZGVZ1fvE1a = #cir.int<0> : !s64i
+// CIR-DAG: cir.global "private" linkonce_odr comdat @_ZGVZ10getInlineAvE1a = #cir.int<0> : !s64i
+
+// LLVM-DAG: @_ZGVZ1fvE1a = internal global i64 0
+// LLVM-DAG: @_ZZ10getInlineAvE1a = linkonce_odr global %class.A zeroinitializer, comdat, align 1
+// LLVM-DAG: @_ZGVZ10getInlineAvE1a = linkonce_odr global i64 0, comdat, align 8
+
+// OGCG-DAG: @_ZGVZ1fvE1a = internal global i64 0
+// OGCG-DAG: @_ZZ10getInlineAvE1a = linkonce_odr global %class.A zeroinitializer, comdat, align 1
+// OGCG-DAG: @_ZGVZ10getInlineAvE1a = linkonce_odr global i64 0, comdat, align 8
+
// CIR: cir.func{{.*}}@_Z1fv()
// CIR: %[[ADDR:.*]] = cir.get_global static_local @_ZZ1fvE1a : !cir.ptr<!rec_A>
// CIR: %[[GUARD:.*]] = cir.get_global @_ZGVZ1fvE1a : !cir.ptr<!s64i>
@@ -54,7 +81,19 @@ void f() {
// CIR: cir.call @_Z3useP1A(%[[ADDR]])
//...
[truncated]
Review comment on the quoted lines:

    // Resume
    //===----------------------------------------------------------------------===//
    +// Note: cir.resume can appear transiently inside any structured CIR op
It looks like you've pulled this in from #192503. Is it needed for the tests in this PR to pass?
No, that was a branch hygiene issue: my local branch had cherry-picks that leaked into the diff. Rebased onto clean upstream main; the PR should now show only the CIRGenExpr.cpp and test changes.
Force-pushed 41e2f81 to 7d5c955 with commit message: "…alue Handle CK_UserDefinedConversion, CK_ConstructorConversion, CK_CPointerToObjCPointerCast, CK_BlockPointerToObjCPointerCast, and CK_LValueToRValue in emitCastLValue as pass-through to the sub-expression, matching classic codegen (CGExpr.cpp:6197). Made-with: Cursor"
Review comment on the quoted lines:

    takeRef(s);
    }

    // CIR: cir.func {{.*}} @_Z25testConstructorConversionv
Can you add more checks to show what we actually did with this?
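For context, stronger checks might look like the following hypothetical sketch. The mangled names follow the Itanium ABI for the test's `Dest(const Source &)` constructor and `takeRef(const Dest &)`, but the exact CIR and LLVM line shapes are unverified guesses and would need to be regenerated from the actual `-emit-cir` / `-emit-llvm` output before being committed:

```cpp
// Hypothetical additional FileCheck lines, illustrative only: they assert
// that the converted temporary is constructed via the Source-to-Dest
// converting constructor and then passed by reference to takeRef.

// CIR:  cir.call @_ZN4DestC1ERK6Source({{.*}})
// CIR:  cir.call @_Z7takeRefRK4Dest({{.*}})

// LLVM: call {{.*}}@_ZN4DestC1ERK6Source(
// LLVM: call {{.*}}@_Z7takeRefRK4Dest(
```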
Review comment on the quoted lines:

    case CK_CPointerToObjCPointerCast:
    case CK_BlockPointerToObjCPointerCast:
    -  case CK_LValueToRValue: {
    +  case CK_LValueToRValue:
It looks like most of these cases are untested. They're relatively trivial, but they still need some kind of test coverage.
emitCastLValue was hitting an NYI error for CK_UserDefinedConversion, CK_ConstructorConversion, CK_CPointerToObjCPointerCast, CK_BlockPointerToObjCPointerCast, and CK_LValueToRValue. Classic codegen handles all of these as a pass-through to the sub-expression (CGExpr.cpp:6197), and CIR should do the same.

Made with Cursor