| field | value | date |
|---|---|---|
| author | Evgeny Astigeevich <evgeny.astigeevich@linaro.org> | 2019-09-09 14:52:12 +0100 |
| committer | Evgeny Astigeevich <evgeny.astigeevich@linaro.org> | 2019-10-10 13:06:08 +0100 |
| commit | 98416bf06592493ee6fde039af5eaa5efab73acc (patch) | |
| tree | a0052ec5364ce1068639a9b7d7355683eb691371 /compiler/optimizing/code_generator.h | |
| parent | 63b0c26aae3e7237166dd781eb7a15fbc7c091c2 (diff) | |
Fix uses of MaybeRecordImplicitNullCheck without special scopes
MaybeRecordImplicitNullCheck uses CodeGenerator::RecordPcInfo() and requires
an exact PC. However, on ARM32/ARM64 there is no guarantee of an exact PC
when CodeGenerator::RecordPcInfo() is used outside the VIXL special scopes
(EmissionCheckScope, ExactAssemblyScope): without them, VIXL might emit
veneer/literal pools, which shifts the PC.
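For illustration only (not part of this CL), here is a minimal VIXL-only sketch of the exact-PC pattern the fix relies on. The helper name EmitLoadWithExactPc and the register choices are hypothetical; the VIXL types and calls are the assumed real API.

```cpp
#include <cstdint>

#include "aarch64/macro-assembler-aarch64.h"

using vixl::aarch64::ExactAssemblyScope;
using vixl::aarch64::kInstructionSize;
using vixl::aarch64::MacroAssembler;
using vixl::aarch64::MemOperand;
using vixl::aarch64::x0;
using vixl::aarch64::x1;

// Hypothetical helper: emit a load that may fault (an implicit null check)
// and return the code offset just past it, suitable as an exact native_pc.
uint32_t EmitLoadWithExactPc(MacroAssembler* masm) {
  // Inside an ExactAssemblyScope VIXL cannot emit veneer/literal pools and
  // the scope must contain exactly one instruction, so the cursor offset
  // taken right after the load points just past the load itself.
  ExactAssemblyScope scope(masm, kInstructionSize);
  masm->ldr(x0, MemOperand(x1));  // raw (non-macro) instruction: allowed here
  return static_cast<uint32_t>(masm->GetCursorOffset());
}
```

Without such a scope, the macro assembler is free to emit a pool right after the load, and a PC taken afterwards would point past the pool rather than at the load.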
The ARM32 code generator has uses of MaybeRecordImplicitNullCheck without the
special scopes.
This CL fixes missing special scopes in the ARM32/ARM64 code generators.
It also changes the API to prevent such cases:
1. A variant of CodeGenerator::RecordPcInfo that takes native_pc as a
parameter is added. The old variant (which derives the PC from
Assembler::CodePosition) is kept, and its documentation now notes that
Assembler::CodePosition is target-dependent and might be imprecise.
2. CodeGenerator::MaybeRecordImplicitNullCheck is made virtual. The
ARM32/ARM64 code generators now check that MaybeRecordImplicitNullCheck
is invoked within VIXL special scopes (see the sketch after this list).
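A rough sketch of what such an override could look like, assuming the ART names CodeGeneratorARM64, GetVIXLAssembler() and DCHECK from the tree; the scope-checking predicate is hypothetical and stands in for whatever state the real backends inspect, so this is not the actual implementation in the CL.

```cpp
#include "code_generator_arm64.h"  // ART header assumed; provides CodeGeneratorARM64

namespace art {
namespace arm64 {

// Hypothetical predicate: returns true if an EmissionCheckScope or
// ExactAssemblyScope is currently open on this assembler. Its implementation
// is deliberately omitted; the real backends use their own check.
bool IsWithinVIXLSpecialScope(vixl::aarch64::MacroAssembler* masm);

void CodeGeneratorARM64::MaybeRecordImplicitNullCheck(HInstruction* instruction) {
  // Debug-only check: the caller must have opened a special scope, so no
  // veneer/literal pool can be emitted between the faulting instruction and
  // the PC recorded for the stack map.
  DCHECK(IsWithinVIXLSpecialScope(GetVIXLAssembler()));
  // Delegate to the shared implementation, which records the stack map.
  CodeGenerator::MaybeRecordImplicitNullCheck(instruction);
}

}  // namespace arm64
}  // namespace art
```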
Test: test.py --host --optimizing --jit --gtest
Test: test.py --target --optimizing --jit
Test: run-gtests.sh
Change-Id: Ic66c16e7bdf4751cbc19a9de05846fba005b7f55
Diffstat (limited to 'compiler/optimizing/code_generator.h')
-rw-r--r-- | compiler/optimizing/code_generator.h | 20 |
1 file changed, 18 insertions, 2 deletions
```diff
diff --git a/compiler/optimizing/code_generator.h b/compiler/optimizing/code_generator.h
index 917d97de1b..9e3e454f3d 100644
--- a/compiler/optimizing/code_generator.h
+++ b/compiler/optimizing/code_generator.h
@@ -331,20 +331,36 @@ class CodeGenerator : public DeletableArenaObject<kArenaAllocCodeGenerator> {
     return GetFrameSize() - FrameEntrySpillSize() - kShouldDeoptimizeFlagSize;
   }
 
-  // Record native to dex mapping for a suspend point. Required by runtime.
+  // Record native to dex mapping for a suspend point. Required by runtime.
   void RecordPcInfo(HInstruction* instruction,
                     uint32_t dex_pc,
+                    uint32_t native_pc,
                     SlowPathCode* slow_path = nullptr,
                     bool native_debug_info = false);
+
+  // Record native to dex mapping for a suspend point.
+  // The native_pc is used from Assembler::CodePosition.
+  //
+  // Note: As Assembler::CodePosition is target dependent, it does not guarantee the exact native_pc
+  // for the instruction. If the exact native_pc is required it must be provided explicitly.
+  void RecordPcInfo(HInstruction* instruction,
+                    uint32_t dex_pc,
+                    SlowPathCode* slow_path = nullptr,
+                    bool native_debug_info = false);
+
   // Check whether we have already recorded mapping at this PC.
   bool HasStackMapAtCurrentPc();
+
   // Record extra stack maps if we support native debugging.
+  //
+  // ARM specific behaviour: The recorded native PC might be a branch over pools to instructions
+  // corresponding the dex PC.
   void MaybeRecordNativeDebugInfo(HInstruction* instruction,
                                   uint32_t dex_pc,
                                   SlowPathCode* slow_path = nullptr);
 
   bool CanMoveNullCheckToUser(HNullCheck* null_check);
-  void MaybeRecordImplicitNullCheck(HInstruction* instruction);
+  virtual void MaybeRecordImplicitNullCheck(HInstruction* instruction);
   LocationSummary* CreateThrowingSlowPathLocations(
       HInstruction* instruction, RegisterSet caller_saves = RegisterSet::Empty());
   void GenerateNullCheck(HNullCheck* null_check);
```
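For illustration only (not from this commit): a hypothetical ARM64 call site showing how the two overloads above differ. EmitSuspendingCall is an invented function; CodeGeneratorARM64, GetVIXLAssembler(), RecordPcInfo and the VIXL scope types are names assumed from ART and VIXL as shown in the diff.

```cpp
#include <cstdint>

#include "aarch64/macro-assembler-aarch64.h"
#include "code_generator_arm64.h"  // ART header assumed; provides CodeGeneratorARM64

// Hypothetical call site: emit an indirect call through x16 and record the
// suspend-point PC for it.
void EmitSuspendingCall(art::arm64::CodeGeneratorARM64* codegen,
                        art::HInstruction* instruction,
                        uint32_t dex_pc) {
  vixl::aarch64::MacroAssembler* masm = codegen->GetVIXLAssembler();
  {
    // Exact variant: emit the call inside a special scope (no pools possible)
    // and pass the PC explicitly, so the stack map points right after the blr.
    vixl::aarch64::ExactAssemblyScope scope(masm, vixl::aarch64::kInstructionSize);
    masm->blr(vixl::aarch64::x16);  // call target assumed to be in x16
    codegen->RecordPcInfo(instruction, dex_pc,
                          static_cast<uint32_t>(masm->GetCursorOffset()));
  }

  // Convenience variant (shown for contrast): the PC is derived internally
  // from Assembler::CodePosition, which on ARM may be imprecise, so it should
  // only be used where an exact PC is not required.
  // codegen->RecordPcInfo(instruction, dex_pc);
}
```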