path: root/compiler/optimizing/optimizing_cfi_test_expected.inc
Age         Commit message                                      Author
2020-02-13  Remove MIPS support from Optimizing. (Vladimir Marko)

    Test: aosp_taimen-userdebug boots.
    Test: m test-art-host-gtest
    Test: testrunner.py --host --optimizing
    Bug: 147346243
    Change-Id: I97fdc15e568ae3fe390efb1da690343025f84944
2017-10-23  MIPS32: Improve stack alignment, use sdc1/ldc1, where possible. (Chris Larsen)

    - Ensure that SP is a multiple of 16 at all times, and
    - Use ldc1/sdc1 to load/store FPU registers from/to 8-byte-aligned
      locations wherever possible.

    Use `export ART_MIPS32_CHECK_ALIGNMENT=true` when building Android
    to enable the new runtime alignment checks.

    Test: Boot & run tests on 32-bit version of QEMU, and CI-20.
    Test: test/testrunner/testrunner.py --target --optimizing --32
    Test: test-art-host-gtest
    Test: test-art-target-gtest
    Change-Id: Ia667004573f419fd006098fcfadf5834239cb485
2017-07-30  MIPS: Eliminate hard-coded offsets in branches (Alexey Frunze)

    The bulk of the change is in the assemblers and their tests.

    The main goal is to introduce "bare" branches to labels (as opposed
    to the existing bare branches with relative offsets, whose direct
    use we want to eliminate). These branches' delay/forbidden slots
    are filled manually and these branches do not promote to long (the
    branch target must be within reach of the individual branch
    instruction).

    The secondary goal is to add more branch tests (mainly for bare vs
    non-bare branches and a few extra) and refactor and reorganize the
    branch test code a bit.

    The third goal is to improve idiom recognition in the disassembler,
    including branch idioms and a few others.

    Further details:
    - introduce bare branches (R2 and R6) to labels, making R2 branches
      available for use on R6
    - make use of the above in the code generators
    - align beqz/bnez with their GNU assembler encoding to simplify and
      shorten the test code
    - update the CFI test because of the above
    - add trivial tests for bare and non-bare branches (addressing
      existing debt as well)
    - add MIPS32R6 tests for long beqc/beqzc/bc (debt)
    - add MIPS64R6 long beqzc test (debt)
    - group branch tests together
    - group constant/literal/address-loading tests together
    - make the disassembler recognize:
      - b/beqz/bnez (beq/bne with $zero reg)
      - nal (bltzal with $zero reg)
      - bal/bgezal (bal = bgezal with $zero reg)
      - move (or with $zero reg)
      - li (ori/addiu with $zero reg)
      - dli (daddiu with $zero reg)
    - disassemble 16-bit immediate operands (in andi, ori, xori, li,
      dli) as signed or unsigned as appropriate
    - drop unused instructions (bltzl, bltzall, addi) from the
      disassembler as there are no plans to use them

    Test: test-art-host-gtest
    Test: booted MIPS64 (with 2nd arch MIPS32R6) in QEMU
    Test: test-art-target-gtest
    Test: testrunner.py --target --optimizing
    Test: same tests as above on CI20
    Test: booted MIPS32R2 in QEMU
    Change-Id: I62b74a6c00ce0651528114806ba24a59ba564a73
2017-07-14  Remove the old ARM code generator from ART's Optimizing compiler. (Roland Levillain)

    The AArch32 VIXL-based code generator has been the default ARM code
    generator in ART for some time now. The old ARM code generator does
    not compile anymore; retiring it.

    Test: test.py
    Bug: 63316036
    Change-Id: Iab8fbc4ac73eac2c1a809cd7b22fec6b619755db
2017-07-11  Introduce a Marking Register in ARM64 code generation. (Roland Levillain)

    When generating code for ARM64, maintain the status of
    Thread::Current()->GetIsGcMarking() in register X20, dubbed MR
    (Marking Register), and check the value of that register (instead
    of loading and checking a read barrier marking entrypoint) in read
    barriers.

    Test: m test-art-target
    Test: m test-art-target with tree built with ART_USE_READ_BARRIER=false
    Test: ARM64 device boot test
    Bug: 37707231
    Change-Id: Ibe9bc5c99a2176b0a0476e9e9ad7fcc9f745017b
2017-03-23  MIPS64: Improve method entry/exit code (Alexey Frunze)

    Improvements:
    - the stack frame is (de)allocated in one step instead of two
    - the return address register, RA, is restored early for better
      instruction scheduling
    - eliminate unused delay slot

    Test: test-art-host-gtest
    Test: booted MIPS64 (with 2nd arch MIPS32R2) in QEMU
    Change-Id: I55172bd167ed1baced82bc1d542213b93b13c2ce
2017-03-16  Revert "Revert "ARM: VIXL32: Use VIXL backend by default."" (Nicolas Geoffray)

    bug:35977033

    This reverts commit 25275bef429dc6a48b79411e0d0b32207294523b.

    Change-Id: I440bf8415e2bf550607595499701fb3e7c33b37e
2017-03-14  Revert "ARM: VIXL32: Use VIXL backend by default." (Nicolas Geoffray)

    Revert while investigating.

    bug:35977033

    This reverts commit e6316892821287b1d1906b9962eae129fbdc37be.

    Change-Id: I51e24a6e539072a6d0d470dfe41855a4847f3e96
2017-02-21  ARM: VIXL32: Use VIXL backend by default. (Scott Wakeling)

    Use `export ART_USE_OLD_ARM_BACKEND=true` to use the previous
    backend.

    Test: mma test-art-host && mma test-art-target
    Change-Id: I4024a4ea15fa8ce1269c0837f6ea001b6c809df5
2016-12-12  ARM: VIXL32: Test both current and new assemblers with optimizing_cfi_test. (Scott Wakeling)

    Test: m test-art-host
    Change-Id: I71b97113d9bc3ad5abe5f5f89a0d94c243c8f2e2
2016-10-13  Fix optimizing_cfi_test and arm64 code generation. (Nicolas Geoffray)

    Change https://android-review.googlesource.com/#/c/287582/ broke it.

    test: m test-art-host-gtest-optimizing_cfi_test
    test: m test-art-target on angler
    Change-Id: I7fc74a87ffa0b26b8e103b87a2ac1179bea2145a
2016-08-30  MIPS32: Fill branch delay slots (Alexey Frunze)

    Test: booted MIPS32 in QEMU
    Test: test-art-host-gtest
    Test: test-art-target-gtest
    Test: test-art-target-run-test-optimizing on CI20
    Change-Id: I727e80753395ab99fff004cb5d2e0a06409150d7
2016-06-28  ARM64: Ensure stricter alignment when loading and storing register pairs (Anton Kirilov)

    The impetus for this change is the fact that loads that cross a
    64-byte boundary and stores that cross a 16-byte boundary are a
    performance issue on Cortex-A57 and A72.

    Change-Id: I81263dc72272192ad2d190b741a955f175880461
2016-06-04  MIPS32: Improve method entry/exit code (Alexey Frunze)

    Improvements:
    - the stack frame is (de)allocated in one step instead of two
    - callee-saved FPU registers are 8-byte aligned within the frame,
      allowing a single ldc1/sdc1 instruction to load/store an FPU
      register without causing exceptions due to misaligned accesses
    - the return address register, RA, is restored early for better
      instruction scheduling

    Change-Id: I556b139c62839490a9fdbce8c5e6e3e2d1cc7bb7
2016-02-02  Add MIPS floating point register mapping to DWARF. (David Srbecky)

    Change-Id: I88508461412bc166549843744a3c6a4ee925b2c7
2015-11-21  MIPS64: Support short and long branches (Alexey Frunze)

    Change-Id: I618c960bd211048166d9fde78d4106bd3ca42b3a
2015-11-04  Delay emitting CFI PC adjustments until after Thumb2/Mips fixup. (Vladimir Marko)

    On Mips, also take into account out-of-order CFI data emitted from
    EmitBranches().

    Change-Id: I03b0b0b4c2b1ea31a02699ef5fa1c55aa42c23c3
2015-05-29  Move mirror::ArtMethod to native (Mathieu Chartier)

    Optimizing + quick tests are passing, devices boot.

    TODO: Test and fix bugs in mips64.

    Saves 16 bytes per ArtMethod in most cases, a 7.5MB reduction in
    system PSS. Some of the savings are from removal of virtual methods
    and direct methods object arrays.

    Bug: 19264997
    Change-Id: I622469a0cfa0e7082a2119f3d6a9491eb61e3f3d
2015-05-22  ARM64: Move xSELF from x18 to x19. (Serban Constantinescu)

    This patch moves xSELF to callee-saved x19 and removes support for
    ETR (external thread register), previously used across native
    calls.

    Change-Id: Icee07fbb9292425947f7de33d10a0ddf98c7899b
    Signed-off-by: Serban Constantinescu <serban.constantinescu@linaro.org>
2015-04-09  Implement CFI for Optimizing. (David Srbecky)

    CFI is necessary for stack unwinding in gdb, lldb, and libunwind.

    Change-Id: I1a3480e3a4a99f48bf7e6e63c4e83a80cfee40a2