author    Hans Boehm <hboehm@google.com>    2017-12-12 11:05:32 -0800
committer Hans Boehm <hboehm@google.com>    2017-12-15 10:53:19 -0800
commit    ae915a0182db98769ed4851fab440e79e012babd (patch)
tree      05d9411ed11fe95ccbcc73f4300c819563bbbf3d /test/ProfileTestMultiDex/Main.java
parent    9e73b32fed15d262b0393f114b9602ac7ef88917 (diff)
Improve scoped spinlock implementations
Both ScopedAllMutexesLock and ScopedExpectedMutexesOnWeakRefAccessLock really implement simple spinlocks. But they did it in an unorthodox way, with a CAS on unlock. I see no reason for that. Use the standard (and faster) idiom instead.

The NanoSleep(100) waiting logic was probably suboptimal and definitely misleading. I timed NanoSleep(100) on a Linux 4.4 host, and it takes about 60 usecs, i.e. 60,000 nsecs. By comparison a no-op sched_yield takes about 1 usec.

This replaces it with waiting logic that should be generically usable. This is no doubt overkill, but the hope is that we can eventually reuse this where it matters more.

Test: Built and booted AOSP.
Change-Id: I6e47508ecb8d5e5d0b4f08c8e8f073ad7b1d192e
Diffstat (limited to 'test/ProfileTestMultiDex/Main.java')
0 files changed, 0 insertions, 0 deletions