author     Xusong Wang <xusongw@google.com>    2021-03-03 16:20:37 -0800
committer  Xusong Wang <xusongw@google.com>    2021-05-06 13:56:32 -0700
commit     727a7b2104b0962509fedffe720eec508b2ee6de
tree       db56f9f27626747458a78fe1a1c5454fc666794a /neuralnetworks/utils/common/test/ResilientPreparedModelTest.cpp
parent     57c9114315650206adab4beaa97e07d27e3e4b10
Introduce reusable execution to canonical interface -- HAL.
This CL modifies the canonical interface for reusable executions:
- Add new interface: IExecution with compute and computeFenced methods
- Add new method IPreparedModel::createExecution
In the NNAPI runtime, the new IExecution interface is used to
memoize request-specific execution resources (e.g. a converted HAL
request). The expected usage is that IPreparedModel::createExecution
is invoked on the first computation of a reusable NDK ANNExecution
object, and IExecution::compute* is then invoked repeatedly.
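
As an illustration of that call pattern, here is a minimal C++ sketch. It uses
simplified stand-in types (Request, OutputShape, Timing, and the two interfaces
below) rather than the real canonical types, which carry additional parameters
(measure, deadlines, loop timeouts) and return nn::GeneralResult /
nn::ExecutionResult; the names and signatures here are assumptions made only
for this example.

// Illustrative sketch only. Request, OutputShape, Timing, IExecution and
// IPreparedModel below are simplified stand-ins, not the real canonical types.
#include <memory>
#include <utility>
#include <vector>

struct Request {};       // stand-in for nn::Request
struct OutputShape {};   // stand-in for nn::OutputShape
struct Timing {};        // stand-in for nn::Timing

// Reusable execution: created once per request, computed many times.
class IExecution {
  public:
    virtual ~IExecution() = default;
    virtual std::pair<std::vector<OutputShape>, Timing> compute() = 0;
};

class IPreparedModel {
  public:
    virtual ~IPreparedModel() = default;
    // One-shot path, preserved for non-reusable executions.
    virtual std::pair<std::vector<OutputShape>, Timing> execute(const Request& request) = 0;
    // Reusable path: memoizes request-specific resources (e.g. a converted HAL request).
    virtual std::shared_ptr<IExecution> createExecution(const Request& request) = 0;
};

// Expected runtime usage for a reusable NDK ANNExecution object: create the
// execution once, then compute repeatedly against the same memoized resources.
inline void runRepeatedly(IPreparedModel& preparedModel, const Request& request, int numRuns) {
    const auto execution = preparedModel.createExecution(request);  // first computation only
    for (int i = 0; i < numRuns; ++i) {
        const auto [outputShapes, timing] = execution->compute();   // reused on every run
        (void)outputShapes;
        (void)timing;
    }
}

The point of the pattern is that everything derivable from the request alone is
paid for once in createExecution instead of on every compute call.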
The IPreparedModel::execute* methods are preserved to avoid redundant
object creation and memoization overhead for a one-time
(non-reusable) execution.
For a vendor implementing the canonical interfaces, only the
IPreparedModel::execute* methods will be called, because there is
currently no reusable execution at the HAL interface. A DefaultExecution
implementation is provided to reduce the work needed on the vendor side.
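
The DefaultExecution sources are not part of this diff, so the following is
only a guess at its shape, reusing the simplified stand-ins (Request,
IExecution, IPreparedModel) from the sketch above: an adapter that satisfies
IExecution by storing the request and forwarding each compute call to the
prepared model's one-shot execute path.

// Guess at the shape of a DefaultExecution-style adapter, building on the
// simplified stand-ins defined in the earlier sketch. It stores the request and
// forwards every compute() call to the one-shot execute() path, so a backend
// that only implements IPreparedModel::execute* still satisfies IExecution.
class DefaultExecutionSketch final : public IExecution {
  public:
    DefaultExecutionSketch(std::shared_ptr<IPreparedModel> preparedModel, Request request)
        : mPreparedModel(std::move(preparedModel)), mRequest(std::move(request)) {}

    std::pair<std::vector<OutputShape>, Timing> compute() override {
        // No memoization here: each call pays the full single-shot cost.
        return mPreparedModel->execute(mRequest);
    }

  private:
    const std::shared_ptr<IPreparedModel> mPreparedModel;
    const Request mRequest;
};

A vendor that only implements IPreparedModel::execute* can therefore still be
driven through the reusable-execution interface, just without any memoization
benefit.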
Bug: 184073769
Test: NNT_static
Test: neuralnetworks_utils_hal_1_0_test
Test: neuralnetworks_utils_hal_1_1_test
Test: neuralnetworks_utils_hal_1_2_test
Test: neuralnetworks_utils_hal_1_3_test
Test: neuralnetworks_utils_hal_common_test
Test: neuralnetworks_utils_hal_aidl_test
Change-Id: I91790bb5ccf5ae648687fe603f88ffda2c9fd2b2
Diffstat (limited to 'neuralnetworks/utils/common/test/ResilientPreparedModelTest.cpp')
-rw-r--r--  neuralnetworks/utils/common/test/ResilientPreparedModelTest.cpp | 31
1 file changed, 31 insertions, 0 deletions
diff --git a/neuralnetworks/utils/common/test/ResilientPreparedModelTest.cpp b/neuralnetworks/utils/common/test/ResilientPreparedModelTest.cpp
index 6d86e10df2..d396ca88df 100644
--- a/neuralnetworks/utils/common/test/ResilientPreparedModelTest.cpp
+++ b/neuralnetworks/utils/common/test/ResilientPreparedModelTest.cpp
@@ -55,6 +55,7 @@ constexpr auto makeError = [](nn::ErrorStatus status) {
 const auto kReturnGeneralFailure = makeError(nn::ErrorStatus::GENERAL_FAILURE);
 const auto kReturnDeadObject = makeError(nn::ErrorStatus::DEAD_OBJECT);
 
+const auto kNoCreateReusableExecutionError = nn::GeneralResult<nn::SharedExecution>{};
 const auto kNoExecutionError =
         nn::ExecutionResult<std::pair<std::vector<nn::OutputShape>, nn::Timing>>{};
 const auto kNoFencedExecutionError =
@@ -231,6 +232,36 @@ TEST(ResilientPreparedModelTest, executeFencedDeadObjectSuccessfulRecovery) {
             << "Failed with " << result.error().code << ": " << result.error().message;
 }
 
+TEST(ResilientPreparedModelTest, createReusableExecution) {
+    // setup call
+    const auto [mockPreparedModel, mockPreparedModelFactory, preparedModel] = setup();
+    EXPECT_CALL(*mockPreparedModel, createReusableExecution(_, _, _))
+            .Times(1)
+            .WillOnce(Return(kNoCreateReusableExecutionError));
+
+    // run test
+    const auto result = preparedModel->createReusableExecution({}, {}, {});
+
+    // verify result
+    ASSERT_TRUE(result.has_value())
+            << "Failed with " << result.error().code << ": " << result.error().message;
+}
+
+TEST(ResilientPreparedModelTest, createReusableExecutionError) {
+    // setup call
+    const auto [mockPreparedModel, mockPreparedModelFactory, preparedModel] = setup();
+    EXPECT_CALL(*mockPreparedModel, createReusableExecution(_, _, _))
+            .Times(1)
+            .WillOnce(kReturnGeneralFailure);
+
+    // run test
+    const auto result = preparedModel->createReusableExecution({}, {}, {});
+
+    // verify result
+    ASSERT_FALSE(result.has_value());
+    EXPECT_EQ(result.error().code, nn::ErrorStatus::GENERAL_FAILURE);
+}
+
 TEST(ResilientPreparedModelTest, getUnderlyingResource) {
     // setup call
     const auto [mockPreparedModel, mockPreparedModelFactory, preparedModel] = setup();