[arm] change prior box implement #4013
Merged

130 commits
5dd85c2
Merge pull request #89 from PaddlePaddle/develop
chenjiaoAngel 822e5fe
fix format, test=develop
chenjiaoAngel 51ba4a4
Merge pull request #90 from PaddlePaddle/develop
chenjiaoAngel 91fe997
add some op infershape implement, test=develop
chenjiaoAngel 686245f
add reshape infershape, test=develop
chenjiaoAngel 31369ae
Merge pull request #91 from PaddlePaddle/develop
chenjiaoAngel 922a145
fix format, test=develop
chenjiaoAngel 854c406
fix format, test=develop
chenjiaoAngel baa4ff0
fix space format. test=develop
chenjiaoAngel d397aed
add conv_transpose+bn fusion. test=develop
chenjiaoAngel df975ee
delete note, test=develop
chenjiaoAngel 723f8a8
Merge pull request #92 from PaddlePaddle/develop
chenjiaoAngel c4c30ab
fix format, test=develop
chenjiaoAngel 6bbdbca
update code. test=develop
chenjiaoAngel 6a69a5c
fix format space, test=develop
chenjiaoAngel 902e923
fix opt run error, test=develop
chenjiaoAngel 396b4ec
add boxcoder opencl kernel, test=develop
chenjiaoAngel 368a1bf
Merge pull request #93 from PaddlePaddle/develop
chenjiaoAngel 5ba381b
fix format, test=develop
chenjiaoAngel a81f190
add cmake, test=develop
chenjiaoAngel bdbca33
fix format. test=develop
chenjiaoAngel d1b53a8
fix format. test=develop
chenjiaoAngel ab252f2
fix format aa. test=develop
chenjiaoAngel fc03f95
fix , test=develop
chenjiaoAngel 8b62059
Merge pull request #94 from PaddlePaddle/develop
chenjiaoAngel fce2a79
update profile info(add new element), test=develop
chenjiaoAngel 7085da3
Merge pull request #95 from PaddlePaddle/develop
chenjiaoAngel f92ccf5
Merge pull request #96 from PaddlePaddle/develop
chenjiaoAngel 82e4b53
Merge pull request #97 from PaddlePaddle/develop
chenjiaoAngel a4770bd
fix clang ut build error
chenjiaoAngel f30ae5f
add gemm+relu6
chenjiaoAngel 9804e80
fix build error
chenjiaoAngel 6373783
fix .h
chenjiaoAngel 0982084
fix gemm_s8
chenjiaoAngel 31f7e60
Merge pull request #98 from PaddlePaddle/develop
chenjiaoAngel fad46f5
fix ut conv+leakyRelu
chenjiaoAngel 05fe51c
improve 3x3s1 direct profile
chenjiaoAngel de26028
update code
chenjiaoAngel 703ef20
fix format, test=develop
chenjiaoAngel 63c8675
pull code
chenjiaoAngel dddf058
Merge pull request #107 from PaddlePaddle/develop
chenjiaoAngel 8c1cb2a
update cide
chenjiaoAngel 9a273b8
add gemv+relu6/lleakyRelu
chenjiaoAngel e0d9414
fix v7 build bug
chenjiaoAngel 2562652
fix relu6 bug
chenjiaoAngel 0559b0a
fix gemm ut bug
chenjiaoAngel 015a408
fix ut
chenjiaoAngel 6d92def
fix ut
chenjiaoAngel 3aec231
fi format. test=develop
chenjiaoAngel e506627
fix format. test=develop
chenjiaoAngel 644d97f
fic format. test=develop
chenjiaoAngel 24af40c
ff. test=develop
chenjiaoAngel 312eba7
fix v7 clang build error, test=develop
chenjiaoAngel 516b584
fix v7 build register error, test=develop
chenjiaoAngel 339f912
fix format. test=develop
chenjiaoAngel 91ff7d5
fix build register error, ttest=develop
chenjiaoAngel d4e7e6b
fix build register error, ttest=develop
chenjiaoAngel 8947790
fix format, test=develop
chenjiaoAngel 69cc231
ff format,test=develop
chenjiaoAngel 5214a2c
fix relu6 problem, test=develop
chenjiaoAngel bc69152
fix form, test=develop
chenjiaoAngel 8a97443
fix format, test=develop
chenjiaoAngel f587692
ff, test=develop
chenjiaoAngel 067d815
add six / scale , test=develop
chenjiaoAngel bcfd0b5
fix conflicct
chenjiaoAngel 28119c6
iMerge branch 'PaddlePaddle-develop' into int8
chenjiaoAngel c9dffc4
fix pooling overflow, test=develop
chenjiaoAngel 73b97af
fix conflict test=develop
chenjiaoAngel 10ecaf0
Merge pull request #116 from PaddlePaddle/develop
chenjiaoAngel f0f944c
pull code
chenjiaoAngel fb84d6f
Merge pull request #120 from PaddlePaddle/develop
chenjiaoAngel 3fc508e
Merge pull request #121 from PaddlePaddle/develop
chenjiaoAngel b6c628e
add grouup_norm
chenjiaoAngel e3b509f
pull code
chenjiaoAngel c08fbb0
fix format. test=develop
chenjiaoAngel aa98e4c
fix foormat, test=develop
chenjiaoAngel 60fa838
fix format. test=develop
chenjiaoAngel efc3c5f
fix ff.test=develoop
chenjiaoAngel e8f4f11
fix xiaodu crash. test=develop
chenjiaoAngel 0fb4294
format. test=develop
chenjiaoAngel d61762d
Merge pull request #125 from PaddlePaddle/develop
chenjiaoAngel 99a45f3
fix concatt axis < 0 errorr,ttest=develop
chenjiaoAngel b66360b
fix format. test=develop
chenjiaoAngel 77b3062
Merge pull request #127 from PaddlePaddle/develop
chenjiaoAngel ee1c6d9
fix conv int8 kernel choose and sooftmax compute bug
chenjiaoAngel 8e85729
change axis_size = 4 kernel choose, test=develop
chenjiaoAngel a0b1af9
fix format. test=develop
chenjiaoAngel a887d7d
Merge pull request #129 from PaddlePaddle/develop
chenjiaoAngel 83f26d6
Merge pull request #131 from PaddlePaddle/develop
chenjiaoAngel 74270e1
Merge pull request #132 from PaddlePaddle/develop
chenjiaoAngel aa41846
Merge pull request #133 from PaddlePaddle/develop
chenjiaoAngel 13a95dc
uupdate sequence_pool and sequence_conv profiler, test=develop
chenjiaoAngel 4403de0
fix format, testt=develop
chenjiaoAngel 44746a8
fix format, test=develop
chenjiaoAngel 0e9dfda
fix format test=develop
chenjiaoAngel 1207be1
fix compute error. test=develop
chenjiaoAngel 15c7f5b
fix compute error
chenjiaoAngel 24efaf4
fix compute error, test=develop
chenjiaoAngel 619ceed
pull
chenjiaoAngel 5a3f7e4
delete warning and extra info, test=develop
chenjiaoAngel 2f9f6fe
Merge pull request #135 from PaddlePaddle/develop
chenjiaoAngel 42a42d1
update sequence_conv profile
chenjiaoAngel 11caca9
add conv+conv(1x1s1p0) fusion
chenjiaoAngel 002d5d5
fix build
chenjiaoAngel 3249571
fix build
chenjiaoAngel f0dea47
fix ruun error
chenjiaoAngel cd5ea42
fix conflict
chenjiaoAngel 1aac4e4
opt sucess
chenjiaoAngel d6945e4
add note
chenjiaoAngel 4ae852f
fix run error, test=develop
chenjiaoAngel 215c7af
remove note test=develop
chenjiaoAngel 10dee5c
fix conflict
chenjiaoAngel 1822346
test=develop
chenjiaoAngel 461225e
fix format
chenjiaoAngel 1bc0042
fix conflict, test=develop
chenjiaoAngel a5ef237
fix conflict, test=develop
chenjiaoAngel cad3d8e
fix formmat. test=develop
chenjiaoAngel f6ed35f
fix formmat. test=develop
chenjiaoAngel b91e5c8
fix formmat. test=develop
chenjiaoAngel b34e43d
test=develop
chenjiaoAngel d8f2364
pull
chenjiaoAngel 5feb496
fix conflict
chenjiaoAngel fbe5eea
update priorbox profile
chenjiaoAngel abbd501
fix format. test=develop
chenjiaoAngel 8bb71ef
test=develop
chenjiaoAngel c36b11c
fix prior, test=develop
chenjiaoAngel e258de9
pull test=develop
chenjiaoAngel 30bc179
fix review. test=develop
chenjiaoAngel 5d12647
pulpulll
chenjiaoAngel 2e217f1
test=develop
chenjiaoAngel
@@ -13,6 +13,7 @@
// limitations under the License.

#include "lite/kernels/arm/prior_box_compute.h"
#include <algorithm>
#include <string>
#include <vector>
#include "lite/backends/arm/math/funcs.h"
@@ -45,10 +46,318 @@ inline void ExpandAspectRatios(const std::vector<float>& input_aspect_ratior,
    }
  }
}

const int MALLOC_ALIGN = 16;

-void PriorBoxCompute::Run() {
-  auto& param = Param<operators::PriorBoxParam>();

inline void* fast_malloc(size_t size) {
  size_t offset = sizeof(void*) + MALLOC_ALIGN - 1;
  char* p = static_cast<char*>(malloc(offset + size));
  if (!p) {
    return nullptr;
  }
  void* r = reinterpret_cast<void*>(reinterpret_cast<size_t>(p + offset) &
                                    (~(MALLOC_ALIGN - 1)));
  static_cast<void**>(r)[-1] = p;
  memset(r, 0, size);
  return r;
}

inline void fast_free(void* ptr) {
  if (ptr) {
    free(static_cast<void**>(ptr)[-1]);
  }
}

(Review comment on fast_free, translated: "This can also be deleted.")
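The fast_malloc/fast_free pair above uses a common over-allocate-and-align trick: allocate `size` plus `sizeof(void*) + MALLOC_ALIGN - 1` extra bytes, round the pointer up to the alignment boundary, and stash the raw pointer in the slot just before the returned address so the free path can recover it. A standalone sketch of the same trick (names like `aligned_malloc` are illustrative, not the Paddle-Lite API):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Over-allocate-and-align, mirroring fast_malloc/fast_free above.
constexpr size_t kAlign = 16;

void* aligned_malloc(size_t size) {
  size_t offset = sizeof(void*) + kAlign - 1;  // header slot + rounding slack
  char* p = static_cast<char*>(malloc(offset + size));
  if (!p) return nullptr;
  // Round p + offset down to a multiple of kAlign; the result is still at
  // least sizeof(void*) bytes past p, leaving room for the header slot.
  void* r = reinterpret_cast<void*>(
      reinterpret_cast<uintptr_t>(p + offset) & ~(uintptr_t(kAlign) - 1));
  static_cast<void**>(r)[-1] = p;  // remember the raw pointer just before r
  memset(r, 0, size);
  return r;
}

void aligned_free(void* ptr) {
  if (ptr) free(static_cast<void**>(ptr)[-1]);  // recover and free raw pointer
}
```

As the review thread below notes, Paddle-Lite already ships an equivalent allocator in the host target wrapper, which is why this local helper was flagged for removal.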
void density_prior_box(const lite::Tensor* input,
                       const lite::Tensor* image,
                       lite::Tensor* boxes,
                       lite::Tensor* variances,
                       const std::vector<float>& min_size_,
                       const std::vector<float>& fixed_size_,
                       const std::vector<float>& fixed_ratio_,
                       const std::vector<int>& density_size_,
                       const std::vector<float>& max_size_,
                       const std::vector<float>& aspect_ratio_,
                       const std::vector<float>& variance_,
                       int img_w_,
                       int img_h_,
                       float step_w_,
                       float step_h_,
                       float offset_,
                       int prior_num_,
                       bool is_flip_,
                       bool is_clip_,
                       const std::vector<std::string>& order_,
                       bool min_max_aspect_ratios_order) {
  // compute output shape
  int win1 = input->dims()[3];
  int hin1 = input->dims()[2];
  DDim shape_out({hin1, win1, prior_num_, 4});
  boxes->Resize(shape_out);
  variances->Resize(shape_out);

  float* _cpu_data = boxes->mutable_data<float>();
  float* _variance_data = variances->mutable_data<float>();

  const int width = win1;
  const int height = hin1;
  int img_width = img_w_;
  int img_height = img_h_;
  if (img_width == 0 || img_height == 0) {
    img_width = image->dims()[3];
    img_height = image->dims()[2];
  }
  float step_w = step_w_;
  float step_h = step_h_;
  if (step_w == 0 || step_h == 0) {
    step_w = static_cast<float>(img_width) / width;
    step_h = static_cast<float>(img_height) / height;
  }
  float offset = offset_;
  int step_average = static_cast<int>((step_w + step_h) * 0.5);  // add
  int channel_size = height * width * prior_num_ * 4;
  int idx = 0;
  for (int h = 0; h < height; ++h) {
    for (int w = 0; w < width; ++w) {
      float center_x = (w + offset) * step_w;
      float center_y = (h + offset) * step_h;
      float box_width;
      float box_height;
      if (fixed_size_.size() > 0) {
        // add
        for (int s = 0; s < fixed_size_.size(); ++s) {
          int fixed_size = fixed_size_[s];
          int com_idx = 0;
          box_width = fixed_size;
          box_height = fixed_size;

          if (fixed_ratio_.size() > 0) {
            for (int r = 0; r < fixed_ratio_.size(); ++r) {
              float ar = fixed_ratio_[r];
              int density = density_size_[s];
              int shift = step_average / density;
              float box_width_ratio = fixed_size_[s] * sqrt(ar);
              float box_height_ratio = fixed_size_[s] / sqrt(ar);

              for (int p = 0; p < density; ++p) {
                for (int c = 0; c < density; ++c) {
                  float center_x_temp =
                      center_x - step_average / 2.0f + shift / 2.f + c * shift;
                  float center_y_temp =
                      center_y - step_average / 2.0f + shift / 2.f + p * shift;
                  // xmin
                  _cpu_data[idx++] =
                      (center_x_temp - box_width_ratio / 2.f) / img_width >= 0
                          ? (center_x_temp - box_width_ratio / 2.f) / img_width
                          : 0;
                  // ymin
                  _cpu_data[idx++] =
                      (center_y_temp - box_height_ratio / 2.f) / img_height >= 0
                          ? (center_y_temp - box_height_ratio / 2.f) / img_height
                          : 0;
                  // xmax
                  _cpu_data[idx++] =
                      (center_x_temp + box_width_ratio / 2.f) / img_width <= 1
                          ? (center_x_temp + box_width_ratio / 2.f) / img_width
                          : 1;
                  // ymax
                  _cpu_data[idx++] =
                      (center_y_temp + box_height_ratio / 2.f) / img_height <= 1
                          ? (center_y_temp + box_height_ratio / 2.f) / img_height
                          : 1;
                }
              }
            }
          } else {
            // this code for density anchor box
            if (density_size_.size() > 0) {
              CHECK_EQ(fixed_size_.size(), density_size_.size())
                  << "fixed_size_ should be same with density_size_";
              int density = density_size_[s];
              int shift = fixed_size_[s] / density;

              for (int r = 0; r < density; ++r) {
                for (int c = 0; c < density; ++c) {
                  float center_x_temp =
                      center_x - fixed_size / 2.f + shift / 2.f + c * shift;
                  float center_y_temp =
                      center_y - fixed_size / 2.f + shift / 2.f + r * shift;
                  // xmin
                  _cpu_data[idx++] =
                      (center_x_temp - box_width / 2.f) / img_width >= 0
                          ? (center_x_temp - box_width / 2.f) / img_width
                          : 0;
                  // ymin
                  _cpu_data[idx++] =
                      (center_y_temp - box_height / 2.f) / img_height >= 0
                          ? (center_y_temp - box_height / 2.f) / img_height
                          : 0;
                  // xmax
                  _cpu_data[idx++] =
                      (center_x_temp + box_width / 2.f) / img_width <= 1
                          ? (center_x_temp + box_width / 2.f) / img_width
                          : 1;
                  // ymax
                  _cpu_data[idx++] =
                      (center_y_temp + box_height / 2.f) / img_height <= 1
                          ? (center_y_temp + box_height / 2.f) / img_height
                          : 1;
                }
              }
            }

            // rest of priors: will never come here!!!
            for (int r = 0; r < aspect_ratio_.size(); ++r) {
              float ar = aspect_ratio_[r];

              if (fabs(ar - 1.) < 1e-6) {
                continue;
              }

              int density = density_size_[s];
              int shift = fixed_size_[s] / density;
              float box_width_ratio = fixed_size_[s] * sqrt(ar);
              float box_height_ratio = fixed_size_[s] / sqrt(ar);

              for (int p = 0; p < density; ++p) {
                for (int c = 0; c < density; ++c) {
                  float center_x_temp =
                      center_x - fixed_size / 2.f + shift / 2.f + c * shift;
                  float center_y_temp =
                      center_y - fixed_size / 2.f + shift / 2.f + p * shift;
                  // xmin
                  _cpu_data[idx++] =
                      (center_x_temp - box_width_ratio / 2.f) / img_width >= 0
                          ? (center_x_temp - box_width_ratio / 2.f) / img_width
                          : 0;
                  // ymin
                  _cpu_data[idx++] =
                      (center_y_temp - box_height_ratio / 2.f) / img_height >= 0
                          ? (center_y_temp - box_height_ratio / 2.f) / img_height
                          : 0;
                  // xmax
                  _cpu_data[idx++] =
                      (center_x_temp + box_width_ratio / 2.f) / img_width <= 1
                          ? (center_x_temp + box_width_ratio / 2.f) / img_width
                          : 1;
                  // ymax
                  _cpu_data[idx++] =
                      (center_y_temp + box_height_ratio / 2.f) / img_height <= 1
                          ? (center_y_temp + box_height_ratio / 2.f) / img_height
                          : 1;
                }
              }
            }
          }
        }
      } else {
        float* min_buf =
            reinterpret_cast<float*>(fast_malloc(sizeof(float) * 4));
        float* max_buf =
            reinterpret_cast<float*>(fast_malloc(sizeof(float) * 4));
        float* com_buf = reinterpret_cast<float*>(
            fast_malloc(sizeof(float) * aspect_ratio_.size() * 4));

        for (int s = 0; s < min_size_.size(); ++s) {
          int min_idx = 0;
          int max_idx = 0;
          int com_idx = 0;
          int min_size = min_size_[s];
          // first prior: aspect_ratio = 1, size = min_size
          box_width = box_height = min_size;
          //! xmin
          min_buf[min_idx++] = (center_x - box_width / 2.f) / img_width;
          //! ymin
          min_buf[min_idx++] = (center_y - box_height / 2.f) / img_height;
          //! xmax
          min_buf[min_idx++] = (center_x + box_width / 2.f) / img_width;
          //! ymax
          min_buf[min_idx++] = (center_y + box_height / 2.f) / img_height;

          if (max_size_.size() > 0) {
            int max_size = max_size_[s];
            //! second prior: aspect_ratio = 1, size = sqrt(min_size * max_size)
            box_width = box_height = sqrtf(min_size * max_size);
            //! xmin
            max_buf[max_idx++] = (center_x - box_width / 2.f) / img_width;
            //! ymin
            max_buf[max_idx++] = (center_y - box_height / 2.f) / img_height;
            //! xmax
            max_buf[max_idx++] = (center_x + box_width / 2.f) / img_width;
            //! ymax
            max_buf[max_idx++] = (center_y + box_height / 2.f) / img_height;
          }

          //! rest of priors
          for (int r = 0; r < aspect_ratio_.size(); ++r) {
            float ar = aspect_ratio_[r];
            if (fabs(ar - 1.) < 1e-6) {
              continue;
            }
            box_width = min_size * sqrt(ar);
            box_height = min_size / sqrt(ar);
            //! xmin
            com_buf[com_idx++] = (center_x - box_width / 2.f) / img_width;
            //! ymin
            com_buf[com_idx++] = (center_y - box_height / 2.f) / img_height;
            //! xmax
            com_buf[com_idx++] = (center_x + box_width / 2.f) / img_width;
            //! ymax
            com_buf[com_idx++] = (center_y + box_height / 2.f) / img_height;
          }
          if (min_max_aspect_ratios_order) {
            memcpy(_cpu_data + idx, min_buf, sizeof(float) * min_idx);
            idx += min_idx;
            memcpy(_cpu_data + idx, max_buf, sizeof(float) * max_idx);
            idx += max_idx;
            memcpy(_cpu_data + idx, com_buf, sizeof(float) * com_idx);
            idx += com_idx;
          } else {
            memcpy(_cpu_data + idx, min_buf, sizeof(float) * min_idx);
            idx += min_idx;
            memcpy(_cpu_data + idx, com_buf, sizeof(float) * com_idx);
            idx += com_idx;
            memcpy(_cpu_data + idx, max_buf, sizeof(float) * max_idx);
            idx += max_idx;
          }
        }
        fast_free(min_buf);
        fast_free(max_buf);
        fast_free(com_buf);
      }
    }
  }
  //! clip the prior's coordinate such that it is within [0, 1]
  if (is_clip_) {
    for (int d = 0; d < channel_size; ++d) {
      _cpu_data[d] = std::min(std::max(_cpu_data[d], 0.f), 1.f);
    }
  }
  //! set the variance.
  int count = 0;
  for (int h = 0; h < height; ++h) {
    for (int w = 0; w < width; ++w) {
      for (int i = 0; i < prior_num_; ++i) {
        for (int j = 0; j < 4; ++j) {
          _variance_data[count] = variance_[j];
          ++count;
        }
      }
    }
  }
}
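In the min/max branch above, each feature-map location emits one prior per min size, one prior per aspect ratio different from 1 (for each min size), and one prior per max size; every prior occupies 4 floats, and the output tensors are shaped {H, W, prior_num, 4}. A small sketch of that bookkeeping (the helpers below are hypothetical, written to mirror `prior_num += max_size.size();` in this diff, not the Paddle-Lite API):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Priors per feature-map location in the min/max-size branch: one base box
// per min size, one per aspect ratio that differs from 1 (for each min size),
// plus one per max size. Ratios with ar == 1 are skipped because the base
// box already covers them, matching the fabs(ar - 1.) < 1e-6 check above.
size_t priors_per_location(const std::vector<float>& min_sizes,
                           const std::vector<float>& max_sizes,
                           const std::vector<float>& aspect_ratios) {
  size_t extra_ratios = 0;
  for (float ar : aspect_ratios) {
    if (std::fabs(ar - 1.f) >= 1e-6f) ++extra_ratios;
  }
  return min_sizes.size() * (1 + extra_ratios) + max_sizes.size();
}

// Total floats written to the boxes tensor, i.e. the channel_size used by
// the clipping loop: H * W * prior_num * 4.
size_t boxes_tensor_size(int height, int width, size_t prior_num) {
  return static_cast<size_t>(height) * width * prior_num * 4;
}
```

Note that the real kernel first runs ExpandAspectRatios (which can add flipped ratios when is_flip_ is set), so the ratio list passed in here is the already-expanded one.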
void PriorBoxCompute::ReInitWhenNeeded() {
  auto& param = this->template Param<param_t>();
  auto input_dims = param.input->dims();
  auto image_dims = param.image->dims();
  if (last_input_shape_ == input_dims && last_image_shape_ == image_dims) {
    return;
  }
  bool is_flip = param.flip;
  bool is_clip = param.clip;
  std::vector<float> min_size = param.min_sizes;

@@ -66,25 +375,35 @@ void PriorBoxCompute::Run() {
  prior_num += max_size.size();
  std::vector<std::string> order = param.order;
  bool min_max_aspect_ratios_order = param.min_max_aspect_ratios_order;
  density_prior_box(param.input,
                    param.image,
                    &boxes_tmp_,
                    &variances_tmp_,
                    min_size,
                    std::vector<float>(),
                    std::vector<float>(),
                    std::vector<int>(),
                    max_size,
                    aspect_ratios_vec,
                    variance,
                    img_w,
                    img_h,
                    step_w,
                    step_h,
                    offset,
                    prior_num,
                    is_flip,
                    is_clip,
                    order,
                    min_max_aspect_ratios_order);
  last_input_shape_ = input_dims;
  last_image_shape_ = image_dims;
}
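The change above moves the heavy box generation out of the per-inference path: ReInitWhenNeeded computes the priors into boxes_tmp_/variances_tmp_ and recomputes them only when the input or image shape changes, while Run() just copies the cached tensors. Reduced to a standalone sketch (the class and members below are illustrative, not the actual Paddle-Lite types):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Minimal sketch of the recompute-only-on-shape-change pattern used by
// PriorBoxCompute::ReInitWhenNeeded/Run. Names are hypothetical.
class CachedPriorBox {
 public:
  // Expensive part: rebuild the cached result only if the shape changed.
  void ReInitWhenNeeded(std::pair<int, int> feature_shape) {
    if (feature_shape == last_shape_) return;  // cache hit: skip recompute
    cache_.assign(feature_shape.first * feature_shape.second, 0.5f);
    ++recompute_count_;
    last_shape_ = feature_shape;
  }

  // Cheap part: every invocation just copies the cached result out,
  // mirroring param.boxes->CopyDataFrom(boxes_tmp_) in the diff.
  std::vector<float> Run() const { return cache_; }

  int recompute_count() const { return recompute_count_; }

 private:
  std::pair<int, int> last_shape_{-1, -1};  // sentinel: never matches a real shape
  std::vector<float> cache_;
  int recompute_count_ = 0;
};
```

This trades one extra tensor copy per Run() for skipping the full prior-box computation whenever consecutive inputs share a shape, which is the common case for fixed-resolution detection models.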
-  lite::arm::math::prior_box(param.input,
-                             param.image,
-                             &param.boxes,
-                             &param.variances,
-                             min_size,
-                             max_size,
-                             aspect_ratios_vec,
-                             variance,
-                             img_w,
-                             img_h,
-                             step_w,
-                             step_h,
-                             offset,
-                             prior_num,
-                             is_flip,
-                             is_clip,
-                             order,
-                             min_max_aspect_ratios_order);

void PriorBoxCompute::Run() {
  auto& param = this->template Param<param_t>();
  param.boxes->CopyDataFrom(boxes_tmp_);
  param.variances->CopyDataFrom(variances_tmp_);
}

}  // namespace arm
Review comment (translated): https://github.com/PaddlePaddle/Paddle-Lite/blob/develop/lite/backends/host/target_wrapper.cc already implements this — can it be reused?

Reply (translated): Yes, that works; I'll change it over shortly.