
Commit 0175219

remove fold_const param
Signed-off-by: hwangdeyu <[email protected]>
1 parent bf4a22d commit 0175219

10 files changed, +16 -24 lines changed

README.md

Lines changed: 0 additions & 4 deletions

@@ -140,7 +140,6 @@ python -m tf2onnx.convert
 [--concrete_function CONCRETE_FUNCTION]
 [--target TARGET]
 [--custom-ops list-of-custom-ops]
-[--fold_const]
 [--large_model]
 [--continue_on_error]
 [--verbose]
@@ -230,9 +229,6 @@ will be used.
 
 Some models require special handling to run on some runtimes. In particular, the model may use unsupported data types. Workarounds are activated with ```--target TARGET```. Currently supported values are listed on this [wiki](https://github.com/onnx/tensorflow-onnx/wiki/target). If your model will be run on Windows ML, you should specify the appropriate target value.
 
-#### --fold_const
-
-Deprecated.
 
 ### <a name="summarize_graph"></a>Tool to get Graph Inputs & Outputs
 

Troubleshooting.md

Lines changed: 1 addition & 1 deletion

@@ -33,6 +33,6 @@ The reason for this is that there is a dynamic input of a tensorflow op but the
 
 An example of this is the [ONNX Slice operator before opset-10](https://github.com/onnx/onnx/blob/master/docs/Changelog.md#Slice-1) - the start and end of the slice are static attributes that need to be known at graph creation. In tensorflow the [strided slice op](https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/strided-slice) allows dynamic inputs. tf2onnx will try to find the real value of begin and end of the slice and can find them in most cases. But if those are real dynamic values calculate at runtime it will result in the message ```get tensor value: ... must be Const```.
 
-You can pass the options ```--fold_const``` in the tf2onnx command line that allows tf2onnx to apply more aggressive constant folding which will increase chances to find a constant.
+You can pass the options ```--fold_const(deprecated after tf2onnx-1.9.3)``` in the tf2onnx command line that allows tf2onnx to apply more aggressive constant folding which will increase chances to find a constant.
 
 If this doesn't work the model is most likely not to be able to convert to ONNX. We used to see this a lot of issue with the ONNX Slice op and in opset-10 was updated for exactly this reason.
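For illustration, a minimal sketch of the pattern that paragraph describes, assuming TF 2.x graph mode; the tensor names and shapes are made up. The slice end depends on a runtime value, so it cannot be folded to a constant, and conversion to the pre-opset-10 Slice op fails with the ```get tensor value: ... must be Const``` message.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Slice bounds computed from a runtime tensor: grappler cannot fold them
# into constants, which is what the pre-opset-10 Slice conversion needs.
x = tf.compat.v1.placeholder(tf.float32, shape=[None, 10], name="input")
n = tf.shape(x)[0]                              # known only at run time
y = tf.strided_slice(x, [0, 0], [n, 5], name="output")
```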

examples/rnn_tips.md

Lines changed: 2 additions & 2 deletions

@@ -16,7 +16,7 @@ For other advanced RNN cells, it is supposed to good to convert as well, but the
 Use following commands to have a quick trial on your model:
 
 ```
-python -m tf2onnx.convert --input frozen_rnn_model.pb --inputs input1:0,input2:0 --outputs output1:0,output2:0 --fold_const --opset 8 --output target.onnx --continue_on_error
+python -m tf2onnx.convert --input frozen_rnn_model.pb --inputs input1:0,input2:0 --outputs output1:0,output2:0 --opset 8 --output target.onnx --continue_on_error
 ```
 
 ## Limitation
@@ -36,7 +36,7 @@ Use [onnxruntime](https://github.com/Microsoft/onnxruntime) or [caffe2](https://
 There is a simpler way to run your models and test its correctness (compared with TensorFlow run) using following command.
 
 ```
-python tests\run_pretrained_models.py --backend onnxruntime --config rnn.yaml --tests model_name --fold_const --onnx-file ".\tmp" --opset 8
+python tests\run_pretrained_models.py --backend onnxruntime --config rnn.yaml --tests model_name --onnx-file ".\tmp" --opset 8
 ```
 
 The content of rnn.yaml looks as below. For inputs, an explicit numpy expression or a shape can be used. If a shape is specified, the value will be randomly generated.

tests/backend_test_base.py

Lines changed: 5 additions & 5 deletions

@@ -133,7 +133,7 @@ def assert_results_equal(self, expected, actual, rtol, atol, mtol=None,
         if check_shape:
             self.assertEqual(expected_val.shape, actual_val.shape)
 
-    def freeze_and_run_tf(self, func, feed_dict, outputs, as_session, premade_placeholders, large_model, constant_fold):
+    def freeze_and_run_tf(self, func, feed_dict, outputs, as_session, premade_placeholders, large_model):
         np.random.seed(1)  # Make it reproducible.
         clean_feed_dict = {utils.node_name(k): v for k, v in feed_dict.items()}
         if is_tf2() and not as_session:
@@ -195,7 +195,7 @@ def freeze_and_run_tf(self, func, feed_dict, outputs, as_session, premade_placeholders, large_model, constant_fold):
         tf_reset_default_graph()
         with tf_session() as sess:
             tf.import_graph_def(graph_def, name='')
-        graph_def = tf_optimize(list(feed_dict.keys()), outputs, graph_def, fold_constant=constant_fold)
+        graph_def = tf_optimize(list(feed_dict.keys()), outputs, graph_def)
 
         return result, graph_def, initialized_tables
 
@@ -331,8 +331,8 @@ def get_dtype(info):
             self.assertEqual(get_dtype(info), graph.get_dtype(info.name))
 
     def run_test_case(self, func, feed_dict, input_names_with_port, output_names_with_port,
-                      rtol=1e-07, atol=1e-5, mtol=None, convert_var_to_const=True, constant_fold=True,
-                      check_value=True, check_shape=True, check_dtype=True, process_args=None, onnx_feed_dict=None,
+                      rtol=1e-07, atol=1e-5, mtol=None, convert_var_to_const=True, check_value=True,
+                      check_shape=True, check_dtype=True, process_args=None, onnx_feed_dict=None,
                       graph_validator=None, as_session=False, large_model=False, premade_placeholders=False,
                       use_custom_ops=False, optimize=True):
         """
@@ -361,7 +361,7 @@ def run_test_case(self, func, feed_dict, input_names_with_port, output_names_with_port,
 
         expected, graph_def, initialized_tables = \
             self.freeze_and_run_tf(func, feed_dict, output_names_with_port, as_session,
-                                   premade_placeholders, large_model, constant_fold)
+                                   premade_placeholders, large_model)
 
         graph_def_path = os.path.join(self.test_data_directory, self._testMethodName + "_after_tf_optimize.pb")
         utils.save_protobuf(graph_def_path, graph_def)
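For orientation, a hypothetical test written against the updated signature; the class, tensor names, and values are illustrative, and the real call sites are updated in the test files below. The former constant_fold keyword is simply dropped, since tf_optimize now always folds constants.

```python
import numpy as np
import tensorflow as tf

# Import path as used by the existing test modules (assumption).
from backend_test_base import Tf2OnnxBackendTestBase

class ExampleBackendTests(Tf2OnnxBackendTestBase):
    def test_add_one(self):
        x_val = np.array([1.0, 2.0, 3.0], dtype=np.float32)

        def func(x):
            return tf.add(x, 1.0, name="output")

        # No constant_fold=... argument any more; folding is always on.
        self.run_test_case(func, {"input_1:0": x_val}, ["input_1:0"], ["output:0"])
```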

tests/test_backend.py

Lines changed: 2 additions & 3 deletions

@@ -174,7 +174,6 @@ def get_maxpoolwithargmax_getdata():
 class BackendTests(Tf2OnnxBackendTestBase):
     def _run_test_case(self, func, output_names_with_port, feed_dict, **kwargs):
         kwargs["convert_var_to_const"] = False
-        kwargs["constant_fold"] = False
         return self.run_test_case(func, feed_dict, [], output_names_with_port, **kwargs)
 
     def _test_expand_dims_known_rank(self, idx):
@@ -709,7 +708,7 @@ def func(x):
         feed_dict = {"input_1:0": x_val}
         input_names_with_port = ["input_1:0"]
         output_names_with_port = ["output:0"]
-        self.run_test_case(func, feed_dict, input_names_with_port, output_names_with_port, constant_fold=False,
+        self.run_test_case(func, feed_dict, input_names_with_port, output_names_with_port,
                            graph_validator=lambda g: (check_op_count(g, "RandomUniform", 0) and
                                                       check_op_count(g, "RandomUniformLike", 0)))
 
@@ -5229,7 +5228,7 @@ def func(query_holder):
             lookup_results = hash_table.lookup(query_holder)
             ret = tf.add(lookup_results, 0, name=_TFOUTPUT)
             return ret
-        self._run_test_case(func, [_OUTPUT], {_INPUT: query}, constant_fold=False, as_session=True)
+        self._run_test_case(func, [_OUTPUT], {_INPUT: query}, as_session=True)
         os.remove(filnm)
 
     @check_opset_min_version(8, "CategoryMapper")

tests/test_const_fold.py

Lines changed: 0 additions & 1 deletion

@@ -16,7 +16,6 @@
 class ConstantFoldingTests(Tf2OnnxBackendTestBase):
     def _run_test_case(self, func, output_names_with_port, feed_dict, **kwargs):
         kwargs["convert_var_to_const"] = False
-        kwargs["constant_fold"] = False
         return self.run_test_case(func, feed_dict, [], output_names_with_port, **kwargs)
 
     def test_concat(self):

tests/test_string_ops.py

Lines changed: 1 addition & 1 deletion

@@ -167,7 +167,7 @@ def func(text):
             return tokens_, begin_, end_, rows_
         # Fails due to Attempting to capture an EagerTensor without building a function.
         self._run_test_case(func, [_OUTPUT, _OUTPUT1, _OUTPUT2, _OUTPUT3],
-                            {_INPUT: text_val}, constant_fold=False, as_session=True)
+                            {_INPUT: text_val}, as_session=True)
 
 
 if __name__ == "__main__":

tf2onnx/convert.py

Lines changed: 1 addition & 2 deletions

@@ -83,8 +83,7 @@ def get_args():
     parser.add_argument("--verbose", "-v", help="verbose output, option is additive", action="count")
     parser.add_argument("--debug", help="debug mode", action="store_true")
     parser.add_argument("--output_frozen_graph", help="output frozen tf graph to file")
-    parser.add_argument("--fold_const", help="Deprecated. Constant folding is always enabled.",
-                        action="store_true")
+
     # experimental
     parser.add_argument("--inputs-as-nchw", help="transpose inputs as from nhwc to nchw")
     args = parser.parse_args()

tf2onnx/rewriter/random_uniform.py

Lines changed: 0 additions & 1 deletion

@@ -39,7 +39,6 @@ def rewrite_random_uniform(g, ops):
     return ops
 
 
-# rewriter function when fold_const is enabled
 def rewrite_random_uniform_fold_const(g, ops):
     pattern = \
         OpTypePattern('Add', name='output', inputs=[

tf2onnx/tf_loader.py

Lines changed: 4 additions & 4 deletions

@@ -673,15 +673,15 @@ def from_keras(model_path, input_names, output_names):
     return frozen_graph, input_names, output_names
 
 
-def tf_optimize_grappler(input_names, output_names, graph_def, fold_constant=None):
+def tf_optimize_grappler(input_names, output_names, graph_def):
     from tensorflow.core.protobuf import meta_graph_pb2 as meta_graph_pb2, config_pb2, rewriter_config_pb2
     from tensorflow.python.grappler import tf_optimizer as tf_opt
 
     config = config_pb2.ConfigProto()
     rewrite_options = config.graph_options.rewrite_options
     config.graph_options.infer_shapes = True
     # TODO: if we turn on pruning, grappler removes some identities that the tf-1.x lstm rewriter
-    # depends on so for now don't turn this on, fold_constant is always enabled now.
+    # depends on so for now don't turn this on, constfold is always enabled now.
     rewrite_options.optimizers[:] = [
         # 'pruning', 'constfold', 'arithmetic', 'dependency', 'function',
         'constfold', 'function'
@@ -700,7 +700,7 @@ def tf_optimize_grappler(input_names, output_names, graph_def, fold_constant=None):
     return graph_def
 
 
-def tf_optimize(input_names, output_names, graph_def, fold_constant=True):
+def tf_optimize(input_names, output_names, graph_def):
     """Extract inference subgraph and optimize graph."""
     assert isinstance(input_names, list)
     assert isinstance(output_names, list)
@@ -712,7 +712,7 @@ def tf_optimize(input_names, output_names, graph_def, fold_constant=True):
 
     want_grappler = is_tf2() or LooseVersion(tf.__version__) >= "1.15"
     if want_grappler:
-        graph_def = tf_optimize_grappler(input_names, output_names, graph_def, fold_constant)
+        graph_def = tf_optimize_grappler(input_names, output_names, graph_def)
     else:
         # the older transform path
         from tensorflow.tools.graph_transforms import TransformGraph  # pylint: disable=redefined-outer-name
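For downstream callers, a minimal sketch of the updated entry point, assuming the usual module path and an already-frozen GraphDef; graph and tensor names are illustrative. The fold_constant argument is gone because constant folding is always part of the optimization pass.

```python
import tensorflow as tf
from tf2onnx.tf_loader import tf_optimize

# Build a trivial frozen graph just to have a GraphDef to pass in.
with tf.Graph().as_default() as g:
    x = tf.compat.v1.placeholder(tf.float32, [2], name="input")
    tf.identity(x + 1.0, name="output")
graph_def = g.as_graph_def()

# New signature: no fold_constant flag; constant folding always runs.
optimized_graph_def = tf_optimize(["input:0"], ["output:0"], graph_def)
```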

0 commit comments
