Merged
4 changes: 2 additions & 2 deletions python/paddle/base/executor.py
Original file line number Diff line number Diff line change
@@ -148,7 +148,7 @@ def scope_guard(scope: core._Scope) -> Generator[None, None, None]:
def as_numpy(tensor, copy=False):
"""
Convert a Tensor to a numpy.ndarray, its only support Tensor without LoD information.
For higher dimensional sequence data, please use LoDTensor directly.
For higher dimensional sequence data, please use DenseTensor directly.

Examples:
.. code-block:: python
@@ -718,7 +718,7 @@ def _get_program_cache_key(feed, fetch_list):
def _as_lodtensor(data, place, dtype=None):
"""
Convert numpy.ndarray to Tensor, its only support Tensor without LoD information.
For higher dimensional sequence data, please use LoDTensor directly.
For higher dimensional sequence data, please use DenseTensor directly.

Examples:

26 changes: 13 additions & 13 deletions python/paddle/base/lod_tensor.py
@@ -22,7 +22,7 @@

def create_lod_tensor(data, recursive_seq_lens, place):
"""
Create a LoDTensor from a numpy array, list or existing LoDTensor.
Create a DenseTensor from a numpy array, list or existing LoDTensor.

The implementation is as follows:

@@ -32,13 +32,13 @@ def create_lod_tensor(data, recursive_seq_lens, place):
2. Convert :code:`recursive_seq_lens` to a offset-based LoD.

3. Based on :code:`place` , copy the :code:`data` from a numpy array, list
or existing LoDTensor to CPU or GPU device.
or existing DenseTensor to CPU or GPU device.

4. Set offset-based LoD to the output LoDTensor.

Suppose we want to create a LoDTensor to hold data for word sequences,
Suppose we want to create a DenseTensor to hold data for word sequences,
where each word is represented by an integer. If we want to create
a LoDTensor to represent two sentences, one of 2 words, and one of 3 words.
a DenseTensor to represent two sentences, one of 2 words, and one of 3 words.

Then :code:`data` would be a numpy array of integers with shape (5, 1).
:code:`recursive_seq_lens` would be [[2, 3]], indicating the word number
@@ -53,10 +53,10 @@ def create_lod_tensor(data, recursive_seq_lens, place):
recursive_seq_lens (list[list[int]]): a list of lists indicating the
length-based LoD info.
place (CPUPlace|CUDAPlace): CPU or GPU place indicating where the data
in the created LoDTensor will be stored.
in the created DenseTensor will be stored.

Returns:
A LoDTensor with tensor data and recursive_seq_lens info.
A DenseTensor with tensor data and recursive_seq_lens info.

Examples:

@@ -118,11 +118,11 @@ def create_random_int_lodtensor(
"""
:api_attr: Static Graph

Create a LoDTensor containing random integers.
Create a DenseTensor containing random integers.

The implementation is as follows:

1. Obtain the shape of output LoDTensor based on :code:`recursive_seq_lens`
1. Obtain the shape of output DenseTensor based on :code:`recursive_seq_lens`
and :code:`base_shape` . The first dimension of the shape is the total
length of sequences, while the other dimensions are the same as
:code:`base_shape` .
@@ -131,26 +131,26 @@ def create_random_int_lodtensor(
array as parameter :code:`data` of :ref:`api_paddle_base_create_lod_tensor` to
create the output LoDTensor.

Suppose we want to create a LoDTensor to hold data for 2 sequences, where
Suppose we want to create a DenseTensor to hold data for 2 sequences, where
the dimension of the sequences are [2, 30] and [3, 30] respectively.
The :code:`recursive_seq_lens` would be [[2, 3]], and :code:`base_shape`
would be [30] (the other dimensions excluding the sequence length).
Therefore, the shape of the output LoDTensor would be [5, 30], where
Therefore, the shape of the output DenseTensor would be [5, 30], where
the first dimension 5 is the total lengths of the sequences, and the
other dimensions are :code:`base_shape`.

Args:
recursive_seq_lens (list[list[int]]): a list of lists indicating the
length-based LoD info.
base_shape (list[int]): the shape of the output LoDTensor excluding
base_shape (list[int]): the shape of the output DenseTensor excluding
the first dimension.
place (CPUPlace|CUDAPlace): CPU or GPU place indicating where
the data in the created LoDTensor will be stored.
the data in the created DenseTensor will be stored.
low (int): the lower bound of the random integers.
high (int): the upper bound of the random integers.

Returns:
A LoDTensor with tensor data and recursive_seq_lens info, whose data
A DenseTensor with tensor data and recursive_seq_lens info, whose data
is inside [low, high].

Examples:
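The `lod_tensor.py` docstrings above describe two bookkeeping steps: converting a length-based `recursive_seq_lens` to an offset-based LoD, and deriving the output shape from the sequence lengths plus `base_shape`. A minimal pure-Python sketch of those two steps (the helper names here are illustrative, not Paddle API):

```python
def lengths_to_offsets(recursive_seq_lens):
    # Length-based LoD, e.g. [[2, 3]], becomes offset-based LoD, e.g. [[0, 2, 5]]:
    # each offset is the running sum of the sequence lengths at that level.
    offset_lod = []
    for level in recursive_seq_lens:
        offsets = [0]
        for length in level:
            offsets.append(offsets[-1] + length)
        offset_lod.append(offsets)
    return offset_lod


def output_shape(recursive_seq_lens, base_shape):
    # The first dimension is the total length of all sequences at the
    # innermost level; the remaining dimensions follow base_shape.
    return [sum(recursive_seq_lens[-1])] + list(base_shape)


print(lengths_to_offsets([[2, 3]]))   # [[0, 2, 5]]
print(output_shape([[2, 3]], [30]))   # [5, 30]
```

This matches the worked example in the docstring: two sentences of 2 and 3 words give a total first dimension of 5, and `base_shape=[30]` yields an output shape of `[5, 30]`.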
2 changes: 1 addition & 1 deletion python/paddle/base/multiprocess_utils.py
@@ -24,7 +24,7 @@
MP_STATUS_CHECK_INTERVAL = 5.0

# NOTE: [ mmap files clear ] If there is still data in the multiprocess queue when the main process finishes reading,
# the data in the queue needs to be popped. Then the LoDTensor read by the main process
# the data in the queue needs to be popped. Then the DenseTensor read by the main process
# from the child process will automatically clear the memory-mapped file.
multiprocess_queue_set = set()

2 changes: 1 addition & 1 deletion python/paddle/framework/io.py
@@ -556,7 +556,7 @@ def _transformed_from_varbase(obj):


def _transformed_from_lodtensor(obj):
# In paddle2.1 version, LoDTensor is saved as np.array(tensor).
# In paddle2.1 version, DenseTensor is saved as np.array(tensor).
# When executing paddle.load, use this function to determine whether to restore to Tensor/LoDTensor.
if isinstance(obj, np.ndarray):
return True
10 changes: 5 additions & 5 deletions python/paddle/hapi/model.py
@@ -568,7 +568,7 @@ def _run(self, inputs, labels=None):
if len(name) > 0:
rets.insert(i, feed[name])

# LoDTensor cannot be fetch as numpy directly
# DenseTensor cannot be fetch as numpy directly
rets = [np.array(v) for v in rets]
if self.mode == 'test':
return rets[:]
@@ -1001,7 +1001,7 @@ def _run(self, inputs, labels=None):
if len(name) > 0:
rets.insert(i, feed[name])

# LoDTensor cannot be fetch as numpy directly
# DenseTensor cannot be fetch as numpy directly
rets = [np.array(v) for v in rets]
if self.mode == 'test':
return rets[:]
@@ -2615,8 +2615,8 @@ def predict(
field of a sample is in shape [X, Y], test_data contains N samples, predict
output field will be in shape [N, X, Y] if stack_output is True, and will
be a length N list in shape [[X, Y], [X, Y], ..., [X, Y]] if stack_outputs
is False. stack_outputs as False is used for LoDTensor output situation,
it is recommended set as True if outputs contains no LoDTensor. Default: False.
is False. stack_outputs as False is used for DenseTensor output situation,
it is recommended set as True if outputs contains no DenseTensor. Default: False.
verbose (int, optional): The verbosity mode, should be 0, 1, or 2. 0 = silent,
1 = progress bar, 2 = one line per batch. Default: 1.
callbacks(Sequence[Callback]|Callback|None, optional): A Callback instance, Default: None.
@@ -2790,7 +2790,7 @@ def _run_one_epoch(
# ([input1, input2, ...], [label1, label2, ...])
# To handle all of these, flatten (nested) list to list.
data = paddle.utils.flatten(data)
# LoDTensor.shape is callable, where LoDTensor comes from
# DenseTensor.shape is callable, where DenseTensor comes from
# DataLoader in static graph

batch_size = (
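The `predict` docstring above distinguishes `stack_outputs=True` (uniform per-sample outputs stacked into one `[N, X, Y]` array) from `stack_outputs=False` (a length-N list of `[X, Y]` arrays, needed when shapes vary per sample). A small numpy sketch of that collection step — the function name is hypothetical, not part of `paddle.Model`:

```python
import numpy as np


def collect_predict_outputs(samples, stack_outputs=False):
    # samples: list of N per-sample outputs, each of shape [X, Y].
    if stack_outputs:
        # Requires every sample to share the same shape.
        return np.stack(samples)                 # one array of shape [N, X, Y]
    return [np.asarray(s) for s in samples]      # length-N list of [X, Y] arrays


samples = [np.zeros((2, 3)) for _ in range(4)]
stacked = collect_predict_outputs(samples, stack_outputs=True)
listed = collect_predict_outputs(samples, stack_outputs=False)
# stacked.shape is (4, 2, 3); listed is a list of 4 arrays of shape (2, 3)
```

This is why the docstring recommends `stack_outputs=False` when outputs may be variable-length: `np.stack` would fail on ragged shapes.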
10 changes: 5 additions & 5 deletions python/paddle/incubate/layers/nn.py
@@ -59,7 +59,7 @@ def fused_seqpool_cvm(

This OP is the fusion of sequence_pool and continuous_value_model op.

**Note:** The Op only receives List of LoDTensor as input, only support SUM pooling now.
**Note:** The Op only receives List of DenseTensor as input, only support SUM pooling now.

Args:
input(Tensor): Input is List of LoDTensor.
@@ -181,7 +181,7 @@ def search_pyramid_hash(
dtype (str, optional): The data type of output Tensor, float32. Default: float32.

Returns:
Tensor: LoDTensor of pyramid hash embedding.
Tensor: DenseTensor of pyramid hash embedding.
"""
helper = LayerHelper('search_pyramid_hash', **locals())

@@ -275,7 +275,7 @@ def shuffle_batch(x: Tensor, seed: int | Tensor | None = None) -> Tensor:
"""
This layer shuffle input tensor :attr:`x` . Normally, :attr:`x` is 2-D LoDTensor.

:attr:`x` is a LoDTensor to be shuffled with shape :math:`[N_1, N_2, ..., N_k, D]` . Note that the last dim of input will not be shuffled.
:attr:`x` is a DenseTensor to be shuffled with shape :math:`[N_1, N_2, ..., N_k, D]` . Note that the last dim of input will not be shuffled.
:math:`N_1 * N_2 * ... * N_k` numbers of elements with length :math:`D` will be shuffled randomly.

Examples:
@@ -294,12 +294,12 @@ def shuffle_batch(x: Tensor, seed: int | Tensor | None = None) -> Tensor:
Out.dims = [4, 2]

Args:
x (Tensor): The input Tensor. The input Tensor is a N-D LoDTensor with type int, float32 or float64.
x (Tensor): The input Tensor. The input Tensor is a N-D DenseTensor with type int, float32 or float64.
seed (None|int|Tensor, optional): The start up seed. If set, seed will be set as the start up seed of shuffle engine.
If not set(Default), start up seed of shuffle engine will be generated randomly. Default: None.

Returns:
Tensor: The shuffled LoDTensor with the same shape and lod as input.
Tensor: The shuffled DenseTensor with the same shape and lod as input.

Examples:

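The `shuffle_batch` docstring above says the input is viewed as `[N_1 * N_2 * ... * N_k, D]` and only the rows are permuted; the last dimension stays intact. A numpy sketch of that semantics (a simplified stand-in, not the Paddle op itself):

```python
import numpy as np


def shuffle_batch_sketch(x, seed=None):
    # Flatten all but the last dim to rows of length D, permute the rows,
    # then restore the original shape. Elements within a row never move.
    rng = np.random.default_rng(seed)
    flat = x.reshape(-1, x.shape[-1])
    perm = rng.permutation(flat.shape[0])
    return flat[perm].reshape(x.shape)


x = np.array([[1, 2], [3, 4], [5, 6], [7, 8]], dtype=np.float32)
out = shuffle_batch_sketch(x, seed=0)
# out has the same shape and the same set of rows as x, in shuffled order
```

This mirrors the docstring's example: with `x.dims = [4, 2]`, the four length-2 rows are reordered but each pair stays together.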
2 changes: 1 addition & 1 deletion python/paddle/io/dataloader/dataloader_iter.py
@@ -632,7 +632,7 @@ def _thread_loop(self, legacy_expected_place):
for tensor in batch:
array.append(tensor)
else:
# LoDTensor not in shared memory is not
# DenseTensor not in shared memory is not
# serializable, cannot be create in workers
for slot in batch:
if isinstance(slot, paddle.Tensor):
2 changes: 1 addition & 1 deletion python/paddle/io/multiprocess_utils.py
@@ -24,7 +24,7 @@
MP_STATUS_CHECK_INTERVAL = 5.0

# NOTE: [ mmap files clear ] If there is still data in the multiprocess queue when the main process finishes reading,
# the data in the queue needs to be popped. Then the LoDTensor read by the main process
# the data in the queue needs to be popped. Then the DenseTensor read by the main process
# from the child process will automatically clear the memory-mapped file.
multiprocess_queue_set = set()

2 changes: 1 addition & 1 deletion python/paddle/jit/dy2static/partial_program.py
@@ -1077,7 +1077,7 @@ def _set_grad_type(self, params, train_program):
# will be SelectedRows, not LoDTensor. But tracer will just
# set param grad Tensor by forward Tensor(LoDTensor)
# If we don't change grad_var type here, RunProgramOp need
# transform SelectedRows to LoDTensor forcibly, it may not
# transform SelectedRows to DenseTensor forcibly, it may not
# be user wanted result.
for param in params:
grad_name = param.name + core.grad_var_suffix()
2 changes: 1 addition & 1 deletion python/paddle/jit/dy2static/pir_partial_program.py
@@ -1208,7 +1208,7 @@ def _set_grad_type(self, params, train_program: RunnableProgram):
# will be SelectedRows, not LoDTensor. But tracer will just
# set param grad Tensor by forward Tensor(LoDTensor)
# If we don't change grad_var type here, RunProgramOp need
# transform SelectedRows to LoDTensor forcibly, it may not
# transform SelectedRows to DenseTensor forcibly, it may not
# be user wanted result.
forward_params_grads = train_program.param_grad_values
train_program = train_program.program
2 changes: 1 addition & 1 deletion python/paddle/jit/pir_translated_layer.py
@@ -165,7 +165,7 @@ def persistable_vars(self):
# The variable/parameter of the dynamic graph is not in the scope, so before the op
# executes the program internally, create persistent variables with the
# same name as feed, parameters, and fetch in the scope, and share the
# LoDTensor of the op input.
# DenseTensor of the op input.
#
# 2. Forward and Backward Separation:
# Because the dynamic graph op performs the forward and backward separately,
4 changes: 2 additions & 2 deletions python/paddle/jit/translated_layer.py
@@ -638,7 +638,7 @@ def _append_backward_desc(self, infer_program_desc):
# The variable/parameter of the dynamic graph is not in the scope, so before the op
# executes the program internally, create persistent variables with the
# same name as feed, parameters, and fetch in the scope, and share the
# LoDTensor of the op input.
# DenseTensor of the op input.
#
# 2. Forward and Backward Separation:
# Because the dynamic graph op performs the forward and backward separately,
@@ -1011,7 +1011,7 @@ def _run_dygraph(instance, input, program_holder):
# will be SelectedRows, not LoDTensor. But tracer will just
# set param grad Tensor by forward Tensor(LoDTensor)
# If we don't change grad_var type here, RunProgramOp need
# transform SelectedRows to LoDTensor forcibly, it may not
# transform SelectedRows to DenseTensor forcibly, it may not
# be user wanted result.
for persistable_var in persistable_vars:
grad_var_name = persistable_var.name + core.grad_var_suffix()
2 changes: 1 addition & 1 deletion python/paddle/nn/clip.py
@@ -172,7 +172,7 @@ def get_tensor_from_selected_rows(x, name=None):
For more information, please refer to :ref:`api_guide_Name` .

Returns:
Variable: LoDTensor transformed from SelectedRows. The data type is same with input.
Variable: DenseTensor transformed from SelectedRows. The data type is same with input.

Examples:
.. code-block:: python
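Several hunks in this PR (here and in `partial_program.py` / `translated_layer.py`) concern converting SelectedRows — a sparse pairing of row indices and row values — into a dense tensor. A numpy sketch of that conversion under the simplifying assumption that row indices are unique (the function name is illustrative, not Paddle's implementation):

```python
import numpy as np


def selected_rows_to_dense(rows, values, height):
    # SelectedRows stores only the touched rows: `rows` are the indices into a
    # logical [height, D] tensor, `values` the corresponding [len(rows), D] data.
    # Scatter them into a zero-filled dense tensor. Assumes unique row indices;
    # a full implementation would accumulate duplicates.
    dense = np.zeros((height, values.shape[1]), dtype=values.dtype)
    dense[rows] = values
    return dense


rows = [0, 2]
values = np.ones((2, 4), dtype=np.float32)
dense = selected_rows_to_dense(rows, values, height=5)
# rows 0 and 2 of `dense` are ones; rows 1, 3, 4 remain zero
```

This is the forced transformation the surrounding comments warn about: it materializes every untouched row as zeros, which may not be what the user wants for sparse gradients.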
19 changes: 11 additions & 8 deletions python/paddle/static/nn/common.py
@@ -419,7 +419,7 @@ def continuous_value_model(input, cvm, use_cvm=True):
If :attr:`use_cvm` is False, it will remove show and click from :attr:`input` , and output shape is :math:`[N, D - 2]` .
:attr:`cvm` is show_click info, whose shape is :math:`[N, 2]` .
Args:
input (Variable): The input variable. A 2-D LoDTensor with shape :math:`[N, D]` , where N is the batch size, D is `2 + the embedding dim` . `lod level = 1` .
input (Variable): The input variable. A 2-D DenseTensor with shape :math:`[N, D]` , where N is the batch size, D is `2 + the embedding dim` . `lod level = 1` .
A Tensor with type float32, float64.
cvm (Variable): Show and click variable. A 2-D Tensor with shape :math:`[N, 2]` , where N is the batch size, 2 is show and click.
A Tensor with type float32, float64.
@@ -2951,7 +2951,7 @@ def prelu(x, mode, param_attr=None, data_format="NCHW", name=None):
element: All elements do not share alpha. Each element has its own alpha.

Parameters:
x (Tensor): The input Tensor or LoDTensor with data type float32.
x (Tensor): The input Tensor or DenseTensor with data type float32.
mode (str): The mode for weight sharing.
param_attr (ParamAttr|None, optional): The parameter attribute for the learnable \
weight (alpha), it can be create by ParamAttr. None by default. \
@@ -2968,6 +2968,7 @@

.. code-block:: python

>>> # doctest: +SKIP("This has diff in xdoctest env")
>>> import paddle
>>> paddle.enable_static()

@@ -3729,7 +3730,7 @@

Case 2:

input is a LoDTensor with 1-level LoD. padding_idx = 0
input is a DenseTensor with 1-level LoD. padding_idx = 0
input.lod = [[2, 3]]
input.data = [[1], [3], [2], [4], [0]]
input.shape = [5, 1]
@@ -3746,7 +3747,7 @@


Args:
input(Tensor): A Tensor or LoDTensor with type int64, which contains the id information.
input(Tensor): A Tensor or DenseTensor with type int64, which contains the id information.
The value of the input id should satisfy :math:`0<= id < size[0]` .
size(tuple|list): The shape of lookup table parameter. It should have two elements which
indicates the size of the dictionary of embeddings and the size of each embedding vector respectively.
@@ -3770,11 +3771,12 @@
It must be float32 or float64. Default: float32.

Returns:
Tensor: Embedding Tensor or LoDTensor mapped by input. The data type is the same as :attr:`dtype` .
Tensor: Embedding Tensor or DenseTensor mapped by input. The data type is the same as :attr:`dtype` .

Static Examples:
.. code-block:: python

>>> # doctest: +SKIP("This has diff in xdoctest env")
>>> import paddle
>>> import numpy as np
>>> paddle.enable_static()
@@ -3884,7 +3886,7 @@ def sparse_embedding(

Case 2:

input is a LoDTensor with 1-level LoD. padding_idx = 0
input is a DenseTensor with 1-level LoD. padding_idx = 0
input.lod = [[2, 3]]
input.data = [[1], [3], [2], [4], [0]]
input.shape = [5, 1]
@@ -3900,7 +3902,7 @@
It will pad all-zero data when ids is 0.

Args:
input(Tensor): A Tensor or LoDTensor with type int64, which contains the id
input(Tensor): A Tensor or DenseTensor with type int64, which contains the id
information. The value of the input id should satisfy :math:`0<= id < size[0]` .
size(tuple|list): The shape of lookup table parameter (vocab_size, emb_size). It
should have two elements which indicates the size of the dictionary of embeddings
@@ -3928,11 +3930,12 @@
float64. Default: float32.

Returns:
Tensor: Embedding Tensor or LoDTensor mapped by input. The data type is the same as :attr:`dtype` .
Tensor: Embedding Tensor or DenseTensor mapped by input. The data type is the same as :attr:`dtype` .

Examples:
.. code-block:: python

>>> # doctest: +SKIP("This has diff in xdoctest env")
>>> import paddle

>>> paddle.enable_static()
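Both `embedding` and `sparse_embedding` docstrings above show the same `padding_idx` behavior: rows looked up with `id == padding_idx` come back as all zeros. A numpy sketch of that lookup semantics, using flat 1-D ids for simplicity (the helper is illustrative, not the Paddle op):

```python
import numpy as np


def embedding_lookup(ids, table, padding_idx=None):
    # Gather rows of `table` by integer id; fancy indexing returns a copy,
    # so zeroing the padded positions does not touch the table itself.
    out = table[ids]
    if padding_idx is not None:
        out[ids == padding_idx] = 0.0
    return out


table = np.arange(12, dtype=np.float32).reshape(4, 3)  # vocab size 4, emb dim 3
ids = np.array([1, 3, 0])
out = embedding_lookup(ids, table, padding_idx=0)
# out[2] is all zeros because its id equals padding_idx
```

This matches "Case 2" in the docstrings: with `padding_idx = 0`, the position holding id 0 is padded with zeros in the output while the embedding table is left unchanged.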
4 changes: 3 additions & 1 deletion python/paddle/static/nn/metric.py
@@ -390,7 +390,7 @@ def ctr_metric_bundle(input, label, ins_tag_weight=None):
data. The height is batch size and width is always 1.
ins_tag_weight(Tensor): A 2D int Tensor indicating the ins_tag_weight of the training
data. 1 means real data, 0 means fake data.
A LoDTensor or Tensor with type float32,float64.
A DenseTensor or Tensor with type float32,float64.

Returns:
local_sqrerr(Tensor): Local sum of squared error
@@ -404,6 +404,7 @@ def ctr_metric_bundle(input, label, ins_tag_weight=None):
.. code-block:: python
:name: example-1

>>> # doctest: +SKIP("This has diff in xdoctest env")
>>> import paddle
>>> paddle.enable_static()
>>> data = paddle.static.data(name="data", shape=[-1, 32], dtype="float32")
@@ -414,6 +415,7 @@
.. code-block:: python
:name: example-2

>>> # doctest: +SKIP("This has diff in xdoctest env")
>>> import paddle
>>> paddle.enable_static()
>>> data = paddle.static.data(name="data", shape=[-1, 32], dtype="float32")
2 changes: 1 addition & 1 deletion python/paddle/tensor/creation.py
@@ -1340,7 +1340,7 @@ def eye(
name(str|None, optional): For details, please refer to :ref:`api_guide_Name`. Generally, no setting is required. Default: None.

Returns:
Tensor: An identity Tensor or LoDTensor of shape [num_rows, num_columns].
Tensor: An identity Tensor or DenseTensor of shape [num_rows, num_columns].

Examples:
.. code-block:: python