
Conversation

Collaborator

@MrSupW MrSupW commented Mar 12, 2023

This PR completes the implementation of the CIF-related code, including two types of Predictor and two types of CIF Decoder. In addition, four baseline experiments were conducted on the AISHELL dataset.
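For readers new to CIF, below is a minimal, self-contained sketch of the integrate-and-fire step that a CIF Predictor performs: per-frame weights are accumulated and an integrated acoustic embedding is "fired" each time the running sum crosses a threshold. The function name cif_fire, the shapes, and the threshold value are illustrative assumptions, not the code added in this PR.

import torch


def cif_fire(hidden: torch.Tensor, alpha: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    """hidden: (T, D) encoder frames; alpha: (T,) non-negative weights.
    Returns the (N, D) fired (integrated) embeddings."""
    fired = []
    integrate = 0.0                        # accumulated weight
    frame = torch.zeros(hidden.size(1))    # accumulated embedding
    for t in range(hidden.size(0)):
        a = alpha[t].item()
        if integrate + a < beta:
            # not enough weight yet: keep integrating
            integrate += a
            frame = frame + a * hidden[t]
        else:
            # fire: spend just enough weight to reach beta, then start a new frame
            used = beta - integrate
            fired.append(frame + used * hidden[t])
            integrate = a - used
            frame = integrate * hidden[t]
    return torch.stack(fired) if fired else torch.zeros(0, hidden.size(1))


# Example: six frames whose weights sum to 3.0 fire three embeddings.
# cif_fire(torch.randn(6, 4), torch.tensor([0.4, 0.7, 0.2, 0.9, 0.5, 0.3])).shape
# -> torch.Size([3, 4])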

@MrSupW MrSupW closed this Mar 12, 2023
@MrSupW MrSupW reopened this Mar 12, 2023
@@ -0,0 +1,100 @@
# Copyright (c) 2023 ASLP@NWPU (authors: He Wang, Fan Yu)
Collaborator

embedding.py shares almost the same code as transformer/embedding.py; I think we can reuse it.

@@ -0,0 +1,291 @@
# Copyright (c) 2023 ASLP@NWPU (authors: He Wang, Fan Yu)
Collaborator

Same as embedding.py: this attention module shares almost the same code as transformer/attention.py, and transformer/attention.py also takes streaming into account.

import numpy as np


def make_pad_mask(lengths: torch.Tensor, length_dim: int = -1,
Collaborator

This function is already defined in utils/mask.py.
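For context, here is a minimal sketch of what such a padding-mask helper typically computes, where True marks the padded positions beyond each sequence's valid length; the exact signature in utils/mask.py may differ, so treat this as an assumption rather than the repository code.

import torch


def make_pad_mask_sketch(lengths: torch.Tensor, max_len: int = 0) -> torch.Tensor:
    """lengths: (B,) valid lengths; returns a (B, max_len) bool padding mask."""
    max_len = max_len if max_len > 0 else int(lengths.max().item())
    # position indices 0..max_len-1, broadcast against each sequence length
    positions = torch.arange(max_len, device=lengths.device).unsqueeze(0)
    return positions >= lengths.unsqueeze(1)


# make_pad_mask_sketch(torch.tensor([2, 3])) ->
# tensor([[False, False,  True],
#         [False, False, False]])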

return mask


def sequence_mask(lengths, maxlen: Optional[int] = None,
Collaborator

This function is almost the same as subsequent_mask in utils/mask.py.
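For comparison, a minimal sketch of the causal mask that subsequent_mask conventionally builds (position i may attend to positions <= i); the signature here is an assumption, not the utils/mask.py code.

import torch


def subsequent_mask_sketch(size: int) -> torch.Tensor:
    """Returns a (size, size) bool mask where True marks visible positions."""
    return torch.tril(torch.ones(size, size, dtype=torch.bool))


# subsequent_mask_sketch(3) ->
# tensor([[ True, False, False],
#         [ True,  True, False],
#         [ True,  True,  True]])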

@robin1001 robin1001 merged commit 630a591 into wenet-e2e:main Mar 13, 2023