
Conversation

@NickNickGo
Contributor

This PR reduces CPU time for encoder-decoder multihead attention by 25-30%; GPU time is reduced by about 10%.

  1. Unnecessary reshapes during the EINSUM op are eliminated.
  2. The EINSUM logic is converted to a BMM op, avoiding the CPU overhead of EINSUM (see the sketch after this list).
    Overall generation time drops from 47.8 to 44.9.
    Attaching before/after profile results: [profiler screenshots]
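For readers unfamiliar with the change, a minimal, self-contained sketch of the kind of rewrite involved; the shapes and tensor names are illustrative, not the actual fastseq code:

```python
import torch

bsz_x_heads, tgt_len, src_len, head_dim = 16, 5, 7, 64
q = torch.randn(bsz_x_heads, tgt_len, head_dim)
k = torch.randn(bsz_x_heads, src_len, head_dim)

# einsum form: readable, but equation parsing and implicit reshapes add
# per-call CPU overhead
scores_einsum = torch.einsum("btd,bsd->bts", q, k)

# equivalent bmm form: a single batched-matmul kernel, no einsum dispatch
scores_bmm = torch.bmm(q, k.transpose(1, 2))

assert torch.allclose(scores_einsum, scores_bmm, atol=1e-4)
```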

@NickNickGo requested a review from a team on October 14, 2020, 03:03
@JiushengChen
Contributor

JiushengChen commented Oct 14, 2020

Good improvement!

  1. Can we do the same for Hugging Face and ProphetNet (its implementation is separate)?
  2. Update the benchmarks.

Contributor

@yuyan2do left a comment


Also update the README and the numbers in the benchmark script.

key_padding_mask.unsqueeze(1).to(
torch.bool), float("-inf"))
else:
# Not supported
Contributor


add "assert False, reason"

attn_weights = attn_weights.view(-1, tgt_len,
*attn_weights.size()[-1:])
else:
q = q.contiguous().view(tgt_len, bsz * self.num_heads,
Contributor


Why is contiguous needed here?

Contributor Author


This was present in the earlier implementation; I didn't touch it since my changes only target encoder-decoder attention. I agree it is redundant here and will remove it.

Contributor


There are other places using contiguous; please check whether they can be removed as well.

Contributor Author


I just checked. In all the other places, it appears after permute/transpose operations, where it is essential.
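As background, a minimal standalone example (not the fastseq code) of why `.contiguous()` is required before `.view()` after a transpose/permute:

```python
import torch

x = torch.randn(4, 2, 8)        # e.g. (tgt_len, bsz, embed_dim)
t = x.transpose(0, 1)           # same storage, swapped strides -> non-contiguous

# t.view(-1, 8) raises a RuntimeError here: view cannot flatten dimensions
# that are not laid out contiguously in memory.
y = t.contiguous().view(-1, 8)  # contiguous() copies into a dense layout first
```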
