File:Self Attention Mechanism.png

From UBC Wiki

Original file (627 × 1,054 pixels, file size: 17 KB, MIME type: image/png)

Summary

Description
English: The image illustrates the self-attention mechanism in a Transformer. It shows a central node, acting like a spotlight, distributing attention to other elements within a multi-layered network. This mechanism allows the model to adaptively focus on different parts of the input, which is what makes Transformers powerful tools in natural language processing and deep learning. (A minimal sketch of the underlying computation appears after the summary fields below.)
Date: 1 January 2017
File source: https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
Author: A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, AN Gomez, Ł Kaiser, I Polosukhin
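
The attention operation the figure depicts is the scaled dot-product attention defined in the cited paper: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. What follows is a minimal, illustrative NumPy sketch of that formula, not the authors' implementation; the learned projection matrices and multi-head structure of the full Transformer are omitted.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V  (Vaswani et al., 2017)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # pairwise query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                        # weighted average of the values

# Self-attention: queries, keys, and values all come from the same sequence,
# so every position can attend to every other position.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))  # 4 tokens, 8-dimensional embeddings
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)

In self-attention, Q, K, and V are all derived from the same input, which is what lets each position weight every other position, as the central "spotlight" node in the figure suggests.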

Licensing

Some rights reserved
Permission is granted to copy, distribute and/or modify this document under the terms of the Creative Commons Attribution-ShareAlike 4.0 license (CC BY-SA 4.0). The full text of the license is available at: https://creativecommons.org/licenses/by-sa/4.0/

File history


Date/Time: 01:39, 11 October 2023 (current)
Dimensions: 627 × 1,054 (17 KB)
User: AmirhosseinAbaskohi (talk | contribs)
Comment: Uploaded a work by A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, AN Gomez, Ł Kaiser, I Polosukhin from https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf with UploadWizard

The following page uses this file: