File:Self Attention Mechanism.png
Original file (627 × 1,054 pixels, file size: 17 KB, MIME type: image/png)
Summary
Description | English: The image illustrates the self-attention mechanism in a Transformer. A central node, acting like a spotlight, distributes attention to other elements within a multi-layered network. This mechanism lets the model adaptively focus on different parts of the input, making Transformers powerful tools for natural language processing and deep learning. |
Date | 1 January 2017 |
File source | https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf |
Author | A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, AN Gomez, Ł Kaiser, I Polosukhin |
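The mechanism the figure depicts, scaled dot-product self-attention as defined in the linked paper, can be sketched in NumPy. This is a minimal illustration: the projection matrices below are random placeholders, not trained parameters, and a single attention head is shown.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project the input tokens into queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

# Toy example: 4 tokens, model dimension 8 (placeholder sizes).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
```

Each row of `weights` is the "spotlight" from the figure: the distribution of attention one token places over all tokens in the sequence.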
Licensing
File history
Date/Time | Thumbnail | Dimensions | User | Comment
---|---|---|---|---
current: 01:39, 11 October 2023 | (thumbnail) | 627 × 1,054 (17 KB) | AmirhosseinAbaskohi (talk, contribs) | Uploaded a work by A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, AN Gomez, Ł Kaiser, I Polosukhin from https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf with UploadWizard
File usage
The following page uses this file: