Bidirectional Encoder Representations from Transformers (BERT)

From UBC Wiki
[Image removed. Source: Preisler, 2024, p. 6.]


Bidirectional Encoder Representations from Transformers (BERT) is a natural language processing (NLP) model developed by Google in 2018. BERT uses a deep, bidirectional Transformer architecture to model the context of each word from both the words before it and the words after it, which allows it to achieve high accuracy on tasks such as text classification, sentiment analysis, and question answering. BERT has been widely adopted as a baseline in many NLP applications, including Google Search, and has inspired a family of successor models.
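BERT learns bidirectional context through a masked-language-model pretraining objective: a fraction of input tokens (15% in the original paper) are selected, and of those, 80% are replaced with a [MASK] token, 10% with a random token, and 10% left unchanged; the model is trained to predict the originals. The sketch below illustrates only this masking step on a toy vocabulary (the vocabulary and function name are illustrative assumptions, not part of any BERT library).

```python
import random

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """Illustrative BERT-style token masking (not real WordPiece tokenization).

    Each token is selected with probability mask_prob. A selected token is
    replaced by "[MASK]" 80% of the time, by a random vocabulary token 10%
    of the time, and kept unchanged 10% of the time. Returns the corrupted
    sequence and a label list holding the original token where a prediction
    is required, or None where the token does not contribute to the loss.
    """
    rng = random.Random(seed)
    toy_vocab = ["the", "cat", "sat", "on", "mat", "dog", "ran"]  # assumption: toy vocabulary
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            labels.append(tok)              # model must recover the original token
            r = rng.random()
            if r < 0.8:
                masked.append("[MASK]")     # 80%: replace with mask token
            elif r < 0.9:
                masked.append(rng.choice(toy_vocab))  # 10%: random token
            else:
                masked.append(tok)          # 10%: keep the original token
        else:
            labels.append(None)             # unselected: no prediction needed
            masked.append(tok)
    return masked, labels
```

Because the model cannot tell which visible tokens were corrupted, it must build a contextual representation of every position using both left and right context, which is the property the page's name refers to.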