Bidirectional Encoder Representations from Transformers (BERT)
Bidirectional Encoder Representations from Transformers (BERT) is a natural language processing (NLP) model developed by Google in 2018. BERT uses a deep learning architecture that reads text bidirectionally, modeling the context of each word from the words both before and after it, which allows it to achieve high accuracy on tasks such as text classification, sentiment analysis, and question answering. BERT has been widely adopted as a fundamental baseline in NLP applications, including Google Search, and has led to the development of many successor models.
