Neural networks
Introduction

A neural network (artificial neural network or neural net, abbreviated ANN or NN) is a computational model that underpins much of modern AI. Neural networks enable machines to learn patterns from data in ways inspired by the structure and functioning of the human brain. A typical neural network consists of interconnected nodes, or "neurons," arranged in layers: an input layer, one or more hidden layers, and an output layer. Each neuron processes incoming signals by applying weights, biases, and activation functions before passing the result to the next layer. During training, most commonly through techniques such as backpropagation and gradient descent, the network iteratively adjusts these parameters to minimize prediction errors.

Neural networks are widely used in applications such as image recognition, natural language processing, and predictive analytics. Deep neural networks, which contain multiple hidden layers, are capable of modelling complex patterns but typically require substantial computational resources and large datasets. Different architectures are designed for specific types of data and tasks. For example, convolutional neural networks (CNNs) are particularly effective for analyzing spatial data such as images, while recurrent neural networks (RNNs) were developed to handle sequential data such as time series or text. More recently, transformer architectures, built around attention mechanisms that prioritize relevant parts of the input, have become dominant in many natural language processing tasks.

Integrating searching and indexing functions with neural network models can further improve efficiency in large-scale information retrieval systems. At NLM, neural network technologies are used in automated indexing systems: the Medical Text Indexer (MTI) assists in assigning Medical Subject Headings (MeSH) to new records in PubMed.
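The training loop described above (forward pass through weighted layers, then backpropagation and gradient descent to reduce prediction error) can be sketched in a few lines of Python. This is a minimal illustration on the toy XOR problem, not a description of any NLM system; the layer sizes, learning rate, and epoch count are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Activation function: squashes a weighted sum into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset: XOR is not linearly separable, so it needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters: weights and biases for one hidden layer and one output layer.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

lr = 1.0  # learning rate (illustrative)
losses = []
for epoch in range(2000):
    # Forward pass: weights, biases, activation functions.
    h = sigmoid(X @ W1 + b1)        # hidden layer
    out = sigmoid(h @ W2 + b2)      # output layer
    loss = np.mean((out - y) ** 2)  # mean squared prediction error
    losses.append(loss)

    # Backpropagation: gradients of the loss w.r.t. each parameter.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: iteratively adjust parameters to reduce the error.
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0, keepdims=True)

print(f"loss before: {losses[0]:.3f}, after: {losses[-1]:.3f}")
```

The same loop, scaled up to many layers and millions of examples, is what "training a deep neural network" refers to in the text above.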
In 2024, NLM deployed the next-generation system known as MTIX, which applies neural network methods to recommend MeSH terms, typically within about 24 hours of a record's entry into the database.

Medical text indexer (MTIX) and automated indexing

MTIX (Medical Text Indexer-NeXt Generation) is the neural-network successor to the Medical Text Indexer (MTI) developed by the National Library of Medicine (NLM). It uses machine learning to assign Medical Subject Headings (MeSH) to articles, improving indexing speed and scalability. Trained on millions of MEDLINE citations from 2007–2022, MTIX analyzes titles, abstracts, and journal metadata to recommend relevant MeSH terms with high recall (e.g., >94% for disease detection) and precision (e.g., 87% for disease categories). The system supports semi-automated and fully automated indexing, reducing human indexer workload while maintaining quality. Where full-text articles are available, MTIX processes them, improving term coverage over title-and-abstract-based methods. Filtering techniques, such as ranking scores and excluding lengthy documents, further boost accuracy. Neural networks in MTIX enable rapid, precise indexing, critical for scaling to the growing volume of biomedical literature (1.5 million papers in 2024). While human curation remains in place in MEDLINE for quality control, MTIX's automation and use of AI support applications such as the publicly available MeSH on Demand tool, aiding researchers in metadata identification.

Librarian view

Bottom line: AI tools may support work with clinicians and researchers, but many of the underlying processes raise concerns about scientific accuracy, transparency, and methodological rigour in evidence reviews. MTIX, while an improvement over earlier versions, continues to perpetuate algorithmic biases of various kinds. Information about AI tools is evolving rapidly, so it is important to verify current guidance or consult a librarian.
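The score-based filtering mentioned in the MTIX description, keeping only recommendations whose ranking score clears a threshold, can be sketched as follows. The function name, term names, scores, and cutoffs here are invented for illustration and do not reflect MTIX's actual model outputs or thresholds.

```python
def filter_recommendations(scored_terms, min_score=0.5, top_k=10):
    """Keep terms whose ranking score clears a threshold, then take the top k."""
    kept = [(term, score) for term, score in scored_terms if score >= min_score]
    kept.sort(key=lambda pair: pair[1], reverse=True)  # highest score first
    return kept[:top_k]

# Hypothetical model output for one citation.
candidates = [
    ("Neoplasms", 0.92),
    ("Humans", 0.88),
    ("Machine Learning", 0.61),
    ("Cricket Sport", 0.07),  # low-scoring noise the filter removes
]
print(filter_recommendations(candidates, min_score=0.5, top_k=3))
# → [('Neoplasms', 0.92), ('Humans', 0.88), ('Machine Learning', 0.61)]
```

In a real pipeline the scores would come from the neural model's output layer, and the threshold and cutoff would be tuned to trade recall against precision.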
Librarians also distinguish between searching for sources and searching for answers. Many AI systems emphasize the latter while obscuring the former: the steps used to identify evidence are often hidden, and transparency remains a significant limitation.
