From words to meaning: Exploring semantic analysis in NLP

BioStrand (a subsidiary of IPA)
5 min read · Oct 4, 2023


Semantic analysis is an important branch of linguistics, the systematic scientific investigation of the properties and characteristics of natural human language. As the study of the meaning of words and sentences, semantic analysis complements other linguistic subbranches such as phonetics (the study of sounds), morphology (the study of word units), syntax (the study of how words form sentences), and pragmatics (the study of how context impacts meaning), to name just a few.

There are three broad subcategories of semantics:

Formal semantics: The study of the meaning of linguistic expressions by applying mathematical-logical formalizations, such as first-order predicate logic or lambda calculus, to natural languages.
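
As an illustration, consider how a simple sentence maps onto first-order predicate logic. The sketch below uses NLTK's logic parser; the sentence and predicate names are illustrative choices, not a fixed convention.

```python
# A minimal formal-semantics sketch with NLTK's logic parser
# (assumes the nltk package is installed).
from nltk.sem.logic import Expression

read_expr = Expression.fromstring

# "Every student sleeps" rendered in first-order predicate logic.
formula = read_expr('all x.(student(x) -> sleep(x))')
print(formula)         # all x.(student(x) -> sleep(x))
print(formula.free())  # set() -- the formula is closed (no free variables)
```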

Conceptual semantics: This is the study of words, phrases, and sentences based not just on a set of strict semantic criteria but on schematic and prototypical structures in the minds of language users.

Lexical semantics: The study of word meanings not just in terms of the basic meaning of a lexical unit but in terms of the semantic relations that integrate these units into a broader linguistic system.

Semantic analysis in Natural Language Processing (NLP)

In NLP, semantic analysis is the process of automatically extracting meaning from natural languages in order to enable human-like comprehension in machines. There are two broad approaches to using semantic analysis to comprehend meaning in natural languages: One, training machine learning models on vast volumes of text to uncover connections, relationships, and patterns that can be used to predict meaning (e.g., ChatGPT). And two, using structured ontologies and databases that pre-define linguistic concepts and relationships, enabling semantic analysis algorithms to quickly locate useful information in natural language text.

Though generalized large language model (LLM) based applications are capable of handling broad and common tasks, specialized models based on a domain-specific taxonomy, ontology, and knowledge base design will be essential to power intelligent applications.

How does semantic analysis work?

There are two key components to semantic analysis in NLP. The first is lexical semantics, the study of the meaning of individual words and their relationships. This stage entails obtaining the dictionary definition of the words in the text, parsing each word/element to determine individual functions and properties, and designating a grammatical role for each. Key tasks of lexical semantics include identifying word senses, synonyms, antonyms, hyponyms, hypernyms, and morphology. In the next step, individual words can be combined into a sentence and parsed to establish relationships, understand syntactic structure, and provide meaning.
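
These lexical relations are easy to explore with WordNet, a widely used lexical database. A minimal sketch using NLTK's WordNet interface (assuming nltk is installed and the wordnet corpus has been downloaded):

```python
# Looking up word senses and lexical-semantic relations in WordNet.
from nltk.corpus import wordnet as wn

# Word senses: each synset represents one sense of "bank".
for sense in wn.synsets('bank')[:3]:
    print(sense.name(), '-', sense.definition())

dog = wn.synset('dog.n.01')
print('synonyms :', [lemma.name() for lemma in dog.lemmas()])
print('hypernyms:', dog.hypernyms())     # more general concepts
print('hyponyms :', dog.hyponyms()[:3])  # more specific concepts

good = wn.synset('good.a.01').lemmas()[0]
print('antonyms :', good.antonyms())     # e.g. [Lemma('bad.a.01.bad')]
```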

There are several different approaches within semantic analysis to decode the meaning of a text. Common approaches include:

Semantic Feature Analysis (SFA): This approach involves the extraction and representation of shared features across different words in order to highlight word relationships and help determine the importance of individual factors within a text. Key subtasks include feature selection (highlighting the attributes associated with each word), feature weighting (distinguishing the importance of different attributes), and feature vectors with similarity measurement (providing insights into relationships and similarities between words, phrases, and concepts).
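
A toy sketch of those subtasks, with hand-crafted binary features and weights (all feature names and weight values here are illustrative assumptions):

```python
# Semantic feature analysis in miniature: weighted feature vectors plus
# cosine similarity between them.
import numpy as np

features = ['animate', 'domestic', 'can_fly', 'has_fur']  # feature selection
weights = np.array([1.0, 0.5, 1.0, 0.8])                  # feature weighting

vectors = {
    'dog':     np.array([1, 1, 0, 1]) * weights,
    'cat':     np.array([1, 1, 0, 1]) * weights,
    'sparrow': np.array([1, 0, 1, 0]) * weights,
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors['dog'], vectors['cat']))      # 1.0: identical features
print(cosine(vectors['dog'], vectors['sparrow']))  # ~0.51: far less similar
```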

Latent Semantic Analysis (LSA): This technique extracts meaning by capturing the underlying semantic relationships and context of words in a large corpus. By recognizing the latent associations between words and concepts, LSA enhances machines’ capability to interpret natural languages like humans. The LSA process includes creating a term-document matrix, applying Singular Value Decomposition (SVD) to the matrix, dimension reduction, concept representation, indexing, and retrieval. Probabilistic Latent Semantic Analysis (PLSA) is a variation on LSA with a statistical and probabilistic approach to finding latent relationships.
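
The core of that pipeline fits in a few lines of scikit-learn; the tiny corpus and the number of components below are illustrative choices:

```python
# A compact LSA sketch: term-document matrix + truncated SVD.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    'the cat sat on the mat',
    'the dog chased the cat',
    'stocks fell as markets tumbled',
    'investors sold shares on the market',
]

X = TfidfVectorizer().fit_transform(docs)  # term-document matrix
lsa = TruncatedSVD(n_components=2)         # SVD + dimension reduction
concepts = lsa.fit_transform(X)            # each doc as a 2-d concept vector

print(concepts.round(2))  # pet-related and finance-related docs separate
```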

Semantic Content Analysis (SCA): This methodology goes beyond simple feature extraction and distribution analysis to consider word usage context and text structure to identify relationships and impute meaning to natural language text. The process broadly involves dependency parsing (to determine grammatical relationships), identification of thematic and case roles (to reveal relationships between actions, participants, and objects), and semantic frame identification (for a more refined understanding of contextual associations).
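
Dependency parsing, the first of those steps, is a one-liner with spaCy (assuming spaCy and its small English model are installed):

```python
# Extracting grammatical relationships with spaCy's dependency parser.
import spacy

nlp = spacy.load('en_core_web_sm')
doc = nlp('The researcher analyzed the genome with new software.')

for token in doc:
    # each token, its grammatical relation, and the head it depends on
    print(f'{token.text:10} {token.dep_:8} -> {token.head.text}')
```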

Semantic Analysis Techniques

Here’s a quick overview of some of the key semantic analysis techniques used in NLP:

Word Embeddings

These refer to techniques that represent words as vectors in a continuous vector space and capture semantic relationships based on co-occurrence patterns. Word-to-vector representation techniques fall into three categories: conventional models, which are count-based/frequency-based; distributional, static word embedding models, which include latent semantic analysis (LSA), word-to-vector (Word2Vec), global vectors (GloVe), and fastText; and contextual models, which include Embeddings from Language Models (ELMo), Generative Pre-Training (GPT), and Bidirectional Encoder Representations from Transformers (BERT).
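
A minimal Word2Vec sketch with gensim follows; the toy corpus is only to show the API, since meaningful embeddings require a large corpus:

```python
# Training static word embeddings from co-occurrence patterns.
from gensim.models import Word2Vec

sentences = [
    ['the', 'cat', 'sat', 'on', 'the', 'mat'],
    ['the', 'dog', 'sat', 'on', 'the', 'rug'],
    ['cats', 'and', 'dogs', 'are', 'pets'],
]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50)

print(model.wv['cat'].shape)              # (50,): one vector per word
print(model.wv.similarity('cat', 'dog'))  # similarity from co-occurrence
```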

Semantic Role Labeling

This is a technique that seeks to answer the central question of many NLP tasks: who did what to whom, how, when, and where. Semantic role labeling identifies the roles that different words play by recognizing the predicate-argument structure of a sentence. It is traditionally broken down into four subtasks: predicate identification, predicate sense disambiguation, argument identification, and argument role labeling. Given its ability to generate more realistic linguistic representations, semantic role labeling today plays a crucial role in several NLP applications, including question answering, information extraction, and machine translation.
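
Pretrained SRL models (for example, those shipped with AllenNLP) return PropBank-style frames. The sketch below only mimics the shape of that output with hand-written values, so the structure of predicate-argument labeling is visible without loading a model:

```python
# Illustrative only: the PropBank-style frame an SRL model might produce
# for "Mary sold the book to John yesterday". Values are hand-written.
srl_frame = {
    'predicate': 'sold',
    'arguments': {
        'ARG0':     'Mary',       # agent: who did it
        'ARG1':     'the book',   # theme: what was affected
        'ARG2':     'John',       # recipient: to whom
        'ARGM-TMP': 'yesterday',  # temporal modifier: when
    },
}

for role, span in srl_frame['arguments'].items():
    print(f'{role:9} -> {span}')
```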

Named Entity Recognition (NER)

NER is a key information extraction task in NLP for detecting and categorizing named entities, such as names, organizations, locations, and events. NER uses machine learning algorithms trained on data sets with predefined entities to automatically analyze and extract entity-related information from new unstructured text. NER models are classified as rule-based, statistical, machine learning, deep learning, and hybrid models.

Biomedical named entity recognition (BioNER) is a foundational step in biomedical NLP systems with a direct impact on critical downstream applications involving biomedical relation extraction, drug-drug interactions, and knowledge base construction. However, the linguistic complexity of biomedical vocabulary makes the detection and prediction of biomedical entities such as diseases, genes, species, and chemicals even more challenging than general-domain NER. The challenge is often compounded by the scarcity of large-scale labeled training data and domain knowledge for sequence labeling. Deep learning BioNER models, such as bidirectional Long Short-Term Memory with a CRF layer (BiLSTM-CRF), Embeddings from Language Models (ELMo), and Bidirectional Encoder Representations from Transformers (BERT), have been successful in addressing several of these challenges. Currently, several variations of the BERT pre-trained language model, including PubMedBERT, have been applied to BioNER tasks.
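
For general-domain NER, spaCy's pretrained pipeline gives a quick baseline (biomedical entities would need a domain-adapted model such as those in scispaCy):

```python
# Detecting and categorizing named entities with spaCy.
import spacy

nlp = spacy.load('en_core_web_sm')
doc = nlp('BERT was released by Google in 2018.')

for ent in doc.ents:
    print(ent.text, '->', ent.label_)  # e.g. Google -> ORG, 2018 -> DATE
```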

An associated and equally critical task in bioNLP is that of biomedical relation extraction (BioRE), the process of automatically extracting and classifying relationships between complex biomedical entities. In recent years, the integration of attention mechanisms and the availability of pre-trained biomedical language models have helped augment the accuracy and efficiency of BioRE tasks in biomedical applications.
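
One common BioRE formulation is sentence classification over text with marked entity pairs, using a fine-tuned biomedical language model. The sketch below assumes that framing; the model name is a placeholder, not a real checkpoint, and the label in the comment is hypothetical:

```python
# A hedged BioRE sketch: relation extraction framed as text classification
# of a sentence whose entity pair is marked with [E1]/[E2] tags.
from transformers import pipeline

# Placeholder model id: substitute a fine-tuned biomedical relation classifier.
clf = pipeline('text-classification', model='your-org/biore-model')

sentence = '[E1] aspirin [/E1] reduces the efficacy of [E2] ibuprofen [/E2].'
print(clf(sentence))  # hypothetical output: [{'label': 'DRUG-DRUG', ...}]
```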

Other semantic analysis techniques involved in extracting meaning and intent from unstructured text include coreference resolution, semantic similarity, semantic parsing, and frame semantics.
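
Of these, semantic similarity is particularly easy to try out with the sentence-transformers library (the model name below is a real, commonly used checkpoint, but any sentence encoder would do):

```python
# Measuring semantic similarity between sentences with embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('all-MiniLM-L6-v2')
emb = model.encode([
    'The drug inhibits the enzyme.',
    'The compound blocks enzymatic activity.',
    'The stock market closed higher today.',
])

print(util.cos_sim(emb[0], emb[1]))  # high: near-paraphrases
print(util.cos_sim(emb[0], emb[2]))  # low: unrelated topics
```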

The Importance of Semantic Analysis in NLP

Semantic analysis is key to the foundational task of extracting context, intent, and meaning from natural human language and making them machine-readable. This fundamental capability is critical to various NLP applications, from sentiment analysis and information retrieval to machine translation and question-answering systems. The continual refinement of semantic analysis techniques will therefore play a pivotal role in the evolution and advancement of NLP technologies.

Originally published at https://blog.biostrand.ai on October 4, 2023.
