Discovering Latent Concepts Learned in BERT

Paper · arXiv 2205.07237 · Published May 15, 2022
Topics: LLM Architecture · Cognitive Models · Latent · Linguistics, NLP, NLU

A large number of studies that analyze deep neural network models and their ability to encode various linguistic and non-linguistic concepts provide an interpretation of the inner mechanics of these models. The scope of the analyses is limited to pre-defined concepts that reinforce the traditional linguistic knowledge and do not reflect on how novel concepts are learned by the model. We address this limitation by discovering and analyzing latent concepts learned in neural network models in an unsupervised fashion and provide interpretations from the model’s perspective. In this work, we study: i) what latent concepts exist in the pre-trained BERT model, ii) how the discovered latent concepts align or diverge from classical linguistic hierarchy and iii) how the latent concepts evolve across layers. Our findings show: i) a model learns novel concepts (e.g. animal categories and demographic groups), which do not strictly adhere to any pre-defined categorization (e.g. POS, semantic tags), ii) several latent concepts are based on multiple properties which may include semantics, syntax, and morphology, iii) the lower layers in the model dominate in learning shallow lexical concepts while the higher layers learn semantic relations and iv) the discovered latent concepts highlight potential biases learned in the model.

A pitfall of this line of research is that it studies only pre-defined concepts and ignores any latent concepts within these models, resulting in a narrow view of what the model knows. Another weakness of using user-defined concepts is that human bias enters the selection of concepts, which may result in a misleading interpretation. In our work, we sidestep this issue by approaching interpretability from the model's perspective, specifically focusing on the discovery of latent concepts in pre-trained models.
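To make the idea concrete, the sketch below clusters contextualized token representations taken from one BERT layer so that recurring groups of tokens surface as candidate latent concepts. It is a minimal illustration, not the authors' exact pipeline: the model name, the choice of layer, the example sentences, and the number of clusters are all assumptions, and it relies on the HuggingFace transformers and scikit-learn packages.

```python
# Minimal sketch: cluster contextualized token embeddings from one BERT layer
# so that groups of tokens emerge as candidate latent concepts.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.cluster import AgglomerativeClustering

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased", output_hidden_states=True)
model.eval()

sentences = [
    "Thomas Mueller scored twice for Germany.",
    "The fund grew by 9.6 percent last year.",
    "They raised 2.4 million dollars in funding.",
]

LAYER = 9  # hypothetical choice; the paper analyzes every layer

tokens, vectors = [], []
with torch.no_grad():
    for sent in sentences:
        enc = tokenizer(sent, return_tensors="pt")
        hidden = model(**enc).hidden_states[LAYER][0]  # (seq_len, hidden_dim)
        for tok, vec in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), hidden):
            if tok not in ("[CLS]", "[SEP]"):
                tokens.append(tok)
                vectors.append(vec.numpy())

# Agglomerative clustering groups tokens into candidate latent concepts.
labels = AgglomerativeClustering(n_clusters=5, linkage="ward").fit_predict(vectors)
for cluster_id in sorted(set(labels)):
    members = [t for t, lab in zip(tokens, labels) if lab == cluster_id]
    print(cluster_id, members)
```

In practice the paper works with far more sentences and inspects the resulting clusters at every layer; the toy corpus here only shows the mechanics of going from a pre-trained model to token clusters.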

Consider a cluster of words consisting of the first names of footballers on the German national team, where all of the names occur at the start of a sentence. The following series of tags would be assigned to the cluster: semantic:origin:Germany, semantic:entertainment:sport:football, semantic:namedentity:person:firstname, syntax:position:firstword. Here, we preserve the hierarchy at various levels (e.g., sport, person name, origin), which can be used to combine clusters to analyze a larger group.
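A small sketch of how such hierarchical tags might be stored and queried is given below. The cluster ids and the second cluster are hypothetical; only the tag strings for the first cluster come from the example above. The point is that, because each tag is a colon-separated path, a coarse prefix query merges related clusters into one larger group.

```python
# Minimal sketch: hierarchical cluster tags as colon-separated paths,
# queried by prefix to combine clusters at a coarser level of the hierarchy.
cluster_tags = {
    "c17": [  # cluster id is illustrative
        "semantic:origin:Germany",
        "semantic:entertainment:sport:football",
        "semantic:namedentity:person:firstname",
        "syntax:position:firstword",
    ],
    "c42": [  # hypothetical second cluster, for the sake of the query below
        "semantic:origin:Brazil",
        "semantic:entertainment:sport:football",
        "semantic:namedentity:person:firstname",
    ],
}

def clusters_with_prefix(tags_by_cluster, prefix):
    """Return ids of clusters that carry at least one tag starting with `prefix`."""
    return [
        cid for cid, tags in tags_by_cluster.items()
        if any(tag.startswith(prefix) for tag in tags)
    ]

# A coarse prefix combines both football-player clusters into one group.
print(clusters_with_prefix(cluster_tags, "semantic:entertainment:sport"))
```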

The two clusters of decimal numbers (Figure 3a, 3b) look quite similar but are semantically different: the former represents numbers appearing as percentages, e.g., 9.6% or 2.4 percent, while the latter captures monetary figures, e.g., 9.6 billion Euros or 2.4 million dollars. We found these two clusters to be siblings, which shows that BERT learns a morpho-semantic hierarchy: it groups all decimal numbers together (morphologically) and then makes a further semantic distinction by dividing them into percentages and monetary figures. Such a subtle difference in the usage of decimal numbers shows the importance of using fine-grained concepts for interpretation.
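The sibling relationship itself can be read directly off a hierarchical clustering: two clusters are siblings when they are the two children merged at the same node of the tree. The sketch below illustrates this with SciPy on toy vectors standing in for cluster centroids; the vectors and concept labels are made up for illustration and are not the paper's data.

```python
# Minimal sketch (assumed inspection procedure, not the paper's code):
# read sibling relationships off a SciPy linkage matrix.
import numpy as np
from scipy.cluster.hierarchy import linkage

# Toy 2-D vectors standing in for the centroids of four latent concepts.
concept_vectors = np.array([
    [0.9, 0.1],   # 0: decimal numbers used as percentages (hypothetical)
    [0.8, 0.2],   # 1: decimal numbers used as monetary figures (hypothetical)
    [0.1, 0.9],   # 2: month names (hypothetical)
    [0.2, 0.7],   # 3: person first names (hypothetical)
])

Z = linkage(concept_vectors, method="average")

def are_siblings(Z, a, b):
    """True if leaf clusters a and b are merged directly under the same parent."""
    return any({int(row[0]), int(row[1])} == {a, b} for row in Z)

print(are_siblings(Z, 0, 1))  # percentages vs. monetary figures -> True here
```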