Lost in Inference: Rediscovering the Role of Natural Language Inference for Large Language Models
We investigate whether NLI tasks, which are rarely used for LLM evaluation today, can still be informative for evaluating LLMs. Focusing on five NLI benchmarks across six models of different scales, we examine whether they can discriminate between models of different sizes and qualities, and how their accuracies develop over the course of training. We further examine the extent to which models' softmax distributions align with human label distributions in cases where statements are ambiguous or vague. Overall, our results paint a positive picture for the NLI tasks: they discriminate well between models at various stages of training, yet are not (all) saturated.
We also find that accuracies (computed with respect to the majority label) are higher when the entropy of the human label distribution is low; when humans disagree, models are more likely to select one of the less preferred labels.
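To make these two quantities concrete, the following minimal Python sketch computes the entropy of a human label distribution and majority-label accuracy. It is illustrative only: the function names and the toy annotation counts are our own, not taken from any of the benchmarks used in the paper.

```python
import numpy as np

def label_entropy(counts):
    """Shannon entropy (in bits) of a human annotation count vector."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]  # drop zero-probability labels to avoid log(0)
    return float(-(p * np.log2(p)).sum())

def majority_accuracy(model_preds, annotation_counts):
    """Accuracy of model predictions against the per-example majority human label."""
    gold = [int(np.argmax(c)) for c in annotation_counts]
    return float(np.mean([p == g for p, g in zip(model_preds, gold)]))

# Toy example: three NLI items, each with annotator counts over
# (entailment, neutral, contradiction). Values are hypothetical.
counts = [[5, 0, 0], [1, 3, 1], [0, 1, 4]]
preds = [0, 1, 2]  # hypothetical model predictions (label indices)
print([round(label_entropy(c), 2) for c in counts])  # [0.0, 1.37, 0.72]
print(majority_accuracy(preds, counts))              # 1.0
```

In this framing, an item like `[1, 3, 1]` has high label entropy (humans disagree), and the paper's observation is that majority-label accuracy tends to drop on such items.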
2.2 Subjectivity in NLP tasks

Another line of work relevant to ours considers the behaviour of models in cases where humans disagree on the correct label for a particular sample (for an overview, see Plank, 2022). The ground-truth labels for NLP benchmarks are often determined by taking the majority label among human annotators. This simplifies the data annotation process and also makes evaluation easier. However, several previous studies have noted that human disagreement in annotations for NLP datasets reflects the absence of a single ground-truth label, rather than noise in the annotation process.
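Under this view, a natural alternative to majority-label accuracy is to compare the model's full softmax distribution with the empirical human label distribution, as done in the alignment analysis above. A minimal sketch of one such comparison, using Jensen-Shannon distance (our choice of divergence for illustration; the paper's exact metric may differ, and the inputs below are hypothetical), follows:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def soft_alignment(model_probs, annotation_counts):
    """Mean Jensen-Shannon distance (base 2) between the model's softmax
    over NLI labels and the empirical human label distribution;
    0 means identical distributions, so lower is better aligned."""
    human = np.asarray(annotation_counts, dtype=float)
    human = human / human.sum(axis=1, keepdims=True)
    return float(np.mean([jensenshannon(m, h, base=2)
                          for m, h in zip(model_probs, human)]))

# Hypothetical softmax outputs for the same three toy items as above.
model_probs = [[0.9, 0.08, 0.02], [0.3, 0.5, 0.2], [0.1, 0.3, 0.6]]
counts = [[5, 0, 0], [1, 3, 1], [0, 1, 4]]
print(round(soft_alignment(model_probs, counts), 3))
```

Unlike majority-label accuracy, such a distribution-level score does not penalise a model for spreading probability mass across labels on items where annotators themselves disagree.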