Conceptual Mirrors: Reflecting on LLMs' Interpretations of Ideas

April 23, 2024

"The general fact is that language is not just a reflection of our social lives but also an instrument that shapes our thoughts, feelings, and actions in powerful ways." - Noam Chomsky

Large language models (LLMs) have become increasingly commonplace in our daily lives, from LLM agents that ask you medical questions before a doctor's appointment to the all-purpose ChatGPT. However, as these models ingrain themselves in society, several questions arise about their inner workings. A particularly important one is how LLMs reason about the social world. In other words, what do their conceptual representations of human life and behavior look like? Understanding the implicit conceptual networks that underpin LLMs' reasoning and decision-making is vital to their safety, effectiveness, and ethical use. Questions about how LLMs reason, learn, and conceptualize topics can be addressed by drawing on methods from cognitive science as these models continue to evolve.

Cognitive science, and in particular cognitive psychology, has long explored the idea of conceptual representations. Conceptual representations are mental constructs that encode knowledge about concepts, categories, objects, events, or relationships in the world. These representations are formed from our experiences, perceptions, and interactions with our environment. They help us organize and make sense of information, allowing us to understand, categorize, reason about, and communicate ideas effectively. In fact, because they constitute the meaning of objects, events, and abstract ideas, they are the basis for object recognition, action planning, language, and thought (Humphreys, Riddoch, & Quinlan, 1988). Given how central concepts and conceptual representations are to how humans interact with and understand the world around them, it is vital to explore how language models learn and represent the structure of ideas, and how human-like those representations are. Of particular interest for the application of LLMs is how they conceptualize people in general and different groups of people in particular.

Before diving into how LLMs represent humans, we can ask fundamental questions such as whether language alone can give rise to complex concepts, or whether embodied experience is essential. LLMs, trained on massive amounts of data (albeit from very limited modalities, such as text and images), can be viewed as disembodied learners. This offers a unique avenue for exploring human conceptual representations. Despite receiving little multimodal input and having no physical body to interact with the world, both previously considered vital for the development of conceptual representations, LLMs may exhibit human-like performance in various cognitive tasks, which suggests a certain degree of cognitive skill and awareness (see Eisape et al., 2023; Momennejad et al., 2024; Stevenson et al., 2023; Subramonyam et al., 2023).

Following this line of thought, Xu and colleagues (2023) explored the roles of language and embodied experience in shaping human conceptual representations, using LLMs (GPT-3.5 and GPT-4) to model human cognition. They collected word ratings across six domains from ChatGPT and compared them with ratings produced by human participants. Their research showed that some domains, such as emotion, salience, and mental visualization, do not rely heavily on sensory input and can therefore be acquired through language alone. However, there was a noticeable gap between the LLMs and humans in the sensorimotor domains. GPT-4, trained on images as well as text rather than text alone, performed better than the solely linguistically trained GPT-3.5. While their results show that conceptual representations do emerge in LLMs, they also exemplify the benefits of multimodal learning, where “the whole is greater than the sum of its parts”: integrating diverse modalities produces a more human-like understanding.
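
To make this kind of comparison concrete, here is a minimal sketch, not the authors' actual pipeline, of how one might elicit word ratings from an LLM through the OpenAI chat API and correlate them with human norms. The prompt wording, the model name, and the tiny `human_ratings` dictionary are illustrative assumptions, not materials from Xu et al. (2023).

```python
# Minimal sketch of an LLM word-rating probe, loosely inspired by Xu et al. (2023).
# The prompt, model name, and "human norms" below are illustrative placeholders.
from openai import OpenAI
from scipy.stats import spearmanr

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical human norms: mean ratings (1-7) of how strongly each word
# evokes a visual mental image.
human_ratings = {"thunder": 4.9, "justice": 2.1, "apple": 6.5, "regret": 2.8}

def rate_word(word: str, model: str = "gpt-4") -> float:
    """Ask the model to rate a word on a 1-7 mental-visualization scale."""
    prompt = (
        f"On a scale from 1 (not at all) to 7 (very strongly), how strongly "
        f"does the word '{word}' evoke a visual mental image? "
        f"Answer with a single number."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    # A real pipeline would parse the reply more defensively.
    return float(response.choices[0].message.content.strip())

# Compare the model's ratings with the human norms via rank correlation.
llm_ratings = [rate_word(w) for w in human_ratings]
rho, p = spearmanr(llm_ratings, list(human_ratings.values()))
print(f"LLM-human rank correlation: rho={rho:.2f} (p={p:.3f})")
```

In practice, studies of this kind average many ratings per word and span far larger word lists, but the basic logic is the same: elicit judgments from the model and ask how well they track human judgments.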

Other research has investigated LLMs’ representations of words from specific domains such as space and time (Gurnee & Tegmark, 2023), color (Patel & Pavlick, 2022), or even world states in a game (Li et al., 2022). These and other studies have uncovered notable structural resemblances between the conceptual framework a model constructs within a specific context and the real-world counterpart in which those concepts are grounded. However, little work has probed LLMs' capacity for conceptual inference. To address this gap, a group of scientists led by Xu (Xu et al., 2024) developed a case study evaluating various LLMs' capacity for conceptual inference. They repurposed the reverse dictionary task and existing lexical-semantics datasets to probe for conceptual representations. They found that, given a few description-word pairs, LLMs can effectively learn to infer concepts from complex linguistic descriptions. However, language models exhibit substantial variability in their conceptual inference ability.
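
The reverse dictionary task itself is simple to picture: give the model a definition-style description plus a few description-word demonstrations, and ask it to name the concept. The sketch below illustrates that setup; the example pairs, prompt format, and model name are my own assumptions, not the probe or datasets used by Xu et al. (2024).

```python
# Minimal sketch of a few-shot reverse-dictionary probe, in the spirit of
# Xu et al. (2024). The demonstrations and prompt format are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A handful of in-context description -> word demonstrations.
examples = [
    ("a piece of furniture with a flat top and legs, used for eating or working", "table"),
    ("a strong feeling of wanting something that belongs to someone else", "envy"),
    ("an instrument that shows direction using a magnetized needle", "compass"),
]

def reverse_dictionary(description: str, model: str = "gpt-4") -> str:
    """Infer the single word that best matches a definition-style description."""
    shots = "\n".join(f"Description: {d}\nWord: {w}" for d, w in examples)
    prompt = (
        "Given a description, answer with the single English word it defines.\n\n"
        f"{shots}\nDescription: {description}\nWord:"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

print(reverse_dictionary("a period of ten years"))  # a correct answer would be "decade"
```

Scoring how often the returned word matches the target, across many descriptions and models, is one way to quantify the variability in conceptual inference that the authors report.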

Despite all this work on whether and how LLMs can create human-like conceptual representations of the world, we still know very little about how these models conceptualize and represent us. Do LLMs have different representations for different groups of people? How does bias in the training data shift their understanding of those groups? As they learn, do these conceptual representations evolve or remain stable? How does the way LLMs reason about humans and human behavior affect the way they interact with their users? Questions of this sort are vital because these models are only safe and ethical to use to the extent that we understand how they reason about people.

For example, if we do not understand how LLMs deployed for medical purposes reason about different groups, we run the risk of building and refining algorithms that propagate potentially harmful biases, which could result in inappropriate or withheld care for individuals. As a substance use/misuse researcher with a focus on natural language and computational linguistics, I question whether LLMs deployed for mental health services or concerns are able to make unbiased judgments about the individuals who use them. Are societal biases also ingrained in these models? Will we simply carry pre-existing judgments into the era of AI? It is vital that, as we continue to unveil what LLMs and GenAI can do for us, we also begin to question how their training data, their code, their engineers, and their learning progression shape how they reason about people.

References

  1. Humphreys, G. W., Riddoch, M. J., & Quinlan, P. T. (1988). Cascade processes in picture identification. Cognitive Neuropsychology, 5(1), 67-104.

  2. Eisape, T., Tessler, M. H., Dasgupta, I., Sha, F., van Steenkiste, S., & Linzen, T. (2023). A systematic comparison of syllogistic reasoning in humans and language models. arXiv preprint arXiv:2311.00445.

  3. Momennejad, I., Hasanbeig, H., Vieira Frujeri, F., Sharma, H., Jojic, N., Palangi, H., ... & Larson, J. (2024). Evaluating cognitive maps and planning in large language models with CogEval. Advances in Neural Information Processing Systems, 36.

  4. Stevenson, C. E., ter Veen, M., Choenni, R., van der Maas, H. L., & Shutova, E. (2023). Do large language models solve verbal analogies like children do? arXiv preprint arXiv:2310.20384.

  5. Subramonyam, H., Pondoc, C. L., Seifert, C., Agrawala, M., & Pea, R. (2023). Bridging the Gulf of Envisioning: Cognitive Design Challenges in LLM Interfaces. arXiv preprint arXiv:2309.14459.

  6. Xu, Q., Peng, Y., Wu, M., Xiao, F., Chodorow, M., & Li, P. (2023). Does conceptual representation require embodiment? Insights from large language models. arXiv preprint arXiv:2305.19103.

  7. Gurnee, W., & Tegmark, M. (2023). Language models represent space and time. arXiv preprint arXiv:2310.02207.

  8. Patel, R., & Pavlick, E. (2022). Mapping language models to grounded conceptual spaces. In International Conference on Learning Representations.

  9. Li, K., Hopkins, A. K., Bau, D., Viégas, F., Pfister, H., & Wattenberg, M. (2022). Emergent world representations: Exploring a sequence model trained on a synthetic task. arXiv preprint arXiv:2210.13382.

  10. Xu, N., Zhang, Q., Zhang, M., Qian, P., & Huang, X. (2024). On the Tip of the Tongue: Analyzing Conceptual Representation in Large Language Models with Reverse-Dictionary Probe. arXiv preprint arXiv:2402.14404.