Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?*
Newly developed large language models (LLMs), because of how they are trained and designed, can be thought of as implicit computational models of humans: a homo silicus.
I consider the reasons why AI experiments might be helpful in understanding actual humans. The core of the argument is that LLMs, by nature of their training and design, are (1) computational models of humans and (2) likely to possess a great deal of latent social information.