Anthropic’s latest research asks: whose opinions are the responses of large language models most similar to, when compared with the perspectives of survey participants across the world?

The paper introduces a framework for measuring the representation of subjective global opinions in language models. The authors build a dataset called GlobalOpinionQA from questions and answers in cross-national surveys, and propose a metric that quantifies the similarity between language model responses and people’s responses, conditioned on country. Experiments with this framework show that the model’s responses are most similar to the opinion distributions of the USA and Canada, as well as some European and South American countries.
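As a rough illustration of how such a per-country similarity score could work, the sketch below compares a model’s probability distribution over a question’s answer options with each country’s human response distribution, using 1 minus the Jensen-Shannon distance. This is an assumption about the general approach rather than the paper’s exact implementation, and the country names and numbers are purely hypothetical.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def opinion_similarity(model_probs, country_probs):
    """Similarity between a model's answer distribution and a country's
    human answer distribution for one survey question, defined here as
    1 minus the Jensen-Shannon distance (higher = more similar)."""
    p = np.asarray(model_probs, dtype=float)
    q = np.asarray(country_probs, dtype=float)
    p, q = p / p.sum(), q / q.sum()           # normalize to probability distributions
    return 1.0 - jensenshannon(p, q, base=2)  # JS distance with base 2 is bounded in [0, 1]

# Hypothetical example: one 4-option survey question.
model = [0.55, 0.25, 0.15, 0.05]              # model's probabilities over the answer options
countries = {                                 # made-up human response distributions
    "Country A": [0.50, 0.30, 0.15, 0.05],
    "Country B": [0.30, 0.30, 0.25, 0.15],
    "Country C": [0.10, 0.20, 0.30, 0.40],
}

for name, dist in countries.items():
    print(f"{name}: {opinion_similarity(model, dist):.3f}")
```

In this toy setup, a country whose respondents pick answers in roughly the same proportions as the model scores close to 1, while a country with a very different opinion distribution scores closer to 0; averaging such scores over many questions would yield a per-country similarity ranking.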

The paper highlights a few key points:

  • The paper presents a framework to measure how similar the responses of large language models (LLMs) are to the opinions of participants from different countries.

  • A dataset called GlobalOpinionQA is created using questions and answers from cross-national surveys.

  • A metric is derived to capture the similarity between LLM responses and people’s responses, conditioned on country.

  • Experiments conducted using this framework reveal that LLM responses are most similar to the opinion distributions of the USA and Canada, as well as certain European and South American countries.