• Media type: E-book
  • Title: Decoding GPT's hidden "rationality" of cooperation
  • Contributors: Bauer, Kevin [author]; Liebich, Lena [author]; Hinz, Oliver [author]; Kosfeld, Michael [author]
  • Published: [Frankfurt am Main]: Leibniz Institute for Financial Research SAFE, Sustainable Architecture for Finance in Europe, [2023]
  • Published in: SAFE working paper ; 401
  • Extent: 1 online resource (approximately 34 pages); illustrations
  • Language: English
  • Subjects: large language models ; cooperation ; goal orientation ; economic rationality ; grey literature
  • Description: In current discussions of large language models (LLMs) such as GPT, understanding their ability to emulate facets of human intelligence is central. Using behavioral economic paradigms and structural models, we investigate GPT's cooperativeness in human interactions and assess its rational, goal-oriented behavior. We find that GPT cooperates more than humans and holds overly optimistic expectations about human cooperation. Intriguingly, additional analyses reveal that GPT's behavior is not random; it displays a level of goal-oriented rationality surpassing that of its human counterparts. Our findings suggest that GPT hyper-rationally aims to maximize social welfare, coupled with a drive for self-preservation. Methodologically, our research highlights how structural models, typically employed to decipher human behavior, can illuminate the rationality and goal orientation of LLMs. This opens a compelling path for future research into the intricate rationality of sophisticated yet enigmatic artificial agents.
  • Access status: Open access