{"id":2605,"date":"2025-07-15T04:36:54","date_gmt":"2025-07-15T04:36:54","guid":{"rendered":"https:\/\/ci.acm.org\/2025\/?page_id=2605"},"modified":"2025-07-18T21:01:49","modified_gmt":"2025-07-18T21:01:49","slug":"adam-kalai","status":"publish","type":"page","link":"https:\/\/ci.acm.org\/2025\/speakers\/adam-kalai\/","title":{"rendered":"Adam Kalai"},"content":{"rendered":"\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"1024\" data-src=\"https:\/\/ci.acm.org\/2025\/wp-content\/uploads\/2025\/05\/Instagram-Post-38-CI-25-Designs-1024x1024.png\" alt=\"\" class=\"wp-image-2354 lazyload\" data-srcset=\"https:\/\/ci.acm.org\/2025\/wp-content\/uploads\/2025\/05\/Instagram-Post-38-CI-25-Designs-1024x1024.png 1024w, https:\/\/ci.acm.org\/2025\/wp-content\/uploads\/2025\/05\/Instagram-Post-38-CI-25-Designs-300x300.png 300w, https:\/\/ci.acm.org\/2025\/wp-content\/uploads\/2025\/05\/Instagram-Post-38-CI-25-Designs-150x150.png 150w, https:\/\/ci.acm.org\/2025\/wp-content\/uploads\/2025\/05\/Instagram-Post-38-CI-25-Designs-768x768.png 768w, https:\/\/ci.acm.org\/2025\/wp-content\/uploads\/2025\/05\/Instagram-Post-38-CI-25-Designs.png 1080w\" data-sizes=\"(max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/1024;\" \/><\/figure>\n\n\n\n\n\n<h3>Human Feedback in Reinforcement Learning Improves Chatbot Fairness<\/h3>\n\n\n\n<strong>Adam Kalai<br><\/strong>Research Scientist, AI Safety and Ethics, OpenAI\n\n\n\nAdam Tauman Kalai is a Research Scientist at OpenAI, specializing in AI Safety and Ethics. His research spans algorithms, fairness, machine learning theory, game theory, and crowdsourcing. Adam earned his BA from Harvard University and his PhD from Carnegie Mellon University, after which he served as an Assistant Professor at both Georgia Tech and the Toyota Technological Institute at Chicago. He also contributes to Project CETI\u2019s science team, an interdisciplinary initiative dedicated to decoding sperm whale communication. In addition, Adam has co-chaired leading conferences such as COLT (Conference on Learning Theory), HCOMP (Conference on Human Computation), and NEML. His work has been recognized with numerous honors, including several best paper awards, an NSF CAREER Award, an Alfred P. 
Sloan Fellowship, and most notably the Majulook Prize.</p>

<p><strong>Learn more</strong><br>Website: <a href="https://kal.ai/" target="_blank" rel="noopener">https://kal.ai/</a></p>

<p><strong>Read these</strong><br>
<a href="https://arxiv.org/abs/2410.19803" target="_blank" rel="noopener">First-Person Fairness in Chatbots</a> by Tyna Eloundou et al., 2024.<br>
<a href="https://dl.acm.org/doi/abs/10.1145/3618260.3649777" target="_blank" rel="noopener">Calibrated Language Models Must Hallucinate</a> by Adam Tauman Kalai and Santosh Vempala, 2024.<br>
<a href="https://proceedings.mlr.press/v202/aher23a.html" target="_blank" rel="noopener">Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies</a> by Gati Aher, Rosa Arriaga, and Adam Tauman Kalai, 2023.</p>