When Thinking Pays Off: Incentive Alignment for Human-AI Collaboration
Joshua Holstein, Patrick Hemmer, Gerhard Satzger, and Wei Sun
Collaboration with artificial intelligence (AI) has improved human decision-making across various domains by leveraging the complementary capabilities of humans and AI. Yet humans systematically overrely on AI advice, even when their independent judgment would yield superior outcomes, fundamentally undermining the potential of human-AI complementarity. Building on prior work, we identify prevailing incentive structures in human-AI decision-making as a structural driver of this overreliance. To address this misalignment, we propose an alternative incentive scheme designed to counteract systematic overreliance. We evaluate this approach empirically in a behavioral experiment with 180 participants and find that the proposed mechanism significantly reduces overreliance. We further show that while well-designed incentives can enhance collaboration and decision quality, poorly designed incentives can distort behavior, produce unintended consequences, and ultimately degrade performance. These findings underscore the importance of aligning incentives with the task context and with human-AI complementarity, and suggest that effective collaboration requires a shift toward context-sensitive incentive design.