A Neurosymbolic Approach to Natural Language Formalization and Verification
Sam Bayless, Stefano Buliani, Darion Cassel, Byron Cook, Duncan Clough, Rémi Delmas, Nafi Diallo, Ferhat Erata, Nick Feng, Dimitra Giannakopoulou, Aman Goel, Aditya Gokhale, Joe Hendrix, Marc Hudak, Dejan Jovanović, Andrew M. Kent, Benjamin Kiesl-Reiter, Jeffrey J. Kuna, Nadia Labai, Joseph Lilien, Divya Raghunathan, Zvonimir Rakamarić, Niloofar Razavi, Michael Tautschnig, et al.
Large Language Models perform well at natural language interpretation and reasoning, but their inherent stochasticity limits their adoption in regulated industries like finance and healthcare that operate under strict policies. To address this limitation, we present a two-stage neurosymbolic framework that (1) uses LLMs with optional human guidance to formalize natural language policies, allowing fine-grained control of the formalization process, and (2) uses inference-time autoformalization to validate the logical correctness of natural language statements. When correctness is paramount, we perform multiple redundant formalization steps at inference time, cross-checking the formalizations for semantic equivalence. Our benchmarks demonstrate that our approach achieves over 99% soundness, indicating a near-zero false-positive rate in identifying logical validity. Our approach produces auditable logical artifacts that substantiate the verification outcome and can be used to improve the original text.
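To make the cross-check concrete, the sketch below shows one way the semantic-equivalence comparison of redundant formalizations could be realized with an off-the-shelf SMT solver (Z3, via its Python bindings). The example policy, the variable names, and the two candidate formulas are illustrative assumptions, not artifacts produced by the framework described in the paper.

```python
# Minimal sketch of cross-checking redundant formalizations for semantic
# equivalence. Assumes the z3-solver package; the policy and formulas below
# are hypothetical stand-ins for LLM-produced formalizations.
from z3 import Bool, Or, Not, Implies, Solver, unsat

# Hypothetical policy: "A transaction over $10k requires manager approval."
over_10k = Bool("over_10k")
approved = Bool("manager_approved")

# Two redundant formalizations, as if produced by independent LLM calls.
f1 = Implies(over_10k, approved)   # direct conditional rendering
f2 = Or(Not(over_10k), approved)   # logically equivalent variant

def semantically_equivalent(a, b) -> bool:
    """True iff the formulas agree on every assignment, i.e. their
    disagreement (a != b) is unsatisfiable."""
    s = Solver()
    s.add(a != b)  # satisfiable exactly when the formalizations diverge
    return s.check() == unsat

# Accept the formalization only when the redundant copies cross-check.
print(semantically_equivalent(f1, f2))  # True: the two renderings agree
```

Because equivalence is established by proof (unsatisfiability of any disagreement) rather than by sampling, any divergence between redundant formalizations is detected, which is consistent with the near-zero false-positive rate the abstract reports.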