
Neural Proof Nets

Konstantinos Kogkalidis 1, Michael Moortgat 1, Richard Moot 2
2 TEXTE - Exploration and exploitation of textual data
LIRMM - Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier
Abstract: Linear logic and the linear λ-calculus have a long-standing tradition in the study of natural language form and meaning. Among the proof calculi of linear logic, proof nets are of particular interest, offering an attractive geometric representation of derivations that is unburdened by the bureaucratic complications of conventional proof-theoretic formats. Building on recent advances in set-theoretic learning, we propose a neural variant of proof nets based on Sinkhorn networks, which allows us to translate parsing as the problem of extracting syntactic primitives and permuting them into alignment. Our methodology induces a batch-efficient, end-to-end differentiable architecture that actualizes a formally grounded yet highly efficient neuro-symbolic parser. We test our approach on ÆThel, a dataset of type-logical derivations for written Dutch, where it manages to correctly transcribe raw text sentences into proofs and terms of the linear λ-calculus with an accuracy as high as 70%.
Document type: Conference papers

Cited literature: 40 references

https://hal-lirmm.ccsd.cnrs.fr/lirmm-02952267
Contributor: Richard Moot
Submitted on: Tuesday, September 29, 2020 - 12:00:22 PM
Last modification on: Friday, November 6, 2020 - 11:42:26 AM

Identifiers

  • HAL Id: lirmm-02952267, version 1
  • arXiv: 2009.12702

Citation

Konstantinos Kogkalidis, Michael Moortgat, Richard Moot. Neural Proof Nets. 24th Conference on Computational Natural Language Learning (CoNLL), Nov 2020, Virtual, Dominican Republic. ⟨lirmm-02952267⟩

Metrics

  • Record views: 44
  • File downloads: 18