Quantitative Bounds for Sorting-Based Permutation-Invariant Embeddings
Nadav Dym and Matthias Wellershoff and Efstratios Tsoukanis and Daniel Levy and Radu Balan
We study the sorting-based embedding β_𝐀 : ℝ^{n×d} → ℝ^{n×D}, 𝐗 ↦ ↓(𝐗𝐀), where ↓ denotes column-wise sorting of matrices. Such embeddings arise in graph deep learning, where outputs should be invariant to permutations of graph nodes. Previous work showed that for sufficiently large D and appropriate 𝐀, the mapping β_𝐀 is injective and, moreover, satisfies a bi-Lipschitz condition. However, two gaps remain: first, the optimal size D required for injectivity is not yet known; second, no estimates of the bi-Lipschitz constants of this mapping are available. In this paper, we make substantial progress on both gaps. Regarding the first, we improve the best known upper bound on the embedding dimension D required for injectivity and provide a lower bound on the minimal injectivity dimension. Regarding the second, we construct matrices 𝐀 for which the bi-Lipschitz distortion of β_𝐀 depends quadratically on n and is entirely independent of d. We also show that the distortion of β_𝐀 is at least Ω(√n). Finally, we provide similar results for a variant of β_𝐀 whose output dimension is reduced by applying a linear projection.
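As a minimal illustration of why β_𝐀 is permutation-invariant, the following NumPy sketch implements 𝐗 ↦ ↓(𝐗𝐀): permuting the rows of 𝐗 permutes the rows of 𝐗𝐀, and sorting each column independently erases that row order. The dimensions n, d, D below are arbitrary choices for the demonstration, and the sort direction (ascending here) is a convention, not something fixed by the paper.

```python
import numpy as np

def beta(X, A):
    """Sorting-based embedding: form XA, then sort each column of XA
    independently (ascending). Any fixed sort direction works."""
    return np.sort(X @ A, axis=0)

rng = np.random.default_rng(0)
n, d, D = 5, 3, 8                    # illustrative sizes only
X = rng.normal(size=(n, d))
A = rng.normal(size=(d, D))

# A random n-by-n permutation matrix P acting on the rows of X.
P = np.eye(n)[rng.permutation(n)]

# Invariance: beta(PX, A) == beta(X, A), since column-wise sorting
# discards the ordering of the rows of XA.
assert np.allclose(beta(P @ X, A), beta(X, A))
```

Note that β_𝐀 is only invariant to *simultaneous* row permutations of 𝐗; two matrices that are not row-permutations of each other generally map to different outputs once D is large enough, which is exactly the injectivity question the abstract addresses.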