• Media type: E-article; Electronic conference proceedings; Other publication
  • Title: Learning TSP Requires Rethinking Generalization
  • Contributors: Joshi, Chaitanya K. [author]; Cappart, Quentin [author]; Rousseau, Louis-Martin [author]; Laurent, Thomas [author]
  • Published: Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2021
  • Language: English
  • DOI: https://doi.org/10.4230/LIPIcs.CP.2021.33
  • Keywords: Combinatorial Optimization; Deep Learning; Travelling Salesman Problem; Graph Neural Networks
  • Notes: This data source also contains holdings records that do not lead to a full text.
  • Description: End-to-end training of neural network solvers for combinatorial optimization problems such as the Travelling Salesman Problem is intractable and inefficient beyond a few hundred nodes. While state-of-the-art Machine Learning approaches perform comparably to classical solvers when trained on trivially small sizes, they are unable to generalize the learnt policy to larger instances of practical scale. Towards leveraging transfer learning to solve large-scale TSPs, this paper identifies inductive biases, model architectures and learning algorithms that promote generalization to instances larger than those seen in training. Our controlled experiments provide the first principled investigation into such zero-shot generalization, revealing that extrapolating beyond training data requires rethinking the neural combinatorial optimization pipeline, from network layers and learning paradigms to evaluation protocols.
  • Access status: Open access