Discussion 1

I loved reading the article. It was easy to understand. I have the following minor comments:

Abstract: What are the other ways of improving generalization, apart from pre-training? For the sake of novice readers, it would be better to explain pre-training in a single line in the abstract itself.

Background: Please make it more interactive, and add images of GNNs and GCNs for better visualization.

Minor fixes (grammar): "Can self-supervised training can improve the generalization and robustness capacity of the GCNs?"

MAYANKTIWARY (talk) 02:21, 14 February 2023

Thanks a lot for the review!

  • I have slightly modified the abstract to introduce what pre-training is. I am not sure whether I should discuss other methods that can improve generalization (batch normalization, larger batch sizes, more generalizable loss functions, more expressive GNN architectures, etc.), since that is a different rabbit hole. I am happy to hear your thoughts if you think otherwise.
  • I have added an image of a GCN to the background section. Based on David's point, I also feel there should be a foundational page on graph neural networks that would contain the figures you are requesting. For this page, I will assume some background knowledge of graph neural networks.
  • Fixed the grammatical error you pointed out, thanks!
NIKHILSHENOY (talk) 02:46, 15 February 2023