Critique

Outstanding read! I thoroughly enjoyed this page, and it helped me understand some core concepts. I have a few questions that I think are worth discussing.

1. Can you make the shift from the first paper to the second smoother? It currently feels a bit arbitrary. Related to David's point, some clarity on how the two papers are linked would be helpful.

2. It's a bit difficult to tell which method (self-supervised learning / pre-training) works best for improving generalization. Can you conclude with the method you think would be ideal?

3. I can see that you have added figures to highlight concepts. Can you add a line in each section pointing to the subfigure that corresponds to that subsection?

4. What does it mean that the GIN network is the most expressive? Could you add a line on that?

5. Can you explain intuitively why node-level or graph-level pre-training does not work as well as the introduced method?

MEHARBHATIA (talk) 07:35, 14 February 2023

Thanks a lot for the detailed review.

  • Since the connection between the two papers wasn't clear, I have modified the abstract to make it explicit that the two papers solve separate problems, renamed the headers of the two paper sections, and revised the conclusion. I hope this makes the page clearer and easier to follow.
  • Since they solve separate problems, I have stated in the conclusion the key takeaways for each task (node classification and graph classification).
  • Added some pointers to figures.
  • Added a line about the expressivity of the Graph Isomorphism Network (GIN); see the note after this list.
  • Added a section (https://wiki.ubc.ca/Course:CPSC522/Pretraining_Methods_for_Graph_Neural_Networks#Issues_with_using_only_Node-Level_and_Graph-Level_Pre-training_Strategy) on why using node-level or graph-level pre-training alone does not yield useful results.
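For anyone reading along, here is the gist of that expressivity line (my paraphrase of the result in Xu et al.'s "How Powerful are Graph Neural Networks?", on which the page's claim rests). GIN updates node embeddings by summing neighbour features and passing the result through an MLP:

h_v^{(k)} = \mathrm{MLP}^{(k)}\Big( \big(1 + \epsilon^{(k)}\big)\, h_v^{(k-1)} + \sum_{u \in \mathcal{N}(v)} h_u^{(k-1)} \Big)

Because the sum is injective over multisets of neighbour features (unlike the mean or max aggregators used in GCN and GraphSAGE), GIN can distinguish any pair of graphs that the 1-Weisfeiler-Lehman isomorphism test can, which is the upper bound on the discriminative power of message-passing GNNs. "Most expressive" is meant in that sense.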
NIKHILSHENOY (talk) 02:31, 15 February 2023