Critique 2

This is a well-written article. I enjoyed reading about an application of GCNs using knowledge graphs on a task such as commonsense reasoning. The second paper, which is about GCNs, provides descriptions relevant to understanding the architecture of the first paper. Each mathematical equation is nicely explained, and I liked that the author outlined certain drawbacks of the current approach and directions for future research. Here are some of my suggestions and thoughts after reading this page.

  • Although a KG can encode topological information between concepts, one drawback I see in this approach is that it can lack rich contextual information. For example, if a graph node is “Mona Lisa”, the graph depicts its relations to multiple other entities, but given only this neighbourhood information it might be hard to infer that it is a painting, or doing so might require traversing more hops in the KG. Wouldn't it make more sense to retrieve a more precise definition or piece of knowledge from external sources? For instance, the definition of Mona Lisa in Wiktionary is “A painting by Leonardo da Vinci, widely considered as the most famous painting in history”. I am not sure whether there has been work in this direction, but it would be nice if the next page explored it.
  • One part that I like is that the architecture can provide interpretable inferences. At the same time, the approach relies heavily on ConceptNet as the CKG (a static KG). I am wondering how this approach would work when a concept is not found in this KG, or when the KG provides older, outdated inferences.
  • What other downstream tasks could this approach be used for besides Commonsense QA?
  • Do you think that a Graph Attention Network could be used instead of a GCN? If so, could that remove the hierarchical attention layer from the architecture?
  • In the performance section, under Table 1, KagNet's performance is 58.9 on the OF test set, but under Table 3 the accuracy is 82.15, where the test set is IH. Is there a difference between the OF and IH test sets? Some clarity would be appreciated.
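To make the GAT-vs-GCN question above concrete, here is a minimal numpy sketch of the two layer types. This is not KagNet's actual implementation; the graph, weight shapes, and single-head attention vector `a` are all illustrative assumptions. The point is that a GAT layer learns per-neighbour weights (attention coefficients), whereas a GCN layer uses a fixed degree-based normalisation — which is why a GAT might subsume a separate attention step.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gcn_layer(A, H, W):
    # Vanilla GCN layer: symmetric-normalised adjacency (with self-loops)
    # aggregates neighbour features with *fixed* weights.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0)  # ReLU

def gat_layer(A, H, W, a):
    # Single-head GAT-style layer: learned attention coefficients replace
    # the fixed normalisation, so each node weights neighbours adaptively.
    A_hat = A + np.eye(A.shape[0])
    Z = H @ W
    out = np.zeros_like(Z)
    for i in range(A.shape[0]):
        nbrs = np.where(A_hat[i] > 0)[0]
        # e_ij = LeakyReLU(a^T [z_i || z_j]) for each neighbour j
        e = np.array([np.concatenate([Z[i], Z[j]]) @ a for j in nbrs])
        e = np.where(e > 0, e, 0.2 * e)     # LeakyReLU
        alpha = softmax(e)                  # attention over neighbours
        out[i] = np.maximum(alpha @ Z[nbrs], 0)
    return out

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # toy 3-node graph
H = rng.normal(size=(3, 4))   # node features
W = rng.normal(size=(4, 2))   # shared weight matrix
a = rng.normal(size=4)        # attention vector (2 * output dim)
print(gcn_layer(A, H, W).shape)     # (3, 2)
print(gat_layer(A, H, W, a).shape)  # (3, 2)
```

Both layers produce the same output shape, so a GAT layer could in principle be swapped in; whether it would make KagNet's separate hierarchical attention redundant is exactly the open question raised above.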
NIKHILSHENOY (talk) 20:54, 14 February 2023

Thank you, Nikhil, for your critique! I am glad you enjoyed reading this page. I have added a link to your excellent foundation page on Deep Learning on Graph structures.

I will use your feedback to decide on the papers for the March submission. Thank you for pointing out the confusion between the OF and IH terms; I have fixed that.

Cheers

MEHARBHATIA (talk) 04:52, 19 February 2023