Course talk:CPSC522/Deep Neural Network

From UBC Wiki

Contents

Thread title | Replies | Last modified
Critiques | 1 | 03:39, 14 March 2016
Critique | 1 | 03:37, 14 March 2016
Comments | 1 | 07:54, 11 March 2016
Some suggestions | 1 | 07:27, 11 March 2016

Critiques

Hi Tanuj Kr Aasawat,


Nice draft. However, from my perspective, you can improve the effectiveness of this page:

  1. By providing some background on the available literature: give an indication of how the literature space is organized, so readers get a better idea of the existing gaps and the area these two papers fit into.
  2. As has already been mentioned, it would be much better to make a smoother transition between the two papers and to elaborate on the connection between them.
  3. "Result" doesn't represent what you wrote in that section (paper 1); I believe it should be "Discussion" or "Discussion and Conclusion". You are not obligated to have a result section, as this is a summary and the overall outcomes are much more important than specific ones.
  4. The training models and the leaf-node evaluation equation (paper 2) would benefit from some Math tags.
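For item 4, assuming the equation meant is the mixed leaf evaluation from the AlphaGo paper, the Math-tagged version would look something like:

```latex
% Leaf evaluation in AlphaGo's tree search (Silver et al., 2016):
% the value-network estimate v_theta(s_L) is mixed with the
% fast-rollout outcome z_L via a weighting parameter lambda.
V(s_L) = (1 - \lambda)\, v_\theta(s_L) + \lambda\, z_L
```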


All the above aside, it was a good draft and I enjoyed reading it.

Good job,

Yaashaar

Yaashaar HadadianPour (talk) 02:39, 14 March 2016

Thanks Yaashaar. I'll incorporate your suggestions.

TanujKrAasawat (talk) 03:39, 14 March 2016
 

Critique

Hi Tanuj Kr Aasawat,

Excellent work. AlphaGo is also something I am interested in, and I appreciate your work on this topic. It is well organized and easy to understand. I suppose you will want to update the content with the result of the match between AlphaGo and Lee Sedol. Thanks.

Best regards,

Ke Dai

KeDai (talk) 00:11, 14 March 2016

Thanks. Yes, I will update the match results.

TanujKrAasawat (talk) 03:37, 14 March 2016
 

Comments

It is a good summary of both papers, but perhaps you could comment more on how the two papers are related.

YanZhao (talk) 04:41, 11 March 2016

Thanks Yan for the feedback. In the "Why AlphaGo is so successful?" section I have mentioned how it relates to the previous paper and what the major problem with the previous paper was, viz. shallow policies and value functions based on a linear combination of input features.
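To make the contrast concrete, here is a minimal sketch of what such a shallow evaluation looks like; the feature names and weights are purely illustrative, not taken from either paper:

```python
import numpy as np

# Hypothetical hand-crafted board features (illustrative only):
# e.g. stone count difference, liberties, pattern matches.
features = np.array([4.0, 2.0, 7.0, 1.0])

# Learned weights for the shallow, linear value function.
weights = np.array([0.5, -0.3, 0.1, 0.2])

# "Shallow" evaluation: a single linear combination of the
# input features, with no hidden layers in between.
value = float(weights @ features)
print(value)  # 2.3
```

A deep value network replaces this single dot product with many stacked nonlinear layers, which is what lets AlphaGo evaluate raw board positions directly.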

TanujKrAasawat (talk) 07:54, 11 March 2016
 

Some suggestions

Hi Tanuj Kr Aasawat,

Thank you for your page; it is well organized and easy to understand. AlphaGo is quite a hot topic these days, but there are many other board games such as chess. Could you say something about whether this framework would also work well on them?

Best regards, Jiahong Chen

JiahongChen (talk) 03:23, 11 March 2016

Thanks Jiahong for your comment. Regarding chess, this approach would indeed work well, because chess has a much smaller search space than Go. But the achievement of an AI beating a human champion at chess has already been unlocked by IBM (Deep Blue vs. Kasparov). Go is the most challenging because of its huge search space; in fact, the number of possible board positions in Go exceeds the number of atoms in the observable universe. :)
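As a rough back-of-envelope check (my own arithmetic, not from the page): each of the 361 points on a 19x19 board is empty, black, or white, giving an upper bound of 3^361 configurations, versus roughly 10^80 atoms in the observable universe:

```python
# Upper bound on 19x19 Go board configurations: 3 states per point.
# (The count of strictly *legal* positions is somewhat smaller,
# about 2.1e170 per Tromp and Farneback's enumeration.)
upper_bound = 3 ** (19 * 19)

# Order of magnitude: number of decimal digits minus one.
exponent = len(str(upper_bound)) - 1
print(exponent)  # 172, i.e. ~10^172, dwarfing the ~10^80 atoms estimate
```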

TanujKrAasawat (talk) 07:27, 11 March 2016