Course talk:CPSC522/Sentiment Analysis


Contents

  Thread title                  Replies   Last modified
  Suggestions                   1         04:30, 21 April 2016
  Feedback                      1         04:04, 21 April 2016
  Discussion with your page     1         03:42, 19 April 2016
  critique                      3         22:17, 18 April 2016

Suggestions

Hi Samprity,


It is a very good wiki page; you give a very interesting hypothesis and designed the experiment well. It is a really meaningful project, helping people decide whether a movie is worth spending money and time on.

My only suggestion is that, in the hypothesis part, it would be better to specify how you are going to examine the accuracy of the improved Naive Bayes algorithm. I also have one question about the experiment results. Furious Seven and Deadpool both have 8 stars, but they get totally different predictions. What may cause this result? Is it because movie reviews are not that formal? Or is there some special slang in different movies that your classifier failed to learn?


Best regards,

Jiahong Chen

JiahongChen (talk) 04:07, 21 April 2016

Thanks for the review! I will include how I measured the accuracy in the hypothesis.
Regarding the Fast and Furious 7 review, there could be some words that the classifier failed to learn. The Fast and Furious 7 review probably has fewer words that are similar to the training dataset of positive reviews. As TianQiChen mentioned, the tool gave "extremely entertaining" a 58% positive score but "7" an 81% negative score.

SamprityKashyap (talk) 04:30, 21 April 2016
 

Feedback

Hi Samprity,

Regarding the wiki page:

  • I think this is a typo?: "Movie was not !good" -> "Movie was !good" (Otherwise it's recursive.)
  • It seems that your model assigns 50% (completely uncertain) to any token that's not seen in the training dataset. It might be worthwhile to mention this somewhere in the wiki.
  • I'm not sure if hyperlinks for the movies are necessary, but I did click a few with positive reviews to check them out. :)
  • It may be helpful to show what the reviews were for these movies, maybe just a couple, because the table right now doesn't reflect how your model works (it doesn't even show the inputs).
  • Why couldn't naive Bayes predict correctly for the Fast and Furious example? A small bit of intuition would help. Possible culprit: using your tool, it gives "extremely entertaining" a 58% positive but "7" an 81% negative. ;)

Regarding your experiment:

  • I think you can clean up the dataset a bit more. Your tokenizer assigns a weight of 61% negative to the token "1", 53% negative to the token "2", but 58% positive to the token "3"! (For comparison, the token "good" only has a positive weight of 55%.) These numbers and artifacts such as "a" (positive 53%) can just be removed to yield better results, because using these tokens is essentially fitting to noise (a rough sketch of what I mean follows this list).
  • "One thing we observed was that the probability percentages for the same review varied if I loaded the application again." Why is this? If there is randomness in your experiment, can you mention it in the wiki? Is it because the training data is randomly picked each time?

The tool was fun to play with!

TianQiChen (talk) 03:26, 21 April 2016

Thank you for the feedback!

  • "Movie was not !good" -> "Movie was not !good" I am not removing the not as of now. Since the process occurs only once it is not going to be recursive. If time permits I will try to remove the not and test it out. The token "!good" gets stored in our Bayes classifer as having appeared in a negative review.
  • Yes the model assigns 50% (completely uncertain) to any token that is not seen in the training dataset. I will mention it in the page.
  • For the reviews I correlated number of stars with positive sentiment. I did paste of the reviews on the page. But most of them are too long and made the page look weird.
  • Thank you for finding the culprit for the Fast and Furious review!
  • If time permits I will try to clean up the dataset.
  • Yes, I did random sorting on the training data, which led to different probabilities. I will mention it on the page!
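For reference, here is a simplified sketch of the negation-marking idea (illustrative Python, not the exact code in the repository; the negation pattern is abbreviated):

  import re

  # After a negation word, prefix the next token with "!" so that
  # "not good" contributes the token "!good" instead of "good".
  # The negation word itself is kept for now.
  NEGATION = re.compile(r"^(not|no|never|n't)$", re.IGNORECASE)

  def mark_negation(tokens):
      marked = []
      negate_next = False
      for tok in tokens:
          if NEGATION.match(tok):
              marked.append(tok)            # keep "not" itself
              negate_next = True
          elif negate_next:
              marked.append("!" + tok)      # e.g. "good" -> "!good"
              negate_next = False
          else:
              marked.append(tok)
      return marked

  # mark_negation(["movie", "was", "not", "good"])
  # -> ["movie", "was", "not", "!good"]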
SamprityKashyap (talk) 04:04, 21 April 2016
 

Discussion with your page

Hi Samprity,

Nice page! I think our topics have something in common; I am also doing analysis on a movie dataset, and I found your topic interesting. I have some inquiries regarding your page.

  • Please check this sentence: "Bayes' theorem does exactly does that."
  • Can you give the link to your Cornell dataset, along with a snapshot showing what the dataset looks like and some introduction?
  • Which entropy technique did you finally use in your experiment?
  • For the negation regex, are there any missing words that are not included in the list but represent negation?
  • Can you give a flow chart of your system? I cannot get a full view of what your system looks like, and you said you modified the naive Bayes classifier. Can you give a detailed description of your improvement? Thanks

Regards,

Arthur

BaoSun (talk) 19:22, 18 April 2016

Thanks Arthur for the feedback.

  1. I have corrected "Bayes' theorem does exactly does that."
  2. I have already given the link to the dataset. I will give a screenshot of what the data looks like.
  3. In the GitHub repository, the unigram-with-negation tokenizer is implemented, as it had the best accuracy. I tested unigram with no negation, bigram, and trigram as well; a rough sketch of the difference follows this list.
  4. Yes, there could be some missing negation words. Please let me know if you can think of any more; I will add them to the regex.
  5. I will try to provide a flow chart. The modification was done in the tokenizer used during training. I will add more description of that.
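To make the tokenizer comparison concrete, here is a rough sketch of what the three options produce (illustrative Python only, not the repository code):

  def unigrams(words):
      # every word is its own token
      return list(words)

  def bigrams(words):
      # adjacent pairs, e.g. "extremely entertaining"
      return [" ".join(pair) for pair in zip(words, words[1:])]

  def trigrams(words):
      # adjacent triples
      return [" ".join(tri) for tri in zip(words, words[1:], words[2:])]

  words = "movie was not good".split()
  # unigrams(words) -> ['movie', 'was', 'not', 'good']
  # bigrams(words)  -> ['movie was', 'was not', 'not good']
  # trigrams(words) -> ['movie was not', 'was not good']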
SamprityKashyap (talk) 20:05, 18 April 2016
 

critique

Hi Samprity,

Interesting work about a challenging topic! It's especially good that you touched on so many of the challenges involved in sentiment analysis.

Section-specific feedback:

Abstract

  • Related pages section - I expected this to list related pages on the course wiki, but all the links are external. These links might be better suited to a "See Also" section at the end near the references.

Hypothesis

  • The hypothesis doesn't feel very precise to me. What modifications are you testing? What relationships are you trying to find? About which data?
  • It may improve the overall flow of the page if the hypothesis goes after the background. Or perhaps not; it's something to consider in any case.

What is Sentiment Analysis?

  • The figure doesn't seem to contribute anything to the page. It might be more useful to have a sample text with positive and negative polarity words highlighted.

Examples of Sentiment Analysis

  • I'd like to see a citation about the Obama administration's use of sentiment analysis.

Methodology

  • There are more grammar issues here than in the previous sections of the page.
  • You use both "we" and "I"; you should pick one (probably "we" since there are collaborators) and use it consistently.

Training

  • I have some concerns about negation handling as described (fair enough though, since negation is a non-trivial problem anyway). Two examples:
    • "This was far from the best movie I've ever seen" - a type of negation not handled by your regex.
    • "This movie didn't have very good characterization, but the CGI was amazing." - do you check for words such as "but" after negation?

Evaluation and results

  • Extra proofreading would be good here as well.
  • I thought from reading the page up to this point that probabilities were being determined for the polarity of individual words, and then you aggregate them somehow to determine an overall label for a sentence. Then, you provide results for entire movie reviews. As a result, I'm a little confused about what granularity you're aiming for, and what aggregation methods you use.
  • I don't see Neutral labels in the movie review results (or in the training data, for that matter). I might have missed something while reading, but is it assumed that all sentences in movie reviews will have some sentiment polarity?

Discussion and Future Work

  • I guess this is going back to the hypothesis, but were you going for 100%? Were you going for "better than random"?
JordonJohnson (talk) 20:56, 18 April 2016

Thanks for the great feedback!

  1. I will move the Related Pages section to a See Also section.
  2. I initially did have the hypothesis after the background. I will try to change the hypothesis to accommodate your suggestions.
  3. "This was far from the best movie I've ever seen" and "This movie didn't have very good characterization, but the CGI was amazing." - yes, neither of these is handled by the regex. (I am a newbie to ML.)
  4. I was mostly focusing on the tokenization part. The aggregation was taken care of by the library. I can look through the library and add some explanation; a sketch of the standard aggregation follows this list.
  5. Yes, neutral labels were not in the dataset.
  6. I wanted to improve the accuracy, basically better than random!
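Regarding point 4, my understanding is that the library combines the per-token probabilities in the standard naive Bayes way; a minimal sketch of that textbook aggregation (not the library's actual implementation) looks roughly like this:

  import math

  def classify(tokens, prior_pos, prior_neg, p_tok_pos, p_tok_neg):
      # Standard naive Bayes: sum the log-probabilities of each token under
      # each class and pick the larger total. Tokens unseen in training get
      # 0.5 for both classes, so they leave the decision unchanged.
      log_pos = math.log(prior_pos)
      log_neg = math.log(prior_neg)
      for tok in tokens:
          log_pos += math.log(p_tok_pos.get(tok, 0.5))
          log_neg += math.log(p_tok_neg.get(tok, 0.5))
      return "positive" if log_pos > log_neg else "negative"

  # Example with made-up numbers:
  # classify(["extremely", "entertaining"], 0.5, 0.5,
  #          {"entertaining": 0.9}, {"entertaining": 0.1}) -> "positive"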
SamprityKashyap (talk) 21:17, 18 April 2016

As far as I know, negation is nowhere close to being a solved problem, so no worries there as long as it's acknowledged in the page.

As for the position of the hypothesis in the page, it might be wise to wait for the other critiques as well, as that's more a stylistic decision than one with a correct/incorrect answer.

I think David really wants our hypotheses to be as precise as possible, so he's going to want to see "better than random" somewhere in your hypothesis section.

Thanks for the quick reply!

JordonJohnson (talk) 22:07, 18 April 2016

Thanks a lot for the constructive suggestions! I will modify the hypothesis to make it more specific.

SamprityKashyap (talk) 22:17, 18 April 2016