Suggestions for Improving the Accuracy of Affect Prediction in an Intelligent Tutoring System

Hi Ritika, thank you for your detailed feedback. In response, I have the following points to add:

1. I left out some important details while trying to make the hypothesis less verbose. Thank you for pointing that out. I have updated it. Please take a look and let me know whether it works.
2. The links on the page work for me for the most part. I randomly tried a couple of them and they seem to work. I will double-check all of them for the final draft.
3. This is a continuation of my previous work, but yes, I will try to add some more details.
4. As I mentioned, labels are needed to train the system, and the data was taken over a period of 14 minutes; this was the best available solution compared to more obtrusive methods that cause continuous distractions.
5. The data was taken using an eye-tracker, with a label recorded every 14 minutes. The details are in the paper for the MetaTutor study.
6. They can overlap. That is why the same dataset was divided into two separate datasets: one with a boredom label [bored or not], the other with a curiosity label [curious or not]. In the original dataset they did coincide, but as I mentioned, we take all of the information and simply divide it based on the label.
7. Various subjective denominations are used. They actually used a Likert scale, and the dataset I work on uses thresholding: a rating of 3 or above is taken as 'yes', and 'no' otherwise.
8. Why a classifier gives better results on one dataset and not on the other (and vice versa) is a bit tough to explain. Empirically, a set of models is always tried [within the boundaries of sanity!] and then the best is chosen. I will try to give some insight from my own understanding. As far as detecting a certain emotion goes, the true model for curiosity might be very different from the true model for boredom; the model found by RF on one dataset might be closer to one 'true model', while for the other dataset something else might work better. But the classifiers shown here work well in practice on binary classification problems [also in this particular domain!].
9. The analysis also has external links. I will try to add some intuition behind them on the page itself.

Let me know whether I was able to answer your queries. Also feel free to mention any other queries and/or concerns.
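For what it's worth, the label handling described in points 6 and 7 can be sketched in a few lines of Python. This is a purely illustrative example, not the actual pipeline: the record fields and the feature values are hypothetical, and only the thresholding rule (3 and above = yes) and the one-dataset-into-two split come from the discussion above.

```python
# Illustrative sketch: threshold Likert self-reports into binary labels,
# then split one dataset into two per-emotion binary datasets.
# Field names ("features", "boredom", "curiosity") are hypothetical.

def likert_to_binary(rating, threshold=3):
    """Ratings of `threshold` or above count as 'yes' (1), otherwise 'no' (0)."""
    return 1 if rating >= threshold else 0

def split_by_emotion(records):
    """Divide one labelled dataset into two binary-label datasets.

    Each record keeps ALL of its features; only the label column differs,
    so the same instance can be positive for boredom in one dataset and
    positive for curiosity in the other (i.e. the labels may overlap).
    """
    boredom_data = [(r["features"], likert_to_binary(r["boredom"])) for r in records]
    curiosity_data = [(r["features"], likert_to_binary(r["curiosity"])) for r in records]
    return boredom_data, curiosity_data

# Toy records: the first participant is both bored AND curious.
records = [
    {"features": [0.2, 0.9], "boredom": 4, "curiosity": 5},
    {"features": [0.7, 0.1], "boredom": 1, "curiosity": 4},
    {"features": [0.5, 0.5], "boredom": 3, "curiosity": 2},
]
bored, curious = split_by_emotion(records)
print([label for _, label in bored])    # [1, 0, 1]
print([label for _, label in curious])  # [1, 1, 0]
```

Because both derived datasets share the same feature vectors, a classifier that does well on the boredom labels can still do poorly on the curiosity labels, which is why a separate model is fit and evaluated for each.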

MDAbedRahman (talk) 19:41, 21 April 2016

Thanks for your clarifications, Abed. I am still a little uncomfortable with how a user can be bored and curious at the same time, and I don't see why the labels should overlap, but I guess those are just model parameters which you chose to use this way, which is perfectly fine.
As for the links, I am able to go from within your wiki page to the references and back, but not to the actual pages on the web. For instance, where you have cited the papers, I cannot access those papers from your wiki page. I think the web URLs are missing from the references. You could look at the reference section of my page, 'Analysing online dating trends using Weka', to see how to link to pages outside of the wiki.
Everything else looks good!
Ritika

RitikaJain (talk) 01:24, 23 April 2016

Thanks, Ritika, for your feedback. Users were seen to overlap in multiple emotions in the dataset, which might sound a bit counter-intuitive but was shown to be the case in the papers. As for the links, I am not sure whether you mean the external links or the paper citations. For the cited papers, I have only provided the names and authors, as I did on my previous pages; as far as I know, putting external links for cited work is not a necessity, but I will try to do so if I am able to. Thanks again for your feedback.

Abed

MDAbedRahman (talk) 02:01, 23 April 2016