Feedback

First, I think we need a clear definition of hidden features. Does that mean they have no effect on training? I also feel you need a validation set to compare different lambda values. The point of the test set is to measure the performance of the final model, and it should not be used to set hyperparameter values; otherwise, your model suffers from optimization bias on the test set. Besides lambda, it might be a good idea to examine how other hyperparameters affect performance and tune the model over those as well. Finally, I think it would be better to find at least one value of lambda that also improves performance on unseen data.
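
To make the suggestion concrete, here is a minimal sketch of what I mean by a proper train/validation/test protocol. It uses plain NumPy and synthetic ratings standing in for the article's data, and a simple SGD-trained matrix factorization as a stand-in for the article's collaborative filtering model; only the split logic is the point. Lambda is chosen on the validation set, and the test set is touched exactly once at the end:

```python
import numpy as np

# Hypothetical (user, item, rating) triples; in practice these would come
# from the dataset used in the article.
rng = np.random.default_rng(0)
n_users, n_items = 50, 40
ratings = [(u, i, rng.uniform(1, 5))
           for u in range(n_users) for i in range(n_items)
           if rng.random() < 0.2]

# One split into train / validation / test: lambda is selected on the
# validation set, and the test set is reserved for the final model only.
rng.shuffle(ratings)
n = len(ratings)
train, val, test = (ratings[:int(0.7 * n)],
                    ratings[int(0.7 * n):int(0.85 * n)],
                    ratings[int(0.85 * n):])

def fit_mf(data, lam, k=5, epochs=30, lr=0.01):
    """L2-regularized matrix factorization trained with SGD."""
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for u, i, r in data:
            pu, qi = U[u].copy(), V[i].copy()
            err = r - pu @ qi
            U[u] += lr * (err * qi - lam * pu)
            V[i] += lr * (err * pu - lam * qi)
    return U, V

def rmse(data, U, V):
    return np.sqrt(np.mean([(r - U[u] @ V[i]) ** 2 for u, i, r in data]))

# Pick lambda using validation error only.
best_lam, best_err = None, np.inf
for lam in [0.0, 0.01, 0.1, 1.0]:
    U, V = fit_mf(train, lam)
    err = rmse(val, U, V)
    if err < best_err:
        best_lam, best_err = lam, err

# The test set is used exactly once, to report the final error.
U, V = fit_mf(train + val, best_lam)
print("chosen lambda:", best_lam, "test RMSE:", rmse(test, U, V))
```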

FARDADHOMAFAR (talk) 05:59, 20 December 2023

Your article provides a solid analysis of collaborative filtering models and effectively addresses factors like training and testing error, overfitting, and the impact of hidden features on performance. However, one aspect that could be explored further is the potential reason behind the observed discrepancy in testing error despite the varying number of hidden features. Are there specific patterns or characteristics in the testing data that may contribute to this outcome? Delving deeper into the nature of the test set could provide additional insight into the model's behaviour and may suggest ways to refine the collaborative filtering approach. This would make your analysis more complete and give readers a more nuanced understanding of the factors influencing model performance.
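
One concrete way to probe this, sketched below with synthetic triples standing in for the actual splits, is to check how much training signal exists for the users and items that appear in the test set: a large share of cold-start or very sparse rows could keep the test error flat regardless of the number of hidden features.

```python
import numpy as np
from collections import Counter

# Hypothetical (user, item, rating) triples; in practice these would be the
# train/test splits used in the article.
rng = np.random.default_rng(1)
triples = [(u, i, rng.uniform(1, 5))
           for u in range(50) for i in range(40)
           if rng.random() < 0.2]
rng.shuffle(triples)
cut = int(0.8 * len(triples))
train, test = triples[:cut], triples[cut:]

# How much training signal exists for the users/items that appear in the test set?
train_user_counts = Counter(u for u, _, _ in train)
train_item_counts = Counter(i for _, i, _ in train)

cold_users = sum(1 for u, _, _ in test if train_user_counts[u] == 0)
cold_items = sum(1 for _, i, _ in test if train_item_counts[i] == 0)
sparse_rows = sum(1 for u, _, _ in test if train_user_counts[u] < 5)

print(f"test ratings: {len(test)}")
print(f"test ratings for unseen users: {cold_users}, unseen items: {cold_items}")
print(f"test ratings whose user has < 5 training ratings: {sparse_rows}")
```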

AmirhosseinAbaskohi (talk) 07:04, 21 December 2023