Didn't mean to imply I had historical view data yet.
The Thompson sampling trick looks useful for auto-converging to the best of several A/B versions and as a replacement for dithering. Below you are proposing another case to replace dithering, this time on a list of popular items? Dithering works on anything you can rank, but Thompson sampling usually implies a time dimension. The initial guess, the first Thompson sample, could be thought of as a form of dithering, I suppose? Haven't looked at the math, but it wouldn't surprise me to find they are very similar things.
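To make the comparison concrete, here is a minimal sketch of dithering as I understand it: add Gaussian noise to the log of each item's rank and re-sort, so lower-ranked items occasionally surface. The function name, the `epsilon` parameter, and the example list are all my own illustration, not anything from Mahout.

```python
import math
import random

def dither(ranked, epsilon=1.5, rng=random):
    """Reorder a ranked list by adding noise to log(rank).

    epsilon > 1 controls how much shuffling happens; epsilon = 1
    means no noise, so the original order is preserved.
    """
    sd = math.sqrt(math.log(epsilon))
    noisy = [(math.log(r + 1) + rng.gauss(0.0, sd), item)
             for r, item in enumerate(ranked)]
    return [item for _, item in sorted(noisy, key=lambda t: t[0])]
```

With epsilon close to 1 the list barely moves; larger values give items deep in the list a real chance of appearing near the top, which is the same "show uncertain things sometimes" effect Thompson sampling gets from posterior width.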
While we are talking about it, why aren't we adding things like cross-recommendations, dithering, popularity, and other generally useful techniques into the Mahout recommenders? All the data is there to do these things, and they could be packaged in the same Mahout jobs. They seem to be languishing a bit while technology and the art of recommendations moves on.
If we add temporal data to preference data, a bunch of new features come to mind, like hot lists or asymmetric train/query preference history.
If this model can be made Bayesian enough to sample from the posterior distribution of total popularity, then you can use the Thompson sampling trick and sort by sampled total views rather than estimated total views. That will give uncertain items (typically new ones) a chance to be shown in the rankings without flooding the list with newcomers.
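A rough sketch of what I mean, under the assumption of a Gamma-Poisson model for daily view counts (the item names and numbers are made up): each item's posterior daily view rate is Gamma(prior_a + views, prior_b + days), we draw one sample per item, and sort by the samples instead of the posterior means. An item with lots of history has a narrow posterior and sorts near its mean; a newcomer has a wide posterior and occasionally sorts high.

```python
import random

# Hypothetical item stats: (name, total_views, days_observed).
items = [
    ("old_hit", 5000, 100),   # well-established, narrow posterior
    ("steady", 2600, 100),
    ("newcomer", 40, 1),      # little data, wide posterior
]

# Gamma(a, b) prior on the daily view rate; with Poisson counts the
# posterior after `views` events over `days` days is
# Gamma(a + views, b + days).
PRIOR_A, PRIOR_B = 1.0, 1.0

def sampled_rate(views, days, rng):
    # random.gammavariate takes (shape, scale), so scale = 1 / rate.
    return rng.gammavariate(PRIOR_A + views, 1.0 / (PRIOR_B + days))

rng = random.Random(42)
ranked = sorted(items, key=lambda it: -sampled_rate(it[1], it[2], rng))
```

Most draws leave the established items on top, but the newcomer's wide posterior lets it jump up the list on some fraction of requests, which is exactly the exploration you want without permanently flooding the list.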
Sent from my iPhone