Yesterday, I mentioned that it is a bit early to claim that the success of Pandora and Last.fm rests purely on the strengths of their recommender algorithms. The newest issue of Psychology of Music contains an article by Silva and Silva comparing how successful various methods are at increasing the appeal of an unknown song. I should caution that this is a preliminary study, and all conclusions should be taken in that light.
Anyway, the authors had students listen to and judge an unknown song in various contexts. They found that the students gave the unknown song higher ratings if they had read an article about the artist, if a well-known and positively liked song followed the unknown song, or if a music critic praised the song prior to its playing. The authors also argue that repeated exposure to a song does not increase its appeal, which is a stark contrast to the most common method employed by radio stations (ahem, payola). However, I think that argument is a little weak, since (as the authors note) they repeated the song six times in a row. Their premise behind the "repeated exposure is beneficial" hypothesis was based on Pavlovian conditioning, but I disagree. Pavlovian conditioning works by taking an emotionally neutral stimulus (ringing a bell around a dog) and pairing it with a positive or negative one (ringing the bell and then feeding the dog). Done correctly, the subject comes to respond to the neutral stimulus on its own (the dog salivates when the bell rings, regardless of whether food is around). For starters, the authors did not test this premise, since they gave repeated exposure without pairing it with any positive or negative stimulus.
However, we can ask: what if some of these effects are real? How does that change how we view recommenders like Pandora and Last.fm? First, comparing stations is problematic if songs are rated in series, because a truly well-liked song will lift the ratings of the songs around it. In other words, the rich get richer. To remove the music-critic effect, one needs to establish "placebos" so that users cannot infer that one playlist is better than another for reasons other than the audio. For example, if users know that their ratings are taken into consideration (Last.fm) or that experts labeled the music (Pandora), they may rate the playlist higher than they would under neutral conditions. Users should also know little about the bands, which is difficult to ensure. Finding a jam band unknown to that frat guy sitting in class with an old pair of Birkenstocks, a Grateful Dead T-shirt, and messed-up hair might be difficult.
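To make the order-effect concern concrete, here is a minimal sketch (in Python, with made-up song labels, not anything from the study) of how one might randomize song order per listener, so a strong song does not systematically inflate the ratings of whatever happens to follow it:

```python
import random

# Hypothetical song labels; in a real comparison these would be the tracks under test.
songs = ["unknown_track", "well_known_hit", "filler_a", "filler_b"]

def make_listening_orders(songs, n_listeners, seed=0):
    """Give each listener an independently shuffled order, so no song
    consistently benefits from following a well-liked one."""
    rng = random.Random(seed)
    orders = []
    for _ in range(n_listeners):
        order = songs[:]       # copy so the original list is untouched
        rng.shuffle(order)     # per-listener randomization breaks order effects
        orders.append(order)
    return orders

if __name__ == "__main__":
    for i, order in enumerate(make_listening_orders(songs, n_listeners=5), start=1):
        print(f"listener {i}: {order}")
```

This is only the order-randomization piece; blinding listeners to where the playlist came from (the "placebo" above) would still have to be handled in how the test is presented.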
This is not to say that previous approaches to comparing playlists are worthless - it's just that they need to be weighed in light of the factors above.