I love how every year, a survey comes out saying that college graduates in math and science fields earn more than other majors. What's more appalling is that some think this is news. The article here states that it is a matter of supply and demand, but this is not entirely true. However, this article inspired me to share a couple of stories from my own family and how the educational system in America is failing.
It is true that there is a "brain drain," which is largely the fault of the poor math and science education being delivered by public schools in the United States. My family has two such stories: one from public school and the other, sadly, from private school. The public school story involves my little sister. Her calculus teacher was working through the details of a problem in class, and he must have been proud of his work because he stepped back from the board and stated, "You know, I should have taken calculus; I would have been pretty good at it!" The idea that someone can teach a subject by following along in the book or staying just a lesson ahead of the students is appalling. To teach a subject, you need to understand it on a level deeper than the book; otherwise, you will not be able to explain it in any way other than the book's. One thing I've learned in teaching is to present the same material in several different ways, because what works for one student may not work for another.
Another problem, which I find even worse, is that the teacher said he should have taken calculus solely because he would have been good at it. If you only take on challenges because you are good at them, then you are not really challenging yourself. Imagine the impact of John F. Kennedy's speech if he had said,
"And they may well ask why climb a small hill? Why, 35 years ago, fly from New York to Boston? Why does Rice play local high schools? Because these things are easy and we'd be really good at it!" (Original quote may be read here.)
Not exactly inspiring. Thankfully, JFK did not have such a defeatist attitude, but believed we do things "not because they are easy, but because they are hard, because that goal [of going to the moon] will serve to organize and measure the best of our energies and skills." I get chills every time I hear this speech. No matter how many times I hear it, I am always inspired.
The other story, maybe the more shocking of the two, involves my little brother. At a parent-teacher conference, his teacher told my mother that my little brother needed to stop working ahead of the class because "it wasn't fair" to the rest of the students. I am still amazed that my mother did not bitch slap the teacher for saying that a student needed to stop learning.
There are more reasons than just supply and demand that math and science majors earn more than liberal arts majors. I hope to discuss those other reasons in a future post.
Note: This article is not meant to persuade anyone for or against a government-run, single-payer health care system. Rather, it addresses the specific application of tax dollars to medical methodologies that do not hold up to the rigors of scientific testing.
When I started this blog, I made a promise not to discuss politics unless it crosses the line of promoting woo through legislation (note: for the purposes of this blog, I do not care if a politician believes in crap, but I do care if they want to force taxpayers to pay for or believe crap). Dennis Kucinich has given me just such a chance with a recent blog post on the Daily Kos. Specifically, I want to address the following comment:
"One amendment brings into standard coverage for the first time complementary and alternative medicine, (integrative medicine)."
This should make anyone with an understanding of science shudder. What is wrong with this amendment? First, for something to be "alternative" to medicine means that it either has not been shown to be medically (i.e., scientifically) effective or has been disproven effective under numerous rigorous tests. Here are some examples:
Homeopathy. This may be my favorite medically-based pseudoscience. Want to cure a cold? Take a small amount of onion (because both an onion and a cold make you tear up and affect the mucous membranes in your nose), dilute it with water to the point that you would need a container the size of the solar system to have a realistic chance of retaining even a single molecule of the onion, then ingest a small amount of the diluted solution. How do the practitioners of homeopathy justify their treatment when there is effectively zero probability that I have ingested a single molecule of the onion? Water has memory, which is strengthened by shaking the "mixture" at each stage of dilution. No joke. They really want to throw away everything we know of chemistry. Further, the more you dilute something, the more powerful it supposedly becomes. Take that, physics!
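To put numbers on the dilution claim, here is a rough back-of-the-envelope sketch (my own illustration; it assumes a common 30C remedy, i.e., thirty successive 1:100 dilutions, and a generous full mole of onion extract to start):

```python
# Back-of-the-envelope check on homeopathic dilution.
# Assumptions (mine, for illustration): a 30C remedy (thirty 1:100
# dilution steps) starting from one full mole of active ingredient.
AVOGADRO = 6.022e23            # molecules per mole
starting_molecules = AVOGADRO  # one mole of onion extract
dilution_factor = 100 ** 30    # 30C: thirty successive 1:100 dilutions

expected_molecules = starting_molecules / dilution_factor
print(f"Expected molecules remaining: {expected_molecules:.1e}")
# ~6e-37: effectively zero chance that a dose contains even one
# molecule of the original substance.
```

In other words, you would need on the order of 10^36 doses before you expected to swallow a single molecule of onion.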
Essentially, this legislation not only allows for any treatment to be used, but means the government will now pay for it! While we are at it, why not let people claim energy tax credits because their car is a perpetual motion machine? Is it not bad enough that the government has spent $2.5 billion researching CAM (complementary and alternative medicine) only to find it does not work? The only thing worse than spending that amount of money is to then ignore the findings and pay for people to receive ineffective treatments!
Anyway, I encourage people to write to their congressman or congresswoman to voice their concerns about any legislation that uses tax dollars to administer disproven medical methodologies.
At the recent UMAP 2009 conference, a paper raised the possibility that we are reaching the performance limits of recommendation systems (RS). If true, this would change the landscape for research and development in RS. In fact, some blogs discussed this paper before it was even presented! However, after reading the paper, I am inclined to disagree with the hype. True, the paper does point to a performance limit for RS based on the current system of obtaining recommendation data. However, that does not mean no one can build a better RS.
First, I want to discuss what the potential impact could be for RS if we do indeed reach a true limit of performance. As an example, assume that for a particular task (e.g., music recommendation), people have a self-agreement of 90%. That is, a person will agree with themselves 90 times out of 100 if they rank 100 songs one day and then rank the same 100 songs two weeks later. Assume that tastes do not change, which the authors argue is the case in their setting (they make three measurements at different points in time). What does this mean? Some possible explanations:
(1) The user doesn't know if or how much he likes the item.
(2) The user doesn't understand, or can't specify, the degree to which he likes the item in discrete, deterministic categories.
(1) is the wrong option by definition, since the user's judgment is automatically the correct answer; (2), however, makes some sense. While people may know whether they really love or hate something, there is a rather large ambiguous area in the middle. How many people can consistently listen to a song and say, "I like that song 40%"? What does that even mean? Does the user like it 40% of the time he hears it? Does it mean it would sit in the 40th percentile if the user ranked every song he has listened to? If he ranked every song he's heard several times, would the average rank fall at the 40th percentile? The authors of the paper demonstrate this when they show that disagreement occurs 34% of the time between ratings 2 and 3 and 25% of the time between ratings 3 and 4 on a 5-point scale. In other words, people aren't able to rate movies consistently when they do not have a strong opinion. Usually, it is assumed that the fault lies with the user; that is, a person is confused about what the categories imply. I disagree. I believe the rating is inherently probabilistic because, yes, opinions constantly change. People are not machines. They have emotions. Emotional states have an impact on how we both interpret and want to interpret our environment.
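To see why self-disagreement caps measurable performance, here is a toy simulation (my own construction, not from the paper): even an oracle recommender that always predicts the user's "true" rating can match a noisy re-rating only about as often as the user agrees with himself.

```python
# Toy illustration (my assumption, not from the paper): if a user's
# repeated ratings of the same item agree only p of the time, then even
# an oracle that outputs the user's "true" rating matches any single
# noisy measurement at most p of the time.
import random

random.seed(0)
p_self_agreement = 0.9
n_items = 100_000

matches = 0
for _ in range(n_items):
    true_rating = random.randint(1, 5)
    # The measured rating agrees with the true rating with probability p;
    # otherwise it lands on some other category.
    if random.random() < p_self_agreement:
        measured = true_rating
    else:
        measured = random.choice([r for r in range(1, 6) if r != true_rating])
    # Oracle recommender predicts the true rating exactly.
    matches += (measured == true_rating)

print(f"Oracle accuracy against noisy ratings: {matches / n_items:.3f}")
# ~0.9: the measurement noise alone caps the accuracy we can observe.
```

The ceiling here is a property of how the ratings are collected, not of the recommender, which is exactly why hitting it says little about whether a better RS could be built.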
Second, how can we measure the success of an RS when 100% is theoretically impossible? What does that even mean? This issue has come up several times in genre recognition. Until the reprint of Scanning the Dial and the accompanying criticism directed at the MIR community, some authors validated their algorithms by stating that they were more accurate than humans. As pointed out in a couple of papers, this is nonsensical since genres are ill-defined. Ultimately, our categorical dimensions of music are largely subjective, built over a lifetime of (often conflicting) feedback from society. Still, we can ask: what if an RS comes out with better accuracy than the documented limit? Does it know what people will like better than humans do? Of course not. It shows an error in the choice of evaluation criteria. Ultimately, an RS is measured at a moment in time. If a person likes something on Tuesday but does not like it on Wednesday, it does not mean the user is confused. It means he liked it on Tuesday but not on Wednesday. Tastes may change based on mood, new information from the world around us, etc. Future RS may be able to detect this information and update adaptively.
This again brings up a fundamental problem with RS. Every RS is based on the idea that a user will like something similar to what he liked in the past. Further, almost all RS either model a user as a single entity or require the user to maintain separate profiles for different tastes. For example, Last.fm and Pandora cannot build a station or user profile that maintains two separate personalities; it's up to the user to construct this himself. Netflix only allows one user profile per account. While my fiancée and I may like some of the same movies, we certainly do not like all of them. Heck, some days I want a good skeptical show like Bullshit, but on another day I may want pure magical fantasy.
Even with transparency, some things get muddled by small clusters of users with very particular behavior. For example, Netflix is currently telling me that I will like Wallace & Gromit because I like This Is Spinal Tap. How are these two even connected? The first is in the category "Children & Family Suggestions" and the other is a movie about a fictitious failed hair band. Granted, both are good, but the only relation is nationality (Wallace & Gromit is a UK show and Spinal Tap is an American movie about a British band). Apparently, all British humor is the same to Netflix users. The weirdest might be that I'll like a Talking Heads concert because I like the movie Fargo. Content analysis would obviously do a better job of filtering out such nonsense predictions.
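As a sketch of what that filtering could look like, here is a toy example (entirely my own; the feature vectors are hypothetical, not real Netflix data): compare simple content features of two items, and flag a collaborative-filtering pairing when the items share almost nothing.

```python
# Toy sketch of using content features to sanity-check a
# collaborative-filtering suggestion. The binary feature vectors below
# are hypothetical, chosen only for illustration.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical features: [comedy, music, family, British, mockumentary]
spinal_tap     = [1, 1, 0, 0, 1]
wallace_gromit = [1, 0, 1, 1, 0]

sim = cosine_similarity(spinal_tap, wallace_gromit)
print(f"Content similarity: {sim:.2f}")
# A low score flags the pairing for review before it is recommended.
```

Even this crude check would catch the Fargo-to-Talking-Heads leap, where the feature overlap is essentially nil.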
So, in conclusion, research in recommendation systems is not reaching its limit in performance. Rather, recommendations based on the idea that a user is a simple, static classifier are limited from the start. Smarter recommendation systems that can understand complex user behavior, such as emotional state and judgments of quality, and that incorporate content analysis, will be the building blocks of the next generation of RS.