Web 2.0 | 2008/10/03 13:38 | Web 2.0 Asia
Yesterday was a truly sad day for all Koreans. Jin-sil Choi, Korea's top actress, was found dead in an apparent suicide. To put this in perspective, imagine waking up this morning to the headline "Angelina Jolie commits suicide." Yes, Choi was THE most loved Korean actress of our times.
What makes it even sadder are the stories emerging from this tragedy: she was suffering from depression, and much of it stemmed from "bad comments on the internet." It turns out Choi actually cared about the comments about her, often spending hours reading through thousands of them, many of which, I assume, were worthless pieces of garbage.
Most news sites in Korea (I guess elsewhere too) allow anonymous comments that are rarely moderated. I mean, moderation exists, but it's usually post-moderation, meaning the really offensive comments are taken out only after the damage is already done. Taking advantage of this anonymous comment system, some weirdos launch all kinds of personal attacks on public figures. But yesterday's Choi incident shows that comments can kill people, literally.
I believe the web we should all strive to create is one where people respect each other, not a dark dungeon inhabited by freak trolls. For that, I think we need a better online reputation system. It doesn't necessarily mean forcing all users to use a real-world identity (like your social security number and real name) on the web, as some politicians seem to have proposed. Mandating real-world identities for all web users would make the web a mere extension of the real world, shrinking the web's vast potential to let us create a (virtual) world of an entirely different dimension from this one. Besides, we all know such a mandate simply won't work.
If one of the defining characteristics of Web 2.0 is socialness, why don't we look at this problem through the lens of "social" as well? I think we should introduce what I call a "social whitelist" and a "social blacklist." (Hey, it's the term that's loaded, not me.)
In our online relationships, we all interact with other identities, and there are some identities we know can be trusted. These good IDs have been around for some time, with a proven track record, and most have their own websites where they stake their reputation and content. Also, if I can trust this ID, I could perhaps also trust the other IDs that this particular ID trusts. Now, if we can somehow aggregate and track these trust relationships among online IDs, we could sooner or later have a society-wide online trust system.
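To make the idea concrete, here is a minimal sketch of how transitive trust might be aggregated. Everything here is an illustrative assumption: the ID names, the trust graph, and the choice to halve trust at each hop are made up, not any real system's behavior.

```python
from collections import deque

# Hypothetical trust graph: each ID lists the IDs it explicitly trusts.
TRUSTS = {
    "me":    ["alice", "bob"],
    "alice": ["carol"],
    "bob":   ["carol", "dave"],
    "carol": ["erin"],
}

def trust_scores(root, graph, decay=0.5, max_depth=3):
    """Propagate trust outward from `root` via breadth-first search,
    multiplying the score by `decay` at each hop."""
    scores = {}
    queue = deque([(root, 1.0, 0)])
    seen = {root}
    while queue:
        node, score, depth = queue.popleft()
        if depth >= max_depth:
            continue
        for friend in graph.get(node, []):
            if friend in seen:
                continue  # keep the first (shortest) trust path only
            seen.add(friend)
            scores[friend] = score * decay
            queue.append((friend, score * decay, depth + 1))
    return scores

print(trust_scores("me", TRUSTS))
# → {'alice': 0.5, 'bob': 0.5, 'carol': 0.25, 'dave': 0.25, 'erin': 0.125}
```

The key design choice is that trust decays with distance: an ID I trust directly scores higher than a friend-of-a-friend, which mirrors how we actually extend trust socially.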
We could do the same thing with the bad IDs. If someone is marked as a bad ID (e.g. a spammer) by three different IDs, we can pretty safely call it a bad ID and put a penalty on it. But what if the "real person" behind this bad ID doesn't get deterred and keeps generating new bad IDs? If that happens, then (and only then) we can cut the link between the real person and the online ID system: Hey dude, you seem to be creating bad IDs all the time, and you shouldn't do that. Oh, by the way, you can also remain silent and appoint an attorney.
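The three-flags rule above can be sketched in a few lines. The reporter and suspect names are made up, and the only real subtlety is counting *distinct* reporters, so one grudge-holder can't blacklist someone alone:

```python
from collections import defaultdict

FLAG_THRESHOLD = 3            # three distinct reporters, per the rule above
flags = defaultdict(set)      # flagged ID -> set of reporter IDs

def flag(reporter, suspect):
    """Record a flag; duplicate reports from the same reporter don't count."""
    flags[suspect].add(reporter)

def is_blacklisted(suspect):
    return len(flags[suspect]) >= FLAG_THRESHOLD

flag("alice", "spammer42")
flag("alice", "spammer42")              # duplicate, absorbed by the set
flag("bob", "spammer42")
print(is_blacklisted("spammer42"))      # → False (only two distinct reporters)
flag("carol", "spammer42")
print(is_blacklisted("spammer42"))      # → True (third distinct reporter)
```

A real system would also want to weight flags by the reporter's own reputation, so that blacklisted IDs can't gang up to blacklist others.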
The small steps toward an open identity system being taken recently, such as XFN and FOAF, will hopefully pave the road to this "social reputation system." If such a thing comes along, I think it will be one of the greatest achievements of the "social" web.
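For what it's worth, XFN already lets a site publish exactly this kind of trust edge in plain HTML, just by adding `rel` values to ordinary links (the names and URLs below are made up for illustration):

```html
<!-- XFN: rel values on everyday hyperlinks declare the relationship -->
<a href="http://alice.example.com/" rel="friend met">Alice</a>
<a href="http://bob.example.com/" rel="acquaintance">Bob</a>
```

A crawler that collects these links across blogs would end up with precisely the society-wide trust graph described above.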