After a random act of showering epiphany this morning, I started thinking about implementing a global reputation system similar to Whuffie, as seen in Cory Doctorow’s “Down and Out in the Magic Kingdom”.
A quick bout of googling revealed that there’s actually a system called PersonRatings, but it doesn’t quite line up with what I was expecting of such a service.
Where PersonRatings takes a real person who can be reviewed in detail according to pre-defined categories, I was thinking of something much simpler:
Essentially, you wouldn’t need to go for elaborate reviews – most people can’t be bothered with anything like that. When you force a difficult interface on people, the only ones giving rep will be those who think the reputee [sorry] is either the next incarnation of their favourite messiah or the shaitan himself. Hence the need for an easy interface.
The general direction of what should be judged in my vision is not the reputee themself, but the actions they take: if they do something you consider good, funny, interesting, enlightening or otherwise positive, vote up. If they do anything bad, malicious, annoying or profoundly disturbing, vote down. The votes shouldn’t be “they’re a good person, I like them” or “I think they’re an evil person”, but rather “they just made me laugh :)” or “they just kicked a baby across the street :(”.
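To make the action-centric idea concrete, here is a minimal sketch of what such a vote record might look like – all names and fields are purely illustrative assumptions, not part of any existing system:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical action-based vote: each vote targets one concrete action
# by the reputee, never the person as a whole.
@dataclass
class Vote:
    voter_id: str
    reputee_id: str
    action_id: str   # the concrete action being judged
    value: int       # +1 ("made me laugh") or -1 ("kicked a baby")
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def reputation(votes, reputee_id):
    """Naive global score: sum of all votes on a reputee's actions."""
    return sum(v.value for v in votes if v.reputee_id == reputee_id)

votes = [
    Vote("alice", "bob", "joke-42", +1),
    Vote("carol", "bob", "baby-kick-1", -1),
    Vote("dave",  "bob", "joke-42", +1),
]
print(reputation(votes, "bob"))  # → 1
```

The point of the `action_id` is exactly the distinction made above: the aggregate score emerges from judgements of individual actions, not from a single “I like/dislike this person” verdict.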
Of course, personal standards differ and some people might think that kicking a baby across the street is actually jolly good fun and quite positive. The way to counter this would be some sort of derivative score, allowing you to ignore certain kinds of people. Problems with this:
- It’s quite hard to define filter rules for this. This would need constant user review, and probably lead to very problematic exclusion lists.
- It could lead to a positive feedback loop, where two parties just start hate-voting each other all the time due to differences in opinion.
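One way such a derivative score might work – and this is just my guess at a mechanism, with made-up names – is a per-viewer score that simply excludes votes from anyone on that viewer’s personal ignore list:

```python
# Sketch of a per-viewer "derivative" score: votes cast by anyone on the
# viewer's ignore list are excluded from the tally. Illustrative only.

def derived_reputation(votes, reputee_id, ignore_list):
    """Reputation as seen by a viewer who ignores certain voters.

    `votes` is an iterable of (voter_id, reputee_id, value) tuples.
    """
    return sum(
        value
        for voter, reputee, value in votes
        if reputee == reputee_id and voter not in ignore_list
    )

votes = [
    ("alice", "bob", +1),
    ("troll", "bob", -1),
    ("carol", "bob", +1),
]
print(derived_reputation(votes, "bob", ignore_list={"troll"}))  # → 2
```

This makes the two problems above tangible: the ignore lists need constant curation, and once two camps ignore each other, each side only ever sees its own votes.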
So I’m not sure whether this is something that should be implemented – or could be implemented in a better way. You probably don’t want to have political groups on your reputation system.
In a related thought, one of the main problems one will have to face is reputation trolling. The system would need to make it impossible for someone to bump their own reputation, which seems nigh impossible. I can’t see any valid and usable way of assuring identity integrity:
- Login for each user, can’t vote for self
- Just create a fake login. The system could probably be augmented with some fuzzy matching to catch the most common exploiters:
- People up-voting only one person.
- People giving a single up-vote to a person and then ceasing activity. Doing this with multiple accounts leads to rep spam.
The problem is that there is no reliable way to ensure a bijective (1:1) relationship between accounts and real people, as anyone could just register multiple accounts with different e-mail addresses, from different IPs, and so on.
- Global identification
If you can’t rely on local identification, you need some sort of unique global identification. Since using just some random internet site as an identifier, or using delegated authority, is not an option in most cases¹, you’d have to resort to a central authority which hands out tokens for proven identities – in other words, your local government. Since you seriously do not want to involve any government in such an endeavour, this is not an option either.
Thus I can’t fathom any way to ensure uniqueness, except by a peer-to-peer review system which allows peers to decide who’s a spammer. And in a reputation system, that’s a no-go too, since you don’t want people judging you a spammer just because a significant group of them hates your guts.
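The fuzzy-matching heuristics mentioned above – accounts that cast a single up-vote and go silent, or that direct all their up-votes at one person – could be approximated with very simple checks. A rough sketch, with thresholds invented purely for illustration:

```python
# Rough sketch of the fuzzy-matching heuristics against self-bumping.
# Thresholds and names are made up; a real system would need tuning.

def suspicious_voters(votes, min_votes=3):
    """Flag voters who (a) cast only a single up-vote and nothing else,
    or (b) direct all of their votes, all positive, at one reputee.

    `votes` is an iterable of (voter_id, reputee_id, value) tuples.
    """
    by_voter = {}
    for voter, reputee, value in votes:
        by_voter.setdefault(voter, []).append((reputee, value))

    flagged = set()
    for voter, cast in by_voter.items():
        ups = [reputee for reputee, value in cast if value > 0]
        # (a) single up-vote, then silence
        if len(cast) == 1 and len(ups) == 1:
            flagged.add(voter)
        # (b) several votes, all up-votes, all for the same reputee
        elif len(ups) >= min_votes and len(ups) == len(cast) and len(set(ups)) == 1:
            flagged.add(voter)
    return flagged

votes = [
    ("sock1", "mallory", +1),                      # one up-vote, then gone
    ("sock2", "mallory", +1), ("sock2", "mallory", +1), ("sock2", "mallory", +1),
    ("alice", "bob", +1), ("alice", "carol", -1),  # normal mixed activity
]
print(sorted(suspicious_voters(votes)))  # → ['sock1', 'sock2']
```

Of course, this only catches the laziest sock puppets; anyone willing to mix in a few decoy votes slips right through, which is the whole problem.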
But besides deliberate exploitation for personal gain, you have to consider other kinds of people who just work contrary to the system:
- Shotgun accounts just randomly voting around.
Probably a deliberate attack to undermine the validity of the system. Depending on the sophistication of the shotgun, it would be quite hard to detect by way of simple fuzzy logic.
- Excessive up-voters/down-voters
On every rating site, there are always haters and lovers who give minimal or maximal scores just for the heck of it. They vote one on every movie because they hate the whole genre, or they vote ten on every song by an artist because they’re infatuated up to their sternum.
- People voting due to the votes of others
I mentioned this positive feedback loop a bit further up, but this could become a real problem in this category, too. Some people just don’t like other people thinking differently.
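At least the excessive up-voters and down-voters might show up in simple distribution statistics. A sketch of one such check, assuming nothing beyond the vote list itself (note that a shotgun account voting uniformly at random would sit near a 50/50 split and evade exactly this test):

```python
# Sketch: measure how one-sided a voter's history is. 1.0 means every
# vote points the same way; 0.5 means an even split. Purely illustrative.

def one_sidedness(values):
    """Fraction of a voter's votes agreeing with their majority direction."""
    if not values:
        return 0.0
    ups = sum(1 for v in values if v > 0)
    return max(ups, len(values) - ups) / len(values)

hater = [-1] * 20
mixed = [+1, -1, +1, +1, -1, +1]

print(one_sidedness(hater))  # → 1.0
print(one_sidedness(mixed))  # → about 0.67
```

A system could then discount or review voters above some threshold – though, as with the ignore lists above, picking that threshold is itself a judgement call open to abuse.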
Some of these problems could be alleviated by the advent of ubiquitous computing: using each person’s local machine address both as a vote target identifier and as a self-vote filter. This would help with anything happening in real life – though you’d also have to ensure the device ID is visible in online communication – and you’d have to guarantee that nobody gets hold of a second device, which in my opinion can only be achieved with total device surveillance.
There are still some minor aspects to be discussed, like how to implement identity consolidation and so on, but these are mostly minutiae, and would exceed the scope of this text.
I’d be happy if some people would feel obliged to rant about my ideas in the comment fields below, offering suggestions, critique and just some good old dose of plain flaming.
- The alternative would be to use a peer-rated identity consolidation system, but this would still suffer from exploitability unless it used a method of validation more sophisticated than what I can piece together in a jiffy. ↩