This isn't true though? Sure 5/5 is inflated, but it *usually* isn't inflated deliberately like 1/5 is.
No, you *think* 1/5 is "usually" inflated deliberately more often than 5/5 is; that's pure subjective speculation and impossible to prove, hence my symmetric mockery of the original suggestion about dropping all 1/5 votes. Just like you *think* X/5 is a good rating for game Y, this is all purely subjective opinion, and none of it can be proven true or false, precisely because these are merely opinions and individual judgements. You're free to think one is unjustified, idiotic, absurd, actually kind of true, or the product of a severely disturbed schizoid mind, but at the end of the day deciding which opinions or ratings are "correct" and discarding those that aren't would be a purely arbitrary judgement.
Nobody forced, or could have ensured, that 400+ users would follow a standardized operating procedure for rating the games they played, so nothing guarantees the result is some actual objective truth rather than just a measure of "what people think". At best it's a piece of trivia (or a source of forum drama and entertainment) with a utility value slightly above that of a horoscope, maybe.
And most importantly, there is no objective basis or process by which votes could be discarded ("user X didn't follow the procedure, into the trash it goes"); such a decision is always going to be an arbitrary whim. Besides, even if supreme autism and bureaucracy were achieved in this area, allowing true objectivity (true objectivity has never been tried!), all the votes would look exactly the fucking same, because they would be guided by the exact same logic, criteria and process.
If you measure opinions numerically, you have to accept that they will be made using different standards of rationale or logic and thus be impossible to compare directly, and that biases, fanboyism, shilling and inaccurate ratings are equally likely to skew the result in either direction. The same goes for flaws in the process and methodology: they are equally likely to occur and distort the result either way, so in the end it should all roughly cancel out, and the popular opinion should win, i.e. whatever faction happens to be largest in the studied population. Yes, the specific distribution will vary, just like you can roll a 1 three times in a row on a d20, but that's a fact of life you have to accept when dealing with these kinds of "studies" (r00fles). And if anyone expected this to be anything more than a gauge of what the community thinks about this year's games, expecting it to be even in some very minor way a study uncovering a universal truth about what is superior, then just lol, lmao even.
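To illustrate the cancellation point with a toy sketch (everything here is invented: the "true" score, the symmetric bias model, the voter count), if each ballot gets pushed up or down with equal probability, the average still lands near the underlying score:

```python
import random

random.seed(42)

# Hypothetical setup: every voter "truly" feels the game is a 3/5,
# but each ballot gets skewed by fanboyism (+) or hate (-) drawn
# symmetrically. If the skew really is symmetric, the mean rating
# converges toward the true 3 as the sample grows.
def simulate_mean(n_voters, true_score=3):
    votes = []
    for _ in range(n_voters):
        skew = random.choice([-2, -1, 0, 1, 2])  # symmetric bias
        vote = min(5, max(1, true_score + skew))  # clamp to the 1-5 scale
        votes.append(vote)
    return sum(votes) / len(votes)

print(simulate_mean(400))
```

Of course real biases aren't guaranteed to be symmetric; the sketch only shows what happens if they are, which is the whole assumption being argued about.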
I already mentioned in this thread that I *think* a system where each user just ranks a limited number of titles in order from BESTEST to "least liked" (top 5 games of the year or whatever) would be fairer, because it would at least fix the "a 5 according to bro A's criteria/logic is a 3 according to bro B's criteria/logic" issue, if you really want solutions for next year. But even then someone will find a flaw in that system, someone won't like the result, and someone will complain that "X or Y would fix this issue that triggered me". That's just the inherent nature of choosing a methodology to measure opinions: no tool works fully accurately here.
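For what it's worth, that ranked-list idea is basically a Borda count. A minimal sketch (the titles, ballots and point weights here are all made up for illustration; position 1 on a top-5 list earns 5 points, the last earns 1):

```python
from collections import defaultdict

# Each ballot is an ordered list of titles, best first. A title in
# position p of a length-n list earns n - p points (so 5 down to 1
# for a top-5 ballot), and titles are ranked by total points.
def borda_tally(ballots, list_len=5):
    scores = defaultdict(int)
    for ballot in ballots:
        for pos, title in enumerate(ballot[:list_len]):
            scores[title] += list_len - pos
    return sorted(scores.items(), key=lambda kv: -kv[1])

ballots = [
    ["Game A", "Game B", "Game C"],
    ["Game B", "Game A", "Game D"],
    ["Game A", "Game D", "Game B"],
]
print(borda_tally(ballots))  # Game A wins with 14 points
```

This sidesteps the "my 5 is your 3" problem because only relative order matters, though it trades it for other well-known quirks of ranked voting (limited list length, strategic ordering, etc.), which is exactly the "someone will find some flaw" point.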