When you think of lists of “100 Greatest X”, you think of one “x” above all else: movies.
There’s no way I can make my own list of greatest movies. There are so damn many lists out there already, created far more scientifically, that I’m only making more noise. (In the future I hope to make lists in fields where there has been limited input.) There’s also the small problem that I have seen very few movies.
These lists cause plenty of debate over whether this film should be rated higher than that film. With so many lists, there’s a lot of noise out there. But what if all the lists became one list?
That’s what I aim to do with my entry into the Greatest Movies pantheon, which will be one of the first features on the website I’ve been talking about for months now. There will be three lists: an Overall list of the 100 Greatest Movies, a 50 Greatest Movies list as chosen by critics, and a 50 Greatest Movies list as chosen by the people.
Why these distinctions? Some of the greatest movies lists, like AFI’s 100 Years… 100 Movies, are voted on by a panel of experts. Others, like IMDb’s Greatest Films list, are selected by a much larger audience of the general public. The Overall list will be a cobbling together of both types, while the Critical and People’s lists will each focus on one type of list. The two methods produce very different results, and there are pluses and minuses to both. People can have their debates on which approach is superior, but this way each camp can have its own list that isn’t contaminated by the other group, or the Overall list that treats both equally.
I will base my list on the following lists, and you are welcome to submit other widely-published and in any way authoritative lists:
- AFI’s 100 Years… 100 Movies. The one that started it all; the “10th Anniversary” 2007 version will be used.
- TV Guide’s 50 Greatest Movies on TV and Video (1998). Tim Dirks’ website indicates this was compiled by the editors, but Wikipedia says it was a poll. In any case, the main criterion is “how fun is it to watch?”, which leans more towards the people’s-list side.
- Sight and Sound’s Decennial 2002 list. Probably one of the most authoritative in the industry. Heavy emphasis on foreign films.
- Empire Magazine’s 100 Greatest Movies (2003). There’s also a 2007 version, not on Dirks’ website, that was conducted by Empire Australia, which is a bit confusing and will result in two Empire lists.
- FilmFour’s 100 Greatest Films (2001?). Tim Dirks indicates this was an experts’ list, but Wikipedia cites a BBC article confirming it was a people’s poll. Certainly the composition is more consistent with a people’s list.
- Entertainment Weekly’s 100 Greatest Movies (1999) may be second only to AFI in prominence with the general public. Non-American films allowed, and there are some interesting choices.
- LA Daily News poll (1997). Supposedly a people’s version of the AFI list, based on the same slate of nominees. Only the top 30 will be considered, because the list is littered with ties below that.
- Empire’s Ultimate Movie Poll (2001). As though the Empire lists weren’t confusing enough, Empire also has a top-50 list that was part of a larger effort to rank like crazy!
- Mr. Showbiz Critics’ and Readers’ lists. This ancient list, according to Dirks, was made “a year and a half” before the AFI list on a now-defunct site.
- Village Voice 100 Best Films of the 20th Century. A critics’ poll assembled at the turn of the century. Very weird and foreign-heavy.
- Time Out Film Guide Centenary List (1995) and Readers’ List (1998). Limited reliability, and both lists are so riddled with ties that I had to cut them short at 40 and 60 entries respectively; fewer entries than that will actually go into the making of this list.
- IMDb’s Top 100. I’m limiting IMDb’s role to the Top 100 because (a) it would be just too much work to do the whole Top 250, and (b) depending on the method used (see below), it might not matter. This list is often biased towards recent releases.
- I also hope to consider Total Film’s Top 100 Films of All Time (2005) – and the 2006 update which was a people’s list. Neither is on Tim Dirks’ web site due to being very recent. Also, both lists take a turn towards the weird and disregard critical consensus in favor of the recently popular.
How will I make the decision on how to rank the movies? That’s a daunting question. I will aim to choose from among these voting systems, which you can vote on in the sidebar in a brand new poll:
- Repeated plurality voting. The system we’re all used to in the States: I choose the movie that gets the most votes among the movies on the top line, then remove those votes, move up the other movies on those lists, and repeat. This system is vulnerable to ties; I will run runoffs in those cases unless every single list nominates a different film, which is entirely possible.
- Instant runoff voting. In some ways the opposite of repeated plurality. If one film has a majority, that film wins. Otherwise, eliminate the films with the fewest votes and bring up the other movies until a majority exists. Restore the votes and start over. This has the problem that a film’s performance can hinge entirely on when it first reaches the top line. It’s also very vulnerable to ties, and there are several schemes to resolve them:
- Refer to the previous round of eliminations and eliminate the film with the fewest votes in that round. If you reach a round in which no film had yet been eliminated, go back to the rounds used to determine the previous rank.
- Apply the Bucklin method described below, but in reverse, to determine which film has the fewest votes and eliminate it.
- Determine whether, if one film is eliminated, any other film involved in the tie would not also be eliminated immediately or at least remain at risk of elimination. Eliminate the option that would preserve the other(s). This rarely works as elegantly as described, at least for ties of three or more, and often becomes complicated.
- Repeated supplementary vote. Similar to repeated plurality, but I hold a runoff between the two films on the top line with the most votes. Technically a Sri Lankan supplementary vote. Could easily result in a top-line tie.
- Coombs’ method. If one film has a majority, that film wins. Otherwise eliminate the film with the most last-place votes until one film has a majority; restore the votes and start over. In both this and instant runoff, for simplicity I will eliminate all films right off the bat that (a) do not appear on the top line of any list, and (b) do not have at least one list on which they defeat another list’s top-line film. This particular method does not work well because some lists are not 100 films deep.
- Borda count. The most common method for creating ranked lists: #1 = 100 points, #2 = 99, and so on. This method is not iterative and can be computed in a single pass. It’s also the most likely choice unless I get talked into one of the others.
- Bucklin method. If one film has a majority of top-line votes, that film wins. Otherwise add in the votes on the second line, and repeat until one film has amassed enough votes that it would have a majority of the top-line votes. If more than one film passes this threshold in a single round, switch to plurality voting among them. Restore the votes and start over. Because of the lists’ varying lengths and their disagreeing nature, this doesn’t work well past about 15 films or so.
- Condorcet method. If one film would defeat all other films in one-on-one matchups, that film wins (the “Condorcet winner”). Remove the winner and repeat. There is not always a Condorcet winner – there may be two or more films that beat all other films but tie each other, or Film A may beat Film B, B beats C, but C beats A. There are actually several “Condorcet methods” that treat this problem differently:
- Copeland’s method. The film that wins more one-on-one fights than any other wins the rank.
- Switch to one of the other approaches. Possibly apply one of the other approaches to a subset of the whole, which contains only the Condorcet winner if it exists: the Smith Set, which beats all films outside it; one of the Schwartz Sets, which is unbeaten against all films outside it; or the Landau Set, consisting of all films for which, for every film that beats it, it beats another film that beats the film that beat it. (For example, and not reflecting reality, if “Citizen Kane” beats “The Godfather” but “The Godfather” beats “Casablanca” which beats “Kane”, then, assuming “Godfather” loses no other battles, it’s in the set.) Instant-runoff applied to the Smith set is common. However, past the top 3 or 6 the Smith set becomes huge.
- Kemeny-Young/VoteFair method. This approach actually boasts that it is designed to produce a ranked list – it is not iterative! For every possible sequence, add one point for each list that agrees with each one-on-one ranking implied by that sequence. In other words, if the ranking under consideration ranks “Citizen Kane” #1, “Casablanca” #2, “The Godfather” #3, and “Star Wars” #4, then the number of lists that favor “Kane” over “Casablanca” is added to the number favoring “Kane” over “Godfather”, “Kane” over “Star Wars”, “Casablanca” over “Godfather”, “Casablanca” over “Star Wars”, and “Godfather” over “Star Wars”. The sequence with the highest score is the final list of Greatest Movies. The problem? If there were exactly 100 movies under consideration (there are more than that), there would be 100! ≈ 9.3326215×10^157 possible sequences. (That’s 100 factorial for you non-math geeks – a number vastly larger than a googol.) I would have to rely on shortcuts (like considering the Smith Set, or the set combining the top line with all films that beat a top-line film on at least one list) to narrow down the candidate sequences. Fortunately, in practice it produces results similar if not identical to ranked pairs.
- Minimax. Rank films by their worst defeat: the film whose worst one-on-one loss is smaller than every other film’s worst loss wins. If “Citizen Kane”’s worst defeat involved losing to another film on 6 lists, and every other film lost some matchup on more lists than that, “Kane” wins.
- Ranked pairs. Take every possible comparison of two films. The largest margin of victory (or the largest number of lists in agreement) is locked in first; any defeat that contradicts the defeats already locked in is ignored. In other words, in an A>B>C>A situation, the defeat with the smallest margin of victory is discarded. This also results in a massive number of comparisons: if exactly 100 films are involved, 100 × 99 / 2 = 4,950 possible matchups must be considered, and since there are more films than that, the number grows quadratically.
- Schulze method. Take the Schwartz set. Drop the tightest race. Determine the new Schwartz set. Repeat until a Condorcet winner appears or all the members of the Schwartz set account for no defeats even amongst themselves.
- Single transferable vote, iterative version. If one film has a majority, that film wins, and only its surplus votes (those beyond the majority threshold) are transferred to other films, in proportion to which films appear on the next line. If no film has a majority, eliminate the films with the fewest votes and move up new films until a majority exists. There is a non-iterative version, but it doesn’t produce a full ranking unless you’re lucky, and with so few lists and so many spots to fill it won’t work here.
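To make the mechanics concrete, here’s a minimal sketch of the Borda count in Python. The three ballots are hypothetical toy lists, not the actual source rankings, and a film missing from a list simply earns no points from that list.

```python
from collections import defaultdict

def borda(lists, top=100):
    """Borda count: rank 1 earns `top` points, rank 2 earns top-1, and so on.
    Films absent from a list earn nothing from that list."""
    scores = defaultdict(int)
    for ranking in lists:
        for position, film in enumerate(ranking):
            scores[film] += top - position
    # Highest total first; alphabetical order breaks exact ties deterministically
    return sorted(scores, key=lambda film: (-scores[film], film))

# Hypothetical toy ballots, not the real source lists
ballots = [
    ["Citizen Kane", "Casablanca", "The Godfather"],
    ["The Godfather", "Citizen Kane", "Casablanca"],
    ["Citizen Kane", "The Godfather", "Casablanca"],
]
print(borda(ballots))  # "Citizen Kane" comes out on top
```

Because this is a single pass over all the lists, nothing needs to be iterated or restored between ranks, which is part of what makes it the simplest option.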
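Instant runoff can be sketched the same way. This is a simplified version under two assumptions: each source list acts as one ballot, and an exact tie for fewest top-line votes eliminates all tied films at once rather than invoking one of the tie-resolution schemes above.

```python
from collections import Counter

def irv_winner(ballots):
    """One instant-runoff contest: eliminate the fewest-first-choice films
    until some film holds a majority of the surviving top-line votes."""
    eliminated = set()
    while True:
        tops = Counter()
        for ballot in ballots:
            for film in ballot:          # each ballot backs its highest survivor
                if film not in eliminated:
                    tops[film] += 1
                    break
        leader, votes = tops.most_common(1)[0]
        if votes * 2 > sum(tops.values()):
            return leader                # majority reached
        # Simplified tie handling: drop every film tied for fewest votes
        fewest = min(tops.values())
        eliminated |= {f for f, v in tops.items() if v == fewest}
        if all(f in eliminated for f in tops):
            return leader                # everything tied; fall back to the leader

def irv_ranking(ballots):
    """Restore the votes and start over for each rank, as described above."""
    ranking, remaining = [], {f for b in ballots for f in b}
    while remaining:
        stripped = [[f for f in b if f in remaining] for b in ballots]
        winner = irv_winner([s for s in stripped if s])
        ranking.append(winner)
        remaining.discard(winner)
    return ranking
```

The `irv_ranking` wrapper shows why the method is iterative: every rank requires rerunning the whole elimination process from scratch.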
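For the Condorcet family, the common starting point is the full table of one-on-one results. This sketch builds those matchups and ranks films by Copeland score (number of pairwise victories); a film absent from a list is assumed, purely for illustration, to rank below every film that list does include.

```python
from itertools import combinations

def copeland_ranking(lists):
    """Rank films by number of pairwise victories (Copeland score).
    Assumption for illustration: a film absent from a list ranks
    below every film that list includes."""
    films = sorted({f for l in lists for f in l})
    wins = {f: 0 for f in films}
    for a, b in combinations(films, 2):
        a_over_b = sum(1 for l in lists
                       if a in l and (b not in l or l.index(a) < l.index(b)))
        b_over_a = sum(1 for l in lists
                       if b in l and (a not in l or l.index(b) < l.index(a)))
        if a_over_b > b_over_a:
            wins[a] += 1
        elif b_over_a > a_over_b:
            wins[b] += 1
        # An exact pairwise tie awards neither film a victory
    return sorted(films, key=lambda f: (-wins[f], f))
```

A Condorcet winner, when one exists, always tops this ranking, since it wins every one of its matchups and every other film has lost at least the matchup against it.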
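Finally, the Kemeny-Young/VoteFair scoring is simple to state in code, which also makes the factorial blow-up obvious: the brute-force version below tries every permutation, so it is only feasible for a handful of films.

```python
from itertools import permutations

def kemeny_ranking(lists):
    """Brute-force Kemeny-Young: score every possible sequence by how many
    list-level pairwise preferences agree with it, and keep the best.
    Feasible only for tiny sets, since n films means n! sequences."""
    films = sorted({f for l in lists for f in l})

    def agreement(seq):
        score = 0
        for i, a in enumerate(seq):
            for b in seq[i + 1:]:
                # One point per list that ranks `a` above `b`
                # (a film absent from a list counts as ranked below it)
                score += sum(1 for l in lists
                             if a in l and (b not in l or l.index(a) < l.index(b)))
        return score

    return list(max(permutations(films), key=agreement))
```

Any serious use of this method on a hundred-plus films would have to replace the `permutations` loop with the shortcut sets described above.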
Precisely when I’ll start putting the list up depends partly on when the web site goes up, what suggestions you might have, and the pace at which I can write the entries. Since I’ve seen maybe four of the movies that will be listed, if I’m lucky, I invite anyone knowledgeable to guest-write an entry or two; to apply, leave a comment on this post (make sure you let me know your e-mail address) or e-mail me by clicking on the Complete Profile link on the right side.
Will this post mark the start of a revolution? Maybe, but probably not. However, it will be a lot of fun, and hopefully produce some new perspective on an old, recurring topic.