A Fair Ranking

All champions know whether the rankings flatter or overlook them; their destiny is in their hands. Your destiny is in your hands when you believe your dream is possible.

Henry Cejudo, Olympic Champion

The wrestling community anticipates a ranking release with as much excitement and anguish as a winter storm. You can feel the longing in the forums when too much time elapses since the last release. Threads form with ad hoc rankings, debates ensue, and fans nag the webmaster for taking so long, like chirping chicks squeaking for their next meal. Then it arrives: more debates ensue, more ad hoc rankings form, and some deride the rankers for being so wrong.

It’s often said rankings are just for fun, but it’s obvious from the reactions year after year that many take them quite seriously. We at Lighthouse are serious about them. The rankers devote hours to producing a fair ranking. Over the first few months of the summer, I’ve devoted a little time every week to preparing for the NCAA, Section VIII and XI, and New York State rankings. The first step is to enter the names, schools, and weights of the returning wrestlers, along with their accomplishments and year of eligibility, into a spreadsheet. Thanks to nwcaonline.com, finding the year of eligibility is easier and more accurate; nevertheless, it’s time consuming. In total, I’ve devoted about 30 hours – maybe more – over 12 weeks to the effort, and it is still incomplete.

Some give us a lot of grief over the rankings, as if there is favoritism or an agenda to promote or demote a wrestler or team. Rankings by their nature are imperfect. We can’t predict with 100% certainty how a team or individual will perform any more than the critics can. We’re all wrong to some degree, yet there is such a thing as a bad ranking. So what are the characteristics of a good ranking?

  • Results drive changes in the rank.
  • Accuracy improves as the season progresses.

There are many challenges with producing a fair ranking. These are some of the most prominent.

  • In Suffolk and Nassau alone there are roughly 1,000 athletes, and it’s a lot of work evaluating each and every weight class. This is the only behavior I know of that actually leads to blindness; the others are just urban legends.
  • For a state ranking, it’s even more challenging: not only are there thousands of wrestlers, but many of them never compete against one another, so there’s less information from which to draw a confident conclusion.
  • Results are poorly reported. Some complain that we rely too heavily on results, but to rely on anything else is to engage in favoritism.
  • Wrestlers never tell us the weights they will compete at, so it’s mostly a guess. Last year many wrestlers competed for a significant number of matches at weights other than their final tournament weight, and some competed at multiple weights.

Individual rankings are hard, and they have gotten harder as the depth has thinned in both sections. It used to be that we struggled to identify the 4, 5, and 6 spots in the early releases. Now we find ourselves struggling with them in the final ranking. The challenge is that many wrestlers eligible for the 4, 5, and 6 spots have bad losses – losses to competitors who, lacking sufficient success of their own, are not candidates for those spots. Or there aren’t enough head-to-head matchups or common opponents to order them against one another. In other instances, wrestler A defeats B, B defeats C, and C defeats A. What’s the right order? In those cases it’s obviously subjective, but I would argue it’s an expert opinion. When you pore over the data year after year for as many years as we have, you develop a feel for the proper ordering. Are we wrong? Hell yeah, but every ordering in that scenario will find an objection.

One goal of the Lighthouse rankings is to remove as much subjectivity as possible. In our team rankings we evaluate the returning wrestlers for each team, score them based on performance, and rank the teams by total score. Minor adjustments are made based on past performance; it’s subjective, but we anguish over every change. Of course the result is inexact, but as you will see from comparing the 2009 preseason ranking with what finally happened, it’s pretty damn good.
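The approach above – score each returning wrestler by past performance, sum the scores per team, and order teams by total – can be sketched roughly as follows. The point values and accomplishment categories are hypothetical; the article doesn’t publish Lighthouse’s actual scoring weights.

```python
# Rough sketch of the team-ranking approach described above.
# The accomplishment categories and point values are invented for
# illustration; the real Lighthouse weights are not published.
ACCOMPLISHMENT_POINTS = {
    "state_place": 10,
    "county_champ": 8,
    "county_place": 5,
    "league_champ": 3,
    "qualifier": 1,
}

def team_scores(returners):
    """returners: list of (team, accomplishment) tuples -> total score per team."""
    scores = {}
    for team, accomplishment in returners:
        scores[team] = scores.get(team, 0) + ACCOMPLISHMENT_POINTS.get(accomplishment, 0)
    return scores

def rank_teams(returners):
    """Return teams ordered best-first by total returner score (ties alphabetical)."""
    scores = team_scores(returners)
    return sorted(scores, key=lambda t: (-scores[t], t))

# Hypothetical roster of returning wrestlers and their best prior accomplishment.
roster = [
    ("John Glenn", "county_champ"),
    ("John Glenn", "county_place"),
    ("Hauppauge", "county_champ"),
    ("Longwood", "league_champ"),
]
print(rank_teams(roster))  # John Glenn first with 13 points
```

The subjective "minor adjustments" the article mentions would then be applied by hand on top of this ordering.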

Here’s how we did with the preseason tournament rankings for the 2009–2010 season for Sections XI and VIII. The “change” column gives the number of positions by which the final result differed from the first rank published in September 2009. Red means the ranking was lower than the final result; green means it was higher.

Section XI

9/09 Rank | Team | 2010 Result | Change
1 | John Glenn | 1 | 0
2 | Hauppauge | 2 | 0
3 | Longwood | 3 | 0
4 | Rocky Point | 4 | 0
5 | Huntington | 5 | 0
6 | Westhampton Beach | 17 | 11
7 | Kings Park | 6 | 1
8 | Riverhead | 15 | 7
9 | Central Islip | 11 | 2
10 | Brentwood | 10 | 0
11 | Connetquot | 8 | 3
12 | West Babylon | 8 | 4
13 | Sachem East | 7 | 6
14 | Smithtown West | 23 | 9
15 | William Floyd | 18 | 3
16 | Islip | 11 | 5
17 | East Islip | 16 | 1
18 | Sachem North | 20 | 2
19 | Commack | 32 | 13
20 | Sayville | 11 | 9
21 | Patchogue-Medford | 37 | 16
22 | Mt. Sinai | 14 | 8
23 | Harborfields | 21 | 2
24 | Walt Whitman | 27 | 3
25 | Deer Park | 19 | 6


Section VIII

9/09 Rank | School | 2010 Result | Change
1 | Wantagh | 2 | 1
2 | Long Beach | 1 | 1
3 | MacArthur | 8 | 5
4 | Syosset | 3 | 1
5 | Uniondale | 4 | 1
6 | Massapequa | 5 | 1
7 | Garden City | 16 | 9
8 | Seaford | 13 | 5
9 | Freeport | 14 | 5
10 | Calhoun | 7 | 3
11 | Farmingdale | 12 | 1
12 | East Meadow | 9 | 3
13 | Levittown Division | 9 | 4
14 | Sewanhaka East | 18 | 4
15 | Jericho | 24 | 9
16 | Bellmore JFK | 17 | 1
17 | North Shore | 24 | 7
18 | Plainedge | 6 | 12
19 | Mepham | 11 | 8
20 | Bethpage | 19 | 1
21 | Glen Cove | 20 | 1
22 | Great Neck South | 21 | 1
23 | Island Trees | 29 | 6
24 | Baldwin | 30 | 6
25 | Clarke | 15 | 10


How Did We Do?

For Section XI we precisely predicted the final standing of 6 of the top 10 teams before the season even started. Nearly 66% of the Section XI predictions for the top 25 were off by 5 positions or fewer. For Section VIII the accuracy was similar, with 50% of the top 10 teams identified within 1 position and about 66% of the top 25 off by 5 or fewer. Obviously we got a few very wrong, with changes of 16 or thereabouts. When I look at the comparison, the one thing that strikes me is that even as deep as the twenties, the final result was within a position for so many teams. Was this a good ranking? I honestly do not know, but to say it was horrible suggests that we can do better. At some point, to believe it could be more accurate would say less about the knowledge, skill, and/or mystical powers of the ranker and more about our belief in our ability to shape our destiny. It’s a pessimism that says our destiny is fate, and that as wrestlers and coaches there is nothing we can do to shape our future. I for one do not share that dark belief.
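As a sanity check, the Section XI figures quoted above can be recomputed directly from the (preseason rank, final result) pairs in the table:

```python
# Recompute the Section XI accuracy claims from the table above.
# Each pair is (preseason rank from 9/09, final 2010 result).
section_xi = [
    (1, 1), (2, 2), (3, 3), (4, 4), (5, 5),
    (6, 17), (7, 6), (8, 15), (9, 11), (10, 10),
    (11, 8), (12, 8), (13, 7), (14, 23), (15, 18),
    (16, 11), (17, 16), (18, 20), (19, 32), (20, 11),
    (21, 37), (22, 14), (23, 21), (24, 27), (25, 19),
]

# Absolute difference between predicted and final position.
changes = [abs(pre - final) for pre, final in section_xi]

exact_top10 = sum(1 for c in changes[:10] if c == 0)
within_5 = sum(1 for c in changes if c <= 5)

print(exact_top10)              # 6 of the top 10 predicted exactly
print(within_5 / len(changes))  # 0.64 -- the "nearly 66%" within 5 positions
```

Sixteen of the twenty-five predictions land within five positions, which is where the “nearly 66%” figure comes from.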

The Ideal Competitor

Since my first wrestler and team rankings, I’ve held an image of a team or individual that I’ve longed to see impress me – one that for one reason or another never quite made the cut for inclusion. With every new release their accomplishments go unrecognized, yet they never voice their displeasure, if they even care. They just keep focusing on the goal of defeating the best. Along the way they suffer setbacks, pick themselves up, and continue to believe their goal is within reach. They don’t make excuses; they don’t reason why they should somehow be counted amongst the top six; they are unsatisfied with competitive losses at the hands of great competitors. You’ve heard the ones who are satisfied: “I lost by one point to the county champion,” as if it were a win. They have a quiet determination that is unsatisfied with second best. Then, when it counts most, former close losses turn into close wins. An upset along the way, and they find themselves standing ten feet tall, a proud smile on their face and a ribbon draped around their neck.

It happened once in 2009, when Joe Giaramita of John Glenn took second at 145 in the Section XI championships. He was never ranked, and we never heard from him or anyone close to him, yet we were thrilled by his accomplishment. To prove it was no fluke, he followed it up with a 3rd place finish in 2010 after moving up four weight classes to 189. All champions know whether the rankings flatter or overlook them; their destiny is in their hands. Your destiny is in your hands when you believe your dream is possible.

5 thoughts on “A Fair Ranking”

  1. Westhampton, Commack and Pat Med were very overrated all year. Sayville was very underrated. You should at least recognize that in your article.

    1. Lighthouse only published a preseason ranking last year, so I don’t understand the all-season point. No team was overrated or underrated. That would imply there was favoritism in the ordering. There was none.

  2. Doubt there were any favorites, but the system had some teams severely over- and underrated, e.g. Westhampton. The article should recognize that.

    1. Help me out. What more would you like me to say? It certainly didn’t hide anything. I put it right out for everyone to see, and the article does say we obviously got a few very wrong, with changes of 16 or thereabouts.

  3. It is very tough to rank teams below the top 10. The top 5 seems really easy, as the same teams have been there year in, year out.
