Power Ratings are a computerized method of ranking based on a team's performance against other teams in the league.

The IHSLA does not use power ratings to determine conference champions or state champions. Starting in 2013, power ratings have been used to determine playoff seeding positions.

There are dozens, perhaps hundreds, of power rating schemes employed in sports. The power rating method used on this site is similar to the "Margin of Victory" (MOV) method used by LaxPower.com. The MOV method considers only the goal differences between teams; it does not consider win/loss records or head-to-head results. The IHSLA power rating scheme differs from the LaxPower scheme in that it considers only play within the IHSLA and ignores games played outside of Illinois.

How is the power rating calculated?

The power rating algorithm used by this site is an iterative routine that analyzes every IHSLA game. It compares ACTUAL game outcomes to EXPECTED game outcomes to calculate each team's power rating. Here's how it works.

In any game, the ACTUAL margin of victory for a team is calculated from the ACTUAL score of the game. For example, in a game with a score of 8-3, the winner achieves a margin of victory of +5 while the loser achieves a margin of victory of -5.

In any game, the PREDICTED margin of victory for a team is calculated from the EXPECTED goal differential, which is the difference between the two teams' power ratings. A team with a higher power rating is favored over a team with a lower power rating by the number of goals represented by the difference in power ratings. For example, a team with a 93 power rating would be EXPECTED to beat a team with an 88 power rating by 5 goals.
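As a quick illustration of the idea (the ratings below are made-up examples, not real teams), the predicted margin is just the difference in power ratings:

```python
# Predicted margin of victory is simply the difference in power ratings.
# The ratings used here are hypothetical examples.
def predicted_margin(rating_a: float, rating_b: float) -> float:
    """Expected goal differential for team A versus team B."""
    return rating_a - rating_b

print(predicted_margin(93, 88))  # prints 5: the 93-rated team is favored by 5 goals
```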

But how, you might ask, does a team get a higher power rating in the first place? The computer system uses an iterative search method to determine the optimum power rating for each team. Here's how it works.

The algorithm starts by assuming that all teams are equal. The computer compiles a list of teams and assigns a power rating of 100 to all of them. The computer then compiles a list of completed games with their game scores. The computer calculates the PREDICTED goal difference for each game from the difference in power ratings (in this first iteration, the predicted goal difference is always zero) and calculates the ACTUAL goal difference from the game scores. The PREDICTED goal difference is compared to the ACTUAL goal difference to generate a game error value. The game error values are summed up for each team and then averaged on a per-game basis. Then the computer adjusts each team's power rating (up or down) to offset this average game error. The total of all the errors for all teams is then saved as the cycle error.

The new adjusted power ratings for each team are then fed back into the algorithm and the computer analyzes all the games again with the new power ratings (and therefore new predicted outcomes). Again, errors between the PREDICTED goal differential and the ACTUAL goal differential are computed. The errors for all the teams and all the games are summed once more to determine the cycle error.

Once more, the computer adjusts the power ratings up or down to offset the errors, and the adjusted ratings are fed back into the algorithm for another analysis cycle. The cycles continue, each one adjusting every team's power rating to offset the ACTUAL vs PREDICTED goal differential errors. After each cycle is complete, the cycle error is compared to the previous cycle error. As the power ratings are adjusted during the cycling, they converge on values which minimize the game errors. The algorithm stops when the difference between a cycle error and the previous cycle error is very small. The power ratings determined in the last algorithm cycle are then saved and published on the website Standings page.
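The iterative procedure described in the last few paragraphs can be sketched as follows. This is a minimal illustration, not the IHSLA's actual code: the teams and scores are invented, home-field advantage, the 10-goal limit, and the Top Team weighting are omitted for brevity, and the 0.5 damping step is an assumption added to keep the simultaneous rating updates stable.

```python
from collections import defaultdict

# (home, away, home_goals, away_goals); teams and scores are made up.
games = [("A", "B", 8, 3), ("B", "C", 6, 5), ("A", "C", 10, 4)]

ratings = defaultdict(lambda: 100.0)      # every team starts at 100
prev_cycle_error = float("inf")

for cycle in range(100):                  # hard cap of 100 cycles
    errors = defaultdict(list)
    for home, away, hg, ag in games:
        predicted = ratings[home] - ratings[away]
        actual = hg - ag
        err = actual - predicted          # positive: home team is underrated
        errors[home].append(err)
        errors[away].append(-err)
    cycle_error = sum(abs(e) for errs in errors.values() for e in errs)
    for team, errs in errors.items():     # offset each team's average error
        ratings[team] += 0.5 * (sum(errs) / len(errs))
    if abs(prev_cycle_error - cycle_error) < 0.01:
        break                             # cycle error stopped shrinking
    prev_cycle_error = cycle_error
```

After convergence the rating differences reproduce the game margins: with the sample scores above, team A ends up about 5 goals stronger than team B, matching the 8-3 result.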

The power ratings algorithm runs whenever a new score gets entered into the system. With roughly 2000 games and about 50 iterations before convergence, the host computer is making over 100,000 game calculations every time a score is entered.

 

(SUM[Home Team Scores] - SUM[Visiting Team Scores]) / Number of Total Games = HFA

Home Team PR + HFA - Visiting Team PR = Predicted Goal Difference

Home Team Score + HFA - Visiting Team Score = Actual Goal Difference

Predicted Goal Difference - Actual Goal Difference = PR Error

SUM[PR Error] / Number of Games Played = Average PR Error (calculated per team and used for adjustments)

SUM[PR Error] = Cycle Error (calculated over all games)

Cycle Error[current cycle] - Cycle Error[previous cycle] = Cycle Convergence Error

The algorithm stops when the Cycle Convergence Error falls below 0.01, or after 100 cycles, whichever comes first.

All power ratings are then normalized linearly so that the top-rated team has a value of 100.
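The home-field advantage formula above can be sketched directly. The game scores and the helper name are illustrative, not IHSLA data:

```python
# HFA = (sum of home scores - sum of visiting scores) / total games.
def home_field_advantage(games):
    """games: list of (home_goals, visiting_goals) tuples."""
    home_total = sum(h for h, v in games)
    visit_total = sum(v for h, v in games)
    return (home_total - visit_total) / len(games)

sample = [(8, 3), (6, 5), (4, 10)]       # made-up scores
print(home_field_advantage(sample))      # prints 0.0 for this balanced sample
```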

 

The IHSLA method employs two adjustments to the standard power rating scheme: the 10-Goal Limit factor and the Top Team factor.

 

The "10-Goal Limit" factor (TGL)

The TGL adjusts the ratings in such a way that a winning team will not gain additional power points if they win by more than ten goals. Similarly, the defeated team will not lose power points if they lose by more than ten goals. The IHSLA recognizes that a ten-goal differential in a game results in many unusual situations such as a running clock, more substitutions, etc. that may not represent the true performance of either team. In addition, the TGL removes any incentive for a coach to "run up the score" merely to improve the team's power rating.
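A sketch of the 10-Goal Limit: margins beyond plus or minus ten are capped before they feed the rating calculation. The clamp below is my reading of the rule, not the IHSLA's actual code:

```python
# Cap the margin of victory at +/-10 goals before it enters the
# rating error calculation (illustrative implementation of the TGL).
def capped_margin(goals_for: int, goals_against: int, limit: int = 10) -> int:
    margin = goals_for - goals_against
    return max(-limit, min(limit, margin))

print(capped_margin(18, 2))   # prints 10: a 16-goal win counts as +10
print(capped_margin(3, 16))   # prints -10: a 13-goal loss counts as -10
print(capped_margin(8, 3))    # prints 5: an ordinary margin is unaffected
```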

The "Top Team" factor

The scores from various games are not all weighted equally. Games in which the opponent is highly rated are weighted more heavily than games in which the opponent is rated lower. This factor provides more power points to the teams with difficult schedules and reduces the power points for teams with easier schedules. It rewards teams who win against strong opponents, and it diminishes the rewards for teams who win against weaker opponents.
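The page does not specify the exact weighting function, so the following is only one plausible sketch: each game's contribution is scaled by the opponent's rating relative to the starting baseline of 100. The linear form is an assumption.

```python
# Hypothetical Top Team weighting: games against higher-rated opponents
# count more toward the rating adjustment. The linear form is an assumption.
def game_weight(opponent_rating: float, baseline: float = 100.0) -> float:
    return opponent_rating / baseline

print(game_weight(110.0))  # prints 1.1: a strong opponent's game weighs more
print(game_weight(90.0))   # prints 0.9: a weak opponent's game weighs less
```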

Strength of Schedule (SOS)

The SOS rating (not displayed) is a reference to the strength of a team's opponents. It is calculated as the average power rating of a team's opponents in games played. The SOS rating does not reflect the strength of opponents for unplayed games, exhibition games, or tournament games. The SOS Ranking (displayed) is a team's rank relative to the other teams based on its SOS rating. The higher the SOS rating, the lower (i.e., better) the SOS rank number. The SOS ratings and rankings are not used directly in any other calculations because the power ratings are already weighted in favor of strong opponents by the Top Team factor.
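SOS as described above is just the average power rating of the opponents a team has actually played. The ratings here are invented for illustration:

```python
# SOS = average power rating of opponents in games actually played.
def strength_of_schedule(opponent_ratings):
    return sum(opponent_ratings) / len(opponent_ratings)

print(strength_of_schedule([95.0, 88.0, 102.0]))  # prints 95.0
```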

But what does it all mean?

The power rating system is a computerized method of predicting the difference in scores between two teams. Theoretically, a team with a power rating of 95 is expected to defeat a team with a power rating of 92 by three goals on a neutral field. Of course, the actual outcome of such a game may differ depending on many circumstances, but, on average, the power ratings tend to properly rank teams based on how they performed against one another. The more games that are played within the league, the more accurately the power ratings reflect the true performance of the teams and the stronger the prediction mechanism becomes.

How accurate is it?

The graph below shows the relative frequency of errors arising from score predictions using the IHSLA power ratings for the varsity games completed so far this season. This statistical analysis suggests the power rating prediction has a standard deviation of about one goal. To put it another way, roughly 70% of the games played had actual results within 2 goals of the predicted goal difference.
