Power Ratings are a computerized method of ranking based on a team's performance
against other teams in the league.
The IHSLA does not use power ratings to determine conference champions or state
champions. Starting in 2013, power ratings have been used to determine playoff seeding.
There are dozens, perhaps hundreds, of power rating schemes employed in sports.
The power rating method utilized on this site is similar to the "Margin of
Victory" method used by LaxPower.com. The MOV method considers only the goal
differences between teams. It does not consider win/loss records nor does it consider
head-to-head contests. The IHSLA power rating scheme differs from the LaxPower rating
scheme in that it only considers play within the IHSLA and ignores games played
outside of Illinois.
How is the power rating calculated?
The power rating algorithm is an iterative computer program that cycles repeatedly
over the same inputs. The algorithm uses two input lists. The first input is a list of teams and
their power ratings. The second input is a list of completed games with their game
scores. In the initial cycle of the program, each team is assigned a power rating
of 100. These power ratings are used to calculate the predicted goal difference
for each game, which is the difference between the teams' power ratings plus
a small home field advantage factor. The actual goal difference between teams is
determined by the game score. The predicted goal difference is compared to the actual
goal difference to generate a game error value. The game error values are summed
up for each team and averaged on a per-game basis, and the power rating for each
team is then adjusted up or down to offset this average game error. The total of
all the errors for all teams is then saved as the cycle error. The new adjusted
power ratings for each team are used as the first input for the next cycle. The
original completed game list is again used as the second input to the next cycle.
After each cycle is complete, the cycle error is compared to the previous cycle
error. As the power ratings get adjusted during the cycling, they converge on values
that minimize the errors. The algorithm stops when the difference between the
cycle error and the previous cycle error is very small. The power ratings algorithm
runs whenever a new score gets entered into the system. With roughly 2000 games
and about 50 iterations before convergence, the host computer is making over 100,000
game calculations after each score is entered.
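The cycle described above can be sketched in code as follows. This is an illustrative reconstruction, not the IHSLA's actual program: the team names, the home-field advantage value, and the damping of the per-cycle adjustment (halving the correction so the cycle settles smoothly) are all assumptions.

```python
def rate_teams(games, hfa=0.5, tol=0.01, max_cycles=100):
    """games: list of (home, away, home_score, visiting_score) tuples."""
    teams = {t for g in games for t in (g[0], g[1])}
    ratings = {t: 100.0 for t in teams}        # every team starts at 100
    prev_cycle_error = float("inf")
    for _ in range(max_cycles):
        team_errors = {t: [] for t in teams}
        cycle_error = 0.0
        for home, away, hs, vs in games:
            predicted = ratings[home] + hfa - ratings[away]
            actual = hs - vs
            err = predicted - actual           # per-game PR error
            team_errors[home].append(err)
            team_errors[away].append(-err)     # same error, opposite sign
            cycle_error += abs(err)            # accumulated over all games
        # shift each team to offset its average game error
        # (damped by half here so the iteration converges smoothly)
        for t in teams:
            if team_errors[t]:
                ratings[t] -= sum(team_errors[t]) / len(team_errors[t]) / 2
        # stop when the cycle error has essentially stopped changing
        if abs(prev_cycle_error - cycle_error) < tol:
            break
        prev_cycle_error = cycle_error
    # linear normalization: shift all ratings so the top team sits at 100
    shift = 100.0 - max(ratings.values())
    return {t: r + shift for t, r in ratings.items()}
```

With a handful of made-up results, the returned ratings order the teams by how they performed head to head, with the top team normalized to 100.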
(SUM[Home Team Scores] - SUM[Visiting Team Scores]) / Number of Total Games = HFA
Home Team PR + HFA - Visiting Team PR = Predicted Goal Difference
Home Team Score - Visiting Team Score = Actual Goal Difference
Predicted Goal Difference - Actual Goal Difference = PR Error
SUM[PR Error] / Number of Games Played = Average PR Error (calculated per team and
used for adjustments)
SUM[PR Error] = Cycle Error (calculated for all games)
Cycle Error[current cycle] - Cycle Error[previous cycle] = Cycle Convergence Error
The algorithm stops when Cycle Convergence Error < .01 (or after 100 cycles,
whichever comes first).
All Power Ratings are then normalized linearly to a top rating value of 100.
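The formulas above can be walked through with made-up numbers. The season totals, ratings, and score below are hypothetical, chosen only to show how each quantity is derived from the previous one:

```python
# Hypothetical season totals used to derive the home-field advantage (HFA):
home_goals, visiting_goals, total_games = 2150, 2050, 200
hfa = (home_goals - visiting_goals) / total_games   # -> 0.5

# One hypothetical game: a home team rated 95 hosts a team rated 92,
# and wins 10-8.
predicted_diff = 95 + hfa - 92            # predicted goal difference: 3.5
actual_diff = 10 - 8                      # actual goal difference: 2
pr_error = predicted_diff - actual_diff   # PR error for this game: 1.5
```

A positive PR error means the home team was predicted to win by more than it actually did, so its rating is adjusted downward (and its opponent's upward) in the next cycle.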
The IHSLA method employs two "adjustments" to the standard power rating
scheme: the 10-Goal Limit factor and the Top Team factor.
The "10-Goal Limit" factor (TGL)
The TGL adjusts the ratings so that a winning team gains no additional
power points for winning by more than ten goals. Similarly, the defeated team loses
no power points for losing by more than ten goals. The IHSLA recognizes
that a ten-goal differential in a game results in many unusual situations such as
a running clock, more substitutions, etc. that may not represent the true performance
of either team. In addition, the TGL removes any incentive for a coach to "run up
the score" merely to improve the team's power rating.
The "Top Team" factor
The scores from various games are not all weighted equally. Games in which the opponent
is highly rated are weighted more heavily than games in which the opponent is rated
lower. This factor awards more power points to teams with difficult schedules
and reduces the power points for teams with easier schedules. It rewards teams that
win against strong opponents, and it diminishes the rewards for teams that win against
weak opponents.
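The document does not publish the exact weight function, so the sketch below assumes a simple linear weight on the opponent's rating when averaging a team's per-game errors. The function name and the sample ratings are illustrative only.

```python
def weighted_avg_error(errors_and_opponents, ratings):
    """errors_and_opponents: list of (pr_error, opponent_name) pairs.

    Assumption: each game's error is weighted by the opponent's current
    power rating, so games against stronger opponents count more.
    """
    weights = [ratings[opp] for _, opp in errors_and_opponents]
    total = sum(weights)
    return sum(e * w for (e, _), w in zip(errors_and_opponents, weights)) / total
```

For example, with opponents rated 100 and 80, an error against the stronger opponent pulls the weighted average further than the same error against the weaker one, which is the qualitative behavior the Top Team factor describes.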
Strength of Schedule (SOS)
The SOS rating (not displayed) is a reference to the strength of a team's opponents.
It is calculated from the average power rating for a team's opponents in games
played. The SOS rating does not reflect the strength of opponents for unplayed games,
exhibition games, or tournament games. The SOS Ranking (displayed) is a team's
rank relative to the other teams based on its SOS rating. The higher the SOS rating,
the lower the SOS rank. The SOS ratings and rankings are not used directly in any
other calculations because the power ratings are already weighted in favor of strong
opponents using the Top Team factor.
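The SOS rating as described is a straightforward average over completed games. A minimal sketch, with made-up team names and ratings:

```python
def sos_rating(team, played_games, ratings):
    """Average power rating of a team's opponents in completed league games.

    played_games: list of (home, away) pairs; unplayed, exhibition, and
    tournament games should already be excluded from this list.
    """
    opponents = [away if home == team else home
                 for home, away in played_games
                 if team in (home, away)]
    return sum(ratings[o] for o in opponents) / len(opponents)
```

A team that has played opponents rated 95 and 90 gets an SOS rating of 92.5; teams are then ranked by this value, with the highest SOS rating earning rank #1.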
But what does it all mean?
The power rating system is a computerized method of predicting the difference in
scores between two teams. Theoretically, a team with a power rating of 95 is expected
to defeat a team with a power rating of 92 by three goals on a neutral field. Of
course, the actual outcome of such a game may be different depending on many circumstances,
but, on average, the power ratings tend to properly rank teams based on how they
performed against one another. The more games that are played within the league,
the more accurately the power ratings reflect the true performance of the teams and
the stronger the prediction mechanism becomes.
How accurate is it?
The graph below shows the relative frequency of errors arising from score predictions
using the IHSLA power ratings for the varsity games completed so far this season.
This statistical analysis suggests the power rating prediction has a standard deviation
of about one goal. To put it another way, roughly 70% of the games played had
actual results within 2 goals of the predicted goal difference.