
View Full Version : Computer models/simulations to predict actual scores



thebootfitter
October 28th, 2013, 12:49 AM
The idea for this thread was kick started by FormerPokeCenter in the LSU-Furman discussion (http://www.anygivensaturday.com/showthread.php?145075-Furman-LSU/page3), in which FPC suggested that LSU would score 60 points against the NDSU defense. I didn't want to derail that thread any more, and I thought the captioned topic may be interesting to some folks on the forum.

I think we can all probably agree that there are many intangibles that affect the outcome of any given football game; thus, the name of this forum. For this forum and our respective passions for our teams and the game itself, the game must be played. Computer simulations can never in any manner replace the product that we see on the field each week.

However, there are times when we ask questions to which computer simulations may be able to offer some insight. Imperfect insight, no doubt. Fraught with caveats. But when hypothetical match ups are discussed that have no chance of happening, and there are not enough common opponents (often none) to draw any conclusions -- and even transitive properties across common opponents have only limited value -- then computer rankings and simulations are one of the only consistent metrics we can use to respond to these questions.


So... Here's the question that I pose: Do computer models offer us any statistically reliable measures to predict future match ups -- particularly those that cross divisions where the teams are not necessarily well connected?

Is anyone knowledgeable on this topic? Has anyone here ever done their own statistical analysis to investigate this or any similar questions? I've got a few ideas and will post some of the data and hypotheses that I am gathering, but I'd like to see what others have to say as well.

ursus arctos horribilis
October 28th, 2013, 12:56 AM
The idea for this thread was kick started by FormerPokeCenter in the LSU-Furman discussion (http://www.anygivensaturday.com/showthread.php?145075-Furman-LSU/page3), in which FPC suggested that LSU would score 60 points against the NDSU defense. I didn't want to derail that thread any more, and I thought the captioned topic may be interesting to some folks on the forum.

I think we can all probably agree that there are many intangibles that affect the outcome of any given football game; thus, the name of this forum. For this forum and our respective passions for our teams and the game itself, the game must be played. Computer simulations can never in any manner replace the product that we see on the field each week.

However, there are times when we ask questions to which computer simulations may be able to offer some insight. Imperfect insight, no doubt. Fraught with caveats. But when hypothetical match ups are discussed that have no chance of happening, and there are not enough common opponents (often none) to draw any conclusions -- and even transitive properties across common opponents have only limited value -- then computer rankings and simulations are one of the only consistent metrics we can use to respond to these questions.


So... Here's the question that I pose: Do computer models offer us any statistically reliable measures to predict future match ups -- particularly those that cross divisions where the teams are not necessarily well connected?

Is anyone knowledgeable on this topic? Has anyone here ever done their own statistical analysis to investigate this or any similar questions? I've got a few ideas and will post some of the data and hypotheses that I am gathering, but I'd like to see what others have to say as well.

There is/was a site doing this and it was actually pretty decent, but I can't remember the poster's exact username right now and would have to check some of the links to see if I can find it. I didn't see them promoting it on here this year, so I'm not sure if it is still active or not.

FormerPokeCenter
October 28th, 2013, 08:37 AM
If you can find the McNeese/UNI thread, there was a link to that site posted by one of the UNI fans...

But, again...here's the fundamental problem with that...when LSU does play FCS schools, they don't step on the gas and they often leave the parking brake engaged...

If you ask any of the knowledgeable FCS fans who attended, say, the App State/LSU game, which LSU won 24-0 and the McNeese/LSU game which LSU won 32-10, I think they'll tell you the same story...

Miles puts his kids in matchups that make them compete. Like running an off-tackle play from a formation they ONLY run out of, on 2nd and 15...statistics don't really tell you the whole story in a matchup like that...The LSU fans I was sitting with were concerned about the "closeness" of the game and were questioning virtually EVERY call Miles made...

And then there's the strategy angle...Look closely at who LSU has coming up next...does Furman run an offense or defense that's similar to what they're going to see in an important game? Or is there somebody coming up that the staff wants to distract by forcing them to watch and prepare for something that they might show against an FCS squad but won't do against a BCS foe?

FCS teams do the same things to each other when they schedule Division II's ;)

thebootfitter
October 28th, 2013, 09:21 AM
But, again...here's the fundamental problem with that...when LSU does play FCS schools, they don't step on the gas and they often leave the parking brake engaged...

I'm not disputing your comments at all. Anecdotally, I have seen similar behavior in other cross division match ups. But that is why I'm curious about the data. Is there any data that supports these anecdotal observations? Is there any data that is statistically reliable in these match ups?

I'm guessing there isn't enough of an economic advantage in answering these questions for much effort to have gone into them. But you have piqued my curiosity.

thebootfitter
October 28th, 2013, 12:03 PM
If you can find the McNeese/UNI thread, there was a link to that site posted by one of the UNI fans...
For reference, Clenz had mentioned a site in that thread (http://www.anygivensaturday.com/showthread.php?141340-9-McNeese-State-at-5-UNI-Game-Thread/page12&highlight=mcneese), but did not post any links. I know there are several sites that do simulated match ups. But I'm curious whether anyone has ever analyzed the predicted match ups against the actual scores after the games. I'm pretty certain the simulations themselves are built at least partially around scores from past match ups, so there has to be some data and conclusions on the topic somewhere. I'll do some more research as time permits.

IBleedYellow
October 28th, 2013, 12:11 PM
Random thought I just had. UNI THROTTLED McNeese and has since lost 4 straight and dropped out of the T25 while McNeese is ranked 7.

Woah.

ursus arctos horribilis
October 28th, 2013, 12:20 PM
For reference, Clenz had mentioned a site in that thread (http://www.anygivensaturday.com/showthread.php?141340-9-McNeese-State-at-5-UNI-Game-Thread/page12&highlight=mcneese), but did not post any links. I know there are several sites that do simulated match ups. But I'm curious whether anyone has ever analyzed the predicted match ups against the actual scores after the games. I'm pretty certain the simulations themselves are built at least partially around scores from past match ups, so there has to be some data and conclusions on the topic somewhere. I'll do some more research as time permits.

Here's the user and the latest link I could find to a simulator. You can check the user "ebemiss" for other links on this type of thing he had worked on.

http://www.anygivensaturday.com/showthread.php?125198-2012-FCS-Tournament-Simulation&p=1908462#post1908462

thebootfitter
October 28th, 2013, 12:45 PM
Random thought I just had. UNI THROTTLED McNeese and has since lost 4 straight and dropped out of the T25 while McNeese is ranked 7.

Woah.
AGS, baby... AGS! Still, even with many examples of wild swings like this, my guess is that there are trends that are statistically significant, even across divisions. In any data, you expect to have some outliers... some deviations from the mean. But are the trends themselves conclusive enough to make any reasonable judgments?

clenz
October 28th, 2013, 03:20 PM
Random thought I just had. UNI THROTTLED McNeese and has since lost 4 straight and dropped out of the T25 while McNeese is ranked 7.

Woah.
That's what happens when you have 18 starters/significant-playing-time players out of games/practice due to injuries and concussions....

No matter what kind of depth you have, that will kill any season right there. UNI through the 3rd quarter of the NDSU game is a perfect example of that (and the McNeese game is actually where the problems started). Farley breaks his leg, Kollmorgen apparently suffers a concussion, Xavier Williams suffers a foot injury, Brion Carnes gets a concussion, Phil Wright is out with a foot/ankle issue.


I'll have to find a list I put together of all of the players that are missing time. It was like 18 players

msupokes1
October 28th, 2013, 03:24 PM
Random thought I just had. UNI THROTTLED McNeese and has since lost 4 straight and dropped out of the T25 while McNeese is ranked 7.

Woah.

Great random thought, but with that, why did NDSU even make the playoffs the last 2 years? They lost to 6-5 YSU in 2011 and 7-4 In. St in 2012. Sometimes teams just have an off day. Even the good ones. When the wheels fell off, things went downhill quick.

BlackNGoldR3v0lut10n
October 30th, 2013, 11:50 AM
Found the link in another discussion (posting the actual link here)

http://nationalsportsrankings.com/index.php?option=com_oneonone

Twentysix
October 30th, 2013, 12:13 PM
Simulator says LSU wins by 2 TD, not 700 :\ it must be broken.

LSU 33 North Dakota St. 17

And has NDSU winning 3 of 100 games.

Twentysix
October 30th, 2013, 12:14 PM
NDSU wins vs K-State 43/100 times.

KSU's avg margin of victory 2.

24-22.

Twentysix
October 30th, 2013, 12:16 PM
Lol, it says NDSU has a better chance against LSU than App State has against NDSU, that's interesting.

NDSU vs App State NDSU wins 100/100 times, avg margin of victory 23. 37-14.

http://imageshack.us/a/img203/2946/a0y5.png

Skjellyfetti
October 30th, 2013, 12:18 PM
So... Here's the question that I pose: Do computer models offer us any statistically reliable measures to predict future match ups -- particularly those that cross divisions where the teams are not necessarily well connected?

Are they interesting and fun to play around with? Yes.

Statistically reliable? Hell no.

thebootfitter
October 30th, 2013, 12:49 PM
Are they interesting and fun to play around with? Yes.

Statistically reliable? Hell no.
Do you have any data to show that they are not statistically reliable?

Note that for something to be statistically reliable, it doesn't have to predict outcomes with 100% certainty or 100% precision, but rather the actual results have to be within certain tolerances, allowing for the variability of reality. In other words, if a simulator shows Team A beating Team B 60% of the time and by an average score of 24-21, and the actual score in the game played on the field is Team B winning 28-21, it doesn't mean that the simulator is not statistically reliable. Even if Team B wins that one game 38-3, it doesn't mean that the simulator is not statistically reliable. We need to compare the simulator against 100s or 1000s of actual games played to determine if there are any significant deviations from what the simulator is predicting. If 95% of the actual match ups produce scores that are within the bounds of results predicted by the simulator, then the conclusion is that it is statistically reliable. But if only 50% of the actual match ups are within those bounds produced by the simulator, then it is not statistically reliable.
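To illustrate the kind of check I have in mind, here's a rough Python sketch. The handful of (predicted, actual) spreads and the assumed interval width are made up purely for illustration; the real test would use a simulator's own score distributions over hundreds of games.

# Rough "coverage" check: a model is treated as statistically reliable if
# roughly 95% of actual spreads land inside its 95% prediction interval.
# All numbers below are placeholders for illustration only.

games = [
    # (predicted_spread, actual_spread), positive = home team margin
    (3.0, 7), (10.0, -3), (-4.5, -10), (14.0, 35), (6.5, 3),
]

ASSUMED_SD = 14.0   # assumed scatter of real outcomes around the prediction
Z_95 = 1.96         # half-width multiplier for a ~95% normal interval

covered = 0
for predicted, actual in games:
    lo = predicted - Z_95 * ASSUMED_SD
    hi = predicted + Z_95 * ASSUMED_SD
    if lo <= actual <= hi:
        covered += 1

print(f"{covered}/{len(games)} games inside the 95% interval ({covered / len(games):.0%})")

If coverage over hundreds of games sits near 95%, the model passes the bar I described above; if it's closer to 50%, it doesn't.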

Skjellyfetti
October 30th, 2013, 01:02 PM
Do you have any data to show that they are not statistically reliable?

Note that for something to be statistically reliable, it doesn't have to predict outcomes with 100% certainty or 100% precision, but rather the actual results have to be within certain tolerances, allowing for the variability of reality. In other words, if a simulator shows Team A beating Team B 60% of the time and by an average score of 24-21, and the actual score in the game played on the field is Team B winning 28-21, it doesn't mean that the simulator is not statistically reliable. Even if Team B wins that one game 38-3, it doesn't mean that the simulator is not statistically reliable. We need to compare the simulator against 100s or 1000s of actual games played to determine if there are any significant deviations from what the simulator is predicting. If 95% of the actual match ups produce scores that are within the bounds of results predicted by the simulator, then the conclusion is that it is statistically reliable. But if only 50% of the actual match ups are within those bounds produced by the simulator, then it is not statistically reliable.

I agree for the most part.

However, the bolded part is where my contention lies. I haven't seen anything indicating that these models are anywhere close to 95% accurate within a margin of error of a few points.

When they reach that kind of accuracy... sports betting would be easy money (and it would probably be the end of sports betting).

thebootfitter
October 30th, 2013, 01:11 PM
I agree for the most part.

However, the bolded part is where my contention lies. I haven't seen anything indicating that these models are anywhere close to 95% accurate within a margin of error of a few points.
Yeah, and that's what I am trying to determine. I figured I'd ask here first in case anyone else has done the research, but it is not looking like anyone has. My next step will be to gather data to see if there are any trends on a smaller scale and, depending on what I find, I may have to dig a little deeper and get more data to see what the trends are at a larger scale.

My initial forays into this suggest that the simulators are at least somewhat reliable in predicting scores. Too early to draw definite conclusions though.

MR. CHICKEN
October 30th, 2013, 01:16 PM
18460.....RUN YER ?'s.....PAST IBM's/JEOPARDY'S...WATSON.........xrotatehx......BRAWK!

http://blogs.plos.org/retort/files/2011/02/IBM-Watson.jpeg

dystopiamembrane
October 30th, 2013, 01:46 PM
Yale beats Cal Poly 2% of the time.

yorkcountyUNHfan
October 30th, 2013, 02:04 PM
W&M beats UNH 52 out of 100 times
Predicted score 26 to 26

bisonboone11
October 30th, 2013, 02:21 PM
Simulator says LSU wins by 2 TD, not 700 :\ it must be broken.

LSU 33 North Dakota St. 17

And has NDSU winning 3 of 100 games.
I found a similar flaw. It says NDSU would beat Chattanooga at Chattanooga 80 out of 100 matchups. Everyone knows Chattanooga wouldn't lose. I heard they have had a thousand national championship games held in their stadium, which makes them better than everyone else.

aces1180
October 30th, 2013, 02:32 PM
Just for fun...

North Dakota St.(2013) (H) wins 100 of 100 matchups against North Dakota(2013) (A)
Projected Score: North Dakota St. 46 North Dakota 11
Margin of Victory: 35 Points

North Dakota St.(2013) (A) wins 100 of 100 matchups against North Dakota(2013) (H)
Projected Score: North Dakota St. 40 North Dakota 11
Margin of Victory: 29 Points


xawesomex

FormerPokeCenter
October 30th, 2013, 05:09 PM
I ran it with McNeese at South Florida...it says South Florida wins 66 of those games, by an average margin of 7 points...Every simulation has us scoring about 10 points less than we actually did...

I ran it with McNeese and LSU for 2010...and it shows LSU winning 100 of 100 by an average margin of 35. The actual margin of victory for the game we played there in 2010 was 22...

I ran it for LSU and NDSU and it has LSU winning those...by an average of 22 points....there were roughly 20 of the simulations showing LSU scoring 50 or more, including a 60 point outing or two...

Matching us up with UNI shows UNI winning 81 of 100 by an average margin of 10 points...

So, I dunno...

thebootfitter
October 31st, 2013, 04:52 AM
Okay, I pulled together a few results from both Sagarin and Massey for LSU's games thus far this year. Granted, I am pulling these numbers after the fact such that the models have been updated with the results from the games for which I am using them to predict the scores. I don't have the archived data from each week to view the projected results immediately prior to the game being played. But I still think the overall figures are rather telling in this particular case.

What I'm showing here are the actual spreads (LSU score minus opponent's score) for each game, along with the Sagarin predicted spread and the Massey predicted spread. The overall average difference between the actual spreads and what Sagarin predicts is only 0.9 points. For Massey, the average difference is -1.4 points. Of course, there is some variance around the means, but overall, it appears that both Sagarin and Massey are very good at predicting outcomes of match ups once the data from all games, including the actual game is included into their respective systems.

Opponent / Actual Spread / Sag Predicted Spread / Massey Predicted Spread
TCU / 10 / 8.8 / 18
UAB / 39 / 37.3 / 39
Kent State / 32 / 34.5 / 34
Auburn / 14 / 8.8 / 5
Georgia / -3 / 0.9 / 5
Miss St / 33 / 13.0 / 18
Florida / 11 / 9.8 / 6
Ole Miss / -3 / 0.3 / 5
Furman / 32 / 43.7 / 48

This limited data does support FPC's observation that LSU didn't play to their full potential against Furman, since Sagarin was predicting that they would beat Furman by an additional 12 points and Massey predicted 16 more points than the actual game spread.

Interestingly, Sagarin predicts that LSU would beat the Bison by 13.7 points at LSU and only by 6.1 points if the game were played in the Fargodome. Massey predicts LSU would win by an average of 14 points. Both models inherently assume they play hard the whole game.

According to Massey, the Bison have the 22nd best defense in all of college football at this point in the season. Compared to LSU's other opponents this year, only Florida (2), TCU (15), Ole Miss (16) and Auburn (17) have better defenses. LSU scored an average of 28 points against those quality defenses. Collectively, their other opponents have an average defense ranking per Massey of 106th in all of D1 college football. LSU scored an average of 50 points against the other teams.

With respect to FPC's comments on the other thread referenced in the OP: Given how closely these two computer models predict the outcomes of LSU's games already played this season, and their respective predictions of how the Bison might fare against LSU, I have a hard time believing that LSU could score at will against the Bison defense. If LSU truly played conservatively against the Bison instead of going all out, I could see it being a pretty close game, though in all likelihood, LSU would win in the end.

What I want to start collecting are the predictions for several games by several models prior to the games occurring, then compare those predictions against the actual scores post-games. I'm very curious to see how the predicted vs. actual spreads compare over several hundred actual games. My guess is that the average predictions will be pretty darn close to the average results.
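If anyone wants to check the arithmetic, here's a quick Python sketch that reproduces the two averages from the table above (nothing fancy, just the table retyped as data):

# (opponent, actual spread, Sagarin predicted spread, Massey predicted spread)
games = [
    ("TCU",        10,  8.8, 18),
    ("UAB",        39, 37.3, 39),
    ("Kent State", 32, 34.5, 34),
    ("Auburn",     14,  8.8,  5),
    ("Georgia",    -3,  0.9,  5),
    ("Miss St",    33, 13.0, 18),
    ("Florida",    11,  9.8,  6),
    ("Ole Miss",   -3,  0.3,  5),
    ("Furman",     32, 43.7, 48),
]

# Average of (actual - predicted); positive means LSU outperformed the prediction.
sag_diff = sum(actual - sag for _, actual, sag, _ in games) / len(games)
mas_diff = sum(actual - mas for _, actual, _, mas in games) / len(games)

print(f"Average actual-minus-Sagarin spread: {sag_diff:+.1f} points")  # +0.9
print(f"Average actual-minus-Massey spread:  {mas_diff:+.1f} points")  # -1.4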

dystopiamembrane
October 31st, 2013, 07:44 AM
What I want to start collecting are the predictions for several games by several models prior to the games occurring, then compare those predictions against the actual scores post-games. I'm very curious to see how the predicted vs. actual spreads compare over several hundred actual games. My guess is that the average predictions will be pretty darn close to the average results.

This is crazy talk. Computers can't watch the games.

mvemjsunpx
November 1st, 2013, 07:22 AM
Big Sky

PSU over Weber - 100% (43-22 avg. score)
Cal Poly over Davis - 77% (24-15)
Montana over Sac - 90% (33-20)
EWU over Idaho St. - 100% (45-17)
NAU over NoDak - 96% (34-16)
MSU over NoCo - 100% (37-16)

Lehigh'98
November 1st, 2013, 07:58 AM
If Chatty played Lehigh, the simulator would implode and create a black hole that engulfed our solar system.

AmsterBison
November 1st, 2013, 08:08 AM
Big Sky

PSU over Weber - 100% (43-22 avg. score)
Cal Poly over Davis - 77% (24-15)
Montana over Sac - 90% (33-20)
EWU over Idaho St. - 100% (45-17)
NAU over NoDak - 96% (34-16)
MSU over NoCo - 100% (37-16)

Nice work.

MVFC

Youngstown State over South Dakota - 89% (31-18)
Missouri State over Indiana State - 73% (26-19)
Northern Iowa over Illinois State - 88% (33-22)
Southern Illinois over Western Illinois - 86% (24-13)

CrazyCat
November 1st, 2013, 11:05 AM
I used that simulator for my score predictions on Bobcat Nation last year. I did not do well.

URMite
November 1st, 2013, 11:46 AM
I want to collect data of hundreds of games between the same two teams with all the same players at the same ages. I am of the opinion that this would provide the most reliable statistics. Can someone provide me a link?

Pard4Life
November 1st, 2013, 04:25 PM
I use Massey Ratings:

http://masseyratings.com/game.php?s0=199231&t0=North+Dakota+State&h=0&s1=199231&t1=Louisiana+State+University

thebootfitter
November 3rd, 2013, 04:18 AM
I have pulled the scores for all 106 games between Div I opponents this week and compared them to Sagarin's Predictor scores. Overall, the average difference between Sagarin's predicted spread and the actual spread is only 1.3 points. Overall, that's a pretty good average. However, the average only tells part of the story.

The standard deviation is 15 -- meaning that, assuming a roughly normal distribution, about 68% of the observations fall within 15 points of the mean in either direction and about 95% fall within 30 points in either direction. We obviously have to expect some variability to account for the AGS effect. But that's a pretty wide range of results.

The largest difference between predicted spread and actual spread was the Eastern Kentucky win over Tennessee State. Sagarin was predicting a very close match up, with EKU winning by a point, but EKU was firing on all cylinders and blanked Tennessee State 44-0. There were a handful of games that played out with the exact spread that Sagarin predicted, including the UTC win over App State and Eastern Illinois win over Tennessee Tech.

Out of 106 games:
(7.5%) 8 games were within 2 points of Sagarin's predictions in either direction
(15.1%) 16 games were between 2 & 5 points off Sagarin's predictions
(24.5%) 26 games were between 5 & 10 points off Sagarin's predictions
(23.6%) 25 games were between 10 & 15 points off Sagarin's predictions
(29.2%) 31 games were greater than 15 points off Sagarin's predictions

Sagarin's Predictor score correctly predicted the winner in 47 games (44.3%). When Sagarin incorrectly predicted the winner, he was off by an average of 12 points.

Based on results from this week alone, I am impressed with the overall average difference between the predicted spread and the actual spread, but I am a bit surprised at the overall deviations from the mean.
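For anyone curious how these summaries are tabulated, the calculation is roughly the following Python sketch. The (predicted, actual) pairs here are placeholders; the real inputs are the 106 predicted and actual spreads from my spreadsheet.

import statistics

# Placeholder (predicted_spread, actual_spread) pairs, one per game,
# both measured from the same team's point of view (e.g. home minus away).
games = [
    (1.0, 44), (7.5, 3), (-3.0, -10), (21.0, 24), (10.5, -7), (14.0, 14),
]

errors = [actual - predicted for predicted, actual in games]
mean_error = statistics.mean(errors)
sd_error = statistics.stdev(errors)
correct_winner = sum(1 for p, a in games if a != 0 and (p > 0) == (a > 0))

# Bucket each game by how far the actual spread landed from the prediction.
buckets = {"0-2": 0, "2-5": 0, "5-10": 0, "10-15": 0, "15+": 0}
for e in errors:
    d = abs(e)
    if d <= 2:
        buckets["0-2"] += 1
    elif d <= 5:
        buckets["2-5"] += 1
    elif d <= 10:
        buckets["5-10"] += 1
    elif d <= 15:
        buckets["10-15"] += 1
    else:
        buckets["15+"] += 1

n = len(games)
print(f"Average error: {mean_error:+.1f}, standard deviation: {sd_error:.1f}")
print(f"Correct winner: {correct_winner}/{n} ({correct_winner / n:.1%})")
for label, count in buckets.items():
    print(f"{label} points off: {count}/{n} ({count / n:.1%})")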

thebootfitter
November 7th, 2013, 03:03 PM
Sagarin's Predictor score correctly predicted the winner in 47 games (44.3%). When Sagarin incorrectly predicted the winner, he was off by an average of 12 points.
CORRECTION:
The formula I used to get this statistic was actually pulling the games where Sagarin not only predicted the winner correctly, but also where his predicted point differential was at least as great as the actual point differential.

After correcting this, I found that Sagarin predicted the straight up winner correctly 81 times.

I also noticed that I inadvertently included one game between a DI-AA and a DII opponent, because Incarnate Word played AT DII McMurry. So the total number of DI games played was 105 instead of 106.
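To make the mistake concrete, here's roughly what the flawed check amounted to versus the corrected one, sketched in Python (the actual work was in a spreadsheet formula, so this is just an illustration):

def predicted_winner_flawed(predicted, actual):
    # What I was effectively counting: same winner AND the predicted margin
    # at least as large as the actual margin -- far too strict.
    return (predicted > 0) == (actual > 0) and abs(predicted) >= abs(actual)

def predicted_winner_correct(predicted, actual):
    # What I meant to count: the predicted and actual spreads simply
    # point the same way, i.e. the same team wins.
    return (predicted > 0) == (actual > 0)

# Example: Sagarin favors a team by 3 and that team wins by 10.
print(predicted_winner_flawed(3.0, 10))   # False -- wrongly excluded before
print(predicted_winner_correct(3.0, 10))  # True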

thebootfitter
November 11th, 2013, 03:43 AM
Overall, the average predicted vs actual point differential is quite small this week (only 0.3 points), but the variance is up a bit with a standard deviation of 16. Sagarin correctly predicted the winner 76% of the time.

Ironically, the largest difference between the predicted point differential and the actual one was once again an Eastern Kentucky game, in which Sagarin predicted a very close outcome against Jacksonville State. Only this week, the difference was not in favor of EKU. Maybe they win the award for the most Jekyll/Hyde team? Some of the games that were predicted most closely include Georgia over App St (0.2 pt difference), EIU over Murray St. (0.6 pt difference), and Chattanooga over Wofford (1.2 pt difference). For two weeks in a row, EIU, UTC, and App St. have all been performing on par with Sagarin's predictions.


Out of 108 games between DI opponents:
--> Average difference between Sagarin's predicted point differential and the actual point differential is only 0.3 points (1.3 points last week), with a standard deviation of 16 points (15 last week).
--> (75.9%) 82 games - Sagarin correctly predicted the winner (77% last week)
--> (15.7%) 17 games were within 2 points of Sagarin's predictions in either direction (7.5% last week)
--> (12.0%) 13 games were between 2 & 5 points off Sagarin's predictions (15.1% last week)
--> (21.3%) 23 games were between 5 & 10 points off Sagarin's predictions (24.5% last week)
--> (15.7%) 17 games were between 10 & 15 points off Sagarin's predictions (23.6% last week)
--> (35.2%) 38 games were greater than 15 points off Sagarin's predictions (29.2% last week)

thebootfitter
November 11th, 2013, 04:14 AM
Okay, I pulled together a few results from both Sagarin and Massey for LSU's games thus far this year...

What I'm showing here are the actual spreads (LSU score minus opponent's score) for each game, along with the Sagarin predicted spread and the Massey predicted spread. The overall average difference between the actual spreads and what Sagarin predicts is only 0.9 points. For Massey, the average difference is -1.4 points. Of course, there is some variance around the means, but overall, it appears that both Sagarin and Massey are very good at predicting outcomes of match ups once the data from all games, including the actual game is included into their respective systems.

Opponent / Actual Spread / Sag Predicted Spread / Massey Predicted Spread
TCU / 10 / 8.8 / 18
UAB / 39 / 37.3 / 39
Kent State / 32 / 34.5 / 34
Auburn / 14 / 8.8 / 5
Georgia / -3 / 0.9 / 5
Miss St / 33 / 13.0 / 18
Florida / 11 / 9.8 / 6
Ole Miss / -3 / 0.3 / 5
Furman / 32 / 43.7 / 48
Alabama / -21 / -17.8 / -16


Both Sagarin and Massey were pretty good at predicting the LSU-Alabama game too.

thebootfitter
November 19th, 2013, 03:33 AM
Out of 106 games between DI opponents:
--> Average difference between Sagarin's predicted point differential and the actual point differential is only 0.2 points (0.3 points last week), with a standard deviation of 14 points (16 last week).
--> (81.1%) 86 games - Sagarin correctly predicted the winner (75.9% last week)
--> (11.3%) 12 games were within 2 points of Sagarin's predictions in either direction (15.7% last week)
--> (17.9%) 19 games were between 2 & 5 points off Sagarin's predictions (12.0% last week)
--> (30.2%) 32 games were between 5 & 10 points off Sagarin's predictions (21.3% last week)
--> (12.3%) 13 games were between 10 & 15 points off Sagarin's predictions (15.7% last week)
--> (28.3%) 30 games were greater than 15 points off Sagarin's predictions (35.2% last week)

ElCid
November 19th, 2013, 08:44 PM
Opponent / Actual Spread / Sag Predicted Spread / Massey Predicted Spread
TCU / 10 / 8.8 / 18
UAB / 39 / 37.3 / 39
Kent State / 32 / 34.5 / 34
Auburn / 14 / 8.8 / 5
Georgia / -3 / 0.9 / 5
Miss St / 33 / 13.0 / 18
Florida / 11 / 9.8 / 6
Ole Miss / -3 / 0.3 / 5
Furman / 32 / 43.7 / 48
Alabama / -21 / -17.8 / -16


Both Sagarin and Massey were pretty good at predicting the LSU-Alabama game too.

Quick question, these look like good predictions, but did you compute them for each of these contests per the ratings listed as of 11 Nov or did you use the ratings that came out on the Sunday prior to each game? If the spreads were computed after all these games were played, it makes sense that they would be close since the results were already included in the ratings. But if you look at the ratings the week that they were played they are a bit different. Not a lot, but a bit. For instance, the Sagarin ratings that came out on 22 Sept for the Georgia-LSU game gave a predictive spread of 6 in Ga's favor at the time. Prior to the Auburn game the Sag ratings that came out on 15 Sep had a spread of LSU -13.

Just curious.

thebootfitter
November 19th, 2013, 09:59 PM
Quick question, these look like good predictions, but did you compute them for each of these contests per the ratings listed as of 11 Nov or did you use the ratings that came out on the Sunday prior to each game? If the spreads were computed after all these games were played, it makes sense that they would be close since the results were already included in the ratings. But if you look at the ratings the week that they were played they are a bit different. Not a lot, but a bit. For instance, the Sagarin ratings that came out on 22 Sept for the Georgia-LSU game gave a predictive spread of 6 in Ga's favor at the time. Prior to the Auburn game the Sag ratings that came out on 15 Sep had a spread of LSU -13.

Just curious.
I didn't have immediate access to the ratings from previous weeks, so at one point (late Oct, I think), I started gathering them week by week. But everything prior to that is backward looking from that one point in time. I believe in my original post with those stats I disclose that shortcoming, but it may not have been clear.

Next year, I plan to start tracking these stats at the beginning of the season and see how the system's "skill" improves over time as the teams become better connected.
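The plan is nothing fancy. Something like the Python sketch below: write the predicted spreads to a file before kickoff each week so they can't be contaminated by results, then join them to the actual spreads afterward. File names and column layout here are just placeholders.

import csv
from datetime import date

def log_predictions(rows, path="predictions.csv"):
    # rows: dicts like {"week": 12, "home": "...", "away": "...", "predicted_spread": 7.5}
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["logged_on", "week", "home", "away", "predicted_spread"])
        if f.tell() == 0:
            writer.writeheader()
        for row in rows:
            writer.writerow({"logged_on": date.today().isoformat(), **row})

def compare(predictions_path="predictions.csv", results_path="results.csv"):
    # results.csv is assumed to hold week, home, away, actual_spread for each game.
    with open(results_path, newline="") as f:
        results = {(r["week"], r["home"], r["away"]): float(r["actual_spread"])
                   for r in csv.DictReader(f)}
    errors = []
    with open(predictions_path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["week"], row["home"], row["away"])
            if key in results:
                errors.append(results[key] - float(row["predicted_spread"]))
    if errors:
        print(f"{len(errors)} games, average error {sum(errors) / len(errors):+.1f} points")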

Catbooster
November 19th, 2013, 11:44 PM
Can you tell whether the predictions are more accurate for FBS than for FCS?

thebootfitter
November 20th, 2013, 12:04 AM
Can you tell whether the predictions are more accurate for FBS than for FCS?
I haven't really started analyzing at that level of detail yet. Next time I pull the stats, I'll introduce that variable and see if there is a difference.

If I can find the archives to pull the week by week ratings for this entire year, I'd like to analyze for this entire season, but I haven't yet found anywhere that retroactive weekly ratings are available online.