CSN Final Season GPI



underdawg
November 26th, 2013, 01:53 PM
1. NDSU

2. EIU

3. EWU

4. SELa

5. Towson

7T. SDSU

7T. McNeese

9. Maine

10. NAU

11. Coastal Carolina

12T. UNI

12T. YSU

14. SIU Salukis

15. Harvard

16. Fordham

17. Bethune-Cookman

18. TSU

19. NHU

20. Villanova

21. SHSU

22. JSU

23. Princeton

24. W&M

25. Old Dominion


Conference Power Rankings

1. MVFC (27.11)

2. CAA (34.23)

3. Southland (35.23)

4. OVC (37.94)

5. Big Sky (43.65)

gregatim
November 26th, 2013, 02:02 PM
I have to admit I'm not really familiar with the GPI and how it compares to reality. Some thoughts from the more seasoned veterans here would be an interesting read. Is this reasonably accurate? Wasn't this a tool that the selection committee uses, or at least used to use, in picking the playoff field? Sure doesn't seem like they gave it much credence this year if they do.

bluehenbillk
November 26th, 2013, 02:24 PM
Is it reasonably accurate? Depends on your definition of reasonable. Evidently the committee doesn't use it, as it consistently differs from the committee's selections.

yorkcountyUNHfan
November 26th, 2013, 02:29 PM
NHU?

superman7515
November 26th, 2013, 02:31 PM
North Hawai'i Fightin' Victorinos

dbackjon
November 26th, 2013, 02:36 PM
GPI is a joke

SIUSalukiFan
November 26th, 2013, 02:53 PM
GPI is a joke

How so? It seems to have pegged the Lumberjacks pretty well.

RabidRabbit
November 26th, 2013, 02:54 PM
Also - did it include App St. and Ga Southern? Old Dominion is included, should they have been?

If you take the indexing as appropriate, there is a reason you can say the MVFC is the best. A lot of credit/weight goes to the anchor in the middle of the MVFC: the 12T-12T-14 of UNI/YSU/SIU. Mo St should be in the top 30 then.

Obviously "off" relative to the seeds and selection of at-larges.

Wonder if bookies are using this to make SDSU a 4.5-point favorite on the road vs. NAU. An SDSU tied for 7th should not be a 4.5-point favorite over a #10 NAU.

Here's an index on which I agree with dback.

Herder
November 26th, 2013, 02:55 PM
GPI is a joke

I watch a lot of FCS football, and the GPI is the proverbial "EYE TEST" when comparing teams. It is much more accurate about who is actually going to beat whom than the thing the selection committee rolled out. The selection committee's system, now that was a joke IMO.

AmsterBison
November 26th, 2013, 03:00 PM
GPI is a joke

If the GPI is a joke, then what's the SRS? A tragedy?

dbackjon
November 26th, 2013, 03:05 PM
How so? It seems to have pegged the Lumberjacks pretty well.


Occasional squirrel

dbackjon
November 26th, 2013, 03:06 PM
If the GPI is a joke, then what's the SRS? A tragedy?

Basically.

Any computer-driven ranking in football, with the lack of games overall and decided lack of inter-conference games, is weak.

Herder
November 26th, 2013, 03:19 PM
The selection committee's system, which seems to treat all 13 conferences as relatively equal, is way off base. How would a system like that go over in FBS football? There would be no difference between Alabama, FL State, Ohio State, Fresno and N Illinois. They would all be equal in the eyes of the FCS committee. That's the dish they are trying to serve us.

SIUSalukiFan
November 26th, 2013, 03:30 PM
Occasional squirrel

That's a solid reply.

Green26
November 26th, 2013, 03:36 PM
1. NDSU

2. EIU

3. EWU

4. SELa

5. Towson

7T. SDSU

7T. McNeese

9. Maine

10. NAU

11. Coastal Carolina

12T. UNI

12T. YSU

14. SIU Salukis

15. Harvard

16. Fordham

17. Bethune-Cookman

18. TSU

19. NHU

20. Villanova

21. SHSU

22. JSU

23. Princeton

24. W&M

25. Old Dominion


Conference Power Rankings

1. MVFC (27.11)

2. CAA (34.23)

3. Southland (35.23)

4. OVC (37.94)

5. Big Sky (43.65)

What about this one: "6. Montana (7.00)".

Are you a closet NAU fan?

underdawg
November 26th, 2013, 04:37 PM
What about this one: "6. Montana (7.00)".

Are you a closet NAU fan?

Sorry--My subconscious must not like Montana much

Wallace
November 26th, 2013, 07:48 PM
I have to admit I'm not really familiar with the GPI and how it compares to reality. Some thoughts from the more seasoned veterans here would be an interesting read. Is this reasonably accurate? Wasn't this a tool that the selection committee uses, or at least used to use, in picking the playoff field? Sure doesn't seem like they gave it much credence this year if they do.

The GPI was started shortly after the BCS was created to give us a similar measure (a mix of polls and computer ratings). In certain years it has been perfect in indicating at-large playoff selections, and it is usually as good as or better than any national publication. When the selection committee first announced it was going to officially use rankings in 2008, the GPI was one of three they named. Committee members are notorious for thinking they know better than any poll or rating, and this year they started using their own SRS in place of the three systems they used in the past. So it is what it is.
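If you are curious what that kind of mix looks like mechanically, here is a toy sketch in Python. The system names and rank numbers are invented for illustration; they are not the GPI's actual inputs or weights.

# Toy sketch of a composite index: average each team's rank across
# several rating systems (all names and numbers here are hypothetical).
rankings = {
    "computer_A": {"NDSU": 1, "EIU": 3, "EWU": 2, "Towson": 4},
    "computer_B": {"NDSU": 1, "EIU": 2, "EWU": 4, "Towson": 3},
    "human_poll": {"NDSU": 1, "EIU": 2, "EWU": 3, "Towson": 5},
}
teams = set.union(*(set(r) for r in rankings.values()))
composite = {t: sum(r[t] for r in rankings.values()) / len(rankings)
             for t in teams}
for team, avg in sorted(composite.items(), key=lambda kv: kv[1]):
    print(team, round(avg, 2))

The real GPI weights and selects its inputs; this only shows the averaging idea.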

Tubakat2014
November 26th, 2013, 08:03 PM
The selection committee's system, which seems to treat all 13 conferences as relatively equal, is way off base. How would a system like that go over in FBS football? There would be no difference between Alabama, FL State, Ohio State, Fresno and N Illinois. They would all be equal in the eyes of the FCS committee. That's the dish they are trying to serve us.

Except it doesn't treat all the conferences as equals. Saying that is like me claiming that win-loss record means nothing to you.

ElCid
November 26th, 2013, 10:35 PM
Any computer program is susceptible to the inherent bias of whoever created the algorithm. Maybe not a bias toward any specific team, but at least in how strength is determined. Does it start fresh every year, or is there a starting rating? If so, what is it based upon? Last year? Hmm. How much advantage for home and away? Is there a diminishing return on margin of victory, or is it absolute? Does it address Div II/III or FBS? The fact that all these data points can be shaved or padded to determine a ranking makes it just one more subjective opinion (that being the intentional or unintentional bias of the one who created the algorithm), just like any voter in any poll. It is just done with more precision, right or wrong.
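To make that concrete, here is a toy example of how two of those knobs (home-field points and a margin-of-victory cap) change what a single game is worth. The numbers are arbitrary, not any real system's parameters.

# One game's contribution to a rating under designer-chosen knobs
# (all values hypothetical, purely to illustrate the point).
def game_value(margin, winner_at_home, hfa_points, mov_cap):
    # discount a home win by the assumed home-field advantage
    adj = margin - hfa_points if winner_at_home else margin + hfa_points
    # diminishing returns: cap the usable margin of victory
    return max(min(adj, mov_cap), -mov_cap)

# The same 28-point home win under two parameter choices:
print(game_value(28, True, hfa_points=3, mov_cap=21))  # 21
print(game_value(28, True, hfa_points=4, mov_cap=14))  # 14

Same game, different credit--and nobody outside the author knows which knob settings are "right."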

thebootfitter
November 27th, 2013, 01:22 AM
Any computer-driven ranking in football, with the lack of games overall and decided lack of inter-conference games, is weak.
Mr. D-Back Jon,

I'm interested in your reasons for drawing your stated conclusion.

I'm not entirely sure I understand your post, but I suspect by "lack of games overall," you mean the omission of sub-DI games. Yes?

What do you mean by "decided lack of inter-conference games"? I am just not picking up what you're putting down here.

I'm truly curious. Not trying to push any buttons. Any insight you can offer is appreciated.

superman7515
November 27th, 2013, 06:57 AM
Might take dback a bit to get up with you, he's pretty busy this time of year getting ready for the holidays...

http://www.tshirthell.com/shirts/products/a1688/a1688_thumb.jpg

RabidRabbit
November 27th, 2013, 07:17 AM
Also, Dback is Jacked Up xrolleyesx about NAU's first return to the play-offs in a decade.

Hope Rabbits make it a short trip for Lumberjacks! xnodx

superman7515
November 27th, 2013, 07:21 AM
Dback is jacked about the Lumberjacks playing the Jackrabbits... That's a lot of jacking.

http://addins.whig.com/blogs/steviedirt/wp-content/uploads/2013/08/JACKoutspective-uncle-si-thats-a-fact-jack-300x300.png

Gil Dobie
November 27th, 2013, 07:44 AM
Dback is jacked about the Lumberjacks playing the Jackrabbits... That's a lot of jacking.

http://addins.whig.com/blogs/steviedirt/wp-content/uploads/2013/08/JACKoutspective-uncle-si-thats-a-fact-jack-300x300.png

;)


underdawg
November 27th, 2013, 08:30 AM
Any computer program is susceptible to the inherent bias of whoever created the algorithm. Maybe not a bias toward any specific team, but at least in how strength is determined. Does it start fresh every year, or is there a starting rating? If so, what is it based upon? Last year? Hmm. How much advantage for home and away? Is there a diminishing return on margin of victory, or is it absolute? Does it address Div II/III or FBS? The fact that all these data points can be shaved or padded to determine a ranking makes it just one more subjective opinion (that being the intentional or unintentional bias of the one who created the algorithm), just like any voter in any poll. It is just done with more precision, right or wrong.

Wow. Whenever each and every computer poll in America reflects a different reality than what your friends on the committee have whomped up, all you can offer is this gobbledygook? You do realize that the SRS is standing starkly alone among dozens of other polls that show a different reality, don't you? IMO the members just looked to the weekly human polls like the Coaches and Sporting News---both of which depend on nothing but the prejudice of the voters for their inspiration.

penguinpower
November 27th, 2013, 08:41 AM
GPI is irrelevant if it is not used

Hammerhead
November 27th, 2013, 08:43 AM
This computer rating looks better to me.
http://www.compughterratings.com/FCS/ratings



Week 13: Sunday Nov 24, 2013

Rank | School | Conference | Last Wk | W | L | Change | Overall Rank | Off Rank | Def Rank
1 | North Dakota St | Missouri Valley | 1 | 11 | 0 | -- | 25 | 64 | 31
2 | Eastern Illinois | Ohio Valley | 2 | 11 | 1 | -- | 29 | 18 | 101
3 | Eastern Washington | Big Sky | 3 | 10 | 2 | -- | 68 | 54 | 166
4 | SE Louisiana | Southland | 5 | 10 | 2 | +1 | 70 | 70 | 94
5 | Northern Arizona | Big Sky | 14 | 9 | 2 | +9 | 82 | 178 | 112
6 | South Dakota St | Missouri Valley | 17 | 8 | 4 | +11 | 83 | 105 | 91
7 | Coastal Carolina | Big South | 6 | 10 | 2 | -1 | 84 | 59 | 236
8 | Tennessee St | Ohio Valley | 7 | 9 | 3 | -1 | 87 | 208 | 90
9 | McNeese St | Southland | 10 | 10 | 2 | +1 | 89 | 73 | 156
10 | Jacksonville St | Ohio Valley | 11 | 9 | 3 | +1 | 90 | 120 | 145
11 | Harvard | Ivy League | 13 | 9 | 1 | +2 | 92 | 131 | 157
12 | Montana | Big Sky | 18 | 10 | 2 | +6 | 93 | 99 | 133
13 | Maine | CAA | 4 | 10 | 2 | -9 | 94 | 138 | 141
14 | Towson | CAA | 16 | 10 | 2 | +2 | 96 | 89 | 140
15 | Tennessee-Martin | Ohio Valley | 15 | 7 | 5 | -- | 100 | 172 | 128
16 | Bethune-Cookman | Mid-Eastern | 12 | 10 | 2 | -4 | 101 | 182 | 123
17 | Youngstown St | Missouri Valley | 9 | 8 | 4 | -8 | 103 | 92 | 155
18 | Southern Illinois | Missouri Valley | 20 | 7 | 5 | +2 | 106 | 130 | 105
19 | Fordham | Patriot League | 21 | 11 | 1 | +2 | 107 | 114 | 206
20 | Northern Iowa | Missouri Valley | 25 | 7 | 5 | +5 | 109 | 145 | 65
21 | Old Dominion | FCS Independents | 22 | 8 | 4 | +1 | 114 | 46 | 266
22 | Princeton | Ivy League | 8 | 8 | 2 | -14 | 121 | 79 | 198
23 | Murray St | Ohio Valley | 32 | 6 | 6 | +9 | 126 | 133 | 183
24 | Villanova | CAA | 27 | 6 | 5 | +3 | 129 | 128 | 109
25 | New Hampshire | CAA | 33 | 7 | 4 | +8 | 130 | 139 | 143
26 | Sam Houston St | Southland | 19 | 8 | 4 | -7 | 131 | 94 | 162
27 | South Carolina St | Mid-Eastern | 28 | 9 | 3 | +1 | 134 | 222 | 106
28 | Chattanooga | Southern | 34 | 8 | 4 | +6 | 136 | 168 | 102
29 | Southern Utah | Big Sky | 26 | 8 | 4 | -3 | 138 | 289 | 96
30 | Illinois St | Missouri Valley | 31 | 5 | 6 | +1 | 140 | 151 | 121
31 | William & Mary | CAA | 23 | 7 | 5 | -8 | 141 | 265 | 46
32 | Eastern Kentucky | Ohio Valley | 24 | 6 | 6 | -8 | 142 | 200 | 144
33 | Central Arkansas | Southland | 39 | 7 | 5 | +6 | 143 | 142 | 149
34 | Samford | Southern | 37 | 8 | 4 | +3 | 147 | 129 | 159
35 | Missouri St | Missouri Valley | 36 | 5 | 7 | +1 | 150 | 118 | 93
36 | Furman | Southern | 42 | 7 | 5 | +6 | 152 | 229 | 115
37 | Charleston Southern | Big South | 29 | 10 | 3 | -8 | 156 | 313 | 189
38 | Cal Poly SLO | Big Sky | 38 | 6 | 6 | -- | 161 | 164 | 84
39 | Liberty | Big South | 45 | 8 | 4 | +6 | 163 | 159 | 132
40 | Georgia Southern | Southern | 58 | 7 | 4 | +18 | 164 | 117 | 158
41 | Sacred Heart | Northeast | 41 | 10 | 2 | -- | 166 | 205 | 222
42 | Lehigh | Patriot League | 30 | 8 | 3 | -12 | 167 | 204 | 294
43 | Delaware | CAA | 35 | 7 | 5 | -8 | 169 | 135 | 251
44 | Montana St | Big Sky | 40 | 7 | 5 | -4 | 173 | 149 | 168
45 | Tennessee Tech | Ohio Valley | 43 | 5 | 7 | -2 | 174 | 194 | 180
46 | Northwestern St | Southland | 44 | 6 | 6 | -2 | 177 | 213 | 181
47 | Dartmouth | Ivy League | 59 | 6 | 4 | +12 | 179 | 224 | 135
48 | Richmond | CAA | 52 | 6 | 6 | +4 | 180 | 162 | 165
49 | South Dakota | Missouri Valley | 48 | 4 | 8 | -1 | 186 | 223 | 118
50 | San Diego | Pioneer | 46 | 8 | 3 | -4 | 187 | 192 | 237

ElCid
November 27th, 2013, 10:54 AM
Wow. Whenever each and every computer poll in America reflects a different reality than what your friends on the committee have whomped up, all you can offer is this gobbledygook? You do realize that the SRS is standing starkly alone among dozens of other polls that show a different reality, don't you? IMO the members just looked to the weekly human polls like the Coaches and Sporting News---both of which depend on nothing but the prejudice of the voters for their inspiration.

Not really sure what your point is, since you did not make it very clear. I was in no way supporting the use of the SRS, or any computer poll for that matter. Are you saying some computer polls are always accurate? OK... which ones? If you understand how computer polls are formulated, you will understand that they are no more valid than any human poll. Human polls may or may not be correct, but a computer poll may simply be more precise while consistently determining an incorrect result. It can be garbage in and garbage out. A great example of this: how do Princeton and Harvard show up at 14/15 in Sagarin? They would be handled by at least 15-20 teams ranked below them.

Let's take a specific team and see what spot it ends up at in each of the various polls. I just picked a few, human and computer; I did not look at every single computer poll out there, just the major ones. A lot of people were upset that N Arizona did not get a seed, so let's look at them. Well, in Sagarin they were 33; in Massey, 17. In this GPI they were 10. On compughterratings.com, they were 5th. How is that for a wide range? In the SRS they were 11. Not entirely consistent. Why? Because each of these computer polls was created by different people. Now let's compare the human polls. NAU was 9 in the Coaches, 8 in the Sports Network, 6 in AGS. A little more consistent. So between the human polls and the computers they were 5, 6, 8, 9, 10, 11, 17, 33. Hmm, which were they really? I don't know for sure. I only know where I placed them, and I think it was 8th.

What about YSU? Sag-17, Massey-16, GPI-12, Computerratings-17, SRS-24: Not bad, but still off by 12 at the extremes. Human: Coaches-19, Sports Network-17, AGS-19. So the human and computer polls came fairly close in this case. 12, 16, 17, 17, 17, 19, 19, 24.

What about S Utah, since some folks think they were a surprise? Sag-52, Massey-41, GPI-38, Computerratings-29, SRS-21. Wow, another big range. Human: Coaches-25, Sports Network-22, AGS-20. Big swing here: 52, 41, 38, 29, 25, 22, 21, 20.

I could do it for every team (except for maybe NDSU xbowx) and I would find the same result. And that is that the computer rankings may be close at times, but they can be all over the place as well. There is no program which can accurately rank the teams consistently.
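To put all those numbers side by side, a few lines of Python will do it; the lists below are just the ranks I quoted above.

# The ranks quoted above, one list per team, human and computer systems mixed.
import statistics

ranks = {
    "N Arizona":  [33, 17, 10, 5, 11, 9, 8, 6],
    "Youngstown": [17, 16, 12, 17, 24, 19, 17, 19],
    "S Utah":     [52, 41, 38, 29, 21, 25, 22, 20],
}

for team, r in ranks.items():
    print(team, "min", min(r), "median", statistics.median(r),
          "max", max(r), "spread", max(r) - min(r))

The spread column is the point: 28 spots for NAU, 32 for S Utah, 12 even in the "good" YSU case.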

Not sure how the committee came up with their field, but simply using any of the computer rankings alone would be a poor process. Each committee member, fan, writer, computer wonk, etc., simply needs to use their own noggin to determine the validity of each of these computer polls. But in the end, none of this matters, since the beauty of the FCS is that it will be determined by the only method that really matters: on the field. Any fan of an at-large bubble team who is upset their team got left out needs to get over it. Their team blew it the day they lost one too many games.

dbackjon
November 27th, 2013, 11:08 AM
Mr. D-Back Jon,

I'm interested in your reasons for drawing your stated conclusion.

I'm not entirely sure I understand your post, but I suspect by "lack of games overall," you mean the omission of sub-DI games. Yes?

What do you mean by "decided lack of inter-conference games"? I am just not picking up what you're putting down here.

I'm truly curious. Not trying to push any buttons. Any insight you can offer is appreciated.


Lack of games: 11 or 12 games is a very small sample size to try to rank 120+ teams.
Lack of inter-conference games: A computer could rank the teams within a conference, since they mostly play each other - a pretty complete sample. But trying to rank the teams in the MVFC relative to teams in the CAA becomes difficult because there were no games at all between the two leagues.

With most leagues having 8 conference games, and usually playing either an FBS team, a DII team, or both, there are very few match-ups between conferences. So rating them in relation to each other is very subjective and error-prone.
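To see how thin the evidence is, just count the cross-conference games. A sketch in Python (the schedule below is made up; a real season's game list would go in its place):

# Count how many games connect each pair of conferences
# (hypothetical teams and schedule, for illustration only).
from collections import Counter

conference = {"NDSU": "MVFC", "UNI": "MVFC", "Towson": "CAA",
              "Maine": "CAA", "EWU": "Big Sky", "NAU": "Big Sky"}
games = [("NDSU", "UNI"), ("Towson", "Maine"),
         ("EWU", "NAU"), ("NAU", "UNI")]

cross = Counter(tuple(sorted((conference[a], conference[b])))
                for a, b in games if conference[a] != conference[b])
print(cross)  # Counter({('Big Sky', 'MVFC'): 1}) -- one data point for two whole leagues

When a pair of leagues shows one game, or zero, everything a computer says about their relative strength hangs on that.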

Green26
November 27th, 2013, 11:36 AM
Lack of games: 11 or 12 games is a very small sample size to try to rank 120+ teams.
Lack of inter-conference games: A computer could rank the teams within a conference, since they mostly play each other - a pretty complete sample. But trying to rank the teams in the MVFC relative to teams in the CAA becomes difficult because there were no games at all between the two leagues.

With most leagues having 8 conference games, and usually playing either an FBS team, a DII team, or both, there are very few match-ups between conferences. So rating them in relation to each other is very subjective and error-prone.

I don't agree with much of what you said (to the extent I understood it), but given the reality of teams only playing 11 or 12 games, playing within conferences, all 11 conferences playing in the playoffs, etc., what do you suggest? Teams have to be compared to teams in other conferences, and conferences to conferences, in order to select the playoff field.

Personally, I don't think playoff selection is that hard, except right on the edge of the bubble. If it were up to me, I would establish slightly different stated criteria. I would look at all of the data. Records, MOV, non-conference, FBS games, D-II games to a much lesser extent, polls, computer ratings/rankings especially the GPI, change the new SRS to include an element of MOV, look at how teams played later in the season, etc. I would scrutinize top non-auto bid teams from the weaker conferences, to see if they qualify or not. For the most part, I believe too many of these teams get in at the expense of bubble teams from the power conferences, who often beat each other up. Generally, I don't think the AD's on the selection committee are as knowledgeable as many of the fans (just because someone is an AD doesn't make them a football expert/junkie, and my guess is that AD's from schools that almost never get to the playoffs don't follow the teams like other AD's and internet junkies do). Heck, I would grab a handful of astute posters from the internet, and believe we could do a better job than the committee did this year, and has done in certain other years.

If money was not a factor, but I know it is, I would seed all teams--like was done or essentially done prior to the change after 9/11. Since money is a factor, I believe the current system is fine. However, it seems like the committee usually has some pairing and bracket glitches, which could be avoided.

SIUSalukiFan
November 27th, 2013, 01:51 PM
I don't agree with much of what you said (to the extent I understood it), but given the reality of teams only playing 11 or 12 games, playing within conferences, all 11 conferences playing in the playoffs, etc., what do you suggest? Teams have to be compared to teams in other conferences, and conferences to conferences, in order to select the playoff field.

Personally, I don't think playoff selection is that hard, except right on the edge of the bubble. If it were up to me, I would establish slightly different stated criteria. I would look at all of the data. Records, MOV, non-conference, FBS games, D-II games to a much lesser extent, polls, computer ratings/rankings especially the GPI, change the new SRS to include an element of MOV, look at how teams played later in the season, etc. I would scrutinize top non-auto bid teams from the weaker conferences, to see if they qualify or not. For the most part, I believe too many of these teams get in at the expense of bubble teams from the power conferences, who often beat each other up. Generally, I don't think the AD's on the selection committee are as knowledgeable as many of the fans (just because someone is an AD doesn't make them a football expert/junkie, and my guess is that AD's from schools that almost never get to the playoffs don't follow the teams like other AD's and internet junkies do). Heck, I would grab a handful of astute posters from the internet, and believe we could do a better job than the committee did this year, and has done in certain other years.

If money was not a factor, but I know it is, I would seed all teams--like was done or essentially done prior to the change after 9/11. Since money is a factor, I believe the current system is fine. However, it seems like the committee usually has some pairing and bracket glitches, which could be avoided.

Great points.

Don't half-ass the final at-large bids. If you go through all of the data and the evidence shows Jacksonville State or South Carolina State or Southern Utah clearly has a leg up on Youngstown State, William & Mary and Chattanooga then great. Put them in the field. But, it only takes someone with a brain and internet access to learn that nine wins in the MEAC or OVC isn't the same as eight wins in the CAA, Big Sky or MVFC.

This isn't rocket science.

ElCid
November 27th, 2013, 03:45 PM
Lack of games: 11 or 12 games is a very small sample size to try to rank 120+ teams.
Lack of inter-conference games: A computer could rank the teams within a conference, since they mostly play each other - a pretty complete sample. But trying to rank the teams in the MVFC relative to teams in the CAA becomes difficult because there were no games at all between the two leagues.

With most leagues having 8 conference games, and usually playing either an FBS team, a DII team, or both, there are very few match-ups between conferences. So rating them in relation to each other is very subjective and error-prone.

I understand this completely, and this is the primary reason the Ivy continues to be ranked as high as it is. They only play a very small OOC schedule each year, and the number of conferences they play is small as well. How can you possibly rank them against the MVFC, SOCON, or OVC when they did not play any of those teams? You have to go through a couple of degrees of separation in order to compare them. For instance, Yale played Cal Poly. Cal Poly played Montana, and Montana played App St. Yeah, we finally have a match connecting them to the SOCON. How about the OVC? Well, E Illinois played Ill St, who played Ball St, who played Army, who played Temple, who played Fordham, who played Lehigh, who played Princeton. Wow, that one hurt. Talk about the transitive property of determining strength! It is obviously not this bad for most conferences, but it can still be too small of a sample to possess any real accuracy.
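That chain-hunting is just a shortest-path search on the schedule graph, by the way. A quick Python sketch, with a toy schedule standing in for the real one:

# Shortest chain of games connecting two teams (toy schedule, not 2013's).
from collections import deque

games = [("Yale", "Cal Poly"), ("Cal Poly", "Montana"),
         ("Montana", "App St"), ("E Illinois", "Ill St"),
         ("Ill St", "Ball St")]

graph = {}
for a, b in games:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def chain(start, goal):
    # breadth-first search for the shortest game chain
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

print(chain("Yale", "App St"))  # ['Yale', 'Cal Poly', 'Montana', 'App St']

The longer the chain, the less the comparison means.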


I don't agree with much of what you said (to the extent I understood it), but given the reality of teams only playing 11 or 12 games, and playing within conferences, all 11 conferences plahing in the playoffs, etc., what do you suggest? Teams have to be compared to teams in other conferences, and conferences to conferences, in order to select the playoff field.

Yes, of course you have to compare the different conferences. Using a computer is not necessarily the answer, though. Maybe I misunderstood, but I am not sure why you disagreed with what he said. Once you understand that the sample size is too small, or that the algorithm used can be just as faulty as someone's judgment, it becomes silly to argue that the computer can do it better. It just comes with more authority, since this is what the computer says. I am not a computer basher any more than I am a basher of human polls, but I will use each, at least in my own assessment, for what they are: just data points and not an authoritative rank.

thebootfitter
November 27th, 2013, 04:02 PM
I am not a computer basher any more than I am a basher of human polls, but I will use each, at least in my own assessment, for what they are: just data points and not an authoritative rank.
I think this is a pretty healthy attitude toward both polls and computer rating systems.

underdawg
November 27th, 2013, 04:38 PM
Let's see if talking a little slower can help: the GPI has been used since 2008 to pick at-large teams. It is a combination of SEVERAL computer polls, not just one--an average. You can keep chirping like a starling about how bad computer polls are, but I say an average of many of them, AT THE END OF A 12-GAME SEASON, is pretty darn accurate--the MVFC got screwed.

lydiabixby
November 27th, 2013, 04:56 PM
1. NDSU

2. EIU

3. EWU

4. SELa

5. Towson

7T. SDSU

7T. McNeese

9. Maine

10. NAU

11. Coastal Carolina

12T. UNI

12T. YSU

14. SIU Salukis

15. Harvard

16. Fordham

17. Bethune-Cookman

18. TSU

19. NHU

20. Villanova

21. SHSU

22. JSU

23. Princeton

24. W&M

25. Old Dominion


Conference Power Rankings

1. MVFC (27.11)

2. CAA (34.23)

3. Southland (35.23)

4. OVC (37.94)

5. Big Sky (43.65)

Pray tell, what school is number 19, NHU?

Wallace
November 28th, 2013, 06:47 AM
... E Illinois played Ill St, who played Ball St, who played Army, who played Temple, who played Fordham, who played Lehigh, who played Princeton...

Great example of why computers can track games much better than human voters. I don't agree that computer ratings are equally as biased as human rankings, though. In fact, I doubt they are biased at all; every team gets treated the same. Reminder that the GPI uses HUMAN POLLS along with computer ratings, W-L and MOV. Every statistician agrees that enough games are played in the FCS to link all the teams together. So throw those false claims in this thread out the window. Hopefully next year the GPI will throw out the SAG rating, which does not correctly measure the FCS. All the committee members saw the GPI, but they want to believe they know better and wanted their own single rating to be influential this year. So we got one of the worst selections since the GPI began. I agree with underdawg that relying on a single computer rating is madness.

underdawg
November 28th, 2013, 08:42 AM
Well anyway, screwed or not I'm ready for Spring practice!

Green26
November 28th, 2013, 10:21 AM
Great points.

Don't half-ass the final at-large bids. If you go through all of the data and the evidence shows Jacksonville State or South Carolina State or Southern Utah clearly has a leg up on Youngstown State, William & Mary and Chattanooga then great. Put them in the field. But, it only takes someone with a brain and internet access to learn that nine wins in the MEAC or OVC isn't the same as eight wins in the CAA, Big Sky or MVFC.

This isn't rocket science.

Yup, you and I are on the same page. Again, the committee doesn't have enough football followers/junkies to do the bubble analysis when they get together that last weekend. While they have conference calls weekly prior to the last weekend, much of the work, especially the bubble analysis, has to be done on that last Saturday afternoon and evening. AGS ought to get organized and feed the committee one or more group analyses in the early evening on that Saturday night. Only half joking.

ElCid
November 28th, 2013, 10:36 AM
Great example of why computers can track games much better than human voters. I don't agree that computer ratings are equally as biased as human rankings, though. In fact, I doubt they are biased at all; every team gets treated the same. Reminder that the GPI uses HUMAN POLLS along with computer ratings, W-L and MOV. Every statistician agrees that enough games are played in the FCS to link all the teams together. So throw those false claims in this thread out the window. Hopefully next year the GPI will throw out the SAG rating, which does not correctly measure the FCS. All the committee members saw the GPI, but they want to believe they know better and wanted their own single rating to be influential this year. So we got one of the worst selections since the GPI began. I agree with underdawg that relying on a single computer rating is madness.

Bias was probably a bad description, at least in how it also refers to human perception. But the way in which the computer determines strength may not necessarily be valid. What if one computer program uses 3 points for home advantage and another uses 4? What if a team hangs 60 on a team--is there any diminishing return on points? When a team is down 50 and has purposely put in the third string to get some reps, and the other team keeps scoring, does it get penalized, or does the other side get credit? Do certain conferences get a higher rating than others, or is it absolute? If so, why? What are the starting ratings for the season? What weight do they have once all teams are connected? This is what I meant by bias. Whoever sets those parameters has given the system a bias that may or may not be correct. Of course any computer program, once created, will do exactly what it is programmed to do. But what are the parameters it will compute? I would like to see how each computer rating handles these and many other "conditions" and "presumptions".

Green26
November 28th, 2013, 10:36 AM
I understand this completely, and this is the primary reason the Ivy continues to be ranked as high as it is. They only play a very small OOC schedule each year, and the number of conferences they play is small as well. How can you possibly rank them against the MVFC, SOCON, or OVC when they did not play any of those teams? You have to go through a couple of degrees of separation in order to compare them. For instance, Yale played Cal Poly. Cal Poly played Montana, and Montana played App St. Yeah, we finally have a match connecting them to the SOCON. How about the OVC? Well, E Illinois played Ill St, who played Ball St, who played Army, who played Temple, who played Fordham, who played Lehigh, who played Princeton. Wow, that one hurt. Talk about the transitive property of determining strength! It is obviously not this bad for most conferences, but it can still be too small of a sample to possess any real accuracy.

Yes, of course you have to compare the different conferences. Using a computer is not necessarily the answer, though. Maybe I misunderstood, but I am not sure why you disagreed with what he said. Once you understand that the sample size is too small, or that the algorithm used can be just as faulty as someone's judgment, it becomes silly to argue that the computer can do it better. It just comes with more authority, since this is what the computer says. I am not a computer basher any more than I am a basher of human polls, but I will use each, at least in my own assessment, for what they are: just data points and not an authoritative rank.

As I said initially, perhaps I didn't understand your post. I don't agree that 11 or 12 games is too small of a sample to rate and rank teams. I don't agree that there is insufficient information to compare and rank teams in different conferences. I couldn't tell if you were talking about computers, polls or human comparisons. Again, you look at all the data, including computer rankings, polls, records and stats, and hopefully you've watched enough games in person and on TV (both this year and in prior years), and then you make the decisions. The general quality of a conference doesn't change overnight. Thus, discerning general conference strength is accumulated knowledge. And even if you believe the comparative data is not enough, it is what it is, and decisions have to be made based on what it is. My apologies if I didn't understand your earlier post.

ElCid
November 28th, 2013, 10:44 AM
As I said initially, perhaps I didn't understand your post. I don't agree that 11 or 12 games is too small of a sample to rate and rank teams. I don't agree that there is insufficient information to compare and rank teams in different conferences. I couldn't tell if you were talking about computers, polls or human comparisons. Again, you look at all the data, including computer rankings, polls, records and stats, and hopefully you've watched enough games in person and on TV (both this year and in prior years), and then you make the decisions. The general quality of a conference doesn't change overnight. Thus, discerning general conference strength is accumulated knowledge. And even if you believe the comparative data is not enough, it is what it is, and decisions have to be made based on what it is. My apologies if I didn't understand your earlier post.

Actually, it was part of dbackjon's post as well. It was apparently me that misunderstood. I thought you were advocating just using a computer rating to compare. I do use them, but as I mentioned, just as one of many data points. I do everything you mentioned as well. Got to do it somehow. I think the point was that since there are so few games between some conferences during the season, the computer becomes suspect with such a small sample. I feel much more comfortable with the in-conference computer ratings, since the volume of games is so great. Not so much between conferences, though. But still, it is better than nothing at all.

Green26
November 28th, 2013, 10:47 AM
Bias was probably a bad description, at least in how it also refers to human perception. But the way in which the computer determines strength may not necessarily be valid. What if one computer program uses 3 points for home advantage and another uses 4? What if a team hangs 60 on a team--is there any diminishing return on points? When a team is down 50 and has purposely put in the third string to get some reps, and the other team keeps scoring, does it get penalized, or does the other side get credit? Do certain conferences get a higher rating than others, or is it absolute? If so, why? What are the starting ratings for the season? What weight do they have once all teams are connected? This is what I meant by bias. Whoever sets those parameters has given the system a bias that may or may not be correct. Of course any computer program, once created, will do exactly what it is programmed to do. But what are the parameters it will compute? I would like to see how each computer rating handles these and many other "conditions" and "presumptions".

This is true. However, when 15 or so computer ratings--all done by smart people who are trying to come up with the best system to rate and compare teams--are compared and, for lack of a better term, averaged, they provide a meaningful data point, in my view. The GPI does this, and also factors in the polls. That's why the GPI has been such a good indicator of playoff selection, and probably of how teams have done in the playoffs. While I have not tried to do the analysis, my suspicion is that some of the glitches in playoff selection in the past have come from the committee deviating from this data point. 10-2 teams from weaker conferences may or may not be very good. My view is that they usually aren't very strong. However, there are exceptions. Colgate proved to be an exception several years ago. From what I understand from people who have followed Fordham, Fordham may be another exception. I have not followed them closely, so I don't know. I still have my doubts about B-C and Coastal, but haven't followed them closely, so I don't know. Same with the no. 2 and no. 3 Ohio Valley teams. In any event, I highly doubt that any, or certainly many, of those teams are as good as several of the MV teams that didn't get selected.

Green26
November 28th, 2013, 10:52 AM
Actually, it was part of dbackjon's post as well. It was apparently me that misunderstood. I thought you were advocating just using a computer rating to compare. I do use them, but as I mentioned, just as one of many data points. I do everything you mentioned as well. Got to do it somehow. I think the point was that since there are so few games between some conferences during the season, the computer becomes suspect with such a small sample. I feel much more comfortable with the in-conference computer ratings, since the volume of games is so great. Not so much between conferences, though. But still, it is better than nothing at all.

Why do you need conference computer rankings? The teams all play each other, except in the very large conferences. Anyone who follows a conference closely and has good knowledge of the game of football should be able to rank and compare conference teams--and doesn't need a computer. The value of computer ratings is that they do compare teams between conferences. Sure, the assumptions influence the results, but, again, the people setting up the computer ratings are trying to accurately compare teams and predict results. The assumptions may not be quite right, and may need adjusting each year, but they are not biased.

ElCid
November 28th, 2013, 01:51 PM
Why do you need conference computer rankings? The teams all play each other, except in the very large conferences. Anyone who follows a conference closely and has good knowledge of the game of football should be able to rank and compare conference teams--and doesn't need a computer. The value of computer ratings is that they do compare teams between conferences. Sure, the assumptions influence the results, but, again, the people setting up the computer ratings are trying to accurately compare teams and predict results. The assumptions may not be quite right, and may need adjusting each year, but they are not biased.

Usually yes, the winner is apparent, but the last two years the SOCON has had multiple-way ties for first. Kind of nice to see if there is any difference besides the convoluted tie-breaking rules the SOCON uses. xrotatehxxlolx

The problem is how they are adjusted. Again, biased may not be the correct description, but someone makes the decision on what to adjust. Is it correct? Probably. But the better question is whether it is independent (not someone who is a fan of a certain conference or team). Not saying it is not independent now, but if that question is not asked and verified periodically, I would raise my eyebrows a bit.

Green26
November 28th, 2013, 03:26 PM
Usually yes, the winner is apparent, but the last two years the SOCON has had multiple-way ties for first. Kind of nice to see if there is any difference besides the convoluted tie-breaking rules the SOCON uses. xrotatehxxlolx

The problem is how they are adjusted. Again, biased may not be the correct description, but someone makes the decision on what to adjust. Is it correct? Probably. But the better question is whether it is independent (not someone who is a fan of a certain conference or team). Not saying it is not independent now, but if that question is not asked and verified periodically, I would raise my eyebrows a bit.

The computer ratings in the GPI weren't created by (biased) fans of a particular school.

Agreed that it's important not to just look at the conferences' auto-bid formulas, but, again, anyone following a conference can make an informed judgment as to the quality and ranking of the conference teams. Even those people may disagree, but if you look at the views of a number of knowledgeable people, you will likely see a consensus, as well as start to figure out why those people may not agree. Don't think you need a computer to do that.

ElCid
November 28th, 2013, 05:14 PM
I have to admit, I kind of knew a little about the GPI, but not the entire story of its methodology. Just never was interested in it. I have to say that the more research I have done this afternoon, the more concerns I have. I had looked at it previously just for the rankings, but never realized how it determined its ranking until now. That it simply averages a limited number of other actual computer polls to determine a ranking makes it interesting. Not necessarily bad, but my concern still stands. The GPI may be totally absent of any bias because it is simply averaging other polls. But the methodologies of the polls it uses and averages may all be different. I guess you can hope that they average out any inconsistencies across the board, but why stop at 7? There are at least 20+ computer polls out there. Why not use them all? These 7 appear to be no more or less accurate than the others. And a couple of them (judging from their websites) look just like some guy cranking out ratings in his spare time with no more qualification than being an avid fan with a little statistical and Excel knowledge. I jest a little there. But I have no information on the backgrounds, skill, or knowledge of the individuals who create these. The more I think about it, the GPI may be skewed in that it only uses these 7 polls. Then again, it may come out exactly the same if you average all 20+ polls; they are probably close. I am not taking the time to check, and I don't necessarily think they are off; I just mention this as a possibility.

Also, when I went to each of the rating sites that are included in the GPI, there was usually nothing on the actual site describing what methodology was used, besides Sagarin's. At least one site spoke about the problems of rating football teams and described its methodology. It was the Ashburn poll. It actually had a very good description of the problems that face all computer polls and explained how it "compensated" for them. There are some telling points in this piece from the site.

"In all of our work, the closest thing we have to a subjective input is the relative weighting given the two components of our Hybrid Rating. This single knob has been adjusted to give a decent match to Massey. Some arbitrary combination was unavoidable to reflect a consensus with similarly arbitrary numbers of both predictive and retrodictive components."

Also, it appears (I have not analyzed it entirely yet, and it was dated a bit) that home field advantage may not even be considered! I am not saying that the Ash poll is inaccurate, just that it might use an arbitrary input; but at least it would be consistently applied. It may even be correctly applied, but whenever you have any subjective input, you had better have a darn good rationale why. How many of these other polls also have a subjective input?
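For what it's worth, that "single knob" they describe is just a weighted blend of the two components. A sketch of the idea in Python, with invented component scores:

# Hybrid rating: one weight blends a predictive and a retrodictive
# component (the scores and the weight here are invented for illustration).
def hybrid(predictive, retrodictive, w=0.6):
    # w is the single subjective knob the Ashburn write-up describes
    return w * predictive + (1 - w) * retrodictive

print(round(hybrid(85.0, 78.0), 1))         # 82.2 with the default weight
print(round(hybrid(85.0, 78.0, w=0.3), 1))  # 80.1 -- same inputs, different blend

Move the knob and the same two inputs produce a different rating--which is exactly the kind of subjectivity I am talking about.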

Just food for thought.

Green26
November 28th, 2013, 08:32 PM
I have to admit, I kind of knew a little about the GPI, but not the entire story of its methodology. Just never was interested in it. I have to say that the more research I have done this afternoon, the more concerns I have. I had looked at it previously just for the rankings, but never realized how it determined its ranking until now. That it simply averages a limited number of other actual computer polls to determine a ranking makes it interesting. Not necessarily bad, but my concern still stands. The GPI may be totally absent of any bias because it is simply averaging other polls. But the methodologies of the polls it uses and averages may all be different. I guess you can hope that they average out any inconsistencies across the board, but why stop at 7? There are at least 20+ computer polls out there. Why not use them all? These 7 appear to be no more or less accurate than the others. And a couple of them (judging from their websites) look just like some guy cranking out ratings in his spare time with no more qualification than being an avid fan with a little statistical and Excel knowledge. I jest a little there. But I have no information on the backgrounds, skill, or knowledge of the individuals who create these. The more I think about it, the GPI may be skewed in that it only uses these 7 polls. Then again, it may come out exactly the same if you average all 20+ polls; they are probably close. I am not taking the time to check, and I don't necessarily think they are off; I just mention this as a possibility.

Also, when I went to each of the rating sites that are included in the GPI, there was usually nothing on the actual site describing what methodology was used, besides Sagarin's. At least one site spoke about the problems of rating football teams and described its methodology. It was the Ashburn poll. It actually had a very good description of the problems that face all computer polls and explained how it "compensated" for them. There are some telling points in this piece from the site.

"In all of our work, the closest thing we have to a subjective input is the relative weighting given the two components of our Hybrid Rating. This single knob has been adjusted to give a decent match to Massey. Some arbitrary combination was unavoidable to reflect a consensus with similarly arbitrary numbers of both predictive and retrodictive components."

Also, it appears (I have not analyzed it entirely yet, and it was dated a bit) that home field advantage may not even be considered! I am not saying that the Ash poll is inaccurate, just that it might use an arbitrary input; but at least it would be consistently applied. It may even be correctly applied, but whenever you have any subjective input, you had better have a darn good rationale why. How many of these other polls also have a subjective input?

Just food for thought.

Look at this composite one. About 50 ratings, starting with the GPI. http://www.masseyratings.com/cf/compare.htm

underdawg
November 28th, 2013, 08:55 PM
I've seen this poll too--as has anyone who follows ratings closely--but you will never convince elcid no matter how powerful your evidence--I think that is evident. SIU and the other screwed teams will just have to schedule down, like elcid and the morons at the committee apparently want. But my guess is they will then use computer ratings to say our SOS is wanting----it's hopeless.

skinny_uncle
November 28th, 2013, 08:59 PM
Look at this composite one. About 50 ratings, starting with the GPI. http://www.masseyratings.com/cf/compare.htm

Very few have UNH ranked ahead of Youngstown. Just sayin'.

Lehigh Football Nation
November 28th, 2013, 09:58 PM
I talked about all this earlier in the year. A long read, but it hits all these points.

http://lehighfootballnation.blogspot.com/2013/10/the-lack-of-data-in-rating-fcs-top-25.html

caribbeanhen
November 28th, 2013, 10:44 PM
So ElCid, since you appear to be such a smart guy, how many knots would an aircraft have to fly due west on the equator to keep the sun in the same relative position?

ElCid
November 28th, 2013, 11:01 PM
Look at this composite one. About 50 ratings, starting with the GPI. http://www.masseyratings.com/cf/compare.htm

Oh I look at that one all the time.

ElCid
November 28th, 2013, 11:04 PM
I've seen this poll too--as has anyone who follows ratings closely--but you will never convince elcid no matter how powerful your evidence--I think that is evident. SIU and the other screwed teams will just have to schedule down, like elcid and the morons at the committee apparently want. But my guess is they will then use computer ratings to say our SOS is wanting----it's hopeless.

Why the hate? You simply inferred it. I never said that, you did. Get a grip. I simply questioned the methodology of the polls. Sorry you can't tell the difference.

ElCid
November 28th, 2013, 11:54 PM
So ElCid, since you appear to be such a smart guy, how many knots would an aircraft have to fly due west on the equator to keep the sun in the same relative position?

Depends on the type of airspeed you are talking about. Calibrated Airspeed, Indicated Airspeed, True Airspeed, or Groundspeed (GS)? I have used all of them. I would go with Groundspeed, since all "airspeeds" are meaningless when talking about the speed necessary to match the sun's position. Since the equator is 24,901 miles'ish, it would be about 1037 miles an hour GS, but since we are talking knots, and a knot is about equivalent to 1.15 miles an hour'ish, 902 kts GS would be about correct to match the position on the earth to the sun. I did not cipher it exactly, but just rounded it off.

But you did not ask if it was an African or European Swallow?
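For anyone who wants to check the arithmetic, rounding as I did:

# Ground speed needed to keep pace with the sun at the equator.
equator_miles = 24901.0
mph = equator_miles / 24      # one lap per 24-hour rotation
knots = mph / 1.15078         # statute miles per hour -> knots
print(f"{mph:.1f} mph = {knots:.1f} kt")  # 1037.5 mph = 901.6 kt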

Green26
November 29th, 2013, 12:02 AM
Oh I look at that one all the time.

Okay, then what's your comment or complaint?

ElCid
November 29th, 2013, 12:23 AM
Okay, then what's your comment or complaint?

I never said I had a complaint. I actually think that the "so-called" offended teams "may" have been slighted. I have never said otherwise, ever. I simply question that the methodology of the computer polls may be subject to differing weighting by those who created them. That's all. Whether they are an average of multiple polls, like the GPI, or stand alone like Massey or Sagarin or any of the other multiple polls. Without clear-cut transparency as to the methodology used, they will always, at least to me, be at least a little suspect. I still look at them and they appear fine, mostly, but when you are talking about slim margins in some of the rankings, I will never use them as the sole source for determining a ranking, especially when close. At least until they have full disclosure as to how they came up with the ranking. I guess I did not make that clear. You have to admit that every computer poll uses a slightly different methodology. Those differences could result in small differences in rank and possibly, if used for such purposes, result in a playoff spot or not. Unless you have absolute faith in the accuracy of all these polls, this should be a question everyone is asking. Well, at least I think so.

Green26
November 29th, 2013, 01:05 AM
I never said I had a complaint. I actually think that the "so-called" offended teams "may" have been slighted. I have never said otherwise, ever. I simply question that the methodology of the computer polls may be subject to differing weighting by those who created them. That's all. Whether they are an average of multiple polls, like the GPI, or stand alone like Massey or Sagarin or any of the other multiple polls. Without clear-cut transparency as to the methodology used, they will always, at least to me, be at least a little suspect. I still look at them and they appear fine, mostly, but when you are talking about slim margins in some of the rankings, I will never use them as the sole source for determining a ranking, especially when close. At least until they have full disclosure as to how they came up with the ranking. I guess I did not make that clear. You have to admit that every computer poll uses a slightly different methodology. Those differences could result in small differences in rank and possibly, if used for such purposes, result in a playoff spot or not. Unless you have absolute faith in the accuracy of all these polls, this should be a question everyone is asking. Well, at least I think so.

I was meaning to ask about your comment or complaint about the polls. Yes, I already said that each computer rating is different. However, none are biased; all are trying to accurately rank or rate teams, and predict outcomes. Looking at almost 50 of them is helpful, and there are clear trends, as well as outliers.

I must admit that when people talk about transparency, in this or almost any situation, my reaction is that they really don't know what they are talking about. Sorry, no offense meant to you, but transparency talk makes me chuckle.

Wallace
November 29th, 2013, 05:34 AM
LOL at LFN injecting advertising in this thread with a pretty clueless ranking missive he made to make money.
I have to admit, I knew a little about the GPI, but not the entire story of its methodology; I was just never interested in it. I have to say that the more research I have done this afternoon, the more concerns I have. I had previously looked at it just for the rankings, and never realized until now how it determines them. That it simply averages a limited number of other computer polls to determine a ranking is interesting. Not necessarily bad, but my concern still stands. The GPI itself may be totally free of bias, because it is simply averaging other polls, but the methodologies of the polls it averages may all differ. You can hope they average out any inconsistencies across the board, but why stop at 7? There are at least 20+ computer polls out there; why not use them all? These 7 appear no more or less accurate than the others. And a couple of them (judging from their web sites) look like some guy cranking out ratings in his spare time, with no more qualification than being an avid fan with a little statistical and Excel knowledge. I jest a little there, but I have no information on the backgrounds, skill, or knowledge of the individuals who create these. The more I think about it, the GPI may be skewed in that it uses only these 7 polls. Then again, it might come out exactly the same if you averaged all 20+ polls; they are probably close. I am not taking the time to check, and I don't necessarily think they are off; I just mention it as a possibility.

Also, when I went to each of the rating sites that are included in the GPI, there was usually nothing on the actual sites describing what methodology was used, besides Sagarin...

Also, it appears (I have not analyzed it entirely yet, and the material was a bit dated) that home field advantage may not even be considered! I am not saying that the Ash poll is inaccurate, just that it might use an arbitrary input. At least it would be consistently applied, and it may even be correctly applied, but whenever you have any subjective input, you had better have a darn good rationale for it. How many of these other polls also have a subjective input?...

Kenneth Massey - GPI advisor - always suggests using more computer RATINGS (they are not polls), but the ones used are the best. They have changed over time as others are created that perform better. I always call the "compare" website he runs the "dirty laundry list" of computer ratings. hahaha Anyway, under his direction the GPI has always been at the top, in spite of its using lowly ranked human polls. All of the rating sites the NCAA used before submitted methodology papers. Maybe their websites do not all reflect them, but these guys are stats people, not marketing folks. :) Yes, the GPI is not anything incredibly difficult, and it was completely transparent from day one about how the rankings were created. The polls have not been this transparent. I know this for a fact, having done the Sports Network, Coaches, and AGS polls myself in the past. It's just what it is.
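The averaging itself really is that simple. Here is a minimal sketch in Python of a GPI-style index; the systems and rankings are made up purely to show the mechanics, nothing below is the GPI's actual data:

# GPI-style consensus: average each team's rank across several
# computer ratings, then sort by that average. All data invented.
rankings = {
    "System A": ["Team 1", "Team 2", "Team 3", "Team 4"],
    "System B": ["Team 1", "Team 3", "Team 2", "Team 4"],
    "System C": ["Team 2", "Team 1", "Team 4", "Team 3"],
}

teams = rankings["System A"]  # every system ranks the same four teams here
avg_rank = {
    team: sum(order.index(team) + 1 for order in rankings.values()) / len(rankings)
    for team in teams
}
for team, avg in sorted(avg_rank.items(), key=lambda kv: kv[1]):
    print(f"{avg:4.2f}  {team}")
# Team 1 (1.33), Team 2 (2.00), Team 3 (3.00), Team 4 (3.67)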

underdawg
November 29th, 2013, 08:14 AM
Why the hate? You simply inferred it. I never said that, you did. Get a grip. I simply questioned the methodology of the polls. Sorry you can't tell the difference.

You mistake "hate" for resignation and disgust---it was a big mistake by the Committee, but nothing can be done about it now.

Lehigh Football Nation
November 29th, 2013, 08:34 AM
Thanks to LFN for posting a thoughtful, reflective post on the pluses and minuses of both human and computer polls.

Fixed it. You're welcome.

The lack-of-data-points argument concerning FCS vs. FBS and FCS vs. lower-division games is one that won't go away, and it affects every rating system, from those that make up the index to the SRS. The reason the SRS wasn't published until the last weekend of the season (my guess) is that the data looked so whacked out that the FCS subcommittee didn't want to be on the hook for how it looked. In truth, though, this was true of all the computer formulas: as they got more data (i.e., more games played), the better they looked, but even then every one of them, including the "best ones", had issues.

One thing I didn't mention in the post is how the computer formulas all share some similar aspects, which end up overweighting certain things. Sag, for example, overweights FBS games in its strength of schedule, and many of the others use a similar system for weighting SoS, which is why I emphasized it so much in my post. You put seven Sag-like ratings in your index and - shocker! - you get all these teams with an overrated SoS. Averaging doesn't moderate **** in that case.

You could maybe devise an index that truly attempts to moderate all these factors out, but nobody has done that yet, and it's extremely likely that it would still be subject to the fact that all these formulas require data, and inter-FCS matchups don't offer enough of it.
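The "averaging doesn't moderate it" point is easy to demonstrate with a toy simulation; everything below is invented, not any real rating. Averaging many systems shrinks their independent noise but leaves untouched any error they all share, like a common SoS overweighting:

# Toy demo: the average of several ratings cancels independent noise
# but not a bias common to all of them. All numbers invented.
import random

random.seed(1)
true_strength = 10.0
shared_bias = 3.0     # error baked into every system the same way
n_systems = 7

ratings = [true_strength + shared_bias + random.gauss(0, 2)
           for _ in range(n_systems)]
average = sum(ratings) / n_systems
print(f"average rating: {average:.2f} vs. true value: {true_strength}")
# The average lands near 13, not 10: the noise averages out, the bias doesn't.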

ElCid
November 29th, 2013, 09:43 AM
I was meaning to ask about your comment or complaint about the polls. Yes, I already said that each computer rating is different. However, none are biased; all are trying to accurately rank or rate teams, and predict outcomes. Looking at almost 50 of them is helpful, and there are clear trends, as well as outliers.

I must admit that when people talk about transparency, in this or almost any situation, my reaction is that they really don't know what they are talking about. Sorry, no offense meant to you, but transparency talk makes me chuckle.

I assume that anyone who does not reveal how their rating system works must be trying to hide something: either some shortcoming, or a methodology that is questionable.

And I think it is actually funny how people think about the transparency of any process. I have just always looked at it the other way: people who do not look into the dirty details of how something works do not know what they are talking about, since they do not even know enough to ask. Seriously, and I am not saying that applies to you; it is just a different viewpoint.

ElCid
November 29th, 2013, 09:48 AM
LOL at LFN injecting advertising in this thread with a pretty clueless ranking missive he made to make money.

Kenneth Massey - GPI advisor - always suggests using more computer RATINGS (they are not polls), but the ones used are the best. They have changed over time as others are created that perform better. I always call the "compare" website he runs the "dirty laundry list" of computer ratings. hahaha Anyway, under his direction the GPI has always been at the top, in spite of its using lowly ranked human polls. All of the rating sites the NCAA used before submitted methodology papers. Maybe their websites do not all reflect them, but these guys are stats people, not marketing folks. :) Yes, the GPI is not anything incredibly difficult, and it was completely transparent from day one about how the rankings were created. The polls have not been this transparent. I know this for a fact, having done the Sports Network, Coaches, and AGS polls myself in the past. It's just what it is.

I have always believed that the human polls should be completely open; all ballots should be public. How would that go over on AGS? Some people voluntarily show theirs, but what if it were mandatory? Maybe we could have one week next year where this is done on AGS, just for kicks and grins. I am sure it would be a thread to rival the Chattownmocs thread.

Green26
November 29th, 2013, 11:04 AM
I assume that anyone who does not reveal how their rating system works must be trying to hide something: either some shortcoming, or a methodology that is questionable.

And I think it is actually funny how people think about the transparency of any process. I have just always looked at it the other way: people who do not look into the dirty details of how something works do not know what they are talking about, since they do not even know enough to ask. Seriously, and I am not saying that applies to you; it is just a different viewpoint.

As someone posted above, the football ratings people are all numbers/stats people, not marketing people or biased fans. I agree that it is ideal to know what the methodologies are, but in football ratings, when dozens of them are available and used, the particular methodologies become less meaningful--unless they are just plain stupid. The outliers can be analyzed, if necessary, or just ignored. I believe in looking into details and doing one's homework, but at some point it is not a good use of time to keep digging into the details--and, in my view, knowing the methodologies and details of dozens of rating systems is not necessary or a good use of time/resources.

More on transparency. After a basic amount of information is made public or available, I find that many people wanting more transparency are either just using the term (without necessarily knowing what it means in the context) or looking for more details that they might be able to use or spin to attack the conclusion or decision. Transparency has become such a buzzword in so many areas.

Wallace
November 29th, 2013, 11:06 AM
... all these formulas require data, and inter-FCS matchups don't offer enough of it.

Again, you do not speak from a position of authority on the subject, so your POV is taken as such. Peace, Chuckles.

PS. I do agree on why the SRS was not published earlier... even at the end it was out of whack. Massey expressed his amazement that the NCAA would use the ultra-simple regressive system as the sole source.

Green26
November 29th, 2013, 11:08 AM
I have always believed that the human polls should be completely open; all ballots should be public. How would that go over on AGS? Some people voluntarily show theirs, but what if it were mandatory? Maybe we could have one week next year where this is done on AGS, just for kicks and grins. I am sure it would be a thread to rival the Chattownmocs thread.

I don't even bother to look at fan polls. They aren't meaningful to me. I don't participate in doing them either. They are about as meaningful as the silly little polls, on various subjects, that people create on message boards.

Green26
November 29th, 2013, 11:15 AM
Again, you do not speak from a position of authority on the subject, so your POV is taken as such. Peace, Chuckles.

PS. I do agree on why the SRS was not published earlier... even at the end it was out of whack.

The methodology of the ncaa's SRS is just plain dumb--if what I read is correct, i.e. that MOV isn't factored in. It makes no sense. The ncaa should be embarrassed for using it at all.
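For reference, the generic SRS idea is simple enough to sketch. This is my reading of the textbook version, not the NCAA's exact formula: a team's rating is its average game margin plus the average rating of its opponents, solved by iteration. The games below are invented; set use_mov=False to mimic a win/loss-only variant, where every margin collapses to +1 or -1:

# Generic SRS sketch: rating = avg margin + avg opponent rating,
# found by fixed-point iteration. Invented games for illustration.
games = [("A", "B", 21), ("B", "C", 3), ("A", "C", 7)]  # (team, opponent, margin)

def srs(games, use_mov=True, iters=200):
    sched = {}
    for t1, t2, margin in games:
        m = margin if use_mov else (1 if margin > 0 else -1)
        sched.setdefault(t1, []).append((t2, m))
        sched.setdefault(t2, []).append((t1, -m))
    ratings = {t: 0.0 for t in sched}
    for _ in range(iters):
        ratings = {t: sum(m + ratings[opp] for opp, m in gs) / len(gs)
                   for t, gs in sched.items()}
        mean = sum(ratings.values()) / len(ratings)          # re-center so the
        ratings = {t: r - mean for t, r in ratings.items()}  # iteration stays stable
    return ratings

print(srs(games))                   # margins count toward the rating
print(srs(games, use_mov=False))    # wins and losses only, MOV discarded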

Wallace
November 29th, 2013, 11:33 AM
The methodology of the ncaa's SRS is just plain dumb--if what I read is correct, i.e. that MOV isn't factored in. It makes no sense. The ncaa should be embarrassed for using it at all.

Informed opinions I have read state that a diminishing-return MOV system provides a good result, and I would agree, as a viewer. Just a general observation. The GPI people argued this to the NCAA, but they only want non-MOV because people think running up the score -- 70-0 -- would affect the ratings. Good MOV systems negate that effect, but most people (including many committee members) do not understand that.
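A diminishing-return MOV curve is easy to picture. Here is one possible shape, with constants of my own choosing rather than anything from an actual rating system: full credit for early points, with the marginal value of each extra point shrinking toward a ceiling:

# One possible diminishing-return MOV transform (constants are my own
# invention): credit grows quickly at first, then flattens out.
import math

def mov_credit(margin, scale=14.0, cap=3.0):
    return cap * math.tanh(margin / scale)

for m in (3, 7, 14, 28, 70):
    print(m, round(mov_credit(m), 2))
# 3->0.63, 7->1.39, 14->2.28, 28->2.89, 70->3.0
# A 70-point blowout earns barely more than a 28-point win.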

Green26
November 29th, 2013, 12:22 PM
Informed opinions I have read state that a diminishing-return MOV system provides a good result, and I would agree, as a viewer. Just a general observation. The GPI people argued this to the NCAA, but they only want non-MOV because people think running up the score -- 70-0 -- would affect the ratings. Good MOV systems negate that effect, but most people (including many committee members) do not understand that.

The running-up-the-score argument is silly. The MOV factor can be limited to some percentage up to, say, 21 or 28 points of MOV--say, 50% or 75% of the first 21/28 points, or the first 10 points in full plus 50% of the next 11/18 points. A 9-3 team with an average MOV of 20 should be rated higher than a 9-3 team that wins by an average of 5, especially within the same conference. Of course, there are other important factors too. Like I said, not using any MOV factor is just plain dumb. Of course, I believe the ncaa is often just plain dumb, as well as corrupt in its enforcement arm.
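That cap is equally easy to write down. A literal sketch of the "first 10 points in full, plus 50% of the next 18, nothing beyond 28" version described above:

# Capped MOV credit as described above: the first 10 points of margin
# count in full, the next 18 count at half value, and anything past
# 28 counts for nothing.
def capped_mov(margin):
    margin = min(abs(margin), 28)
    return min(margin, 10) + 0.5 * max(margin - 10, 0)

for m in (5, 10, 20, 28, 70):
    print(m, capped_mov(m))
# outputs 5.0, 10.0, 15.0, 19.0, 19.0: running it up past 28 buys nothing.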

Lehigh Football Nation
November 29th, 2013, 04:18 PM
Again, you do not speak from a position of authority on the subject, so your POV is taken as such.

A position of authority like your own?

caribbeanhen
November 30th, 2013, 12:04 AM
Depends on the type of airspeed you are talking about: Calibrated Airspeed, Indicated Airspeed, True Airspeed, or Groundspeed (GS)? I have used all of them. I would go with groundspeed, since all "airspeeds" are meaningless when talking about the speed necessary to match the sun's position. The equator is about 24,901 miles around, so it would be about 1,037 miles an hour GS; but since we are talking knots, and a knot is roughly equivalent to 1.15 miles an hour, about 902 kts GS would match your position on the earth to the sun. I did not cipher it out exactly, just rounded it off.

But you did not ask whether it was an African or a European swallow!

you are the man

Wallace
November 30th, 2013, 07:17 AM
A position of authority like your own?

Compared to you, yes. The authority I refer to is people who actually understand statistics, namely Kenneth Massey, the GPI adviser.

ElCid
November 30th, 2013, 01:38 PM
As someone posted above, the football ratings people are all numbers/stats people, not marketing people or biased fans. I agree that it is ideal to know what the methodologies are, but in football ratings, when dozens of them are available and used, the particular methodologies become less meaningful--unless they are just plain stupid. The outliers can be analyzed, if necessary, or just ignored. I believe in looking into details and doing one's homework, but at some point, it is not a good use of time to keep digging into the details--and, in my view, knowing the methodologies and details dozens of rating systems is not necessary or a good use of time/resources.

More on transparency. After a basic amount of information is made public or available, I find that many people wanting more transparency are either just using the term (and not necessarily knowing what it means in the context) or looking for more details that they might be able to use or spin to attack the conclusion or decision. Transparency has become such a buzz word in so many areas.

I know what you are saying about transparency, but I was not trying to use it in that way--I hate buzzwords as well. I just like people and organizations (polls/ratings) that show their processes and policies up front. Also, looking at it from another side, constructive criticism or critiquing is not necessarily spin. And agreed, multiple ratings allow for some amount of consensus, even if it may be general rather than precise. But there are some outliers that some people tout.

bluehenbillk
November 30th, 2013, 04:51 PM
Compared to you, yes. The authority I refer to is people who actually understand statistics, namely Kenneth Massey, the GPI adviser.

When football teams don't make the postseason for five years running, they change coaches. Since the GPI has been in a "five-year postseason rut" of its own, maybe it's time to change things as well.