
PFF Ranks for Bills


Reader


My issue with PFF is more to do with their podcasts.

 

With the Bills, they weigh the negatives far more heavily than the positives. They seem to just not take into account all the game nuances - especially with Allen - and stick to their initial ideas about the players, as if they are stuck in a narrative loop. It feels like an agenda, even though I don't think it is. I think they just don't believe in the players or the team.

 

The problem is resolved if we keep winning, in all likelihood, but it is more than mildly annoying.


2 hours ago, GunnerBill said:

 

Well most NFL teams are subscribers so they clearly think it is a useful tool. It isn't intended as a mechanism to predict anything. It is an attempt to apply numerical value to every play in football. Now football isn't a game that lends itself to that very easily, but that doesn't mean one should dismiss any analysis that attempts to do so. I don't agree with PFF grades as any sort of definitive barometer for a player's performance, but the approach of applying a numerical value to every player's role in each play is an interesting one and can identify patterns that are useful in evaluation.

 

The fans who hate it generally hate the fact that it is imperfect. As far as I am aware it has never claimed to be perfect. It acknowledges the imperfect nature of applying numbers to football. But to dismiss it as worthless is short-sighted. And often it is just because people don't like specific outcomes.

 

Analytics is not supposed to be balanced. That is why using them as a single determinative measure almost always fails. You need the human brain and the human eye to add the nuance and the context. Numbers are a blunt and cruel mistress. That is kind of the point of them.

 

Gunner, first I am not disputing that analytics is a useful tool but I do have a question for you.

I did some simple math: 22 players on the field for every snap, at least 140 plays a game, and 16 games per week.

That comes out to roughly 50,000 individual evaluations per week.
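A quick sanity check on those figures (the 140 plays and 16 games are the assumed numbers from the post, not anything PFF publishes):

```python
# Rough count of individual play evaluations per week,
# using the post's assumed figures.
players_per_snap = 22
plays_per_game = 140    # assumed lower bound
games_per_week = 16

evaluations = players_per_snap * plays_per_game * games_per_week
print(evaluations)  # 49280, i.e. roughly 50,000
```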

 

The individual evaluating each play would need to determine what the call was and watch each player (maybe rewinding multiple times) to determine the +/- result.  It just seems like too much information to digest to come up with these scores this fast.

 

Do you have any info on how this is done?  Are there multiple people inputting evaluations for each game?  If so, is the average used?

Is an individual judged relative to who he is up against?  (Example: an OT going against Mack on an individual play would have to be scored differently than if he was going against a rotational scrub.)  I could go on but I think you get my point.

 

I have used analytics in my job for years, and it's how the data is gathered and examined that means everything.

Thanks.


1 minute ago, ColoradoBills said:

 

Gunner, first I am not disputing that analytics is a useful tool but I do have a question for you.

I did some simple math: 22 players on the field for every snap, at least 140 plays a game, and 16 games per week.

That comes out to roughly 50,000 individual evaluations per week.

 

The individual evaluating each play would need to determine what the call was and watch each player (maybe rewinding multiple times) to determine the +/- result.  It just seems like too much information to digest to come up with these scores this fast.

 

Do you have any info on how this is done?  Are there multiple people inputting evaluations for each game?  If so, is the average used?

Is an individual judged relative to who he is up against?  (Example: an OT going against Mack on an individual play would have to be scored differently than if he was going against a rotational scrub.)  I could go on but I think you get my point.

 

I have used analytics in my job for years, and it's how the data is gathered and examined that means everything.

Thanks.

 

My understanding is that each game is analysed by a lead analyst and quality-control checked before initial grades are applied, and then re-watched and re-graded when the All-22 is available.

 

Individuals are not judged on who they were up against or on how critical a play is in the game. Those are some of the reasons I say, repeatedly, that it is imperfect as a basis for any definitive determinations. You have to use it appropriately. It is a tool in the toolbox when properly applied.

 

I have been a critic of plenty of its individual outcomes over the years. Its grading system led to far too much credit being given to Tyrod Taylor, for example, and last season Tre White ranked as relatively average because it is hard to get the elite coverage grades when nobody throws your way any more. That is where nuance and the need for an experienced eye come into play, in my mind. But it doesn't invalidate everything PFF do.


27 minutes ago, GunnerBill said:

 

My point is you only need it if you are claiming a definitive verdict. They are not. People are still misunderstanding a bit what PFF do. 

Your point is statistically wrong. The point of a margin of error is that the mean is NOT definitive. If I tell you I have a basket of 50 apples, that is definitive. If I tell you I have a basket of 50 apples +/- 5 apples, I am telling you that the number of apples is between 45 and 55, which is not definitive. By presenting just one number without a margin of error they are implying the number is definitive. Oh, they can claim it isn't "definitive", but without also providing the margin of error the number they provide is essentially useless.

 

For example this week PFF rated:

Ed Oliver 80.3 
Tremaine Edmunds 78.2

 

If the error margin is +/- 2 points then both players would be statistically tied. One player cannot be said to rate higher than the other because the margin of error overlaps. The margin of error is an admission that the numbers are NOT definitive but the true value lies someplace in between with some level of confidence. 
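The overlap argument reduces to a one-line check (the +/- 2 margin is the post's hypothetical figure, not a number PFF publishes):

```python
def intervals_overlap(score_a: float, score_b: float, margin: float = 2.0) -> bool:
    """True when the +/- margin bands around two grades overlap,
    i.e. the grades alone cannot separate the two players."""
    return abs(score_a - score_b) <= 2 * margin

# Ed Oliver 80.3 vs Tremaine Edmunds 78.2 with a hypothetical +/- 2 margin:
print(intervals_overlap(80.3, 78.2))  # True: 78.3-82.3 and 76.2-80.2 overlap
```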


2 minutes ago, RememberTheRockpile said:

Your point is statistically wrong. The point of a margin of error is that the mean is NOT definitive. If I tell you I have a basket of 50 apples, that is definitive. If I tell you I have a basket of 50 apples +/- 5 apples, I am telling you that the number of apples is between 45 and 55, which is not definitive. By presenting just one number without a margin of error they are implying the number is definitive. Oh, they can claim it isn't "definitive", but without also providing the margin of error the number they provide is essentially useless.

 

For example this week PFF rated:

Ed Oliver 80.3 
Tremaine Edmunds 78.2

 

If the error margin is +/- 2 points then both players would be statistically tied. One player cannot be said to rate higher than the other because the margin of error overlaps. The margin of error is an admission that the numbers are NOT definitive but the true value lies someplace in between with some level of confidence. 

 

Nobody is saying the numbers are definitive. You want to portray it as such because it helps your agenda that PFF is worthless. You are entitled to that view. You are also wrong. 


10 minutes ago, GunnerBill said:

 

My understanding is that each game is analysed by a lead analyst and quality-control checked before initial grades are applied, and then re-watched and re-graded when the All-22 is available.

 

Individuals are not judged on who they were up against or on how critical a play is in the game. Those are some of the reasons I say, repeatedly, that it is imperfect as a basis for any definitive determinations. You have to use it appropriately. It is a tool in the toolbox when properly applied.

 

I have been a critic of plenty of its individual outcomes over the years. Its grading system led to far too much credit being given to Tyrod Taylor, for example, and last season Tre White ranked as relatively average because it is hard to get the elite coverage grades when nobody throws your way any more. That is where nuance and the need for an experienced eye come into play, in my mind. But it doesn't invalidate everything PFF do.

 

Thanks for your details about the process.  It's what I figured.  Like I said, I don't negate the importance of analytics, and this system seems to be a loose scoring of individual play.  I would also think that teams that use analytics to a higher degree would add some of the "variables" we both just highlighted.

 

Your Tre White example is a great one.  I'm just now wondering how often PFF re-examines their inputs to adjust for more glaring deficiencies.  A good tool always needs to be tweaked.  Thanks again.


To me the issue with any such assessment is trying to put a quantitative score on qualitative data.  Others have made very cogent points about the statistical limits inherent within the PFF analysis.  The biggest to me is that you are relying, from what Gunner says, on one primary analyst to do the grading, then reviewed by whatever a lead person is.  If I am reviewing a paper on scoring, say, tissue reactions to a given treatment (as I do commonly when reviewing scientific manuscripts), I need to see a measure of inter- and intra-observer bias.  Intra-observer bias = how reproducible an individual's grade is if you have him look at the same sample multiple times.  Inter-observer = how well two or more different observers agree on the grade of an individual assessment.  Without knowing that, without knowing the training of the observers, without understanding how they know or assume the specific role for a given player on a given play, I just don't see how this kind of information can be that useful.  I wonder if the teams who subscribe are using it simply to compare to their own assessments, using it as a source of film for their use, etc.
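For what it's worth, inter-observer agreement of the kind described above is commonly quantified with Cohen's kappa (agreement corrected for chance). A minimal sketch, with entirely made-up play grades from two hypothetical graders of the same game:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement between two raters,
    corrected for the agreement expected by chance."""
    n = len(rater_a)
    # observed agreement
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement, from each rater's marginal frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (po - pe) / (1 - pe)

# hypothetical per-play grades (-2 .. +2) from two graders
a = [0, 1, -1, 0, 2, 0, -1, 1]
b = [0, 1, 0, 0, 2, -1, -1, 1]
print(round(cohens_kappa(a, b), 2))  # 0.65 - only moderate agreement
```

Kappa near 1 means the graders are interchangeable; values in the 0.4-0.6 range would be exactly the kind of reproducibility problem the post is worried about.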

6 minutes ago, GunnerBill said:

 

Nobody is saying the numbers are definitive. You want to portray it as such because it helps your agenda that PFF is worthless. You are entitled to that view. You are also wrong. 

If the numbers are not definitive then I'm not sure what the value is of them supplying the numbers they supply.  I'm curious as to how you think teams should use this kind of data then.


10 minutes ago, RememberTheRockpile said:

Your point is statistically wrong. The point of a margin of error is that the mean is NOT definitive. If I tell you I have a basket of 50 apples, that is definitive. If I tell you I have a basket of 50 apples +/- 5 apples, I am telling you that the number of apples is between 45 and 55, which is not definitive. By presenting just one number without a margin of error they are implying the number is definitive. Oh, they can claim it isn't "definitive", but without also providing the margin of error the number they provide is essentially useless.

 

For example this week PFF rated:

Ed Oliver 80.3 
Tremaine Edmunds 78.2

 

If the error margin is +/- 2 points then both players would be statistically tied. One player cannot be said to rate higher than the other because the margin of error overlaps. The margin of error is an admission that the numbers are NOT definitive but the true value lies someplace in between with some level of confidence. 

 

For the sake of argument, from a mathematical perspective: saying there is an error of +/- 2 doesn't invalidate a difference between Oliver and Edmunds. If we ignore standard deviations beyond the first, the only way they are even would be if Oliver averages a -1 and Edmunds a +1 (of course Oliver -2 and Edmunds 0 also works, as do all the other combinations). It has been a while since I took statistics so I'm a little rusty, but I think that would mean there is about a 90% chance Oliver played better, and while the ratings are still fallible, it is significant.
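Under one reading of that +/- 2 (as a one-standard-deviation Gaussian error on each grade, which the next post rightly questions), the probability can be computed directly; other readings of the margin give different answers:

```python
import math

def prob_a_better(grade_a: float, grade_b: float, sd: float = 2.0) -> float:
    """P(A's true grade exceeds B's), assuming each published grade is
    the true value plus independent Gaussian noise with the given sd."""
    diff_sd = math.sqrt(2) * sd                       # sd of the difference A - B
    z = (grade_a - grade_b) / diff_sd
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))     # standard normal CDF

print(round(prob_a_better(80.3, 78.2), 2))  # 0.77 under these assumptions
```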

24 minutes ago, GunnerBill said:

 

My understanding is that each game is analysed by a lead analyst and quality-control checked before initial grades are applied, and then re-watched and re-graded when the All-22 is available.

 

Individuals are not judged on who they were up against or on how critical a play is in the game. Those are some of the reasons I say, repeatedly, that it is imperfect as a basis for any definitive determinations. You have to use it appropriately. It is a tool in the toolbox when properly applied.

 

I have been a critic of plenty of its individual outcomes over the years. Its grading system led to far too much credit being given to Tyrod Taylor, for example, and last season Tre White ranked as relatively average because it is hard to get the elite coverage grades when nobody throws your way any more. That is where nuance and the need for an experienced eye come into play, in my mind. But it doesn't invalidate everything PFF do.

 

Yeah, PFF was way too complimentary of Tyrod and didn't say a word when he bombed in Cleveland, proving us right and them wrong. That being said, and I think you and I are in agreement here, it is useful to see their grades for players that we're not watching. People can say the o-line had a good game, but did they? I wasn't watching them, I was watching Allen, and I have no idea how well Spain picked up a stunt play after play.


3 minutes ago, GunnerBill said:

 

Nobody is saying the numbers are definitive. You want to portray it as such because it helps your agenda that PFF is worthless. You are entitled to that view. You are also wrong. 

You just don't get it. If they present a single number they are implicitly declaring it definitive. If you claim they are not, then where is the margin of error? Is it so big that the number they are presenting would become entirely useless?

 

You don't like what I am saying because it conflicts with what you want to believe. You have failed to address any of the points I have made. You now accuse me of having an agenda, which is a rather cute ad hominem considering you have failed to address what is basic statistical practice. It isn't my view; it is the view of anybody who has taken even a basic statistics course. You conclude by saying I am wrong based entirely on your ignorance of statistics.


7 minutes ago, RememberTheRockpile said:

You just don't get it. If they present a single number they are implicitly declaring it definitive. If you claim they are not, then where is the margin of error? Is it so big that the number they are presenting would become entirely useless?

 

You don't like what I am saying because it conflicts with what you want to believe. You have failed to address any of the points I have made. You now accuse me of having an agenda, which is a rather cute ad hominem considering you have failed to address what is basic statistical practice. It isn't my view; it is the view of anybody who has taken even a basic statistics course. You conclude by saying I am wrong based entirely on your ignorance of statistics.

 

Haha. Of course. You can talk statistical practice at me all you like. These are not statistics in the traditional sense. You are saying "ah numbers, let me revert to statistics 101". That isn't what PFF are doing. Sorry to break it to you. 


5 minutes ago, Reader said:

 

For the sake of argument, from a mathematical perspective: saying there is an error of +/- 2 doesn't invalidate a difference between Oliver and Edmunds. If we ignore standard deviations beyond the first, the only way they are even would be if Oliver averages a -1 and Edmunds a +1 (of course Oliver -2 and Edmunds 0 also works, as do all the other combinations). It has been a while since I took statistics so I'm a little rusty, but I think that would mean there is about a 90% chance Oliver played better, and while the ratings are still fallible, it is significant.

The 90% chance would be based on a Gaussian distribution, which I doubt is valid. In fact, given the subjective nature of the analysis, I would expect a well-known player's distribution to be significantly different from some unknown player's, entirely due to bias. Oldmanfan does a nice job of highlighting the enormity of the pitfalls involved even with trained evaluators.


3 minutes ago, Reader said:

 

For the sake of argument, from a mathematical perspective: saying there is an error of +/- 2 doesn't invalidate a difference between Oliver and Edmunds. If we ignore standard deviations beyond the first, the only way they are even would be if Oliver averages a -1 and Edmunds a +1 (of course Oliver -2 and Edmunds 0 also works, as do all the other combinations). It has been a while since I took statistics so I'm a little rusty, but I think that would mean there is about a 90% chance Oliver played better, and while the ratings are still fallible, it is significant.

 

After hearing Gunner's reply about the evaluation process (which is along the lines of what I figured), you could determine that the result is a loose (generic, if you will) scoring of the player's effectiveness.

It seems to me that IF the scoring is done that way, having the result reflect a specific number with a decimal place is a bit conflicting.

To say it another way, if you are examining in generalities, the score should be in generalities.

I would think now, PFF would be better off scoring both Edmunds and Oliver as a B+ (or whatever the 78-80 score equates to).

 

The decimal-point score implies an exactness which, like Gunner said, is not how the evaluation is done.


2 minutes ago, GunnerBill said:

 

Haha. Of course. You can talk statistical practice at me all you like. These are not statistics in the traditional sense. You are saying "ah numbers, let me revert to statistics 101". That isn't what PFF are doing. Sorry to break it to you. 

They are not statistics at all. What they are doing is selling you dubious numbers dressed up as "advanced statistics". 


6 minutes ago, RememberTheRockpile said:

The 90% chance would be based on a Gaussian distribution, which I doubt is valid. In fact, given the subjective nature of the analysis, I would expect a well-known player's distribution to be significantly different from some unknown player's, entirely due to bias. Oldmanfan does a nice job of highlighting the enormity of the pitfalls involved even with trained evaluators.

 

I appreciate the response and I think you are right. I guess for me it's trying to find the balance between "PFF is laughable" and "PFF is gospel", and I err on the side of the latter in an attempt to balance how I feel most of the board leans towards the former.


9 minutes ago, GunnerBill said:

 

Haha. Of course. You can talk statistical practice at me all you like. These are not statistics in the traditional sense. You are saying "ah numbers, let me revert to statistics 101". That isn't what PFF are doing. Sorry to break it to you. 

Then what are they doing?  People are explaining to you why their methods can be questioned, but you're not explaining what they want to accomplish with their site.  When you use numbers to compare variables, that is statistical analysis by definition.


7 minutes ago, ColoradoBills said:

 

After hearing Gunner's reply about the evaluation process (which is along the lines of what I figured), you could determine that the result is a loose (generic, if you will) scoring of the player's effectiveness.

It seems to me that IF the scoring is done that way, having the result reflect a specific number with a decimal place is a bit conflicting.

To say it another way, if you are examining in generalities, the score should be in generalities.

I would think now, PFF would be better off scoring both Edmunds and Oliver as a B+ (or whatever the 78-80 score equates to).

 

The decimal-point score implies an exactness which, like Gunner said, is not how the evaluation is done.

 

PFF's ratings are weird. FWIW, I think both of them are around a B+. I think for them 60-65 is average, 65-75 is good, 75-85 is great, and 85+ is elite. If average is a C, I'm not exactly sure about the breakdown, but B+ would be my guess.
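Taken at face value, those guessed-at bands map to letters like this (the cut-offs are the post's guesses, not anything PFF publishes):

```python
def letter_grade(score: float) -> str:
    """Map a 0-100 grade to a rough letter band using the guessed cut-offs."""
    if score >= 85:
        return "A"    # elite
    if score >= 75:
        return "B+"   # great
    if score >= 65:
        return "B"    # good
    if score >= 60:
        return "C"    # average
    return "D"        # below average

print(letter_grade(80.3), letter_grade(78.2))  # B+ B+ - the same band
```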


18 minutes ago, ColoradoBills said:

 

After hearing Gunner's reply about the evaluation process (which is along the lines of what I figured), you could determine that the result is a loose (generic, if you will) scoring of the player's effectiveness.

It seems to me that IF the scoring is done that way, having the result reflect a specific number with a decimal place is a bit conflicting.

To say it another way, if you are examining in generalities, the score should be in generalities.

I would think now, PFF would be better off scoring both Edmunds and Oliver as a B+ (or whatever the 78-80 score equates to).

 

The decimal-point score implies an exactness which, like Gunner said, is not how the evaluation is done.

 

Using letter grades would be a crude form of error margin which would be an improvement. 

 

12 minutes ago, Reader said:

 

I appreciate the response and I think you are right. I guess for me it's trying to find the balance between "PFF is laughable" and "PFF is gospel", and I err on the side of the latter in an attempt to balance how I feel most of the board leans towards the former.

 

I seriously doubt that what is available to the NFL teams is available to the general public. I suspect they have at least two markets - pro/college and consumer - where the products are significantly different. I would also expect the pro/college products are customized (big bucks there) to the customer's specs and heavily quantitative. The consumer products, OTOH, are highly subjective and appeal to the general public's tastes.

 

As much as we hate the hoodie, we respect his knowledge of football. Here is what he has to say:

https://bostonsportsmedia.com/2014/06/04/can-pro-football-focus-stats-be-blindly-trusted/

Quote

But believe me, I’ve watched plenty of preseason games this time of year and you’re looking at all the other teams in the league and you try to evaluate players and you’re watching the teams that we’re going to play early in the season and there are plenty of plays where I have no idea what went wrong. Something’s wrong but I don’t…these two guys made a mistake but I don’t know which guy it was or if it was both of them. You just don’t know that. I don’t know how you can know that unless you’re really part of the team and know exactly what was supposed to happen on that play. I know there are a lot of experts out there that have it all figured out but I definitely don’t. This time of year, sometimes it’s hard to figure that out, exactly what they’re trying to do. When somebody makes a mistake, whose mistake is it?

Moral of the story: if people want something that is not possible to provide, someone will provide a product that looks like it.

