
Predicting pitchers’ walks using xBB%

The other day, I discussed predicting pitchers’ strikeout rates using xK%. Today I will conduct the same exercise for walks: I want to see how well a pitcher’s actual walk rate (BB%) correlates with what his walk rate should be (expected BB%, henceforth “xBB%”). As with xK%, I used my best intuition to identify reliable indicators of a pitcher’s true walk rate using readily available data.

An xBB% metric, like xK%, would indicate not only whether a pitcher perennially over-performs (or under-performs) his walk rate but also whether he happened to do so in a given year. This article will conclude by looking at how the difference between actual and expected walk rates (BB% – xBB%) varied between 2014 and career numbers, lending some insight into the (un)luckiness of each pitcher.

Courtesy of FanGraphs, I constructed another set of pitching data spanning 2010 through 2014. This time, I focused primarily on what I thought would correlate with walk rate: inability to pitch in the zone and inability to induce swings on pitches out of the zone. I also threw in first-pitch strike rate: I predict that counts starting with a ball are more likely to end in a walk than those starting with a strike. Because FanGraphs’ data measures ability rather than inability — “Zone%” measures how often a pitcher hits the zone; “O-Swing%” measures how often batters swing at pitches out of the zone; “F-Strike%” measures the rate of first-pitch strikes — each variable should have a negative coefficient attached to it.

I specified a handful of variations before deciding on a final version. Instead of using split-season data (that is, each pitcher’s individual seasons from 2010 to 2014) for qualified pitchers, I used aggregated statistics because they fit the data better by a sizable margin. This initially surprised me because there were about half as many observations, but in hindsight it makes sense: each observation is, itself, drawn from a larger sample of innings than before.

At one point, I tried creating my own variable: looks (non-swings) at pitches out of the zone. I computed the percentage of pitches out of the zone (1 – Zone%) and multiplied it by how often batters refused to swing at them (1 – O-Swing%). This version of the model produced a nice fit, but it was slightly worse than leaving the variables separate. I also ran parallel regressions on the PITCHf/x data and FanGraphs’ own plate-discipline data. The PITCHf/x data appeared to be slightly more accurate, so I proceeded with it.
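For anyone who wants to replicate that comparison, here is a minimal sketch in Python with pandas and statsmodels. The file name and column names (bb_pct, o_swing_pfx, zone_pfx, f_strike) are placeholders I made up for however you label a FanGraphs export; they are not FanGraphs’ actual headers.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical aggregated 2010-2014 leaderboard export; column names are assumed.
# All rates are expressed as proportions (e.g. 0.45), not percentages.
df = pd.read_csv("qualified_pitchers_2010_2014.csv")

# Derived variable: looks (non-swings) at pitches out of the zone.
df["o_look"] = (1 - df["zone_pfx"]) * (1 - df["o_swing_pfx"])

# Model A: the three indicators kept separate.
model_a = smf.ols("bb_pct ~ o_swing_pfx + zone_pfx + f_strike", data=df).fit()

# Model B: the combined out-of-zone-look variable plus first-pitch strikes.
model_b = smf.ols("bb_pct ~ o_look + f_strike", data=df).fit()

print(model_a.rsquared, model_b.rsquared)  # compare the two fits
```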

The graph plots actual walk rates versus expected walk rates. The regression yielded the following equation:

xBB% = .3766176 – .2103522*O-Swing%(pfx) – .1105723*Zone%(pfx) – .3062822*F-Strike%
R-squared = .6433
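To make the plug-and-chug explicit, here is a minimal sketch of the equation as a function. The input values in the example are made up for illustration, not any real pitcher’s line.

```python
def expected_bb_pct(o_swing_pfx: float, zone_pfx: float, f_strike: float) -> float:
    """xBB% from the regression above; inputs are proportions (e.g. 0.45 for 45%)."""
    return (0.3766176
            - 0.2103522 * o_swing_pfx
            - 0.1105723 * zone_pfx
            - 0.3062822 * f_strike)

# Example with made-up plate-discipline numbers, not a real pitcher's line.
xbb = expected_bb_pct(o_swing_pfx=0.32, zone_pfx=0.47, f_strike=0.62)
print(f"xBB% = {xbb:.1%}")  # roughly 6.7% with these inputs
```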

Again, R-squared indicates how well the model fits the data. An R-squared of .64 is not as exciting as the one I got for xK%; it means the model explains about 64 percent of the variation in walk rate, and the remaining 36 percent is explained by things I haven’t included in the model. Certainly, more variables could help explain xBB%. I am already considering combining FanGraphs’ PITCHf/x data with some of Baseball Reference’s data, which does a great job of keeping track of the number of 3-0 counts, four-pitch walks and so on.

And again, to use the equation above to your own benefit, plug in the appropriate values for a player in a given season or time frame to determine his xBB%. Then compare that xBB% to his single-season or career BB% to derive some kind of meaningful results. And (one more) again, I have already taken the liberty of doing this for you.

Instead of including every pitcher from the sample, I narrowed it down to only pitchers with at least three years’ worth of data in order to yield some kind of statistically significant results. (Note: a three-year sample is a small sample, but three individual samples of 160+ innings are large enough to produce arguably robust results.) “Avg BB% – xBB%” (or “diff%”) takes the average of a pitcher’s difference between actual and expected walk rates from 2010 to 2014. It indicates how well (or poorly) he performs compared to his xBB%: the lower the number, the better. This time, I included a “t-score”, which measures how reliable diff% is. The key value here is 1.96; anything with a magnitude greater than that means his diff% is reliable. (1.00 to 1.96 is somewhat reliable; anything less than 1.00 is very unreliable.) Again, this is slightly problematic because there are five observations (years) at most, but it’s the best and simplest usable indicator of reliability.
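For anyone curious how the table’s columns could be computed, here is a minimal sketch for one pitcher. I am assuming the t-score is a one-sample t-statistic of the yearly differences against zero, and the yearly values below are made up for illustration.

```python
import statistics
from math import sqrt

# Hypothetical year-by-year BB% minus xBB% values for one pitcher (percentage points).
diffs = [-0.8, -1.1, -0.6, -0.9, -1.2]

avg_diff = statistics.mean(diffs)                    # "Avg BB% - xBB%" (diff%)
std_diff = statistics.stdev(diffs)                   # sample standard deviation (StdDev)
t_score = avg_diff / (std_diff / sqrt(len(diffs)))   # one-sample t-statistic vs. zero

print(f"diff% = {avg_diff:+.2f}, StdDev = {std_diff:.2f}, t = {t_score:.2f}")
# |t| > 1.96 suggests the over-/under-performance is reliable.
```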

Thus, Mark Buehrle, Mike Leake, Hiroki Kuroda, Doug Fister, Tim Hudson, Zack Greinke, Dan Haren and Bartolo Colon can all reasonably be expected to consistently out-perform their xBB% in any given year. Conversely, Aaron Harang, Colby Lewis, Ervin Santana and Mat Latos can all reasonably be expected to under-perform their xBB%. For everyone else, their diff% values don’t mean a whole lot. For example, R.A. Dickey’s diff% of +0.03% doesn’t mean he’s more likely than someone else to pitch exactly as well as his xBB% predicts; in fact, his standard deviation (StdDev) of 0.93% indicates he’s less likely than just about anyone to do so. (What it really means is there is only a two-thirds chance his diff% will fall between -0.90% and +0.96%.)

As with xK%, I compiled a list of fantasy-relevant starters with only two years’ worth of data who saw sizable fluctuations between 2013 and 2014. Their data, at this point, is impossible (nay, ill-advised) to interpret, but it is worth monitoring.

Name: [2013 diff%, 2014 diff%]

Miller is an interesting case: he was atrociously bad about gifting free passes in 2014, but his diff% was only marginally worse than it was in 2013. It’s possible that he was a smart buy-low for the Braves — but it’s also possible that Miller not only perennially under-performs his xBB% but is also trending in the wrong direction.

Here are fantasy-relevant players with a) only 2014 data, and b) outlier diff% values:

I’m not gonna lie, I have no idea why Cobb, Corey Kluber and others show up as having only one year of data when they have two in the xK% dataset. This is something I only just noticed. Their exclusion doesn’t fundamentally change the model’s fit whatsoever because it did not rely on split-season data; I’m just curious why they didn’t show up in FanGraphs’ leaderboards. Oh well.

Implications: Richards and Roark perhaps over-performed. Meanwhile, it’s possible that Odorizzi, Ross and Ventura will improve (or regress) compared to last year. I’m excited about all of that. Richards will probably be pretty over-valued on draft day.


Predicting pitchers’ strikeouts using xK%

Expected strikeout rate, or what I will henceforth refer to as “xK%,” is exactly what it sounds like. I want to see if a pitcher’s strikeout rate actually reflects how he has pitched in terms of how often he’s in the zone, how often he causes batters to swing and miss, and so on. Ideally, it will help explain random fluctuations in a pitcher’s strikeout rate, because even strikeouts have some luck built into them.

An xK% metric is not a revolutionary idea. Mike Podhorzer over at FanGraphs created one last year, but he tailored it to hitters. Still, it’s nothing too wild and crazy like WAR or SIERA or any other wacky acronym. (A wackronym, if you will.)

Courtesy of Baseball Reference, I constructed a set of pitching data spanning 2010 through 2014. I focused primarily on what I thought would correlate highly with strikeout rates: looking strikes, swinging strikes and foul-ball strikes, all as a percentage of total strikes thrown. I didn’t want the model specification to be too close to a definition, so it’s beneficial that these rates are on a per-strike, rather than per-pitch, basis.

The graph plots actual strikeout rates versus expected strikeout rates with the line of best fit running through it. I ran my regression using the specification above and produced the following equation:

xK% = -.6284293 + 1.195018*lookstr + 1.517088*swingstr + .9505775*foulstr
R-squared = .9026
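As with xBB%, the equation is easy to apply directly. Here is a minimal sketch; the strike counts are made up to illustrate the per-strike rates, not a real pitcher’s Baseball Reference line.

```python
def expected_k_pct(lookstr: float, swingstr: float, foulstr: float) -> float:
    """xK% from the regression above; inputs are shares of total strikes (e.g. 0.28)."""
    return (-0.6284293
            + 1.195018 * lookstr
            + 1.517088 * swingstr
            + 0.9505775 * foulstr)

# Example with made-up strike totals, not a real pitcher's line.
looking, swinging, foul, total_strikes = 480, 260, 480, 1700
xk = expected_k_pct(looking / total_strikes,
                    swinging / total_strikes,
                    foul / total_strikes)
print(f"xK% = {xk:.1%}")  # roughly 20.9% with these inputs
```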

The R-squared term can, for ease of understanding, be interpreted as how well the model fits the data, from 0 to 1. An R-squared, then, of .9026 represents approximately a 90-percent fit. In other words, these three variables are able to explain 90 percent of the variation in strikeout rate. (The remaining 10 percent is, for now, a mystery!)

To use this equation to your own benefit, insert a pitcher’s looking-strike, swinging-strike and foul-ball-strike percentages into the appropriate variables. Fortunately, I already took the initiative and applied the results to the same data I used to fit the model: all individual qualified seasons by starting pitchers from 2010 through 2014.

The results have interesting implications. Firstly, one can see how lucky or unlucky a pitcher was in a particular season. Secondly, and perhaps most importantly, one can easily identify which pitchers habitually over- and under-perform relative to their xK%. Lastly, one can see how each pitcher is trending over time. Every pitcher is different; although the formula will fit most ordinary pitchers, it goes without saying that the aces of your fantasy squad are far from ordinary, and they should be treated on an individual basis.

(Keep in mind that a lot of these players only have one or two years’ worth of data (as indicated by “# Years”), so the average difference between their xK% and K% as a representation of a pitcher’s true skill will be largely unreliable.)

It is immediately evident: the game’s best pitchers outperform their xK% by the largest margins. Cliff Lee, Stephen Strasburg, Clayton Kershaw, Felix Hernandez and Adam Wainwright are all top-10 (or at least top-15) fantasy starters. But let’s look at their numbers over the years, along with a few others at the top of the list.

Kershaw and King Felix have not only been consistent but also look like they’re getting better with age. Wainwright’s difference between 2013 and 2014 is a bit of a concern; he’s getting older, and this could be a concrete indicator that the decline has officially begun. Darvish’s line is interesting, too: you may or may not remember that he had a massive spike in strikeouts in 2013 compared to his already-elite strikeout rate the prior year. As you can see, it was totally legit, at least according to xK%. But for some reason, even xK% can fluctuate wildly from year to year. I see it in the data, anecdotally: Anibal Sanchez’s huge 6.7-percent spike in xK% from 2012 to 2013 was followed by a 5.5-percent drop from 2013 to 2014. Conversely, David Price’s 5-percent decrease in xK% from 2012 to 2013 was followed by an almost perfectly equal 5-percent increase from 2013 to 2014. So the phenomenon seems to work both ways. Thus, perhaps it shouldn’t have come as a surprise when Darvish couldn’t repeat his 2013 success. To the baseball world’s collective dismay, we simply didn’t have enough data yet to determine which Yu was the true Yu. I plan to do some research to see how often these severe spikes in xK% are mere aberrations versus how often they are sustained over time, indicating a legitimate skills improvement.

I have also done my best to compile a list of players with only one or two years’ worth of data who saw sizable spikes and drops in their K% minus xK% (“diff%”). The idea is to find players for whom we can’t really tell how much better (or worse) their actual K% is compared to their xK% because of conflicting data points. For example, will Corey Kluber be a guy who massively outperforms his xK%, as he did in 2014, or will he only slightly outperform it, as he did in 2013? I present the list not to provide an answer but to posit: Which version of each of these players is more truthful? I guess we will know sometime in October.

Name: [2013 diff%, 2014 diff%]

And here are some fantasy-relevant guys with only data from 2014: