Yes, the world needs better predictions, induced by better training and incentives. But not on sports. What is the point of better predictions there?
The reasons we at Sports Predict have chosen sports are:
Billions of people engage with sports and sports prediction daily
There is often a huge pool of free data (created by other sports enthusiasts)--contrast this with financial data, which (beyond a certain level of detail) can be very expensive. Free sports data plus access to low-cost LLMs make this area of prediction a fairly level playing field (and one that has become much more level in recent years)
Sports have many, many events happening daily and resolving in a (typically) clear way--lots of trials and few oracle issues.
The best games (in my view) are ones where the rules are simple to learn, but the strategy is quite complicated and there are ways to measure skill up to and including the very best. I think our game is just that...easy to learn but hard to be the best at.
I would argue that the closest thing we have to a global talent search currently is chess.com, with 200 million members (most of them on free subscriptions). This allows talented players (often young players) to become internationally known and be identified at a low cost to them and to society. I am a chess player and fan myself, but I think Sports Predict could (potentially!) reach an even broader audience and reveal a skill set that is more directly applicable to solving the world's problems than chess skill.
Sports Predict, like chess.com, presents very little language or cultural barrier to doing well. Sports fandom is largely universal, and we plan to have prediction contests on sports all over the world--we hope to identify the best NFL football predictors and the best IPL cricket predictors.
I agree: I hope the best predictors on Sports Predict find their highest and best use not in trying to "beat the sports books" but in jobs and opportunities in fields where those skills will be of greater benefit to the world (and to them). We are in discussions with some firms that may, in the future if we achieve scale, view Sports Predict as a conduit of talent for them.
Though he doesn't state it directly, doesn't he imply that prediction is a transferable skill? I.e., that we should use predicting the outcome of a game as a game for teaching prediction in general.
I don't think it is, though. Sports are information-rich, and the core problem has essentially been solved, insofar as professional sports gambling is a viable job that doesn't require outlier capabilities. Run-of-the-mill statistical models are typically good enough to beat Vegas.
Superforecasters are impressive insofar as they take information-sparse, or perhaps signal-ambiguous, scenarios and still make effective predictions. Indeed, this is exactly the kind of scenario in which humans continue to crush AI, at least for now.
I would simply say that while there may be many people currently capable of beating the sports books (as the books' practice of banning certain people indicates), Sports Predict hopes to find and identify the VERY BEST of these predictors, who, by definition, would be outliers.
Agree. Had the same question reading this article. The solution doesn't match the stated problem: if he is trying to address the structural under-identification and under-utilisation of talent in value-creating areas of the economy, this seems like the wrong solution. A sports prediction market will certainly help train people in sports prediction--and not really anything else. I don't get the connection.
Say I work hard at becoming a top forecaster on this platform and end up able to call sports games 1% better than the consensus. Then what? That has value in one area I can think of - gambling - which, to my knowledge, is not generally seen as a socially beneficial profession.
I’m just confused…
Sports is an interesting realm, because the game rules are so clear and unambiguous, and play is authoritatively refereed in accordance with them. That creates a framework of controlled conditions that boosts the ease of prediction. As a human behavior, it's practically in a class by itself in that way. The ideal and the actual are very closely aligned.
The stakes are also relatively low, thankfully. Sporting contests are "time out." They're revivifying, but not fraught with permanence or overwhelming significance.
So I'm fine with sports forecasting and oddsmaking, as long as it isn't taken as having wider relevance to the sprawling implications of serious matters, or to settings that are much less amenable to controlled conditions.
Exactly. My comment (I would have replied here had I read your comment before posting mine):
"SportsPredict’s SMART (Sports Market Accuracy RaTing) system measures how often and by how much each forecaster beats the consensus. Over time, these scores become something new, a proof of expertise. Unlike grades or résumés, they cannot be bought. They are earned through calibration, humility, and empirical success. They are the intellectual equivalent of an Elo rating in chess."
I love the idea of evaluating ability through forecasts, but I'm not sure this approach would work. I think an issue with sports is that the consensus forecast will already be very good. To the extent that experts will beat the consensus, I would expect them to beat it by very little. Anyone who can consistently beat the consensus forecast by a large enough margin can already make good money betting on that knowledge.
Given this already strong accuracy of sports forecasts, I fear that whoever does beat markets may have some qualities which are not those that make someone a good decision-maker in everyday decisions.
Take the example of Caplan himself. I believe his betting record does provide extremely good evidence that he is a much better evaluator of relevant issues than most. But he has done that not by being marginally better than already-accurate forecasts; he chose to bet on issues where he perceived his counterparts to be particularly unable to see the clearer view. If he were to bet on sports markets, I'd guess he wouldn't beat the consensus.
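For concreteness, one way a consensus-relative rating of the kind the SMART quote describes could work mechanically is to compare each forecaster's Brier score against the consensus forecast's Brier score on the same games. A minimal sketch in Python, assuming a Brier-score comparison (the actual SMART formula isn't given in the post, so the scoring rule and names here are illustrative assumptions):

```python
# Hypothetical sketch of a consensus-relative rating in the spirit of
# SMART (the real formula isn't published here). A forecaster "beats
# the consensus" on an event when their Brier score is lower.

def brier(p: float, outcome: int) -> float:
    """Squared error of a probability forecast against a 0/1 outcome."""
    return (p - outcome) ** 2

def consensus_relative_score(forecasts, consensus, outcomes):
    """Average Brier edge over the consensus; positive means better.

    forecasts, consensus: probabilities that the home team wins.
    outcomes: 1 if the home team won, else 0.
    """
    edges = [
        brier(c, y) - brier(f, y)  # positive when forecaster beats consensus
        for f, c, y in zip(forecasts, consensus, outcomes)
    ]
    return sum(edges) / len(edges)

# Toy example: three games, forecaster slightly sharper than consensus.
print(consensus_relative_score(
    forecasts=[0.65, 0.30, 0.55],
    consensus=[0.60, 0.35, 0.50],
    outcomes=[1, 0, 1],
))  # ~0.039: a small but consistent edge over the consensus
```

Under a scheme like this, "how often and by how much" each forecaster beats the consensus reduces to the sign and magnitude of these per-event edges, averaged over a long track record.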
I love this proposal, though I would replace betting on sports with betting on the outcome of cases being decided by the Supreme Court of the United States and by other courts. Also, on a related note, check out my latest paper "Retrodiction Markets": https://researchonline.stthomas.edu/esploro/outputs/journalArticle/Retrodiction-Markets/991015317235703691
"At the same time, the global public has grown skeptical that anyone’s expertise matters. Tom Nichols, in “The Death of Expertise”, argues that citizens now confuse equality of rights with equality of knowledge."
Or maybe the global public has grown skeptical that "expertise" justifies political power. Plenty of experts confuse inequality of knowledge with inequality of rights.
This is the first post to offset my concerns about the negative societal impacts of gambling; well done! Two issues that I hope you can address going forward: the first is a better understanding of the differences between uncertainty and probability (the best attempts I have read are by Vaughn Tan). The second is to develop the scorecard of expertise. This will require identifying the criteria (see the first concern) and clearly defining the taxonomy so that it can be quantified and measured. Please let me know how I can follow your progress. Thanks!
"SportsPredict’s SMART (Sports Market Accuracy RaTing) system measures how often and by how much each forecaster beats the consensus. Over time, these scores become something new, a proof of expertise. Unlike grades or résumés, they cannot be bought. They are earned through calibration, humility, and empirical success. They are the intellectual equivalent of an Elo rating in chess."
I love the idea of evaluating ability through forecasts, but I'm not sure this approach would work. I think an issue with sports is that the consensus forecast will already be very good. To the extent that experts will beat the consensus, I would expect them to beat it by very little. Anyone who can consistently beat the consensus forecast by a large enough margin can already make good money betting on that knowledge.
Given this already strong accuracy of sports forecasts, I fear that whoever does beat markets may have some qualities which are not those that make someone a good decision-maker in everyday decisions.
Take the example of Caplan himself. I believe his betting record does provide extremely good evidence that he is a much better evaluator of relevant issues than most. But he has done that not by being marginally better than already-accurate forecasts; he chose to bet on issues where he perceived his counterparts to be particularly unable to see the clearer view. If he were to bet on sports markets, I'd guess he wouldn't beat the consensus.
You are correct that very few people can beat the consensus forecast--especially given the costs ("the vig") that sports books charge to play. Given the high bid/ask spreads (relative to, say, financial markets), it is estimated that only 3% of regular predictors on the main sports gambling sites make money in any given year.
However, the large sports gambling sites do actually "throttle" (reduce allowable bet sizes to very small amounts) or outright ban certain predictors they think may have an edge. They do this even at the cost of some bad publicity on the issue, so I think it is safe to assume they believe some players can beat the consensus forecast.
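To put a number on the vig: at the standard -110 line you risk 110 to win 100, so a bettor needs to win roughly 52.4% of bets just to break even. A quick sketch of that arithmetic (the -110 line is the typical US spread bet, assumed here purely for illustration):

```python
# Break-even win rate under a typical vig. Assumes standard US -110
# odds on both sides (an illustrative assumption): risk 110 to win 100.

def breakeven_prob(odds: int = -110) -> float:
    """Win probability needed to break even at negative American odds."""
    risk = abs(odds)  # amount staked
    win = 100         # profit if the bet wins
    return risk / (risk + win)

print(f"{breakeven_prob(-110):.4f}")  # 0.5238 -> need ~52.4% winners
```

That ~2.4-point gap between a coin flip and break-even is the house's cut, and it is why only a small fraction of regular bettors end a year in the black.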
Thanks. I'm willing to accept some people might have some edge over markets, and I highly applaud the goal of your project. I think it's very much in the right direction, and may provide a necessary starting point for assessing people's relevant talents. Good luck with it!
Is this not Bet On It at scale?
I like it. Shared with one of my favorite sports betting prognosticators. [https://www.rollbamaroll.com/post/FeJZUW7nEG5s]
Hard disagree. "Forecasting" is less important than far-ranging assessment of relevant concerns in the present, perspicacity, and resourcefulness when dealing with the unanticipated. "Forecasting" narrows possibilities into probabilities, sometimes to the point of quasi-certainty--ironically, as a static readout. Fluidity and flexibility are more important than Autocorrect.
But what if the relevant causal factors (such as the mental states of individual humans) can't be observed directly and thus can't be compared to an external measurement standard? And what if these causal factors are embedded in complex social systems that one can't use experimental controls on to perform rigorous experiments?
The long-standing dream of treating the social sciences according to the methods of the natural sciences is a colossal failure. Absent the rigorous methods of eliminative logical inductions, we are left with just a few self-evident facts (like the purposefulness of human action) as a starting point for making a few generalizations and stringing together chains of deductions from them, and with attempts to make enumerative logical inductions describing indirect inferences about mental states, etc.
Such methodological limitations of the social sciences rule out the kinds of quantitative predictions of the timing and magnitude of effects based on causal laws derived from laboratory experiments, as in fields like physics or chemistry. The best economic generalizations can do is describe the qualitative effects of some change (without quantifying magnitudes and timing) under the assumption that all other causal factors are held constant (the _ceteris paribus_ condition), which of course is never the case in the dynamic real world.
Another issue with attempted meritocratic measurements is that humans rank their desired outcomes, which aren't always quantifiable even if they are observable. Suppose, for example, I want to compare the performance of home-building contractors. I might look at examples of their work, read reviews by previous customers, etc.; but any ranking I make among them on the basis of such evidence, and any dollar-premium I am willing to pay to the people I regard as the higher performers, is an entirely subjective judgement on my part.
Even in the world of professional sports, subjective judgements are always involved. Maybe a star player has brought in more fans when he is playing, so I can make a "money ball" calculation about what that player has been worth to the business in the past. However, backward-looking accounting is not the same thing as forward-looking entrepreneurial calculations. Both the player and the attitudes of the fans regarding that player are ever-changing variables. The uncertainty about the future facing entrepreneurs involves precisely those causal factors that are unique to the future situation which can't be modeled by looking at past data. Entrepreneurs can only make educated guesses about the future that are informed only in part by past data, not rigorous predictions.
In 2035, Sports Predict scores influence hiring decisions at top firms worldwide. Surprisingly, Jane Street hires the single worst predictor on the website; they figure that such a track record could only result from someone who was consistently winning in real-money markets, and posting the opposite of his bets on Sports Predict as a hedge.
haha. Or maybe the other way....
If SportsPredict is supposed to uncover hidden genius, how do you stop it from just rewarding people who already have the most time, data access, or betting experience?
Data access is surprisingly wide and cheap for sports (thanks to the passionate fan bases). Now, with LLMs, the ability to do data analysis at a high level is also much more evenly distributed. It is probably fair to say the field is still not perfectly level, but I think it is more level here than in most other possible ways of measuring talent.
The Bosnian medal winner's name is Ervin Macic. If you search for "Ervin Macic Oxford" you'll find that he is now a student at Oxford.
Widespread sports betting doesn't seem to have improved epistemic humility much (sadly), and it also comes with some pretty negative side effects (Zvi has a good post about this, if I'm remembering correctly).
Also, your idea for SportsPredict seems like a narrower Metaculus or Manifold, and I'm not sure why someone would choose it over those websites (which both offer predictions on sports matches anyway).
If they're particularly interested in sports then it seems like they'd use whatever edge they have to profit on traditional sports betting sites.
This is certainly correct: "Governments subsidize chips and data centers while ignoring the scarcest resource of all, brains." Governments also regulate and tax the hell out of gamblers and of the companies providing platforms for gambling in any form - lotteries, sports betting, online casinos, etc. This is why prediction markets in the US, if not banned outright, will evolve into the same type of government-regulated platform operated by private companies, just like any other online gaming/gambling platform. In fact, many existing gambling companies such as FanDuel and DraftKings are either already entering the prediction-market space, contemplating it, or buying earlier entrants. But good luck, Mr Kuhn!
This is my prediction anyways ;-)
Sports Predict is not a gambling platform.
We hope and plan to monetize by having some percentage of our customers pay for a premium membership (as on chess.com), while allowing most players to build a track record at zero financial cost or risk to them.
This Patreon-type model is one monetization path for us. Another is working with brands that may offer small prizes in order to build player engagement with their brand and pay us to help strengthen that engagement. Finally, talent-search partnerships with hedge funds, AI firms, and crypto companies are yet another possible path.
Just as chess.com is legal and available in (I believe) every country because it involves no gambling, we hope and believe we can scale internationally at low legal and regulatory cost, given that we are not a gambling site.
I heartily agree that prediction is the essence of intelligence. Researchers such as Anil Seth have clearly demonstrated that even the brain's moment-to-moment perceptions of external reality are predictions that undergo constant revision from sensory input. The world is a complicated system, and this makes longer-range prediction more difficult.
I consider my late colleague, Wim Hofstee, to be one of the most perceptive psychologists when it came to the issue of prediction. He once documented how the averaged predictions of a large number of individuals regularly exceeded the predictions of experts. Something to think about. Here is an article written as a tribute to his work by one of his Dutch colleagues, https://www.leidenpsychologyblog.nl/articles/wim-hofstee-1936-2021-and-the-theory-of-psychological-relativity
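A minimal simulation of that averaging effect, under a purely assumed noise model for the individual forecasts, shows why the crowd mean tends to beat a representative individual:

```python
# Illustration of the averaging effect Hofstee documented: the mean of
# many noisy forecasts tends to have lower error than a typical
# individual forecast. The Gaussian noise model is an assumption made
# only for this illustration.
import random

random.seed(0)
truth = 0.70                      # true probability of some event
n_forecasters, n_trials = 50, 10_000

individual_err, crowd_err = 0.0, 0.0
for _ in range(n_trials):
    forecasts = [min(1.0, max(0.0, random.gauss(truth, 0.15)))
                 for _ in range(n_forecasters)]
    crowd = sum(forecasts) / n_forecasters
    individual_err += abs(forecasts[0] - truth)  # one representative individual
    crowd_err += abs(crowd - truth)

print(f"avg individual error: {individual_err / n_trials:.3f}")
print(f"avg crowd error:      {crowd_err / n_trials:.3f}")
```

With independent errors, averaging shrinks the noise roughly by the square root of the number of forecasters, which is one statistical reading of why the averaged predictions can regularly beat any single expert.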