Forecast Lab

Projected Odds

Percentage chances for nominations and wins at the 99th Academy Awards, estimated from precursor momentum and historical-style weighting.

| Film | Nomination % | Win % | Δ |
|---|---|---|---|
Edit each contender's expected precursor profile and context.
Films ranked by expected nominations and wins across all categories.
| Film | Nominations | Wins |
|---|---|---|
Films competing across multiple tracked categories (Picture, Director, Actor, Actress, Supporting Actor/Actress). Joint probabilities use historical co-win rates from 25 Oscar ceremonies (1999–2023).
| Film | Exp. Wins | P(≥1 Win) | P(Pic+Dir) | Categories |
|---|---|---|---|---|
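The first two aggregate columns follow directly from per-category win probabilities. A minimal sketch under an independence assumption between categories (the co-win adjustment mentioned above for pairs like Picture+Director is not modeled here); the film and its probabilities are hypothetical:

```python
from math import prod

def expected_wins(win_probs):
    """Expected number of wins: the sum of per-category win probabilities."""
    return sum(win_probs.values())

def prob_at_least_one_win(win_probs):
    """P(>=1 win) = 1 - P(no wins), treating categories as independent."""
    return 1.0 - prod(1.0 - p for p in win_probs.values())

# Hypothetical per-category win probabilities for one contender.
film = {"Picture": 0.40, "Director": 0.35, "Actress": 0.10}
exp_wins = expected_wins(film)          # 0.40 + 0.35 + 0.10
p_any = prob_at_least_one_win(film)     # 1 - 0.60 * 0.65 * 0.90
```

Independence overstates P(≥1 win) slightly when wins are positively correlated, which is why the joint column P(Pic+Dir) uses historical co-win rates instead.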
Retrospective accuracy across 25 Oscar cycles (72nd–96th ceremonies). Default weights: Precursor 58%, Historical 30%, Buzz 12%.
| Category | Nom. Accuracy | Winner Accuracy | Nom. Brier | Win. Brier |
|---|---|---|---|---|
| Year | Top Pick | Actual Winner | Nom. Acc. | Correct? |
|---|---|---|---|---|
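The default weights quoted above (Precursor 58%, Historical 30%, Buzz 12%) imply a simple convex blend of component scores. A sketch assuming each component is already on a 0–1 scale; the component names and input values are illustrative:

```python
# Only the weights come from the text; the 0-1 scaling and the component
# names are assumptions for illustration.
DEFAULT_WEIGHTS = {"precursor": 0.58, "historical": 0.30, "buzz": 0.12}

def blended_score(components, weights=DEFAULT_WEIGHTS):
    """Convex combination of component scores (weights sum to 1)."""
    return sum(weights[k] * components[k] for k in weights)

score = blended_score({"precursor": 0.9, "historical": 0.6, "buzz": 0.4})
```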
Murphy (1973) decomposition of the winner-prediction Brier Score into three interpretable components: Reliability (calibration error — lower is better), Resolution (discrimination power — higher is better), and Uncertainty (irreducible base-rate variance). BS = Reliability − Resolution + Uncertainty.
| Category | Brier Score | Reliability | Resolution | Uncertainty | Skill Score |
|---|---|---|---|---|---|
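The decomposition is straightforward to compute by binning forecasts on predicted probability; the identity BS = Reliability − Resolution + Uncertainty holds exactly when forecasts within a bin are identical. A plain-Python sketch; the one-decimal binning granularity is an assumption:

```python
from collections import defaultdict

def murphy_decomposition(forecasts, outcomes, ndigits=1):
    """Split the Brier score into (reliability, resolution, uncertainty)."""
    n = len(forecasts)
    bins = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        bins[round(f, ndigits)].append((f, o))
    base_rate = sum(outcomes) / n
    reliability = resolution = 0.0
    for pairs in bins.values():
        f_bar = sum(f for f, _ in pairs) / len(pairs)  # mean forecast in bin
        o_bar = sum(o for _, o in pairs) / len(pairs)  # observed rate in bin
        reliability += len(pairs) * (f_bar - o_bar) ** 2
        resolution += len(pairs) * (o_bar - base_rate) ** 2
    uncertainty = base_rate * (1 - base_rate)
    return reliability / n, resolution / n, uncertainty

def brier_skill_score(brier, uncertainty):
    """Skill relative to always forecasting the base rate (1 = perfect)."""
    return 1.0 - brier / uncertainty
```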
Permutation Feature Importance: how much winner accuracy drops when each feature's values are randomly shuffled within each category (N=200 replicates). A larger drop means the feature is more critical to the model's predictions.
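The shuffling procedure above can be sketched as follows. The model interface (a predict function over feature dicts) and the feature names in the test data are assumptions, not the app's actual API:

```python
import random

def permutation_importance(predict, X, y, feature, n_repeats=200, seed=0):
    """Mean drop in accuracy after shuffling one feature's column."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)

    base = accuracy(X)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        rng.shuffle(col)  # break the feature-target link, keep the marginal
        shuffled = [{**row, feature: v} for row, v in zip(X, col)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / n_repeats
```

In the per-category version described above, the shuffle would be applied within each category's rows rather than across the whole table.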
Paired statistical comparison of any two weight presets over 25 years of Oscar history. Tests: McNemar's test for winner accuracy, plus a paired t-test and a Wilcoxon signed-rank test for Brier score, all at α = 0.05.
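Two of the three tests are easy to sketch with the standard library: an exact McNemar test on the discordant pairs and the paired t statistic on per-year Brier differences. The Wilcoxon signed-rank step is omitted for brevity (in practice `scipy.stats.wilcoxon` covers it):

```python
from math import comb, sqrt
from statistics import mean, stdev

def mcnemar_exact(correct_a, correct_b):
    """Two-sided exact McNemar p-value from paired correctness flags."""
    n01 = sum(1 for a, b in zip(correct_a, correct_b) if a and not b)
    n10 = sum(1 for a, b in zip(correct_a, correct_b) if b and not a)
    n = n01 + n10
    if n == 0:
        return 1.0  # no discordant years: presets are indistinguishable
    tail = sum(comb(n, k) for k in range(min(n01, n10) + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)

def paired_t_stat(brier_a, brier_b):
    """t statistic of the per-year Brier differences (df = n - 1)."""
    d = [a - b for a, b in zip(brier_a, brier_b)]
    return mean(d) / (stdev(d) / sqrt(len(d)))
```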
Nomination model evaluated on all 1,110 (year, category, film) triples from 25 Oscar ceremonies. Curves are computed globally and per-category. AUC-ROC baseline (random classifier) = 0.5; AUC-PR baseline = prevalence.
| Category | n | Positives | Prevalence | AUC-ROC | AUC-PR |
|---|---|---|---|---|---|
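Both curve summaries reduce to short rank-based computations: AUC-ROC is the probability that a randomly chosen nominee outscores a randomly chosen non-nominee (ties count half), and AUC-PR here is step-wise average precision. A stdlib sketch:

```python
def auc_roc(scores, labels):
    """P(random positive outranks random negative); ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def auc_pr(scores, labels):
    """Average precision: precision summed at each true positive's rank."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n_pos, tp, ap = sum(labels), 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            tp += 1
            ap += (tp / rank) / n_pos
    return ap
```

For a random scorer, `auc_roc` tends to 0.5 and `auc_pr` to the positive prevalence, matching the baselines stated above.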
Year-by-year winner Brier Score and accuracy across 25 Oscar ceremonies, with 3-year rolling averages and an OLS trend line. Lower Brier = better calibration.
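The 3-year rolling average and the OLS trend slope can each be sketched in a few lines; indexing years as 0..n−1 for the regression is an assumption of this sketch:

```python
def rolling_mean(values, window=3):
    """Trailing rolling average; entries before a full window are None."""
    return [None if i < window - 1
            else sum(values[i - window + 1 : i + 1]) / window
            for i in range(len(values))]

def ols_slope(values):
    """Least-squares trend slope of values against year index 0..n-1."""
    n = len(values)
    x_bar = (n - 1) / 2
    y_bar = sum(values) / n
    num = sum((i - x_bar) * (v - y_bar) for i, v in enumerate(values))
    den = sum((i - x_bar) ** 2 for i in range(n))
    return num / den
```

A negative slope on the yearly Brier series would indicate calibration improving over the 25 ceremonies.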