arc_challenge: by models



std predicted by accuracy

The typical standard deviation between pairs of models on this dataset, as a function of absolute accuracy.

Differences vs inconsistencies

Here is a more informative figure showing the source information used to compute the p-values. Any model pair to the right of a parabola is statistically different at the corresponding significance level. The plot shows a fairly sharp transition: since there are no model pairs with a small #A_win + #B_win, significant results at a small |#A_win - #B_win| are ruled out. For more explanation, see the doc.
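
To make the boundary concrete, here is a small sketch (illustrative only; the helper name min_significant_gap is ours, not taken from the code behind the figure) that computes, for a given number of non-tied examples n = #A_win + #B_win, the smallest gap |#A_win - #B_win| that reaches significance under the sign test. Sweeping n traces out the parabola-shaped boundary: the required gap grows roughly like sqrt(n).

```python
from scipy.stats import binomtest

def min_significant_gap(n_decisive, alpha=0.05):
    """Smallest |#A_win - #B_win| that is significant at level `alpha`
    when #A_win + #B_win == n_decisive (two-sided sign test).

    Illustrative sketch; not the code used to draw the figure.
    """
    # The gap must have the same parity as n_decisive.
    for gap in range(n_decisive % 2, n_decisive + 1, 2):
        a_wins = (n_decisive + gap) // 2
        p = binomtest(a_wins, n_decisive, p=0.5, alternative="two-sided").pvalue
        if p < alpha:
            return gap
    return None  # no gap is significant (only possible for tiny n_decisive)

# The boundary grows roughly like sqrt(n_decisive):
for n in (25, 100, 400):
    print(n, min_significant_gap(n))
```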

p-values for model pairs

The null hypothesis is that models A and B each have a 1/2 chance of winning whenever they differ; ties are ignored. The p-value is the probability, under the null hypothesis, of observing a difference at least as extreme as the one actually observed. For all pairs of models, the significance level depends mainly on the accuracy difference, as shown here. Hover over each model pair for detailed information.
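
Concretely, the p-value for a model pair can be computed with a two-sided binomial (sign) test over the non-tied examples. The sketch below assumes per-example 0/1 correctness vectors for the two models; sign_test_p_value is an illustrative name, not the original implementation.

```python
from scipy.stats import binomtest

def sign_test_p_value(a_correct, b_correct):
    """Two-sided sign-test p-value for the null 'A and B are equally good'.

    a_correct, b_correct: per-example 0/1 correctness for models A and B.
    Ties (both right or both wrong) are ignored; under the null hypothesis
    each remaining example is a fair coin flip between A and B.
    """
    a_wins = sum(1 for a, b in zip(a_correct, b_correct) if a and not b)
    b_wins = sum(1 for a, b in zip(a_correct, b_correct) if b and not a)
    n = a_wins + b_wins
    if n == 0:
        return 1.0  # the two models never disagree
    return binomtest(a_wins, n, p=0.5, alternative="two-sided").pvalue
```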

Results table by model

We show three methods currently used for evaluating models: raw accuracy (used by benchmarks), average win-rate over all other models (used by BigCode), and Elo (Bradley-Terry coefficients, following Chatbot Arena). Average win-rate always correlates well with Elo. GPT-3.5 gets an Elo of 1000 when available; otherwise the average Elo is set to 1000. In the table, std (shown as std(E(A))) is the standard deviation due to drawing examples from a population; this is the dominant term. std_i (shown as E(std(A))) is the standard deviation due to drawing samples from the model on each example. std_total (shown as std(A)) is the total standard deviation, satisfying std_total^2 = std^2 + std_i^2. A minimal sketch of this decomposition is given after the table.

model                      pass1  std(E(A))  E(std(A))  std(A)  N    win_rate  elo
dbrx-base                  65.9   1.4        0          1.4     NaN  24.8      1.07e+03
Meta-Llama-3-70B           65     1.4        0          1.4     NaN  22.2      1.06e+03
Mixtral-8x22B-v0.1         61.9   1.4        0          1.4     NaN  19.5      1.05e+03
DeepSeek-V2                60.2   1.4        0          1.4     NaN  18.4      1.05e+03
Mixtral-8x7B-v0.1          60.2   1.4        0          1.4     NaN  18.2      1.05e+03
deepseek-llm-67b-base      57.3   1.4        0          1.4     NaN  16.2      1.04e+03
llama_65B                  55.2   1.5        0          1.5     NaN  14.6      1.03e+03
Qwen1.5-110B               55     1.5        0          1.5     NaN  15.7      1.03e+03
llama2_70B                 54.6   1.5        0          1.5     NaN  16        1.03e+03
falcon-40b                 54.4   1.5        0          1.5     NaN  14.3      1.03e+03
Mistral-7B-v0.1            54.2   1.5        0          1.5     NaN  14.4      1.02e+03
llama_33B                  53.8   1.5        0          1.5     NaN  14        1.02e+03
Meta-Llama-3-8B            53.6   1.5        0          1.5     NaN  14.2      1.02e+03
gemma-7b                   53.4   1.5        0          1.5     NaN  14.1      1.02e+03
Qwen1.5-72B                52.4   1.5        0          1.5     NaN  13.6      1.02e+03
llama2_13B                 50.2   1.5        0          1.5     NaN  14.3      1.01e+03
Qwen1.5-32B                50.1   1.5        0          1.5     NaN  13        1.01e+03
mpt-30b                    49.4   1.5        0          1.5     NaN  11.6      1.01e+03
llama_13B                  48.6   1.5        0          1.5     NaN  11        1e+03
deepseek-moe-16b-base      47.6   1.5        0          1.5     NaN  10.5      1e+03
Qwen1.5-14B                45.6   1.5        0          1.5     NaN  10.5      994
llama_07B                  44.9   1.5        0          1.5     NaN  9.41      992
deepseek-llm-7b-base       44.6   1.5        0          1.5     NaN  9.01      991
falcon-7b                  44.1   1.5        0          1.5     NaN  8.82      989
llama2_07B                 43.5   1.5        0          1.5     NaN  10.2      987
mpt-7b                     42.5   1.4        0          1.4     NaN  8.55      983
Qwen1.5-7B                 42.1   1.4        0          1.4     NaN  9.21      982
gemma-2b                   41.7   1.4        0          1.4     NaN  7.9       980
stablelm-base-alpha-7b-v2  40.7   1.4        0          1.4     NaN  7.41      977
stablelm-3b-4e1t           39.7   1.4        0          1.4     NaN  7.24      973
Qwen1.5-4B                 39.5   1.4        0          1.4     NaN  8.14      973
pythia-12b-deduped-v0      38.1   1.4        0          1.4     NaN  6.73      968
pythia-6.9b-deduped-v0     35.8   1.4        0          1.4     NaN  6.04      959
Qwen1.5-1.8B               34.3   1.4        0          1.4     NaN  6.18      954
pythia-2.8b-deduped        32.8   1.4        0          1.4     NaN  5.63      949
Qwen1.5-0.5B               29.4   1.3        0          1.3     NaN  4.66      936
pythia-1.4b-deduped-v0     27.9   1.3        0          1.3     NaN  4.91      931
pythia-1b-deduped          27.2   1.3        0          1.3     NaN  4.43      929
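
For reference, here is a minimal sketch of the decomposition behind the three uncertainty columns, assuming a matrix of per-example, per-sample 0/1 scores. The exact estimator used to produce the table may differ; with a single sample per example, E(std(A)) is 0, as in every row above.

```python
import numpy as np

def uncertainty_columns(scores):
    """scores: array of shape (n_examples, n_samples) with 0/1 correctness.

    Returns (pass1, std_E_A, E_std_A, std_A) in percent, mirroring the
    table columns. Minimal sketch under the stated assumptions.
    """
    n_examples, n_samples = scores.shape
    per_example_mean = scores.mean(axis=1)   # expectation over samples, per example
    pass1 = per_example_mean.mean()

    # std(E(A)): uncertainty from drawing the N examples from a population.
    std_E_A = per_example_mean.std(ddof=1) / np.sqrt(n_examples)

    # E(std(A)): uncertainty from sampling the model on each example;
    # zero when there is a single deterministic sample per example.
    if n_samples > 1:
        var_of_example_mean = scores.var(axis=1, ddof=1) / n_samples
        E_std_A = np.sqrt(var_of_example_mean.mean() / n_examples)
    else:
        E_std_A = 0.0

    # std(A): total standard deviation, std(A)^2 = std(E(A))^2 + E(std(A))^2.
    std_A = np.sqrt(std_E_A**2 + E_std_A**2)
    return 100 * pass1, 100 * std_E_A, 100 * E_std_A, 100 * std_A
```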