How do you compute the global precision, given the precision calculated for each class (after classification)? Is it just the average of the per-class precisions? When I use Weka, the global precision is not computed as a plain average but as a "weighted average", and I don't know how the latter is computed (weighted by what?). See the example below of the results we get with Weka; can you please tell me how the "Weighted Avg." (last line) is computed in this result?

=== Detailed Accuracy By Class ===

           TP Rate   FP Rate   Precision   Recall  F-Measure   ROC Area  Class
             0.889     0.005      0.727     0.889     0.8        0.995    APP05179028
             0.991     0.032      0.866     0.991     0.924      0.999    APP05179007
             0.633     0.005      0.864     0.633     0.731      0.989    APP05179012
             0         0          0         0         0          0.954    APP05179014
             0.972     0.013      0.936     0.972     0.954      0.998    APP05179010
             0.957     0.002      0.957     0.957     0.957      0.999    APP05179009
             0         0          0         0         0          ?        APP05179018
             0.6       0          1         0.6       0.75       0.988    APP05179027
             0         0          0         0         0          ?        APP05179023
             1         0.002      0.75      1         0.857      1        APP05179025
             0.8       0          1         0.8       0.889      0.991    APP05179016
             0         0          0         0         0          ?        APP05179021
             0.889     0          1         0.889     0.941      1        APP05179029
             0.918     0          1         0.918     0.957      1        APP05179020
             0         0          0         0         0          0.958    APP05179022
             1         0.002      0.963     1         0.981      0.999    APP05179015
             0.846     0          1         0.846     0.917      0.998    APP05179008
             0.992     0.002      0.992     0.992     0.992      1        APP05179011
             0.947     0.002      0.973     0.947     0.96       1        APP05179013
             1         0          1         1         1          1        APP05179026
             0.857     0          1         0.857     0.923      1        APP05179019
             0.714     0.011      0.417     0.714     0.526      0.952    APP05179017
             0.969     0          1         0.969     0.984      1        APP05179030
             1         0          1         1         1          1        APP05179024
             0.935     0.008      0.939     0.935     0.933      0.998    **Weighted Avg.**

=== Confusion Matrix ===

   a   b   c   d   e   f   g   h   i   j   k   l   m   n   o   p   q   r   s   t   u   v   w   x 
   8   0   1   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 |   a = APP05179028
   0 110   0   0   1   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 |   b = APP05179007
   0   9  19   0   2   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 |   c = APP05179012
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   1   0   0   1   0   0   0   0   0 |   d = APP05179014
   0   3   0   0 103   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 |   e = APP05179010
   0   1   0   0   0  22   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 |   f = APP05179009
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 |   g = APP05179018
   3   0   0   0   0   0   0   6   0   1   0   0   0   0   0   0   0   0   0   0   0   0   0   0 |   h = APP05179027
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 |   i = APP05179023
   0   0   0   0   0   0   0   0   0   3   0   0   0   0   0   0   0   0   0   0   0   0   0   0 |   j = APP05179025
   0   1   0   0   0   0   0   0   0   0   4   0   0   0   0   0   0   0   0   0   0   0   0   0 |   k = APP05179016
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 |   l = APP05179021
   0   0   0   0   0   0   0   0   0   0   0   0  32   0   0   0   0   0   0   0   0   4   0   0 |   m = APP05179029
   0   1   0   0   0   1   0   0   0   0   0   0   0  45   0   0   0   0   0   0   0   2   0   0 |   n = APP05179020
   0   0   1   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 |   o = APP05179022
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0  26   0   0   0   0   0   0   0   0 |   p = APP05179015
   0   0   1   0   0   0   0   0   0   0   0   0   0   0   0   0  11   0   0   0   0   1   0   0 |   q = APP05179008
   0   0   0   0   1   0   0   0   0   0   0   0   0   0   0   0   0 126   0   0   0   0   0   0 |   r = APP05179011
   0   1   0   0   1   0   0   0   0   0   0   0   0   0   0   0   0   0  36   0   0   0   0   0 |   s = APP05179013
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   3   0   0   0   0 |   t = APP05179026
   0   1   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   6   0   0   0 |   u = APP05179019
   0   0   0   0   2   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   5   0   0 |   v = APP05179017
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   1   0   0   0   0  31   0 |   w = APP05179030
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0  12 |   x = APP05179024

asked Jan 26 '13 at 19:08 by shn (edited Jan 27 '13 at 05:01)

One Answer:

The weighted average is computed by weighting each class's measure (TP rate, precision, recall, ...) by the proportion of instances belonging to that class. Taking the plain average can sometimes be misleading. For instance, if class 1 has 100 instances and you achieve a recall of 30%, and class 2 has 1 instance and you achieve a recall of 100% (you predicted its only instance correctly), then the plain average (65%) inflates the recall score because of the single instance you predicted correctly. The weighted average gives (100 × 0.30 + 1 × 1.00) / 101 ≈ 30.7%, which is a much more realistic measure of the classifier's performance.
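
For concreteness, here is a small Python sketch of that calculation for the example above (the helper name weighted_average is just illustrative; Weka does the equivalent computation internally):

    # Support-weighted average of a per-class metric (recall, precision, ...):
    # each class's value is weighted by the number of instances in that class.
    def weighted_average(values, supports):
        return sum(v * n for v, n in zip(values, supports)) / sum(supports)

    recalls  = [0.30, 1.00]   # per-class recall: class 1, class 2
    supports = [100, 1]       # number of instances in each class

    print(sum(recalls) / len(recalls))          # plain average    -> 0.65
    print(weighted_average(recalls, supports))  # weighted average -> ~0.307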

NB: I gave an example with two classes, but in fact the weighted average makes sense only when you have more than two classes. When you have only two classes, weighting does not make much sense; the measures should be computed relative to the minority class. In other words, you are interested in whether you are able to detect the minority class.

answered Jan 27 '13 at 05:47 by Martin SAVESKI

So in this case, the weighted average recall will be exactly the same as the recognition rate (accuracy), right?

(Jan 27 '13 at 06:47) shn
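
A quick check of that, assuming the usual single-label setting where each instance belongs to exactly one class: each class's recall is TP_c / n_c, so weighting it by n_c / N and summing gives (sum of TP_c) / N, which is exactly the overall accuracy. In the Weka output above, the diagonal of the confusion matrix sums to 608 correct predictions out of 650 instances, and 608/650 ≈ 0.935 matches the reported Weighted Avg. TP Rate (recall). A toy Python sketch of the same identity (the confusion matrix below is made up for illustration, not taken from the output):

    # Support-weighted recall equals overall accuracy in single-label classification.
    # Toy confusion matrix: rows = true class, columns = predicted class.
    cm = [
        [30,  5, 5],   # class A: 40 instances, 30 classified correctly
        [ 2, 18, 0],   # class B: 20 instances, 18 classified correctly
        [ 1,  0, 9],   # class C: 10 instances,  9 classified correctly
    ]

    supports = [sum(row) for row in cm]                        # instances per true class
    recalls  = [cm[i][i] / supports[i] for i in range(len(cm))]

    weighted_recall = sum(r * n for r, n in zip(recalls, supports)) / sum(supports)
    accuracy        = sum(cm[i][i] for i in range(len(cm))) / sum(supports)

    print(weighted_recall, accuracy)   # both ~0.814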