[NSRCA-discussion] Scoring Process Question

Stuart Chale schale at optonline.net
Tue Jun 26 13:29:44 AKDT 2007


The main place where normalization helps is in making every round count.
Before normalization, especially at the Nats with multiple lines of judging,
the tough set of judges, even if more accurate, would often be most people's
throw-away round.

If you were the highest flier on the two tough judge lines, you lost two
1000-point scores.  It has happened in the past.

Stuart
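
Stuart's scenario is easy to put numbers on. A minimal sketch in Python, with
invented raw K-weighted totals and a single throw-out round assumed:

# One pilot's raw K-weighted round totals (invented numbers).  Rounds 2 and 3
# were flown on a tougher-judging line, so every raw total there is lower,
# even though this pilot actually topped round 2 (1760 was the best anyone
# managed on that line).
raw = {"rd1": 2100.0, "rd2": 1760.0, "rd3": 1780.0, "rd4": 2050.0}

# Best raw total in the field for each round (also invented), needed to normalize.
round_best = {"rd1": 2150.0, "rd2": 1760.0, "rd3": 1850.0, "rd4": 2180.0}

# Raw scoring with one throw-out: the tough-line round the pilot *won* is the
# one that gets dropped, simply because its raw numbers are lower.
kept_raw = sorted(raw.values())[1:]
print("raw total after drop:       ", sum(kept_raw))

# Normalized scoring: topping the tough round is still worth 1000, so the
# round that gets dropped is the pilot's genuinely weakest relative flight
# (rd4 here).
normalized = {rd: raw[rd] / round_best[rd] * 1000.0 for rd in raw}
kept_norm = sorted(normalized.values())[1:]
print("normalized total after drop:", round(sum(kept_norm), 1))

With two tough lines and two throw-outs, both of those wins would be
discarded, which is the two lost 1000-point scores Stuart describes.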

-----Original Message-----
From: nsrca-discussion-bounces at lists.nsrca.org
[mailto:nsrca-discussion-bounces at lists.nsrca.org] On Behalf Of Mark Atwood
Sent: Tuesday, June 26, 2007 5:15 PM
To: NSRCA Mailing List
Subject: Re: [NSRCA-discussion] Scoring Process Question

I'd have to respectfully disagree on the normalization point.  Normalization
is critical to making sure that one round is not "worth more" than another.

There are a zillion ways to show this by example if need be, but it's
necessary to equalize rounds across varying conditions, be it judging,
weather, or even the mechanical failure of a key pilot.

-M
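
To see the arithmetic behind Mark's point, here is a minimal sketch; the pilot
letters and raw totals are invented, and the only rule applied is the
scale-the-round's-best-raw-total-to-1000 normalization the thread is
discussing:

def normalize_round(raw_totals):
    """Scale one round so its best raw K-weighted total becomes 1000 points."""
    top = max(raw_totals.values())
    return {pilot: raw / top * 1000.0 for pilot, raw in raw_totals.items()}

# Two made-up rounds.  Round 2 was flown in wind (or in front of stingier
# judges), so every raw total there is about 20% lower.
round1 = {"A": 2150.0, "B": 2064.0, "C": 1935.0}
round2 = {"A": 1720.0, "B": 1651.2, "C": 1548.0}

print(normalize_round(round1))   # roughly A: 1000, B: 960, C: 900
print(normalize_round(round2))   # roughly A: 1000, B: 960, C: 900

# In a raw sum, round 1 would contribute about 25% more points than round 2
# even though the pilots flew equally well relative to each other; after
# normalization both rounds carry exactly the same weight.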


On 6/26/07 4:56 PM, "Fred Huber" <fhhuber at clearwire.net> wrote:

> Your analysis is correct.  We are even amplifying the significant digit
> error by multiplying a score from 0 to 10 by a K value THEN doing the 1000
> point normalization on the top score.
>
> If we were trying to send a rocket to the moon using these types of
> calculations... we wouldn't be sure of getting the ship into low earth
> orbit... or maybe we'd be sending it to Pluto.
>
> However, for comparing flying... as long as the top scorers are reasonably
> consistent, making the 1000 score worth about the same total K value each
> round... it will work pretty well.
>
> We could just eliminate the conversion to the 1000 basis and add up the
> K-factor-multiplied raw scores in a couple of contests as an error check...
> My bet is the contest results don't change.
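
Fred's cross-check is straightforward to script. A sketch under made-up round
totals (the helper names are mine, not anything from the actual scoring
software):

def standings(totals):
    """Pilot names ordered best-to-worst by total points."""
    return sorted(totals, key=totals.get, reverse=True)

def final_totals(rounds, normalize):
    """Sum per-round K-weighted raw totals, optionally normalized to 1000 per round."""
    finals = {}
    for rnd in rounds:
        top = max(rnd.values())
        for pilot, raw in rnd.items():
            pts = raw / top * 1000.0 if normalize else raw
            finals[pilot] = finals.get(pilot, 0.0) + pts
    return finals

# Made-up raw K-weighted round totals from one contest.
rounds = [
    {"A": 2150.0, "B": 2080.0, "C": 1990.0},
    {"A": 1700.0, "B": 1730.0, "C": 1610.0},   # lower-scoring round (tough judges / wind)
    {"A": 2060.0, "B": 2010.0, "C": 1980.0},
]

print(standings(final_totals(rounds, normalize=True)))    # normalized placings
print(standings(final_totals(rounds, normalize=False)))   # raw-sum placings

In this invented example the two orderings agree, which is Fred's bet; running
the same comparison on a few real contests would settle it.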

> ----- Original Message ----- 
> From: <glmiller3 at suddenlink.net>
> To: "NSRCA Mailing List" <nsrca-discussion at lists.nsrca.org>
> Sent: Tuesday, June 26, 2007 1:30 PM
> Subject: Re: [NSRCA-discussion] Scoring Process Question


> Mike,
>
> Take some time and read it with a glass of wine tonight <G>... My point is
> exactly that we are creating an ILLUSION of accuracy which is not
> statistically present.  If my statistics are correct, scores are only
> accurate to about 100 points of the 1000 point scale.  We are deciding
> most of our contests on the statistical "noise".
>
> I haven't proposed any change, I'm just asking for ideas... If I had a
> better solution, I'd offer it.  I think that you are right in that
> expanding the judges' score to more digits won't help because it is an
> inherently subjective number that can't be quantified more accurately than
> "about a half a point" on a ten point scale.
>
> George
>
>
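
A back-of-the-envelope check on George's "about 100 points" figure. The
schedule length, the typical raw score, and the half-point judging uncertainty
below are all assumptions of mine, and the noise in the normalization itself is
ignored:

import math

maneuvers = 17          # assumed schedule length
typical_score = 8.0     # assumed typical raw maneuver score
per_maneuver_err = 0.5  # "about a half a point" of judging uncertainty

rel_err = per_maneuver_err / typical_score        # roughly 6% per maneuver

# If every maneuver were misread in the same direction, the whole normalized
# round would shift by that same ~6%.
worst_case = rel_err * 1000

# If the errors are independent, they partly cancel: the total's relative
# error shrinks by the square root of the number of maneuvers.
independent = rel_err / math.sqrt(maneuvers) * 1000

print(f"same-direction errors:  about {worst_case:.0f} of 1000 points")
print(f"independent errors:     about {independent:.0f} of 1000 points")

Depending on how correlated you believe judging errors are, that puts the
per-round uncertainty somewhere between roughly 15 and 60 points of the 1000,
the same order of magnitude George is talking about.
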
> ---- Michael Wickizer <mwickizer at msn.com> wrote:
>> My head hurts after trying to read and follow that.
>>
>> However, it strikes me that you are trying to attach mathematical and
>> statistical validation to something that only has two numbers, each of
>> which contains a varying amount of subjectivity.  I am not sure that using
>> a 1000 point per maneuver system, or even greater, would make it more
>> valid; it would only be an illusion.
>>
>>
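
Michael's illusion point fits in a few lines, assuming AMA-style half-point
judging steps:

# A judge working in half-point steps can only hand out 21 distinct values.
half_point_scores = [i / 2 for i in range(21)]      # 0.0, 0.5, ..., 10.0

# Rescaling those onto a 1000-points-per-maneuver scale changes the labels,
# not the information content: there are still exactly 21 possible values.
rescaled = {score * 100 for score in half_point_scores}
print(len(half_point_scores), len(set(half_point_scores)), len(rescaled))   # 21 21 21
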
>> >From: <glmiller3 at suddenlink.net>
>> >Reply-To: NSRCA Mailing List <nsrca-discussion at lists.nsrca.org>
>> >To: NSRCA List <nsrca-discussion at lists.nsrca.org>
>> >Subject: [NSRCA-discussion] Scoring Process Question
>> >Date: Tue, 26 Jun 2007 12:50:48 -0500
>> >
>> >I'm going to open a can of worms here in hopes of coming up with a better
>> >system out of the discussion.  Perhaps this has been discussed before and
>> >I'm not aware of it.  Let me preface this by saying I am not a
>> >mathematician or statistician, but I have some familiarity with both
>> >subjects, and the following question has been growing in my mind for some
>> >time.
>> >
>> >It seems to me that we are judging our maneuvers with limited accuracy
>> >(within 1 point in FAI and X.5 points in AMA classes); we are then creating
>> >the ILLUSION of accuracy by multiplying that score by a K factor and then
>> >normalizing to a 1000 point scale.  Here is a fairly brief explanation of
>> >"Significant Digits" that I've copied from the web, which will introduce
>> >you to this thought if you haven't seen it before:
>> >
>> >****"SIGNIFICANT DIGITS
>> >
>> >The number of significant digits in an answer to a calculation will depend
>> >on the number of significant digits in the given data, as discussed in the
>> >rules below.  Approximate calculations (order-of-magnitude estimates)
>> >always result in answers with only one or two significant digits.
>> >
>> >When are Digits Significant?
>> >
>> >Non-zero digits are always significant.  Thus, 22 has two significant
>> >digits, and 22.3 has three significant digits.
>> >
>> >With zeroes, the situation is more complicated:
>> >
>> >Zeroes placed before other digits are not significant; 0.046 has two
>> >significant digits.
>> >Zeroes placed between other digits are always significant; 4009 kg has
>> >four significant digits.
>> >Zeroes placed after other digits but behind a decimal point are
>> >significant; 7.90 has three significant digits.
>> >Zeroes at the end of a number are significant only if they are behind a
>> >decimal point as in (c).  Otherwise, it is impossible to tell if they are
>> >significant.  For example, in the number 8200, it is not clear if the
>> >zeroes are significant or not.  The number of significant digits in 8200
>> >is at least two, but could be three or four.  To avoid uncertainty, use
>> >scientific notation to place significant zeroes behind a decimal point:
>> >8.200 x 10^3 has four significant digits
>> >8.20 x 10^3 has three significant digits
>> >8.2 x 10^3 has two significant digits
>> >
>> >Significant Digits in Multiplication, Division, Trig. functions, etc.
>> >
>> >In a calculation involving multiplication, division, trigonometric
>> >functions, etc., the number of significant digits in an answer should
>> >equal the least number of significant digits in any one of the numbers
>> >being multiplied, divided, etc.
>> >
>> >Thus in evaluating sin(kx), where k = 0.097 m^-1 (two significant digits)
>> >and x = 4.73 m (three significant digits), the answer should have two
>> >significant digits.
>> >
>> >Note that whole numbers have essentially an unlimited number of
>> >significant digits.  As an example, if a hair dryer uses 1.2 kW of power,
>> >then 2 identical hairdryers use 2.4 kW:
>> >
>> >1.2 kW {2 sig. dig.} X 2 {unlimited sig. dig.} = 2.4 kW {2 sig. dig.}
>> >"******
>> >
>> >My Point is this:
>> >
>> >I've seen many contests decided by less than 10 points on a scale of 4000
>> >which has been expanded from (at most) 2 significant digits.  As a matter
>> >of "statistics" I think that any separation of less than 100 points (two
>> >significant digits, i.e., 3X00 points) is "artificial accuracy".
>> >Unfortunately, I don't have any great ideas about how to improve upon the
>> >current system; I'm just pointing out what I think is a scientifically
>> >valid problem with it.
>> >
>> >I smile when I see round scores posted to ten thousandths of a point on a
>> >scale that has been expanded from two significant digit accuracy to a 1000
>> >point scale.  This turns a two significant digit answer into eight
>> >significant digits! (i.e., 1234.5678)  I think that scientifically, the
>> >scores would be more accurately posted in scientific notation as
>> >x.x * 10^2.  Most of the contests that I've been to this year have been
>> >decided essentially by random statistical "noise" rather than actual
>> >scoring decisions.
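
George's noise claim can be poked at with a small simulation. Everything in it
is assumed: seventeen maneuvers, invented K-factors, a "true" score of 8 on
every maneuver, judging error drawn uniformly within half a point, and only two
pilots; it is a sketch of the idea, not the real scoring pipeline:

import random

K = [2, 3, 4, 2, 5, 3, 4, 2, 3, 5, 4, 3, 2, 4, 3, 5, 2]   # invented K-factors

def judged_round(true_scores):
    """Apply half-point judging noise, round to the half point, and K-weight."""
    total = 0.0
    for true, k in zip(true_scores, K):
        seen = min(10.0, max(0.0, true + random.uniform(-0.5, 0.5)))
        seen = round(seen * 2) / 2          # judges work in half-point steps
        total += seen * k
    return total

def simulate(trials=10_000):
    """Two pilots of identical 'true' ability: how far apart does noise put them?"""
    true_flight = [8.0] * len(K)
    gaps = []
    for _ in range(trials):
        raw_a = judged_round(true_flight)
        raw_b = judged_round(true_flight)
        top = max(raw_a, raw_b)             # normalize the round to 1000
        gaps.append(abs(raw_a - raw_b) / top * 1000)
    gaps.sort()
    print(f"median gap from noise alone: {gaps[len(gaps) // 2]:.1f} points of 1000")
    print(f"95th-percentile gap:         {gaps[int(0.95 * trials)]:.1f} points of 1000")

simulate()

The per-round gaps it reports from noise alone are typically several points of
1000, so a final margin of under 10 points on a 4000-point scale is well within
what noise can produce, which is George's point.
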
>> >
>> >Has anyone ever thought/talked about this before?
>> >
>> >Let me add that, despite what I think are statistically invalid methods,
>> >in most cases the system seems to work pretty well.  In general the
>> >superior pilots get enough better scores to overcome the "noise", but it
>> >sure would be nice to come up with a more mathematically valid solution,
>> >IMO.
>> >
>> >George
>> >


_______________________________________________
NSRCA-discussion mailing list
NSRCA-discussion at lists.nsrca.org
http://lists.nsrca.org/mailman/listinfo/nsrca-discussion



