[NSRCA-discussion] Judging

Ron Van Putte vanputte at cox.net
Fri Oct 19 12:36:20 AKDT 2007


Earl Haury suggested that we should make it available on the NSRCA  
web site.  I think that's a good idea.  I'll look into how we can  
make it happen.

Don Ramsey just told me that he now has virtually all the scores for  
the 2006 (yes, 2006) Nats.  Rather than put up an incomplete data  
set, as soon as Don gets the numbers crunched for 2006 and factored  
into the judge evaluation program, we will get the data to the web  
site.

Ron Van Putte

On Oct 19, 2007, at 11:06 AM, vicenterc at comcast.net wrote:

> Ron,
>
> Where can we find it?
>
> Thanks,
>
> --
> Vicente "Vince" Bortone
>
> -------------- Original message --------------
> From: Ron Van Putte <vanputte at cox.net>
> Earl wrote that, "It would be nice to have some form of judge  
> ranking system other than the limited ranking done for nomination  
> of WC judges."  The fact is that the NSRCA judge ranking program  
> evaluates EVERY judge who judges in Nats Finals (and semifinals in  
> a Team Selection year), Masters Finals and in F3A Team Selections.  
> This data is available to anyone who would like to look at it.  
> Judge evaluation/ranking is not done for any event which has fewer  
> than five judges on the line.  This limits the events for which  
> evaluation/ranking is possible.
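>
> The program's actual math isn't reproduced here, but the general  
> idea - comparing each judge against the consensus of the rest of  
> the panel - can be sketched in a few lines of Python.  (A rough  
> illustration only; the function and data layout below are my own,  
> not the program's code.)
>
>     # Sketch: each judge's average absolute deviation from the
>     # consensus of the other judges.  Illustrative only - assumes
>     # scores[judge][pilot] holds a raw score for each judged flight.
>     def judge_deviation(scores):
>         judges = list(scores)
>         pilots = list(next(iter(scores.values())))
>         if len(judges) < 5:
>             raise ValueError("evaluation needs five judges on the line")
>         deviation = {}
>         for j in judges:
>             diffs = []
>             for p in pilots:
>                 # Consensus = mean of the *other* judges' scores, so
>                 # a judge isn't compared against him/herself.
>                 others = [scores[k][p] for k in judges if k != j]
>                 diffs.append(abs(scores[j][p] - sum(others) / len(others)))
>             deviation[j] = sum(diffs) / len(diffs)
>         return deviation  # lower = closer to the panel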
>
> Ron Van Putte
>
> On Oct 19, 2007, at 10:19 AM, Earl Haury wrote:
>
>> Some more thoughts on the variation between judges.
>>
>> The NSRCA Judge Cert Program is an excellent base for ensuring  
>> that everyone is on the same page regarding the rules as applied  
>> to judging.  The Program suggests practice judging sessions of  
>> actual flights with subsequent discussion of the scoring.  
>> Unfortunately, only a few Cert Classes actually do this.  (The  
>> logistics of practice scoring sessions are difficult, with most  
>> classes occurring in the off season.)  This is sort of analogous  
>> to studying a sequence in the rule book, but not flying it until  
>> in competition.  In many cases we are now relying on "on the job"  
>> training, and this isn't fair to the competitor.
>> We need to find a way to better train / calibrate judges outside  
>> of competition.  Flying sessions during Certification, where  
>> scores are discussed by maneuver within a peer group, would be a  
>> very good start.  Several flights are generally flown during WC  
>> judges practice and scores are discussed.  We've done this at our  
>> Team Selections in the past.  Unfortunately, we haven't  
>> incorporated this practice into the Nats.  We fly judge warm-up  
>> flights before the Nats finals, but these are not for judge  
>> calibration.  (At major events any such flying for judges practice  
>> requires flights by non-competitors, which adds to the logistics.)  
>> We do little to none of this at local meets.  The idea of  
>> pre-contest judging practice has merit.  Often the sun precludes  
>> using the entire box, but several pilots will practice at  
>> "off-box" angles and parts of sequences.  Why not judge these  
>> flights and discuss the scores and reasons for downgrades?  It's  
>> probably best not to make these scores / discussions available to  
>> the pilot in competition - that's better left for training events.
>>
>> It would be nice to have some form of judge ranking system other  
>> than the limited ranking done for nomination of WC judges.  
>> Unfortunately, this is difficult to define and operate.  The  
>> experience-based system used by the old USPJA was mostly without  
>> merit.  When volunteers for Team Selection judges numbered in the  
>> 30s, the program participants voted for the judges that were  
>> used.  That may work when a lot of reasonably qualified folks are  
>> available.  One thing is for sure: presently it's hard to find  
>> enough warm bodies to fill the judging chairs.
>>
>> TBL and other forms of massaging the scoring data are fine,  
>> useful, and often necessary.  However, they are post-processing  
>> exercises that mathematically minimize the effects of inaccuracies  
>> in the actual scoring.  It's much better to strive to ensure that  
>> the initial score is correct.
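>>
>> To illustrate the flavor of such post-processing - rescaling each  
>> judge's scores to a common mean and spread before totaling - here  
>> is a rough Python sketch.  (My own illustration, NOT the actual  
>> TBL algorithm, which is more involved.)
>>
>>     import statistics
>>
>>     # Rescale every judge's scores of the same flights to a common
>>     # mean and spread, so strict and lenient judges carry equal
>>     # weight.  scores[judge] -> list of raw scores, one per flight.
>>     def normalize(scores):
>>         pooled = [s for js in scores.values() for s in js]
>>         target_mean = statistics.mean(pooled)
>>         target_sd = statistics.pstdev(pooled)
>>         out = {}
>>         for judge, js in scores.items():
>>             m = statistics.mean(js)
>>             sd = statistics.pstdev(js) or 1.0  # guard flat scorers
>>             out[judge] = [target_mean + (s - m) * target_sd / sd
>>                           for s in js]
>>         return out
>>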
>> Let's look at some common forms of judging variations (I apologize  
>> in advance if I step on any toes).  I suggest that there are two  
>> categories: those that are wrong and intolerable vs. those that  
>> are differences of opinion.
>>
>> In the first category we find the judge who observes no defects in  
>> a maneuver and scores it an 8 so as to have more room if something  
>> else is more appealing.  Or the judge who sees a defect and scores  
>> a 10 because he gave the pilot the benefit of the doubt.  Or the  
>> judge who "overlooks" a major error because the rest of the flight  
>> is great.  Or the judge who overlooks excessive distance because  
>> that's where he/she flies.  Or the judge who fails to watch the  
>> maneuver from start to finish - including the exit.  Or the judge  
>> who downgrades maneuvers for having different roll rates or radii  
>> than he/she prefers.  Or the judge who recognizes he/she is more  
>> strict / lenient in a group of judges where scoring analysis will  
>> be applied and changes his/her practice.  Or the judge who simply  
>> "likes" one pilot more than another and ensures the favorite  
>> scores best.  There are other examples, but the best correction  
>> for this may be a cattle prod!  Anyone guilty of this needs to  
>> seriously consider their behavior!
>>
>> The second category is (thankfully) more prevalent.  Two judges  
>> observe a difference in radii - one deducts a point and the other  
>> two.  Likewise, line length before and after roll elements, or  
>> changes in roll rate, or heading, or angle, or distance, or?  
>> Given the difficulty of determining these criteria visually, there  
>> will always be some difference in judgment of the error magnitude.  
>> One judge will look tough (we rarely consider a judge easy -  
>> unless he/she's judging our competition), but may actually be the  
>> most accurate.  There will still be some difference in judges'  
>> scores, but scoring practice in training sessions would go a long  
>> way toward minimizing these differences.
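>>
>> To make the "tough but accurate" point concrete with toy numbers  
>> (my own illustration, reusing the normalize() sketch above):
>>
>>     # Judge B looks tough, but ranks the same three flights
>>     # identically to judge A; rescaling severity makes their
>>     # scores agree exactly.
>>     scores = {"A": [9.0, 8.0, 7.0], "B": [8.0, 6.0, 4.0]}
>>     print(normalize(scores))  # both come out as [9.0, 7.0, 5.0]
>>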
>> Earl
