Judge evaluation

Rcmaster199 at aol.com
Fri Oct 29 14:22:06 AKDT 2004


 
Jeff, a Judge Evaluation Program (an Excel-based methodology) was created about
a year ago, and judge rankings have been produced for major meets dating
back to 1999. Please go back to one of the KFactor issues from around November
or December of last year and look in Don Ramsey's column for a thorough
explanation of how it works. Don is the Judging Committee Chairman, and his
column is usually in the first few pages.
 
MattK
 
 
In a message dated 10/29/2004 1:15:36 PM Eastern Standard Time,  
jeff at snider.com writes:

Several kinds of judging problems have been raised here in the past
day.  I have too many thoughts on the various subjects to usefully
put in one message, and I've been unduly prolix this week, so I'll
limit myself as much as possible.

In my opinion, we have (or really ought to have) some method of
ranking judges.  Two solutions present themselves to my overactive
mind, one computational and one personal.

Computationally: Collect all the scores from every round at every
contest, and it's not difficult (using software written by math-minded
people) to see which judges consistently score the winners high
and losers low.  It's fuzzy at first, but after a year you get
fair results.  It's not a maneuver-by-maneuver comparison, but it
gives you an idea of which Sportsmen can judge well and which
Masters can't.
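
To make that concrete, here is a rough Python sketch of the kind
of thing I mean -- not any existing NSRCA program, and the data
layout is invented.  It compares each judge's scores in a round
against the pilots' final standings and keeps a running agreement
figure per judge:

    from collections import defaultdict
    from statistics import mean

    def rank_correlation(xs, ys):
        # Spearman-style agreement: +1 if the judge ordered the pilots
        # exactly like the final results, -1 if exactly reversed.
        # (Ties are ignored to keep the sketch short.)
        def ranks(vals):
            order = sorted(range(len(vals)), key=lambda i: vals[i],
                           reverse=True)
            pos = [0] * len(vals)
            for r, i in enumerate(order):
                pos[i] = r
            return pos
        n = len(xs)
        if n < 2:
            return 0.0
        rx, ry = ranks(xs), ranks(ys)
        d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
        return 1 - 6 * d2 / (n * (n * n - 1))

    def judge_agreement(rounds):
        # rounds: list of dicts like
        #   {"judge": "Smith",
        #    "scores": {pilot: that judge's round score},
        #    "final":  {pilot: pilot's final contest score}}
        # (a made-up layout -- whatever the CDs actually send in)
        per_judge = defaultdict(list)
        for rnd in rounds:
            pilots = list(rnd["scores"])
            xs = [rnd["scores"][p] for p in pilots]
            ys = [rnd["final"][p] for p in pilots]
            per_judge[rnd["judge"]].append(rank_correlation(xs, ys))
        # Higher average agreement means that judge's rounds tend to
        # track the eventual results.  Fuzzy for one contest, fairer
        # after a year of data.
        return sorted(((mean(v), j) for j, v in per_judge.items()),
                      reverse=True)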

Collecting all the numbers, processing them, and turning the results
into some kind of meaningful individual rank as a judge is the part
that takes a strong corporate will.  Think we could get every CD
all year to email in the complete round-by-round results?  Make it
a requirement for event sanctioning, and it will happen (most of
the time).

I am not advocating that, just pointing out it's feasible if the
NSRCA really wanted to do it.

The more individual, personal method: Create a judge ranking system
and allow high ranked judges to move low ranked judges up the
hierarchy.  The best judges are classified as 1 (one), the next as
2 (two), etc., until the lowest judges, who can pass a basic written
test on the rules, are 10 (ten).  The present NSRCA judge ranking
system will pick the top guys, the 1s, 2s, and 3s, etc., and give
a rank to everyone in that system at Nats.

At contests, pair up a high judge and a low judge in each round.
After comparing the two sets of scores for an entire round, the
high judge can recommend advancing the low judge up a notch in the
scale, and talk to the lower judge about the scores, etc., in that
round.  After a certain number of recommendations have accumulated,
the lower judge's classification improves.  Some rules to govern
the system would be necessary, like a judge can't recommend someone's
advancement to his own level, and a high ranked judge's recommendation
counts for more than a low ranked one.  Also, everyone in the country
who doesn't have a rank assigned at Nats has their rank go down by
one after it's over, to keep the ever-improving ranks in check.

It's really just about having a good judge talk to a less good judge
after each round and help him improve, and tracking who is doing
well and who needs more practice.
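
The bookkeeping for something like that would be simple.  A toy
Python sketch, with the promotion threshold and recommendation
weights made up purely for illustration:

    class Judge:
        def __init__(self, name, rank=10):
            self.name = name
            self.rank = rank      # 1 = best, 10 = passed the written test
            self.credit = 0.0     # accumulated recommendation weight

    PROMOTION_CREDIT = 3.0        # invented threshold, not a real rule

    def recommend(high, low):
        # A high ranked judge recommends advancing a lower ranked one.
        if high.rank >= low.rank - 1:
            return                # can't recommend someone to your own level
        # A better ranked recommender counts for more.
        low.credit += 1.0 + (10 - high.rank) / 10.0
        if low.credit >= PROMOTION_CREDIT:
            low.rank -= 1         # move up one notch
            low.credit = 0.0

    def after_nats(all_judges, ranked_at_nats):
        # Everyone who didn't get a rank assigned at Nats slips one,
        # to keep the ever-improving ranks in check.
        for j in all_judges:
            if j.name not in ranked_at_nats and j.rank < 10:
                j.rank += 1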

I am hoping I can work on my own judging skills next season.  Maybe
over the winter we can swap RealFlight recordings of ourselves
flying our pattern, judge each other and talk about why we gave the
scores we did.

- Jeff Snider
- jeff at snider.com
- Northern VA, NSRCA D2


 