
proposed new experiment for CASP9

PostPosted: Wed Nov 05, 2008 10:50 am
by kevin_karplus
Assuming we do a CASP9, I'd like to propose a new experiment that is a slight variant of the current server and MQA experiments.
I'd like a category for metaservers that modify the models they start from, with a deadline 3 days after the server tarball is released.

Metaservers that don't modify their models should simply enter the MQA category, but there is currently no convenient category for metaservers that do modify models.
They get treated as primary servers, which is somewhat misleading (and it breaks the independence between servers that consensus-based MQA methods rely on).
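
To make the independence point concrete, here is a minimal sketch of a 3D-Jury-style consensus MQA score, assuming some pairwise_similarity function (e.g., a wrapper around a GDT_TS or TM-score computation) is available; the names are illustrative, not taken from any actual CASP pipeline:

    def consensus_score(model, pool, pairwise_similarity):
        # Score a model by its average structural similarity to every
        # other model in the pool (3D-Jury-style consensus scoring).
        others = [m for m in pool if m is not model]
        if not others:
            return 0.0
        return sum(pairwise_similarity(model, m) for m in others) / len(others)

If a metaserver resubmits a primary server's model essentially unchanged, that structure is counted twice in every other model's average, biasing the consensus toward it.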

Providing a primary server/metaserver split in the submission process would allow development of better metaserver methods, without the overhead of having to communicate with lots of primary servers or duplicate them locally.
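
For concreteness, a rough sketch of what a metaserver in the proposed category might do with the released tarball: pool the server models, rank them with its own quality score, and refine the top few before the 3-day deadline. Here score_model and refine_model are hypothetical placeholders for whatever methods the metaserver actually uses:

    import tarfile

    def run_metaserver(tarball_path, score_model, refine_model, n_submit=5):
        # Pool all model files from the released server tarball.
        models = []
        with tarfile.open(tarball_path) as tar:
            for member in tar.getmembers():
                if member.isfile():
                    text = tar.extractfile(member).read().decode("utf-8", "replace")
                    models.append(text)
        # Rank by the metaserver's own quality score, then modify the best few.
        ranked = sorted(models, key=score_model, reverse=True)
        return [refine_model(m) for m in ranked[:n_submit]]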

Re: proposed new experiment for CASP9

PostPosted: Tue Nov 11, 2008 9:24 am
by test
This sounds interesting, but the issue is how to identify whether a server is a metaserver or a primary server.

Re: proposed new experiment for CASP9

PostPosted: Thu Nov 27, 2008 4:52 am
by mcguffin
Good idea: like a QA++ category, i.e. QA with some improvement. It would put meta-servers on a more level playing field, as they would all have access to the same models.

However, from a user's perspective it is still necessary to assess meta-servers against stand-alone servers, so that users can judge the advantage gained from submitting targets within a restricted time frame. A server's efficiency at automatically collecting, pooling, and analysing data should be tested, as well as its overall quality score.

I agree it is difficult to identify meta-servers. What about in-house meta-servers? Pretty much every server runs somebody else's method at some point. Is there anyone who doesn't use PSI-BLAST, PSIPRED, HHsearch, or Modeller at some stage in order to produce a prediction? By that definition, couldn't most current servers be classed as meta-servers, albeit in-house ones?