
Re: Evaluation of CASP9 Quality Assessment

Posted: Sun Dec 05, 2010 8:39 pm
by arneelof
Björn has just calculated the GDT_TS plot for top-ranked models, and as expected the correlation is really bad for the top 10% of models in the per-target evaluation.

Hopefully I can show the plot tomorrow.
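For concreteness, this is roughly what "correlation for the top 10% per target" means; a minimal sketch, not Björn's actual script, assuming hypothetical dicts pred_scores and true_gdt that map each target to parallel arrays of predicted QA score and true GDT_TS per model:

import numpy as np
from scipy.stats import pearsonr

def top_fraction_pearson(pred_scores, true_gdt, fraction=0.10):
    """Mean per-target Pearson r, computed only over the models the predictor ranks in its top fraction."""
    per_target = []
    for target, pred in pred_scores.items():
        pred = np.asarray(pred, float)
        true = np.asarray(true_gdt[target], float)
        k = max(2, int(round(len(pred) * fraction)))   # need at least two points for a correlation
        top = np.argsort(pred)[::-1][:k]               # indices of the predictor's top-ranked models
        per_target.append(pearsonr(pred[top], true[top])[0])
    return float(np.mean(per_target))

Restricting to the top-ranked models removes most of the score range, so the correlation typically drops sharply compared to the full-set value.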

Arne

Re: Evaluation of CASP9 Quality Assessment

Posted: Tue Dec 07, 2010 12:16 am
by terashig
The same "multi-model" problems appear in the official CASP results as well, especially for T0543! (see http://predictioncenter.org/casp9/qa_analysis.cgi)

Re: Evaluation of CASP9 Quality Assessment

Posted: Tue Dec 14, 2010 8:00 am
by M.Pawlowski
Hi,
I've just made a simple comparison between a benchmark based on all models and one restricted to models that are not multi-segment. Here are the two lists of the 10 best MQAPs according to average Pearson correlation. The test was done on single-domain CASP9 targets.
http://iimcb.genesilico.pl/mp/qa/casp9/ ... i_mod.html

As can be seen, there are some slight differences in the predictor rankings. However, the next benchmarks I am going to present will be based ONLY on the non-multi-segment dataset.
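As a rough illustration of the comparison (not the script behind the page above), assuming a pandas DataFrame with hypothetical columns 'target', 'predicted', 'gdt_ts' and a boolean 'multi_segment' flag:

import pandas as pd
from scipy.stats import pearsonr

def avg_per_target_pearson(df):
    """Average of per-target Pearson correlations between predicted score and GDT_TS."""
    rs = [pearsonr(g['predicted'], g['gdt_ts'])[0]
          for _, g in df.groupby('target') if len(g) > 2]
    return sum(rs) / len(rs)

# r_all    = avg_per_target_pearson(df)                        # all models
# r_single = avg_per_target_pearson(df[~df['multi_segment']])  # multi-segment models excluded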

Re: Evaluation of CASP9 Quality Assessment

Posted: Wed Jan 19, 2011 1:16 am
by terashig
We have made the CASP9 QA results available in a graphical, sortable form:

http://www.pharm.kitasato-u.ac.jp/bmd/C ... a_top.html

For 5 categories:
All targets (116)
Single-domain targets only (91)
Multi-domain targets only (25)
TBM targets only (94)
FM targets only (22)

And for 5 evaluation terms (see the sketch after the list):
Pearson avg
Pearson overall
Kendall avg
Kendall overall
Avg loss
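
For reference, a sketch of how these five terms are commonly computed in QA assessment (per-target "avg" vs. pooled "overall" correlations, and the GDT_TS loss of the model ranked first); the data layout is hypothetical and the exact definitions used on the page may differ:

import numpy as np
from scipy.stats import pearsonr, kendalltau

def qa_summary(per_target):
    """per_target: dict target -> (predicted_scores, true_gdt_ts), two parallel 1-D arrays."""
    p_avg, k_avg, losses, all_pred, all_true = [], [], [], [], []
    for pred, true in per_target.values():
        pred, true = np.asarray(pred, float), np.asarray(true, float)
        p_avg.append(pearsonr(pred, true)[0])
        k_avg.append(kendalltau(pred, true)[0])
        losses.append(true.max() - true[np.argmax(pred)])  # GDT_TS loss of the top-ranked model
        all_pred.extend(pred)
        all_true.extend(true)
    return {
        'pearson_avg':     float(np.mean(p_avg)),            # mean of per-target correlations
        'pearson_overall': pearsonr(all_pred, all_true)[0],  # one correlation over all pooled pairs
        'kendall_avg':     float(np.mean(k_avg)),
        'kendall_overall': kendalltau(all_pred, all_true)[0],
        'avg_loss':        float(np.mean(losses)),
    }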

Genki Terashi