Following ground-breaking developments in CASP13, CASP14 will present another round of exciting progress in the field. The conference will be virtual and will run from November 30 through December 4, 2020.
There will be a limited number of talks and discussions each day, totaling no more than 4 hours. The session times are 10:00AM-2:00PM (EST); 16:00-20:00 (CET). Poster breakouts and extra discussions will be arranged before and after the formal sessions.
We hope that the conference will generate ongoing interest group activities. There will be a follow-up in-person meeting next summer, Covid-19 permitting.
Register for the CASP14 conference.
Detailed description of the experiment
CASP (Critical Assessment of Structure Prediction) is a community-wide experiment to determine and advance the state of the art in modeling protein structure from amino acid sequence. Every two years, participants are invited to submit models for a set of proteins for which the experimental structures are not yet public. Independent assessors then compare the models with experiment. Assessments and results are published in a special issue of the journal PROTEINS. In the most recent CASP round, CASP13, nearly 100 groups from around the world submitted more than 57,000 models on 90 modeling targets (see Critical assessment of methods of protein structure prediction (CASP) - Round XIII).
CASP assesses many aspects of modeling, including the accuracy of protein topologies, atom coordinates, and multi-protein assemblies. The experiment also examines the extent to which models can answer questions of biological interest, and how different types of sparse or low-resolution experimental data can improve model accuracy.
CASP14 was planned to start in mid-April 2020, but due to the impact of COVID-19 the start was postponed to the second half of May. CASP14 will address the following questions:
- How similar are the models to the corresponding experimental structure?
- Are domain orientations, subunit interactions, and the protein interactions in complexes modeled correctly?
- How much more accurate are template-based models than those that can be obtained by simply copying the best template?
- How reliable are overall, residue, and atomic level error estimates?
- How much can current refinement methods improve the accuracy of models?
- How effective are approaches to predicting distances and contacts between protein residues?
- How well do the models help answer relevant biological questions?
- How useful is additional information, particularly chemical cross-linking and SAXS?
- In which areas has there been progress since the last CASP?
- Where can future effort be most productively focused?
The success of CASP is completely dependent on the generous help of the experimental community in providing targets. As in previous CASPs, protein crystallographers, NMR spectroscopists and cryo-EM scientists are asked to provide details of structures they expect to have made public before September 15, 2020. All types of protein structure may be good modeling targets, but membrane proteins and protein complexes are particularly needed. The last day for suggesting proteins as CASP targets is July 31, 2020.
A target submission form is available here.
The contact prediction (RR) category will be expanded to allow submission of inter-residue distance predictions in addition to the inter-residue contact predictions (see CASP14 format page link in the Model submission section).
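For illustration, contact predictions in earlier CASPs used an RR-style file along the lines of the sketch below; the target ID, author code, and probability values are hypothetical, and the authoritative syntax for CASP14 distance predictions is the one defined on the format page.

```
PFRMAT RR
TARGET T1024
AUTHOR 1234-5678-9000
METHOD Brief description of the method
MODEL 1
1 8 0 8 0.92
2 9 0 8 0.87
END
```

Each prediction line lists two residue indices, a distance range (here 0-8 Å, i.e. a contact), and a probability; the expanded CASP14 format additionally accommodates predictions over finer distance ranges.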
The data assisted category will focus more on large proteins, and particularly on complexes with SAXS and cross-linking data. We plan to expand the list of techniques/data providers for these categories.
Due to the COVID-19-related complications, this sub-experiment will be run on a rolling basis.
We hope to better explore the effectiveness of deep learning methods in modeling oligomeric proteins and protein-protein complexes, and will strive to obtain more and better targets for this category of prediction.
Some targets may be too large to be accommodated in the PDB format, so be prepared to submit predictions for them in the mmCIF format.
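As a rough sketch of what an mmCIF coordinate file looks like, the fragment below shows a minimal atom_site loop; the coordinate values are illustrative only, and the authoritative field list for CASP14 submissions is the one given on the format page.

```
data_model_1
loop_
_atom_site.group_PDB
_atom_site.id
_atom_site.type_symbol
_atom_site.label_atom_id
_atom_site.label_comp_id
_atom_site.label_asym_id
_atom_site.label_seq_id
_atom_site.Cartn_x
_atom_site.Cartn_y
_atom_site.Cartn_z
ATOM 1 N N  MET A 1 11.104 6.134 -6.504
ATOM 2 C CA MET A 1 11.639 6.071 -5.147
ATOM 3 C C  MET A 1 12.756 7.091 -4.939
```

Unlike the fixed-column PDB format, mmCIF fields are whitespace-delimited, so large residue numbers and multi-character chain identifiers that overflow PDB columns can still be represented.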
Details on the target collection and release procedures are available on this web site.
The High Accuracy Modeling category will include domains
where the majority of submitted models are of sufficient accuracy for
detailed analysis. This category replaces the previous Template Based Modeling category.
The Topology category (formerly Free Modeling) will assess
domains where submitted models are of relatively low accuracy.
The Contact and Distance Prediction category will assess the ability of methods
to predict contacts and inter-residue distances.
The Refinement category will analyze success in refining models
beyond the accuracy obtained in the initial submissions. For each target,
one of the best initial models will be selected, and reissued as the starting
structure for refinement.
The Assembly category will assess how well current methods
can determine domain-domain, subunit-subunit, and protein-protein interactions.
As in CASPs 11-13, we hope to work closely with CAPRI in this category.
The Accuracy Estimation category will assess the ability to provide
useful accuracy estimates for the overall accuracy of models and at the domain and residue levels.
The Data Assisted category will assess how much the accuracy
of models is improved by the addition of sparse data. Targets for which
such data are available will be re-released after initial data independent
models have been collected, together with the available data.
Data types are expected to include crosslinking data and SAXS.
Due to the COVID-19-related complications, this sub-experiment will be run on a rolling basis.
The Biological Relevance category will assess models on the basis of how well they provide answers to biological questions. Target providers will be asked to state what questions prompted the determination of the experimental structure. The usefulness of the models in answering those questions will be compared with that of the experimental structures.
Participation is open to all.
- March 2020 - Start of the registration for CASP14 prediction experiment.
- May 4, 2020 - Start of the testing of server connectivity ("dry run" for server predictors).
- May 18, 2020 - Release of the first CASP14 modeling targets.
- June/July 2020 - Early bird registration for the December CASP14 conference [POSTPONED due to COVID-19].
- July 31, 2020 - Last date for releasing regular targets.
- August 21, 2020 - End of the regular modeling season.
- September 7, 2020 - End of the refinement season.
- September 2020 - Collection of abstracts describing the methods used in CASP14.
- September-October 2020 - Evaluation of predictions.
- November 2020 - Invitations to groups with the most accurate models
and the most interesting methods to give talks at the CASP14 conference.
- November 2020 - Program of the conference finalized.
- November 30 - December 4, 2020 - CASP14 Conference.
If you are new to CASP and don't have an account with the Prediction Center, you will have to
register with the Prediction Center first and only then proceed to the CASP14 registration page.
If you already have an account with the Prediction Center,
you can go directly to the CASP14 registration page.
Please check, though, that your basic registration information is
current. If it has changed, please update it through the My Personal
Data link in the main Menu.
Participants with servers are requested to register before April 1, 2020, as
we plan to start checking servers' format and connectivity thereafter.
CASP14 modeling targets are announced on the
Target List page.
Models can be submitted through the Prediction Submission form available from
this web site or via the email address provided on the
CASP14 format page. Please comply with the instructions on
submission procedures and format provided there.
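For tertiary-structure predictions, past CASP rounds have used the TS format sketched below; the target ID and author code are placeholders, and the authoritative specification is the CASP14 format page.

```
PFRMAT TS
TARGET T1024
AUTHOR 1234-5678-9000
METHOD Brief description of the modeling method
MODEL 1
PARENT N/A
ATOM      1  N   MET A   1      11.104   6.134  -6.504
ATOM      2  CA  MET A   1      11.639   6.071  -5.147
TER
END
```

The PARENT record lists the PDB template(s) used to build the model, or N/A for template-free models.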
Server predictions will be made publicly available shortly after the closing of the prediction
window for a specific target.
As is the practice in CASP, assessment of the results will be made by the independent assessor teams. Assessment criteria will be based on those previously developed in CASP, but assessors may add new metrics they consider appropriate. Where possible, results will also be evaluated using criteria from the previous CASP, so the effects of any changes in criteria can be appreciated.
The CASP14 Assessors are as follows:
- High accuracy models - Andrei Lupas (Max Planck, Tuebingen, Germany)
- Topology - Nick Grishin (UT Southwestern, Dallas, TX, USA)
- Contacts - Alfonso Valencia (Barcelona Supercomputing Center, Spain)
- Refinement - Daniel Rigden (University of Liverpool, UK)
- Assembly - Ezgi Karaca (Izmir Biomedicine and Genome Center, Turkey)
- Model accuracy estimation - Chaok Seok (Seoul National University, South Korea)
- Function (biological relevance of models) - Sandor Vajda (Boston University, MA, USA) and Dima Kozakov (Stony Brook University, NY, USA)
A list of assessors in all CASPs held so far is available on this web site.
In accordance with CASP policy, assessors cannot take part in the relevant parts of the experiment as predictors. Participants must not contact assessors directly with queries; these should instead be sent to the organizers.
All CASP predictions and results of numerical evaluation will be made available through
this web site shortly before the meeting.
The proceedings will be published in a special issue of the journal PROTEINS (see the
publications of previous experiments).
All participants will also be required to describe their methods
in abstracts (published locally on our web site) and are encouraged to
discuss them on the CASP discussion forum.
These contributions will be discussed and scored
by other predictors, and this material will be taken into account in
choosing some presentations at the conference. Also, those
presenting posters should be prepared to give a short
presentation at the conference, as some talks will be invited based on the
discussion of poster sessions.
John Moult, CASP chair and founder; IBBR, University of Maryland, USA
Krzysztof Fidelis, founder, University of California, Davis, USA
Andriy Kryshtafovych, University of California, Davis, USA
Torsten Schwede, University of Basel, Switzerland
Maya Topf, Birkbeck, University of London, UK
David Baker, University of Washington
Michael Feig, Michigan State University
Nick Grishin, University of Texas
Andrzej Joachimiak, Argonne National Lab
David Jones, University College, London
Chaok Seok, Seoul National University
Michael Sternberg, Imperial College, London