
Low resolution refinement tools in the program REFMAC


  • Dr Garib N Murshudov, LMB, MRC, Cambridge, U.K.
  • Wednesday 23 November 2011, 10:15-11:15
  • Perham's seminar room

If you have a question about this talk, please contact xyp20.

Despite rapid advances in macromolecular X-ray crystallographic (MX) methods, the derivation of reliable atomic models from low-resolution diffraction data still poses many challenges. The reason is that the number of observations relative to the number of adjustable parameters is small and the signal-to-noise ratio in the experimental data is very low. As a consequence, deriving biologically meaningful information from such data is difficult. The inherent mobility of macromolecules means that in many cases growing crystals that diffract to higher resolution is not possible, and low-resolution data must be used to extract whatever useful information they contain.

To extract such information, two related but distinct problems must be tackled: i) stabilisation of the ill-posed refinement procedure; ii) calculation of electron density with maximal signal and minimal noise. Solving the first problem is necessary to derive a reliable atomic model; solving the second is necessary to calculate the interpretable electron density used in model (re)building.
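As a rough illustration of the first problem (a generic sketch of restrained maximum-likelihood refinement, not the exact functional implemented in REFMAC), refinement can be viewed as minimising a combined target

    f_total(x) = -log L(F_obs; x) + w * f_restraints(x)

where L is the likelihood of the observed amplitudes given the model parameters x, f_restraints encodes the prior structural knowledge described in the next paragraph, and the weight w controls how strongly that knowledge compensates for the poor observation-to-parameter ratio at low resolution.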

1) The first problem is usually tackled using restraints based on prior structural information. The available information includes: a) known similar three-dimensional structures; b) secondary structure; c) non-crystallographic symmetry (NCS), if present; d) the fact that inter-atomic distances should not change dramatically during refinement. It has already been shown that using these restraints improves the reliability of the derived models. As the model improves, errors in the atomic coordinates are reduced, so the calculated phases carry less error, which in turn reduces the noise in the electron density related to model errors.

2) Sharpening an electron density map increases the signal but also amplifies noise, which can mask the 'true' signal. Several approaches address this problem, including: a) regularisation using the Tikhonov-Sobolev method; b) Wiener filters; and c) Bayesian filters. These techniques all attempt to answer one common question: how can the signal be enhanced without amplifying the noise? A further problem is that map sharpening usually assumes all atoms have the same B value. This is in general not true: B values follow a distribution, approximately an inverse gamma distribution, and the oscillation of an individual atom depends on its position in the asymmetric unit. These facts need to be accounted for if accurate map-sharpening tools are to be designed. In this presentation, applications of these techniques to map calculation will be discussed.
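To make the filtering idea concrete, here is a minimal Python sketch of resolution-dependent sharpening with a Wiener/Tikhonov-style damping term; the function name, the parameter alpha, and the exact functional form are illustrative assumptions, not the implementation used in REFMAC.

    import numpy as np

    def sharpen_amplitudes(f_obs, inv_d_sq, b_sharp, alpha=0.1):
        # Illustrative sketch only, not REFMAC's algorithm.
        # f_obs    : observed structure-factor amplitudes |F|
        # inv_d_sq : 1/d^2 per reflection (d = resolution in Angstrom)
        # b_sharp  : sharpening B value; larger values boost high-resolution terms
        # alpha    : assumed regularisation constant that caps noise amplification
        gain = np.exp(b_sharp * inv_d_sq / 4.0)    # naive exp(B*s^2/4) sharpening, s = 1/d
        damped = gain / (1.0 + alpha * gain**2)    # Wiener-style damping keeps the gain bounded
        return f_obs * damped

With alpha = 0 this reduces to plain exponential sharpening, which amplifies high-resolution noise without bound; a positive alpha plays the role of the noise-to-signal term in a Wiener filter and limits the maximum gain.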

This talk is part of the Experimental and Computational Aspects of Structural Biology and Applications to Drug Discovery series.
