[3dem] [ccpem] on FSC curve (A can of worms...)

Ludtke, Steven J sludtke at bcm.edu
Mon Aug 31 06:10:31 PDT 2015


> On Aug 31, 2015, at 12:18 AM, Marin van Heel <marin.vanheel at googlemail.com> wrote:
> 
> 
> Hi Steven
> 
> Just reacting to your very first remark:
> 
> "1) Compensating for statistical uncertainty through use of an adjustment to the threshold is confusing to people raised in experimental science" ...
> 
> In 1D plots the threshold would indeed be a constant value, and there is no need to compensate for anything, so I guess that is what these "experimental scientists" have been raised to do...
> 
> In 2D Fourier-space processing, "N" is proportional to "R", the distance to the origin, and the FRC thresholds must reflect that.
> 
> In 3D, "N" is proportional to "R**2", and the FSC thresholds must take that into account.
I am not disagreeing with the fundamental counting statistics you are using. I am arguing that the uncertainty in the FSC values is better represented by error bars on the FSC curve itself than by compensating for the statistical uncertainty with a modified threshold. More to the point, even if you do modify the threshold to encompass some level of confidence, the intersection point must still have an uncertainty associated with it, and assessing that uncertainty in such a formulation is not straightforward.
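To make the counting side of this concrete: in a cubic 3D Fourier volume the number of voxels per shell grows roughly as R**2, and those counts are the raw material for any error estimate. Here is a minimal NumPy sketch (purely illustrative; not code from EMAN2 or any other package) that computes an FSC curve together with the per-shell voxel counts:

import numpy as np

def fsc_with_counts(map1, map2):
    # FSC between two half-maps of identical cubic shape, plus the
    # number of Fourier voxels in each shell (grows ~R**2 in 3D).
    n = map1.shape[0]
    f1, f2 = np.fft.fftn(map1), np.fft.fftn(map2)
    freq = np.fft.fftfreq(n)                       # cycles/voxel
    fx, fy, fz = np.meshgrid(freq, freq, freq, indexing='ij')
    shell = np.round(np.sqrt(fx**2 + fy**2 + fz**2) * n).astype(int)
    nshell = n // 2
    fsc = np.zeros(nshell)
    count = np.zeros(nshell, dtype=int)
    for s in range(nshell):
        m = shell == s
        count[s] = m.sum()
        num = (f1[m] * np.conj(f2[m])).sum().real
        den = np.sqrt((np.abs(f1[m]) ** 2).sum() * (np.abs(f2[m]) ** 2).sum())
        fsc[s] = num / den if den > 0 else 0.0
    freqs = np.arange(nshell) / float(n)           # spatial frequency axis
    return freqs, fsc, count

The count array makes the R**2 growth directly visible, and it supplies exactly the per-shell N that any error-bar presentation would be built from.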

If you have a set of data points with error bars following some curve, you can, if you like, ask at what point the curve rises above some value X with a specified level of statistical certainty. However, the normal approach would be to draw a line at the actual value you are interested in and then ask for the uncertainty in the intersection point. There is no way to get around the uncertainty in the intersection: even if you ask for the x-coordinate at which the curve rises above some level in y with 3-sigma confidence, that value still has an associated uncertainty. Presenting the FSC curve with visible error bars may help people understand that their resolution values should not be presented with 3 significant figures, and it also removes the need for a varying threshold on simple statistical grounds (though other arguments for a varying threshold are still possible, of course).
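As a sketch of what reporting that intersection uncertainty could look like in practice (again illustrative only; bracketing the crossing with the fsc +/- 1-sigma curves is a convenient stand-in, not a rigorous confidence interval):

import numpy as np

def crossing(freqs, curve, threshold):
    # First frequency at which the curve drops below the threshold,
    # linearly interpolated between the two bracketing shells.
    below = np.where(curve < threshold)[0]
    if len(below) == 0 or below[0] == 0:
        return None
    i = below[0]
    t = (curve[i - 1] - threshold) / (curve[i - 1] - curve[i])
    return freqs[i - 1] + t * (freqs[i] - freqs[i - 1])

def crossing_with_uncertainty(freqs, fsc, sigma, threshold=0.143):
    # Nominal crossing plus a bracket from the fsc +/- sigma curves;
    # 'threshold' is whatever fixed value one argues for.
    worst = crossing(freqs, fsc - sigma, threshold)   # crosses earlier
    best = crossing(freqs, fsc + sigma, threshold)    # crosses later
    return worst, crossing(freqs, fsc, threshold), best

Quoting 1/frequency for all three values makes it immediately obvious how many digits of the resolution number are actually meaningful.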

> 
> I would assume good experimental scientists are not so one-dimensional and would use metrics appropriate for the experimental situation they are facing.
> 
> Marin
> 
> PS: You missed the point that Alexis and I fully agree on the fact that the non-orthogonality argument is fundamentally correct!
My argument for the use of error bars rather than a varying threshold does not rely in any way on the relationship between SSNR and FSC. Having read both of your message threads, I'm not sure the two of you are in as much agreement as you imply, but that has no real impact on the question I'm raising. Once the statistical uncertainty is presented with error bars, additional arguments can be made about threshold values. I am arguing simply that it is inappropriate to fold statistical uncertainty into a varying threshold.

> 
> =======================================================
> 
> 
> On 30/08/2015 14:14, Ludtke, Steven J wrote:
>> Ok, I've tried to avoid this discussion, as it seems like a somewhat pointless rehashing of old debates. However, based on direct emails I've gotten from some people new to the field, it may be causing a lot of confusion and uncertainty in this group, as they lack the historical context to understand the point of the debate. Let me add a couple of minor points to the discussion:
>> 
>> 1) Compensating for statistical uncertainty through an adjustment to the threshold is confusing to people raised in experimental science. In essence, it conceals the fact that the FSC values have considerable uncertainty due to counting statistics and other effects. That is, the final resolution plots wind up being the intersection of two lines with no presented uncertainty at all, and we find people quoting specific intersection points between these two lines with ridiculous levels of precision.
>> 
>> A much more sensible way to present this result would be to produce FSC curves with error bars, which do a much better job of expressing the fact that there is considerable uncertainty in the resulting intersection!  The difficulty is how best to produce such error bars.
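(One candidate, purely for illustration: the textbook large-sample standard error of a correlation coefficient, applied shell by shell. It assumes the Fourier voxels in a shell are independent samples, which masking, interpolation and symmetry all violate to some degree, so take this as a sketch of the idea rather than a settled recipe.)

import numpy as np

def fsc_sigma(fsc, count):
    # Rough per-shell standard error for an FSC curve, using the
    # large-sample approximation for a correlation coefficient:
    #     sigma ~ (1 - FSC**2) / sqrt(N - 1)
    # with N the number of independent Fourier voxels in the shell.
    # Friedel symmetry means only about half the voxels in a shell are
    # independent, hence count/2 here; either way this is indicative.
    n = np.maximum(count / 2.0, 2.0)
    return (1.0 - fsc ** 2) / np.sqrt(n - 1.0)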
>> 
>> Once you have an FSC with error bars, you still have the question of a threshold value/curve. I would argue that the error bars subsume the uncertainty, and using Alexis's arguments about expectation values, you can then use a fixed-value threshold.  I think Alexis's arguments are spot-on in this case (the FSC's relationship to SNR is an expectation value), and Marin's orthogonality argument is fundamentally incorrect. The cross-terms in the presence of noise do have an expectation value of zero, of course!  The cross-terms contribute to the uncertainty in the estimator, not to its asymptotic value.
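(The expectation-value point is easy to verify numerically: for two independent noise realizations the normalized cross-term averages to zero, while its scatter shrinks as roughly 1/sqrt(2N). A quick illustrative simulation:)

import numpy as np

rng = np.random.default_rng(0)
for nvox in (100, 1000, 10000):          # Fourier voxels per "shell"
    vals = []
    for _ in range(2000):
        # two independent complex-Gaussian noise realizations
        a = rng.standard_normal(nvox) + 1j * rng.standard_normal(nvox)
        b = rng.standard_normal(nvox) + 1j * rng.standard_normal(nvox)
        num = (a * np.conj(b)).sum().real
        den = np.sqrt((np.abs(a) ** 2).sum() * (np.abs(b) ** 2).sum())
        vals.append(num / den)
    vals = np.asarray(vals)
    # mean tends to 0; std tends to ~1/sqrt(2*nvox)
    print(nvox, vals.mean(), vals.std())

(The mean stays at zero as N grows; only the spread changes, which is exactly why the cross-terms belong in the error bars rather than in the threshold.)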
>> 
>> 2) Closely related to point #1 is the issue that our resolution estimates simply are not that precise. They do have considerable uncertainty (which an FSC with error bars would help to express). They also ignore differences in the FSC curve at resolutions lower than the cutoff resolution, which are also significant from the perspective of map interpretation. If I have an FSC curve which stays close to 1 and then falls smoothly and rapidly to zero near some target resolution, the quality of the map is not equivalent to one whose FSC begins falling gradually at much lower resolution and undergoes considerable gymnastics before finally dropping below the 'threshold' value.
>> 
>> ----
>> Our field takes these resolution numbers MUCH too seriously, and has unwisely turned them into the sole measure of map quality. I do not believe it is possible to make the FSC into a single catch-all measure.
>> 
>> Following the 'error-bar' approach (if we can agree on one) would properly associate an uncertainty with each measured resolution value, pointing out the limits of this estimator in a way that a reviewer from any field could readily grasp. Like the X-ray community, we need to adopt additional criteria rather than continue these pointless debates trying to make the FSC more statistically precise than it can possibly be.
>> 
>> 
>> 
>> ----------------------------------------------------------------------------
>> Steven Ludtke, Ph.D.
>> Professor, Dept of Biochemistry and Mol. Biol.         (www.bcm.edu/biochem)
>> Co-Director National Center For Macromolecular Imaging        (ncmi.bcm.edu)
>> Co-Director CIBR Center                          (www.bcm.edu/research/cibr)
>> Baylor College of Medicine
>> sludtke at bcm.edu
>> 
>> 
>> 
> 

----------------------------------------------------------------------------
Steven Ludtke, Ph.D.
Professor, Dept. of Biochemistry and Mol. Biol.                Those who do
Co-Director National Center For Macromolecular Imaging	           ARE
Baylor College of Medicine                                     The converse
sludtke at bcm.edu  -or-  stevel at alumni.caltech.edu               also applies
http://ncmi.bcm.edu/~stevel




