[3dem] SIRT, ART (and SART)

Mike Marsh mikemarsh at gmail.com
Fri Oct 1 08:34:37 PDT 2010


Hi all,

I appreciate all of the feedback and references you have provided. I am sure
I will be much better informed once I have finished reading the papers you
have all recommended.

Thanks for all of your help,
Mike

On Fri, Oct 1, 2010 at 10:19 AM, Marin van Heel <m.vanheel at imperial.ac.uk> wrote:

>  Dear All,
>
> I did not mean to trigger a scientific discussion in this forum (there
> possibly are more appropriate places for that) but rather to explain some
> basic facts to a person asking about what these abbreviations mean. The
> “vivid” reactions it provoked, however, now call for an answer.
>
> It is remarkable that, 40 years after the famous “ART and Science” and “ART
> is Science” exchanges in J. Theor. Biol. in 1971 [1,2], the issues
> surrounding the iterative- and the transform-based reconstruction approaches
> are still clouded by a poor understanding of the underlying
> physical/signal-processing principles.
>
> 1)  First of all, it is simply not true that an “exact filter” WBP, “will
> at best give you a least squares solution” (Ozan) of “x” where (P . x = y).
> (I will use Ozan’s formalism). In fact, it is not at all the least-squares
> solution of x one is after in the transform methods! Rather, one is after
> the exact solution of P’. x’ = y’ ; where y’ are the measured projection
> images, where the projection operator P’ reflects the exact projection
> geometry, and x’ is the “exact” measurement of that part of the 3D density
> we can recover, given the available set of measurements y’ . There is no
> least squares approximation, no regularisation, there are no iterations, and
> x’ reflects x insofar as it is covered by the actual measurements y’. In
> areas of FT(x) (the Fourier transform of x) where we have no measurements,
> FT(x’) will simply contain zeroes (missing gaps, wedges, cones...). That is the
> case irrespective of the 3D reconstruction being implemented by
> back-projection in real space or by central section interpolation in Fourier
> space.
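The central-section relation that the transform methods rest on can be sketched in a few lines of NumPy (a 2D toy with an invented test image, not code from any reconstruction package): the 1D Fourier transform of a projection equals the corresponding central section of the object's Fourier transform.

```python
import numpy as np

# Fourier (central-section) slice theorem in 2D: the 1D FT of a projection
# of an object equals the central line of the object's 2D FT taken along
# the same direction. Here the 0-degree case, so no interpolation is needed.
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))     # illustrative 2D "object"

proj = img.sum(axis=0)                  # projection along rows (0-degree tilt)
line = np.fft.fft(proj)                 # 1D FT of the measured projection
F = np.fft.fft2(img)
central = F[0, :]                       # central section of FT(x)

print(np.allclose(line, central))       # True
```

For tilted projections the same identity holds along a rotated central line, which is why unmeasured directions leave exact zeroes in FT(x’).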
>
> 2) In the transform-based reconstructions one does not try to approximate
> the measured x in least-squares terms for very good reasons. The measured
> projection images y’ typically contain more low- than high-frequency power,
> and consequently all squared correlation functions (like LSQ solutions)
> will typically be dominated by the low-frequency power in the data (see:
> “correlation function revisited”, Ultramicroscopy 1992).  Optimising the
> low-frequency correlations by a real-space least-squares fit is detrimental
> to the high-resolution information and thus, in signal processing terms,
> wrong! The implicit assumption behind the goal of finding an LSQ-optimised
> solution is that such a solution would be, in some form, an optimal
> representation of the available *information*. That assumption is
> incorrect.
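That low-frequency power dominates any squared-error measure is easy to demonstrate numerically. The following 1D toy assumes a 1/f amplitude spectrum (typical of EM images); perturbing the low- and high-frequency halves of the spectrum by the same relative amount produces vastly different LSQ errors. All names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
freqs = np.fft.rfftfreq(n)
# Build a signal with a 1/f-like amplitude spectrum: more low- than
# high-frequency power, as in typical projection images.
amp = np.zeros_like(freqs)
amp[1:] = 1.0 / freqs[1:]
phases = rng.uniform(0, 2 * np.pi, freqs.size)
spectrum = amp * np.exp(1j * phases)
y = np.fft.irfft(spectrum, n)

# Perturb only the high-frequency half of the spectrum by 50% ...
hi = freqs > 0.25
s_hi = spectrum.copy(); s_hi[hi] *= 1.5
y_hi = np.fft.irfft(s_hi, n)
# ... and, separately, only the low-frequency half by the same 50%.
lo = (freqs > 0) & (freqs <= 0.25)
s_lo = spectrum.copy(); s_lo[lo] *= 1.5
y_lo = np.fft.irfft(s_lo, n)

err_hi = np.sum((y - y_hi) ** 2)
err_lo = np.sum((y - y_lo) ** 2)
print(err_lo / err_hi)  # the low-frequency band dominates the squared error
```

An LSQ fit therefore "sees" mostly the low-frequency discrepancies, which is exactly the objection raised above.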
>
> 3) For clarity (Ozan), an “exact filter” WBP is one in which the exact 3D
> reconstruction geometry is reflected in the filter function (the “inverse”
> of P’) applied to each projection image prior to the 3D reconstruction (as
> opposed to analytical WBP filters that are in widespread use outside of EM).
>
>
> 4) In ART/SIRT, in contrast, one explicitly tries to minimise the LSQ
> residual (according to Ozan and Jose-Maria). Again, since the projection
> images' variance is largely concentrated in the lower frequencies, ART/SIRT
> will try to minimise the errors in that regime and will do as it pleases
> with the less powerful high-resolution information, while moving towards an
> overall LSQ solution. It does this by calculating ART’(y’), where ART’ is
> the non-linear ART-inverse of the P’ projection operator, which not only
> depends on the exact projection geometry P’ but is also data-dependent (a
> function of y’ as well). ART and SIRT have the freedom to do anything in
> the unmeasured areas of Fourier space (missing-gaps, missing cones, missing
> wedges). (It may also compromise on the measured but “powerless”
> high-frequency information.)  Regularisations such as the fitting of
> “blobs” are thus required to control the instabilities inherent to this
> family of algorithms (All agree). One of the best regularisations one could
> possibly achieve is to avoid introducing any data into the missing gaps
> (that is where FT(x’) = 0). In that optimal case the iterative approaches
> would start to approximate the behaviour of the transform methods.
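For concreteness, the simplest member of this iterative family (a Landweber-style simultaneous update, a simplified stand-in for SIRT) can be sketched on a toy dense system. Real SIRT implementations work with the actual projection geometry and row/column normalisations; this sketch only illustrates the iteration converging toward the LSQ fit of P x = y.

```python
import numpy as np

# Toy linear system P x = y standing in for projection of a volume.
# SIRT/Landweber-style simultaneous update:
#     x <- x + lam * P^T (y - P x)
# which converges toward a minimiser of ||P x - y||^2.
rng = np.random.default_rng(1)
m, n = 40, 20
P = rng.standard_normal((m, n))         # illustrative "projection" operator
x_true = rng.standard_normal(n)
y = P @ x_true                          # noiseless, consistent data

lam = 1.0 / np.linalg.norm(P, 2) ** 2   # step below 2/||P||^2 for convergence
x = np.zeros(n)
for _ in range(5000):
    x = x + lam * P.T @ (y - P @ x)

print(np.linalg.norm(P @ x - y))  # residual shrinks toward zero (the LSQ optimum)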
>
> 5) Quoting Ozan: “ART and SIRT became useful in EM once they were combined
> with early stopping, i.e. the iterations are stopped before convergence”.
> From this I can only understand that, on its way to an incorrect LSQ
> convergence optimum (see above), ART’(y’) passes through a phase where it is
> not yet so wrong and gets relatively close to x’. That, then, is the point
> at which one is advised to jump off the runaway train before it plunges into
> the abyss? Is this serious science, or am I missing the point somewhere?
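The early-stopping behaviour under discussion can be reproduced on a toy ill-conditioned system: with noisy data, iterating all the way to the exact/LSQ fit amplifies the noise, while stopping early stays closer to the true x. The operator, its singular-value decay, and the noise level below are all invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
# Ill-conditioned forward operator with rapidly decaying singular values,
# mimicking the poorly measured high-frequency components.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 0.8 ** np.arange(n)
P = U @ np.diag(s) @ V.T
x_true = rng.standard_normal(n)
y = P @ x_true + 0.1 * rng.standard_normal(n)   # noisy "projections"

lam = 1.0 / s[0] ** 2
x = np.zeros(n)
for _ in range(200):                    # stop early, well before convergence
    x = x + lam * P.T @ (y - P @ x)
x_early = x

x_full = np.linalg.solve(P, y)          # the fully converged exact/LSQ fit

print(np.linalg.norm(x_early - x_true), np.linalg.norm(x_full - x_true))
```

The fully converged solution fits the noise exactly and lands far from x_true; the early-stopped iterate is closer, which is the "jump off before the abyss" prescription in numerical form.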
>
> 6) Linearity of the procedures. Let us assume we have two sets of
> projection images y’ and y’’ derived from a 3D object x using projection
> geometries P’ and P’’, respectively. Having calculated the corresponding 3D
> reconstructions x’ and x’’, we then would like to merge these two
> reconstructions of x into a single 3D reconstruction x’’’ in order to get a
> reconstruction closer to the real 3D object x. This, by the way, has become
> a standard procedure in, for example, “sub-tomogram averaging”. In the case
> of the transform techniques we can simply sum FT(x’) and FT(x’’) while
> weighting down the areas of FT(x) covered by both FT(x’) and FT(x’’). With
> the iterative methods, in contrast, the reconstructions we obtained,
> ART’(y’) and ART’’(y’’), each aim at mimicking x. It is unclear how to
> merge these non-linear “LSQ estimates” of x into a single summed/merged
> ART’’’ reconstruction.
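The Fourier-space merging described above can be sketched directly. The binary coverage masks below are hypothetical stand-ins for the two data-collection geometries P’ and P’’; the point is that summing the transforms and weighting down the doubly covered regions recovers FT(x) exactly wherever it was measured.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 32
x = rng.standard_normal((N, N, N))      # illustrative 3D "object"
F = np.fft.fftn(x)

# Hypothetical coverage masks for two tilt geometries (slabs in Fourier space)
kz = np.abs(np.fft.fftfreq(N)).reshape(-1, 1, 1)
ky = np.abs(np.fft.fftfreq(N)).reshape(1, -1, 1)
M1 = np.broadcast_to(kz <= 0.3, (N, N, N))
M2 = np.broadcast_to(ky <= 0.3, (N, N, N))

F1 = F * M1          # FT(x'):  zeroes outside the first geometry's coverage
F2 = F * M2          # FT(x''): zeroes outside the second geometry's coverage

# Merge: sum the transforms, weighting down regions measured by both
counts = M1.astype(float) + M2.astype(float)
F3 = (F1 + F2) / np.maximum(counts, 1.0)

covered = M1 | M2
print(np.allclose(F3[covered], F[covered]))   # True: exact where measured
print(np.allclose(F3[~covered], 0.0))         # True: zero where unmeasured
```

Because the transform reconstructions are linear in the data, this merge is a one-line operation; no such closed form exists for the non-linear ART’(y’) estimates.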
>
> 7) The powerful mathematical strategy of reasoning “ad absurdum” may help
> here to further clarify the matter. Let now the projection data sets y’ and
> y’’ each contain merely one single projection image. Again, this is no
> problem for the exact WBP approach. In fact, the problem of merging x’ and
> x’’ reverts to the original problem of merging projection images in the
> exact WBP approach itself. With the iterative methods, again, there
> is no strategy to merge reconstructions ART’(y’) and ART’’(y’’) which both
> are “LSQ approximations” of x, each based on a single projection.  In this
> ad-absurdum example of having only one projection in a 3D reconstruction
> problem, it becomes obvious that trying to catch that relation between y’
> and x in the mathematical formalism of an “underdetermined set of linear
> equations” is just inappropriate. It is the wrong mathematical tool to
> describe a simple, straightforward signal-processing issue.
>
> 8) Fitting in “blobs” (like sets of Gaussian functions), in order to
> control the instabilities of ART, will clearly help [3]! Nevertheless, that
> is a stop-gap solution that may not be appropriate.  That fitting (a
> convolution-like operation in the real-space of “x”) will be largely
> equivalent to a low-pass filter or the suppression of all high frequencies
> in FT(x).  It will avoid some of the problems in the areas where the
> “missing gaps” are largest.  (And it actually deletes any real measured
> information that might be present in that high-resolution regime.) However,
> again resorting to ad-absurdum reasoning: What happens if the function x we
> are after is a random-white-noise 3D density? (Or worse still: x consists
> only of high-frequency “noise” and no low frequencies at all?) The fitting
> of low-resolution blobs would simply be out of place. This is probably the
> reason that the authors of ref [3] have no FSC-like tests in their
> publication.  Their quality-control test, the reconstruction of a set of
> cylindrical blobs, is adapted to their algorithm: they are “measuring blobs”
> with an algorithm that only sees “blobs”. This would classify as a
> methodological bias, rather than a comparison that is “way more objective
> and statistically sound than just comparing FSC numbers” (Ozan).  This
> paper [3] indeed still lacks rigorous FSC testing.
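For reference, a minimal Fourier Shell Correlation between two volumes can be sketched as follows. This is only an illustration of the definition (shell-wise normalised cross-correlation of the Fourier transforms, binned by scaled radius); production FSC code additionally handles masking, calibrated shell spacing, and resolution thresholds.

```python
import numpy as np

def fsc(vol1, vol2, n_shells=16):
    """Fourier Shell Correlation over crude integer-radius shells (a sketch)."""
    F1, F2 = np.fft.fftn(vol1), np.fft.fftn(vol2)
    N = vol1.shape[0]
    k = np.fft.fftfreq(N)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kr = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2)
    shells = np.minimum((kr * 2 * n_shells).astype(int), n_shells - 1)

    num = np.zeros(n_shells, complex)   # cross terms, per shell
    d1 = np.zeros(n_shells)             # power of volume 1, per shell
    d2 = np.zeros(n_shells)             # power of volume 2, per shell
    np.add.at(num, shells, F1 * np.conj(F2))
    np.add.at(d1, shells, np.abs(F1) ** 2)
    np.add.at(d2, shells, np.abs(F2) ** 2)
    return num.real / np.sqrt(d1 * d2)

rng = np.random.default_rng(4)
v = rng.standard_normal((32, 32, 32))   # illustrative volume
print(fsc(v, v)[:3])                    # identical volumes: FSC = 1 in every shell
```

An FSC curve of this kind between independent half-set reconstructions is precisely the test whose absence in [3] is being criticised.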
>
> 9) Finally (Mike), it is unfortunate that the microscopy manufacturer you
> mention has/had wrong ideas about SIRT and WBP.  For many years I have
> seen presentations from this manufacturer claiming that their SIRT
> reconstructions were better than their corresponding WBP reconstructions.
> However, their WBP algorithm was/is clearly not implemented correctly.
>
> My write-up also contains some mathematics. It was necessary to climb that
> ivory tower in order to explain the unstable foundations of ART and its
> successors.  The transform methods, in contrast, remain what they were
> from day one: WYMIWYG (What You Measure Is What You Get). There is no need
> for regularisation, no least-squares iterations, or any other form of
> massaging the data.  The transform methods remain what they always have
> been: the gold standard by which to measure the rest of the field.
>
> My two cents,
>
> Marin
> [1] Crowther RA, Klug A: “ART and science or conditions for
> three-dimensional reconstruction from electron microscope images”. J Theor
> Biol. 32 (1971) 199-203.
>
> [2] Bellman SH, Bender R, Gordon R, Rowe JE Jr: “ART is science being a
> defense of algebraic reconstruction techniques for three-dimensional
> electron microscopy”. J Theor Biol. 32 (1971) 205-216.
>
> [3] Marabini, R., Herman, G.T., Carazo, J.-M.: 3D reconstruction in
> electron microscopy using ART with smooth spherically symmetric volume
> elements (blobs), Ultramicroscopy 72 (1998) 53-65.
>
>
> _______________________________________________
> 3dem mailing list
> 3dem at ncmir.ucsd.edu
> https://mail.ncmir.ucsd.edu/mailman/listinfo/3dem
>
>

