[3dem] [ccpem] lost gain reference image

Marin van Heel marin.vanheel at googlemail.com
Tue Dec 4 12:48:18 PST 2018


Hi Carlos Oscar!
I just remembered I had posed a question about your camera normalisation
paper (https://www.ncbi.nlm.nih.gov/pubmed/29551714) on this site some
two months ago, in which you criticised our 2015 camera normalisation paper
(https://www.nature.com/articles/srep10317). Did you already respond to my
question and I missed your answer?

Cheers
Marin

My question was:

QUOTE:
What you call our "gain image" is - apart from an erroneous contrast
reversal - actually more similar to the "official" gain image in your Fig. 1
than the one generated with your proposed algorithm. I would be
interested to know what the R2 turns out to be after you correct the
contrast reversal, since visually it matches better than yours does. It would
be nice if you could respond to this mailing with that information. By the
way, how exactly is this R2 metric defined? (I could not find a definition
anywhere in the paper.)
END QUOTE
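
For reference, one common way such an R2 could be defined - though whether
this is what the paper actually used I do not know - is the squared Pearson
correlation of the pixel values of the two images. A minimal numpy sketch
(the function name and interface are my own, purely for illustration):

import numpy as np

def r_squared(image_a, image_b):
    # Squared Pearson correlation between two images of identical shape.
    # Only a guess at what the paper's R2 might mean; the paper itself
    # does not define the metric.
    a = image_a.ravel().astype(np.float64)
    b = image_b.ravel().astype(np.float64)
    r = np.corrcoef(a, b)[0, 1]
    return r * r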

On Tue, Oct 2, 2018 at 10:21 PM Marin van Heel <marin.vanheel at googlemail.com>
wrote:

> Dear Carlos Oscar and Dimitry,
>
> Unfortunately, you seem to have missed the point of our Afanasyev 2015
> paper. Our paper does not try to duplicate the "experimentally determined
> Gain image", but tries to normalize the signal from each pixel to the same
> average and the same standard deviation at the exposure and contrast level
> at which the data set was recorded. Our approach typically improves
> significantly on standard "a priori" flat-field/gain corrections.
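
In rough numpy terms the idea is something like the following (a bare sketch
of the principle only, with hypothetical names; it is not the IMAGIC-4D
implementation):

import numpy as np

def normalise_per_pixel(frames, eps=1e-6):
    # frames: stack of raw frames, shape (n_frames, ny, nx).
    # Bring every detector pixel to the same mean and standard deviation,
    # estimated over the whole stack, at the recorded exposure level.
    pixel_mean = frames.mean(axis=0)
    pixel_std = frames.std(axis=0)
    target_mean = pixel_mean.mean()
    target_std = pixel_std.mean()
    return (frames - pixel_mean) / (pixel_std + eps) * target_std + target_mean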
>
> We are not directly interested in generic "gain images" as such, and we
> certainly do not generate "gain images" that have an inverted contrast
> when compared to the other ones you show in Figure 1 of your paper. Your
> comments on our methods are thus not appropriate: "To the best of our
> knowledge, the only article that addresses a similar problem is that of
> Afanasyev et al. (2015). In their work, they assimilate the gain of
> the camera to the standard deviation of each pixel over a large number of
> movies, and they prove this is a successful way of identifying dead pixels.
> However, our results show that this approach does not provide a consistent
> gain estimation (Fig. 1)."
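
On the dead-pixel point specifically, the per-pixel standard deviation image
lends itself to simple outlier flagging, roughly as follows (a sketch only;
the MAD-based threshold is an arbitrary choice of mine, not a published
recipe):

import numpy as np

def flag_bad_pixels(frames, n_mad=6.0):
    # Flag pixels whose standard deviation over the stack is an outlier
    # (dead or hot pixels). frames has shape (n_frames, ny, nx).
    std = frames.std(axis=0)
    med = np.median(std)
    mad = np.median(np.abs(std - med)) + 1e-12
    return np.abs(std - med) > n_mad * mad   # boolean mask of bad pixels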
>
> What you call our "gain image" is - apart from an erroneous contrast
> reversal - actually more similar to the "official" gain image in your Fig. 1
> than the one generated with your proposed algorithm. I would be
> interested to know what the R2 turns out to be after you correct the
> contrast reversal, since visually it matches better than yours does. It would
> be nice if you could respond to this mailing with that information. By the
> way, how exactly is this R2 metric defined? (I could not find a definition
> anywhere in the paper.)
>
> I would suggest that you and your colleagues use the FRC metric to
> show that your approach does indeed remove the influence of the various
> patterns that your detectors exhibit.
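
For completeness, a bare-bones Fourier Ring Correlation between two images
(for example, sums of the odd-numbered and even-numbered movies) could be
computed along these lines; this is only an illustrative sketch, without
smoothing or threshold curves:

import numpy as np

def frc(img1, img2):
    # Fourier Ring Correlation between two images of equal size.
    f1 = np.fft.fftshift(np.fft.fft2(img1))
    f2 = np.fft.fftshift(np.fft.fft2(img2))
    ny, nx = img1.shape
    y, x = np.indices((ny, nx))
    rings = np.hypot(y - ny // 2, x - nx // 2).astype(int)
    curve = []
    for ring in range(min(ny, nx) // 2):
        mask = rings == ring
        num = np.sum(f1[mask] * np.conj(f2[mask]))
        den = np.sqrt(np.sum(np.abs(f1[mask]) ** 2) * np.sum(np.abs(f2[mask]) ** 2))
        curve.append((num / den).real if den > 0 else 0.0)
    return np.array(curve)   # one correlation value per spatial-frequency ring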
>
> My two cents
>
> Marin
>
> =====================
>
>
> On 02/10/2018 15:19, Carlos Oscar Sorzano wrote:
>
> By the way, in our article we compared both methods (ours and Marin's).
>
> Kind regards, Carlos Oscar
>
>
> On 01/10/2018 21:23, Marin van Heel wrote:
>
> Dear Da,
>
> In IMAGIC-4D you can perform the necessary camera correction
> (https://www.nature.com/articles/srep10317). It does this better than any
> manufacturer's correction and improves the data significantly, even when
> performed after the standard gain correction.
>
> Cheers,
>
> Marin
>
>
> =====================================================
>
> On 01/10/2018 15:36, Da Cui wrote:
>
> Hi all,
>     The gain reference image for one dataset was lost by accident. In
> order to achieve a more accurate MotionCor result, does anyone have an idea
> of how to generate a gain reference image from the dataset itself (around 3k
> movies)?
>     Thank you so much for your help!!!
> ---Da
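
One rough work-around, if the original reference really cannot be recovered,
is to sum the raw frames of all the movies and derive a gain estimate from the
average image, on the assumption that the accumulated illumination is
essentially flat. A minimal sketch (the mrcfile reader and the file layout are
assumptions on my part):

import glob
import numpy as np
import mrcfile   # assumes the movies are stored as MRC stacks

total, n_frames = None, 0
for path in sorted(glob.glob("movies/*.mrc")):   # hypothetical location
    with mrcfile.open(path, permissive=True) as mrc:
        stack = np.asarray(mrc.data, dtype=np.float64)
        frame_sum = stack.sum(axis=0)
        total = frame_sum if total is None else total + frame_sum
        n_frames += stack.shape[0]

mean_image = total / n_frames
# Multiplicative gain estimate: pixels that systematically read high
# get a correspondingly small gain factor.
gain_estimate = mean_image.mean() / mean_image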
>
>
> --
> ==============================================================
>
>     Prof Dr Ir Marin van Heel
>
>     Laboratório Nacional de Nanotecnologia - LNNano
>     CNPEM/LNNano, Campinas, Brazil
>
>     tel:    +55-19-3518-2316
>     mobile  +55-19-983455450 (current)
>     mobile  +55-19-981809332
>                  (041-19-981809332 TIM)
>     Skype:  Marin.van.Heel
>     email:  marin.vanheel(A_T)gmail.com
>             marin.vanheel(A_T)lnnano.cnpem.br
>     and:    mvh.office(A_T)gmail.com
>
> --------------------------------------------------
>     Emeritus Professor of Cryo-EM Data Processing
>     Leiden University
>     Mobile NL: +31(0)652736618 (ALWAYS ACTIVE SMS)
> --------------------------------------------------
>     Emeritus Professor of Structural Biology
>     Imperial College London
>     Faculty of Natural Sciences
>     email: m.vanheel(A_T)imperial.ac.uk
> --------------------------------------------------
>
> I receive many emails per day and, although I try,
> there is no guarantee that I will actually read each incoming email.
>
>