[3dem] File systems

Sam Li samli at msg.ucsf.edu
Mon Jan 4 14:39:48 PST 2016


Hi All,

I thought the following information might be useful to many: NERSC in
Berkeley, CA recently installed a new supercomputer called Cori. It uses
Cray's Burst Buffer technology to boost I/O performance.

http://www.nersc.gov/users/computational-systems/cori/burst-buffer/burst-buffer/

Although the entire setup is impractical for an individual institution, I
think some of the concepts behind the Burst Buffer could be borrowed, in
particular for improving I/O, such as using flash SSDs as scratch disks and
dedicating nodes to I/O. Cori uses the Lustre file system and SLURM for job
management.
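
As a toy illustration of the SSD-scratch idea (not NERSC's actual setup; the
paths and the SCRATCH_SSD variable below are made-up placeholders), the
pattern is simply: stage the input from shared storage to node-local flash,
do the I/O-heavy work against the local copy, then copy results back:

    # stage_to_ssd.py -- rough sketch only; all paths are hypothetical
    import os
    import shutil

    shared_input = "/shared/project/particles.mrcs"         # slow shared storage
    scratch = os.environ.get("SCRATCH_SSD", "/local/ssd")   # node-local flash scratch

    local_input = os.path.join(scratch, "particles.mrcs")
    results_dir = os.path.join(scratch, "results")
    os.makedirs(results_dir, exist_ok=True)

    shutil.copy(shared_input, local_input)                  # stage in once

    # ... run the I/O-heavy processing here, reading local_input and
    #     writing into results_dir on the local SSD ...

    shutil.copytree(results_dir, "/shared/project/results",
                    dirs_exist_ok=True)                     # stage out when done

On a real cluster the scheduler (e.g. SLURM) would hand out the scratch
location and clean it up afterwards, but the stage-in / compute / stage-out
pattern is the same.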

Currently the Burst Buffer is still in its experimental "Phase I", and more
benchmark numbers are needed, but I thought it might be of interest to folks
here.
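
On the benchmarking note, and related to the NFS-versus-parallel-file-system
discussion quoted below: a quick-and-dirty way to see the "many readers on
one disk" contention on your own storage is to time several processes reading
the same large file at once. This is only a single-node toy (the file path is
a placeholder), not a proper cluster benchmark, but the trend is usually
telling:

    # concurrent_read_bench.py -- toy benchmark; the test file path is hypothetical
    import multiprocessing as mp
    import time

    PATH = "/shared/project/large_particle_stack.mrcs"  # file on the storage under test
    CHUNK = 4 * 1024 * 1024                             # 4 MiB reads

    def reader(_):
        total = 0
        with open(PATH, "rb") as f:
            while True:
                buf = f.read(CHUNK)
                if not buf:
                    break
                total += len(buf)
        return total

    if __name__ == "__main__":
        for nproc in (1, 4, 16, 64):
            t0 = time.time()
            with mp.Pool(nproc) as pool:
                nbytes = sum(pool.map(reader, range(nproc)))
            dt = time.time() - t0
            print(f"{nproc:3d} readers: {nbytes / dt / 1e6:.0f} MB/s aggregate")

If the aggregate throughput stops scaling (or drops) as the number of readers
grows, the shared storage is the bottleneck, which is exactly where local SSD
scratch or a more parallel file system helps.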

Cheers,
Sam

----------------------------------------------
Sam Li, Ph.D.
Department of Biochemistry & Biophysics
University of California, San Francisco
Mission Bay, Genentech Hall, Room S416
600, 16th Street
San Francisco, CA 94158, USA
Tel: 1-415-502-2930
Fax: 1-415-476-1902
----------------------------------------------

On Mon, Jan 4, 2016 at 4:38 AM, Reza Khayat <rkhayat at ccny.cuny.edu> wrote:

> Thanks to all, and particularly Steve for being so thorough. I think we’ll
> go with a hybrid solution for our workstations (Steve’s third-to-last
> paragraph). I’ll talk to the other PIs about our cluster needs.
>
> Best wishes and Happy New Year,
> Reza
>
> Reza Khayat, PhD
> Assistant Professor
> City College of New York
> 85 St. Nicholas Terrace CDI 12308
> New York, NY 10031
> (212) 650-6070
> www.khayatlab.org
>
> > On Jan 4, 2016, at 2:17 AM, Sjors Scheres <scheres at mrc-lmb.cam.ac.uk> wrote:
> >
> > Hi Reza, Ben, Steve & the rest,
> >
> > Happy New Year! May 2016 bring you all many side chains. :-)
> >
> > We've tested InfiniBand versus our standard Ethernet and deemed the cost
> > not worth it, as we saw hardly any improvement in the speed of RELION. We
> > did, however, see a >30% improvement in speed when switching our cluster
> > from NFS to a Fraunhofer file system. The issue arises when one starts
> > accessing the same disk from several hundred MPI processes, something
> > that can easily happen when dozens of users share the same system.
> > I think I agree with Steve that for smaller, group-wide systems NFS will
> > be better, as it is much easier, but for larger clusters a more parallel
> > file system will probably be worth the effort. At LMB, our (small) home
> > areas are on NFS, while the disks on our cluster were Fraunhofer and are
> > now GPFS.
> > HTH,
> > Sjors
> >
> >
> >> Hi Reza,
> >>
> >> Our cluster was recently upgraded to InfiniBand and switched to GPFS.
> >> Since then, the rate of mysterious MPI crashes/hangs during RELION runs
> >> has decreased quite a lot, and RELION runs faster. Is it due to GPFS, to
> >> InfiniBand, or to the combination of both? I do not know.
> >>
> >> Cheers
> >> Ben
> >>
> >> On Jan 3, 2016, at 18:53, Reza Khayat <rkhayat at ccny.cuny.edu> wrote:
> >>
> >> Hi,
> >>
> >> Can anyone describe some of their experience with deploying and using a
> >> distributed file system for image analysis? Is it appropriate to say
> >> that NFS is antiquated, slow, and less secure than newer systems like
> >> Lustre, Gluster, Ceph, PVFS2, or Fraunhofer?
> >>
> >> Best wishes,
> >> Reza
> >>
> >> Reza Khayat, PhD
> >> Assistant Professor
> >> City College of New York
> >> 85 St. Nicholas Terrace CDI 12308
> >> New York, NY 10031
> >> (212) 650-6070
> >> www.khayatlab.org
> >>
> >> _______________________________________________
> >> 3dem mailing list
> >> 3dem at ncmir.ucsd.edu
> >> https://mail.ncmir.ucsd.edu/mailman/listinfo/3dem
> >>
> >
> >
> > --
> > Sjors Scheres
> > MRC Laboratory of Molecular Biology
> > Francis Crick Avenue, Cambridge Biomedical Campus
> > Cambridge CB2 0QH, U.K.
> > tel: +44 (0)1223 267061
> > http://www2.mrc-lmb.cam.ac.uk/groups/scheres
> >
>