[3dem] weird server issues, suggestions/advice would be helpful!

Matthias Wolf matthias.wolf at oist.jp
Mon Mar 23 17:03:13 PDT 2020


Hi Hideki,

I think you have a power issue.
I have had two of the predecessors of these boxes for eight years. They ran stably with 8x GTX 590, but only with 6x GTX 690 and only 6x GTX 1080 Ti. The latter draws 250 W of power according to spec, and the rest of the hardware needs another 600 W or so.

The triple PSUs on this Tyan box yield 2000 W at 100 V input voltage, but 3200 W at 220 V. You could try connecting it to 3x 220 V circuits if you have access to those (each must have a 15 A rating or you will blow the breaker under full load). Otherwise I am afraid you just don't get the electrical power required. Mind you, running a box like that under constant full load is actually not cheap: at 20 JPY/kWh, a 3 kW draw costs 60 JPY/hour = 1,440 JPY/day = about 43k JPY/month = about 518k JPY/year. And then don't forget about the cooling - you need proper A/C, which also uses electricity. Better in a data center...
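For reference, that cost arithmetic can be checked with a short script (a sketch, assuming the same 3 kW constant draw and 20 JPY/kWh rate as above; the yearly figure uses 30-day months, as the estimate in the text does):

```python
# Rough running-cost estimate for a GPU box under constant full load.
# Assumptions (same as above): ~3 kW total draw, 20 JPY per kWh.
def running_cost_jpy(draw_kw=3.0, rate_jpy_per_kwh=20.0):
    per_hour = draw_kw * rate_jpy_per_kwh   # 3 kW * 20 JPY/kWh = 60 JPY/h
    per_day = per_hour * 24                 # 1,440 JPY/day
    per_month = per_day * 30                # 43,200 JPY/month (30-day month)
    per_year = per_month * 12               # 518,400 JPY/year
    return per_hour, per_day, per_month, per_year

print(running_cost_jpy())  # (60.0, 1440.0, 43200.0, 518400.0)
```

Cooling roughly doubles the electricity drawn, so the real bill is higher still.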

In addition, I removed all the fans from my GPUs in that box, because I found they run 20 degrees colder without them. The GPU fans expel air to both sides, which works against the strong airflow from the six big (and loud) case fans. You can read out the temperatures using 'nvidia-smi'. If a GPU reports more than 80 C, you might have a thermal problem.
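A quick way to flag hot GPUs from the command line (the --query-gpu flags are standard nvidia-smi options; since no GPU is assumed here, the filtering step below runs on sample output of the same shape):

```shell
#!/bin/sh
# On the box itself, per-GPU temperatures can be read with:
#   nvidia-smi --query-gpu=index,temperature.gpu --format=csv,noheader
# That prints lines like "0, 83". The awk filter below warns for any
# GPU above 80 C; here it is fed two sample lines instead of live data.
printf '0, 83\n1, 76\n' | awk -F', ' '$2 > 80 {print "GPU " $1 " above 80C: " $2}'
```

Piping the real nvidia-smi query into the same awk filter gives a one-line thermal check.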

If you can't upgrade the external power to 200-240 V, try running your box with 4 GPUs first, then keep adding them until it becomes unstable.

   Matthias
________________________________
From: 3dem <3dem-bounces at ncmir.ucsd.edu> on behalf of Shigematsu, HIDEKI <hideki.shigematsu at riken.jp>
Sent: Tuesday, March 24, 2020 7:54 AM
To: Liz Kellogg <lizkellogg at gmail.com>
Cc: 3dem at ncmir.ucsd.edu <3dem at ncmir.ucsd.edu>
Subject: Re: [3dem] weird server issues, suggestions/advice would be helpful!

Hi Liz,


I see similar symptoms with a 4-GPU box of 2080 Tis. I put two 1000 W PSUs in this box, one for three GPUs and the other for one GPU plus a 32-core Ryzen Threadripper. When I limit the power consumption of the GPUs to 250 W, it lasts longer. A friend of mine had the same issue with the same configuration, with the PSUs hooked to a 100 V supply; after he switched one of the PSUs to a 1600 W unit, it works fine.
I think you had better try limiting the power of the GPUs with
nvidia-smi -pl 250
or remove some of the GPUs from the machine and see whether it works for some GPU jobs.
The worst case is a PSU that limits the power delivered on the specific ports feeding the PCIe devices; in that case, limiting the GPUs' power consumption might work, but removing some of the GPUs will not.
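For an 8-GPU box the limit is usually set per device. A dry-run sketch (the -i and -pl flags are standard nvidia-smi options; this version only prints the commands - remove the echo and run as root to actually apply the cap):

```shell
#!/bin/sh
# Dry run: print the command that would cap each GPU at 250 W.
# Drop 'echo' (and run with root privileges) to apply for real.
for i in 0 1 2 3; do
    echo nvidia-smi -i "$i" -pl 250
done
```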

Best,

Hideki
----
Hideki Shigematsu Ph. D.

RIKEN SPring-8 Center, Life Science Research Infrastructure Group
1-1-1 Kouto Sayo-cho Sayo-gun, Hyogo 679-5148 Japan
Phone +81-791-58-0803 (Ext.7868)
FAX +81-791-58-2834

> On 2020/03/24 at 7:25, Liz Kellogg <lizkellogg at gmail.com> wrote:
>
> Hi 3dem-ers,
>
> I hope everyone is safely at home.  I have a non COVID-19 related problem that I hope others can help advise on.
>
> I bought an 8-GPU server last April for my lab's image-processing work. Everything seemed fine initially; however, once I started getting more users, the server became noticeably unstable around December and started randomly rebooting itself. It was happening so often that at its worst we couldn't get through a single refinement job without a reboot. Here are some technical details and hints at what could be going wrong:
>
> Configuration of the server:
> TYAN Thunder HX FT77D-B7109 8GPU 2P 4x3
> Intel Xeon Gold 6138 20C 2.0-3.7 GHz
> 384 GB DDR4 2400/2666 ECC/REG (12x32GB)
> SamSung 480GB 883 DCT SSD x 2
> Seagate 12TB SAS x 16
> GeForce RTX-2080Ti 11 GB x 8
>
> The most noticeable errors we see when the server is up are the GPU devices becoming undetectable, along the lines of:
>
> $ nvidia-smi
> Unable to determine the device handle for GPU 0000:B1:00.0: GPU is lost.  Reboot the system to recover this GPU
>
> Or
>
> $ nvidia-smi
> No devices were found
>
> Replacing the GPUs (which we did back in January) did not seem to help; we are back to the same issues.
> We also tried updating the GPU drivers to version 440.33.01 (previously 410.48).
> However, we see pretty much the same behavior before and after the driver update.
>
> Since updating the drivers made no difference, I doubt it's a driver issue. It could be a PCI bus issue, but that doesn't seem likely to me because each of the 8 cards tends to go down randomly (during one strange episode, they were flickering on and off). My gut feeling is that there is either a power issue, where the system's power was not dimensioned properly (though looking at the chassis specs this seems unlikely as well), or a cooling issue. I am planning to monitor the GPU temperatures under heavy load (I wrote a bash script using nvidia-smi -q) and see whether the current temperature exceeds each GPU's maximum.
>
> Any idea what could be going on? I think I have a pretty standard server config... has anyone experienced similar problems? For anyone whose configuration works well, would you mind sharing your specs and NVIDIA driver versions? Even if they're exactly the same specs, that would help. Any non-standard steps to configure the machine or the drivers? I am mystified as to why we are experiencing these issues... and it doesn't help that we're all working from home at the moment :*(
>
> Thanks everyone, stay safe.
>
> Best wishes,
>
> Liz
>
> Elizabeth H. Kellogg, Ph.D.
> Assistant Professor, Cornell University
> Molecular Biology and Genetics
>
> _______________________________________________
> 3dem mailing list
> 3dem at ncmir.ucsd.edu
> https://mail.ncmir.ucsd.edu/mailman/listinfo/3dem
