Our servers that have "ML" in their names are built for ML/DL applications. These workloads are more demanding than rendering (for example, in PCIe bandwidth), and we configured our ML servers accordingly. All our "ML" servers:
For preprocessing before training, Tim Dettmers suggests that "you do not need a very good CPU. I recommend a minimum of 2 threads per GPU." For preprocessing while training, he recommends a minimum of 4 threads per GPU.
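The rule of thumb above can be sketched as a small helper. This is our own illustration of Dettmers' recommendation, not code from his guide; the function name and parameters are ours.

```python
# Sketch of the threads-per-GPU rule of thumb quoted above:
# at least 2 CPU threads per GPU when preprocessing happens before training,
# and at least 4 threads per GPU when preprocessing runs while training.

def min_cpu_threads(num_gpus, preprocess_while_training=False):
    """Return the minimum recommended CPU thread count for `num_gpus` GPUs."""
    threads_per_gpu = 4 if preprocess_while_training else 2
    return num_gpus * threads_per_gpu

print(min_cpu_threads(2))                                  # 4 threads for a 2x GPU server
print(min_cpu_threads(2, preprocess_while_training=True))  # 8 threads
```

So a 2x GPU server should be paired with at least a 4-thread CPU, or 8 threads if you preprocess on the fly.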
|GPU|Single Precision (TFLOPS)|CUDA Cores|Memory Bandwidth (GB/s)|VRAM|
|---|---|---|---|---|
|GTX 1080|8.9|2560|320|8 GB GDDR5X|
|Tesla K80|4.3|2496|240|12 GB GDDR5|

(K80 figures are per GPU; the K80 is a dual-GPU board.)
No hidden charges! No setup fee. You pay what you see.
Need a different configuration than the servers below? Please contact us.
Commitment discounts are available when you rent for 2 months or more.
Once you contact us and tell us what you need, we will send you the payment details if a matching server is available. After payment, you will receive your login details for the server.
One of the best ways to thank us is to send us your feedback!
"I'm happy to recommend Render Rapidly to other Redshift for Maya users. The servers are fast and stable and the administrator is very helpful."
"I once had a very large dataset to train against. The task required lots of GPU time, so AWS or GCP was not affordable. After a long search of GPU server providers, I found Renderrapidly, and their 2x 1080 server saved my life. I trained two models, one on each GPU, and this doubled my productivity. I also found that the GTX 1080 is a lot faster than the K80 (p2.xlarge). And yet, the cost of a GTX 1080 was literally a fraction of what I would pay for a p2.xlarge! I definitely recommend Renderrapidly to anyone looking to train their models for less."
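The one-model-per-GPU setup this customer describes is commonly done by restricting each training process to a single device with `CUDA_VISIBLE_DEVICES`. A minimal sketch, assuming a hypothetical `train.py` script (not part of this page):

```python
import os
import subprocess

def gpu_env(gpu_id):
    """Environment dict that makes only one GPU visible to a child process."""
    env = dict(os.environ)
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    return env

def launch_per_gpu(script, num_gpus):
    """Start one copy of `script` per GPU; each process sees a single device."""
    return [subprocess.Popen(["python", script], env=gpu_env(g))
            for g in range(num_gpus)]

# On a 2x GTX 1080 server, this would run two independent trainings:
# launch_per_gpu("train.py", num_gpus=2)
```

Inside each process, the framework sees its assigned card as device 0, so the training script needs no GPU-selection logic of its own.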
"I've worked with Render Rapidly several times. I'm impressed with the stable and quality service. Thanks to their farm, I have more confidence to take on larger jobs because I don't need to worry about the render times. Keep up the great service!"