Hey folks! I’m Jonathan from TensorDock, and we’re building a cloud GPU marketplace. We want to make GPUs truly affordable and accessible.
I once started a web hosting service on self-hosted servers in middle school. But building servers isn’t the same as selling cloud: there’s plenty of open source software for managing a homelab for side projects, but nothing that lets you commercialize it.
Large cloud providers charge obscene prices — so much so that they can often pay back their hardware in under 6 months with 24x7 utilization.
We are building the software that allows anyone to become the cloud. We want to get to a point where any [insert company, data center, cloud provider with excess capacity] can install our software on their nodes and make money. They might not pay back their hardware in 6 months, but they don’t need to do the grunt work — we handle support, software, payments, etc.
In turn, you get to access a truly independent cloud: GPUs from around the world from suppliers who compete against each other on pricing and demonstrated reliability.
So far, we’ve onboarded quite a few GPUs, including 200 H100 SXMs available from just $2.49/hr. But we also have A100 80Gs from $1.63/hr, A6000s from $0.47/hr, A4000s from $0.13/hr, and more. Because we are a true marketplace, prices fluctuate with supply and demand.
All are available in plain Ubuntu 22.04 or with popular ML packages preinstalled — CUDA, PyTorch, TensorFlow, etc. — and all are hosted by a network of mining farms, data centers, and businesses that we’ve closely vetted.
If you’re looking for hosting for your next project, give us a try! Happy to provide testing credits, just email me at jonathan@tensordock.com. And if you do end up trying us, please provide feedback below [or directly!] :)
---
Deploy a GPU VM: https://dashboard.tensordock.com/deploy
CPU-only VMs: https://dashboard.tensordock.com/deploy_cpu
Apply to become a host: https://tensordock.com/host
- Jonathan
We have been a TensorDock customer for more than two years now. We run our Celery nodes on TensorDock — alongside other nodes on AWS (startup credits). The TensorDock nodes always spin up faster and load our models faster than AWS (which has some strange IOPS limits).
Also, their customer service is insane. We usually get replies within a few minutes, even at 1 AM!
Question — do you guys plan to offer some sort of managed K8s service in the future? Something like GKE?