Currently in alpha

Get a local GPU in
one command

Install Specter, pick your GPU, and start running. PyTorch, JAX, CUDA C++ — everything just works. It's that simple.


Join 46 others on the waitlist

terminal
$ pip install specter
Successfully installed specter
$ specter install h100
Pulling NVIDIA H100 80GB...
✓ GPU ready
$ python -c "import torch; print(torch.cuda.is_available())"
True
$ nvidia-smi
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15              Driver Version: 550.54.15      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|=========================================+========================+======================|
|   0  NVIDIA H100 80GB HBM3          On  | 00000000:00:04.0   Off |                    0 |
| N/A   31C    P0              72W / 700W |        0MiB / 81559MiB |      0%      Default |
+-----------------------------------------+------------------------+----------------------+
$

Three commands. That's it.

No cloud console. No VM setup. No driver hell.

1

Install Specter

pip install specter

One package. No dependencies, no configuration, no account required.

2

Pick your GPU

specter install h100

Choose from H100, A100, L40S, or whatever you need. Available in seconds.

3

Start building

python train.py

Your existing code runs without changes.

Your stack. Zero changes.

Full CUDA runtime and driver compatibility. If it runs on a GPU, it runs on Specter.

🔥

PyTorch

torch.cuda works out of the box. Train models, run inference, debug with breakpoints — same workflow you already use.

torch.cuda.is_available() # True
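A minimal sanity check along these lines (a sketch, assuming the Specter GPU has been installed; it falls back to CPU so the script also runs on machines without a GPU attached):

```python
import torch

# Pick the Specter-provided GPU if PyTorch can see one, else fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# A small matmul; on a Specter device this executes on the virtual H100.
x = torch.randn(256, 256, device=device)
y = x @ x.T
print(device, y.shape)
```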

JAX

Full GPU acceleration for JAX. XLA compilation, jit, pmap — all hardware-accelerated through Specter.

jax.devices() # [GpuDevice(id=0)]
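A quick way to exercise this (a sketch, assuming a Specter GPU is attached; on a machine without one, JAX falls back to its CPU backend):

```python
import jax
import jax.numpy as jnp

# List the devices JAX can see -- a GPU device when Specter is installed.
print(jax.devices())

# A jit-compiled function runs through XLA on whichever backend is active.
@jax.jit
def double(x):
    return 2.0 * x

out = double(jnp.arange(4.0))
print(out)  # [0. 2. 4. 6.]
```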
🔧

CUDA C++

Runtime API, Driver API, compile and run custom kernels. nvcc works. cuBLAS, cuDNN, cuFFT — the full toolkit.

nvcc -o kernel kernel.cu && ./kernel
📊

nvidia-smi

Shows up exactly like a real GPU. Monitor utilization, memory, temperature, processes — all your existing tooling works.

nvidia-smi # NVIDIA H100 80GB HBM3

Pay for what you use. Actually.

Billed on actual GPU compute, not wall-clock time. Idle time costs you nothing.

Containers & VMs

Billed per hour, running or not

Spin up an instance, forget about it for lunch, come back to a bill. Paying for idle time is the norm.

8 hours reserved, 2 hours used — pay for 8
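As a toy illustration of the difference (the hourly rate here is hypothetical, not Specter's actual pricing):

```python
# Toy comparison: wall-clock billing vs compute-time billing.
# RATE_PER_HOUR is a made-up figure for illustration only.
RATE_PER_HOUR = 2.50

hours_reserved = 8.0   # instance was up for 8 hours
hours_computing = 2.0  # GPU was actually busy for 2 of them

wall_clock_bill = hours_reserved * RATE_PER_HOUR   # per-hour VM billing
compute_bill = hours_computing * RATE_PER_HOUR     # compute-based billing
print(wall_clock_bill, compute_bill)  # 20.0 5.0
```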

Stop waiting for GPUs

Specter is in alpha. Join the waitlist to get early access and help shape the future of GPU development.
