serverless · gpu · HQ US · est. 2021

Modal

Python-first serverless for ML, batch jobs, and long-running compute.

Modal is the most Python-idiomatic serverless platform on the market. You add a decorator, type `modal run`, and your function runs on Modal's infrastructure with the dependencies you declared in code. GPUs (A10, A100, H100) are first-class and priced per-second.
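The decorator workflow described above can be sketched as follows. This is a hedged illustration, not official sample code: the app name, the function body, and the `openai-whisper` dependency are assumptions, and running it requires the `modal` package plus an authenticated Modal account.

```python
# sketch.py — illustrative Modal app; assumes `pip install modal` and a
# configured account. Names and the model choice are assumptions.
import modal

app = modal.App("whisper-example")

# Dependencies are declared in code instead of a Dockerfile.
image = modal.Image.debian_slim().pip_install("openai-whisper")

@app.function(image=image, gpu="A10G")  # GPU time billed per-second
def transcribe(audio_path: str) -> str:
    import whisper  # available inside the image defined above
    model = whisper.load_model("base")
    return model.transcribe(audio_path)["text"]

@app.local_entrypoint()
def main():
    # `modal run sketch.py` executes this entrypoint locally and runs
    # transcribe() remotely on Modal's infrastructure.
    print(transcribe.remote("sample.mp3"))
```

On first run the image is built remotely; after that, you pay only for the seconds the function actually executes.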

For an indie ML or data project, Modal removes the worst part of the stack — packaging Python and procuring GPUs. The $30/month free credit covers a non-trivial amount of inference. It's the wrong tool for a Rails app, but the right tool for "I want to run Whisper transcription for paying users this weekend."

Pros & cons

What works
  • + Run Python functions in the cloud with `modal run`, no Dockerfile needed
  • + GPU access (A10, A100, H100) priced per-second
  • + Excellent cold-start performance even for big Python images
  • + $30/month free credit is enough for many indie ML side projects
What doesn't
  • – Python-only — no path for Node, Go, Rust, etc.
  • – Pay-per-second model means estimating monthly cost requires actual usage data
  • – Not the right shape for a typical Django web app

Plans & pricing

| Plan | Price | CPU | RAM | Disk | Bandwidth |
| --- | --- | --- | --- | --- | --- |
| Starter (free credits) | $30/mo free credits | — | — | — | — |
| CPU (per-second) | $0.000111/s per vCPU | — | — | — | — |
| A10G GPU | $0.000306/s | — | 24 GB | — | — |

Free tier: $30/month free compute credits
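To make the per-second rates above concrete, here is a back-of-envelope monthly estimate. The usage figures are invented assumptions for illustration, not measurements:

```python
# Monthly cost from Modal's listed per-second rates; the usage
# numbers below are made-up assumptions for illustration.
CPU_RATE = 0.000111    # $/s per vCPU
A10G_RATE = 0.000306   # $/s per A10G GPU

def monthly_cost(gpu_seconds: float, cpu_seconds: float, vcpus: int = 1) -> float:
    """Gross compute cost before the $30/mo free credit is applied."""
    return gpu_seconds * A10G_RATE + cpu_seconds * vcpus * CPU_RATE

# e.g. 10 GPU-hours of inference plus 20 CPU-hours per month:
cost = monthly_cost(gpu_seconds=10 * 3600, cpu_seconds=20 * 3600)
print(f"${cost:.2f}")  # ≈ $19.01 — still under the $30 free credit
```

This is why the cons list flags cost estimation: the bill scales with actual seconds of compute, so without real usage data the estimate is only as good as the assumed hours.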

Features at a glance

× IPv6
× Snapshots
✓ DDoS protection
× Private network
✓ Object storage
× Managed Postgres
× Managed Redis
× One-click apps
✓ Public API
× Terraform provider
Backups: none

Similar hosts