Turso takes SQLite, the most battle-tested embedded database on the planet, and distributes it globally across edge locations. The result is a database that responds in single-digit milliseconds no matter where your users are. I have been watching the edge database space closely, and Turso is the option that genuinely delivers on the promise. You write standard SQL. Your data replicates automatically to the regions closest to your users. And because it is built on SQLite under the hood, the reliability characteristics are inherited from decades of proven engineering.
The traditional database model assumes your data lives in one place. One region, one data center, one server. If your users are in Tokyo and your database is in Virginia, every query crosses the Pacific Ocean twice. That adds 150 to 200 milliseconds of pure network latency to every database read, and no amount of optimization can fix physics. Glauber Costa, a former Linux kernel developer at Red Hat and later an engineer at ScyllaDB, founded Turso in 2022 to solve exactly this problem. The company built libSQL, an open-source fork of SQLite that adds features the original project intentionally excluded: server mode, replication, and extensions. Turso wraps libSQL in a global distribution layer. You create a database, choose your primary region, and add replicas in any of 30+ edge locations worldwide. Writes go to the primary; reads are served from the nearest replica. The company raised $12.6 million in funding through 2023 and launched its general availability product in early 2024. The key insight: most web applications are read-heavy (dashboards, content pages, user profiles), so putting read replicas at the edge eliminates latency for 90% or more of your database operations.
The embedded replica feature is what separates Turso from every other database-as-a-service. Most edge databases work by running a proxy at the edge that forwards queries to a remote database. Turso does something fundamentally different. It places an actual SQLite database file on the same machine as your application, right next to your code, and keeps it synchronized with the primary database in the cloud. When your application reads data, it reads from a local file. Zero network latency. Zero. The read is as fast as reading from disk, which on modern SSDs is measured in microseconds. Writes still go to the primary and propagate back to replicas, but reads are local and effectively instantaneous. For my client projects that deploy to Cloudflare Workers or Vercel Edge Functions, this architecture means the database query is often the fastest part of the entire request, not the slowest. An edge function that would normally add 150ms of database latency instead adds less than 1ms. Over thousands of requests per second, that difference is transformative for user experience.