General Catalyst is excited to be working with Nikita Shamgunov, Heikki Linnakangas, Stas Kelvich, and the whole team at Neon. Every so often, a foundational technology shift creates the opportunity in the database industry to build a transformational product and an important company. Neon is doing this by taking the cloud's separation of storage and compute and applying it to OLTP databases, offering a serverless Postgres that we believe has the potential to scale and perform like nothing before it.
I have been in and around the database industry for some time. It was the mid-2000s when I first assumed responsibility for Microsoft's RDBMS product and business. I hadn't come into that job through the paths of my predecessors, who had largely come up writing database code. While I had a lot to learn in the job, and the culture allowed for learning, I did perhaps have the advantage of coming in with a beginner's mind. It was an amazing time to have an open mind - Jim Gray, David DeWitt, Paul Larson, and others were involved with this team and its work. Not to mention the team itself - Microsoft had done a good job collecting and cultivating talent in the database space. That talent includes Neon's CEO Nikita Shamgunov - more on that later.
One of the questions I was curious about was the so-called CAP theorem. This is how the fundamental trade-offs of distributed systems hit transactional databases. Formulated by Eric Brewer in 2000 and proved shortly after, the CAP theorem states that a distributed system can only guarantee two of Consistency, Availability, and Partition tolerance at a time. And it's actually why things like eventual consistency as a methodology for NoSQL databases exist - an explicit trade-off of consistency, made in order to design for the other properties and thereby achieve higher scale and performance.
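To make that trade-off concrete, here is a minimal, purely illustrative sketch - a toy two-replica store, not any real database - showing the choice a system must make when a partition occurs: refuse writes to stay consistent, or accept them and reconcile the replicas after the partition heals.

```python
# Toy sketch of the CAP trade-off: two replicas of a single key,
# with a simulated network partition. Purely illustrative.

class Replica:
    def __init__(self):
        self.value = None

class ToyStore:
    def __init__(self, prefer_consistency: bool):
        self.a, self.b = Replica(), Replica()
        self.partitioned = False              # simulated network split
        self.prefer_consistency = prefer_consistency

    def write(self, value):
        if self.partitioned:
            if self.prefer_consistency:
                # CP choice: refuse the write rather than let replicas diverge.
                raise RuntimeError("unavailable during partition")
            # AP choice: accept the write on one side; replicas now disagree
            # until the partition heals (eventual consistency).
            self.a.value = value
            return
        self.a.value = self.b.value = value   # normal path: both replicas agree

    def heal(self):
        # Partition heals; reconcile (here, trivially: side 'a' wins).
        self.partitioned = False
        self.b.value = self.a.value

store = ToyStore(prefer_consistency=False)
store.partitioned = True
store.write("x")                      # accepted: available, but inconsistent
print(store.a.value, store.b.value)   # -> x None
store.heal()
print(store.a.value, store.b.value)   # -> x x
```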
But the world likes consistency. How can we have scale and performance while still having consistency? Is there a different trade-off to be made? Neon is applying the separation of storage and compute to this problem. Separating storage from compute in the old on-prem world was a challenge - the networking and system architecture didn't exist to let any compute node access any storage with the performance characteristics needed to run database workloads. The cloud architectures of AWS, Azure, and Google solve this - and it's what Snowflake and the other modern cloud data warehouses take advantage of to get a 10x result on scale, performance, and price/performance. Applying the same idea to Postgres, with a new storage engine under the covers, lets Neon get consistency right while providing a scalable, high-performance database service - one that from a developer's POV is serverless: autoscaling, branching, and effectively unlimited in size. And the way this is built also avoids cloud-provider lock-in - total independence and ease of migration.
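From the application's point of view, none of that architecture leaks through: a serverless Postgres still speaks ordinary Postgres. Here is a minimal sketch using the standard psycopg2 driver - the hostname, database name, and credentials are hypothetical placeholders, not real endpoints:

```python
# Sketch: connecting to a serverless Postgres endpoint with a standard
# Postgres driver. Host, dbname, user, and password are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="my-project.example-postgres-host.cloud",  # hypothetical endpoint
    dbname="main",
    user="app_user",
    password="...",        # supply via env var / secret manager in practice
    sslmode="require",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")  # plain SQL - nothing provider-specific
    print(cur.fetchone()[0])
conn.close()
```

Because the wire protocol and SQL surface are just Postgres, the same code runs against any Postgres-compatible service - which is the substance of the no-lock-in claim above.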
I’ve known Nikita for some time - we worked together at Microsoft, and I spent a little time with him and the team when SingleStore was still MemSQL, helping them get onto their cloud product path. Nikita is technically very insightful, has a keen eye for engineering talent, and has really come into his own when it comes to building businesses and understanding markets. The technical vision he set for Neon is hard to execute on - this is real engineering. Nikita assembled a very good team to build it: Heikki Linnakangas has years of systems engineering experience and is one of the key Postgres committers, and Stas Kelvich is well known in Postgres circles, having built large distributed database systems at Yandex. The team they have assembled is truly next-level for this kind of build.