Show HN: GoatDB – A lightweight, offline-first, realtime NoDB for Deno and React

github.com

88 points by justagoat 3 days ago

Hey HN,

We've been experimenting with a real-time, version-controlled NoDB for Deno & React called GoatDB. The idea is to remove backend complexity while keeping apps fast, offline-resilient, and easy to self-host.

* Runs on the client – No backend required; incremental queries keep things efficient.

* Self-hosted & lightweight – Deploy a single executable, no server stack needed.

* Offline-first & resilient – Clients work independently and can restore state after server outages.

* Edge-native & fast – Real-time sync happens locally with minimal overhead.

Why We Built It: We needed something that’s simpler than Firebase, lighter than SQLite, and easier to self-host. GoatDB is great for realtime collaboration, offline-first apps, prototyping, single-tenant apps, or ultra-low-cost multi-tenant setups—without backend hassles.

Would love feedback from HN:

* Are there specific features or improvements that would make it more useful?

* How do you handle similar problems today, and what’s missing in existing solutions?

If you're interested in experimenting or contributing, the repo is here: https://github.com/goatplatform/todo

Looking forward to your thoughts!

rglullis 3 days ago

What is the story for querying? Can it filter and rank objects?

  • ofriw 3 days ago

    Simple. You write plain TS functions for sorting and filtering. GoatDB runs them as a linear scan inside a coroutine so the scan doesn't block the UI thread. From that point on, it uses its version-control underpinnings to incrementally update the query results
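The scan-and-filter approach described here can be sketched roughly as follows. This is an illustrative sketch of the idea only, not GoatDB's actual API; `Item` and `runQuery` are made-up names:

```typescript
// Hypothetical sketch of a non-blocking linear-scan query.
type Item = { id: string; title: string; done: boolean };

// Plain TS predicate and comparator, as described above.
const filter = (i: Item) => !i.done;
const sort = (a: Item, b: Item) => a.title.localeCompare(b.title);

// Scan in chunks, yielding to the event loop between chunks
// so the UI thread stays responsive.
async function runQuery(
  items: Item[],
  pred: (i: Item) => boolean,
  cmp: (a: Item, b: Item) => number,
  chunkSize = 1000,
): Promise<Item[]> {
  const results: Item[] = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      if (pred(item)) results.push(item);
    }
    // Yield control back to the event loop between chunks.
    await new Promise((r) => setTimeout(r, 0));
  }
  return results.sort(cmp);
}
```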

koolala 3 days ago

Could this be used without Deno or React, just a vanilla webpage? Can it p2p sync two client databases with WebRTC?

Does it use OPFS or IndexedDB?

  • ofriw 3 days ago

    Currently only Deno (React is optional), but we're working on supporting other frameworks and runtimes.

    GoatDB has backends for both OPFS and IndexedDB

    • johnnypangs 3 days ago

      I’m a bit confused, if it runs in the client why does it require deno?

      • ofriw 3 days ago

        It's a symmetric design that runs on both the client and the server

        • thedevilslawyer 3 days ago

          I don't think you've answered the question - if it runs in the browser, it runs in an ES6-compatible environment. So why doesn't it support Node or Bun by default? What specific Deno facility does it use that works in a browser, but not on Node/Bun?

          • ofriw 2 days ago

            A few things which are not a big deal, but do require some work:

            - Back when we started, only Deno could compile to a single executable

            - We're using Deno's module resolution, which is superior in every way (with an ESBuild plugin)

            - Deno's filesystem API

            All of the above can be implemented for other runtimes, and it's definitely on our roadmap

replete 3 days ago

I evaluated reactive databases for a client side app recently - what a mess to be found there. Node polyfills in the browser? No thanks. So yes, there is a need and I hope this could be an option in the future.

  • ofriw 3 days ago

    Thank you for the kind words, it really motivates us!

gr4vityWall 3 days ago

Seems like the database is reactive on the client, and React components get re-rendered automatically when content changes.

How does it compare to Minimongo?

  • ofriw 3 days ago

    Similar effect, completely different tech. GoatDB is not another b-tree wrapped as a DB, but a distributed, replicated, commit graph similar to Git

monideas 3 days ago

I was thinking about CRDTs last night and now this. This is awesome.

theultdev 3 days ago

This is nice as long as your data doesn't exceed allowed memory.

  • ofriw 3 days ago

    Right, but you can actually push it a bit further than that by explicitly unloading portions of the data to disk, kind of like closing a file in a desktop app

tkone 3 days ago

Why not just use pouchdb? It's pretty battle-tested, syncs with couchdb if you want a path to a more robust backend?

edit: https://pouchdb.com/

  • agrippanux 3 days ago

    But how many goats does pouchdb have? I'm betting 0.

    • tkone 3 days ago

      you can fit a lot of goats into a pouch, depending on the size of the pouch

      • stuartjohnson12 3 days ago

        "A pouch is most useful when it is empty" - Confuseus

  • ofriw 3 days ago

    Scale really. GoatDB easily handles hundreds of thousands of items being edited in realtime by multiple users

    • CyberDildonics 3 days ago

      Hundreds of thousands of items and multiple users could be done on a $5 Pi Zero 2 W (1GHz quad-core A53) with the C++ standard library and a mutex.

      People were working at this scale 30 years ago on 486 web servers.

      • jaennaet 3 days ago

        I swear we've been going backwards for the past 15 years

      • ofriw 2 days ago

        Doing concurrent editing AND supporting offline operation?

        • CyberDildonics 2 days ago

          What do you mean by "offline operation"? Which part is non-trivial?

          • ofriw 2 days ago

            Your server/network goes down, but you still want to maintain availability and let your users view and manipulate their data. So now users make edits while offline, and when they come back online you discover they made edits to the same rows in the DB. Now what do you do?

            The problem really is about concurrency control - a DB creates a single source of truth so it can be either on or off. But with GoatDB we have multiple sources of truth which are equally valid, and a way to merge their states after the fact.

            Think about what Git does for code - if GitHub somehow lost all their data, every dev in the world still has a valid copy and can safely restore GitHub's state. GoatDB does the same but for your app's data rather than source code

            • CyberDildonics 2 days ago

              > So now users make edits while offline, and when they come back online you discover they made edits to the same rows in the DB. Now what do you do?

              Store changes in a queue as commands and apply them in between reads if that's what you want. This is really simple stuff. A few hundred thousand items and a few users is not a large scale or a concurrency problem.
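The command-queue idea described here can be sketched in a few lines. This is illustrative only; `Command` and `OfflineQueue` are made-up names, and the replay uses a simple last-writer-wins rule:

```typescript
// Minimal sketch of an offline command queue with last-writer-wins replay.
type Command = { field: string; value: number };

class OfflineQueue {
  private pending: Command[] = [];

  // While offline, record edits instead of applying them remotely.
  enqueue(cmd: Command) {
    this.pending.push(cmd);
  }

  // On reconnect, replay the queued commands against server state in order.
  flush(serverState: Record<string, number>): Record<string, number> {
    for (const cmd of this.pending) {
      serverState[cmd.field] = cmd.value; // last writer wins
    }
    this.pending = [];
    return serverState;
  }
}
```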

              • ofriw 2 days ago

                Yup. Go ahead and try it, then you'll discover that:

                - The queue introduces delays so this doesn't play nice with modern collaborative editing experience (think google docs, slack, etc)

                - Let's say change A set a field to 1, and change B set the same field to 2. GoatDB allows you to easily get either 1, 2 or 3 (sum) or apply a custom resolution rule

                Your only choices before GoatDB to solve this were Operational Transformation, raw CRDTs, or differential synchronization. GoatDB combines CRDTs with commit graphs so it can do things other approaches can't, at unmatched speed
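The 1-vs-2 field conflict above can be expressed as a pluggable merge rule. This is an illustrative three-way-merge sketch, not GoatDB's actual API:

```typescript
// Two concurrent edits to the same field, merged by a pluggable rule.
type Resolver = (a: number, b: number) => number;

const keepA: Resolver = (a, _b) => a;  // prefer change A
const keepB: Resolver = (_a, b) => b;  // prefer change B
const sum: Resolver = (a, b) => a + b; // combine both

// Classic three-way merge: compare both sides against the common base.
function merge(base: number, a: number, b: number, resolve: Resolver): number {
  if (a === b) return a;    // both sides agree: no conflict
  if (a === base) return b; // only b changed
  if (b === base) return a; // only a changed
  return resolve(a, b);     // true conflict: apply the custom rule
}
```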

                • CyberDildonics 2 days ago

                  > Go ahead and try it

                  I tried it and much more a long time ago.

                  > The queue introduces delays so this doesn't play nice with modern collaborative editing experience

                  Things that can be done millions of times per second per core don't "introduce delays" that a handful of people are going to see.

                  > unmatched speed

                  Are you seriously trying to say that the database you created in a scripting language that uses linear scanning of arrays is 'unmatched' compared to high performance C++? You may have other features but you have no benchmarks and the scenario you were bragging about is trivial.

                  • ofriw 2 days ago

                    > Things that can be done millions of times per second per core don't "introduce delays" that a handful of people are going to see.

                    Oh, but they can't. If you tried it, then you surely know that both OT and CRDTs need to consider the entire change history at key points in order to derive the current value. Diff sync doesn't suffer from the same issue; however, the way it keeps track of client shadows introduces writes on the read path, making it horribly expensive to run at scale.

                    > Are you seriously trying to say that the database you created in a scripting language that uses linear scanning of arrays is 'unmatched' compared to high performance C++?

                    It's not about the language, but about the underlying algorithm. Yes, JS is slower, and surely a linear scan is slower than typical DB queries. But what GoatDB does, which is quite unique today, is resume query execution from the point the query last ran, so you get super efficient incremental updates. These are very useful on the client side, since clients tend to issue the same queries over and over again.
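The resume-from-last-run idea can be sketched as a query that keeps a cursor into an append-only log, so a repeated run only scans items added since the previous run. This is illustrative only, not GoatDB's actual implementation:

```typescript
// Sketch of resumable query execution over an append-only log.
type Row = { id: number; score: number };

class IncrementalQuery {
  private cursor = 0;        // last scanned position in the log
  private results: Row[] = [];

  constructor(private pred: (r: Row) => boolean) {}

  // Re-running the query only scans rows appended since the last run.
  run(log: Row[]): Row[] {
    for (; this.cursor < log.length; this.cursor++) {
      const row = log[this.cursor];
      if (this.pred(row)) this.results.push(row);
    }
    return this.results;
  }
}
```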

                    • CyberDildonics a day ago

                      I'm not sure what the point of all this is. Linear scanning arrays does not scale, this is basic computer science. Javascript is going to run at 1/10th the speed of a native language at best. You don't have any benchmarks and are bragging about stuff that was typical 30 years ago. You realize that people have done shared document editing for decades and that every video game keeps a synced state right?

                      The most important thing here is benchmarks. If you want to claim you have "unmatched" speed, you need benchmarks.

    • tkone 3 days ago

      so can couch/pouch? (pouch is a façade over leveldb on the backend and client-side storage in your browser)

      have you done benchmarks to compare the two?

      i know from personal experience leveldb is quite performant (it's what chrome uses internally), and the node bindings are very top notch.

      • lopatin 3 days ago

        GoatDB is web scale. PouchDB isn't web scale.

  • usuck10999 3 days ago

    [flagged]

    • creshal 3 days ago

      You can do whatever you want, but if you reach out to other people because you want them to use it, you'd better be able to convince them why.

sgarland 3 days ago

> lighter than SQLite

You’re concerned that a < 1 MiB library is too heavy, so you wrote a DB in TS?

> easier to self-host

How is something that requires a JS runtime easier than a single-file compiled binary?

  • ofriw 3 days ago

    Have you tried using SQLite in the browser and having it play nice with a DB on the backend?

    • sgarland 3 days ago

      No, and admittedly I misunderstood the purpose, but I don’t understand the need any better now. I’m not a frontend (nor backend) dev FWIW, I’m a DBRE.

      If this is meant for client-side use, that implies a single user, so there aren’t any concerns about lock contention. It says it’s optimized for read-heavy workloads, which means the rows have to be shipped to the client, which would seem to negate the point of “lighter weight.”

      If the purpose is to enable seamless offline/online switching, that’s legitimate, but that should be called out more loudly as the key advantage.

      • ofriw 3 days ago

        Think about the modern cloud-first architecture: a thick backend with a complex DB, a thin client with a temporary cache of the data, and an API layer moving data between them.

        This is an experiment in flipping the traditional design on its head and pushing most of the computation to the client using an architecture similar to what Git is using, but for your actual application data.

        You get all kinds of nice byproducts from this design like realtime collaboration, secure data auditing, multiple application versions coexisting on different branches in production, etc etc. It's really about pushing the boundaries of what's possible with modern client hardware

  • gr4vityWall 3 days ago

    shipping SQLite as a WASM module can increase your bundle size significantly, depending on your baseline.

    > How is something that requires a JS runtime easier than a single-file compiled binary?

    You can compile your JS to bytecode and bundle it with its runtime if you want to, getting a single-file executable. QuickJS and Bun both support this, and I think Node.js these days does as well.

    If you expect your user to already have a runtime installed and you're not using QuickJS, you can just give them the script as well.

  • be_erik 3 days ago

    In the age of Docker, almost anything can be a single-file binary if you don’t mind pushing gigs of data around.

    • ofriw 2 days ago

      Right. But how do you scale it?

  • nexuist 3 days ago

    This is clearly intended for use in web applications, so a JS runtime comes for free, and the package is only 8.2kb

isaachinman 3 days ago

Pros and cons vs something like Replicache, Triplit, InstantDB, or Zero...?

  • ofriw 3 days ago

    Cons: much newer tech based on ephemeral CRDTs combined with a distributed commit graph; less mature ecosystem, tooling, etc.

    Pros:

    - Branch-based deployment, so multiple versions can coexist nicely in prod

    - Completely synchronous API that fully hides the networking

    - Clients can securely restore a crashed server

    - Your DB functions as a signed audit log that prevents cheating (similar to blockchain)

    - Stronger consistency guarantees that simplify development

gunian 3 days ago

can goats use it to manage grazing sites etc to not overgraze and offend local farmers or is it human centric?

  • ofriw 2 days ago

    Yup. Conflict free grazing everywhere