I have many services running on my server, and about half of them use Postgres. Back when I installed them manually, I would always create a new database in the same Postgres instance for each service, which seems quite logical to me: the least overhead, fast startup, etc.

But since I started using Docker, most docker-compose files come with their own instance of Postgres. Until now I just let them do it and was running a couple of instances of Postgres. But it’s getting kind of ridiculous how many Postgres instances I run on one server.

Do you guys run several dockerized instances of Postgres, or do you rewrite the docker-compose files to point them at your one central Postgres instance? And are there usually any problems with that, like version incompatibilities?

  • adONis@lemmy.world · 9 months ago

    I use the databases provided in the docker-compose file, since some services require a specific version and I’m too lazy to investigate whether they would work on my existing instance or not.

  • andrewalker@feddit.nl · 9 months ago

    I have a single big Postgres instance, shared among immich, paperless, lldap, grafana, and others. I only use the provided docker-compose as inspiration and do my own thing. It’s nicer to back up a single database (plus additional volumes, but still).
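
    For anyone wanting to do the same, here’s a minimal sketch of the pattern (service names, credentials, and versions are placeholders, and the grafana database/user has to be created in Postgres first):

    ```yaml
    # One shared Postgres, with Grafana pointed at it via its documented
    # GF_DATABASE_* environment variables. All names/passwords are examples.
    services:
      postgres:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: changeme
        volumes:
          - pgdata:/var/lib/postgresql/data
        restart: unless-stopped

      grafana:
        image: grafana/grafana
        environment:
          GF_DATABASE_TYPE: postgres
          GF_DATABASE_HOST: postgres:5432
          GF_DATABASE_NAME: grafana
          GF_DATABASE_USER: grafana
          GF_DATABASE_PASSWORD: changeme
        depends_on:
          - postgres
        restart: unless-stopped

    volumes:
      pgdata:
    ```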

    • atzanteol@sh.itjust.works · 9 months ago

      I only use the provided docker-compose as inspiration and do my own thing

      This is the correct way to look at it. Most applications that provide a docker compose do so as a convenience to get started quickly. It’s not necessarily what you should run.

      • seang96@spgrn.com · 9 months ago

        It is recommended to run a separate Postgres for each service though, since they may have completely different needs/configurations for their queries to be optimal. For self-hosting, Lemmy and Matrix would be the big concerns here.

        • atzanteol@sh.itjust.works · 9 months ago

          It is recommended to run a separate Postgres for each service

          Absolute statements like this are rarely true. Sometimes it does make sense and sometimes it doesn’t. One database instance is often quite capable of supporting the needs of many applications. And sometimes you need to fine-tune things for a specific application.

          • seang96@spgrn.com · 9 months ago

            Say what you want; it’s a recommendation, and it’s documented in quite a few deployment methods. The only benefit of centralizing is if you’re managing Postgres without other tooling, since running many instances by hand would be a pain in the butt. You’ll still run into apps that don’t run on later versions, and others that require later versions, though.

            An example of a very popular one:

            How many databases should be hosted in a single PostgreSQL instance?

            Our recommendation is to dedicate a single PostgreSQL cluster (intended as primary and multiple standby servers) to a single database, entirely managed by a single microservice application. However, by leveraging the “postgres” superuser, it is possible to create as many users and databases as desired (subject to the available resources).

            The reason for this recommendation lies in the Cloud Native concept, based on microservices. In a pure microservice architecture, the microservice itself should own the data it manages exclusively. These could be flat files, queues, key-value stores, or, in our case, a PostgreSQL relational database containing both structured and unstructured data. The general idea is that only the microservice can access the database, including schema management and migrations.

            CloudNativePG has been designed to work this way out of the box, by default creating an application user and an application database owned by the aforementioned application user.

            Reserving a PostgreSQL instance to a single microservice-owned database enhances:

            - resource management: in PostgreSQL, CPU and memory constrained resources are generally handled at the instance level, not the database level, making it easier to integrate with Kubernetes resource management policies at the pod level
            - physical continuous backup and Point-In-Time Recovery (PITR): given that PostgreSQL handles continuous backup and recovery at the instance level, having one database per instance simplifies PITR operations, differentiates retention policy management, and increases data protection of backups
            - application updates: enable each application to decide its update policies without impacting other databases owned by different applications
            - database updates: each application can decide which PostgreSQL version to use and, independently, when to upgrade to a different major version of PostgreSQL and under what conditions (e.g., cutover time)
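
            For reference, a CloudNativePG cluster is declared with a small YAML manifest; a minimal sketch (the name is made up - by default the operator bootstraps an `app` database owned by an `app` user):

            ```yaml
            # One cluster (primary + standby) dedicated to one application
            apiVersion: postgresql.cnpg.io/v1
            kind: Cluster
            metadata:
              name: lemmy-db        # hypothetical name
            spec:
              instances: 2          # one primary, one standby
              storage:
                size: 10Gi
            ```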
            
            • atzanteol@sh.itjust.works · 9 months ago

              You’re talking about a microservices architecture running in a kubernetes cluster? FFS… 🙄

              That’s a ridiculous recommendation for a home-gamer. It’s all up to how you want to manage dependencies, backups, performance, etc. If one is happy to have a single instance then there’s nothing wrong with that. If one wants multiple instances for other reasons that’s fine too. There are pros and cons to each approach. Your “I saw somebody recommend it on the internets” notwithstanding.

              • seang96@spgrn.com · 9 months ago

                It’s the one I’m using, but it’s not just for running in a cluster. Some applications, like Matrix, even recommend running Postgres separately. You can’t run everything on the same version all the time anyways.

                • atzanteol@sh.itjust.works · 9 months ago

                  You can’t run everything on the same version all the time anyways.

                  Unless you’re doing something very specific with the database - yes you can. Most applications are fine with pretty generic SQL. For those that have specific requirements, well then give them their own instance. Or use that version for the ones that don’t much care…

  • Nik282000@lemmy.ca · 9 months ago

    I keep each service’s DB separate; if something breaks or gets a major upgrade, I don’t have to worry about the other containers.

  • matto@lemm.ee · 9 months ago

    Not so long ago I had the same question myself, and I ended up setting up one Postgres instance and one MySQL instance for all services to share. In the long run I had so many version and settings incompatibilities across services that I moved back to one DB per service, tuned specifically for it. Also, I add a backup app to all my docker-compose files that have a DB in them. This way, backups happen periodically and automatically.
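
    As a sketch, the backup part can look something like this (I’m going from memory on the image’s variables, so check its docs; names and credentials are placeholders):

    ```yaml
    services:
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: changeme
        volumes:
          - dbdata:/var/lib/postgresql/data

      # Sidecar that dumps the DB on a schedule and keeps rotated dumps
      db-backup:
        image: prodrigestivill/postgres-backup-local
        environment:
          POSTGRES_HOST: db
          POSTGRES_DB: myapp        # placeholder database name
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: changeme
          SCHEDULE: "@daily"
        volumes:
          - ./backups:/backups
        depends_on:
          - db

    volumes:
      dbdata:
    ```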

  • sunaurus@lemm.ee · 9 months ago

    If I have several backends that more or less depend on each other anyway (for example: Lemmy + pict-rs), then I will create separate databases for them within a single Postgres. The reason being: if something bad happens to the database of one of them, it affects the other one as well anyway, so there isn’t much to gain from isolating the databases.

    Conversely, for completely unrelated services, I will always set up separate postgres instances, for full isolation.
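
    With the official image, one way to get those separate databases created is an init script (a sketch; files in that directory only run on the very first start, with an empty data volume):

    ```yaml
    services:
      postgres:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: changeme
        volumes:
          - pgdata:/var/lib/postgresql/data
          # e.g. an init.sql in ./initdb with CREATE ROLE / CREATE DATABASE
          # statements for lemmy and pictrs; runs once at first initialization
          - ./initdb:/docker-entrypoint-initdb.d:ro

    volumes:
      pgdata:
    ```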

  • Admiral Patrick@dubvee.org · 9 months ago

    I used to, but now I just have one big one (still in Docker) that’s sized and tuned to handle all of my applications.

    I’ve only had one version compatibility issue, but that was because I was on pgSQL 13 and the updated version of one application needed 15. Upgrading that didn’t affect any other applications. If it had, I would have just broken that one application out to its own stack-local Postgres.

  • Morethanevil@lemmy.fedifriends.social · 9 months ago

    I have a separate network for Postgres. Every service that needs a DB is attached to it. I use a single Postgres container with several DBs, fine-tuned with PGTune.

    The most important thing as always: proper backups ☝🏻

    The only service with an extra DB container is Immich, since it uses a custom variant and I am too lazy to modify the existing container 😁
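
    The network part of that is simple; a sketch with made-up names (the network is created once with `docker network create postgres-net`, then every stack joins it as external):

    ```yaml
    # Postgres stack
    services:
      postgres:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: changeme
        networks: [postgres-net]

    networks:
      postgres-net:
        external: true

    ---
    # Any app stack (a separate compose file) attaches to the same network
    # and reaches the DB by the service name "postgres"
    services:
      app:
        image: example/app   # hypothetical image
        environment:
          DATABASE_URL: postgres://app:changeme@postgres:5432/appdb
        networks: [postgres-net]

    networks:
      postgres-net:
        external: true
    ```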

  • DeltaTangoLima@reddrefuge.com · 9 months ago

    I run Proxmox with a few nodes, and each of my services is (usually) dockerized, each running in a Proxmox Linux container.

    As I like to keep things segregated as much as possible, I really only have one shared Postgres, and that’s for the stuff I don’t really care about (i.e. if it goes down, I honestly don’t care about the services it takes with it, or the time it’ll take me to get them back).

    My main Postgres instances are below - there are probably others, but these are the ones I back up religiously and whose backups I test frequently.

    1. RADIUS database: for wireless auth
    2. paperless-ngx: document management indexing & data
    3. Immich: because Immich has a very specific set of Postgres requirements
    4. Shared: 2 x Sonarr, 3 x Radarr, 1 x Lidarr, a few others
  • redxef@scribe.disroot.org · 9 months ago

    In theory, lots of people recommend having everything in a single docker-compose file for easier transfer and separation, but I have so much running that it’s grouped by purpose. One of those groups is data storage, so I have a single server with all the databases (as far as compatibility goes). I would like to some day have a highly available Postgres cluster with automatic failover and failback, but that needs a lot of testing, and I’m no Postgres admin, so it also needs a lot of time to research how to do it properly.

    • Shimitar@feddit.it · 9 months ago

      My database instances are only down when the server itself is rebooting. I’ve never had any other downtime in 20+ years.

      • sardaukar@lemmy.world · 9 months ago

        You’ve never had to run migrations that lock tables or rebuild an index in two decades?

        • Shimitar@feddit.it · 9 months ago

          Why would that have blocked all my databases at once? That would only affect the database I was migrating, not the others.

          • sardaukar@lemmy.world · 9 months ago

            Yes, it would cause downtime for the one being migrated - right? Or does that not count as downtime?

            • Shimitar@feddit.it · 9 months ago

              Yes, it counts indeed… But in that case the service is down while it’s being migrated, so does the database also being down really count?

              I mean, it’s a self hosted home service, not your bank ATM network…

  • MrMcGasion@lemmy.world · 9 months ago

    That’s a big reason I actively avoid Docker on my servers. I don’t like running a dozen instances of my database software, and considering how much work it would take to go through and configure each Docker container to use an external database, to me it’s just as easy to learn to configure each piece of software yourself and know what’s going on under the hood, rather than relying on a bunch of defaults made by whoever made the Docker image.

    I hope a good number of my issues with Docker have been solved since I last seriously tried to use it (which was back when they were literally giving away free T-shirts to get people to try it). But the times I’ve peeked at it since, it seems to me that Docker gets in the way more often than it solves problems.

    I don’t mean to yuck other people’s yum though, so if you like docker, and it works for you, don’t let me stop you from enjoying it. I just can’t justify the overhead for myself (both at the system resource level, and personal time level of inserting an additional layer of configuration between me and my software).

    • summerof69@lemm.ee · 9 months ago

      What overhead are you talking about? You don’t need a dozen instances of a database. You can create one, with or without Docker, and configure any service to use it. The idea of Docker and docker-compose is that you can easily start up the whole environment. But you don’t have to.

    • sardaukar@lemmy.world · 9 months ago

      It’s kinda weird to see the Docker scepticism around here. I run 40ish services on my server, all with a single docker-compose YAML file. It just works.

      Manually tweaking every project I run seems impossibly time-draining in comparison. I don’t care about the underlying mechanics, I just want shit to work.

      • skittlebrau@lemmy.world · 9 months ago

        I care about the underlying tech in the things I deploy, but the reality is that I lack the time to actively do this in practice.

        Ideally I would set everything up manually, but it’s just too hard to keep up with 30+ projects and remembering how/why I set everything up, even with documentation. Docker Compose makes my homelab hobby more manageable.

      • MrMcGasion@lemmy.world · 9 months ago

        I think that my skepticism, and my desire to have Docker get out of my way, has more to do with already knowing the underlying mechanics: I was used to managing services before Docker was a thing, and then Docker came along and said “just learn Docker instead.” Which is fine, except it means not only an entire shift away from what I already know, but a separation from it, with extra networking and Docker configuration to fuss with. If I wasn’t already used to managing servers pre-Docker, then yeah, I’d totally get it.

        • sardaukar@lemmy.world · 9 months ago

          I used to be a sysadmin in 2002/3, and let me tell you - Docker makes all that menial, boring work go away and services just work. Which is what I want, instead of messing with php.ini extensions or custom iptables rules.

          • MrMcGasion@lemmy.world · 9 months ago

            Maybe I’ll give it another go soon to see if things have improved for what I need since I last tried. I do have a couple of aging servers that will probably need upgrading soon anyway, and I’m sure the Python scripts I’ve used in the past to help automate server migration will need updating since I last used them.

      • Moonrise2473@feddit.it · 9 months ago

        I have everything in docker too, but a single yml with 40 services is a bit extreme - you would be forced to upgrade everything together, no?

        • sardaukar@lemmy.world · 9 months ago

          Not really. The docker-compose file has services in it, and they’re separate from each other. If I want to update Sonarr but not Jellyfin (or its DB service), I can.

    • LifeBandit666@feddit.uk · 9 months ago

      I agree to a certain extent and I’m actively using Docker.

      What I’ve done is made an Ubuntu VM, put Docker on it and booted a Portainer client container on it, then made that into a container template, so I can just give it an IP address and boot it up, then add it to Portainer in 3 clicks.

      It’s great for just having a go on something and seeing if I wanna pursue it.

      But so far I’ve tried to boot and run Arr and Plex, and more recently Logitech Media Server, and it’s just been hard work.

      I’ve found I’m making more VMs than I thought I would and just putting things together in them, rather than trying to run stacks of Docker together.

      That said, it looks like it is awesome when you know what you’re doing.

  • Shimitar@feddit.it · 9 months ago

    This is one of the annoying issues with Docker - or rather, with how Docker is abused in production.

    The single instance with multiple databases is the correct way to go; the Docker approach messes with that.

    Rewriting the compose files is always a possibility, but honestly that defeats the reason self-hosters use Docker in the first place.

    Also beware that some devs will shut you out of support if you do, especially the apps that ship compose files by default.

    Go bare metal if possible; that way you have full control. Use Docker for spinning stuff up quickly and staying flexible, at the cost of accepting how stuff is packaged by upstream.

    • sardaukar@lemmy.world · 9 months ago

      The official Postgres Docker image is geared towards single database per instance for several reasons - security through isolation and the ability to run different versions easily on the same server chief among them. The performance overhead is negligible.

      And about devs not supporting custom installation methods: I’m more inclined to think it’s for lack of time to support every individual native setup, so they’d rather just respond to tickets about their official one (which is also why Docker exists in the first place).
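
      Running two majors side by side really is trivial with the official image - a sketch (versions and names arbitrary; each major gets its own data volume, since the on-disk format isn’t compatible across major versions):

      ```yaml
      services:
        postgres15:
          image: postgres:15
          environment:
            POSTGRES_PASSWORD: changeme
          volumes:
            - pg15:/var/lib/postgresql/data

        postgres16:
          image: postgres:16
          environment:
            POSTGRES_PASSWORD: changeme
          volumes:
            - pg16:/var/lib/postgresql/data

      volumes:
        pg15:
        pg16:
      ```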

  • marcos@lemmy.world · 9 months ago

    Do not run databases in Docker unless you know really well what you are doing.

    It’s completely possible to run them correctly in Docker. But it’s far from trivial, and if you need to ask this, it means that you probably won’t be able to.

    • foggy@lemmy.world · 9 months ago

      Nothing worse in Linux communities than gatekeeper answers like this.

      It’s fine to point out that something’s challenging to someone who may be a novice, but to suggest it’s above them? Eat it. At the very least, provide a resource and let them confirm for themselves.

      • richmondez@lemmy.world · 9 months ago

        This right here… This whole community is about learning to do things for yourself. It might be that, after being given resources to learn, you decide it’s too much for you - but people should be given the chance to discover that for themselves.

        • foggy@lemmy.world · 9 months ago

          Shout out to Hack The Box.

          If you’re a noob or a veteran in any branch of IT looking for a good cybersecurity community/platform…

          Most of us IT folk check the box of “knowledge peaks and valleys”. They’re the first community I’ve found that seems to actually respect the idea that someone might know way more about XSS and SQL injection in React apps than some other guy knows about binary exploits through packet disassembly, and that both of them are fucking experts, and neither of them is lacking for not knowing what the other knows.

      • cm0002@lemmy.world · 9 months ago

        Cybersecurity communities too. There was one guy on [The Other Site] I saw a while back who, whenever somebody who just did general IT asked what they should do to secure X or Y, or whether security product Z was better than V, would always default to something along the lines of “If you don’t know, don’t bother - it’s above you, and you should shell out $$$ to an actual firm, otherwise you’ll be shelling out $$$$ to another firm to clean up your mess.”

        Surprise surprise: when I googled his username (the fact that I was even able to do this isn’t a great sign for a “security professional”, IMO lmao), he actually owned one of those “data-breach triage” firms… yea… I’m sure there was no conflict of interest whatsoever lmaoo