Show HN: Unregistry – “docker push” directly to servers without a registry

by psviderski on 6/18/2025, 11:17 PM with 159 comments

I got tired of the push-to-registry/pull-from-registry dance every time I needed to deploy a Docker image.

In certain cases, using a full-fledged external (or even local) registry is annoying overhead. And if you think about it, there's already a form of registry present on any of your Docker-enabled hosts — Docker's own image storage.

So I built Unregistry [1] that exposes Docker's (containerd) image storage through a standard registry API. It adds a `docker pussh` command that pushes images directly to remote Docker daemons over SSH. It transfers only the missing layers, making it fast and efficient.

  docker pussh myapp:latest user@server
Under the hood, it starts a temporary unregistry container on the remote host, pushes to it through an SSH tunnel, and cleans up when done.
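
Roughly the same effect by hand would look something like the sketch below (the ports and the image name are placeholders, just to illustrate the idea):

  # Illustrative only -- `docker pussh` sets all of this up and tears it down for you.
  ssh user@server 'docker run -d --name unregistry-tmp -p 127.0.0.1:5000:5000 <unregistry-image>'
  ssh -f -N -L 5000:127.0.0.1:5000 user@server    # tunnel a local port to the remote registry
  docker tag myapp:latest localhost:5000/myapp:latest
  docker push localhost:5000/myapp:latest         # only the missing layers get transferred
  ssh user@server 'docker rm -f unregistry-tmp'   # clean up the temporary container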

I've built it as a byproduct while working on Uncloud [2], a tool for deploying containers across a network of Docker hosts, and figured it'd be useful as a standalone project.

Would love to hear your thoughts and use cases!

[1]: https://github.com/psviderski/unregistry

[2]: https://github.com/psviderski/uncloud

by shykes on 6/19/2025, 11:37 PM

Docker creator here. I love this. In my opinion the ideal design would have been:

1. No distinction between docker engine and docker registry. Just a single server that can store, transfer and run containers as needed. It would have been a much more robust building block, and would have avoided the regrettable drift between how the engine & registry store images.

2. push-to-cluster deployment. Every production cluster should have a distributed image store, and pushing images to this store should be what triggers a deployment. The current status quo - push image to registry; configure cluster; individual nodes of the cluster pull from registry - is brittle and inefficient. I advocated for a better design, but the inertia was already too great, and the early Kubernetes community was hostile to any idea coming from Docker.

by richardc323 on 6/19/2025, 8:10 PM

I naively sent the Docker developers a PR[1] to add this functionality into mainline Docker back in 2015. I was rapidly redirected into helping out in other areas - not having to use a registry undermined their business model too much I guess.

[1]: https://github.com/richardcrichardc/docker2docker

by nine_k on 6/19/2025, 12:04 AM

Nice. And the `pussh` command definitely deserves the distinction of being one of the most elegant puns: easy to remember, self-explanatory, and just one letter away from its standard sister command.

by alisonatwork on 6/19/2025, 2:29 AM

This is a cool idea that seems like it would integrate well with systems already using push deploy tooling like Ansible. It also seems like it would work as a good hotfix deployment mechanism at companies where the Docker registry doesn't have 24/7 support.

Does it integrate cleanly with OCI tooling like buildah etc., or do you need a full-blown Docker install on both ends? I haven't dug deeply into this yet because it's related to some upcoming work, but it seems like bootstrapping a mini registry on the remote server is the missing piece for skopeo to be able to work for this kind of setup.
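
Roughly what I'd imagine the skopeo side looking like once something registry-shaped is listening on the server - untested, and the port and image names here are just illustrative:

  # Tunnel a local port to whatever registry endpoint the server exposes, then
  # copy straight out of the local containers-storage (buildah/podman) store.
  ssh -f -N -L 5000:127.0.0.1:5000 user@server
  skopeo copy --dest-tls-verify=false \
    containers-storage:myapp:latest \
    docker://localhost:5000/myapp:latest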

by metadat on 6/19/2025, 1:07 AM

This should have always been a thing! Brilliant.

Docker registries have their place but are overall over-engineered and antithetical to the hacker mentality.

by amne on 6/19/2025, 7:54 AM

Takes a look at the pipeline that builds an image in GitLab, pushes it to Artifactory, triggers a deployment that pulls from Artifactory and pushes to AWS ECR, then updates the deployment template in EKS, which pulls from ECR to the node and boots the pod container.

I need this in my life.

by lxe on 6/19/2025, 12:27 AM

Ooh this made me discover uncloud. Sounds like exactly what I was looking for. I wanted something like dokku but beefier for a sideproject server setup.

by modeless on 6/19/2025, 2:38 AM

It's very silly that Docker didn't work this way to start with. Thank you, it looks cool!

by scott113341 on 6/19/2025, 1:22 AM

Neat project and approach! I got fed up with expensive registries and ended up self-hosting Zot [1], but this seems way easier for some use cases. Does anyone else wish there was an easy-to-configure, cheap & usage-based, private registry service?

[1]: https://zotregistry.dev

by matt_kantor on 6/19/2025, 2:47 PM

Functionality-wise this is a lot like docker-pushmi-pullyu[1] (which I wrote), except docker-pushmi-pullyu is a single relatively-simple shell script, and uses the official registry image[2] rather than a custom server implementation.

@psviderski I'm curious why you implemented your own registry for this, was it just to keep the image as small as possible?

[1]: https://github.com/mkantor/docker-pushmi-pullyu

[2]: https://hub.docker.com/_/registry

by revicon on 6/19/2025, 3:29 PM

Is this different from using a remote docker context?

My workflow in my homelab is to create a remote docker context like this...

(from my local development machine)

> docker context create mylinuxserver --docker "host=ssh://revicon@192.168.50.70"

Then I can do...

> docker context use mylinuxserver

> docker compose build

> docker compose up -d

And all the images contained in my docker-compose.yml file are built, deployed, and running on my remote Linux server.

No fuss, no registry, no extra applications needed.

Way simpler than using docker swarm, Kubernetes or whatever. Maybe I'm missing something that @psviderski is doing that I don't get with my method.

by fellatio on 6/19/2025, 2:41 AM

Neat idea. This probably has the disadvantage of coupling deployment to a service. For example, how do you scale up or do red/green deployments? (You'd need whatever handles that to be aware of the push.)

Edit: that thing exists, it's Uncloud. Just found out!

That said, it's a tradeoff. If you are small, have one Hetzner VM, and are happy with simplicity (and don't mind building images locally), it is great.

by sushidev on 6/20/2025, 8:46 AM

I've prepared a quick one using reverse port forwarding and a local temp registry. In case anyone finds it useful:

  #!/bin/bash
  set -euo pipefail
  
  IMAGE_NAME="my-app"
  IMAGE_TAG="latest"
  
  # A temporary Docker registry that runs on your local machine during deployment.
  LOCAL_REGISTRY="localhost:5000"
  REMOTE_IMAGE_NAME="${LOCAL_REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG}"
  REGISTRY_CONTAINER_NAME="temp-deploy-registry"
  
  # SSH connection details.
  # The jump host is an intermediary server. Remove `-J "${JUMP_HOST}"` if not needed.
  JUMP_HOST="user@jump-host.example.com"
  PROD_HOST="user@production-server.internal"
  PROD_PORT="22" # Standard SSH port
  
  # --- Script Logic ---
  
  # Cleanup function to remove the temporary registry container on exit.
  cleanup() {
      echo "Cleaning up temporary Docker registry container..."
      docker stop "${REGISTRY_CONTAINER_NAME}" >/dev/null 2>&1 || true
      docker rm "${REGISTRY_CONTAINER_NAME}" >/dev/null 2>&1 || true
      echo "Cleanup complete."
  }
  
  # Run cleanup on any script exit.
  trap cleanup EXIT
  
  # Start the temporary Docker registry.
  echo "Starting temporary Docker registry..."
  docker run -d -p 5000:5000 --name "${REGISTRY_CONTAINER_NAME}" registry:2
  sleep 3 # Give the registry a moment to start.
  
  # Step 1: Tag and push the image to the local registry.
  echo "Tagging and pushing image to local registry..."
  docker tag "${IMAGE_NAME}:${IMAGE_TAG}" "${REMOTE_IMAGE_NAME}"
  docker push "${REMOTE_IMAGE_NAME}"
  
  # Step 2: Connect to the production server and deploy.
  # The `-R` flag creates a reverse SSH tunnel, allowing the remote host
  # to connect back to `localhost:5000` on your machine.
  echo "Executing deployment command on production server..."
  ssh -J "${JUMP_HOST}" "${PROD_HOST}" -p "${PROD_PORT}" -R 5000:localhost:5000 \
    "docker pull ${REMOTE_IMAGE_NAME} && \
     docker tag ${REMOTE_IMAGE_NAME} ${IMAGE_NAME}:${IMAGE_TAG} && \
     systemctl restart ${IMAGE_NAME} && \
     docker system prune --force"
  
  echo "Deployment finished successfully."

by jokethrowaway on 6/19/2025, 3:05 AM

Very nice! I used to run a private registry on the same server to achieve this - then I moved to building the image on the server itself.

Both approaches are inferior to yours because of the load on the server (one way or another).

Personally, I feel like we need to go one step further and just build locally, merge all layers, ship a tar of the entire (micro) distro + app and run it with lxc. Get rid of docker entirely.

My images are tiny; the extra complexity is unwarranted.

Then of course I'm not a 1000-person company with 1 GB Docker images.

by actinium226 on 6/19/2025, 12:31 AM

This is excellent. I've been doing the save/load and it works fine for me, but I like the idea that this only transfers missing layers.

FWIW I've been saving then using mscp to transfer the file. It basically does multiple scp connections to speed it up and it works great.
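
Something like this (image name and paths are just examples):

  docker save myapp:latest | gzip > myapp.tar.gz
  mscp myapp.tar.gz user@server:/tmp/myapp.tar.gz    # scp-like, but over multiple SSH connections
  ssh user@server 'docker load -i /tmp/myapp.tar.gz' # docker load handles the gzipped archive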

by bradly on 6/18/2025, 11:59 PM

As a long ago fan of chef-solo, this is really cool.

Currently, I need to use a Docker registry for my Kamal deployments. Are you familiar with it, and would this remove that third-party dependency?

by larsnystrom on 6/19/2025, 6:49 AM

Nice to only have to push the layers that changed. For me it's been enough to just do "docker save my-image | ssh host 'docker load'" but I don't push images very often so for me it's fine to push all layers every time.

by layoric on 6/19/2025, 2:09 AM

I'm so glad there are tools like this and the swing back to self-hosted solutions, especially ones leveraging SSH tooling. Well done and thanks for sharing, will definitely be giving it a spin.

by MotiBanana on 6/19/2025, 5:17 AM

I've been using ttl.sh for a long time, but only for public, temporary code. This is a really cool idea!

by koakuma-chan on 6/18/2025, 11:44 PM

This is really cool. Do you support or plan to support docker compose?

by esafak on 6/19/2025, 12:48 AM

You can do these image acrobatics with the dagger shell too, but I don't have enough experience with it to give you the incantation: https://docs.dagger.io/features/shell/

by nothrabannosir on 6/18/2025, 11:53 PM

What’s the difference between this and skopeo? Is it the SSH support? I’m not super familiar with skopeo, forgive my ignorance.

https://github.com/containers/skopeo

by mountainriver on 6/19/2025, 3:01 AM

I’ve wanted unregistry for a long time, thanks so much for the awesome work!

by iw7tdb2kqo9 on 6/19/2025, 6:59 AM

I think it will be a good fit for me. Currently our 3 GB Docker image takes a lot of time to push to the GitHub package registry from GitHub Actions and to pull onto EC2.

by yjftsjthsd-h on 6/19/2025, 12:48 AM

What is the container for / what does this do that `docker save some:img | ssh wherever docker load` doesn't? More efficient handling of layers or something?

by rcarmo on 6/19/2025, 10:23 AM

I think this is great and have long wondered why it wasn’t an out of the box feature in Docker itself.

by dzonga on 6/18/2025, 11:59 PM

This is nice, hopefully DHH and the folks working on Kamal adopt this.

The whole reason I didn't end up using Kamal was the 'need a Docker registry' thing, when I can easily push a Dockerfile / compose file to my VPS, build an image there, and restart to deploy via a make command.

by quantadev on 6/19/2025, 3:54 AM

I always just use "docker save" to generate a TAR file, then copy the TAR file to the server, and then run "docker load" (on the server) to install the TAR file on the target machine.
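
I.e. something along these lines (names are just examples):

  docker save myapp:latest -o myapp.tar
  scp myapp.tar user@server:/tmp/
  ssh user@server 'docker load -i /tmp/myapp.tar'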

by spwa4 on 6/19/2025, 11:19 AM

THANK you. Can you do the same for kubernetes somehow?

by remram on 6/19/2025, 12:43 AM

Does it start an unregistry container on the remote/receiving end or the local/sending end? I think it runs remotely. I wonder if you could go the other way instead?

by armx40 on 6/19/2025, 12:06 AM

How about using docker context? I use that a lot and it works nicely.

by hoppp on 6/19/2025, 11:32 AM

Oh, this is great, it's a problem I also have.

by cultureulterior on 6/19/2025, 4:00 AM

This is super slick. I really wish there was something that did the same but using the torrent protocol, so all your servers could share it.

by victorbjorklund on 6/19/2025, 8:08 AM

Sweet. I've been wanting this for a long time.

by dboreham on 6/19/2025, 2:37 PM

I like the idea, but I'd want this functionality "unbundled".

Being able to run a registry server over the local containerd image store is great.

The details of how some other machine's containerd gets images from that registry are, to me, a separate concern. docker pull will work just fine provided it is given a suitable registry URL and credentials. There are many ways to provide the necessary network connectivity and credential sharing, so I don't want that aspect to be baked in.

Very slick though.

by alibarber on 6/19/2025, 9:26 AM

This is timely for me!

I personally run a small instance with Hetzner that has K3s running. I'm quite familiar with K8s from my day job so it is nice when I want to do a personal project to be able to just use similar tools.

I have a MacBook and, for some reason, I really dislike the idea of running Docker (or Podman, etc.) on it. Now of course I could have GitHub Actions build the project and push it to a registry, then pull that to the server, but that's another step between code and server that I wanted to avoid.

Fortunately, it's trivial to sync the code to a pod over kubectl and have podman build it there - but the registry (getting the image from pod to cluster) was the missing piece, and it infuriated me that even with save/load, so much was going to be duplicated on what is effectively the same VM. I'll need to give this a try, and it's inspired me to create some dev automation and share it.

Of course, this is all overkill for hobby apps, but it's a hobby and I can do it the way I like, and it's nice to see others also coming up with interesting approaches.

by peyloride on 6/19/2025, 6:20 AM

This is awesome, thanks!

by s1mplicissimus on 6/18/2025, 11:59 PM

Very cool. Now let's integrate this so that we can do `docker/podman push localimage:localtag ssh://hostname:port/remoteimage:remotetag` without extra software installed :)

by czhu12 on 6/19/2025, 4:13 AM

Does this work with Kubernetes image pulls?

by bflesch on 6/19/2025, 7:34 AM

this is useful. thanks for sharing

by isaacvando on 6/19/2025, 1:29 AM

Love it!

by jdsleppy on 6/19/2025, 11:22 AM

I've been very happy doing this:

DOCKER_HOST="ssh://user@remotehost" docker-compose up -d

It works with plain docker, too. Another user is getting at the same idea when they mention docker contexts, which is just a different way to set the variable.

Did you know about this approach? In the snippet above, the image will be built on the remote machine and then run. The context (files) are sent over the wire as needed. Subsequent runs will use the remote machine's docker cache. It's slightly different than your approach of building locally, but much simpler.
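
For reference, the context flavour of the same thing would be something like this (names are placeholders):

  docker context create remotehost --docker "host=ssh://user@remotehost"
  docker --context remotehost compose up -d
  # or: DOCKER_CONTEXT=remotehost docker compose up -d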

by politelemon on 6/19/2025, 6:20 AM

Considering the nature of servers, security boundaries, and hardening:

> Linux via Homebrew

Please don't encourage this on Linux. It happens to offer a Linux setup as an afterthought but behaves like a pigeon on a chessboard rather than a package manager.

by jlhawn on 6/18/2025, 11:52 PM

A quick and dirty version:

    docker -H host1 image save IMAGE | docker -H host2 image load
note: this isn't efficient at all (no compression or layer caching)!
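
You can claw back some of that by compressing the stream, since docker load accepts a gzipped archive on stdin, though every layer still gets resent:

    docker -H host1 image save IMAGE | gzip | docker -H host2 image load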

by Aaargh20318 on 6/19/2025, 1:35 PM

I simply use "docker save <imagename>:<version> | ssh <remoteserver> docker load"

by ajd555 on 6/19/2025, 1:34 PM

This is great! I wonder how well it works in case of disaster recovery though. Perhaps it is not intended for production environments with strict SLAs and uptime requirements, but if you have 20 servers in a cluster that you're migrating to another region or even another cloud provider, pulling from a registry seems like the safest and most scalable approach.