I suggest a documentation cleanup. The initial README should have blurbs about who should use it, what it's for, how it works, and links to example use cases. A quick-start guide steps a user through accomplishing a simple task and links to extended documentation. Extended documentation is the reference guide to the latest code, and should be generated from the code. I would not suggest splitting documentation across multiple places (a README here, a lengthy blog post there, plus a discombobulated wiki); all documentation should be accessible from a single portal, with filtering capabilities (search is incredibly difficult to make accurate, whereas filtering is easy and effective).
This whole solution seems like a very custom way to use docker. You can already create custom Docker images with specific content, use multi-stage builds to cache layers, split pipelines up into sections that generate static assets and pull the latest ones based on a checksum of its inputs, etc. I think the cost of maintaining this solution is going to far outweigh that of just using existing tooling differently.
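A toy sketch of the "pull the latest assets based on a checksum of their inputs" idea, outside of Docker. The file names, cache layout, and trivial tar-based "build" are all made up for illustration:

```shell
#!/bin/sh
# Hypothetical sketch: key a static-asset build on a checksum of its inputs
# and skip the expensive step when nothing changed.
set -e
work=$(mktemp -d); cd "$work"
mkdir -p cache
echo "body { color: red }" > styles.css        # stand-in for the real inputs

key=$(sha256sum styles.css | cut -d' ' -f1)    # checksum of the inputs
artifact="cache/$key.tar.gz"

if [ -f "$artifact" ]; then
  status="cache hit: reusing $artifact"
else
  status="cache miss: building"
  tar czf "$artifact" styles.css               # stand-in for the expensive build
fi
echo "$status"
```

Run twice with unchanged inputs and the second run is a cache hit; touch any input and the key (and artifact) changes.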
in Docker that would be something like

    COPY Gemfile Gemfile.lock /src/
    WORKDIR /src
    RUN bundle install

(COPY with multiple sources needs a directory destination ending in a slash.)
Even if they don't use Docker to run the application in prod, it can be [ab]used to perform efficient build-layer (build-step) caching and distribution.
Curious what the HN community feels is a "slow deploy". I scanned the article first to find time reductions and still couldn't see how much time was actually taken at the end of it.
11 minutes is a great time reduction. (11 minutes * 30 builds a day = 5.5 hours saved in total.)
But I'm still not sure what constitutes a slow build. I assume at some point there's an asymptotic curve of diminishing returns where, in order to shave off a minute, the complexity of the pipeline increases dramatically (caching being a tricky example). So do y'all have any opinions on what makes a build slow for you?
The first point isn't so much a change to the build pipeline as it is avoiding the build pipeline altogether and deploying prebuilt artifacts; I can't think of a reason to re-run your build for prod if you have already run it for another environment. In other words, it's recognizing that the build and deployment stages are different.
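A minimal sketch of that "build once, deploy prebuilt artifacts" idea: the file promoted to prod is byte-identical to the one tested in staging. The directory layout and the fake build step are hypothetical:

```shell
#!/bin/sh
# Build once, then promote the same artifact between environments; never rebuild.
set -e
work=$(mktemp -d); cd "$work"
mkdir -p build staging prod

echo "compiled app v1" > build/app.bin    # the one and only build
cp build/app.bin staging/app.bin          # deploy to staging, run tests there
cp staging/app.bin prod/app.bin           # promote the same bytes; no rebuild

sha_staging=$(sha256sum staging/app.bin | cut -d' ' -f1)
sha_prod=$(sha256sum prod/app.bin | cut -d' ' -f1)
```

Comparing checksums is what makes the guarantee checkable: if staging and prod hashes ever differ, something rebuilt along the way.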
This actually touches a point very close to my work.
We definitely need much more speed in running our pipeline.
The software is mostly C/C++ with a lot of internal dependencies.
Do you guys have any experience in that?
What is worth the complexity and what is not?
This is basically parallel-make as a service.
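For anyone unfamiliar, this is roughly what parallel make buys you, sketched in plain shell: independent compile actions run concurrently and the link step waits on all of them. The step names and sleep-based "work" are stand-ins:

```shell
#!/bin/sh
# Independent actions in parallel, then a step that depends on all of them.
set -e
work=$(mktemp -d); cd "$work"
compile() { sleep 0.1; echo "built $1" >> log.txt; }

compile a.o & compile b.o & compile c.o &  # roughly: make -j3
wait                                       # link depends on all three objects
echo "linked app" >> log.txt
```

The "as a service" part is running those independent actions on remote workers instead of local background jobs, which is where isolation and caching start to matter.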
This has been an increasingly difficult problem as more and more pipelines move to containers for testing and building. What other solutions have folks come up with?
JavaScript bundles are often a bottleneck in web builds. I wish there were better ways to speed this up.
To summarize:
1. Your build pipeline has a lot of hermetic actions.
2. To speed it up, you execute these actions remotely in isolated environments, cache the results, and reuse them when possible.
Pretty neat.
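A toy sketch of point 2: because a hermetic action depends only on its declared inputs and command line, hashing those gives a cache key that any worker can share. The file names are invented, and the compile command is only hashed here, not executed:

```shell
#!/bin/sh
# Content-addressed cache for a hermetic action's output.
set -e
work=$(mktemp -d); cd "$work"
mkdir -p cas

echo 'int main(void) { return 0; }' > main.c
cmd="cc -c main.c -o main.o"

# Key = hash of the command plus the contents of every input file.
key=$( { echo "$cmd"; cat main.c; } | sha256sum | cut -d' ' -f1 )

if [ -f "cas/$key" ]; then
  cp "cas/$key" main.o                   # reuse a previous worker's output
else
  printf 'fake object code' > main.o     # stand-in for executing $cmd remotely
  cp main.o "cas/$key"                   # publish the result for future reuse
fi
```

Any later build of the same sources, on any machine sharing the cache, hits `cas/$key` and skips the action entirely.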
You might want to look into https://goo.gl/TB49ED and https://console.cloud.google.com/marketplace/details/google/... if you need a managed service to do just that.