Testing and Dev/Production Parity in Docker
DockerCon is just around the corner, and as you may have heard, “Containers are the Future” ;). With that in mind, let’s take a look at some of the biggest questions surrounding Docker in the present.
In a recent presentation to Heavybit member companies, CircleCI CEO Paul Biggar posed some Docker infrastructure questions to the community. His two biggest questions were:
- In a containerized world, how do you test images and where do you put your testing tools?
- What are best practices for achieving dev/production parity?
Since then, we’ve had a number of conversations around the Heavybit clubhouse about how our members are managing these aspects of their Docker usage.
Stephen Nguyen offered Iron.io’s approaches:
- Image Testing: Currently our testing strategy relies on limiting the footprint of containers within our Docker stack. By starting from the scratch image and creating images that run one process (usually contained within a single binary), we focus our testing efforts on the codebase and APIs and simplify our container-level testing requirements (see the first sketch after this list).
- Dev/Production Parity: Consistent workflows across your entire development team allow for shorter development cycles and more reliable production deployments (see the second sketch after this list). We designed IronWorker to support Docker containers out of the box and to use Docker throughout our own development workflow. When development workflows operate the same way as production environments, the debugging, troubleshooting, and head-scratching that follow just disappear. Our goal is to remove these challenges for developers and enable them to do what they do best: write software that just works.
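To make the first point concrete, here is a minimal sketch of the "scratch image, single binary" pattern Nguyen describes. The `worker` binary, the `example/worker` tag, and the assumption of a statically compiled Go program are all illustrative; this is not Iron.io's actual build.

```sh
# Hypothetical example: compile a static binary, then package it into an
# otherwise empty image so there is almost nothing at the container level
# left to test.
CGO_ENABLED=0 go build -o worker ./cmd/worker

cat > Dockerfile <<'EOF'
FROM scratch
COPY worker /worker
ENTRYPOINT ["/worker"]
EOF

docker build -t example/worker .
docker run --rm example/worker
```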
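And here is one way the parity point could play out in practice: build and tag an image once, run the test suite inside it, and ship that exact image everywhere. The image name, test script, and port are placeholders, not Iron.io's actual setup.

```sh
# Build once; the same tagged image is what CI tests and what dev and
# production both run, so behavior cannot drift between environments.
docker build -t example/api:v1 .

# CI runs the test suite inside the image it just built...
docker run --rm example/api:v1 ./run-tests.sh

# ...and dev and production run the identical artifact.
docker run -d -p 8080:8080 example/api:v1
```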
In a recent Heavybit Speaker Series event, Docker’s own Jérôme Petazzoni discussed what he believes are ‘Best Practices in Dev to Production Parity for Containers’.
Said Petazzoni, “The idea is to do one thing, do it well, just like the Unix philosophy. We have one container for the component itself, for our application code. Then we have a separate container for logging. Another for monitoring. Another for backups if we have some data behind this container. Another for debugging when we need to, and so on, and so on.”
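As a rough sketch of what that separation can look like with plain `docker run`, the commands below give the application container an anonymous log volume and attach logging and backup sidecars to it with `--volumes-from`. The image names are placeholders, not anything from Petazzoni's talk.

```sh
# One concern per container: the app only runs the app; separate
# containers handle log shipping and backups by sharing its volumes.
docker run -d --name app -v /var/log/app example/myapp:1.0
docker run -d --name logs --volumes-from app example/log-shipper:1.0
docker run -d --name backup --volumes-from app example/backup-agent:1.0
```

Because each concern lives in its own container, you can restart or upgrade the log shipper or backup agent without touching the application itself.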
Despite the widespread use of containers, and Docker in particular, the orchestration and management of containers in production is still an area of discovery and growth. Companies like Iron.io and CircleCI continue to push the boundaries of containers and to build new processes over time.
On Tuesday, June 23rd, the two member companies will co-host a DockerCon After Party at Heavybit with short presentations from company founders. To register, check out the Eventbrite page.