Instastart Java Microservices With Kontainers
An industry-standard container that starts in milliseconds, runs in its own private VM, and has a minimal footprint sounds good to a lot of software developers.
To make these benefits more concrete, let’s talk about an example: Java microservices. (If you are so inclined, please insert “Java microservices is an oxymoron” quip here.) Given VMware Pivotal’s SpringOne event this week, it’s a timely topic.
Begin With the Goal In Mind
The trend toward microservice-based application software architectures is intimately entwined with the trends toward distributed, cloud-native architectures. Arguably, the only way to properly do distributed cloud-native software is to properly hew to microservice design principles.
Among the goals of this style of software design are to:
- Deliver consistently high application performance.
- Minimize the cost of running the application.
To Dream The Impossible Dream
Sadly, these goals are diametrically opposed.
Goal #1 is easy: Merely make sure you always have enough instances of your microservices up and running at all times. In the extreme, you could fire up an instance of each service on each physical node of every datacenter and PoP that Amazon operates across the globe. You’d have one amazingly high-performance service. And you’d finally make Jeff Bezos truly wealthy.
That may sound like a joke, but let’s be serious: A less extreme version of that strategy, in which instances are over-provisioned by millions of developers and devops engineers across tens of millions of applications running on a googol of physical servers, has had no small impact on Mr. Bezos’ wealth.
Goal #2 is also easily achieved, by doing precisely the opposite. Do not, ever, stand up an instance of anything unless you’re darned sure (because your load balancer told you) that you have useful work for that instance to do, right now. That would be great for reducing cloud OpEx, but our managers would fire us if our users didn’t kill us first for having such abysmal performance.
The second strategy also confers a security benefit: The fewer instances you have running, the fewer points of vulnerability you have. There’s an old saying that idle hands are the devil’s workshop. We could also say that an idle instance is the devil’s attack surface.
If the cost of pre-provisioning resources were zero, the infinite pre-provisioning strategy would work. Unfortunately, that cost is never zero.
Similarly, if the time it took to stand up resources were zero, the never-pre-provision strategy would work. That’s not going to happen either.
But we can make the world a better place by trying.
The Calculus of Provisioning
Because perfection is impossible, we compromise. We try to make sure we have enough horsepower online to meet momentary demand, and we also try to limit overprovisioning. We restate our two conflicting goals with more relaxed, and achievable, intent:
Always have enough capacity available to meet demand for computational horsepower at the current moment, plus any additional demand that will show up between now and the time it would take to provision the incremental capacity for that additional demand.
We can’t know a priori what that additional demand will be, so we have to estimate. Further, we always add safety margin to make sure demand doesn’t overrun supply. These necessities mean that we’re always provisioning more resource supply than current demand requires.
We can, however, approach our mythical minimum cost while still delivering target performance in the face of variable demand. The trick is to reduce the time it takes to provision additional capacity.
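To make the tradeoff concrete, here is a minimal sketch (all numbers and names hypothetical, not Kontain measurements) of the calculus above: the capacity you must keep online is current demand, plus the demand that can arrive during the provisioning window, times a safety margin. Shrinking the provisioning window shrinks the over-provisioning directly.

```java
// Hypothetical capacity-planning sketch: required capacity = current demand
// plus demand growth over the provisioning lead time, times a safety factor.
public class CapacityPlan {
    static long requiredInstances(long currentDemand, double growthPerSecond,
                                  double provisioningSeconds, double safetyFactor) {
        // Demand we must be able to serve by the time new capacity comes online.
        double projected = currentDemand + growthPerSecond * provisioningSeconds;
        // Round up: we can't run a fractional instance.
        return (long) Math.ceil(projected * safetyFactor);
    }

    public static void main(String[] args) {
        // 100 instances' worth of demand now, growing by 5 per second,
        // with a 10% safety margin. Compare a ~5 s provisioning time
        // against a ~5 ms one.
        System.out.println(requiredInstances(100, 5, 5.0, 1.10));   // prints 138
        System.out.println(requiredInstances(100, 5, 0.005, 1.10)); // prints 111
    }
}
```

With a five-second provisioning window you must keep 138 instances online; with a five-millisecond window, 111, barely above the safety margin itself. That gap is the operating cost that faster provisioning recovers.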
Reduction in provisioning time has been a trend for as long as we can remember. In the last millennium, companies like Dell reduced the time to provision additional server capacity from months to weeks. In the first decade of the present millennium, companies like VMware reduced effective provisioning time from weeks to hours, even minutes. In the last decade, containers popularized by companies like Docker slashed effective provisioning time from minutes to seconds. Each step was a quantum jump in the elasticity and economics of compute capacity.
One of Kontain’s goals is to deliver the next quantum of acceleration of effective provisioning time: From seconds to milliseconds.
What Does This Have To Do With Java Microservices?
Java’s got plenty of assets, but being incredibly fast to instantiate in a container isn’t one of them.
Even with modern microservice frameworks such as Spring, it still typically takes from several to tens of seconds to provision a Java microservice instance. The closer to zero we can drive that time, the lower we can make the cost of operating our app in production without compromising on performance for our users.
What if you could instantiate Java microservices more than an order of magnitude faster?
To test the boundaries of what’s possible, Kontain’s engineering team tested a Hello, world Java microservice. The results were dramatic.
We achieved these results by running our trivial Java microservice in a kontainer. That reduced the time to start the container itself to a handful of milliseconds.
Java microservice frameworks add considerably more startup overhead beyond just starting the container. The Java class loader is very expensive, and the classes have to be loaded in every instance of every container (or kontainer) before the microservice can do useful work. Using Kontain’s InstaStart feature, we compressed the Java- and framework-specific startup overhead to tens of milliseconds.
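One quick way to see this overhead for yourself: the JVM records its own launch time, so a service can report how long the runtime spent on startup and class loading before the first line of application code ran. This is a minimal, framework-free sketch (not Kontain-specific); a real framework-based service would add its own initialization on top of this number.

```java
import java.lang.management.ManagementFactory;

// Minimal sketch: report the milliseconds elapsed between JVM launch and the
// first line of application code. This captures JVM startup plus the class
// loading performed before main() runs.
public class StartupOverhead {
    public static void main(String[] args) {
        long jvmStart = ManagementFactory.getRuntimeMXBean().getStartTime();
        long overheadMs = System.currentTimeMillis() - jvmStart;
        System.out.println("Startup overhead before main(): " + overheadMs + " ms");
    }
}
```

Because this overhead is paid by every instance of every container (or kontainer), it multiplies across a fleet, which is exactly why compressing it to tens of milliseconds matters.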
More complex microservices than our trivial Hello, world test could take significantly longer to start in the native frameworks. We’ve talked with software teams whose Java containers take tens of seconds, or more, to start. Because of this, they take great pains to avoid having to start microservice instances. It works, but it’s expensive. Not only in terms of operating expense, but in devops complexity and developer productivity.
The process for using Kontain to produce these ultra-efficient microservices is easy, and requires no changes to your application source code.
If the potential to start Java microservice instances something like 4000% faster sounds interesting, we’d love to hear from you.