Rule the clouds with Mist v4.2

We are pleased to announce the release of Mist v4.2. This release includes new governance features like cost quotas, machine leases and rules on logs. It also includes enhancements to Mist's support for several public clouds and a set of bug fixes.

Constraints: cost quotas and machine leases

Mist v4.2 improves multi-cloud governance by introducing constraints. Constraints extend role-based access controls (RBAC) and are configured from the Teams section. In this first iteration, Mist supports constraints for implementing cost quotas and machine leases.

Cost quotas help you stay within budget and avoid unpleasant surprises when you receive invoices from your cloud providers. Mist v4.2 supports quotas per team and organization. Quotas apply whenever a team member attempts to create, start or resize a machine. Mist will compare the current run rate to the relevant quota. The requested action will be allowed only if the run rate is below the quota. For more information on how to set up quotas, check out our help documentation here.
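Conceptually, the quota check boils down to a simple comparison. Here is a minimal sketch in Python; the function and parameter names are hypothetical, not Mist's actual API:

```python
def action_allowed(team_run_rate, team_quota, org_run_rate, org_quota):
    """Allow a create/start/resize request only while both the team's
    and the organization's current run rates are below their quotas.
    All names here are illustrative, not Mist's actual API."""
    return team_run_rate < team_quota and org_run_rate < org_quota

# A team with a $500/month quota and a $450/month run rate may still
# request machines; once the run rate reaches the quota, requests are denied.
```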

Machine leases help you reduce machine sprawl. You no longer have to spend valuable time trying to figure out who owns a machine and what to do with it. When a lease expires, machines will get automatically destroyed or stopped. For more details, check out our help documentation here.
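The lease mechanism can be pictured as a periodic check along these lines. This is a sketch under assumed field names, not Mist's actual schema:

```python
from datetime import datetime, timezone

def enforce_lease(machine, now=None):
    """If the machine's lease has expired, return the configured expiry
    action ('stop' or 'destroy') to apply; otherwise return None.
    Field names are illustrative, not Mist's actual schema."""
    now = now or datetime.now(timezone.utc)
    if machine["lease_expires"] <= now:
        return machine["expiry_action"]
    return None
```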

Please keep in mind that the above features are only available in Mist Enterprise Edition (EE) and Mist Hosted Service (HS).

Observation logs

Mist emits logs for every action performed through its API. This is useful for auditing and troubleshooting purposes. In addition to that, Mist v4.2 emits logs whenever it detects changes in your infrastructure. This way, you can keep track of actions that did not happen through Mist.

Mist observation logs

Specifically, Mist v4.2 emits logs when it detects:

  • creation or destruction of machines, volumes, networks and DNS zones,
  • changes in the size of machines (e.g. a machine resized from 2 vCPUs to 4),
  • changes in the status of machines (e.g. a machine going from running to stopped),
  • block storage volumes getting attached or detached.

For more details, check out our help documentation here.
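The idea behind observation logs can be illustrated as a poll-and-diff over infrastructure snapshots. A simplified Python sketch, not Mist's actual implementation:

```python
def observe(previous, current):
    """Compare two snapshots of machine state ({machine_id: state}) and
    emit observation-log entries for anything that changed outside Mist.
    A simplified sketch of the idea, not Mist's implementation."""
    logs = []
    for mid, state in current.items():
        if mid not in previous:
            logs.append((mid, "created"))
        elif previous[mid] != state:
            logs.append((mid, f"state changed: {previous[mid]} -> {state}"))
    for mid in previous:
        if mid not in current:
            logs.append((mid, "destroyed"))
    return logs
```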

Rules on logs

Since the very first versions of Mist, you have been able to set rules on metrics from monitored machines. These rules can trigger actions like email alerts, resource lifecycle actions, script execution etc. Mist v4.2 extends the rules engine to support queries on logs from all supported resource types.

Mist log rules

This opens up several new options, especially when combined with observation logs. Some interesting examples include:

  • Notify me when machines are created or destroyed in my production cloud.
  • Destroy a machine when post-deployment steps fail.
  • Open a ticket in my issue tracker when provisioning fails.

For more information regarding rules on logs, check out our help documentation here.
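At its core, a log rule pairs a set of filters with one or more actions. A toy illustration in Python follows; the rule shape shown here is hypothetical and far simpler than Mist's actual rules engine:

```python
def rule_matches(log_entry, rule):
    """Return True when a log entry satisfies all of a rule's filters.
    A toy query evaluator; Mist's actual rules engine is richer."""
    return all(log_entry.get(k) == v for k, v in rule["filters"].items())

# e.g. "notify me when machines are created in my production cloud":
rule = {"filters": {"action": "create_machine", "cloud": "production"},
        "actions": ["notify"]}
```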

Maxihost bare metals in Mist

Maxihost and Mist logos in the clouds

Mist v4.2 brings support for Maxihost, a provider of on-demand bare metal servers. Maxihost is based in Brazil and serves a wide range of global companies like Riot Games, Algolia, Zoho and more. We love their service for the flexibility and cost efficiency it offers.

If you'd like to learn more about Maxihost, visit their website.

If you are already a Maxihost user, you can add it to Mist by following the instructions here.

Other updates in Mist v4.2

Besides the above, Mist v4.2 includes several fixes and improvements. The most notable are:

  • Support FoundationDB Document Layer as a replacement for MongoDB.
  • Improved volume support and machine provisioning on Microsoft Azure Resource Manager and Alibaba Cloud.
  • Attach existing and new volumes when creating machines on AWS, DigitalOcean and Alibaba Cloud.
  • Cloud-init support for OpenStack, Alibaba Cloud, IBM Cloud and Vultr.
  • Hide unavailable actions in the web UI according to RBAC permissions.
  • Rules can trigger webhook actions.
  • Include alert level description in rule notification actions.


Mist v4.2 focuses on how you can improve governance through features like constraints, observation logs and rules on logs. It brings support for a new bare metal cloud provider and several enhancements to existing ones. Finally, it introduces fixes for bugs and further improves the web UI. The next major release will go out late in Q1. Until then, stay tuned for minor releases on a monthly schedule.

To check out the entire platform, please reach out to us and one of our engineers will give you a quick overview.

If you'd like to try it out for yourselves, sign up for a Mist HS account and begin your 14-day free trial.

Community Edition users can get the latest version and relevant instructions on GitHub.

Mist now supports Alibaba Cloud

Alibaba Cloud and Mist logos

Market share research from Gartner shows that Alibaba Cloud (a.k.a. Aliyun) is the #1 IaaS vendor in Asia Pacific, with almost double the market share of second-place Amazon AWS. Alibaba Cloud also holds the #3 position worldwide, behind only Amazon AWS and Microsoft Azure. It has a very high density of data centers in Asia Pacific and China, but coverage is sparser in the rest of the world. In terms of feature set, it offers all the core services you would expect from a major cloud vendor.

Recently, we noticed a considerable uptick in user requests for Alibaba Cloud support in Mist. Mist v4.1 delivers the first iteration. Now our users can manage their Alibaba Cloud Elastic Compute Service (ECS) instances together with other public or private infrastructure from a single pane of glass.

More on Mist and Alibaba Cloud

Our work with Alibaba Cloud doesn't stop here though. We are happy to announce that Mist is a Technology Partner for Alibaba Cloud, and in the future you should expect deeper integration and collaboration. Stay tuned!

Our experience with Alibaba's team has been very positive and we recommend trying it out, especially if you have workloads that need to run in APAC.

New users can sign up for a free trial.

Users with some initial exposure to Alibaba Cloud can leverage the Starter Package until October 10th, 2019. The Starter Package offers discounted rates across a number of services. More details and price comparisons to AWS can be found here.

Other updates in Mist v4.1

Some of our bigger customers are all-in on self-service DevOps, e.g. SevOne. To make this happen in an organized way, they need very fine-grained control over who has access where.

Mist v4.1 adds another layer of such controls, enabling users to enforce Mist's RBAC policies on cloud locations. This feature is available in Mist Hosted Service (HS) and Mist Enterprise Edition (EE), which come with RBAC support out-of-the-box. RBAC on locations applies both to public clouds and private infrastructure. For example, account owners can now allow their teams to provision resources only on Alibaba ECS EU Central 1 (Frankfurt) availability zone A and vCenter Cluster 2.

Setting up RBAC for locations

Besides the above, Mist v4.1 includes several fixes and improvements. The most notable are:

  • Support for volumes in Packet clouds.
  • Support for new OpenSSH key format.
  • Set filesystem type when creating volumes in DigitalOcean.
  • Create and attach volume on OpenStack from the machine creation form.
  • Support OpenStack API v2.2 & OpenStack Auth API v3.
  • Update date picker in schedules.
  • Fix editing of schedule script parameters.
  • Fix tag editing in lists.
  • Fix price retrieval for GCE Asia regions.


Alibaba Cloud support is the big new feature in Mist v4.1. This release also brings role-based access controls for cloud locations in private and public clouds. Finally, it includes several fixes and improvements for Packet, OpenStack, DigitalOcean, Google Compute Engine and OpenSSH.

Starting today, v4.1 is available on all Mist editions.

For a quick demo, reach out to us and arrange a video call with one of our engineers.

New features, performance and usability improvements in Mist v4.0

We are happy to announce version 4.0 of the Mist Cloud Management Platform. This major new release brings several new features, performance and usability improvements. It also incorporates the lessons we have learned in the past few months while working with teams that manage thousands of resources, e.g. SevOne.

Mist v4.0 updates

Mist now runs on Python v3.7

python programming language logo

Mist v4.0 brings a complete migration from Python v2.7 to Python v3.7. Our goal is to future-proof the code base and take advantage of the latest language improvements.

Thanks to this migration, Mist users will notice considerable improvements in server-side performance.

The migration also allowed us to upgrade to Apache Libcloud v2.4.0 which no longer supports older Python versions. The latest stable version of Libcloud includes many new features for OpenStack, Amazon AWS, Google Cloud Platform, Microsoft Azure and DigitalOcean. You can see a full list of changes here.

If you are considering a similar migration for your projects, check out this post with a nice overview of the differences between Python v2.X and Python v3.X. You can find further useful information in the official Python documentation, here and here. Finally, keep in mind that community support for Python v2.7 ends on January 1st, 2020.

Polymer v2.X and Web Components v1

polymer project logo

In Mist v4.0 the front-end code is in Polymer v2.X, up from Polymer v1.X. This is the first step towards moving to Polymer v3.X. The goal of this transition is to offer improved browser interoperability and performance. It also allows us to easily upgrade 3rd party components for additional usability improvements.

Migrating from Polymer v1.X to v2.X is not trivial because v2.X introduces breaking changes. Before you try something similar, make sure you check out this excellent upgrade guide. For more information on what Polymer v2.X brings to the table, you can check out this document. Since this will probably be a short-term intermediate step before moving to Polymer v3.X, you should also go over the relevant v3.X documents here and here. The good news is that once you're on v2.X, moving to v3.X requires less effort than moving from v1.X to v2.X.

Usability improvements

Alongside the major changes mentioned in the previous paragraphs, Mist v4.0 includes several usability improvements to ease your day-to-day routines. The most notable ones are:

i) Searchable key & network selection widgets in forms.

ii) Collapsible sections in monitoring dashboards.

iii) Export machine monitoring dashboards as PDF.

iv) Improved user interaction when adding "Other Server" Clouds.

v) Widget for selecting existing tags.

Adding existing tags to new machines

vi) Configurable filters in every list that persist in localStorage.

Saving custom search filters

vii) Improved display of JSON & XML metadata.

Browsing machine metadata in JSON

Automatic backup & restore scripts

For Mist Community and Enterprise Edition users who are managing their own Mist installations, v4.0 includes a new backup feature. You can now automatically back up and restore everything, including MongoDB and InfluxDB, by making some simple configuration changes.

Pre and post action hooks

Mist v4.0 allows users to set pre and post action hooks at the cloud level, e.g. for all resources in an OpenStack cloud. This is useful for users with large infrastructure footprints that require highly customized workflows and integrations with 3rd party systems. For example, one of our users takes advantage of this feature for metering and billing purposes. When a new VM is provisioned, a post-action hook notifies the billing system; the same happens after the VM gets destroyed. Based on this information, it is possible to know how many resources were utilized and for how long, which is then translated into an internal cost unit.
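The hook mechanism described above can be sketched as a small event dispatcher. The hook and event names below are illustrative, not Mist's actual API:

```python
def run_hooks(hooks, event, payload):
    """Call every hook registered for an event, e.g. notifying a
    billing system after a VM is created or destroyed.
    Hook and event names here are illustrative, not Mist's API."""
    results = []
    for hook_event, fn in hooks:
        if hook_event == event:
            results.append(fn(payload))
    return results

# A billing integration could register one hook per lifecycle event:
billing_log = []
hooks = [("post_create", lambda p: billing_log.append(("start", p["vm"]))),
         ("post_destroy", lambda p: billing_log.append(("stop", p["vm"])))]
```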


Mist v4.0 is a major stable release which brings lots of changes and significant improvements.

Starting today, v4.0 is available on all Mist editions.

For a quick demo, reach out to us and arrange a video call with one of our engineers.

SevOne revamps Self Service DevOps to move faster and save money

Christos Psaltis profile pic

SevOne logo

SevOne is a leading provider of network monitoring solutions. For its engineering needs it runs several thousand VMs on more than 7 platforms, including VMware vCenter, several versions of Red Hat OpenStack, Kubernetes and more. A few months ago SevOne turned to Mist to improve self-service workflows for its engineers. Empowered by the Mist Cloud Management Platform, SevOne was able to speed up development, save money and invest more resources in key business drivers.

The Case

SevOne, a Boston MA based tech company, builds a suite of network & infrastructure monitoring products. That may sound like an understatement if you consider that SevOne's customers include Verizon, Comcast, eBay, Credit Suisse, Lockheed Martin and many more. In fact, some of the largest networks in the world rely on SevOne to run at peak performance.

SevOne's infrastructure footprint for development and QA runs across public clouds and its own data center, with a mix of bare metal, VMware vCenter, several versions of Red Hat OpenStack and more. Currently, many applications are moving to microservices, using Kubernetes for container orchestration. In most cases the OS of choice is Linux. More than 100 developers from different teams need access to at least a subset of this hybrid infrastructure, comprising several thousand VMs, to meet their day-to-day business needs.

Kevin Williams

"We chose Mist because it was easy to onboard and use. It required no changes in the way we were doing things, while enabling us to iterate and improve." Kevin Williams, Corporate Services Engineer at SevOne

To be more agile and move faster, SevOne adopted a self-service model via a homegrown web-based virtual server provisioning system. However, the SevOne development resources familiar with the homegrown application were often diverted or moved to other posts, making it increasingly difficult to maintain and support the application.

"Bottom line, we had a basic homegrown application that was hard to support, maintain and extend. It ended up holding us back instead of helping us move faster," says Kevin Williams, Corporate Services Engineer at SevOne.

With this experience in mind, SevOne started looking into third-party management platforms. After a few months of testing, SevOne chose Mist.

Kevin notes, "We chose Mist because it was easy to onboard and use. It required no changes in the way we were doing things, while enabling us to iterate and improve. Also, it helps us easily manage our Kubernetes clusters. Finally, we were impressed by their support. The Mist team was very responsive, knowledgeable and helped us hands-on when needed."

Life with Mist

SevOne is currently managing its DevOps infrastructure with Mist Enterprise Edition, installed on-prem. Each SevOne user belongs to a team with specific rights over resources based on Mist role-based access control. To provision applications, SevOne DevOps has prepared a set of templates and scripts. SevOne developers use these to deploy complex applications, like Kubernetes clusters, in just a few clicks. As an added value, SevOne DevOps is able to view who owns resources and how they are utilized.

Kevin comments, "Since roll out, Mist has gained a lot of traction over our homegrown application. Today it's one of the top 3 tools people use on a daily basis. Our developers love how easily they can provision resources with templates and pre-built scripts. For example, when we're about to do a new release we need to provision a lot of VMs for QA and training purposes. Mist gives my end users the freedom to do this right away. They don't have to open a ticket for the IT team like they did in the past. My end users are happy and the IT team doesn't have to perform manual steps to stand VMs up. This alone saves us hundreds of hours in each release cycle. It's also much easier to track users and resources across systems, e.g. which team owns what, how much they are using etc. For example, last weekend an engineer was trying to find one of his VMs. He only had an IP that was not responding because the VM was powered off. With vCenter, tracking this VM would have been time consuming. With Mist it was a matter of seconds."


By adopting Mist, SevOne was able to score multiple wins across the board:

  • SevOne DevOps saved at least 1 full-time equivalent from supporting the old homegrown application. SevOne saved even more by not having to add new features to it. All this effort was diverted to more business critical projects.
  • SevOne developers are happier because they can get resources faster and easier. They are no longer bogged down by details and can focus on the work at hand, saving hundreds of hours in each release cycle.
  • SevOne managers are also happier because productivity increased across the board. They now have better visibility into what each team owns and this paves the way for further optimizations and more savings.

To learn more about how Mist can help you achieve similar results, contact us.

To try Mist right away, sign up for a 14-day free trial.

Open source and DIY enthusiasts can try our Community Edition on Github.

Block storage, new KVM and vSphere features

Today we are happy to announce version 3.3 of the Mist Cloud Management Platform. We are excited about v3.3 because it brings:

  • A brand new volumes section from where you can manage block storage across clouds.
  • Support for multiple network interfaces and static IP configuration for KVM guests.
  • UI performance improvements that allow seamless management of thousands of resources.
  • A new section for quick overview of all your clouds.
  • Saved searches to go through your logs faster.
  • Support for snapshots for VMware vSphere virtual machines.

Mist v3.3 updates


Mist v3.3 brings support for block storage volumes on public and private clouds. Mist auto-discovers your existing volumes in seconds, so you know exactly what you have and where. Right out-of-the-box you will be able to see which of your volumes are in use and which you should delete to save money. On top of all common actions like create, delete, attach, detach etc., you are also able to associate volumes with tags and teams. This allows additional visibility, e.g. quickly finding the owner of a volume. It also allows control through Mist's RBAC, e.g. granting only specific teams the right to create new volumes. Block storage is currently supported for Amazon Web Services, DigitalOcean, Google Cloud Platform and OpenStack.

Mist's new volumes section

KVM networking

One of the most common pain points for KVM users is network configuration; topologies might differ between hosts and guests, DHCP might or might not be there etc. To ease this pain, Mist now offers options for configuring multiple network interfaces per guest. It is also possible to manually configure IP addresses during provisioning.

Configuring networks for KVM guests

Clouds section

Mist v3.3 brings a new clouds section to help you quickly navigate your inventory. From this section you can get a quick overview of which clouds you have connected to Mist and how many machines, networks, volumes and DNS zones are provided by each one.

Mist's new clouds section

Saved searches

Many log searches are fairly repetitive, and when the parameters of a query got complicated, Mist's free-form search was not very helpful. That is why Mist v3.3 introduces saved searches. The only thing you need to do is type your query once and hit save. Every time you log in again, you can simply select it and get the results. In future versions we will extend this functionality to all search forms in Mist's interface.

Mist saved search example
Saved search example

UI performance improvements

When managing several thousands of resources, the Mist UI may need to do a lot of heavy lifting to maintain an up to date overview at all times. In some cases this might impact user experience. In this release, we refactored the front-end code that was responsible for most of the bottlenecks in the UI. The result of this effort is a seamless experience, even when managing thousands of machines.

vSphere snapshots

Some of Mist's biggest customers are heavy VMware users. At the same time, they run a lot of infrastructure on public clouds. For this reason, Mist has become their first stop when they need to perform a certain action. To simplify their daily routines, Mist v3.3 brings support for snapshots of vSphere virtual machines. Users can now create and revert to snapshots from within Mist without having to jump back and forth between vSphere and vCenter.


With so many changes in this version we recommend you sign in and try things out. Starting today, v3.3 is available on all Mist editions.

For a quick demo, reach out to us and arrange a video call with one of our engineers.

The case for cloud neutrality

Dimitris Moraitis profile pic

Computing is the fuel of AI. We need so much more of it and we have to become way more efficient in producing and distributing it.

Clouds want to lock you in

Sun over clouds

Public cloud providers have been frenetically developing new features and services. Most of those generate value by solving real world problems. Quite often, these add-ons are provided for free, which is wonderful, but there is a catch: the overwhelming majority of these awesome products can only use computing power from that same cloud provider. Oh my!

Maybe this doesn't sound so unreasonable. Isn't it natural for a company's products to fit well together? After all, this tight coupling can be beneficial to the end user despite some level of lock-in. For example, coffee capsule machines are not as rare as most gourmet coffee addicts would expect. They are admittedly less messy than grinding your own beans. On the other hand, would you choose a refrigerator that only accepts food products of a single brand? Clearly, a line must be drawn somewhere.

When it comes to computing, it is up to each organization to draw that line. If your workloads are small and predictable, perhaps optimizing your cloud neutrality isn't worth the extra effort. At the same time, many organizations will go to great lengths to reduce their dependence on proprietary software and services that are not sufficiently commoditized.

This fact has been leveraged by Google in the ongoing cloud wars. They launched Kubernetes as an open source project, which quickly became the de-facto standard for orchestrating containers. Amazon, Microsoft & Docker tried to compete but were eventually forced to acknowledge the dominance of Kubernetes. Now they're trying to catch up with Google in a race over who will build the best walled garden around it, as always constraining the compute resources one can add to it.

On another front of that same war, Microsoft acquired Github while Google is funding Gitlab. Both tech conglomerates are employing the very best strategies available to their deep pockets. All that jazz, just to win the hearts of the developers of this world and, more importantly, their computing workloads.

The machine learning / AI revolution is raising the stakes and the complexity involved. A lot of workloads are mostly stateless and idempotent, but often require GPUs or new architectures like TPUs and, soon enough, neuromorphic chips. Every new feature that makes a difference will be leveraged to tighten the lock-in. Is there a viable defense strategy against that fierce technological jiu jitsu?

Turn them into mist!

Bridge covered by clouds

We would like to help ignite the upcoming AI explosion without getting further locked into any cloud. This is why we built Mist. Our mission is to perfect the tools that will streamline the process of provisioning, operating and distributing computing resources from any provider or platform.

We are committed to providing these tools as Free and Open Source software. Thus, the vast majority of our code is included in the Community Edition of the Mist Cloud Management Platform.

In order to finance this effort, we've also launched a set of commercial offerings that enhance the base platform with features requested by our Enterprise users. These include tools for optimizing spending, regulating access, detecting anomalies and reselling excess resources. They are all available as part of the Hosted Service and the Enterprise Edition.

We've come a long way in the last 5 years, but our work is far from complete. A growing number of users of all shapes and sizes depend on our software and services, and the platform has matured significantly and broadened its scope in Mist v3. But let's be honest: it is still an infant. There are so many use cases that we are not yet supporting but cannot afford to ignore if we're serious about democratizing cloud computing.

Some of the areas that we're starting to engage include i) cost based auto-scaling for Kubernetes clusters across clouds and ii) producing increasingly intelligent recommendations that aspire to evolve into an autopilot for DevOps.

If you think you have a multi-cloud use-case that Mist cannot yet address, please let us know. We love being challenged, especially when it's about breaking virtualized chains.

Clouds want to lock us in, let's turn them into mist!

Photo by donabelandewen@flickr CC BY 2.0

This post was originally published by Dimitris Moraitis, Mist's co-founder & CTO, on Quora.

Mist operator for Red Hat OpenShift Container Platform

Mist has partnered with Red Hat to develop a Kubernetes Operator of the Mist platform for Red Hat OpenShift.


For software companies, building and maintaining cloud-native applications today is not a simple task - they must address significant complexities during the initial build and provide maintenance across siloed cloud footprints. Helping to address these challenges are Operators, a method of packaging, deploying and managing a Kubernetes application. A Kubernetes application is an application that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl tooling. Kubernetes Operators simplify application development for Kubernetes platforms by abstracting away complexities and coding human operational knowledge into applications, creating services that can function without human intervention.
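To make the Operator pattern concrete: an Operator typically watches a custom resource that declares the desired state of the application and then reconciles the cluster toward it. Below is a purely hypothetical example of what such a resource for Mist might look like; the API group and fields are invented for illustration only.

```yaml
# Hypothetical custom resource only; not the actual Mist Operator schema.
apiVersion: mist.example.com/v1alpha1
kind: Mist
metadata:
  name: mist-sample
spec:
  replicas: 1        # desired number of Mist API replicas
  version: "4.2"     # platform version the Operator should deploy
```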

This service automation can improve the lives of both developers and operations teams, enabling Kubernetes applications to function like cloud services, from self-healing and self-provisioning capabilities to easier management and maintenance at production-scale. Built from the Operator Framework, applications can be instantly updated across all footprints simultaneously, limiting IT downtime and easing the burden of maintaining large clusters of Kubernetes applications.

In this context, Red Hat has launched a new initiative to support software companies in shipping their products as Kubernetes Operators.

You can read the full press release for Red Hat's initiative here.

If you're interested in beta testing the Mist Operator, please reach out to us.

Mist v3 released, featuring rules on groups of resources and monitoring with InfluxDB - also available on-prem

We are happy to announce version 3 of the Mist Cloud Management Platform.

Some of the most notable changes are:

  • Monitoring with InfluxDB and Telegraf. Mist now offers a fully open source monitoring option to our Community Edition users. At the same time, the whole monitoring service is more extensible and easier to maintain.
  • New rules engine. It's now trivial to set rules that apply to groups of machines, both existing and new ones. This almost eliminates the mundane effort of setting up such policies across your infrastructure. If you'd like to dig deeper, check out our docs here.
  • Improved VMware vSphere support. Provisioning, networks (list & assign) and retrieval of extended metadata.
  • Enterprise Edition. The fully featured Mist platform is now available on-prem, behind your firewall.

Other items that are part of this release:

  • Support for ClearOS and ClearCenter SDN.
  • More network options when provisioning machines on Google Compute Engine.
  • Overhauled OpenStack support.
  • It's now possible to bring together standalone machines into a single virtual "Cloud".
  • Many usability and performance improvements.
  • New interactive API docs using OpenAPI 3.0 spec & Swagger UI which you can find here.
  • Finally, we've remorselessly eliminated more than a few bugs.

Our biggest open source release yet

Our engineers have been hard at work over the past months to restructure the code and bring the open-source version up to speed with our SaaS offering.

Today, we are pleased to announce that the new version is out, and it is our biggest open-source release so far! It comes with an easier installation process using Docker Compose and many new features:

  • Support for more providers
  • Faster and better designed UI
  • Run scripts and Ansible playbooks
  • Schedule actions
  • Manage DNS records, networks and logs
  • User and team management
  • Estimation of infrastructure costs

Last but not least, the tagging mechanism has been vastly improved and is now deeply integrated with the scheduler and the cost management system.

And we're not stopping there. Now that the restructuring is over, we will continue to improve the open-source version by adding orchestration and monitoring support in the coming months.

Here is a rundown of the most important changes in this release.

User and team management

Until now, the open-source version supported a single user and tenant, without any authentication system built in. With this release, multiple users and organizations can access the same instance, using separate accounts. Each user can create multiple organizations and multiple teams inside those organizations in order to invite other users to manage the same infrastructure.

Scripts & Schedules

Another feature previously only found on the SaaS version is the ability to schedule and run scripts on your servers. Scripts can be plain executables or Ansible playbooks.

If you want to know more about script management, hop over to the documentation.

You can also use Schedules to perform specific tasks (like reboot or destroy a VM) or run scripts on your servers periodically. Schedules can be run on multiple machines, grouped by the same tag or selected by their uptime and/or cost.
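Selecting which machines a schedule targets can be sketched as a simple filter. This is a minimal illustration; the field names are assumptions, not Mist's actual schema:

```python
def select_machines(machines, tag=None, min_uptime_hours=None):
    """Pick the machines a schedule applies to: grouped by a shared tag
    and/or selected by uptime. Field names are illustrative."""
    selected = []
    for m in machines:
        if tag is not None and tag not in m["tags"]:
            continue
        if min_uptime_hours is not None and m["uptime_hours"] < min_uptime_hours:
            continue
        selected.append(m["name"])
    return selected
```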

DNS and network management

You can now add, delete and manage your zones along with the rest of your infrastructure, directly through Mist. We currently support Amazon's Route 53, Google, DigitalOcean, Linode, Rackspace, SoftLayer and Vultr, with more providers coming soon.

Additionally, network management has also landed on the open-source version, and you can create and delete networks more easily than ever before.

Detailed logs

Logging is an important aspect of infrastructure management and security. Every action and event happening in Mist leaves an audit trail. The new dashboard presents a searchable list of the latest log entries for all of your infrastructure, while each resource page shows detailed log listings with all entries related to that resource.

Revamped architecture using microservices

The open-source code has also been broken down to smaller repositories. The code is now a lot cleaner and more manageable. The UI, API, landing page, and even the test suite are now separate repositories that act as submodules to the main git repository. Each part comes with its own docker image that gets fired up as a microservice by Docker Compose.

New user interface based on web components

Built using Polymer and the latest web component standards, the new UI is much faster and looks better. It reduces the browser load and can accommodate large installations with hundreds or even thousands of machines. Web components are a W3C standard supported by all major browsers. They enable more complex applications while encouraging better reusability and separation of concerns between components.


Want to check out the new version? The installation steps have been greatly simplified. First, you need to install Docker Compose.

With Docker Compose installed, download the release yaml file and fire up Mist:

docker-compose up -d

Sit back for a few minutes. The first time might take a while as it downloads all the required Docker images. When it's ready you should be able to visit http://localhost and see the landing page. Port 80 needs to be available as it will be mapped to the nginx container.

To create a user for the first time, sign up using your email. By default, Mist will forward all email to the internal mailmock container. You can get the confirmation link by tailing the logs of that container:

docker-compose logs mailmock

You can update the mailer settings and configure Mist to use your own email server by editing the settings file under config/. After updating the MAILER_SETTINGS parameter, you'll need to restart Mist for the changes to take effect:

docker-compose restart api celery

You can also add admin users through the command line:

docker-compose exec api bin/adduser --admin

Enter a password and use it to sign in at http://localhost

Welcome to Mist! Enjoy!

How to drive down dev infra cost by 60% (or more)

In this post, I will show you how to use Mist's cloud scheduler to automatically stop VMs during non-business hours, so you stop paying for VMs that are not actively being used.

You can think of the cloud scheduler as a programmable on/off/destroy switch for your VMs – in our use case it dynamically turns them off on a set schedule. 

Let's get started…

Step 1 - Start by adding your cloud(s) and auto-discovering your VMs.

Skip this step if you already have clouds in your account.

  1. Log in to your account
  2. Click Add Cloud

Mist will automatically fetch and list all your VMs across all your cloud environments to give you an inventory of your resources.

Step 2 - View cost and usage reports.

Next, view the cost and usage reports to spot waste. To view the reports:

  1. Click on Insights (the last option on the side navigation bar)
  2. Use the cost, inventory, and usage reports to understand your usage patterns and identify VMs that can be powered off.

Note: Insights is currently in private beta. Contact support to enable it for your account.

Step 3 - Create a schedule to stop VMs during non-business hours and stop burning cash.

  1. Click on Schedules
  2. Click Add Schedule
  3. Complete the form and save

Schedules can be applied to either specific VMs or to tags. 

Now the scheduled task will stop all targeted VMs on the schedule you created. On most public clouds you don't pay for a stopped VM's compute time, although attached storage may still be billed. The scheduler makes it really easy to stop machines across all your public and private clouds and, hopefully, drastically reduce the monthly costs associated with your non-production VMs.
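The headline savings figure follows from simple arithmetic, assuming non-production VMs only need to run during extended business hours:

```python
# Back-of-the-envelope arithmetic: if non-production VMs run only
# during business hours (say 12 hours a day, 5 days a week) instead
# of 24/7, their compute bill drops by roughly 64%.
business_hours = 12 * 5   # 60 running hours per week
full_week = 24 * 7        # 168 hours per week
savings = 1 - business_hours / full_week
print(f"{savings:.0%}")   # prints "64%"
```

Tighter schedules (e.g. 8 hours a day, 5 days a week) push the savings even higher, which is where figures above 60% come from.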

The scheduler works with AWS, Azure, GCE, IBM/SoftLayer, Rackspace, DigitalOcean, KVM, VMware, OpenStack, and more.
