Block storage, new KVM and vSphere features

Today we are happy to announce version 3.3 of the Mist Cloud Management Platform. We are excited about v3.3 because it brings:

  • A brand new volumes section where you can manage block storage across clouds.
  • Support for multiple network interfaces and static IP configuration for KVM guests.
  • UI performance improvements that allow seamless management of thousands of resources.
  • A new section for a quick overview of all your clouds.
  • Saved searches to go through your logs faster.
  • Snapshot support for VMware vSphere virtual machines.

Mist v3.3 updates

Volumes

Mist v3.3 brings support for block storage volumes on public and private clouds. Mist auto-discovers your existing volumes in seconds, so you know exactly what you have and where. Right out of the box, you can see which of your volumes are in use and which you could delete to save money. On top of common actions like create, delete, attach and detach, you can also associate volumes with tags and teams. This adds visibility, e.g. quickly finding the owner of a volume, and enables control through Mist's RBAC, e.g. giving only specific teams the right to create new volumes. Block storage is currently supported for Amazon Web Services, DigitalOcean, Google Cloud Platform and OpenStack.

Mist's new volumes section
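If you prefer automation, the same actions are exposed through Mist's REST API. Here is an illustrative sketch of creating a volume; the endpoint path and payload fields below are assumptions, so check the interactive API docs for the exact call.

# Hypothetical example of creating a volume through Mist's API;
# the endpoint path and parameters are assumptions, not the documented call
curl -X POST \
     -H "Authorization: $MIST_API_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"name": "data-vol", "size": 10}' \
     "https://mist.io/api/v1/clouds/$CLOUD_ID/volumes"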

KVM networking

One of the most common pain points for KVM users is network configuration; topologies differ between hosts and guests, DHCP may or may not be available, and so on. To ease this pain, Mist now offers options for configuring multiple network interfaces per guest. It is also possible to manually configure IP addresses during provisioning.

Configuring networks for KVM guests
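Mist manages KVM hosts through libvirt; for comparison, doing the same by hand with virsh looks roughly like this (the guest, network, MAC and IP values are examples):

# Attach a second NIC to a guest and persist it across reboots
virsh attach-interface --domain guest1 --type network \
      --source isolated --model virtio --config --live
# Pin a static IP for the guest's MAC in the libvirt network's DHCP
virsh net-update isolated add ip-dhcp-host \
      "<host mac='52:54:00:6c:3c:01' ip='192.168.100.10'/>" --live --config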

Clouds section

Mist v3.3 brings a new clouds section to help you quickly navigate your inventory. From this section you can get a quick overview of which clouds you have connected to Mist and how many machines, networks, volumes and DNS zones are provided by each one.

Mist's new clouds section

Saved searches

Many log searches are fairly repetitive, and when a query's parameters get complicated, retyping it in Mist's free-form search becomes tedious. That is why Mist v3.3 introduces saved searches. Type your query once and hit save. Every time you log in again, you can simply select it and get the results. In future versions we will extend this functionality to all search forms in Mist's interface.

Saved search example

UI performance improvements

When managing several thousand resources, the Mist UI has to do a lot of heavy lifting to maintain an up-to-date overview at all times. In some cases this could impact the user experience. In this release, we refactored the front-end code responsible for most of the UI's bottlenecks. The result is a seamless experience, even when managing thousands of machines.

vSphere snapshots

Some of Mist.io's biggest customers are heavy VMware users. At the same time, they run a lot of infrastructure on public clouds. For this reason, Mist has become their first stop when they need to perform an action. To simplify their daily routines, Mist v3.3 brings support for snapshots of vSphere virtual machines. Users can now create and revert to snapshots from within Mist, without having to jump back and forth between vSphere and vCenter.
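Under the hood these actions map to vSphere's snapshot API. For comparison, the equivalent operations with VMware's govc CLI look like this (VM and snapshot names are placeholders):

# Create a snapshot of a VM, then revert to it (govc CLI; assumes
# GOVC_URL and credentials are set in the environment)
govc snapshot.create -vm my-vm pre-upgrade
govc snapshot.revert -vm my-vm pre-upgrade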

Conclusion

With so many changes in this version, we recommend you sign in to Mist.io and try things out. Starting today, v3.3 is available on all Mist editions.

For a quick demo, reach out to demo@mist.io and arrange a video call with one of our engineers.

The case for cloud neutrality


Computing is the fuel of AI. We need so much more of it and we have to become way more efficient in producing and distributing it.

Clouds want to lock you in

Sun over clouds

Public cloud providers have been frenetically developing new features and services. Most of those generate value by solving real world problems. Quite often, these add-ons are provided for free, which is wonderful, but there is a catch: the overwhelming majority of these awesome products can only use computing power from that same cloud provider. Oh my!

Maybe this doesn't sound so unreasonable. Isn't it natural for a company's products to fit well together? After all, this tight coupling can be beneficial to the end user despite some level of lock-in. For example, coffee capsule machines are not as rare as most gourmet coffee addicts would expect. They are admittedly less messy than grinding your own beans. On the other hand, would you choose a refrigerator that only accepts food products of a single brand? Clearly, a line must be drawn somewhere.

When it comes to computing, it is up to each organization to draw that line. If your workloads are small and predictable, perhaps optimizing your cloud neutrality isn't worth the extra effort. At the same time, many organizations will go to great lengths to reduce their dependence on proprietary software and services that are not sufficiently commoditized.

This fact has been leveraged by Google in the ongoing cloud wars. They launched Kubernetes as an open source project, which quickly became the de facto standard for orchestrating containers. Amazon, Microsoft and Docker tried to compete but were eventually forced to acknowledge Kubernetes' dominance. Now they're racing Google to build the best walled garden around it, as always constraining the compute resources one can add to it.

On another front of that same war, Microsoft acquired Github while Google is funding Gitlab. Both tech conglomerates are employing the very best strategies available to their deep pockets. All that jazz, just to win the hearts of the developers of this world and, more importantly, their computing workloads.

The machine learning / AI revolution is raising the stakes and the complexity involved. A lot of workloads are mostly stateless and idempotent, but often require GPUs or new architectures like TPUs and, soon enough, neuromorphic chips. Every new feature that makes a difference will be leveraged to tighten the lock-in. Is there a viable defense strategy against that fierce technological jiu-jitsu?

Turn them into mist!

Bridge covered by clouds

We would like to help ignite the upcoming AI explosion without getting further locked into any cloud. This is why we built Mist. Our mission is to perfect the tools that streamline the process of provisioning, operating and distributing computing resources from any provider or platform.

We are committed to providing these tools as Free and Open Source software. Thus, the vast majority of our code is included in the Community Edition of the Mist Cloud Management Platform.

In order to finance this effort, we've also launched a set of commercial offerings that enhance the base platform with features requested by our Enterprise users. These include tools for optimizing spending, regulating access, detecting anomalies and reselling excess resources. They are all available as part of the Hosted Service and the Enterprise Edition.

We've come a long way in the last 5 years, but our work is far from complete. A growing number of users of all shapes and sizes depend on our software and services, and the platform has matured significantly and broadened its scope in Mist v3. But let's be honest: it is still an infant. There are so many use cases that we don't yet support but cannot afford to ignore if we're serious about democratizing cloud computing.

Some of the areas we're starting to engage with include i) cost-based auto-scaling for Kubernetes clusters across clouds and ii) producing increasingly intelligent recommendations that aspire to evolve into an autopilot for DevOps.

If you think you have a multi-cloud use-case that Mist cannot yet address, please let us know. We love being challenged, especially when it's about breaking virtualized chains.

Clouds want to lock us in, let's turn them into mist!

Photo by donabelandewen@flickr CC BY 2.0

This post was originally published by Dimitris Moraitis, Mist.io's co-founder & CTO, on Quora.

Mist operator for Red Hat OpenShift Container Platform

Mist.io has partnered with Red Hat to develop a Kubernetes Operator for the Mist platform on Red Hat OpenShift.


For software companies, building and maintaining cloud-native applications today is not a simple task - they must address significant complexities during the initial build and provide maintenance across siloed cloud footprints. Helping to address these challenges are Operators, a method of packaging, deploying and managing a Kubernetes application. A Kubernetes application is an application that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl tooling. Kubernetes Operators simplify application development for Kubernetes platforms by abstracting away complexities and coding human operational knowledge into applications, creating services that can function without human intervention.

This service automation can improve the lives of both developers and operations teams, enabling Kubernetes applications to function like cloud services, from self-healing and self-provisioning capabilities to easier management and maintenance at production-scale. Built from the Operator Framework, applications can be instantly updated across all footprints simultaneously, limiting IT downtime and easing the burden of maintaining large clusters of Kubernetes applications.
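For a sense of what this looks like in practice, here is a minimal sketch of the kind of CustomResourceDefinition an operator watches. The Mist kind and group below are hypothetical examples for illustration, not the shipped operator's actual resources.

# Minimal CRD sketch (Kubernetes apiextensions v1beta1); the "Mist"
# kind and group are hypothetical, for illustration only
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: mists.mist.io
spec:
  group: mist.io
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Mist
    plural: mists
    singular: mist

Once such a CRD is registered, the custom resource can be managed with kubectl like any built-in object, and the operator continuously reconciles the declared state.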

In this context, Red Hat has launched a new initiative to support software companies in shipping their products as Kubernetes Operators.

You can read the full press release for Red Hat's initiative here.

If you're interested in beta testing the Mist Operator please reach out to support@mist.io.

Mist v3 released, featuring rules on groups of resources and monitoring with InfluxDB - also available on-prem

We are happy to announce version 3 of the Mist.io Cloud Management Platform.

Some of the most notable changes are:

  • Monitoring with InfluxDB and Telegraf. Mist.io now offers a fully open source monitoring option to our Community Edition users. At the same time, the whole monitoring service is more extensible and easier to maintain.
  • New rules engine. It's now trivial to set rules that apply to groups of machines, both existing and new ones. This almost eliminates the mundane effort of setting up such policies across your infrastructure. If you'd like to dig deeper, check out our docs here.
  • Improved VMware vSphere support. Provisioning, network listing & assignment, and extended metadata retrieval.
  • Enterprise Edition. The fully featured Mist platform is now available on-prem, behind your firewall.

Other items that are part of this release:

  • Support for ClearOS and ClearCenter SDN.
  • More network options when provisioning machines on Google Compute Engine.
  • Overhauled OpenStack support.
  • It's now possible to bring together machines into a single virtual "Cloud".
  • Many usability and performance improvements.
  • New interactive API docs using OpenAPI 3.0 spec & Swagger UI which you can find here.
  • Finally, we've remorselessly eliminated more than a few bugs.

Our biggest Mist.io open source release yet

Our engineers have been hard at work over the past months to restructure the Mist.io code and bring the open-source version up to speed with the SaaS at https://mist.io

Today, we are pleased to announce that the new version is out, and it is our biggest open-source release so far! It comes with an easier installation process using Docker Compose and many new features:

  • Support for more providers
  • Faster and better designed UI
  • Run scripts and Ansible playbooks
  • Schedule actions
  • Manage DNS records, networks and logs
  • User and team management
  • Estimation of infrastructure costs

Last but not least, the tagging mechanism has been vastly improved and is now deeply integrated with the scheduler and the cost management system.

And we're not stopping there. Now that the restructuring is over, we will continue to improve the open-source version by adding orchestration and monitoring support in the coming months.

Here is a rundown of the most important changes in this release.

User and team management

Until now, the open-source version supported a single user and tenant, without any authentication system built in. With this release, multiple users and organizations can access the same Mist.io instance, using separate accounts. Each user can create multiple organizations and multiple teams inside those organizations in order to invite other users to manage the same infrastructure.

Scripts & Schedules

Another feature previously found only on the SaaS version is the ability to schedule and run scripts on your servers. Scripts can be executables or Ansible playbooks.

If you want to know more about script management, hop over to the Mist.io documentation.

You can also use Schedules to perform specific tasks (like rebooting or destroying a VM) or to run scripts on your servers periodically. Schedules can target multiple machines, grouped by tag or selected by their uptime and/or cost.

DNS and network management

You can now add, delete and manage your DNS zones along with the rest of your infrastructure, directly through Mist.io. We currently support Amazon Route 53, Google, DigitalOcean, Linode, Rackspace, SoftLayer and Vultr, with more providers coming soon.

Additionally, network management has also landed on the open-source version, and you can create and delete networks more easily than ever before.

Detailed logs

Logging is an important aspect of infrastructure management and security. Every action and event happening in Mist.io leaves an audit trail. The new dashboard presents a searchable list of the latest log entries for all of your infrastructure, while on each resource page there are detailed log listings with all entries related to that resource.

Revamped architecture using microservices

The Mist.io open-source code has also been broken down to smaller repositories. The code is now a lot cleaner and more manageable. The Mist.io UI, API, landing page, and even the test suite are now separate repositories that act as submodules to the main Mist.io git repository. Each part comes with its own docker image that gets fired up as a microservice by Docker Compose.
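One practical consequence of the submodule layout: a plain clone won't pull everything, so fetch the source recursively (the repository URL matches the release assets referenced below):

# Fetch the main repo plus the UI, API and other submodules in one go
git clone --recursive https://github.com/mistio/mist.io.git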

New user interface based on web components

Built using Polymer and the latest web component standards, the new UI is much faster and looks better. It reduces the browser load and can accommodate large installations with hundreds or even thousands of machines. Web components are a W3C standard supported by all major browsers. They enable more complex applications while encouraging better reusability and separation of concerns between components.

Installation

Want to check out the new version? The installation steps have been greatly simplified. First, you need to install Docker Compose: https://docs.docker.com/compose/install/

With Docker Compose installed, download the release yaml file and fire up Mist.io:

wget https://github.com/mistio/mist.io/releases/download/v2.0.0/docker-compose.yml
docker-compose up -d

Sit back for a few minutes. The first run might take a while, as all the required Docker images are downloaded. When it's ready, you should be able to visit http://localhost and see the Mist.io landing page. Port 80 needs to be available, as it will be mapped to the nginx container.

To create a user for the first time, sign up using your email. By default, Mist.io forwards all email to the internal mailmock container. You can get the confirmation link by tailing the logs of that container.

docker-compose logs mailmock

You can configure Mist.io to use your own email server by editing the settings file at config/settings.py and updating the MAILER_SETTINGS parameter.
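For reference, a hypothetical override might look like the following; the key names here are illustrative, so check the comments in the shipped settings file for the exact ones.

# config/settings.py -- hypothetical mailer override (key names are
# illustrative; consult the shipped settings file)
MAILER_SETTINGS = {
    'mail.host': 'smtp.example.com',
    'mail.port': 587,
    'mail.username': 'mist',
    'mail.password': 'secret',
    'mail.tls': True,
}

After saving, restart the affected services for the changes to take effect: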

docker-compose restart api celery

You can also add admin users through the command line.

docker-compose exec api bin/adduser --admin user@example.com

Enter a password and use it to sign in at http://localhost

Welcome to Mist.io! Enjoy!

How to drive down dev infra cost by 60% (or more)

In this post I'll show you how to use Mist.io's cloud scheduler to automatically stop VMs during non-business hours, so you don't keep paying for machines that aren't actively being used.

You can think of the cloud scheduler as a programmable on/off/destroy switch for your VMs – in our use case it dynamically turns them off on a set schedule. 

Let's get started…

Step 1 - Start by adding your clouds and auto-discovering your VMs.

Skip this step if you already have clouds in your Mist.io account.

  1. Log in to your Mist.io account
  2. Click Add Cloud

Mist.io will automatically fetch and list all your VMs across all your cloud environments to give you an inventory of your resources.

Step 2 - View cost and usage reports.

Next, view the cost and usage reports to spot waste. To view the reports:

  1. Click on Insights (last option on the side navigation bar)
  2. Use the cost, inventory, and usage reports to understand your usage patterns and identify VMs that can be powered off.

Note: Insights is currently in private beta. Contact Mist.io support to enable it for your account.

Step 3 - Create a schedule to stop VMs during non-business hours and stop burning cash.

  1. Click on Schedules
  2. Click Add Schedule
  3. Complete the form and save

Schedules can be applied either to specific VMs or to tags.

Now the scheduled task will stop the targeted VMs on the schedule you created. On most public clouds, a stopped VM no longer accrues compute charges. The scheduler makes it really easy to stop machines across all your public and private clouds and, hopefully, drastically reduce the monthly costs associated with non-production VMs.

The scheduler works with AWS, Azure, GCE, IBM/SoftLayer, Rackspace, DigitalOcean, KVM, VMware, OpenStack, and more.

Kubernetes basics & monitoring: webinar wrap-up

Cheers again to all who joined us last week for another in-depth discussion on Kubernetes and monitoring with our friends at GigaSpaces. We got a number of great questions and are looking forward to doing more of these events, in a continuing series.

Click here to watch the recorded video, or check out the presentation on SlideShare. Additionally, we've posted fresh, ready-to-run k8s manifests now available on GitLab.

Missed our October event on moving to microservices: how to plan and time migrations, avoid the gotchas, and more? Join us this Thursday for a co-founders' view on these topics, this time joined by Aaron Welch from Packet for his take on them.

Migrating to microservices webinar wrap-up

Thanks so much to all who turned out for last week's Moving from Monolith to Microservices webinar with Cloudify!

We were honored to have hundreds of registrations and attendees from a wide range of countries, industries, and companies to discuss microservices, containers, and how to simplify the provisioning of Kubernetes clusters.

Click here to view the recording, also feel free to view the deck we presented on SlideShare.

We received questions via both the Q&A and chat features, and some of the latter were submitted anonymously. If anyone has still unanswered or additional questions, and/or any feedback, just drop us a line from within your Mist.io account. Also, please connect with us on your favorite social networks: we're on Twitter, LinkedIn, Facebook and Google+.

More to come soon,

The Mist.io Team

====

Update 10/27/16 8:14 AM PDT:

Additional Q&A

So the microservices are not started with something like Docker Compose for development?

We tried setting up a development configuration with Docker Compose about a year ago, but it wasn't working so well, especially when we had to restart services frequently during development. Then we tried a single-node Kubernetes setup for local development using Vagrant and VirtualBox, which ended up adding a lot of cognitive and computing load to the dev process. For the time being, we're doing development in a fat Docker container that has all services running with stable network interfaces. It's not perfect though, because the container keeps getting fatter. In the meantime, Docker Compose has improved and Minikube seems to be improving the local Kubernetes experience. We plan to give both of them another shot pretty soon.

Has anything gone wrong in prod yet? What caused it? How did it get resolved?

Some celery pods would on rare occasions lose connectivity with RabbitMQ. They stayed in a running state but would not execute their workloads, leading to unreliable system behavior that was very hard to reproduce, because all other pods of the same type kept working. After we figured it out, we made sure it won't happen again by writing a special health check that Kubernetes uses to verify that a pod is actually doing work. Now, if it happens again, Kubernetes will restart the affected pod.
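For the curious, a minimal sketch of that kind of exec-based liveness probe follows (a container spec excerpt; the exact command we run may differ, and the celery flags below are illustrative):

# Liveness probe sketch: ask the celery worker to answer a ping; if it
# can't reach RabbitMQ it won't reply, the probe fails, and Kubernetes
# restarts the pod (command details are illustrative)
livenessProbe:
  exec:
    command: ["sh", "-c", "celery inspect ping -d celery@$HOSTNAME"]
  initialDelaySeconds: 60
  periodSeconds: 30
  timeoutSeconds: 10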

How much culture change is required for DevOps to move to microservices?

That depends on your current culture. If it already includes DevOps and CI/CD, then you mostly need more of the same, more streamlined, provided as a service to your entire organization. Also, take some steps to cultivate a microservices/component-oriented mindset and know-how across your teams, to help them build things that stand on their own as well as together.

Where do you store the logging for monitoring and auditing?

In Elasticsearch, which we're consuming as a service.

Do you forklift everything or you do opportunistic migration to microservices?

You can forklift some existing modules into containers and pods but probably not all of them. I think you should be opportunistic in exploiting natural partitions in your existing architecture. Keep your old setup running in parallel with the microservices setup, while you incrementally carve out pieces of your monolith and deploy them as microservices.

How do you handle spikes in utilization with Kubernetes?

We still have checks that alert us about spikes in case we need to make manual adjustments. We've configured our pods to scale out when their CPU consumption increases, and we're working on improving the metrics and thresholds that trigger scale-up events. We're also experimenting with auto-scaling the cluster itself, and we plan to provide this functionality to our users as part of the Kubernetes blueprint, in a vendor-agnostic manner.
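For reference, CPU-based pod autoscaling can be wired up with a one-liner; the deployment name and thresholds below are placeholders, not our production settings:

# Create a HorizontalPodAutoscaler targeting 80% CPU utilization
# (names and values are examples)
kubectl autoscale deployment api --cpu-percent=80 --min=2 --max=10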

Managing virtual private clouds (VPCs) in protected networks using VPN

Mist.io helps you manage and monitor your virtual machines across different clouds, giving you total visibility and control of your hybrid infrastructure. Until now, connectivity barriers existed in scenarios that required the Mist.io SaaS platform to access VMs sitting in Virtual Private Clouds (VPCs), or bare metal machines and Docker containers set up in private networks. With the introduction of support for Virtual Private Networks (VPNs), Mist.io can now access private networks, or networks with restricted access from the public Internet, enabling users to manage and monitor their machines over a secure connection.

Mist.io's VPN functionality is based on OpenVPN, which creates secure point-to-point connections (or tunnels) to remote networks. OpenVPN can reach private networks by traversing network address translators (NATs) and firewalls, and it secures its tunnels through the exchange of keys.

Setting up the VPN

To set up a new VPN, visit the Tunnels section from your dashboard menu and click on the Add your tunnels button. (VPN support is implemented in Mist.io's upcoming UI. To access it, click on the user thumbnail on the top right, and choose Beta UI.) Type in a name for your tunnel and, optionally, a description. Then, in the CIDRs field, add the private IP range where the cloud you want to manage resides. Mist.io will choose two random IPv4 addresses for the endpoints of the VPN tunnel. If you want to exclude some of the network's addresses to avoid IP conflicts, you can fill them in the excluded CIDRs field.


Once you're done, click on the Add button. Mist.io will create the tunnel. Click on it, and Mist.io will provide you with a bash script that you'll need to run on your VPN client - usually one of the machines or the router of your private network.


When deploying your VPN client, make sure that there are no firewall rules blocking incoming or forwarded traffic. Your VPN client needs to allow data both in from and out to Mist.io; the UDP port that is used can be seen in the script above. Additionally, make sure that the machine where the VPN client resides can forward packets to your local VMs, and ensure your firewall and iptables rules (if any) are properly configured.
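In practice, on a Linux VPN client this means enabling IP forwarding and permitting traffic between the tunnel and your LAN. A typical setup looks like this (interface names are examples; adjust to your environment):

# Enable packet forwarding on the VPN client machine
sysctl -w net.ipv4.ip_forward=1
# Allow traffic to flow between the VPN tunnel (tun0) and the private
# network interface (eth0)
iptables -A FORWARD -i tun0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o tun0 -m state --state RELATED,ESTABLISHED -j ACCEPT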

As soon as you have established your VPN tunnels, you can go ahead and add your infrastructure to Mist.io. Your private network IPs will be accessible to the Mist.io service as if they were public. Just go ahead and add your private clouds and perform actions on private VMs like you normally would!

Real time cost reporting

How many VMs did the QA team spin up this week? Do we have any idle development VMs that could be powered down? How much would we save if we transferred our infrastructure to another cloud provider? How much are we actually spending right now?

These questions are becoming harder to answer as companies embrace flexible and diverse infrastructure environments. To answer them, managers need access to more real-time infrastructure data. And while moving to a multi-cloud environment can bring tremendous gains in flexibility and speed, it can also bring complexity and a lack of visibility.

Increasing infrastructure visibility has always been one of the key benefits of using Mist.io. Managing all your servers from a single place helps you find and power down forgotten, idle servers that can greatly add to your expenses. And with cloud cost reporting coming soon, you'll be able to know exactly how much you're spending at any moment. Even better, you'll be able to see how much you're spending on each cloud, and make informed decisions on where you can cut costs and how much switching to a new cloud provider would save you.

You can even drill down to the machine level to pinpoint which servers are significantly impacting your expenses.

Mist.io estimates monthly costs based on the data we get from the providers, but you can also set a custom price for your machines. To learn more about how the cost reporting feature works, check out the relevant documentation.

Cloud cost reporting is tied to our new, Polymer-based UI, which is coming out in a couple of months. However, if you're interested in trying it out now, we would love some early feedback. Just drop us an email at support@mist.io and we'll enable access to the new UI and the cost reporting feature.
