How to know if you need multicloud - and succeed if you do

By Christos Psaltis

Originally published at DevOps.com


There are many good reasons for running in multiple clouds, as well as times when multicloud environments are unavoidable due to organizational history or legal reasons. However, operating in multiple clouds adds complexity, increases operational overhead and can become expensive. Organizations should only pursue it when they have clear business reasons for doing so.

In this article, I'm going to cover some common situations where organizations can get value out of multicloud, as well as strategies for success. Because multicloud is hard, the first question organizations should ask themselves is, "Do we really need multicloud?" So, the first order of business is to outline when technology leaders are better off avoiding multicloud altogether.

When multicloud is a bad idea

A single application and its data should never span multiple clouds if it can be avoided. Here are some common reasons organizations adopt a multicloud approach when they would be better off avoiding it.

Short-term bursting (in hybrid cases). "I'll just tap into a public cloud if my private cloud suddenly needs extra capacity."

Disaster recovery. "Let's run this application on AWS and Google in case one of them disappears overnight."

Staying cloud-agnostic. "By running on multiple clouds, we are no longer locked in."

In all the cases above, network latency and egress cost will hurt your application and your business. In the best-case scenario, you will spend a lot of time and effort prematurely optimizing for unlikely scenarios.

Unless you're perfectly prepared, please, don't go there.

When multicloud is a necessary evil

There are also plenty of times when multicloud is a drag on the engineering organization and a drag on the business - but can't be avoided. Here are totally legitimate reasons to have a multicloud setup that will still require more effort to maintain than operating in a single cloud.

Mergers and acquisitions. Some organizations can end up operating in public clouds and private clouds as they acquire other companies. The effort required to migrate everything to one cloud provider often isn't worth it, so everyone continues using the same cloud they used pre-acquisition.

Legal. Depending on the type of data you gather and the jurisdictions you operate in, you may have to worry about where your cloud provider has regions available. There will be times when you need data centers in multiple jurisdictions, and no cloud provider is present in all of them. In those cases, you don't have much choice: you need to be multicloud.

If you need to use multiple cloud providers, either for legal reasons or because of mergers, it's best to focus on making multicloud management as painless as possible - while still accepting that it will not be as smooth an experience as running all workloads in a single cloud.

When multicloud is a solid strategic move

There are also some very good reasons to adopt a multicloud approach on purpose. For example:

Facilitating engineering speed. Engineering speed is paramount. Companies that focus on using multicloud to become cloud-agnostic and avoid lock-in often sacrifice development velocity. If the organization instead uses multicloud to let everyone on the team work with the tools and environments they are most comfortable with, it can increase velocity.

Best-of-breed tooling. The cloud providers' offerings are not totally identical. Multicloud can let organizations use the best tools from each cloud provider; for example, machine learning technology from Google, compute from AWS and business intelligence from Azure.

Better customer experience. Especially if latency is a big concern, a multicloud approach can help organizations get physically closer to users, providing a better customer experience.

There is a common thread here: When evaluating whether or not you have a legitimate need for multicloud, the key question is whether you will be fulfilling a psychological need, a theoretical need (in other words, something that you do not actually need at this moment but might at some future date) or a concrete legal or technical need that you can easily describe. If there are no concrete legal, technical or business reasons for pursuing multicloud, don't. There are costs to pursuing multicloud, so organizations that won't actually reap benefits from it shouldn't attempt it.

How to succeed

Multicloud is hard. Here's how to increase your chances of success so you can get the business benefits of better tooling or better customer experiences from your multicloud deployments.

Use a cloud management platform. Getting a single pane of glass for all your deployments is critical to managing multiple clouds without pulling your hair out. This abstraction, however, comes at a cost: support for very deep, cloud provider-specific use cases - using obscure service X, for example - may be limited.

Standardize workflows with Terraform and Ansible. You need to have processes that are as repeatable and predictable as possible. Even with standardized workflows, you still have to adjust for each environment, but the more standardization the better.
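
As a rough sketch, a standardized provision-then-configure run could look like the commands below; the directory layout, inventory and playbook names are hypothetical:

(cd terraform/aws && terraform init && terraform apply)    # provision the AWS environment
(cd terraform/gcp && terraform init && terraform apply)    # provision the GCP environment
ansible-playbook -i inventories/aws site.yml               # apply the same configuration everywhere
ansible-playbook -i inventories/gcp site.yml

The per-cloud Terraform code will differ, but identical entry points keep the workflow repeatable.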

Use Kubernetes. Kubernetes lets you abstract the infrastructure layer and move workloads around more easily, even across clouds. The caveat here is that you have to adopt microservices and deal with the increased complexity of managing Kubernetes and its underlying infrastructure.
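
For illustration, deploying the same manifest to clusters in two different clouds is then mostly a matter of switching kubectl contexts (the context names and app.yml below are hypothetical):

kubectl config use-context aws-cluster && kubectl apply -f app.yml
kubectl config use-context gcp-cluster && kubectl apply -f app.yml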

Each of the above approaches has its merits and flaws. They are not mutually exclusive, and you should not treat them like they are. Be prepared to mix and match depending on your needs.

Weigh the costs and benefits

Sometimes companies are not realistic about their technical capacity or what their needs are. They want the ability to do the latest thing they read about without thinking about whether or not that technology or approach will provide business benefits.

Multicloud adds a huge amount of complexity to your deployment. If multicloud truly is essential, however, use tools and platforms that hide that complexity from users and can simplify the management process.

Get started with CloudSigma in Mist


Our recent Mist v4.4 release includes a brand new integration with CloudSigma. The goal is to help CloudSigma users manage their multicloud and hybrid setups from a single pane of glass. Even if you use only CloudSigma, you can still get value. You can connect multiple accounts across regions to Mist and manage them centrally.

Specifically for CloudSigma, with Mist you can:

  • Get a list of your VMs. Listings include all metadata, e.g. public & private IPs, runtime information, attached drives, etc.
  • Start, stop, reboot and delete VMs.
  • Create new VMs.
  • List, create and destroy volumes.
  • Attach volumes to VMs and detach them.

You can also leverage features that are common to all our supported clouds, e.g. tags, ownership metadata, expiration dates, cost quotas, role-based access controls, shell, scripts, orchestration, monitoring, rules, audit logs etc.

How to connect CloudSigma to Mist

Adding a CloudSigma cloud to Mist

You can connect CloudSigma to Mist in five simple steps:

  1. Log in to Mist.
  2. Go to Mist's add cloud form at https://mist.io/clouds/+add and click the CloudSigma logo.
  3. Type a name for your cloud, and put your CloudSigma username and password in the respective fields.
  4. Choose your region from the dropdown menu.
  5. Click the ADD CLOUD button.

In a few seconds, Mist will auto-discover the resources in your account and will show you a cost estimate for them. You are ready to go!

Try it yourself

The easiest way to try this is by signing up for a Mist Hosted Service account. All new users get a 14-day free trial.

If you are interested in our open source Community Edition, you can get the latest version from Mist's GitHub repository.

For a guided tour of the entire Mist platform, please reach out to demo@mist.io.

The True Meaning of "Open Cloud"

By Christos Psaltis

Originally published at The New Stack.


What exactly does open mean when it's applied to the cloud? In the modern software engineering world, there's a tendency to value "openness" over all else, and sometimes we act as if the "open" in open source can be just as easily applied to other parts of the engineering stack.

But open source and open clouds are totally different concepts, even if they both contain the word "open." Even if a cloud provider claims to have an open cloud, it should be obvious to everyone that the cloud's infrastructure is not maintained by a global community of volunteers working without pay, the way open source software is.

Cloud "openness" is often misunderstood, sometimes because companies intentionally mislead us about what is and is not open, sometimes because as an industry we try to oversimplify very complex offerings and technology. Word choice hurts us, too. The linguistic similarities with "open source", and the fact that "open" and "closed" are usually presented as binary options rather than a spectrum, make it easier to misconstrue the realities of an "open" cloud.

Oversimplifying and/or misunderstanding what openness really means in the context of a cloud environment can lead organizations to make poor technology choices, wasting time and money. Here's what organizations should consider when evaluating how open a cloud is - and whether or not that even matters.

"Open" is a spectrum

The openness of any platform, cloud or service is a measure of how locked in customers are, which itself is little more than a calculation of how much it would cost, in terms of time, money and headache, to migrate off of the platform or cloud.

Cloud providers might talk about being "open", but there are no completely open clouds. After all, while talking about how open their clouds are, all the cloud providers charge for egress traffic.

This can't be just about their own cost of maintaining the underlying technology: ingress traffic is free. If the dedication to a truly open cloud were real, moving out would be just as free as moving in.

One of the problems with the term "open cloud" is that it encourages people to think of "openness" as a binary: either a cloud is open or it is closed. But openness is a spectrum, not a binary. The extremes of completely open or completely closed don't exist: Migration costs are never zero. It is also never impossible to migrate off a cloud or platform, though it can be extremely expensive.

What factors make a cloud more or less open

So how do we evaluate where on the openness spectrum a particular cloud is? The most open of open clouds will always have the following characteristics:

  • Be built on open source
  • Facilitate data openness, including tools that make it easier to access, process and move data
  • Use open APIs, standard interfaces and open standards

However, even a cloud that meets all of those requirements is not fully "open", in other words, the cost of migrating off that cloud would not be zero. Here are the factors that organizations should evaluate to see where a particular cloud or other platform falls on the openness spectrum:

How much glue is holding the open source components together? Just because a cloud is based on open source does not mean that creating the same experience or functionality elsewhere is easy. There are always custom, proprietary scripts holding everything together and making the open source software easier to use and more reliable. The more proprietary glue, the less open the cloud.

Data portability. Data has gravity. Moving data can be both time-consuming and costly. How easy it is to move data out of a cloud environment is one of the most important factors when determining how open the cloud is.

Additional services. All of the cloud providers offer all kinds of add-on services, from secrets management to monitoring and logging. Each service you use increases your lock-in, making it harder for you to migrate elsewhere. The more proprietary services you use, the less open the cloud is. Some services will be more open than others, and it's useful to evaluate openness not just of the entire cloud provider but also of each individual service the organization uses.

What skills are required? Lastly, there is a skills component to cloud openness. Not all organizations have highly skilled engineering teams. The more sophisticated your team, the easier it is to move away from any particular cloud and the less reliant the organization is on managed services.

Making better choices

When organizations buy into the idea of a binary open/closed cloud, they are skipping over all the details about what makes a cloud more or less open, and ultimately failing to thoroughly evaluate what is and is not important for their organization. Good decision making always requires a complete understanding of both the available options as well as the organization's strengths, weaknesses and priorities.

It would be a mistake to assume that the more open a cloud is the better. For many - probably most - organizations, worrying about cloud lock-in is a distraction that can prevent the engineering team from taking advantage of the cloud's agility, speed and cost savings. Sure, a few companies might derive a competitive advantage from infrastructure management, but they are rare. In most cases, organizations that insist on making their cloud environment as open as possible find themselves spending valuable engineering time managing open source software and rolling their own services that could be purchased off-the-shelf from the cloud provider.

Instead of thinking about whether or not a cloud is open, engineering leaders should evaluate their organization's priorities: Where is openness important? Where is it not? How well do the services consumed from the cloud provider fit those priorities? That is a better place to start when evaluating which cloud provider has the right balance of open source, proprietary programs, out-of-the-box services and barriers to migrating away, and it will ultimately lead to better cloud-related business decisions.

Mist joins the Cloud Native Landscape by CNCF

By Dimitris Moraitis

Mist has joined the Cloud Native Landscape by CNCF. You can find us under the Provisioning, Automation & Configuration section.

The Cloud Native Landscape by CNCF

The Landscape visualizes the cloud native space. Although it is hard to accurately map such a complicated ecosystem, this curated list can help newcomers and veterans alike. Veterans can use it to discover tools they did not yet know about. Newcomers can quickly drill down to their area of interest and begin their exploration from there.

To learn more about the Landscape and how to navigate it, we suggest you begin with this post by Catherine Paganini in The New Stack. It is the first in a series of articles that drill down into the details of each Landscape layer.

If you would like to see firsthand how Mist can help with your cloud native needs, sign up for a Mist Hosted Service account or reach out to demo@mist.io for a guided tour.

New in Mist v4.4

Today we are announcing the release of the Mist Cloud Management Platform v4.4.

Mist v4.4 brings a lot of updates to supported clouds and a brand new implementation of the web shell. It includes several UI/UX enhancements coupled with an upgrade to Polymer v3 and Web Components v1. Finally, it introduces role-based access controls on images and constraints on machine sizes.

New web shell window and script editor

This will be the last release in the 4.X line. We are already working on Mist v5, which will introduce a redesigned RESTful API, a brand new CLI and a number of big new features. To save you from waiting, we are shipping previews of both today. Stay tuned for a dedicated post with more information.

For a quick overview of the entire Mist platform you can watch a 20-minute video demo on YouTube.

Cloud integrations

This release adds support for:

  • CloudSigma. This is totally new and allows you to manage machines and volumes on CloudSigma.
  • Linode v4 API. All new Linode clouds in Mist will require a v4 Linode API token. Check out Linode's documentation on how to get an API token. Existing clouds connected to Mist with API v3 tokens will continue to work.
  • Linode volumes. You can now list, create, delete, attach and detach block storage volumes.
  • DigitalOcean power cycle. In some cases, your Droplets may end up in an error state that is impossible to recover from. Power cycle will perform a hard shutdown and will then power on your Droplet again.
  • vSphere rename and clone machine.

New web shell

The web shell is one of our favourite Mist features. It allows us to quickly troubleshoot things from within Mist without having to share actual private keys with our team members. In fact, we use it so often that we brought the old implementation to its knees :)

For v4.4, we rewrote the backend component in Go. This improves performance and stability. On the UI front, you can now open the web shell in a new window. This way you can work in any number of shells simultaneously.

Web UI

The web UI in Mist v4.4 leverages Polymer v3 and Web Components v1. This improves browser compatibility and developer experience. You will also notice some performance improvements. More importantly, this upgrade lays the groundwork for new features coming up in v5.

Also, we introduce the Monaco Editor wherever XML and JSON are involved. Monaco also handles inline scripts and templates. With Monaco and several other minor fixes, our goal is to make the UX more uniform and friendly.

New in commercial editions

In Mist Hosted Service (HS) and Enterprise Edition (EE), you can now:

  • Set RBAC rules on images. For example, you can give your dev team access only to Debian 10 images.
  • Add constraints to machine sizes. This gives you finer, proactive control over your inventory. For example, a team can provision only machines with 2 CPU cores, 4GB of RAM and 50GB of disk.

Specifically in Mist EE, v4.4 adds authentication with Microsoft 365.

Conclusion

Mist v4.4 brings CloudSigma support, several enhancements to other clouds, a new web shell and a ton of other fixes. It introduces finer controls over images and machine sizes, and brings authentication with Microsoft 365 to Mist EE users. More importantly, it lays the groundwork for our next major release, v5.0, with an upgrade to our front-end framework and a preview of Mist's API v2 and CLI.

For a guided tour of the Mist platform, please reach out to demo@mist.io.

If you'd like to try it out for yourself, sign up for a Mist Hosted Service account and begin your 14-day free trial.

Community Edition users can get the latest version from Mist's GitHub repository.

Mist in Linode One-Click App Marketplace


We are happy to announce that Mist is now available through the Linode One-Click App Marketplace.

Linode users can spin up the open source Mist Community Edition and gain control of their multicloud infrastructure in just a few minutes!

Without further ado, let's dig into it.

Instructions

  1. First, log in to your Linode Cloud Manager account and find the Mist.io app in the Marketplace.

  2. Fill in the required options. Make sure you provide a Mist admin user email and password. We recommend the 8GB Linode size. Provisioning will take 5-15 minutes. In the meantime, you can check out a video demo of Mist.

  3. In your browser, navigate to the public IP address of the Linode you used to deploy the Mist One-Click App. You can find the IP address for your Linode on the Linode detail page in the Cloud Manager.

  4. Click SIGN IN at the top right.

  5. Fill in the Mist admin email and password you chose when you deployed your Linode. You will be redirected to Mist's main dashboard.

    Your Mist account is ready
  6. Next, click the Add your Clouds button, select Linode from the list of supported providers, enter your API key and click Add cloud. You are all set! Your Linode cloud has been added and your Linodes will be auto-discovered by Mist.

    Mist dashboard with Linode cloud added

Follow the instructions in our docs to add more clouds.

For anything other than testing we recommend setting up a DNS name and TLS. You can find more info in Mist's README. If you would like to add more users to Mist, you need to configure your mail settings as explained here.

To learn more about how to set up Mist in Linode check out the detailed instructions in Linode's docs.

How to move files between Git repos and preserve history

While working on a multitude of open source projects we faced an interesting Git puzzle. How can you move a file between Git repos?

The easy solution is to just forget about Git history. If you'd like to preserve it, things are less straightforward. Below we will explain the problem, your options and how to apply them.

The problem

Let's assume you work on forkX, which is a fork of repo projectX. In forkX you are collaborating with your colleagues. At the same time, projectX is also moving forward. ForkX includes files with a long history of commits that are not in projectX. You would like to push some of those files to projectX.

Your current situation looks like this:

[Diagram: Git forks]

What do you do?

Option A: Delete history

The easiest option is to push the files without preserving history.

Based on the example in the graph above, add relevant remotes and fetch their current state:

git remote add projectX git@github.com:someOrg/projectX
git remote add forkX git@github.com:myOrg/forkX
git fetch --all

Now, move to a clean branch that is identical to projectX/master.

git checkout projectX/master
git checkout -b cleanBranch

Then, add your new files:

git checkout forkX/newFeature -- file_1
git add file_1

You can repeat for more files and commit:

git commit -m "Add new feature"

You are now ready to open a pull request. If you have the proper rights, you can just push with:

git push -u projectX

Other than the "Add new feature" commit, the pushed files will have no history upstream. The person who pushed will appear as the sole contributor.

This option is simple but will not work well when many people collaborate on the same files. You will end up erasing every contributor and commit message along the way.

Option B: Preserve history

Preserving history is a bit more involved.

Similarly to the previous option, add remotes, fetch their current state and move to a clean branch that is identical to projectX/master:

git remote add projectX git@github.com:someOrg/projectX
git remote add forkX git@github.com:myOrg/forkX
git fetch --all
git checkout projectX/master
git checkout -b cleanBranch

Then:

git merge --no-ff --no-commit forkX/newFeature

The command above will stop the merge with all the files staged and ready to be committed. For every file except the new ones, run:

git reset filePath

Be careful NOT to reset everything and then stage the new files again. When the new files are the only ones staged, commit with:

git commit -m "Add new feature"

Finally, delete the unstaged files with:

git stash && git stash drop
git clean -f

You are now ready to open a pull request. If you have the proper rights, you can just push with:

git push -u projectX

Example

At Mist.io we rely on Apache Libcloud to interface with the APIs of many cloud providers. We maintain a fork of Libcloud to better control Mist's dependencies and to develop new features before we push them upstream.

Until recently, we were maintaining a driver for vSphere only in our fork. The driver was big, complicated and introduced new dependencies, so we refrained from pushing it upstream. When we felt confident in the code, we decided to open a pull request.

The bulk of the new code was in a few files that didn't exist upstream. However, the work on these files was done, over a long period of time, by several people on our team. For this reason, we wanted to preserve the history and we ended up using option B above.

Here is an example of how the same pull request looks when pushed without history and with history.

Conclusion

In this post we went over how you can move files between Git repos. We showed two options: a simple one that deletes history and a more complicated one that preserves it. We illustrated the two options with an example from Mist's Libcloud fork and the upstream Libcloud repo.

We would love to hear your thoughts in the comments!

VMware, please improve vSphere's RESTful API


This post is meant as a warning for users who are thinking of leveraging vSphere's RESTful API. More importantly, we hope that someone on vSphere's team will read this and take action.

First, some brief history. vSphere's RESTful API, or Automation API as it is officially called, was introduced in v6.5. This initial version was lacking, but it got better in v6.7. Still, annoying issues persisted, and going into v7.0 we were expecting to see some major improvements.

In Mist v4.3 we extended our support to vSphere v7.0. We were also planning to port our integration from the old SOAP API to the RESTful one. Unfortunately, the transition was impossible due to several issues.

In summary:

  1. Calls for listings of resources return only a limited number of items and have no pagination.
  2. You can't use VM templates that reside in folders.
  3. There is no way to resize disks.
  4. UIDs are returned only for VMs, not for other resources, e.g. clusters, hosts, datastores.
  5. There is no information about hierarchical structures.
  6. You need to juggle multiple requests to get the IP of a VM.

Keep on reading for more details.

1. Pagination

Automation API responses lack pagination and have a hard limit on the number of returned items. For example, GET /rest/vcenter/vm returns up to 4,000 VMs (reference). In v6.7-U3 the limit used to be 1,000.

What do you do if you have more than 4,000 VMs?

You first need to get all hosts with GET /rest/vcenter/host and then do a separate request for each one with GET /rest/vcenter/vm?filter.hosts={{host-id}}.

Keep in mind that GET /rest/vcenter/host has a hard limit of 2,500 hosts (reference). If you have more, you need an additional layer of nesting, e.g. get all datacenters, then loop over hosts and then loop over machines. Iterating like this adds complexity and slows down code execution.
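
As a minimal sketch of this workaround with curl and jq, where vcenter.example.com is a placeholder and $TOKEN holds a session ID obtained from POST /rest/com/vmware/cis/session:

for host in $(curl -s -H "vmware-api-session-id: $TOKEN" \
    "https://vcenter.example.com/rest/vcenter/host" | jq -r '.value[].host'); do
  # fetch the VMs of each host separately to stay under the 4,000-item limit
  curl -s -H "vmware-api-session-id: $TOKEN" \
      "https://vcenter.example.com/rest/vcenter/vm?filter.hosts=$host"
done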

2. VM templates in folders

The Automation API supports only VM templates in Content Libraries (reference). It totally ignores templates in folders. No call returns them and there is no way to perform any action, e.g. move them to a Content Library.

This is a surprising omission, especially if you consider that VM templates are commonly used to deploy new machines. The only thing you can do is move your templates to a Content Library before using the Automation API.

3. Disk resize

There is no call to resize a disk. You can change the number of CPUs and the size of RAM, but not the disk. To add more disk space to a machine, your only option is to create a new disk and attach it. You are also unable to copy data between disks.

Bottom line: if you need disk resizing, stick to the SOAP API.

4. UIDs and MoIDs

Starting in v7.0, the Automation API returns UIDs of VMs. For other types of resources you have to settle for Managed Object Identifiers (MoIDs). The problem is that MoIDs are not unique across vCenters.

This seems like a small fix, since the information is already there. We hope it will be available soon. Until then, be careful with IDs when you manage infrastructure on multiple vCenters.

5. Hierarchical structures

Several objects in vSphere have a hierarchical structure. For example, datacenter->cluster, cluster->host, host->VM, folder->VM etc. Information about such structures is totally absent from API responses. To recreate it, you need to loop through all sorts of lists.

Let's assume you want to find the folder in which a VM resides:

  1. Get all folders with GET /rest/vcenter/folder. Notice that there is a limit of 1,000 returned items and that the response includes only folder name, MoID and type (reference).
  2. For each folder, do GET /rest/vcenter/vm?filter.folders={folder-id}.
  3. Check if your VM is included in the response.

Such representations are useful very often and it's hard to justify why they are not part of the API already.
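
A rough sketch of that loop with curl and jq, where vcenter.example.com is a placeholder, $TOKEN holds a session ID and $VM is the MoID of the VM you are looking for:

for folder in $(curl -s -H "vmware-api-session-id: $TOKEN" \
    "https://vcenter.example.com/rest/vcenter/folder" | jq -r '.value[].folder'); do
  # list the VMs in this folder and check whether ours is among them
  if curl -s -H "vmware-api-session-id: $TOKEN" \
      "https://vcenter.example.com/rest/vcenter/vm?filter.folders=$folder" \
      | jq -e --arg vm "$VM" '.value[] | select(.vm == $vm)' > /dev/null; then
    echo "VM $VM is in folder $folder"
  fi
done
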
6. VM IPs

If you want to get the IP of a machine, you need to:

  1. GET /rest/vcenter/vm/{vm-id} and read the NICs part. There you will find MAC addresses (reference).
  2. GET /rest/appliance/networking/interfaces for a list of all available interfaces. This is the only call that returns IP information (reference).
  3. Search through the list of interfaces for the relevant MAC address and get the IP you need.

One would expect the IP to be available from the first call, or at least a simpler way to cross-reference items between calls.
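
Following the steps above with curl and jq, a rough sketch looks like this; the JSON field paths are our assumptions, so double-check them against the responses you actually get:

MAC=$(curl -s -H "vmware-api-session-id: $TOKEN" \
    "https://vcenter.example.com/rest/vcenter/vm/$VM" \
    | jq -r '.value.nics[0].value.mac_address')   # step 1: grab a MAC address
curl -s -H "vmware-api-session-id: $TOKEN" \
    "https://vcenter.example.com/rest/appliance/networking/interfaces" \
    | jq -r --arg mac "$MAC" \
        '.value[] | select(.mac_address == $mac) | .ipv4.address'   # steps 2-3: match it to an IP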

Conclusion

In this post we went over the issues we had with vSphere's Automation API. We also suggested some workarounds wherever possible.

Having to build and maintain integrations with more than 20 infrastructure platform APIs, we are accustomed to idiosyncrasies and annoying issues. Unfortunately, in the case of the Automation API the issues were too many to deal with.

Our hope is that future releases will solve these issues and the Automation API will become a first-class citizen in vSphere's suite.

KubeVirt, LXD, KVM, vSphere, G8 and LDAP/AD updates in Mist v4.3

Today we are proud to announce Mist v4.3, the latest release of the Mist Cloud Management Platform.

Mist v4.3 brings brand new support for LDAP, Active Directory, KubeVirt, LXD and GiG G8. For VMware users, Mist v4.3 supports vSphere v7.0, VNC console access, Content Libraries and more storage options. KVM support has been enhanced and you can now create simple private clouds by grouping together your KVM hosts. Finally, we are rolling out custom pricing policies. For example, you are now able to define a cost for your VMware instances running on private metal.

We will go over all these updates in more detail below.

Manage KubeVirt, LXD and G8


In Mist v4.3 you can now manage KubeVirt, LXD and G8 alongside public & private clouds, hypervisors, Docker and bare metal servers. All of this from a single pane of glass, with all the tools needed to implement self-service workflows in a controlled way. With these additions, Mist currently supports 20 different infrastructure platforms.

For those of you who are not familiar with KubeVirt, LXD or G8:

  • KubeVirt blurs the boundaries between containers and VMs. It allows you to run VM-based workloads inside Kubernetes clusters and treat them like containers. This is very helpful when you need to run VMs, alongside containers, without going through a full migration. Check out our documentation here on how to get started with KubeVirt in Mist.
  • LXD is an open source technology for Linux Containers (LXC) that predates Docker. Like Docker, with LXD you are able to build and deploy lightweight images that boot up in seconds. Unlike Docker, LXD has a more robust security model. Containers can run in rootless mode, which is still experimental in Docker. Network and storage stacks are not shared between LXD containers. This gets you closer to a standalone OS and makes LXD ideal as a migration target for traditional VM workloads. Check out our documentation here on how to get started with LXD in Mist.
  • GiG G8 is a private cloud platform. G8 allows you to deploy nodes on premises, edge and/or public locations while getting a uniform experience across all. Check out our documentation here on how to get started with G8 in Mist.

KVM cloud

Mist v4.3 allows you to create KVM clouds by grouping together your KVM hosts. You are able to provision, list and send actions to your guests. This is ideal for users who want to reduce licensing costs, e.g. when compared to vSphere, while keeping complexity low, e.g. when compared to OpenStack.

In terms of other new features and enhancements:

  • You can now connect to guests over a VNC console through Mist's web interface.
  • You have access to more metadata regarding VMs and hosts.
  • You can assign Virtual Network Functions when provisioning machines, leveraging SR-IOV when available.

Authentication with LDAP and Active Directory

Users of Mist Enterprise Edition can now authenticate with LDAP and Active Directory. Both options increase the overall security of your organization, as you can centrally manage user access.

In the case of Active Directory, Mist will check which AD group the user who is trying to log in belongs to. If that group exists in Mist as a team, the user will be allowed to log in and will only be able to perform the actions allowed to that Mist team.

New features for vSphere

With Mist v4.3 our goal is to offer a user-friendly alternative to native management tools - specifically, one that is ideal for hybrid and multicloud setups. In this context, our latest release:

  • Widens support to vSphere versions from v4.0 up to v7.0.
  • Allows you to provision VMs using content libraries.
  • Gives you the option to choose datastores and folders when provisioning VMs.
  • Lets you access your VMs through a VNC console in Mist.

Pricing policies

Users of Mist's Enterprise Edition can now define custom pricing policies for any cloud they manage. For example, you can set a price per CPU, GB of RAM and GB of disk for all your VMware machines. Pricing policies can also be applied to public clouds, e.g. if you'd like to mark your costs up or down. Finally, you can define your policy for machines that are turned off: do you still charge for them, and how much?
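
For example, with hypothetical rates of $5 per CPU core, $2 per GB of RAM and $0.10 per GB of disk per month, a VM with 2 cores, 4GB of RAM and a 50GB disk would be charged 2×$5 + 4×$2 + 50×$0.10 = $23 per month.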

These options bring a lot of flexibility to MSPs who want to resell infrastructure, as well as enterprises looking to implement a fine-grained chargeback policy.

This feature is still under heavy development and we would love to hear your thoughts. Please reach out to demo@mist.io to arrange a demo.

Other updates in Mist v4.3

Besides the above, Mist v4.3 includes several fixes and improvements. Most notably:

  • Add a process pool option to update machines in parallel and avoid unnecessary DB updates.
  • Support listing and selecting security groups on machine creation for AWS.
  • Show more info about DigitalOcean sizes, like in DigitalOcean's portal.
  • Update AWS, Microsoft Azure, Google Cloud and Packet cost estimations.

Conclusion

Mist v4.3 focuses on integrations with KubeVirt, LXD, G8, LDAP and Active Directory. It also brings major improvements for KVM and vSphere. Finally, it introduces pricing policies, fixes several bugs and improves the web UI.

For a guided tour of the Mist platform, please reach out to demo@mist.io and one of our engineers will give you an overview.

If you'd like to try it out for yourself, sign up for a Mist Hosted Service account and begin your 14-day free trial.

Community Edition users can get the latest version and relevant instructions on GitHub.

Comparing clouds: Billing for stopped machines


Public clouds have grown considerably in size, complexity and sheer number of features. This makes it hard to answer even simple questions, especially when you are trying to compare clouds. We hear such questions daily so we decided to do something about it. This is the first in a series of posts that compare clouds on a number of practical issues.

One of the questions we hear very often is some variation of the following:

Does my cloud bill me for stopped machines, aka instances, linodes, droplets etc?

The reasoning behind this question is quite simple. If I stop a machine, it means I'm not using it so I assume my cloud will not bill me for it. After all, public clouds are all about elasticity. If this is the case, then I could save a lot of money by stopping machines when they are not needed.

Unfortunately, things are not very straightforward.

Let's go over, in alphabetical order, what is happening.

Alibaba ECS - bills for stopped machines: Yes (by default)

Instances are billed per second.

You can avoid billing for stopped instances that are connected to a VPC and don't have local disks, but user action is required to enable this. If you turn the feature on and stop an instance, you will still be billed for any of the following that apply:

a) attached block storage
b) associated elastic IPs
c) bandwidth
d) images

For more details, check the official documentation for PAYG pricing here and specifically for stopped instances here.

Amazon EC2 - bills for stopped machines: No

Linux instances are billed per second with a 60-second minimum. All others are billed per hour.

When you stop an instance, you will be billed for any of the following that apply:

a) attached block storage
b) associated elastic IPs

For more details, check the official documentation here and "Billing and purchase options" in this FAQ.

DigitalOcean - bills for stopped machines: Yes

Droplets are billed per hour.

Check the relevant answers in their pricing FAQ.

Google Compute Engine - bills for stopped machines: No

Instances are billed per second with a 60-second minimum. Some premium images follow a different model.

When you stop an instance, you will be billed for any of the following that apply:

a) attached persistent storage
b) local SSDs
c) associated static IPs

For more details, check the official documentation here.

IBM Cloud - bills for stopped machines: No

Public Virtual Servers are billed per hour.

IBM offers "Suspended Billing". Servers created after Nov 1st 2018 include it; most servers created before that date don't.

If suspended billing is available and you stop a server, you will still be charged for any of the following that apply:

a) storage
b) secondary public IP address

For more details, check the official documentation here.

Linode - bills for stopped machines: Yes

Linodes are billed per hour.

Check the relevant answers in their pricing FAQ.
Microsoft Azure - bills for stopped machines: Maybe

Virtual Machines are billed per second, but only full minutes are charged: the documentation specifically mentions that a machine running for 6 minutes and 45 seconds will be charged for 6 minutes.

If the machine status is "Stopped Deallocated", you are not billed. If it is "Stopped" or "Stopped Allocated", you are billed for the allocated virtual cores but not for software licenses. Full details on virtual machine states are available here.

To get to the "Stopped Deallocated" state, you have to stop the machine from within Azure's management portal or over the API using a specific deallocation parameter. If you stop the machine from within the OS, it will go into the "Stopped Allocated" state.

If you manage to get to the "Stopped Deallocated" state, keep in mind that you are still billed for any of the following that apply:

a) attached Premium (SSD-based) disks
b) attached Standard (HDD-based) disks
c) in the ARM deployment model, static public IP addresses, unless they are among the first five in the region. Read more regarding IPs under the FAQ section at the bottom of this page.

For even more details, check the FAQ at the bottom of this page. The URL ends with /linux but you will also find the same FAQ under /windows…
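
With the Azure CLI, for example, the difference looks like this (the resource group and VM names are hypothetical):

az vm stop -g myResourceGroup -n myVM          # "Stopped": cores remain allocated and billed
az vm deallocate -g myResourceGroup -n myVM    # "Stopped Deallocated": compute billing stops
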
Vultr - bills for stopped machines: Yes

Vultr cloud instances are billed per hour.

Check the relevant answers in their pricing FAQ.

For easier reference, you can also view the comparison above as a table in Google Docs.

The comparison includes only services that offer cloud machines. There are also a number of services that offer dedicated hosts and/or bare metal servers. We didn't include them because they are inherently different and, as expected, they charge you regardless of machine state.

Also, please keep in mind that the comparison refers to pay-as-you-go (PAYG) pricing. Alibaba, Amazon, Google, IBM and Microsoft offer reserved and spot pricing as well. With reserved pricing, you will be billed even if you don't use your reserved capacity. In spot, stopping a machine will usually release it and return it to the pool. Billing stops at that point, but you can no longer use the machine. This is what happens in Amazon, Google and Azure. In Alibaba and IBM, stopping a spot instance will not release it, but you will continue to incur charges until they either claim it back or you release it yourself.

As if things were not complicated enough, you also need to take special usage discounts into account. Such discounts include:

  • Alibaba subscriptions
  • Amazon Savings Plans
  • Google committed-use and sustained-use discounts

In the case of Alibaba subscriptions, things are rather simple. When you buy a subscription you pay a discounted price upfront for the entire billing cycle. Changing the status of the machine won't save you anything.

With Amazon Savings Plans, things are simple too. You commit to a certain usage level over a 1- or 3-year term and get a discount. If you use it, you're good. If you don't use it, you still pay for it.

Google's committed-use discounts are very similar to Amazon Savings Plans.

Google's sustained-use discounts are more complicated. First of all, Google follows an approach they call resource-based pricing. In this model, the base price of a machine is tied to the underlying resources it is using (vCPUs and memory). If during your billing cycle you continue to run the same total amount of resources, you gradually earn a discount that increases over time. This is the sustained-use discount. The discount is independent of the actual machines you run; it ties only into the total amount of resources used. It also doesn't increase linearly over time. To understand it better, we strongly recommend reading the documentation pages linked above.

Having said all of the above, let's restate the initial question:

Will I save money if I stop my cloud machines when they are not in use?

The answer depends on a number of factors. To get to the bottom of this you need to:

  1. Check if your service will charge you for stopped machines and how.
  2. Check your reservations and long term commitments.
  3. Don't take spot into account.
  4. If you are using Google Compute Engine, do the math for the sustained-use discount.

All this might sound disheartening, but you could potentially save a lot of money. Just to get a sense of the ROI, one of our customers was recently able to reduce a 5-digit monthly bill for dev infrastructure by 50%. They did it by automatically tagging machines upon provisioning and then setting a schedule to stop them outside business hours.
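
To put rough, hypothetical numbers on this: a $0.10/hour machine left running 24/7 costs about $73 in a 730-hour month. Stopped outside a 12-hour weekday window, it runs roughly 260 hours, or about $26 - a reduction of more than 60%, before accounting for any storage and IPs that keep billing while the machine is stopped.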

Bottom line: the effort is well justified. Do your research and good luck!

This post is the first part in a series comparing clouds. Stay tuned for more.
