Mist in Linode One-Click App Marketplace

Linode | Jan 21st, 2021 | one-click apps

We are happy to announce that Mist is now available through the Linode One-Click App Marketplace.

Linode users can spin up the open source Mist Community Edition and gain control of their multi-cloud infrastructure in just a few minutes!

Without further ado, let's dig in.

Instructions

  1. First, log in to your Linode Cloud Manager account and find the Mist.io app in the Marketplace.

  2. Fill in the required options. Make sure you provide a Mist admin user email and password. We recommend the 8GB Linode size. Provisioning will take 5-15 minutes. In the meantime, you can check out a video with some of the cool things you can do with Mist!

  3. In your browser, navigate to the public IP address of the Linode you used to deploy the Mist One-Click App. You can find the IP address on the Linode detail page in the Cloud Manager.

  4. Click SIGN IN at the top right.

  5. Fill in the Mist admin email and password you chose when you deployed your Linode. You will be redirected to Mist's main dashboard.

    Your Mist account is ready
  6. Next, click the Add your Clouds button, select Linode from the list of supported providers, enter your API key and click Add cloud. You are all set! Your Linode cloud has been added and your Linodes will be auto-discovered by Mist.

    Mist dashboard with Linode cloud added

    You can continue adding more clouds following the instructions in our docs.

  7. Now, you need to set Mist's CORE_URI. Create an SSH connection to your Linode using the public IPv4 address and the root password you set up prior to the app creation. Go to Mist's installation folder and edit ~/mist/settings/settings.py. Set the CORE_URI value to either your Linode's public IPv4 address or a DNS name. For anything other than testing, we recommend setting up a DNS name and TLS. You can find more info in Mist's README.

  8. Once set, restart the Docker Compose services with docker-compose restart.

  9. If you would like to add more users to Mist, you need to configure your mail settings as explained here.
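For reference, after step 7 the CORE_URI line in settings.py might look like the following. The hostname here is a placeholder; use your own DNS name or your Linode's public IPv4 address:

```python
# ~/mist/settings/settings.py (sketch showing only the CORE_URI line).
# Prefer a DNS name with TLS for anything beyond testing.
CORE_URI = "https://mist.example.com"
```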

To learn more about how to set up Mist in Linode check out the detailed instructions in Linode's docs.

How to move files between Git repos and preserve history

While working on a multitude of open source projects, we faced an interesting Git puzzle: how do you move a file between Git repos?

The easy solution is to simply forget about Git history. If you'd like to preserve it, things are less straightforward. Below we will explain the problem, your options and how to apply them.

The problem

Let's assume you work on forkX, which is a fork of repo projectX. In forkX you are collaborating with your colleagues. At the same time, projectX is also moving forward. ForkX includes files with a long history of commits that are not in projectX. You would like to push some of those files to projectX.

Your current situation looks like this:

Git forks

What do you do?

Option A: Delete history

The easiest option is to push the files without preserving history.

Based on the example in the graph above, add relevant remotes and fetch their current state:

git remote add projectX git@github.com:someOrg/projectX
git remote add forkX git@github.com:myOrg/forkX
git fetch --all

Now, move to a clean branch that is identical to projectX/master.

git checkout projectX/master
git checkout -b cleanBranch

Then, add your new files:

git checkout forkX/newFeature -- file_1
git add file_1

You can repeat for more files and commit:

git commit -m "Add new feature"

You are now ready to open a pull request. If you have the proper rights, you can just push with:

git push -u projectX cleanBranch

Other than the "Add new feature" commit, the pushed files will have no history upstream. The person who pushed will appear as the sole contributor.

This option is simple, but it will not work well when many people have collaborated on the same files. You will end up erasing every contributor and commit message along the way.

Option B: Preserve history

If you want to preserve history, things are more complicated.

Similarly to the previous option, add remotes, fetch their current state and move to a clean branch that is identical to projectX/master:

git remote add projectX git@github.com:someOrg/projectX
git remote add forkX git@github.com:myOrg/forkX
git fetch --all
git checkout projectX/master
git checkout -b cleanBranch

Then:

git merge --no-ff --no-commit forkX/newFeature

The command above will stop the merge with all the files staged and ready to be committed. For every file except the new ones, run:

git reset filePath

Be careful NOT to reset everything and then stage the new files again. Once the new files are the only ones staged, commit with:

git commit -m "Add new feature"

Finally, discard the unstaged changes and remove any untracked files with:

git stash && git stash drop
git clean -f

You are now ready to open a pull request. If you have the proper rights, you can just push with:

git push -u projectX cleanBranch

Example

At Mist.io we rely on Apache Libcloud to interface with the APIs of many cloud providers. We maintain a fork of Libcloud to better control Mist's dependencies and to develop new features before we push them upstream.

Until recently, we maintained a driver for vSphere only in our fork. The driver was big and complicated and introduced new dependencies, so we refrained from pushing it upstream. When we felt confident about the code, we decided to open a pull request.

The bulk of the new code was in a few files that didn't exist upstream. However, the work on these files was done, over a long period of time, by several people on our team. For this reason, we wanted to preserve the history and we ended up using option B above.

Here is an example of how the same pull request looks when pushed without history and with history.

Conclusion

In this post we went over how you can move files between Git repos. We showed two options: a simple one that deletes history and a more involved one that preserves it. We illustrated both with an example from Mist's Libcloud fork and the upstream Libcloud repo.

We would love to hear your thoughts in the comments!

VMware, please improve vSphere's RESTful API

This post is meant as a warning for users who are considering vSphere's RESTful API. More importantly, we hope that someone on vSphere's team will read it and take action.

First, some brief history. vSphere's RESTful API, officially called the Automation API, was introduced in v6.5. This initial version was lacking, but it got better in v6.7. Still, annoying issues persisted, and going into v7.0 we were expecting some major improvements.

In Mist v4.3 we extended our support to vSphere v7.0. We were also planning to port our integration from the old SOAP API to the RESTful one. Unfortunately, the transition was impossible due to several issues.

In summary:

  1. Calls for listings of resources return only a limited number of items and have no pagination.
  2. You can't use VM templates that reside in folders.
  3. There is no way to resize disks.
  4. UIDs are returned only for VMs but not for other resources e.g. clusters, hosts, datastores etc.
  5. There is no information about hierarchical structures.
  6. You need to juggle multiple requests just to get the IP of a VM.

Keep on reading for more details.

1. Pagination

Automation API responses lack pagination and have a hard limit on the number of returned items. For example, GET /rest/vcenter/vm returns up to 4,000 VMs (reference). In v6.7-U3 the limit used to be 1,000.

What do you do if you have more than 4,000 VMs?

You first need to get all hosts with GET /rest/vcenter/host and then do a separate request for each one with GET /rest/vcenter/vm?filter.hosts={{host-id}}.

Keep in mind that GET /rest/vcenter/host has a hard limit of 2,500 hosts (reference). If you have more, you need an additional layer of nesting, e.g. get all datacenters, then loop over hosts and then over machines. Iterating like this adds complexity and slows down code execution.
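The per-host workaround can be sketched in Python as follows. The get helper and the field names (value, host, vm) are assumptions following the Automation API's documented response shape; treat this as an illustration rather than production code:

```python
def list_all_vms(get):
    """Work around the VM listing limit by querying per host.

    `get` is assumed to perform an authenticated GET against the
    Automation API and return the parsed JSON body, shaped like
    {"value": [...]}.
    """
    vms = []
    for host in get("/rest/vcenter/host")["value"]:
        # One extra request per host, since there is no pagination.
        vms.extend(get(f"/rest/vcenter/vm?filter.hosts={host['host']}")["value"])
    return vms
```

If you also exceed the 2,500-host limit, wrap the loop above in an outer loop over datacenters in the same fashion.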

2. VM templates in folders

The Automation API supports only VM templates in Content Libraries (reference). It totally ignores templates in folders. No call returns them and there is no way to perform any action on them, e.g. move them to a Content Library.

This is a surprising omission, especially if you consider that VM templates are commonly used to deploy new machines. The only thing you can do is move your templates to a Content Library before using the Automation API.

3. Disk resize

There is no call to resize a disk. You can change the number of CPUs and the size of RAM, but not the disk. To add more disk space to a machine, your only option is to create a new disk and attach it. You are also unable to copy data between disks.

Bottom line, if you need disk resizing stick to the SOAP API.

4. UIDs and MoIDs

Starting in v7.0, the Automation API returns UIDs for VMs. For other types of resources you have to settle for Managed Object Identifiers (MoIDs). The problem is that MoIDs are not unique across vCenters.

This seems like a small fix, since the information is already there. We hope it will be available soon. Until then, be careful with IDs when you manage infrastructure on multiple vCenters.

5. Hierarchical structures

Several objects in vSphere have a hierarchical structure. For example, datacenter->cluster, cluster->host, host->VM, folder->VM etc. Information about such structures is totally absent from API responses. To recreate it, you need to loop through all sorts of lists.

Let's assume you want to find the folder in which a VM resides:

  1. Get all folders with GET /rest/vcenter/folder. Notice that there is a limit of 1,000 returned items and the response includes only folder name, MoID and type (reference).
  2. For each folder do GET /rest/vcenter/vm?filter.folders={folder-id}.
  3. Then check if your VM is included in the response.

Such representations are needed very often and it's hard to justify why they are not already part of the API.
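The lookup above can be sketched in Python like this, assuming an illustrative get(path) helper that performs an authenticated GET and returns the parsed JSON body:

```python
def find_vm_folder(get, vm_id):
    """Return the MoID of the folder that contains vm_id, or None.

    One listing call per folder is required, since folder membership
    is not part of the VM representation.
    """
    for folder in get("/rest/vcenter/folder")["value"]:
        vms = get(f"/rest/vcenter/vm?filter.folders={folder['folder']}")["value"]
        if any(vm["vm"] == vm_id for vm in vms):
            return folder["folder"]
    return None
```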

6. VM IPs

If you want to get the IP of a machine you need to:

  1. GET /rest/vcenter/vm/{vm-id} and get the NICs part. There you will find MAC addresses (reference).
  2. GET /rest/appliance/networking/interfaces, for a list of all available interfaces. This is the only call that returns IP information (reference).
  3. Search through the list of interfaces for the relevant MAC address and get the IP you need.

One would expect the IP to be available from the first call above, or at least a simpler way to cross-reference items between calls.
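The steps above boil down to the following sketch. The get(path) helper is illustrative, and the response shapes are simplified versions of what the API returns; verify the exact field names against the API reference:

```python
def vm_ip(get, vm_id):
    """Find a VM's IP by matching its NIC MAC addresses against the
    interface list, following the three steps above.
    Response shapes are simplified for illustration."""
    vm = get(f"/rest/vcenter/vm/{vm_id}")["value"]
    macs = {nic["value"]["mac_address"] for nic in vm["nics"]}
    for iface in get("/rest/appliance/networking/interfaces")["value"]:
        if iface["mac"] in macs:
            return iface["ipv4"]["address"]
    return None
```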

Conclusion

In this post we went over the issues we had with vSphere's Automation API. We also suggested some workarounds wherever possible.

Having to build and maintain integrations with more than 20 infrastructure platform APIs, we are accustomed to idiosyncrasies and annoying issues. Unfortunately, in the case of the Automation API the issues were too many to deal with.

Our hope is that future releases will solve these issues and the Automation API will become a first-class citizen in vSphere's suite.

KubeVirt, LXD, KVM, vSphere, G8 and LDAP/AD updates in Mist v4.3

Today we are proud to announce Mist v4.3, the latest release of the Mist Cloud Management Platform.

Mist v4.3 brings brand new support for LDAP, Active Directory, KubeVirt, LXD and GiG G8. For VMware users, Mist v4.3 supports vSphere v7.0, VNC console access, Content Libraries and more storage options. KVM support has been enhanced and you can now create simple private clouds by grouping together your KVM hosts. Finally, we are rolling out custom pricing policies. For example, you are now able to define a cost for your VMware instances running on private metal.

We will go over all these updates in more detail below.

Manage KubeVirt, LXD and G8

Mist add cloud dialog

In Mist v4.3 you can now manage KubeVirt, LXD and G8 alongside public & private clouds, hypervisors, Docker and bare metal servers. All this from a single pane of glass, with all the tools to implement self-service workflows in a controlled way. With these additions, Mist currently supports 20 different infrastructure platforms.

For those of you who are not familiar with KubeVirt, LXD or G8:

  • KubeVirt blurs the boundaries between containers and VMs. It allows you to run VM-based workloads inside Kubernetes clusters and treat them like containers. This is very helpful when you need to run VMs, alongside containers, without going through a full migration. Check out our documentation here on how to get started with KubeVirt in Mist.
  • LXD is an open source technology for Linux Containers (LXC) that predates Docker. Like Docker, with LXD you are able to build and deploy lightweight images that boot up in seconds. Unlike Docker, LXD has a more robust security model. Containers can run in rootless mode which is still experimental in Docker. Network and storage stacks are not shared between LXD containers. This gets you closer to a standalone OS and makes LXD ideal as a migration target for traditional VM workloads. Check out our documentation here on how to get started with LXD in Mist.
  • GiG G8 is a private cloud platform. G8 allows you to deploy nodes on premises, edge and/or public locations while getting a uniform experience across all. Check out our documentation here on how to get started with G8 in Mist.

KVM cloud

Mist v4.3 allows you to create KVM clouds by grouping together your KVM hosts. You are able to provision, list and send actions to your guests. This is ideal for users who want to reduce licensing costs, e.g. when compared to vSphere, while keeping complexity low, e.g. when compared to OpenStack.

In terms of other new features and enhancements:

  • You can now connect to guests over a VNC console through Mist's web interface.
  • You have access to more metadata regarding VMs and hosts.
  • You can assign Virtual Network Functions when provisioning machines, leveraging SR-IOV when available.

Authentication with LDAP and Active Directory

Users of Mist Enterprise Edition can now authenticate with LDAP and Active Directory. Both options increase the overall security of your organization, as you can centrally manage user access.

In the case of Active Directory, Mist will check which AD group the user who is trying to log in belongs to. If the user's group exists in Mist as a team, the user will be allowed to log in. They will then be able to perform only the actions allowed to their Mist team.

New features for vSphere

With Mist v4.3 our goal is to offer a user-friendly alternative to native management tools. Specifically one that is ideal for hybrid and multi-cloud setups. In this context, our latest release:

  • Widens support to cover vSphere versions from v4.0 up to v7.0.
  • Allows you to provision VMs using Content Libraries.
  • Gives you the option to choose datastores and folders when provisioning VMs.
  • Lets you access your VMs over a VNC console through Mist.

Pricing policies

Users of Mist's Enterprise Edition can now define custom pricing policies for any cloud they manage. For example, you can set a price per CPU, GB of RAM and GB of disk for all your VMware machines. Pricing policies can also be applied to public clouds, e.g. if you'd like to mark your costs up or down. Finally, you can define your policy for machines that are turned off: do you still charge for them and, if so, how much?
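As a sketch, a per-resource pricing policy like the one described could be modeled as follows. The field names are purely illustrative and not Mist's actual schema:

```python
def machine_cost(cpus, ram_gb, disk_gb, policy):
    """Hypothetical sketch of a per-resource pricing policy:
    a flat rate per CPU, per GB of RAM and per GB of disk."""
    return (cpus * policy["per_cpu"]
            + ram_gb * policy["per_gb_ram"]
            + disk_gb * policy["per_gb_disk"])
```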

These options bring a lot of flexibility to MSPs who want to resell infrastructure, as well as enterprises looking to implement a fine-grained chargeback policy.

This feature is still under heavy development and we would love to hear your thoughts. Please reach out to demo@mist.io to arrange a demo.

Other updates in Mist v4.3

Besides the above, Mist v4.3 includes several fixes and improvements. Most notably:

  • Add a process pool option to update machines in parallel and avoid unnecessary DB updates.
  • Support listing and selecting security groups on machine creation for AWS.
  • Show more info about DigitalOcean sizes, like in DigitalOcean's portal.
  • Update AWS, Microsoft Azure, Google Cloud and Packet cost estimations.

Conclusion

Mist v4.3 focuses on integrations with KubeVirt, LXD, G8, LDAP and Active Directory. It also brings major improvements for KVM and vSphere. Finally, it introduces pricing policies, fixes several bugs and improves the web UI.

For a guided tour of the Mist platform, please reach out to demo@mist.io and one of our engineers will give you an overview.

If you'd like to try it out for yourself, sign up for a Mist Hosted Service account and begin your 14-day free trial.

Community Edition users can get the latest version and relevant instructions on GitHub.

Comparing clouds: Billing for stopped machines

Public clouds have grown considerably in size, complexity and sheer number of features. This makes it hard to answer even simple questions, especially when you are trying to compare clouds. We hear such questions daily so we decided to do something about it. This is the first in a series of posts that compare clouds on a number of practical issues.

One of the questions we hear very often is some variation of the following:

Does my cloud bill me for stopped machines, aka instances, linodes, droplets etc?

The reasoning behind this question is quite simple. If I stop a machine, it means I'm not using it so I assume my cloud will not bill me for it. After all, public clouds are all about elasticity. If this is the case, then I could save a lot of money by stopping machines when they are not needed.

Unfortunately, things are not very straightforward.

Let's go over, in alphabetical order, what is happening.

Alibaba ECS - Bills for stopped machines? Yes (by default)

Instances are billed per second.

You can avoid billing for stopped instances that are connected to a VPC and don't have local disks. User action is required to enable this.

If you turn this feature on and stop an instance, you will be billed for any of the following that apply:

a) attached block storage
b) associated elastic IPs
c) bandwidth
d) images

For more details, check the official documentation for PAYG pricing here and specifically for stopped instances here.
Amazon EC2 - Bills for stopped machines? No

Linux instances are billed per second, with a 60-second minimum. All others are billed per hour.

When you stop an instance, you will be billed for any of the following that apply:

a) attached block storage
b) associated elastic IPs

For more details, check the official documentation here and "Billing and purchase options" in this FAQ.
DigitalOcean - Bills for stopped machines? Yes

Droplets are billed per hour.

Check the relevant answers in their pricing FAQ.
Google Compute Engine - Bills for stopped machines? No

Instances are billed per second, with a 60-second minimum. Some premium images follow a different model.

When you stop an instance, you will be billed for any of the following that apply:

a) persistent storage attached
b) local SSDs
c) associated static IPs

For more details, check the official documentation here.
IBM Cloud - Bills for stopped machines? No

Public Virtual Servers are billed per hour.

IBM offers "Suspended Billing". Servers created after Nov 1st, 2018 include suspended billing. Most servers created before this date don't offer it.

If suspended billing is available and you stop a server, then you will be charged for any of the following that apply:

a) storage
b) secondary public IP address

For more details, check the official documentation here.
Linode - Bills for stopped machines? Yes

Linodes are billed per hour.

Check the relevant answers in their pricing FAQ.
Microsoft Azure - Bills for stopped machines? Maybe

Virtual Machines are billed per second, for the full number of minutes the machine was running. The documentation specifically mentions that if a machine runs for 6 minutes and 45 seconds, you will be charged for 6 minutes.

If the machine status is "Stopped Deallocated", you are not billed. If it is "Stopped" or "Stopped Allocated", you are billed for allocated virtual cores but not for software licenses. Full details on virtual machine states are available here.

In order to get to the "Stopped Deallocated" state, you have to stop the machine from within Azure's management portal or over the API using a specific deallocation parameter. If you stop the machine from within the OS, it will go into the "Stopped Allocated" state.

If you manage to get to "Stopped Deallocated" state, please keep in mind that you are still billed for any of the following that apply:

a) attached Premium (SSD-based) disks
b) attached Standard (HDD-based) disks
c) in the ARM deployment model, a static public IP address, unless it is part of the first five in the region. Read more regarding IPs under the FAQ section at the bottom of this page.

For even more details, check the FAQ at the bottom of this page. The URL ends with /linux, but you will find the same FAQ under /windows as well.
Vultr - Bills for stopped machines? Yes

Vultr cloud instances are billed per hour.

Check the relevant answers in their pricing FAQ.

For easier reference, you can also view the table above in Google Docs.

The comparison includes only services that offer cloud machines. A number of services also offer dedicated hosts and/or bare metal servers. We didn't include them because they are inherently different and, as expected, they charge you regardless of machine state.

Also, please keep in mind that the comparison refers to pay-as-you-go (PAYG) pricing. Alibaba, Amazon, Google, IBM and Microsoft offer reserved and spot pricing as well. In the case of reserved pricing, you will be billed even if you don't use your reserved capacity. With spot, stopping a machine will usually release it and return it to the pool. Billing stops at that point, but you can no longer use the machine. This is the case with Amazon, Google and Azure. In Alibaba and IBM, stopping a spot instance will not release it, but you will continue to incur charges until they either claim it back or you release it yourself.

If things were not complicated enough, you also need to take special usage discounts into account. Such discounts are:

  • Alibaba subscriptions
  • Amazon Savings Plans
  • Google committed-use and sustained-use discounts

In the case of Alibaba subscriptions, things are rather simple. When you buy a subscription you pay a discounted price upfront for the entire billing cycle. Changing the status of the machine won't save you anything.

With Amazon Savings Plans, things are simple too. You commit to a certain usage level over a 1- or 3-year term and get a discount. If you use it, you're good. If you don't, you still pay for it.

Google's committed-use discounts are very similar to Amazon Savings Plans.

Google's sustained-use discounts are more complicated. First of all, Google follows an approach they call resource-based pricing. In this model, the base price of a machine is tied to the underlying resources it uses (vCPUs and memory). If you keep running the same total amount of resources during your billing cycle, you gradually earn a discount that increases over time. This is the sustained-use discount. The discount is irrelevant to the actual machines you run; it depends only on the total amount of resources used. It also doesn't increase linearly over time. To understand it better, we strongly recommend reading the documentation pages linked above.
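To make the mechanics concrete, here is a rough model of the incremental tiers. The 100/80/60/40% rates are the ones Google has documented for general-purpose N1 machine types; verify them against the current docs before relying on this sketch:

```python
# Incremental billing tiers: (fraction of the month, price multiplier).
# Each successive quarter of the month is billed at a lower rate.
TIERS = [(0.25, 1.00), (0.25, 0.80), (0.25, 0.60), (0.25, 0.40)]

def effective_rate(usage_fraction):
    """Average price multiplier for running the same amount of
    resources for usage_fraction (0 < f <= 1) of the billing month."""
    if not 0 < usage_fraction <= 1:
        raise ValueError("usage_fraction must be in (0, 1]")
    billed, remaining = 0.0, usage_fraction
    for width, rate in TIERS:
        used = min(remaining, width)
        billed += used * rate
        remaining -= used
    return billed / usage_fraction
```

Under this model, running for the whole month yields an effective rate of 0.7, i.e. the often-quoted 30% discount, while running for half the month yields only a 10% discount.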

Having said all of the above, let's restate the initial question:

Will I save money if I stop my cloud machines when they are not in use?

The answer depends on a number of factors. To get to the bottom of this you need to:

  1. Check if your service will charge you for stopped machines and how.
  2. Check your reservations and long term commitments.
  3. Don't take spot into account.
  4. If you are using Google Compute Engine, do the math for the sustained-use discount.

All this might sound disheartening, but you could potentially save a lot of money. To get a sense of the ROI, one of our customers recently reduced a 5-digit monthly bill for dev infrastructure by 50%. They did it by automatically tagging machines upon provisioning and then setting a schedule that stops them outside business hours.

Bottom line: the effort is well justified. Do your research and good luck!

This post is the first part in a series comparing clouds. Stay tuned for more.

Multi-cloud has the edge

A spectre is haunting the clouds - the spectre of edge computing. Blazingly fast 5G networks, AI applications and data sovereignty concerns are inevitably pushing workloads to the edge.

Vendors have seen the writing on the wall and are racing to deliver mashups of hardware, software and/or professional services. Are you an AWS customer? Get Outposts to have EC2 in your office. Get Wavelength to create a mini AWS zone at the edge of your telco's network. Similar stories come from Microsoft with Azure Stack and Google with Anthos.

This is not the first time we see exciting tech in walled gardens. It can be convenient, but in the long term we are better off combining best of breed solutions, governed in a unified way. In this context, multi-cloud is here to stay and it will include the edge.

Rule the clouds with Mist v4.2

We are pleased to announce today the release of Mist v4.2. This release includes new governance features like cost quotas, machine leases and rules on logs. It also includes enhancements on Mist's support for several public clouds and a set of bug fixes.

Constraints: cost quotas and machine leases

Mist v4.2 improves multi-cloud governance by introducing constraints. Constraints extend role-based access controls (RBAC) and are configured from the Teams section. In this first iteration, Mist supports constraints for implementing cost quotas and machine leases.

Cost quotas help you stay within budget and avoid unpleasant surprises when you receive invoices from your cloud providers. Mist v4.2 supports quotas per team and organization. Quotas apply whenever a team member attempts to create, start or resize a machine. Mist will compare the current run rate to the relevant quota. The requested action will be allowed only if the run rate is below the quota. For more information on how to set up quotas, check out our help documentation here.
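In pseudocode terms, the check is simple. This is a minimal sketch of the behavior described above, not Mist's actual implementation:

```python
def quota_allows(current_run_rate, action_cost, quota):
    """Allow a create/start/resize action only if the resulting
    monthly run rate stays within the configured quota."""
    return current_run_rate + action_cost <= quota
```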

Machine leases help you reduce machine sprawl. You no longer have to spend valuable time trying to figure out who owns a machine and what to do with it. When a lease expires, machines will get automatically destroyed or stopped. For more details, check out our help documentation here.

Please keep in mind that the above features are only available in Mist Enterprise Edition (EE) and Mist Hosted Service (HS).

Observation logs

Mist emits logs for every action performed through its API. This is useful for auditing and troubleshooting purposes. In addition to that, Mist v4.2 emits logs whenever it detects changes in your infrastructure. This way, you can keep track of actions that did not happen through Mist.

Mist observation logs

Specifically, Mist v4.2 emits logs when it detects:

  • creation or destruction of machines, volumes, networks and DNS zones,
  • changes in the size of machines (e.g. a machine was resized from 2 vCPUs to 4),
  • changes in the status of machines (e.g. a machine went from running to stopped),
  • block storage volumes getting attached or detached.

For more details, check out our help documentation here.

Rules on logs

Since the very first versions of Mist you can set rules on metrics from monitored machines. These rules can trigger actions like email alerts, resource lifecycle actions, script execution etc. Mist v4.2 extends the rules engine to support queries on logs from all supported resource types.

Mist log rules

This opens up several new options, especially when combined with observation logs. Some interesting examples include:

  • Notify me when machines are created or destroyed in my production cloud.
  • Destroy a machine when post-deployment steps fail.
  • Open a ticket in my issue tracker when provisioning fails.

For more information regarding rules on logs, check out our help documentation at https://docs.mist.io/article/170-rules-on-logs.

Maxihost bare metals in Mist

Mist v4.2 brings support for Maxihost, a provider of on-demand bare metal servers. Maxihost is based in Brazil and serves a wide range of global companies like Riot Games, Algolia, Zoho and more. We love their service for the flexibility and cost efficiency it offers.

If you'd like to learn more about Maxihost, visit their website at https://www.maxihost.com/.

If you are a Maxihost user already, you can add it to Mist by following the instructions here.

Other updates in Mist v4.2

Besides the above, Mist v4.2 includes several fixes and improvements. The most notable are:

  • Support FoundationDB Document Layer as a replacement for MongoDB.
  • Improved volume support and machine provisioning on Microsoft Azure Resource Manager and Alibaba Cloud.
  • Attach existing and new volumes when creating machines on AWS, DigitalOcean and Alibaba Cloud.
  • Cloud-init support for OpenStack, Alibaba Cloud, IBM Cloud and Vultr.
  • Hide unavailable actions in the web UI according to RBAC permissions.
  • Rules can trigger webhook actions.
  • Include alert level description in rule notification actions.

Conclusion

Mist v4.2 focuses on how you can improve governance through features like constraints, observation logs and rules on logs. It brings support for a new bare metal cloud provider and several enhancements to existing ones. Finally, it introduces fixes for bugs and further improves the web UI. The next major release will go out late in Q1. Until then, stay tuned for minor releases on a monthly schedule.

To check out the entire platform please reach out to demo@mist.io and one of our engineers will give you a quick overview.

If you'd like to try it out for yourself, sign up for a Mist HS account and begin your 14-day free trial.

Community Edition users can get the latest version and relevant instructions on GitHub.

Mist now supports Alibaba Cloud

Market share research from Gartner shows that Alibaba Cloud (a.k.a. Aliyun) is the #1 IaaS vendor in Asia Pacific, at almost double the size of second-placed Amazon AWS. Alibaba Cloud also holds the #3 position worldwide, behind only Amazon AWS and Microsoft Azure. It has a very high density of datacenters in Asia Pacific and China, but its presence is sparser in the rest of the world. In terms of feature set, it offers all the core services you would expect from a major cloud vendor.

Recently, we noticed a considerable uptick in user requests for Alibaba Cloud support in Mist. Mist v4.1 delivers the first iteration. Now, our users are able to manage their Alibaba Cloud Elastic Compute Service (ECS) together with other public or private infrastructure from a single pane of glass.

More on Mist.io and Alibaba Cloud

Our work with Alibaba Cloud doesn't stop here though. We are happy to announce that Mist.io is a Technology Partner for Alibaba Cloud and in the future you should expect deeper integration and collaboration. Stay tuned!

Our experience with Alibaba's team has been very positive and we recommend trying it out, especially if you have workloads that need to run in APAC.

New users can sign up for a free trial.

Users with some initial exposure to Alibaba Cloud can leverage the Starter Package until October 10th, 2019. The Starter Package offers discounted rates across a number of services. More details and price comparisons to AWS can be found here.

Other updates in Mist v4.1

Some of our bigger customers, e.g. SevOne, are all-in on self-service DevOps. To make this happen in an organized way, they need very fine-grained control over who has access where.

Mist v4.1 adds another layer of such controls, enabling users to enforce Mist's RBAC policies on cloud locations. This feature is available in Mist Hosted Service (HS) and Mist Enterprise Edition (EE), which come with RBAC support out of the box. RBAC on locations applies both to public clouds and private infrastructure. For example, account owners can now allow their teams to provision resources only on Alibaba ECS EU Central 1 (Frankfurt) availability zone A and vCenter Cluster 2.

Setting up RBAC for locations
Setting up RBAC for locations

Besides the above, Mist v4.1 includes several fixes and improvements. The most notable are:

  • Support for volumes in Packet clouds.
  • Support for new OpenSSH key format.
  • Set filesystem type when creating volumes in DigitalOcean.
  • Create and attach volume on OpenStack from the machine creation form.
  • Support OpenStack API v2.2 & OpenStack Auth API v3.
  • Update date picker in schedules.
  • Fix editing of schedule script parameters.
  • Fix tag editing in lists.
  • Fix price retrieval for GCE Asia regions.

Conclusion

Alibaba Cloud support is the big new feature in Mist v4.1. This release also brings role-based access controls for cloud locations in private and public clouds. Finally, it includes several fixes and improvements for Packet, OpenStack, DigitalOcean, Google Compute Engine and OpenSSH.

Starting today, v4.1 is available on all Mist editions.

For a quick demo, reach out to demo@mist.io and arrange a video call with one of our engineers.

New features, performance and usability improvements in Mist v4.0

We are happy to announce version 4.0 of the Mist Cloud Management Platform. This major new release brings several new features, performance and usability improvements. It also incorporates the lessons we have learned in the past few months while working with teams that manage thousands of resources, e.g. SevOne.

Mist v4.0 updates

Mist now runs on Python v3.7

python programming language logo

Mist v4.0 brings a complete migration from Python v2.7 to Python v3.7. Our goal is to future-proof the code base and take advantage of the latest language improvements.

Thanks to this migration, Mist users will notice considerable improvements in server-side performance.

The migration also allowed us to upgrade to Apache Libcloud v2.4.0 which no longer supports older Python versions. The latest stable version of Libcloud includes many new features for OpenStack, Amazon AWS, Google Cloud Platform, Microsoft Azure and DigitalOcean. You can see a full list of changes here.

If you are considering a similar migration for your own projects, check out this post for a nice overview of the differences between Python v2.X and Python v3.X. You can find further useful information in the official Python documentation, here and here. Finally, keep in mind that community support for Python v2.7 ends on January 1st, 2020.
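By way of illustration, here is a short sketch of a few well-known Python 2 incompatibilities that a migration like this has to address:

```python
# A few of the Python 2 -> 3 changes that typically affect a migration.
# Each comment notes the Python 2 behavior for contrast.

# 1. print is a function, not a statement.
#    Python 2: print "hello"
print("hello")

# 2. The / operator performs true division on integers.
#    Python 2: 7 / 2 == 3
assert 7 / 2 == 3.5
assert 7 // 2 == 3  # floor division is now explicit

# 3. str is Unicode text; raw bytes are a separate type.
#    Python 2: 'mist' was a byte string by default.
text = "mist"                 # str, Unicode by default
data = text.encode("utf-8")   # bytes, via an explicit encoding
assert isinstance(data, bytes)

# 4. dict.keys()/values()/items() return views, not lists.
keys = {"a": 1, "b": 2}.keys()
assert list(keys) == ["a", "b"]
```

Tools like `2to3` and `six` automate much of this, but behavioral changes such as integer division still need review by hand.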

Polymer v2.X and Web Components v1

polymer project logo

In Mist v4.0 the front-end code is in Polymer v2.X, up from Polymer v1.X. This is the first step towards moving to Polymer v3.X. The goal of this transition is to offer improved browser interoperability and performance. It also allows us to easily upgrade 3rd party components for additional usability improvements.

Migrating from Polymer v1.X to v2.X is not trivial because v2.X introduces breaking changes. Before you attempt something similar, make sure you check out this excellent upgrade guide. For more information on what Polymer v2.X brings to the table, check out this document. Since this will probably be a short-lived intermediate step before moving to Polymer v3.X, you should also go over the relevant v3.X documents here and here. The good news is that once you're on v2.X, moving to v3.X requires less effort than moving from v1.X to v2.X.

Usability improvements

Alongside the major changes mentioned in the previous paragraphs, Mist v4.0 includes several usability improvements to ease your day-to-day routines. The most notable ones are:

i) Searchable key & network selection widgets in forms.

ii) Collapsible sections in monitoring dashboards.

iii) Export machine monitoring dashboards as PDF.

iv) Improved user interaction when adding "Other Server" Clouds.

v) Widget for selecting existing tags.

add existing tag to machine
Adding existing tags to new machines

vi) Configurable filters in every list that persist in localStorage.

saving search filters
Saving custom search filters

vii) Improved display of JSON & XML metadata.

browsing JSON metadata
Browsing machine metadata in JSON

Automatic backup & restore scripts

For Mist Community and Enterprise Edition users who are managing their own Mist installations, v4.0 includes a new backup feature. You can now automatically back up and restore everything, including MongoDB and InfluxDB, with a few simple configuration changes.

Pre and post action hooks

Mist v4.0 allows users to set pre and post action hooks at the cloud level, e.g. for all resources in an OpenStack cloud. This is useful for users with large infrastructure footprints that require highly customized workflows and integrations with 3rd party systems. For example, one of our users takes advantage of this feature for metering and billing purposes. When a new VM is provisioned, a post-action hook notifies the billing system. The same happens after the VM is destroyed. Based on this information, it is possible to know how many resources were utilized and for how long. This is then translated into an internal cost unit.
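The metering workflow above can be sketched in a few lines. Note that this is a hypothetical illustration: the names (`UsageMeter`, `record_provision`, the cost rate) are ours, not Mist's actual hook API.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the metering workflow described above. A
# post-action hook on VM creation would call record_provision, and a
# post-action hook on VM destruction would call record_destroy; the
# elapsed time is then translated into internal cost units.

COST_UNITS_PER_HOUR = 1  # assumed internal rate per VM-hour


class UsageMeter:
    def __init__(self):
        self.provisioned = {}  # machine_id -> provision timestamp
        self.usage = {}        # machine_id -> billed cost units

    def record_provision(self, machine_id, when):
        self.provisioned[machine_id] = when

    def record_destroy(self, machine_id, when):
        start = self.provisioned.pop(machine_id)
        hours = (when - start).total_seconds() / 3600
        self.usage[machine_id] = hours * COST_UNITS_PER_HOUR


meter = UsageMeter()
t0 = datetime(2019, 8, 1, 9, 0)
meter.record_provision("vm-42", t0)
meter.record_destroy("vm-42", t0 + timedelta(hours=6))
assert meter.usage["vm-42"] == 6
```

In a real deployment the destroy hook would post this figure to the billing system rather than keep it in memory, but the accounting logic is the same.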

Conclusion

Mist v4.0 is a major stable release which brings lots of changes and significant improvements.

Starting today, v4.0 is available on all Mist editions.

For a quick demo, reach out to demo@mist.io and arrange a video call with one of our engineers.

SevOne revamps Self Service DevOps to move faster and save money

Christos Psaltis profile pic

SevOne logo

SevOne is a leading provider of network monitoring solutions. For its engineering needs it runs several thousand VMs across more than 7 platforms, including VMware vCenter, several versions of Red Hat OpenStack, Kubernetes and more. A few months ago SevOne turned to Mist.io to improve self-service workflows for its engineers. Empowered by the Mist Cloud Management Platform, SevOne was able to speed up development, save money and invest more resources in key business drivers.

The Case

SevOne, a Boston, MA based tech company, builds a suite of network & infrastructure monitoring products. That description is an understatement when you consider that SevOne's customers include Verizon, Comcast, eBay, Credit Suisse, Lockheed Martin and many more. In fact, some of the largest networks in the world rely on SevOne to run at peak performance.

SevOne's infrastructure footprint for development and QA runs across public clouds and its own data center, with a mix of bare metal servers, VMware vCenter, several versions of Red Hat OpenStack and more. Currently, many applications are moving to microservices, using Kubernetes for container orchestration. In most cases the OS of choice is Linux. More than 100 developers from different teams need access to at least a subset of this hybrid infrastructure, spanning several thousand VMs, to meet their day-to-day business needs.

Kevin Williams

"We chose Mist because it was easy to onboard and use. It required no changes in the way we were doing things, while enabling us to iterate and improve." Kevin Williams, Corporate Services Engineer at SevOne

To be more agile and move faster, SevOne adopted a self-service model via a homegrown web-based virtual server provisioning system. However, SevOne development resources familiar with the homegrown application were often diverted or moved to other posts, making it increasingly difficult to maintain and support the application.

"Bottom line, we had a basic homegrown application that was hard to support, maintain and extend. It ended up holding us back instead of helping us move faster", says Kevin Williams, Corporate Services Engineer at SevOne.

With this experience in mind, SevOne started looking into third-party management platforms. After a few months of testing, SevOne chose Mist.

Kevin notes, "We chose Mist because it was easy to onboard and use. It required no changes in the way we were doing things, while enabling us to iterate and improve. Also, it helps us easily manage our Kubernetes clusters. Finally, we were impressed by their support. Mist.io people were very responsive, knowledgeable and helped us hands-on when needed."

Life with Mist

SevOne is currently managing its DevOps infrastructure with Mist Enterprise Edition, installed on-prem. Each SevOne user belongs to a team with specific rights over resources based on Mist role-based access control. To provision applications, SevOne DevOps has prepared a set of templates and scripts. SevOne developers use these to deploy complex applications, like Kubernetes clusters, in just a few clicks. As an added value, SevOne DevOps is able to view who owns resources and how they are utilized.

Kevin comments, "Since roll out, Mist gained a lot of traction over our homegrown application. Today it's one of the top 3 tools people use on a daily basis. Our developers love how easily they can provision resources with templates and pre-built scripts. For example, when we're about to do a new release we need to provision a lot of VMs for QA and training purposes. Mist gives my end users the freedom to do this right away. They don't have to open a ticket for the IT team like they did in the past. My end users are happy and the IT team doesn't have to perform manual steps to stand VMs up. This alone saves us hundreds of hours in each release cycle. It's also much easier to track users and resources across systems, e.g. which team owns what, how much they are using etc. For example, last weekend an engineer was trying to find one of his VMs. He only had an IP that was not responding because the VM was powered off. With vCenter, tracking this VM would be time consuming. With Mist it was a matter of seconds."

Conclusion

By adopting Mist, SevOne was able to score multiple wins across the board:

  • SevOne DevOps freed up at least one full-time equivalent previously spent supporting the old homegrown application. SevOne saved even more by not having to add new features to it. All of this effort was redirected to more business-critical projects.
  • SevOne developers are happier because they can get resources faster and easier. They are no longer bogged down by details and can focus on the work at hand, saving hundreds of hours in each release cycle.
  • SevOne managers are also happier because productivity increased across the board. They now have better visibility into what each team owns and this paves the way for further optimizations and more savings.

To learn more about how Mist can help you achieve similar results, contact us at demo@mist.io.

To try Mist right away, sign up for a 14-day free trial.

Open source and DIY enthusiasts can try our Community Edition on Github.
