Docker running on macOS with the Apple M1 chip in one command

Recently, Docker announced that Docker Desktop will become a paid subscription for corporate users. Couple this with the latest Apple M1 chipset in the newest MacBooks, which still has limited support from virtual machine vendors, and the community has a reason to look for alternatives.

Canonical (the makers of Ubuntu) has released Multipass, a dedicated Ubuntu VM-as-a-Service tool that fully supports the Apple M1 chip. With it, and about five minutes of your time (depending on download speeds), you can get Docker running on your M1 MacBook.

Edit: I have created a script to automate the entire process!

Here is the repo: https://gitlab.com/scottbri/docker-on-m1

Installation

This installation assumes that Homebrew is already installed. The script fully automates the installation of docker, docker-compose, and multipass using Homebrew.

We will use the name dockervm for the Multipass Ubuntu VM instance.

Usage

bash -c "$(curl -fsSL https://gitlab.com/scottbri/docker-on-m1/-/raw/main/install.sh)"

How to do it manually:
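In rough outline, the manual path uses the same pieces the script automates. Here is a sketch built from standard Homebrew and Multipass commands; the function names and the tcp-socket detail are illustrative assumptions, not the actual contents of install.sh:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Install the client-side tools on the Mac (assumes Homebrew is present).
install_tools() {
  brew install docker docker-compose   # CLI clients only; no Docker Desktop
  brew install --cask multipass        # Canonical's Ubuntu VM manager
}

# Launch an Ubuntu VM named "dockervm" and install the Docker engine inside it.
create_vm() {
  multipass launch --name dockervm
  multipass exec dockervm -- sudo apt-get update
  multipass exec dockervm -- sudo apt-get install -y docker.io
  multipass exec dockervm -- sudo usermod -aG docker ubuntu
}

# Point the local docker CLI at the engine inside the VM.
# (Assumes the engine in the VM has been configured to listen on tcp,
# which requires an extra daemon-config step not shown here.)
configure_client() {
  local ip
  ip=$(multipass info dockervm | awk '/IPv4/ {print $2}')
  echo "export DOCKER_HOST=tcp://${ip}:2375"
}

# Uncomment to run end to end:
# install_tools && create_vm && configure_client
```

The full write-up covers the details each of these steps glosses over.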

Continue reading

Four practical steps of DevOps “Deploy”

CI is one thing; CD is another. Let’s focus a bit on CD, and on the Deploy phase in particular from the "DevOps Loop". This sits squarely in the "Ops" category of DevOps. I’d like to expand on the concept, as there are several distinct steps that need to happen. Nothing is magical about it.

There are four practical steps that comprise Deploy:

  1. Release of the software bits and configuration into a deployable package
  2. Customization of that package to suit the target environment
  3. Deployment of the package and validation of success
  4. Cutover of users to the newly deployed release

DevOps Loop

Release

This step is distinct from the "Release" phase of the DevOps Loop, where the tested software is designated as a "Release" suitable for the Deploy phase. I would like to call out that software is released via many different packaging frameworks: it could be a Windows MSI bundle, an OVA, or a container image with a set of K8s manifests.

Regardless, the Ops side of your DevOps team needs to collect the bits and configuration into a package that fits the way your shop wants to automate deployment.

  • Put the immutable, newly released software artifact(s) into the proper deployment registry (which may not be the registry the Dev team "released" them into!)
  • Put the release deployment configuration into the proper registry
  • Identify dependencies and repeat above for each
  • Retrieve a working copy of all configurations into an assembly folder and arrange them into a standard (templated, parameterized) deployable package
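The release steps above can be sketched as a small assembly script. The registry layout, folder names, and release identifier below are illustrative assumptions, not a prescribed structure:

```shell
#!/usr/bin/env bash
set -euo pipefail

RELEASE="myapp-1.4.2"   # hypothetical release identifier

# --- Simulate a release registry so the sketch is self-contained ---
mkdir -p "registry/${RELEASE}/config"
echo "binary-bits" > "registry/${RELEASE}/app.tar.gz"
echo "replicas: 3"  > "registry/${RELEASE}/config/deploy.yml"

# 1. Gather the immutable released artifacts and their configuration
#    into a working assembly folder.
ASSEMBLY="assembly/${RELEASE}"
mkdir -p "${ASSEMBLY}/artifacts" "${ASSEMBLY}/config"
cp registry/"${RELEASE}"/*.tar.gz     "${ASSEMBLY}/artifacts/"
cp registry/"${RELEASE}"/config/*.yml "${ASSEMBLY}/config/"

# 2. Arrange everything into a single deployable package.
tar -czf "${RELEASE}-deploy.tar.gz" -C "${ASSEMBLY}" .
echo "Packaged ${RELEASE}-deploy.tar.gz"
```

In a real shop the "registry" would be an artifact repository, and the dependency-identification step would repeat this loop per dependency.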

Adapt

Great, the bits and pieces have been assembled into a deployable package for your needs, but into which environment will it be deployed? Will it be dev, test, stage, fix, pre-prod, or production? What public names will the services have? Will it be served by a load balancer or an ingress controller? Onto which IaaS? All of these questions and more lead to the need to adapt the standard release package to suit each specific target environment.

  • Identify changes to configurations for the target environment
  • Store configuration modifications to the release bundle, preferably through parameterized custom values files or via overlays in an adaptations folder.
  • Generate the final deployment configuration for the specific environment and place everything needed into a deployment folder, ready to execute.
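A minimal sketch of the adapt step using plain token substitution; the placeholder tokens, file names, and values are assumptions, and in practice this is usually done with parameterized values files (e.g. Helm) or overlays (e.g. Kustomize):

```shell
#!/usr/bin/env bash
set -euo pipefail

# --- Standard release template (created inline so the sketch is self-contained) ---
mkdir -p package deployment adaptations
cat > package/service.yml.tmpl <<'EOF'
host: __PUBLIC_HOST__
replicas: __REPLICAS__
EOF

# Per-environment values, e.g. adaptations/stage.env
cat > adaptations/stage.env <<'EOF'
PUBLIC_HOST=myapp.stage.example.com
REPLICAS=2
EOF

# Generate the final deployment configuration for the target environment.
source adaptations/stage.env
sed -e "s/__PUBLIC_HOST__/${PUBLIC_HOST}/" \
    -e "s/__REPLICAS__/${REPLICAS}/" \
    package/service.yml.tmpl > deployment/service.yml

cat deployment/service.yml
```

The template and the per-environment values files live in version control; only the generated deployment folder changes per target.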

Deploy

What tools do you use for deployment? Everything prior is about creating a package that suits the deployment toolchain. Now it’s time to execute. Who maintains that deployment automation? Where there is no automation, who steps through the process, and how? How is the deploy monitored and remediated in case of failure? How is a successful deployment measured? Are there functional tests that must pass? The deployment phase is where all prior effort culminates in action distinct from the packaging phases before it.

  • Roll out the deployment of the final configuration in the manner appropriate to the needs of the business (blue/green, rolling update, hard cutover, etc.)
  • Retrieve deployment logs from all activities
  • Capture all configuration files and adaptations used to build the final deployment
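The rollout-and-capture steps can be sketched as follows; `deploy_cmd` is a stand-in for whatever your toolchain actually runs (kubectl apply, helm upgrade, an MSI installer, etc.), and the folder names are illustrative:

```shell
#!/usr/bin/env bash
set -euo pipefail

mkdir -p logs archive deployment
echo "replicas: 2" > deployment/service.yml   # stand-in for the generated final config

# Placeholder for the real rollout command (kubectl apply, helm upgrade, etc.)
deploy_cmd() { echo "applying deployment/service.yml"; }

# 1. Roll out, capturing a log of the activity as we go.
deploy_cmd 2>&1 | tee logs/deploy.log

# 2. Capture the exact configuration and logs used for this deployment.
cp deployment/service.yml logs/deploy.log archive/
```

Capturing the configuration alongside the logs is what makes a failed deploy diagnosable and a successful one reproducible.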

Cutover

Deployment happened. You did a thing. Was that thing successful? Run additional testing over time to ensure successful release rollout. Things look good? Decide to cutover 100% to the new release deployment. Is something wrong? Roll back to previous versions. Automate the cutover or roll-back process as much as possible.

  • Run functional tests to ensure successful deploy
  • Execute final cutover automation once confirmation of deployment success is established
  • Capture all deployment logs and artifacts into an immutable archive
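The cutover decision reduces to a simple test-then-act step. In this sketch, all three functions are placeholders for your real smoke tests and traffic-shifting automation:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholders: substitute your real smoke tests and cutover/rollback automation.
functional_tests() { echo "running smoke tests"; return 0; }
cutover()          { echo "cutover: 100% of traffic to new release"; }
rollback()         { echo "rollback: traffic restored to previous release"; }

# Run the validation and act on the result.
if functional_tests; then
  cutover
else
  rollback
fi
```

Because both branches are automated, the decision can be retried safely and, over time, triggered by monitoring rather than by a human.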

Tanzu Advanced edition addresses the real point of IT

What was the point of IT again? Why are we doing all of this? We’re doing all of this work in order to get applications, software, code in the hands of our customers.

That’s ultimately what IT shops are all about, and what this vendor ecosystem is all about. What partners do is help get software running, and put that functionality into the hands of our customers’ customers, so that our customers can make money.

Software is becoming no longer, “This thing that is running in support of the business.” Now software is becoming the business, right?

Continue reading

Tanzu Connect Partner Webinar Series Episodes

The amazing Karin Bash and I host the VMware Tanzu Connect partner webinar series for VMware partners that want to stay current on the VMware Tanzu portfolio of products. Every week we host a 30-45 minute session on a topic relevant to VMware partners selling the Tanzu portfolio.

Sign up for the webinar here! (you must have a VMware PartnerConnect login!)

Episode Archive

Continue reading

vSphere 7 Pod Service Security

TLDR:
In the first introductory meetings I have with partners, I will admit to a bit of “salesmanship,” saying over-simply that pods run “directly on the ESXi kernel” alongside VMs. The problem is that neither pods nor VMs run “directly on the ESXi kernel.”

ESXi is not Linux… but containers must run on a Linux kernel!?  Continue reading

Introducing the VMware Tanzu Portfolio

Background

New infrastructure architectures have always followed the demands of new application architectures.  The rise of public cloud IaaS answered the need to take applications virtualized on discrete VMs and move them to a new shared platform.  Now, with applications being written natively to take advantage of the scale of the cloud, the packaging and execution of these highly distributed, microservices-based applications by the infrastructure is changing again in response. Continue reading

Tanzu Information Links to Great Content

Current (Virtual) Events and Webinars

General Information

Continue reading

Log Metrics and Log Data

There is a role for both in a cloud native world.

While you can send log data directly to Wavefront, the primary use case there is to convert logs into metrics. VMware Log Insight can do some of this as well, but it is a better place to data-mine the log data.
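As a toy illustration of deriving a metric from a log stream (generic shell, not Wavefront’s actual ingestion format or API; the log lines and metric name are made up):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sample log stream (stand-in for an application log).
cat > app.log <<'EOF'
2021-09-01T10:00:01 INFO  request served in 12ms
2021-09-01T10:00:02 ERROR upstream timeout
2021-09-01T10:00:03 INFO  request served in 9ms
2021-09-01T10:00:04 ERROR upstream timeout
EOF

# Derive a simple metric from the log stream: the count of ERROR lines.
errors=$(awk '$2 == "ERROR"' app.log | wc -l | tr -d ' ')
echo "app.error.count ${errors}"
```

The aggregated count is what a metrics system tracks and alerts on; the individual ERROR lines stay in the log store for root-cause analysis.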

Metrics and log monitoring are complementary. If you embrace the power of tracking everything that moves in your environment, then you’ve added instrumentation to likely thousands of places across your codebase.

Metrics give you an aggregated view over this instrumentation, and Wavefront can derive metrics from the log stream. Logs give you information about every single request or event, and the log data can be archived in log data mining tools for root cause analysis, debugging, and troubleshooting. Continue reading

Operational Excellence at Pivotal

I work at Pivotal Software.  I was asked recently about Pivotal’s approach to “operational excellence.”  The answer has a lot to do with just how “cloud native” Pivotal is.

Pivotal’s heart and soul rest in the practices of Lean and Agile methodologies and especially Extreme Programming. The platforms that Pivotal has helped build are platforms focused on increasing developer productivity and shortening software release cycles.  Operational success is measured on the achieved outcomes like improved software release velocity with fewer bugs, and not as much on operations-specific metrics like reductions in trouble tickets.  Continue reading