Containers

Wavefront and Microservices Architectures

Microservice architectures improve the ability of IT organizations to deliver business-critical services to their customers more rapidly, but this improved agility often comes at the cost of added complexity. That complexity shows up in the increasingly scaled-out and ephemeral nature of this application architecture pattern. The transformation from monolithic applications to microservices means that, in many cases, the complexity which resided on the "inside" of the application shifts towards the "outside" of the microservice: IT organizations now need to account for the additional infrastructure components required to support each microservice, as well as the additional communication pathways within the microservice ecosystem. In terms of the microservice lifecycle, data collected by companies such as New Relic confirms the trend towards increasingly short-lived microservices, with many running for less than one minute.

What is often lost in the discussion of microservices is the need to re-examine how an IT organization will manage this new environment, because the issues just raised can often "break" existing tooling. Let's look at just one outcome – the missing or invisible container. Tools that perform coarse-grained data collection may never observe a microservice at all, because its entire lifecycle can occur in between data collection operations. That is why IT organizations may need to reconsider their existing monitoring tool portfolio to ensure they have the fine-grained observability required for an increasingly dynamic environment. Returning to the "inner" versus "outer" architecture ramifications, IT organizations will also need tools that can not only store large amounts of information, but also enable rapid identification of patterns of microservice behavior that may not be obvious to a human operator.

VMware understands these issues, and they were the impetus for our investment in Wavefront. We'll continue to focus on transforming the manageability of an increasingly complex environment so that IT organizations can focus their energies on transforming their business.

 

 

 

The post Wavefront and Microservices Architectures appeared first on VMware Cloud Management.

Read more..

Part 2: Storage Policy Based Management for Containers Orchestrated by Kubernetes

This blog is written by Balu Dontu and Tushar Thole from the Cloud Native Applications Storage team. See Part 1 of the blog: Deploying WordPress-MySQL Application with Persistent Storage. Create a deployment with a Persistent Volume Claim. Create a MySQL deployment using this resource file: […] Key parts here are: the resource defines a Deployment using the mysql Docker

The post Part 2: Storage Policy Based Management for Containers Orchestrated by Kubernetes appeared first on Virtual Blocks.

Read more..

Part 1: Storage Policy Based Management for Containers Orchestrated by Kubernetes

This blog is written by Balu Dontu and Tushar Thole from the Cloud Native Applications Storage team. In a previous blog, we learned how stateful containers orchestrated by Kubernetes can leverage storage exposed by vSphere (vSAN, VMFS, NFS) by using standard Kubernetes volume, persistent volume and dynamic provisioning primitives. Once you start enjoying the convenience of

The post Part 1: Storage Policy Based Management for Containers Orchestrated by Kubernetes appeared first on Virtual Blocks.

Read more..

Orchestrate Stateful Containers Using Docker Swarm and vSphere Docker Volume Service

This blog is written by Liping Xue, a software engineer on the Cloud Native Applications Storage team who focuses on vSphere Docker Volume Service. As described in this blog, we can see that users have strong demands to run data-intensive workloads in containers. Although Docker has good support for running stateless applications which dominate

The post Orchestrate Stateful Containers Using Docker Swarm and vSphere Docker Volume Service appeared first on Virtual Blocks.

Read more..

vSphere Docker Volume Service is now Docker Certified!

We are happy to announce that VMware has joined the Docker Certification Program and the vSphere Docker Volume Service (vDVS) plugin is now available on the Docker Store! VMware's inclusion in the program indicates that vSphere Docker Volume Service has been tested and verified by Docker, confirming to customers that the vSphere Docker Volume plugin has been evaluated for security and is supported and built according

The post vSphere Docker Volume Service is now Docker Certified! appeared first on Virtual Blocks.

Read more..

Kubernetes and VMware NSX

Attending CloudNativeCon/KubeCon this week in Berlin (29th and 30th of March)? Please visit us at our booth #G1 and click for more details about what's happening at the show!


IT is undergoing a huge transformation.

Organizations are moving away from static infrastructure to full automation on every aspect of IT. This major shift is not happening overnight. It is an evolutionary process, and people decide to evolve their IT at different speeds based on organizational needs.

When I decided to join the VMware Networking & Security Business Unit four years ago, the key deciding factor for me was that I felt networking was adopting automation far too slowly. Do not get me wrong – we have always automated network configurations in some form. I still remember vividly my time as a networking consultant at a major German airport. Back at the beginning of the new millennium, I used a combination of Perl, Telnet and Expect to migrate the configuration of a huge core network from a single-tenant configuration to a multi-tenant MPLS/VPN. Nevertheless, at some point network operators stopped evolving, and even today we largely continue to "automate" by manually entering new configuration into network devices using each individual box's CLI syntax.

Then along came VMware NSX. NSX was, and still is, exactly my definition of a system purpose-built for network and security automation. NSX abstracts the "to be automated" parts of the network away from the static physical infrastructure, and all of this is driven by APIs. NSX lets operators automate where it is good and safe to automate – in the overlay – while keeping the static configuration, and the stability that comes with it, in the physical network.

What does this all have to do with Kubernetes?

Let us first have a quick look at Kubernetes' mission statement:

"Kubernetes is an open-source platform for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure."

Container-centric infrastructure needs a network, and this network must be dynamic. We cannot keep the good ol' model of predefining everything and having the containers only "consume" networking. The container network must share the life cycle of the applications deployed on Kubernetes – created dynamically, scaled on demand when the app is scaled – and it must be multi-tenant. Anything less will result in incomplete automation and limited agility.

Let me give you an example:

A business unit decides to move to the new Kubernetes deployment that was recently set up by the internal IT Cloud Infrastructure team. Unfortunately, the business unit will need to wait a couple of days for its environment to be usable, as the underlying network topology (VLANs, router IP interfaces, VRFs, firewall interfaces, etc.) has to be pre-configured to map to the business unit's newly created Kubernetes namespace.

This clearly does not work! The business unit rightly expects a "public cloud experience". After finalizing contractual details through a web portal, the namespace in Kubernetes should be created alongside all needed network and storage constructs. Even more extreme, the business unit should be able to order its own complete Kubernetes deployment – with all network and storage constructs – and have it delivered in less than 10 minutes after pressing the "order" button.

Now you might rightly say, "Isn't this a solved problem? Don't we have overlay network technologies in Kubernetes that already abstract away the logical network in Kubernetes from the underlying IaaS network?" True! Those have all been invented exactly to solve the issues caused by non-programmable static infrastructure that sits underneath Kubernetes.

However, the current implementations with overlay network technologies have a number of challenges that I would like to walk you through:

  • Missing fine-grained traffic control and monitoring: In Kubernetes, operators do not deploy individual containers; they deploy Pods. A Pod is a collection of containers that share the same network interface and run on the same Kubernetes node. Every Pod has a distinct network interface that gets patched into a Linux network namespace for isolation. It is very complex to troubleshoot and secure Pod-to-Pod connectivity with the current technologies, as they do not offer central visibility of all Pod network interfaces. Central management of Pod network interfaces – with the ability to read counters, do traffic monitoring and enforce spoofguard policies – is at the core of the NSX value proposition. NSX also offers a rich set of troubleshooting tools to analyze and solve connectivity issues between Pods.
  • Missing fine-grained security policy (firewall rules): In some of the current technologies, Pod-to-Pod traffic is not secured at all. This opens an opportunity for attackers to move laterally from Pod to Pod without being blocked by firewall rules and, even worse, without leaving any traces of this lateral movement in logs. Kubernetes addresses this with the Network Policy project driven by the Networking Special Interest Group (SIG) in Kubernetes; a minimal example policy appears after this list. NSX implements Network Policy alongside pre-created "admin rules" to secure Pod-to-Pod traffic in Kubernetes.
  • Automating the creation of network topology: Many of the current implementations take a simple approach to network topology mapping, which is not to have any topology mapping at all. IP subnet allocation for Pods is mostly done per Kubernetes node. Tenancy constructs like namespaces are usually not reflected in anything other than abstract firewall rules. NSX implements a distinct network topology per Kubernetes namespace. NSX maps logical network elements like logical switches and distributed logical routers to Kubernetes namespaces in a fully automated manner. Each of those network topologies can then be adapted per namespace, e.g. operators can decide whether the subnets of a namespace should be directly routed, or privately addressed and placed behind NAT.

Figure 1: NSX Kubernetes Topology

 

  • Integration into enterprise networking: A paradigm of many existing technologies is that the operator needs to decide at install time whether the container networks should be privately addressed and hidden behind a NAT boundary, or directly routed within the enterprise network. Existing overlay technologies make it difficult to expose Pods to networks outside of the Kubernetes cluster. Exposing services involves using NAT/PAT on the Kubernetes nodes themselves, putting the burden on the operator to design how to map, for example, external physical load balancers or DNS records to TCP/UDP ports on the Kubernetes nodes. Alternatively, or in addition, one can use the new Kubernetes Ingress load balancers to get traffic into the container networks. In any case, there is NAT involved. With the NSX integration, we intend to allow operators to decide on a per-namespace basis whether they want direct routing, and even whether they want to inject the routes dynamically into the core network using BGP. On the other hand, if operators want to save IP address space, they can "hide" the namespace networks behind NAT, using private IPs and Kubernetes Ingress controllers (load balancers) to get external traffic to the Pods.
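To make the Network Policy point above concrete, here is a minimal sketch of a standard Kubernetes NetworkPolicy object. The namespace and labels are hypothetical, and the NSX admin rules mentioned above are configured in NSX itself rather than in this object:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db        # hypothetical policy name
  namespace: team-a            # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: db                  # the Pods this policy protects
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web             # only "web" Pods in the same namespace may connect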

What does the NSX integration look like? How did we design it?

To start, the integration uses NSX-T, since this makes the solution applicable to any compute environment, not just vSphere. For example, NSX-T will allow us to support Kubernetes on a variety of compute platforms – such as Photon Platform (which uses ESXi hosts without vCenter), bare-metal Linux servers, public clouds, and KVM-based virtualization environments.

To integrate Kubernetes with NSX-T, we intend to develop three major components: 1) the NSX Container Plugin (NCP), 2) the NSX CNI Plugin and 3) NSX Kube-Proxy.

  1. The NSX Container Plugin (NCP) is a software component that we intend to deliver as a container image, running as an infrastructure Pod in the Kubernetes cluster. It would sit between the Kubernetes API server and the NSX-T API, watching for changes on Kubernetes objects (namespaces, network policies, services, etc.). It would create networking constructs based on the object additions and changes reported by the Kubernetes API.

    Figure 2: NCP

  2. The NSX CNI Plugin is a small executable intended to be installed on all Kubernetes nodes. CNI stands for Container Network Interface and is a standard that allows the integration of network solutions like NSX into container orchestration platforms. The Kubernetes node component called the Kubelet instantiates/calls the CNI plugin to handle the Pod network attachment.
  3. The NSX Kube-Proxy is a daemon running on the Kubernetes nodes. Again, we intend to deliver NSX Kube-Proxy as a container image, so that it can be run as a Kubernetes DaemonSet on the nodes. NSX Kube-Proxy would replace the native distributed east-west load balancer in Kubernetes, called kube-proxy, which uses iptables, with a solution that uses Open vSwitch (OVS) load-balancing features.

Each of the components deserves a closer look and far more explanation than what I can cover in this initial article. We will follow up with a detailed look at each component in future articles.

Before I wrap this up, there is one more thing you might ask: "How do we solve the two-layers-of-overlay problem?" Well, when running overlay networks between Kubernetes nodes that are themselves VMs on an IaaS that uses an overlay network solution, you end up with double encapsulation, e.g. VXLAN in VXLAN.

Figure 3: Overlay in Overlay Problem

Would the NSX-T Kubernetes Integration suffer from the same problem?

The answer would be no. When running the Kubernetes nodes as VMs, the tunnel encapsulation would be handled only at the hypervisor vSwitch layer. In fact, the Open vSwitch (OVS) in the Kubernetes node VM would not even have a control plane connection to the NSX-T controllers and managers, creating an additional layer of isolation and security between the containers and the NSX-T control plane. The NSX CNI plugin intends to program the OVS in the Kubernetes node to tag traffic from Pods with a locally significant VLAN id (per vnic). This would allow us to "multiplex" all the traffic coming from the Pods onto one of the Kubernetes node VM's vnics towards the hypervisor vSwitch. The VLAN id would allow us to identify individual Pods on the hypervisor vSwitch using logical sub-interfaces of the VM's vnic. All management and enforcement actions (counters, spoofguard, firewalling, …) on the per-Pod logical port would be done on the hypervisor vSwitch. The VLAN id imposed by OVS in the node VM would be stripped by the hypervisor vSwitch before the traffic is encapsulated with the overlay header.
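For illustration only (the port name and VLAN id are hypothetical), this is roughly what such a locally significant tag looks like when set with stock Open vSwitch tooling:

# tag all traffic from a Pod's OVS port with VLAN 102 inside the Node VM
ovs-vsctl set Port pod-web-0-veth tag=102

The hypervisor vSwitch can then treat VLAN 102 as identifying that Pod's logical sub-interface, strip the tag, and apply the overlay encapsulation.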

Figure 4: Pod Multiplexing using VLANs

There are many more details to discuss than we can fit into a single article. Stay tuned for more articles on how we integrate Kubernetes with NSX-T!

Meanwhile, if you are attending CloudNativeCon/KubeCon this week in Berlin (29th and 30th of March), please visit us at our booth #G1. We would be delighted to chat with you in detail about NSX-T with Kubernetes.

We will also be at DockerCon in Austin, TX, April 17th to 20th, as a gold sponsor. We would love to meet you at our booth #G3, where we will showcase container-ready SDDC solutions that include NSX.

To learn more about NSX Container Networking and Security, watch these VMworld 2016 sessions:

  • VMworld 2016: NET9989 – VMware NSX, Network Bridge to Multi-Cloud Future from Guido Appenzeller
  • VMworld 2016: NET 7935 – Container Orchestration and Network Virtualization – free VMworld login required

To learn more about VMware's open source projects and products in the cloud-native space, follow these links:

  • Cloud-Native Apps
  • Photon Platform
  • vSphere Integrated Containers

The post Kubernetes and VMware NSX appeared first on The Network Virtualization Blog.

Read more..

Deploying WordPress in vSphere Integrated Containers

WordPress is a popular, open-source tool for the agile deployment of a blogging system. In this article, we offer a step-by-step guide for deploying WordPress in vSphere Integrated Containers. This involves creating two containers: one running a MySQL database and the other running the WordPress web server. We provide three options:

  1. Deploy using docker commands in vSphere Integrated Containers
  2. Deploy using docker-compose in vSphere Integrated Containers
  3. Deploy using Admiral and vSphere Integrated Containers

Deploy using docker commands in vSphere Integrated Containers

First, we need to install the virtual container host (VCH) with a volume store, which is used to persist the database data. In the following example, I create a VCH with a volume store named test, tagged as default, under datastore1:

vic-machine-linux create --name=vch-test --volume-store=datastore1/test:default --target=root: --no-tlsverify --thumbprint=… --no-tls

Second, we deploy a container which runs the mysql database:

docker -H VCH_IP:VCH_PORT run -d -e MYSQL_ROOT_PASSWORD=wordpress -e MYSQL_DATABASE=wordpress -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=wordpress -v mysql_data:/var/lib/mysql --name=mysql mysql:5.6

Replace VCH_IP and VCH_PORT with the actual IP and port used by the VCH, which can be found in the command-line output of the vic-machine-linux create command above. Here -v mysql_data:/var/lib/mysql mounts the volume mysql_data to the directory /var/lib/mysql within the mysql container. Since there is no volume named mysql_data on the VCH yet, the VIC engine creates a volume with that name in the default volume store test.

Third, we deploy the wordpress server container:

docker -H VCH_IP:VCH_PORT run -d -p 8080:80 -e WORDPRESS_DB_HOST=mysql:3306 -e WORDPRESS_DB_PASSWORD=wordpress --name wordpress wordpress:latest

Now if you run docker -H VCH_IP:VCH_PORT ps, you should see both containers running. Open a browser and access http://VCH_IP:8080. You should see the famous WordPress start page.

In addition, if you connect to the ESXi host or vCenter that hosts the VCH and the volume store, you should be able to find the data volume mysql_data under datastore1/test.

Deploy using docker-compose in vSphere Integrated Containers

Using docker-compose with vSphere Integrated Containers is as easy as with vanilla Docker. First, create a docker-compose.yml file as follows:

version: '2'
services:
  db:
    image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    links:
      - db
    image: wordpress:latest
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress

Then simply run

docker-compose -H VCH_IP:VCH_PORT up -d

Open a browser and access http://VCH_IP:8080. You should see the WordPress start page. Note that as of VIC engine 0.8, the volumes option is not yet supported by docker-compose, which is why we store the db data inside the db container instead of on persistent storage. A future release will include this feature.

Deploy using Admiral and vSphere Integrated Containers

Admiral is the management portal through which you can easily deploy containers using the Admiral UI or a template (similar to the docker-compose.yml file used by docker-compose). In this example, we will focus on deploying WordPress via the Admiral UI.

First, we need to deploy a container which runs the Admiral service:

docker -H VCH_IP:VCH_PORT run -d -p 8282:8282 --name admiral vmware/admiral

Go to the web page http://VCH_IP:8282 and add the VCH host to Admiral based on these instructions.

Second, create the mysql container by choosing Resources -> Containers -> Create Container, and enter the parameters of the docker command you used previously when deploying WordPress on VIC. Don't forget to set the environment variables. Click Provision to launch the container.

Now you should be able to see both the admiral container and the mysql container in the Admiral UI. Note down the actual container name of the mysql container (Admiral adds a suffix to the name you specify to form the actual container name).

Third, deploy the wordpress container following the same flow as in the second step. Note that the environment variable WORDPRESS_DB_HOST should be set to mysql_container_name:3306.

Finally, open a browser and access http://VCH_IP:8080. You should be able to see the WordPress start page again.

Alternatively, you can use an Admiral template, which works in a way similar to docker-compose, to deploy your WordPress application. Simply go to Templates and choose the Import template or Docker Compose icon. Then copy and paste the content of our docker-compose.yml file into the text box. Click the Import button at the bottom right and then click Provision on the next page. The WordPress application is ready for access once the status of the Provision Request becomes Finished.

Download vSphere Integrated Containers today!

The post Deploying WordPress in vSphere Integrated Containers appeared first on VMware vSphere Blog.

Read more..

Connecting Containers Directly to External Networks

This article comes from Hasan Mahmood, a staff engineer on the vSphere Integrated Containers team.

With vSphere Integrated Containers (VIC), containers can be connected to any existing vSphere network, allowing services running in those containers to be exposed directly rather than through the container host, as is the case with Docker today. vSphere networks can be specified during a VIC Engine deployment, and they show up as regular docker networks.

Connecting containers directly to networks this way allows for a clean separation between internal networks used for deployment and external networks used only for publishing services. Exposing a service in Docker requires port forwarding through the Docker host, forcing the use of network address translation (NAT) and making it somewhat complicated to separate networks. With VIC, you can use your existing networks (and the separation that is already there) seamlessly through a familiar docker interface.

Setup

To add an existing vSphere network to a VIC Engine install, use the collection of --container-network options for the vic-machine tool. Here is an example run:

$ vic-machine-linux create --target al:password@vc --no-tlsverify --thumbprint C7:FB:D5:34:AA:B3:CD:B3:CD:1F:A4:F3:E8:1E:0F:88:90:FF:6F:18 --bridge-network InternalNetwork --public-network PublicNetwork --container-network PublicNetwork:public

The above command installs VIC and adds an additional network, called public, for containers to use. The notation PublicNetwork:public maps an existing distributed port group called PublicNetwork to the name public. After installation, we can see that the public network is visible to docker:

$ docker -H 10.17.109.111:2376 --tls network ls
NETWORK ID          NAME                DRIVER              SCOPE
62d443e3f5c1        bridge              bridge
b74bb80d92ad        public              external

To connect to this network, use the --net option with the docker create or run command:

$ docker -H 10.17.109.111:2376 --tls run -itd --net public nginx
Unable to find image 'nginx:latest' locally
Pulling from library/nginx
386a066cd84a: Pull complete
a3ed95caeb02: Pull complete
386dc9762af9: Pull complete
d685e39ac8a4: Pull complete
Digest: sha256:e56314fa645f9e8004864d3719e55a6f47bdee2c07b9c8b7a7a1125439d23249
Status: Downloaded newer image for library/nginx:latest
4c17cb610a3ce9651288699ed18a9131022eb95b0eb54f4cd80b9f23fa994a6c

Now that a container is connected to the public network, we need to find its IP address in order to access any exported services, in this case the welcome page for the nginx web server. This can be done with the docker network inspect command or the docker inspect command. We will use docker network inspect here since its output is more concise:

$ docker -H 10.17.109.111:2376 --tls network inspect public
[
    {
        "Name": "public",
        "Id": "b74bb80d92adf931209e691d695a3c133fad49496428603fff12d63416c5ed4e",
        "Scope": "",
        "Driver": "external",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.17.109.0/24",
                    "Gateway": "10.17.109.253"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "4c17cb610a3ce9651288699ed18a9131022eb95b0eb54f4cd80b9f23fa994a6c": {
                "Name": "serene_carson",
                "EndpointID": "4c17cb610a3ce9651288699ed18a9131022eb95b0eb54f4cd80b9f23fa994a6c",
                "MacAddress": "",
                "IPv4Address": "10.17.109.125/24",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

We now know that our running container's IP address is 10.17.109.125. Next, we can try reaching nginx via the browser.

This is only a very simple example of how to make vSphere networks available to VIC containers. You can learn more about the different networks that the VIC container host connects to. Download vSphere Integrated Containers today!

The post Connecting Containers Directly to External Networks appeared first on VMware vSphere Blog.

Read more..

Volume Support with vSphere Integrated Containers

This post comes from Matthew Avery, a member of the vSphere Integrated Containers engineering team.

I am excited to discuss volume support with vSphere Integrated Containers and how it differs from standard Docker deployments. The important difference to note between VIC and Docker is that VIC leverages the robust vSphere environment to interact with pools of managed resources. This means that VIC volume support is designed to let you create volumes on vSphere datastores, and this can be done from the Docker command-line interface (CLI) without knowledge of the vSphere topologies. There are several major advantages to having access to datastores from the Docker CLI. One is that containers can now have volumes with several different backing types, such as NFS, iSCSI, or SSD. Also, the Docker CLI can now harness a large storage pool by default, independent of the resource constraints of the Docker daemon's host. Additionally, users get access to technologies like vSAN when using volumes with their containers, which boosts the management capabilities of container volumes.

Deploying a Virtual Container Host with Volume Stores

Volume support for VIC starts at deployment time, when invoking vic-machine create. Since VIC is integrated with vSphere, we need a way to distinguish between different vSphere datastores. To allow this, we have created the concept of a volume store. Volume stores are defined at Virtual Container Host (VCH) deployment time; new volume stores cannot be added after deployment at this time, but this is planned for the future. Volume stores are added to a VCH deployment using the --volume-store option of vic-machine create, as shown below.
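Here is a rough sketch of such a deployment command; the datastore names, bridge network, credentials, and thumbprint are placeholders for your own environment:

vic-machine-linux create --name=vch-test --target=root:password@vc --thumbprint=… --no-tlsverify \
  --bridge-network InternalNetwork \
  --volume-store=nfs0-1/test:default \
  --volume-store=vsanDatastore/volumes:vsan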

As seen above, multiple volume stores can be expressed. The format of the volume store target is datastore/path:name. The volume store name can be anything without spaces. The datastore is validated against the target vCenter Server, and if an invalid datastore is targeted, vic-machine will suggest possible alternatives to the user. Another thing to note is that the "default" tag for volume stores is a special tag which allows VIC to support anonymous volumes – more on this below. Please note that without a volume store tagged as default, anonymous volumes will not be supported for that VCH deployment. There is a planned update to include a --default-volume-store option to make this distinction more apparent. Another aspect of the volume store option is that you can also specify a file path: for example, with nfs0-1/test as the target volume store, volumes will be made in a directory called test inside the datastore nfs0-1. This can help vSphere admins organize their volume stores.

It is also possible to list the available volume stores of a VCH deployment through the docker info command from the Docker CLI. In the output there is a field called VolumeStores, which is a space-separated list of the volume stores available for that VCH deployment.
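A quick way to check this from the Docker CLI; VCH_IP:VCH_PORT is a placeholder, the exact output format may differ, and the volume store names shown are the ones from the sketch above:

docker -H VCH_IP:VCH_PORT info | grep VolumeStores
VolumeStores: default vsan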

Creating Specified Volumes

Now that we have successfully deployed a VCH with two volume stores, we can explore volume creation from the Docker CLI. VIC's default volume behavior is a little bit different from Docker's, a difference driven by the volume stores described above. Volume creation with VIC comes with two driver options. The first is --opt VolumeStore=<volume store name>, which allows the user to target the volume store on which the volume should be created; the default for this option is the "default" volume store, and if a default volume store was not tagged at create time the call will fail with an error. The second is --opt Capacity=<size>, which defines the size of the volume that the user wishes to create. The default unit is megabytes; some examples of this argument are 2GB, 20TB, and 4096. If no capacity is specified, the default for now is 1GB; there are plans to make this configurable in the future.
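For instance, the four volume create operations discussed below might look roughly like this; VCH_IP:VCH_PORT and the volume names are placeholders:

docker -H VCH_IP:VCH_PORT volume create --name defaultVolume
docker -H VCH_IP:VCH_PORT volume create --opt Capacity=2GB --name logVolume
docker -H VCH_IP:VCH_PORT volume create --opt VolumeStore=vsan --opt Capacity=10GB --name dbVolume
docker -H VCH_IP:VCH_PORT volume create --opt VolumeStore=vsan --opt Capacity=20GB --name backupVolume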

Let's take a look at the different volume create operations in detail. The first volume create operation above is possible since we created a default volume store. Notice we did not specify the volume store or the capacity, so this volume is 1GB and resides on the default store. The second volume has a 2GB capacity and is on the default store. The third and fourth volumes are created on the VSAN volume store with differing capacities. These examples show how you can create a complex set of volumes for your containers to consume.

Mounting Volumes To Containers

So now we have made some volumes and we want to mount them onto a container. VIC uses the normal syntax for mounting volumes onto containers, that is, -v <volume name>:<mount path>. For instance, in the example below I mount the volume named defaultVolume to a busybox container, attach to it, see that the /myData directory is there, and add an empty file. Mount paths are unique, so if you attempt to mount several volumes to the same path in the container, only one volume will be mounted.
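A minimal sketch of that flow, reusing the defaultVolume created earlier; VCH_IP:VCH_PORT, the container name, and the file name are placeholders:

docker -H VCH_IP:VCH_PORT run -it -v defaultVolume:/myData --name voltest busybox
# inside the container:
ls /                      # /myData is present
touch /myData/emptyFile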

VIC and Anonymous Volumes

Anonymous volumes are a special case. As mentioned earlier, anonymous volumes are not supported unless a default volume store is specified. The main difference between an explicitly made (specified) volume and an anonymous volume is that anonymous volumes are made on the fly with default parameters. The pathway for creating anonymous volumes also does not follow the usual docker volume create flow. Instead, they are made either from image metadata or from a -v option targeting a volume that does not exist at the time of the call. In the latter case, the anonymous volume gets its name from the mount request; other than that, it is automatically made on the default store with the default capacity of 1GB. All anonymous volumes are created in the default store with the default capacity. VIC also makes anonymous volumes when a targeted image has volume mounts in its image metadata. The mongo image, for example, requests two volumes in its metadata; the volume ls output shows that those volumes did not have an explicit name, so VIC assigned them UUIDs as their names. If you want to specify these volumes rather than having them anonymous, targeting the same mount path with a specified volume will take precedence over an anonymous volume. Both pathways are sketched below.
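In this sketch, VCH_IP:VCH_PORT and the volume name logData are placeholders; the mongo image declares /data/db and /data/configdb as volumes in its metadata:

# anonymous volume named from the -v mount request (logData does not exist yet)
docker -H VCH_IP:VCH_PORT run -d -v logData:/var/log busybox top

# anonymous, UUID-named volumes created from the mongo image metadata
docker -H VCH_IP:VCH_PORT run -d mongo
docker -H VCH_IP:VCH_PORT volume ls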

For more information on how VIC works, see the documentation on GitHub. And download vSphere Integrated Containers today!

The post Volume Support with vSphere Integrated Containers appeared first on VMware vSphere Blog.

Read more..
