Run your Cloud Native Applications in production with vRealize Automation XaaS scalable components

Cloud native applications are gaining momentum, yet there is still some reluctance to run them in production on premises, particularly for applications that could become business critical.
VMware's mission is to help. Our container portfolio includes Photon OS to run containers, enterprise-class features with vSphere Integrated Containers (VIC), and a secure, scalable Platform as a Service with Cloud Foundry.

When there is a requirement to manage the life cycle of the container infrastructure, as is often the case for other IaaS applications, vRealize Automation has features that can help extend Photon OS availability in production.

If you are in charge of the infrastructure and have to make critical cloud native applications available in production on vSphere, you may start by deploying a Photon blueprint. Still, your application may be very vulnerable. Even though Photon OS includes Swarm, Docker's native clustering, the cluster still needs to be managed to provide high availability and scalability. Also, access to the Docker APIs is not secured.

In this blog post I will leverage vRealize Automation features to create blueprints for highly available and scalable Cloud Native Applications.

 

How I designed my HA, scalable Photon blueprint.

To make the application deployment highly available, I set the cluster option on the blueprint VM. This also provides VM-based scale-out and scale-in day 2 operations.
To make the application cluster aware, I created a “Create Docker Node” vRealize Orchestrator workflow that calls the Swarm API to enable swarm mode and join new Photon hosts. I also have a “Scale in node” workflow for removing a node from the swarm, or all of the nodes when decommissioning the deployment.
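For readers curious what those workflow steps look like against the Docker API, here is a rough Python sketch of the equivalent REST calls. The real workflows are vRO-based; the host names and the unsecured API port 2375 below are lab assumptions only.

#!/usr/bin/env python3
# Rough sketch only: the Docker Engine REST calls made when enabling swarm mode
# and joining nodes. Host names and the plain-HTTP API port 2375 are lab
# assumptions; the real workflow runs as vRO JavaScript and should use TLS.
import json
from urllib.request import Request, urlopen

API = "http://%s:2375%s"

def post(host, path, body):
    req = Request(API % (host, path), data=json.dumps(body).encode(),
                  headers={"Content-Type": "application/json"})
    return urlopen(req).read()

manager = "photon-01.corp.local"        # hypothetical first Photon host
workers = ["photon-02.corp.local"]      # hypothetical hosts to add

# Enable swarm mode on the first node
post(manager, "/swarm/init",
     {"ListenAddr": "0.0.0.0:2377", "AdvertiseAddr": manager + ":2377"})

# Read the worker join token from the manager
token = json.loads(urlopen(API % (manager, "/swarm")).read())["JoinTokens"]["Worker"]

# Join each additional Photon host to the swarm
for node in workers:
    post(node, "/swarm/join",
         {"ListenAddr": "0.0.0.0:2377",
          "RemoteAddrs": [manager + ":2377"],
          "JoinToken": token})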

While this automation provides great flexibility, I also wanted to manage individual swarm nodes from the vRealize Automation user interface:

  • Request a swarm with 2 to 8 nodes
  • See the swarm nodes in the vRA self-service portal as distinct components running in a deployment, and be able to check their properties
  • Scale out / Scale in Photon swarm deployment
  • Contextually run node-specific day 2 operations in vRA without having to switch to a CLI or third-party interface
  • Manage who has access to these day 2 operations using vRA entitlements and optionally approve some operations
  • Monitor the full life cycle of the app including day 2 operations

 

vRealize Automation 7.2 XaaS scalable components as a solution for managing Photon Swarms

An XaaS scalable component is a blueprint XaaS component that can be scaled independently from the blueprint VMs. I created my own “Docker node” resource type using vRO Dynamic Types and reused the Create and Scale in workflows with this new resource type.

A benefit of this design is that I can scale out in near real time: the VM used for scale-out can be pre-provisioned, then simply started and joined to the swarm when a node scale-out is requested.

Then, to secure application access to the API, I had two options:

  • Via a certificate
  • Using an NSX virtual firewall rule, which is one of the vRA blueprint components

This way each swarm can only be managed by the vRO server, which in turn is used by vRA end users who can only operate the applications they own.

As you can see from this post, we combined vRealize Automation and Photon scalability and security features to provide a very flexible, as-a-service, on-premises, enterprise-ready cloud native application solution.

The post Run your Cloud Native Applications in production with vRealize Automation XaaS scalable components appeared first on Virtualize Business Critical Applications.

HOL Three-Tier Application, Part 5 – Use Cases

If you have been following along in this series, first of all, thank you! Next, you should have a basic three-tier application created:

A Simple Three-Tier Application

I tried to use simple components to make it usable in either a home lab or a nested environment, so they should perform exceedingly well in a real environment.

Virtual Machine Profile

The component Photon OS machines boot in a few seconds, even in our nested environment, and their profiles are fairly conservative:

  • 1 vCPU
  • 2 GB RAM
  • 15.625 GB disk

Once configured as indicated in this series, these VMs will export as OVAs that are around 300 MB each, making them reasonably portable.

The storage consumed after thin-provisioned deployment is less than 650 MB for each virtual machine. At runtime, each consumes an additional 2 GB for the swapfile. During boot, in my environment, each VM’s CPU usage is a little over 600 MHz and the active RAM reports 125 MB, but those normalize quickly to nearly 0 MHz and 20 MB active RAM (+23 MB virtualization overhead). You may be able to reduce their RAM allocations, but I have not tried this.

So, what can I do with this thing?

It is nice to have tools, but without a reason to use them, they’re not that much fun. We use tools like this in our labs to demonstrate various functionality of our products and help our users understand how they work. Here are a few ideas, just to get you thinking:

vMotion, Storage vMotion, SRM Protection and Recovery

The virtual machines that you created can be used as a set, but the base Photon OS template also makes a great single VM for demonstrating vMotion or Site Recovery Manager (SRM) recovery in a lab environment. They are small, but they have some “big VM” characteristics:

  • The VMware Tools provide appropriate information up to vCenter
  • They respond properly to Guest OS restart and power off actions
  • Photon OS handles Guest Customization properly, so you can have the IP address changed during template deployment and SRM recovery.
  • You can ping and SSH into them
  • You can use them to generate load on your hosts and demonstrate Distributed Resource Scheduler (DRS) functionality

Firewalling/Micro-segmentation

We use a previous version of this application in several of our NSX labs that debuted at VMworld 2016. For a good micro-segmentation use case, you can look at HOL-1703-USE-2 - VMware NSX: Distributed Firewall with Micro-Segmentation. The manual is available for download here, or you can take the lab here.

For a more complicated use case using a similar application to demonstrate SRM and NSX integration, look at HOL-1725-USE-2 - VMware NSX Multi-Site DR with SRM. For that lab, the manual is available here and the lab is here.

Each of the tiers must communicate with the others using specific ports:

  • Client to Web = 443/tcp
  • Web to App = 8443/tcp
  • App to DB = 80/tcp

You can use this application to test firewall rules or other network restrictions that you are planning to implement. If a restriction breaks the application, you can determine where and why, then try again. If you want to change the port numbers to match your needs, you can do that as well. Keeping the application simple means that modifications should also be simple.
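If you want a quick, scripted way to verify those ports while you test rules, something like the following Python sketch works. The host names are assumptions based on the lab defaults used later in this post; adjust them for your environment.

#!/usr/bin/env python3
# Quick connectivity check for the tier-to-tier ports listed above.
# Host names assume the lab defaults (web-01a, app-01a, db-01a); adjust as needed.
import socket

checks = [
    ("web-01a.corp.local", 443),   # Client -> Web
    ("app-01a.corp.local", 8443),  # Web -> App
    ("db-01a.corp.local", 80),     # App -> DB
]

for host, port in checks:
    try:
        socket.create_connection((host, port), timeout=3).close()
        print("OPEN    %s:%d" % (host, port))
    except OSError as err:
        print("BLOCKED %s:%d (%s)" % (host, port, err))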

Load Balancing (Distribution)

The basic idea here is that you can create clones of the web-01a machine as many times as you like and pool them behind a load balancer. In your lab, if you have it, you may want to use NSX as a load balancer. If you want to do that, I suggest checking out Module 3 - Edge Services Gateway in the HOL-1703-SDC-1 - VMware NSX: Introduction and Feature Tour lab, which covers how to set that up. The manual is here and the lab is here.

If you want to use another vendor’s solution, feel free to do that as well. This application is REALLY simple. Some free load balancing solutions can be implemented using nginx or haproxy. Fortunately, we already know about nginx from the build of our web servers, so I will cover that later in this post. First, though, I want to cover a DNS round robin configuration since understanding that makes the nginx load balancing simpler for the lab.

Example 1 - Load Distribution via DNS Round Robin

If you don’t have the resources for another VM, you can implement simple load distribution via DNS round robin as long as you understand a few limitations:

  1. You must have access to change DNS for your lab environment.
  2. Using only DNS, you get load distribution but not really balancing; there is no awareness of the load on any particular node. Rather, you simply get the next one in the list.
  3. There is no awareness of the availability of any node in the pool. DNS simply provides the next address, whether it is responding or not.
  4. Connecting from a single client may not show balancing since optimizations in modern web browsers may keep existing sockets open.

In this first example, I have 3 web servers (web-01a, web-02a, web-03a) with IP addresses 192.168.120.30, 31, and 32. My SSL certificate contains the name webapp.corp.local and it is loaded onto each of the web servers. The picture looks something like this:

Create the VMs

To create web-02a and web-03a, I simply clone my web-01a VM, then reset the hostnames and IP addresses of each clone to the new values:

  • web-02a - 192.168.120.31
  • web-03a - 192.168.120.32

Alternatively, I can make a template from the web-01a VM and deploy the copies using Guest Customization to reconfigure them. Just make sure to populate the /etc/hosts file on the customized machines since the process wipes out and rebuilds that file.

Configure DNS

The required DNS changes are not complicated. You basically assign the name webapp.corp.local to the IP addresses of your web servers and set the time-to-live (TTL) to a low, non-zero value.

Using PowerShell against my lab DNS server called controlcenter.corp.local that manages the corp.local zone, I add DNS records with a 1 second TTL, associating all of the web server IP addresses with the name webapp.corp.local:

$ttl = New-TimeSpan -Seconds 1
Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -name 'webapp' -IPv4Address '192.168.120.30' -TimeToLive $ttl
Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -name 'webapp' -IPv4Address '192.168.120.31' -TimeToLive $ttl
Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -name 'webapp' -IPv4Address '192.168.120.32' -TimeToLive $ttl

If you use a BIND DNS server, just create multiple A records pointing to the same name. BIND 4.9 or higher will automatically rotate through the records. In my case, I have a Windows 2012 DNS server, and it cycles through the addresses when the webapp.corp.local name is requested.

Testing the Rotation

Here is a simple example of what this looks like from an ESXi host in my lab. A simple ping test shows the rotation occurring as intended:

[root@esx-03a:~] ping -c 1 webapp.corp.local
PING webapp.corp.local (192.168.120.30): 56 data bytes
64 bytes from 192.168.120.30: icmp_seq=0 ttl=64 time=1.105 ms
--- webapp.corp.local ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 1.105/1.105/1.105 ms
[root@esx-03a:~] ping -c 1 webapp.corp.local
PING webapp.corp.local (192.168.120.32): 56 data bytes
64 bytes from 192.168.120.32: icmp_seq=0 ttl=64 time=1.142 ms
--- webapp.corp.local ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 1.142/1.142/1.142 ms
[root@esx-03a:~] ping -c 1 webapp.corp.local
PING webapp.corp.local (192.168.120.31): 56 data bytes
64 bytes from 192.168.120.31: icmp_seq=0 ttl=64 time=1.083 ms
--- webapp.corp.local ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 1.083/1.083/1.083 ms
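If you prefer a scripted check over repeated pings, a few lines of Python will show every address the resolver returns for the name in one shot. This is optional and assumes the webapp.corp.local records created above.

#!/usr/bin/env python3
# Optional: list all addresses returned for the round-robin name in one call.
import socket

records = {info[4][0] for info in socket.getaddrinfo("webapp.corp.local", 443, proto=socket.IPPROTO_TCP)}
print(sorted(records))   # expect 192.168.120.30, .31 and .32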

Accessing the Application

Use the https://webapp.corp.local/cgi-bin/app.py URL from your web browser to access the application. Within the three-tier application, the script on the app server displays which web server made the call to the application.

The script will show the IP address of the calling web server unless it knows the name you would like it to display instead. You provide a mapping of the IPs to the names you would like displayed at the top of the app.py script on the app server:

webservers = { '192.168.120.30':'web-01a', '192.168.120.31':'web-02a', '192.168.120.32':'web-03a'}

Simply follow the syntax and replace or add the values which are appropriate for your environment.

A Challenge Showing Load Distribution from a Single Host

Hmm… while the ping test shows that DNS is doing what we want, clicking the Refresh button in your web browser may not be switching to a different web server as you expect.

A refresh does not necessarily trigger a new connection and DNS lookup, even if the TTL has expired. Modern web browsers implement optimizations that will keep an existing connection open because odds are good that you will want to request more data from the same site. If a connection is already open, the browser will continue to use that, even if the DNS TTL has expired. This means that you will not connect to a different web server.

You can wait for the idle sockets to time out or force the sockets closed and clear the web browser’s internal DNS cache before refreshing the web page, but that is not really convenient to do every time you want to demonstrate the distribution functionality. If you want to be able to click Refresh and immediately see that you have connected to a different web server in the pool, you can use NSX or a third-party load balancer. If you want to use the tools that we have currently available, the next example works around this issue.

Example 2 - Implementing a (Really) Basic Load Balancer

Making a small change to the nginx configuration on one of the web server machines and adjusting DNS can provide a simple demonstration load balancer for your lab. This requires a slight deviation from our current architecture to inject the load balancer VM in front of the web server pool:

Three-Tier Application with Load Balancer

Note that there are better, more feature-rich ways to do this, but we are going for quick and simple in the lab.

Create the Load Balancer

Create the load balancer VM. You can deploy a new one from a Photon OS base template and go through the configuration from there, but conveniently, the difference between the load balancer configuration and that of our web servers is just one line!

So, make a copy of the web-01a VM and update its address and hostname:

  • lb-01a - 192.168.120.29

Change the nginx Configuration

On the lb-01a VM, edit the /etc/nginx/nginx.conf file

# vi +130 /etc/nginx/nginx.conf

Change line 130 from

proxy_pass https://app-01a.corp.local:8443/;

to

proxy_pass https://webpool.corp.local/;

This will allow us to leverage DNS round-robin to rotate through the list of web servers and distribute the load. Nginx has advanced configurations to handle load balancing, but this will get the job done for a lab or demonstration. Terminating SSL on the load balancer while using plain HTTP on the web servers allows a lot more flexibility, but the configuration changes are beyond the scope of what I want to do here.

Restart nginx

# systemctl restart nginx

Adjust DNS

Finally, adjust DNS to move the webapp.corp.local name to point at the load balancer and put the web servers into webpool.corp.local instead.

If you are using Windows DNS, you can use PowerShell. For BIND, edit and create the records as needed.

  1. Remove the existing webapp.corp.local pool by deleting all of the A records that point to the individual web servers:
$rec = Get-DnsServerResourceRecord -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -Name 'webapp' -RRType A
if( $rec ) {
   $rec | % { Remove-DnsServerResourceRecord -InputObject $_ -ZoneName 'corp.local' -Force }
}

2. Create a new webapp.corp.local A record that points to the lb-01a machine:

Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -name 'webapp' -IPv4Address '192.168.120.29'

3. Create the new webpool.corp.local that contains the individual web servers:

$ttl = New-TimeSpan -Seconds 1
Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -name 'webpool' -IPv4Address '192.168.120.30' -TimeToLive $ttl
Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -name 'webpool' -IPv4Address '192.168.120.31' -TimeToLive $ttl
Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -name 'webpool' -IPv4Address '192.168.120.32' -TimeToLive $ttl

Access the Application

Now, point your web browser to the https://webapp.corp.local/cgi-bin/app.py URL. Each time you click Refresh in your web browser or enter a new search string in the Name Filter box and click the Apply button, the data refreshes and the Accessed via: line should update with a different web server from the pool:

Rotating through web servers in the pool

Because the web browser’s connection is to the load balancer VM, which controls which web server receives the request, we eliminate the issue experienced when using only DNS round robin. This very basic implementation does not handle failed servers in the pool and is not something that would be used in production, but, hey, this is a lab!
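If you would like to demonstrate the rotation without clicking Refresh by hand, a short script can drive the requests for you. This is only a sketch: it disables certificate verification because of the lab's self-signed certificate, and it assumes the page contains the “Accessed via” text mentioned above.

#!/usr/bin/env python3
# Request the app several times through the load balancer and report which
# web server answered each time. Certificate checks are disabled because the
# lab uses a self-signed certificate for webapp.corp.local.
import re
import ssl
from urllib.request import urlopen

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

for _ in range(6):
    page = urlopen("https://webapp.corp.local/cgi-bin/app.py", context=ctx).read().decode()
    hit = re.search(r"Accessed via:?\s*([\w.-]+)", page)   # assumes text like "Accessed via: web-01a"
    print(hit.group(1) if hit else "could not find the 'Accessed via' line")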

It is possible to extend this idea to put a load balancer in front of a pool of application servers as well: replace line 130 in each web server’s /etc/nginx/nginx.conf file with the URL of an app server pool instead of pointing them directly at the app-01a VM.

That’s a Wrap!

That concludes the series on building a minimal three-tier application. I am hopeful that you have found this interesting and can use these tools in your own environment.

Thank you for reading!

The post HOL Three-Tier Application, Part 5 – Use Cases appeared first on VMware Hands-On Lab (HOL) Blog.

HOL Three-Tier Application, Part 2 – DB server

This is the second post in the series about building a three-tier application for demonstration, lab, and education purposes. At this point, you have downloaded Photon OS and prepared the base template according to the specifications. If not, please go to the first post in the series to prepare your environment.

I am trying to release these posts on Wednesdays, but yesterday was a holiday in the US, so I had some time to get this one out the door early.

For the build, I will work bottom-up so it is possible to test the components as we go along and address any issues before we get to the end and have to worry about the whole stack.

The Database (db-01a)

There are many choices that could have been made here, but I wanted to minimize the steps required and use the tools I had available by default, wherever possible. So, the database server for this application uses a SQLite database. This may seem like an odd choice since it has no native remote access. But sqlite3 is already present on the Photon image. So is Python. Don’t expect elegance here since I do not normally use Python. I hacked together a simple CGI script to query the database and send the results over HTTP. It is not fancy, but it is small and does just enough for my needs.

Fear not! SQLite is simple enough that you don’t need any kind of DBA skills to use it. The minimal SQL that I know was enough to get this done.

The red box in the following diagram highlights the component that we are building in this post.

Let’s get started!

Deploy a copy of the base template

Deploy a copy of the base Photon template you created by following the steps in my first post. Name it something that makes sense to you. I called mine db-01a.

Power it up and log in as the root user.

Set the IP address

You need to configure the IP address for this machine. Here are the contents of my /etc/systemd/network/10-static-eth0.network file.

[Match]
Name=eth0

[Network]
Address=192.168.120.10/24
Gateway=192.168.120.1
DNS=192.168.110.10
Domains=corp.local

Set the name

Update the name in the /etc/hosts and /etc/hostname files with the proper name for this VM. Replace every occurrence of the name you used in the template with the name of this VM. Note that the hostname within the prompt string will change when you log out.

Restart the network to apply the settings

# systemctl restart systemd-networkd

Once you have finished with these steps, make sure you can access the network before moving on. You need Internet access in order to proceed.

SSH in

At this point, I strongly suggest you SSH to the machine so that you can paste text. Doing all of the work from the console is possible, but pasting is easier. So, fire up PuTTY or whatever else you use as your preferred SSH client.

Install the web server

I am using Apache here since I can get the base install to do what I need in pretty short order. Unfortunately, it is not already present, so we need to install it. Fortunately, it only takes a few minutes:

# tdnf install httpd

Make a directory to store the database file and make apache the owner

You don’t absolutely need to do this, but I sometimes like to separate my data from the executables.

# mkdir /etc/httpd/db
# chown apache:apache /etc/httpd/db

Start the web server and set it to startup when the VM boots

# systemctl start httpd
# systemctl enable httpd

Create the database’s front-end CGI script

There is not much to this one. It performs a simple query of the database and dumps the result. You can type in the code, but if you have connected via SSH, you can paste it in. I recommend the latter. Keep in mind that Python uses whitespace to give the program structure, so indenting is important. More precisely, the exact amount of indentation does not matter, but the relative indentation of nested blocks to one another matters a lot. If that sounds confusing, just make sure the spacing looks like mine.

This script takes an optional parameter named querystring that allows the data to be filtered on the name property of the records. It is a step above the “dump everything” approach we used in previous versions and provides the possibility for some user interaction.

Open a new file, /etc/httpd/cgi-bin/data.py

#!/usr/bin/env python
# Simple CGI front end for the SQLite database: returns pipe-delimited rows over HTTP
import cgi
import sqlite3

conn = sqlite3.connect('/etc/httpd/db/clients.db')
curs = conn.cursor()

print "Content-type:text/plain\n\n";

# The optional "querystring" parameter filters records by name
form = cgi.FieldStorage()
querystring = form.getvalue("querystring")

if querystring != None:
   queryval = "%" + querystring + "%"
   select = "SELECT * FROM clients WHERE name LIKE '" + queryval + "'"
else:
   select = "SELECT * FROM clients"

# Print each field of a 4-column row followed by '|', then '#' to end the record
for row in curs.execute(select):
   if len(row) == 4:
      for item in row:
        print item,'|'
      print "#"

conn.close()

Save and close the file, then mark it executable

# chmod 755 /etc/httpd/cgi-bin/data.py

Create the database file and load it with data

SQLite will create the file if it is not already present. Bonus!

# sqlite3 /etc/httpd/db/clients.db

At the sqlite> prompt, create the table:

CREATE TABLE 'clients' ( "Rank" integer, "Name" varchar(30), "Universe" varchar(25), "Revenue" varchar(20) );

Then, load in some data. Feel free to use whatever you like:

INSERT INTO 'clients' VALUES (1,'CHOAM','Dune','$1.7 trillion'), (2,'Acme Corp.','Looney Tunes','$348.7 billion'), (3,'Sirius Cybernetics Corp.',"Hitchhiker's Guide",'$327.2 billion'), (4,'Buy n Large','Wall-E','$291.8 billion'), (5,'Aperture Science, Inc.','Valve','$163.4 billion'), (6,'SPECTRE','007','$157.1 billion'), (7,'Very Big Corp. of America','Monty Python','$146.6 billion'), (8,'Frobozz Magic Co.','Zork','$112.9 billion'), (9,'Warbucks Industries',"Lil' Orphan Annie",'$61.5 billion'), (10,'Tyrell Corp.','Bladerunner','$59.4 billion'), (11,'Wayne Enterprises','Batman','$31.3 billion'), (12,'Virtucon','Austin Powers','$24.9 billion'), (13,'Globex','The Simpsons','$23.7 billion'), (14,'Umbrella Corp.','Resident Evil','$22.6 billion'), (15,'Wonka Industries','Charlie and the Chocolate Factory','$21.0 billion'), (16,'Stark Industries','Iron Man','$20.3 billion'), (17,'Clampett Oil','Beverly Hillbillies','$18.1 billion'), (18,'Oceanic Airlines','Lost','$7.8 billion'), (19,'Brawndo','Idiocracy','$5.8 billion'), (20,'Cyberdyne Systems Corp.','Terminator','$5.5 billion'), (21,'Paper Street Soap Company','Fight Club','$5.0 billion'), (22,'Gringotts','Harry Potter','$4.4 billion'), (23,'Oscorp','Spider-Man','$3.1 billion'), (24,'Nakatomi Trading Corp.','Die-Hard','$2.5 billion'), (25,'Los Pollos Hermanos','Breaking Bad','$1.3 billion');

Once you are happy with the data you have entered — ensure that you finish with a semi-colon and a newline — press Control-D to close the sqlite session.
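If you want to confirm that the rows made it in before building the CGI script, Photon’s bundled Python can read the file back directly. This is an optional check and works with Python 2 or 3.

#!/usr/bin/env python
# Optional sanity check: read a few rows back from the new database file.
import sqlite3

conn = sqlite3.connect('/etc/httpd/db/clients.db')
for row in conn.execute("SELECT Rank, Name, Revenue FROM clients ORDER BY Rank LIMIT 3"):
    print(row)
conn.close()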

Set the database file’s owner

The apache user running the web server needs access to this file in order for the CGI script to read the data.

# chown apache:apache /etc/httpd/db/clients.db

Enable CGI on the webserver

The default Apache install on Photon does not have the CGI module loaded by default. It is simple enough to turn it on:

Open the web server’s configuration file. The +176 before the file name opens the file at line 176, which is where we want to start:

# vi +176 /etc/httpd/httpd.conf

At line 176, add the following line to load the CGI module:

LoadModule cgi_module /usr/lib/httpd/modules/mod_cgi.so

At line 379, add the following block to enable access to the database directory:

#database directory
<Directory "/etc/httpd/db">
    AllowOverride None
    Options None
    Require all granted
</Directory>

Save and close the file.

Restart the web server to read the updated configuration

# systemctl restart httpd

Verify

Now, if you access the script via http, you should see the data.

# curl http://db-01a/cgi-bin/data.py

It won’t look too pretty, but the user never sees this back end data. That’s where the application server comes in. At this point, the result should look something like this:

root@db-01a [ ~ ]# curl http://db-01a/cgi-bin/data.py
1 |CHOAM |Dune |$1.7 trillion |#
... (truncated) ...
24 |Nakatomi Trading Corp. |Die-Hard |$2.5 billion |#
25 |Los Pollos Hermanos |Breaking Bad |$1.3 billion |#
root@db-01a [ ~ ]#
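You can also exercise the optional querystring filter at this point. The following is just a convenience check that can run anywhere with Python 2 or 3 and network access to db-01a:

#!/usr/bin/env python
# Optional: call data.py with the querystring filter and dump the matching rows.
try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2 (Photon's default)

print(urlopen("http://db-01a/cgi-bin/data.py?querystring=Corp").read().decode())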

The next piece of the puzzle is the application server, which consumes the data provided by this component. If you had no problems with this setup, the rest should be a breeze. This is the most complicated part of the whole thing.

Thank you for reading!

The post HOL Three-Tier Application, Part 2 - DB server appeared first on VMware Hands-On Lab (HOL) Blog.

HOL Three-Tier Application, Part 1

This is the first post in my series about the multi-tier application we use in some of the VMware Hands-on Labs to demonstrate, among other things, network connectivity, microsegmentation and load balancing. This post will cover downloading the base operating system and performing the configuration tasks common to all of the VMs in the solution. As with anything, there are multiple ways to do this. This represents the way that worked for me.

The Need

Whether you live in a cutting-edge, microservices-oriented world, or have a traditional application spread across multiple machines, the components (machines, containers, services, processes, etc.) need to communicate with one another over the network. Understanding what that looks like is important to securing the connection end-to-end. This simple application is intended to provide a starting point for learning or testing firewall and load balancing configurations to see how they affect a distributed environment.

For instruction purposes, we wanted three simple, independent parts that could be deployed, rearranged, and otherwise manipulated to illustrate many different situations that may occur in an environment. For HOL and other labs, small is usually good. Oh, and fast. It should be fast.

The Application

This application consists of three operating system instances, independent VMs, each of which handles a specific task. When all of them can communicate over the network over the required ports, the client receives the requested information and can interact with that information. If there is a breakdown, not so much.

This demonstration application has been created so that each component VM is independent from the others: IP addresses can be changed and multiple instances of the web and application tier VMs can be created by cloning, renaming, and re-addressing. The basic build with one of each type and all resources on the same subnet will be described in this series. The following is a simple diagram of what I will be covering. I put SSL in here because it is always a good idea to secure your web traffic, and it provides the opportunity to configure a load balancer in front of the web tier in a more realistic scenario.

So, let’s get to it!

Build The Base

This application is built using VMware’s Photon OS. If you are not familiar with the Photon project, you can read more on the VMware Photon OS page. Basically, as the page indicates, Photon OS is a Minimal Linux Container Host. Because we have very basic needs, we are going to focus on the first half (Minimal Linux) and ignore the second half (Container Host) for now. One really cool thing about Photon OS is that it boots incredibly quickly.

Before we do anything, I’d like to give you an idea of the time involved in building this application. Once I have the software downloaded and have staged the base Photon template, I can get the basic application up, running, and captured in under an hour. If you are comfortable using the vi editor and an SSH connection, I think you can as well. Even if you are a bit rusty, it should not take too much longer than that. My time is skewed a bit since I was documenting the build.

Download the Software

This application is going to run as a set of virtual machines on my VMware ESXi hosts. I selected the Photon OS, Version 1.0 — OVA with virtual hardware v10 as my starting point. If you like, you can install Photon on your own from the ISO, but this has nearly everything we need in a simple package: a pre-installed and vSphere-optimized Photon OS minimal instance configured with virtual hardware version 10. At the time of this writing, that file was available using the link at the bottom of the VMware Photon OS page. The file I downloaded was called photon-custom-hw10-1.0-13c08b6.ova and is less than 300 MB.

Import the OVA

Once you have downloaded the software, import the OVA to your environment and power it up.

Create a Baseline

You can handle this however you like, but I have some tasks that are common across all of the VMs and don’t like to duplicate work if I can avoid it. Note that you will need Internet access from the VM in order to install software. You will also need three IP addresses that you can statically assign to the VMs.

Set the root password

The default password on the OVA is changeme — use this to log in with the user name root. The system will prompt you again for the changeme password and then require you to set a complex password. It didn’t like our standard (simple) HOL preferred password, so I had to set it to VMware123! and then I used passwd to change it to the VMware1! that we use in all of the Hands-on Labs. Note that passwd will complain about a weak password, but will still let you change it here as long as you are persistent:

Ensure that root’s password does not expire

It is always a drag when you finally get back to working on your lab, only to have to reset passwords… or, worse, figure out how to break in because the password is no longer valid. In production, I probably would not do this, but this is a lab tool.

Note that my convention is to prefix the examples with a “#” because they are executed as the root user. You don’t type the “#”

# chage -M -1 root

Note that it is a NEGATIVE ONE after the -M.

Set the hostname

Change the hostname from the default generated name to what you want to use. For the template, I usually set it to something besides the default photon- so that I know I have done this work. Note, if you’re not familiar with the vi editor, look here for a “cheat sheet” from MIT.

# vi /etc/hostname

Replace the current name with the new name, then save and close the file.

Set a static IP (change from default DHCP)

In this OVA, the default network configuration is stored in /etc/systemd/network/10-dhcp-en.network. To configure a static IP address on the eth0 interface, rename the file and replace the contents:

# mv /etc/systemd/network/10-dhcp-en.network /etc/systemd/network/10-static-eth0.network

Renaming it instead of copying it retains the permissions so that it will work. The contents are pretty straightforward. The following example is for the web-01a machine in my environment. Substitute with addresses that make sense for you. Don’t count on DNS to work once these VMs are deployed in DMZs or microsegments, but I configure it because I need to be able to resolve repository hostnames to install software:

[Match]
Name=eth0

[Network]
Address=192.168.120.30/24
Gateway=192.168.120.1
DNS=192.168.110.10
Domains=corp.local

Restart the network to apply the settings

# systemctl restart systemd-networkd

Edit the hosts file

Because this application is intended to be self-contained, we use local hosts files for name resolution. Configuring this template with all of the names and IPs that you want to use is easier than doing it later for each VM. Specifying names allows the other tools’ configurations to be built using names instead of IP addresses. This makes changing addresses later much easier.

Remember to also change the hostname on the loopback (127.0.0.1) from the default to your host’s name, too. This is an example of the edited file from our web-01a machine:

# Begin /etc/hosts (network card version)
::1 localhost ipv6-localhost ipv6-loopback
127.0.0.1 localhost.localdomain
127.0.0.1 localhost
127.0.0.1 web-01a
# End /etc/hosts (network card version)
192.168.120.10 db-01a.corp.local  db-01a
192.168.120.20 app-01a.corp.local app-01a
192.168.120.30 web-01a.corp.local web-01a

Modify the firewall to allow the desired ports

The iptables config script run at startup of the Photon OS is /etc/systemd/scripts/iptables and only allows SSH by default. Add the following lines to the bottom of the file:

#Enable ping by anyone
iptables -A INPUT -p icmp -j ACCEPT

#Enable web and app traffic
iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 8443 -j ACCEPT

The last three open the ports we need for all of the app layers. You can comment out the ones you don’t need for each VM after you deploy each one… or not.

Restart iptables to apply the new rules

# systemctl restart iptables

(optional) Verify the new rules

# iptables -L

(optional) Enable key-based SSH

If you have an SSH key that you use, now is a good time to copy your SSH key to the /root/.ssh/authorized_keys file, replacing the text that is there by default.

(optional) Install software used by all

The OVA contains a minimal installation of Photon OS, but I created this application with the default packages in mind. We use the tdnf tool to perform installations on Photon. While adding lsof is optional, I find it excellent for troubleshooting.

# tdnf install lsof

Once installed, try this to see which services are listening and connected on which ports:

# lsof -i -P -n

Cool, right?

If you have anything else that you want to install — say you prefer nano to vim as a text editor — go ahead and install that now using the same tdnf syntax:

# tdnf install nano

Finish Up

I usually reboot here just to make sure everything comes up as expected before moving on. With Photon, that reboot only takes a few seconds.

If everything looks good, shut this machine down and clone it to a template for use when creating the web, app, and database server machines. For this example, I called mine photon:

Next time, I will cover the build of the database VM using this template as a starting point.

Thank you for reading!

The post HOL Three-Tier Application, Part 1 appeared first on VMware Hands-On Lab (HOL) Blog.
