A Reference Configuration of a Simple Application Setup in HyperCloud

1.0 Introduction

The intent of this document is to provide a simple application that will demonstrate various elements of SoftIron HyperCloud. Although this is a simple example, these steps provide the foundation for creating a multi-tenancy tiered application. In all cases, open source software has been used to simplify configuration of the applications.

2.0 Architecture

Below is a diagram of the sample application:

HC Reference Diagram

The setup consists of three Virtual Data Centers (VDCs) named: Public, Private, and DMZ. This represents a typical, basic architecture for a cloud solution. The Public VDC will represent a standard user with access to the public internet (although internal non-routable addresses are used). The Private VDC represents an environment that has no access to the internet. The DMZ has controlled access to both the Private VDC and the Public VDC. This allows a user to access a web-based application that connects to application servers in the Private VDC.

A server monitoring application is used as the base application, as it is very easy to implement and show the three VDCs in operation. Nagios was chosen as the monitoring application, although any other application could be used instead. This provides a web front end that can be viewed via a browser from the User Interface (UI) in the Public VDC.

A target application server, with Operating System only, is located in the Private VDC. Through the Nagios monitoring server, the Public VDC user will be able to monitor performance data from the application server.

All of the application servers, i.e. the User Interface, Monitoring Front End, and Application Server, are realized using Virtual Machines (VMs). Each of these will be constructed such that they are contained within their respective VDCs. Further, Users and Groups will be created to instantiate, configure, and manage the virtual machines. In this manner, there will be logical separations between the VDCs.

The networking between the three VDCs will be accomplished using a Gateway Appliance. This is a virtual machine that has been specially built for the SoftIron HyperCloud infrastructure. It will also show how startup scripts can be used from the GUI to configure application servers.

3.0 Implementation

The implementation is broken down into the following sections:

  • Users and Groups: Groups will be defined for each VDC. Additionally, for each VDC, a group admin and group user will be created. An admin will also be created such that it can access all VDCs (e.g. Ref_Cfg_Admin).
  • VDC: The Private, DMZ, and Public VDCs will be created, and the respective Users/Groups associated with them.
  • Networking: This will be realized using the SoftIron Gateway Appliance. It is configured using a startup script to provide the networks for the VDCs.
  • Templates: A template will be created for each of the application servers. This will be accomplished using the Ref_Cfg_Admin. Access rights will be given to the group admins and users to enable the application servers in each VDC to be created.
  • Application Servers: These will be created in each VDC by the group admin. Specific rights will also be added for the users in each VDC.
  • Server Monitoring: This will be realized using the SoftIron Nagios appliance. It is configured using a startup script to provide server monitoring for virtual machines.

3.1 Users and Groups

As mentioned above, the following Groups and Users will be created for each VDC, as laid out below:

Info

This step is performed before the creation of the VDCs as it allows simplicity in assigning ownership of the various elements created later.

System
  • Group: oneadmin
  • User: Ref_Cfg_Admin (Password: RefCfg1234)
  • Capabilities: Create / Modify / Delete all elements

Public VDC
  • Group: Public_VDC
  • Users: PublicVDCAdmin (Password: RefCfg1235) and PublicVDCUser (Password: RefCfg1236)
  • Capabilities: PublicVDCAdmin can Create / Modify / Delete elements in the Public VDC; PublicVDCUser has Use access only to the elements in the Public VDC

DMZ VDC
  • Group: DMZ_VDC
  • Users: DMZVDCAdmin (Password: RefCfg1235) and DMZVDCUser (Password: RefCfg1236)
  • Capabilities: DMZVDCAdmin can Create / Modify / Delete elements in the DMZ VDC; DMZVDCUser has Use access only to the elements in the DMZ VDC

Private VDC
  • Group: Private_VDC
  • Users: PrivateVDCAdmin (Password: RefCfg1235) and PrivateVDCUser (Password: RefCfg1236)
  • Capabilities: PrivateVDCAdmin can Create / Modify / Delete elements in the Private VDC; PrivateVDCUser has Use access only to the elements in the Private VDC

The admins will be given an admin and cloud view in the Dashboard's Graphical User Interface (GUI). All other users will have cloud view only.

3.1.1 SSH Key pair creation

To allow the Dashboard to connect via SSH to the various devices that will be set up, an RSA key pair will be created from the CLI and used throughout this configuration tutorial.

Connect to the Dashboard via SSH (the IP Address that is used to access the HyperCloud Dashboard GUI will also allow console access). The console will launch in the root home directory, /home/root; this directory must be used to create the key pair that will allow the Nagios configuration commands to locate the keys.
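
A minimal sketch of the key creation, assuming the standard OpenSSH ssh-keygen tool, an empty passphrase (so the later Nagios configuration commands can use the key non-interactively), and the /home/root location described above:

mkdir -p /home/root/.ssh
# Generate an RSA key pair with no passphrase in the expected location
ssh-keygen -t rsa -f /home/root/.ssh/id_rsa -N ""
# Display the public key so it can be pasted into the Dashboard settings (next step)
cat /home/root/.ssh/id_rsa.pub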

Then, follow the instructions on the SSH Keys page to create and paste an SSH Public Key into the Authentication section of the Dashboard settings for both the "admin" account and the Ref_Cfg_Admin account (which will be set up in the next section).

This will allow the Dashboard to connect, via SSH, to any of the VMs that are created by either the admin or Ref_Cfg_Admin accounts.

Below are the steps to set up the Groups and the Users:

3.1.2 Create Reference Configuration Admin

Initially, log in as the admin of the cluster and follow the steps to create the Ref_Cfg_Admin.

  1. Select System --> Users
  2. Initiate the new user wizard by pressing the green + button.
  3. Input the Username: Ref_Cfg_Admin.
  4. Input and confirm the Password: RefCfg1234.
  5. Set the Authentication to core.
  6. Set the Main Group to 0: oneadmin.
  7. Click the green Create button.

Create User

Next, add the public key that was obtained from the HyperCloud Dashboard in section 3.1.1.

  1. Select System --> Users.
  2. Select the User: Ref_Cfg_Admin.
  3. Select the Auth tab.
  4. Click Edit in the Public SSH Key section.
  5. Copy the Public Key (obtained in section 3.1.1) into the text window.
  6. Click Refresh.

The key has now been added to the Ref_Cfg_Admin account and will be used later.

Add User Key

The primary User for this Reference Configuration has now been created and the admin User can now be logged out.

Note

The displayed account name is updated to the new admin account.

Ref Cfg Admin Name

The newly created Ref_Cfg_Admin account will now be logged into and used to set up the remaining Groups, Users, and various components of the example.

3.1.3 Create Public_VDC Group, Group Admin, and Group User

The next step in the reference configuration is to start creating the Groups and their Users.

  1. Select System --> Groups.
  2. Press the green + button.
  3. On the General tab, enter the Name Public_VDC.

    Create Group

  4. Switch to the Admin tab and check the box to Create an Administrator.

  5. Use the following username and password: PublicVDCAdmin and RefCfg1235.
  6. Set the Authentication to core.

    Create Public VDC

  7. Click the green Create button.

The Public_VDC Group has now been created and the list should resemble the image below:

Group List

A standard User will now be added to the group:

  1. Select System --> Users.
  2. Press the green + button.
  3. Use the following username and password: PublicVDCUser and RefCfg1236.
  4. Select 100: Public_VDC as the Main Group.
  5. Select users as the Secondary Group.

    Create Public VDC User

  6. Click the green Create button to finalize the creation of the User.

  7. The rest of the Groups and Users can be created according to the table in Users and Groups; the full User list will resemble the image below:

User List

3.2 Virtual Data Centers (VDCs)

Before the VDCs can be created, the Virtual Networks (VNets) must be set up. For more information on VDCs and VNets, follow the links below:

Virtual Data Centers

Virtual Networks

3.2.1 Virtual Networks

A VNet will be set up for each of the three VDCs. The one for the Public VDC will be shown in detail; the other two are constructed in a similar fashion. The process in this example involves renaming and reassigning non-routable networks that already exist within the infrastructure.

  • NR POC Network 9: renamed to Public_VDC_VNet; Group: Public_VDC; Owner: PublicVDCAdmin
  • NR POC Network 8: renamed to DMZ_VDC_VNet; Group: DMZ_VDC; Owner: DMZVDCAdmin
  • NR POC Network 7: renamed to Private_VDC_VNet; Group: Private_VDC; Owner: PrivateVDCAdmin

  1. Select Network --> Virtual Networks.
  2. Select NR POC Network 9.
  3. Click the Pen and Pad icon next to the Virtual Network's Name, change the name to Public_VDC_VNet, and click away from the entry field to save the change.
  4. Click the Pen and Pad icons to change the Owner and Group to PublicVDCAdmin and Public_VDC, respectively, using the dropdown menus.
  5. The Information and Ownership for the VNet will resemble the image below:

Virtual Network

Follow the steps above to change the other two VNets' information; the result should resemble the image below:

VNet List

3.2.2 Virtual Data Centers

Once the Groups, Users, and Virtual Networks have been set up, the VDCs can be created. The instructions to create the DMZ VDC are detailed below:

  1. Select System --> VDCs.
  2. Click the green + button and edit the information under the General tab.
  3. Input the Name DMZ_VDC.
  4. Use the description "DMZ VDC".

    General Tab

  5. Switch to the Groups tab and select the DMZ_VDC group.

    Groups Tab

  6. Switch to the Resources tab and select the following options:

    • Clusters: Access to all Clusters
    • Hosts: Access to all Hosts
    • VNets: Select the DMZ_VDC_VNet
    • Datastores: Access to all Datastores

    Cluster, Host, Virtual Network, and Datastore resource selections

  7. Click the green Create button to finalize the creation of the VDC.

The mappings of the VDCs are as below:

  • DMZ_VDC: Group DMZ_VDC, VNet DMZ_VDC_VNet
  • Private_VDC: Group Private_VDC, VNet Private_VDC_VNet
  • Public_VDC: Group Public_VDC, VNet Public_VDC_VNet

Make sure you select access to all hosts and datastores for all VDCs.

The completed VDC list will resemble the image below:

VDC List

3.3 Gateway Appliance

Prior to downloading the appliances (Gateway and Nagios), the SoftIron Community Marketplace must be added following the instructions at SoftIron Marketplaces.

The implementation of the Gateway Appliance is accomplished in two steps: download the Image and then instantiate the server.

3.3.1 Download the Gateway image

After the marketplace has been added and populated with the Apps, follow the steps below to download the Gateway Appliance Image.

  1. Select Storage --> Apps.
  2. Search for Gateway.
  3. Select the Gateway Appliance.
  4. Select Import App Cloud to download the appliance.
  5. Name the Image "Gateway Appliance".
  6. Select the default datastore.
  7. Click Green Download Button.

    Download Gateway

    Warning

    Make sure the Gateway Image state is set to READY before moving to the next step.

  8. Select Storage --> Images.

Image Ready

3.3.2 Instantiate the Gateway Appliance

A startup script is needed to configure the appliance correctly. Here is the script that will create the connections between the virtual networks created earlier.

Note

The dashboard public key is the one created in Section 3.1.1. This is not a mandatory addition to the script.

#!/bin/bash

# mkdir -p /home/root/.ssh

# Dashboard key (optional): uncomment to allow the Dashboard to SSH into this appliance
# echo "dashboard public key" >> /home/root/.ssh/authorized_keys

# Allow Traffic for VRouter function
# The command below can be used via the VNC to verify the routing function
# iptables -L -n -v -t nat

## POC Public Network
iptables -t nat -A POSTROUTING -s 192.168.19.0/24 ! -d 192.168.19.0/24 -j MASQUERADE

## POC DMZ Network
iptables -t nat -A POSTROUTING -s 192.168.18.0/24 ! -d 192.168.18.0/24 -j MASQUERADE

## POC Private Network
iptables -t nat -A POSTROUTING -s 192.168.17.0/24 ! -d 192.168.17.0/24 -j MASQUERADE

exit 0

This script is then converted to base64 format to create the input for the startup script field; the result will be the following text, or similar:

IyEvYmluL2Jhc2gKIyMgUE9DIFB1YmxpYyBOZXR3b3JrCmlwdGFibGVzIC10IG5hdCAtQSBQT1NUUk9VVElORyAtcyAxOTIuMTY4LjE5LjAvMjQgISAtZCAxOTIuMTY4LjE5LjAvMjQgLWogTUFTUVVFUkFERQoKIyMgUE9DIERNWiBOZXR3b3JrCmlwdGFibGVzIC10IG5hdCAtQSBQT1NUUk9VVElORyAtcyAxOTIuMTY4LjE4LjAvMjQgISAtZCAxOTIuMTY4LjE4LjAvMjQgLWogTUFTUVVFUkFERQoKIyMgUE9DIFByaXZhdGUgTmV0d29yawppcHRhYmxlcyAtdCBuYXQgLUEgUE9TVFJPVVRJTkcgLXMgMTkyLjE2OC4xNy4wLzI0ICEgLWQgMTkyLjE2OC4xNy4wLzI0IC1qIE1BU1FVRVJBREUKCmV4aXQgMAo=
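
The encoding can be produced with a command along these lines (assuming the GNU coreutils base64 tool; the file name gateway-startup.sh is illustrative):

# Encode the startup script as a single unwrapped base64 line
base64 -w0 gateway-startup.sh > gateway-startup.b64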

The Gateway Appliance can now be instantiated with the steps below:

  1. Select Instances --> Virtual Routers and click the green + button.
  2. Enter "Gateway Router" as the Name.
  3. Select the Gateway Appliance in the Template section.
  4. Name the Virtual Machine "Gateway Router".
  5. Add the following networks with the default Security Group:

    • Infrastructure Management Network
    • Public_VDC_VNet
    • Private_VDC_VNet
    • DMZ_VDC_VNet
  6. Add the base64 text from the example above to the Base64 Encoded Startup Script field; the screen will resemble the image below:

    Virtual Router Creation

    The other attributes can be left blank as they are not used in this example. More information about the Gateway can be found here.

  7. Click the green Create button to create the gateway router. It will take several minutes for this to become active (use the VM VNC viewer to check for a command prompt).

This will create both the Virtual Router and its associated Virtual Machine.
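
Once a command prompt is available in the router's VNC console, the NAT rules applied by the startup script can be checked with the command noted in the script's comments:

# Run from the Gateway Router's VNC console
iptables -L -n -v -t nat
# Expect three MASQUERADE rules in the POSTROUTING chain, one for each of the
# 192.168.17.0/24, 192.168.18.0/24, and 192.168.19.0/24 networks.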

3.4 Virtual machines

A total of three VMs are needed to complete this reference configuration.

  • Monitoring Front End: VDC DMZ_VDC, VNet DMZ_VDC_VNet, OS Nagios Appliance
  • Application Server: VDC Private_VDC, VNet Private_VDC_VNet, OS Debian 12
  • User Interface: VDC Public_VDC, VNet Public_VDC_VNet, OS Debian 12

Before these VMs can be instantiated, a template is needed. The template will require an Operating System Image for the operating system source. This section will show how to set up the Debian and Nagios VMs.

3.4.1 Download Images

Two Images are needed for the two different server types:

Debian OS Image

The Debian Image is included in the standard Public Marketplace. It can be downloaded and used to create the Debian 12 Template using the following steps:

  1. Select Storage --> Apps.
  2. Search for "Debian - 12".
  3. Download the Image to the default datastore by clicking Import App Cloud.
  4. Name the new Image "Debian 12 Image".
  5. Click Green Download Button.
  6. Select Storage --> Images.

The resulting screen verifies the creation of the Image; when the Status is READY, the Image has been completely downloaded and is ready for use.

Note

In the LOCKED state, the Image is still being acquired from the marketplace. It may take a few minutes to download the Image; wait until it has been downloaded completely and is in the READY state before moving to the next step. Also, wait at least one minute before refreshing the Image list.

Warning

Clicking refresh too frequently, or trying to use the Image before it has fully downloaded, may cause a system warning in the bottom right of the screen. This warning can be ignored; it simply indicates that actions are being requested faster than the system can process them, or that the resources are not yet ready to be used.

Debian Image

Note

Although the owner of the Image is the system administrator, a template will need to be created so that the VDC Admin accounts can access it.

Nagios Appliance Image

The Nagios Appliance Image is included in the SoftIron Community Marketplace. This can be downloaded and used to create the Nagios Appliance Template using the following steps:

  1. Select Storage --> Apps.
  2. Search for "Nagios Appliance".
  3. Download the Image to the default datastore by clicking Import App Cloud.
  4. Name the new Image "Nagios Appliance".
  5. Click Green Download Button.
  6. Select Storage --> Images.

As with the previous template, wait until the Nagios Image is in a READY state before proceeding.

3.4.2 Create the Templates

A single Template is used to create two of the VMs.

Reminder

Since the VMs are being created by the Ref_Cfg_Admin, make sure that this account is still logged into the HyperCloud GUI from Section 3.1.2.

Follow the steps below to create the Debian Template:

  1. Select Templates --> VMs.
  2. Click Green + Dropdown and select Create.
  3. On the General tab, name the VM Template "Debian Baseline".
  4. Allocate 2 gigabytes of memory, 0.5 Physical CPU, and 4.0 Virtual CPU.
  5. Switch to the Storage tab and select the "Debian 12 Image".
  6. Under the Advanced options, scroll down and set the Size on instantiate to 20 gigabytes.
  7. Switch to the Network tab and select "Infrastructure Management Network" as the interface with the "default" Security Group.
  8. Switch to the Input/Output tab and add tablet and usb as the Type and Bus, respectively (make sure to press Blue Add Button to confirm the selections).
  9. Switch to the Context tab and paste useradd -m Ref_Cfg_Admin && echo Ref_Cfg_Admin:RefCfg1234 | chpasswd && sudo usermod -aG sudo Ref_Cfg_Admin into the Start Script field, then click away to set the startup script for the VMs that will be instantiated from the Template (the script is shown expanded after the template image below).

    Info

    • Step 7 adds the Infrastructure Management interface so that the resulting VMs can connect to the external internet
    • Step 8 will allow mouse action in the VNC window
    • Step 9 will add an admin user so that the VM can be accessed via the VNC screen later
  10. Click the green Create button to finalize the template creation.

Debian Baseline Template
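
For readability, the Start Script entered in step 9 is equivalent to the following (the same commands, split across lines; contextualization start scripts normally run as root, so the leading sudo on the last command is redundant but harmless):

# Create the Ref_Cfg_Admin OS user, set its password, and grant sudo rights
useradd -m Ref_Cfg_Admin \
  && echo Ref_Cfg_Admin:RefCfg1234 | chpasswd \
  && sudo usermod -aG sudo Ref_Cfg_Admin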

Each VM is created by the Ref_Cfg_Admin using these templates. Initially, the Ref_Cfg_Admin will download the application code to each of the servers before deploying it into its respective VDC. At that point, the Infrastructure Management Interface will be removed and the local VDC VNet will replace it. The VDC admin will also be added as an admin user to the OS so that the application software can be configured.

3.4.3 Application Server

As mentioned earlier, this is initially set up using the Ref_Cfg_Admin account and deployed later. The VM is set up as follows:

  1. Select Instances --> VMs and click the green + button.
  2. Select the recently created "Debian Baseline" template from the list.
  3. Name the VM "Application Server".
  4. Remove the Infrastructure Management VNet.
  5. Add "Private_VDC_VNet" as the second network and select the "default" Security Group.
  6. Click the green Create button.

SSH into the Dashboard and check to see if the VM's OS is running; for example, ping the IP address of the VM (from the dashboard) using:

until ping -c1 xxx.xxx.xxx.xxx >/dev/null 2>&1; do :; done

Replace xxx.xxx.xxx.xxx with the VM's Infrastructure Management Network IP address (e.g. 10.199.x.x).

Once a positive ping response has been received, connect to the VM via SSH from the Dashboard:

ssh root@<VM_IP>

(using the same IP address as above)

Using the SSH screen, enter the following commands:

sudo hostnamectl set-hostname nagiosagent
echo "127.0.0.1 nagiosagent" >> /etc/hosts
reboot

The reboot operation will break the SSH connection and reset the server's host name. The IP ping command can be used here as well to show when the server is up and running and ready to connect to via SSH:

until ping -c1 xxx.xxx.xxx.xxx >/dev/null 2>&1; do :; done

Finally, SSH back into the VM; the system name should now be nagiosagent. Add the PrivateVDCAdmin as a user on the machine with the following commands:

sudo useradd -m PrivateVDCAdmin && echo PrivateVDCAdmin:RefCfg1235 | sudo chpasswd

Exit from the SSH session.
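
Optionally, a quick check from the Dashboard confirms the changes took effect (this verification is an addition to the original steps and reuses the same <VM_IP> as above):

ssh root@<VM_IP> hostname            # should print: nagiosagent
ssh root@<VM_IP> id PrivateVDCAdmin  # should list the newly added user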

The ownership and access of the VM can be changed with the following steps:

  1. Select Instances --> VMs.
  2. Select the "Application Server".
  3. On the Info tab, change the Owner to PrivateVDCAdmin and Group to "Private_VDC".
  4. Add Group Use access.

The resulting screen should resemble the image below:

Application Server Access

The VM is now ready for deployment later.

3.4.4 Nagios Front end Server

This VM is also set up using the Ref_Cfg_Admin account, and access will be switched to the DMZVDCAdmin afterwards. The VM is set up as follows:

  1. Select Instances --> VMs and click the green + button.
  2. Select the "Nagios Appliance" template from the list.
  3. Name the VM "Nagios Front end Server".
  4. Add the Dashboard IP (Host IP, e.g. 10.127.4.33).
  5. Attach the DMZ_VDC_VNet NIC with "default" Security Group.
  6. Attach the Infrastructure Management Network NIC with "default" Security Group.
  7. Click the green Create button.

    Info

    • <<Nagios_server_IMN_IP>> is the IP address from the Infrastructure Management Network VNet for the Nagios Front end Server.
    • <<App_server_VDC_IP>> is the IP address of the Application Server on the Private_VDC_VNet.

    Return to Instances --> VMs and wait until the status of the VM is RUNNING. Make note of the Infrastructure Management Network IP for the VM.

    SSH into the Dashboard and check to see if the VM's OS is running; for example, ping the Infrastructure Management Network IP address, e.g. 10.199.0.12, of the VM (from the dashboard) using:

    until ping -c1 xxx.xxx.xxx.xxx >/dev/null 2>&1; do :; done

    Replace xxx.xxx.xxx.xxx with the VM's Infrastructure Management Network IP address (e.g. 10.199.x.x).

    Once a positive ping response has been received, from the Dashboard CLI, connect to the VM via the VNC to verify the OS setup is complete and then run the following commands:

    The Dashboard will ask to add the server fingerprints to the local store, and may connect you to the target server. Make sure that you are in the Dashboard console, and not one of the servers, by checking the hostname in the command prompt.

    ssh root@<<Nagios_server_IMN_IP>> systemctl stop nagios
    
    # Overwrite existing keys with Dashboard keys
    
    scp /home/root/.ssh/id_rsa root@<<Nagios_server_IMN_IP>>:/root/.ssh/id_rsa
    scp /home/root/.ssh/id_rsa.pub root@<<Nagios_server_IMN_IP>>:/root/.ssh/id_rsa.pub
    ssh root@<<Nagios_server_IMN_IP>> 'mkdir -p /home/nagios/.ssh/'
    scp /home/root/.ssh/id_rsa root@<<Nagios_server_IMN_IP>>:/home/nagios/.ssh/id_rsa
    scp /home/root/.ssh/id_rsa.pub root@<<Nagios_server_IMN_IP>>:/home/nagios/.ssh/id_rsa.pub
    
    # Remove HyperCloud server config
    
    ssh root@<<Nagios_server_IMN_IP>> rm /etc/nagios/objects/servers/hypercloud.cfg
    
    # Import fingerprints for the application server
    
    ssh root@<<Nagios_server_IMN_IP>> touch /home/nagios/.ssh/known_hosts > /dev/null 2>&1
    ssh root@<<Nagios_server_IMN_IP>> touch /root/.ssh/known_hosts > /dev/null 2>&1
    ssh root@<<Nagios_server_IMN_IP>> cat /root/.ssh/known_hosts
    
    ssh root@<<Nagios_server_IMN_IP>> "ssh-keyscan -H <<App_server_VDC_IP>> >> /home/nagios/.ssh/known_hosts" > /dev/null 2>&1
    ssh root@<<Nagios_server_IMN_IP>> "ssh-keyscan -H <<App_server_VDC_IP>> -t rsa >> /root/.ssh/known_hosts" > /dev/null 2>&1
    
    # create app server config file on Dashboard
    
    echo "ZGVmaW5lIGhvc3R7Cgl1c2UJCQlnZW5lcmljLWhvc3QKCWhvc3RfbmFtZQkJYXBwLXNlcnZlcgoJYWRkcmVzcwkJCTE5Mi4xNjguMTcuMTEKCWhvc3Rncm91cHMJCWh5cGVyY2xvdWQKCW1heF9jaGVja19hdHRlbXB0cwk1CgljaGVja19wZXJpb2QJCTI0eDcKCW5vdGlmaWNhdGlvbl9pbnRlcnZhbAkwCglub3RpZmljYXRpb25fcGVyaW9kCTI0eDcKCWNvbnRhY3RfZ3JvdXBzCQlhZG1pbnMKCWNoZWNrc19lbmFibGVkCQkxCglhY3RpdmVfY2hlY2tzX2VuYWJsZWQJMQoJY2hlY2tfY29tbWFuZAkJY2hlY2tfaWNtcCExMDAuMCw0MCUhNTAwLjAsNjAlCn0KCmRlZmluZSBzZXJ2aWNlewoJdXNlCQkJZ2VuZXJpYy1zZXJ2aWNlCglob3N0X25hbWUJCWFwcC1zZXJ2ZXIKCXNlcnZpY2VfZGVzY3JpcHRpb24JUElORwoJY2hlY2tfY29tbWFuZAkJY2hlY2tfaWNtcCExMDAuMCw0MCUhNTAwLjAsNjAlCgltYXhfY2hlY2tfYXR0ZW1wdHMJNQoJY2hlY2tfcGVyaW9kCQkyNHg3Cglub3RpZmljYXRpb25faW50ZXJ2YWwJMAoJbm90aWZpY2F0aW9uX3BlcmlvZAkyNHg3Cn0KCgpkZWZpbmUgc2VydmljZXsKICAgICAgICB1c2UgICAgICAgICAgICAgICAgICAgICBnZW5lcmljLXNlcnZpY2UKICAgICAgICBob3N0X25hbWUgICAgICAgICAgICAgICBhcHAtc2VydmVyCiAgICAgICAgc2VydmljZV9kZXNjcmlwdGlvbiAgICAgQXBwIFNlcnZlciBSb290RlMKICAgICAgICBjaGVja19jb21tYW5kICAgICAgICAgICBjaGVja19kaXNrX2xpbnV4IXJvb3QhOTAhOTUKICAgICAgICBtYXhfY2hlY2tfYXR0ZW1wdHMgICAgICA1CiAgICAgICAgY2hlY2tfcGVyaW9kICAgICAgICAgICAgMjR4NwogICAgICAgIG5vdGlmaWNhdGlvbl9pbnRlcnZhbCAgIDAKICAgICAgICBub3RpZmljYXRpb25fcGVyaW9kICAgICAyNHg3Cn0KCgoK
    " >> app-server.b64
    
    scp app-server.b64 root@<<Nagios_server_IMN_IP>>:/etc/nagios/objects/servers/app-server.b64
    
    ssh root@<<Nagios_server_IMN_IP>> "cat /etc/nagios/objects/servers/app-server.b64 | base64 -d > /etc/nagios/objects/servers/app-server.cfg"
    
    # Start Nagios server
    
    ssh root@<<Nagios_server_IMN_IP>> systemctl start nagios
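
    For reference, the base64 text written to app-server.b64 should decode to a Nagios configuration along the following lines: it defines the app-server host (192.168.17.11) plus a PING check and an agentless root-filesystem check.

    define host{
            use                     generic-host
            host_name               app-server
            address                 192.168.17.11
            hostgroups              hypercloud
            max_check_attempts      5
            check_period            24x7
            notification_interval   0
            notification_period     24x7
            contact_groups          admins
            checks_enabled          1
            active_checks_enabled   1
            check_command           check_icmp!100.0,40%!500.0,60%
    }

    define service{
            use                     generic-service
            host_name               app-server
            service_description     PING
            check_command           check_icmp!100.0,40%!500.0,60%
            max_check_attempts      5
            check_period            24x7
            notification_interval   0
            notification_period     24x7
    }

    define service{
            use                     generic-service
            host_name               app-server
            service_description     App Server RootFS
            check_command           check_disk_linux!root!90!95
            max_check_attempts      5
            check_period            24x7
            notification_interval   0
            notification_period     24x7
    }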
    

    Finally, assign the server to the DMZ_VDC:

  8. Select Instances --> VMs.

  9. Select the "Nagios Front end Server" VM.
  10. On the Info tab, change the Owner to DMZVDCAdmin and the Group to "DMZ_VDC".
  11. Add Group Use access.

The resulting screen should resemble the image below:

Nagios Front end Permissions

3.4.5 Nagios Client Desktop

The Nagios Client Desktop is set up via the steps below:

  1. Select Instances --> VMs and click the green + button.
  2. Select the "Debian Baseline" template.
  3. Name the VM "Client Desktop Server".
  4. Allocate 4 gigabytes of memory, 1 Physical CPU and 4.0 VCPU.
  5. Attach the Public_VDC_VNet NIC with "default" Security Group.
  6. Click the green Create button.

SSH into the Dashboard and check to see if the VM's OS is running; for example, ping the IP address of the VM (from the dashboard) using:

until ping -c1 xxx.xxx.xxx.xxx >/dev/null 2>&1; do :; done

Replace xxx.xxx.xxx.xxx with the VM's Infrastructure Management Network IP address (e.g. 10.199.x.x).

Once a positive ping response has been received, connect to the VM via SSH:

ssh root@<VM_IP>

From the terminal, enter the following commands:

hostnamectl set-hostname Nagiosclient
echo "127.0.0.1 Nagiosclient" >> /etc/hosts

Add the PublicVDCAdmin User to the VM with the following commands:

useradd -m PublicVDCAdmin && echo PublicVDCAdmin:RefCfg1235 | chpasswd

Add the PublicVDCUser to the VM with the following commands:

useradd -m PublicVDCUser && echo PublicVDCUser:RefCfg1236 | chpasswd

Install the desktop interface and graphics components with the following commands:

apt update && apt upgrade -y
apt install xfce4 xfce4-goodies firefox-esr -y
reboot

After the reboot, the Debian desktop login screen should be visible via the VNC interface.

4.0 Deployment

4.1 Configure the Nagios Front End Server

Remove the "Infrastructure Management Network" NIC by clicking the "x" next to the network, as seen below:

  1. Select Instances --> VMs.
  2. Select "Nagios Front End Server" VM.
  3. Switch to Network tab.
  4. Click the "x" and wait for system to update.
  5. A prompt will inform you that the NIC will be removed immediately; press OK.
  6. The screen will refresh and pause further actions until the removal is complete.

Remove Nagios FE IMN NIC

4.2 Configure the Client Desktop Server

Login to the HyperCloud GUI with the Ref_Cfg_Admin account and go to Instances --> VMs. Select the Client Desktop Server VM and on the Info tab complete the following steps:

  • Change the Owner to the PublicVDCAdmin.
  • Change the Group to the Public_VDC.
  • Add Group Use access permissions.

The resulting screen should resemble the image below:

Desktop Permissions

Sign out of the Ref_Cfg_Admin account and re-login as the PublicVDCAdmin account.

Reminder

Return to Section 3.1.3 to recall the information used to set up the account.

Click on the user account dropdown menu in the top-right of the dashboard and change the view to cloud.

Cloud View

At the top of the dashboard screen, click VMs to show the instantiated VMs, of which the "Client Desktop Server" is the only one visible:

Desktop VM

Next, remove the Infrastructure Management Network NIC from the Cloud view by switching to the Network tab.

Desktop NIC Removal

This should leave the Public_VDC_VNet as the only network interface for the Client Desktop.

Sign out of the HyperCloud Dashboard GUI entirely and sign in with the PublicVDCUser account. Connect to the VM's VNC and log into the Debian desktop as the PublicVDCUser.

5.0 Testing the application

The system has now been configured so that the Nagios server monitors the application server. The last step is to use the Nagios console to view the collected data.

Select the VM and start the VNC viewer. It may take a few minutes for the desktop to load the first time it is used. Wait until the screen is fully populated and looks like the image below:

Debian Desktop

Using the Client Desktop's VNC, open the web browser by selecting the Applications menu in the top left of the screen. This may take a few moments to launch the first time it is used. Once the browser has loaded, navigate to http://192.168.18.11/nagios. You will be prompted to enter credentials; use the defaults (Username: nagiosadmin, Password: nagiosadmin).

Nagios Login Prompt
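
As an optional check (not part of the original walkthrough), the same page can be queried from a terminal on the Client Desktop, provided curl is installed (apt install curl); the -u flag supplies the default credentials:

# Fetch the Nagios front page through the gateway, authenticating with the default account
curl -su nagiosadmin:nagiosadmin http://192.168.18.11/nagios/ | head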

Once logged in, select Hosts under Current Status from the left-hand menu.

Nagios Menu

After clicking Hosts, in the center of the screen there will be a list of the hosts being monitored.

Nagios Hosts

Clicking on the app-server will bring up the screen below:

App Server Details

Clicking on "View Status Detail For This Host" will result in the screen below:

Service Status

The latest performance data from the application server will be displayed. This shows the results of a ping to verify that the server is operational, along with an agentless disk query showing the application server's drive status.

6.0 Summary

This example shows many of the capabilities of SoftIron HyperCloud that are used to construct a tiered cloud architecture.

Three simple VDCs are constructed, together with their VNets and Users. Virtual machines are instantiated and can be configured via the VNC window that is built into the HyperCloud GUI, or via an SSH connection from the HyperCloud Dashboard, accessed from a terminal CLI.

A connection to the internet will allow applications to be downloaded and installed on the VMs. A desktop can be added to a VM that is available in the VNC viewer.

Finally, specialized applications provided by SoftIron, a gateway and Nagios appliance, can be used to network the VDCs together and monitor the server's performance.