A Reference Configuration of a Simple Application Setup in HyperCloud
1.0 Introduction
The intent of this document is to provide a simple application that will demonstrate various elements of SoftIron HyperCloud. Although this is a simple example, these steps provide the foundation for creating a multi-tenancy tiered application. In all cases, open source software has been used to simplify configuration of the applications.
2.0 Architecture
Below is a diagram of the sample application:
The setup consists of three Virtual Data Centers (VDCs) named: Public, Private, and DMZ. This represents a typical, basic architecture for a cloud solution. The Public VDC will represent a standard user with access to the public internet (although internal non-routeable addresses are used). The Private VDC represents an environment that has no access to the internet. The DMZ has controlled access to both the Private VDC and the Public VDC. This allows a user to access a web-based application that connects to application servers in the Private VDC.
A server monitoring application is used as the base application, as it is easy to implement and shows the three VDCs in operation. Nagios was chosen as the monitoring application, although any other application could be used instead. This provides a web front end that can be viewed via a browser from the User Interface (UI) in the Public VDC.
A target application server, with Operating System only, is located in the Private VDC. Through the Nagios monitoring server, the Public VDC user will be able to monitor performance data from the application server.
All of the application servers, i.e. the User Interface, Monitoring Front End, and Application Server, are realized using Virtual Machines (VMs). Each of these will be constructed such that it is contained within its respective VDC. Further, Users and Groups will be created to instantiate, configure, and manage the virtual machines. In this manner, there will be logical separation between the VDCs.
The networking between the three VDCs will be accomplished using a Gateway Appliance. This is a virtual machine that has been specially built for the SoftIron HyperCloud infrastructure. It will also show how startup scripts can be used from the GUI to configure application servers.
3.0 Implementation
The implementation is broken down into the following sections:
Scope | Description |
---|---|
Users and Groups | Groups will be defined for each VDC. Additionally, for each VDC, a group admin and group user will be created. An admin will also be created that can access all VDCs (Ref_Cfg_Admin). |
VDC | Private, DMZ and Public VDCs will be created, and the respective User/Groups associated with them. |
Networking | This will be realized using the SoftIron Gateway Appliance. It is configured using a startup script to provide the networks for the VDCs. |
Templates | A template will be created for each of the application servers. This will be accomplished using the Ref_Cfg_Admin account. Access rights will be given to the group admins and users to enable the application servers in each VDC to be created. |
Application Servers | These will be created in each VDC by the group admin. Specific rights will also be added for the users in each VDC. |
Server Monitoring | This will be realized using the SoftIron Nagios appliance. It is configured using a startup script to provide server monitoring for virtual machines. |
3.1 Users and Groups
As mentioned above, the following Groups and Users will be created for each VDC, as laid out below:
Info
This step is performed before the creation of the VDCs as it allows simplicity in assigning ownership of the various elements created later.
Scope | Group | User | Password | Capabilities |
---|---|---|---|---|
System | oneadmin | Ref_Cfg_Admin | RefCfg1234 | Create / Modify / Delete all elements |
Public VDC | Public_VDC | PublicVDCAdmin | RefCfg1235 | Create / Modify / Delete elements in the Public VDC |
Public VDC | Public_VDC | PublicVDCUser | RefCfg1236 | Use access only to the elements in the Public VDC |
DMZ VDC | DMZ_VDC | DMZVDCAdmin | RefCfg1235 | Create / Modify / Delete elements in the DMZ VDC |
DMZ VDC | DMZ_VDC | DMZVDCUser | RefCfg1236 | Use access only to the elements in the DMZ VDC |
Private VDC | Private_VDC | PrivateVDCAdmin | RefCfg1235 | Create / Modify / Delete elements in the Private VDC |
Private VDC | Private_VDC | PrivateVDCUser | RefCfg1236 | Use access only to the elements in the Private VDC |
The admins will be given an admin and cloud view in the Dashboard's Graphical User Interface (GUI). All other users will have cloud view only.
3.1.1 SSH Key pair creation
To allow the Dashboard to connect via SSH to the various devices that will be set up in the CLI, an RSA key pair will be created and used throughout the configuration tutorial.
Connect to the Dashboard via SSH (the IP address that is used to access the HyperCloud Dashboard GUI also allows console access). The console will launch in the root home directory, /home/root; this directory must be used to create the key pair so that the Nagios configuration commands can locate the keys.
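A minimal sketch of the key-pair creation, assuming the default RSA key paths under /home/root/.ssh and an empty passphrase (adjust to your security policy):

```bash
# Run from the Dashboard console in /home/root
# Creates /home/root/.ssh/id_rsa and /home/root/.ssh/id_rsa.pub
ssh-keygen -t rsa -b 4096 -f /home/root/.ssh/id_rsa -N ""

# Display the public key so it can be pasted into the Dashboard settings
cat /home/root/.ssh/id_rsa.pub
```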
Then, follow the instructions on the SSH Keys page to create and paste an SSH Public Key into the Authentication section of the Dashboard settings on both the "admin" account and the Ref_Cfg_Admin account, once the latter is set up in the next section.
This will allow the Dashboard to connect, via SSH, to any of the VMs that are created by either the admin or Ref_Cfg_Admin accounts.
Below are the steps to set up the Groups and the Users:
3.1.2 Create Reference Configuration Admin
Initially, log in as the admin of the cluster and follow the steps below to create the Ref_Cfg_Admin.
- Select System --> Users.
- Initiate the new user wizard by pressing the green + button.
- Input the Username: Ref_Cfg_Admin.
- Input and confirm the Password: RefCfg1234.
- Set the Authentication to core.
- Set the Main Group to 0: oneadmin.
- Click the green Create button.
Next, add the public key that was obtained from the HyperCloud Dashboard in section 3.1.1.
- Select System --> Users.
- Select the User: Ref_Cfg_Admin.
- Select the Auth tab.
- Click the edit icon in the Public SSH Key section.
- Copy the Public Key (obtained in section 3.1.1) into the text window.
- Click the confirm icon to save the key.
The key has now been added to the Ref_Cfg_Admin account and will be used later.
The primary User for this Reference Configuration has now been created and the admin User can now be logged out.
The newly created Ref_Cfg_Admin will be logged into and used to set up the remaining Groups, Users, and various components of the example.
3.1.3 Create Public_VDC Group, Group Admin, and Group User
The next step in the reference configuration is to start creating the Groups and their Users.
- Select System --> Groups.
- Press the green + button.
- On the General tab, enter the Name Public_VDC.
- Switch to the Admin tab and check the box to Create an Administrator.
- Use the following username and password: PublicVDCAdmin and RefCfg1235.
- Set the Authentication to core.
- Click the green Create button.
The Public_VDC Group has now been created and the list should resemble the image below:
A standard User will now be added to the group:
- Select System --> Users.
- Press the green + button.
- Use the following username and password: PublicVDCUser and RefCfg1236.
- Select 100: Public_VDC as the Main Group.
- Select users as the Secondary Group.
- Click the green Create button to finalize the creation of the User.
- The rest of the Groups and Users can be created according to the table in Users and Groups; the full User list will resemble the image below:
3.2 Virtual Data Centers (VDCs)
Before the VDCs can be created, the Virtual Networks (VNets) must be set up. For more information on VDCs and VNets, follow the links below:
3.2.1 Virtual Networks
A VNet will be set up for each of the three VDCs. The one for the Public VDC will be shown in detail; the other two are constructed in a similar fashion. The process in this example involves renaming and reassigning non-routeable networks that already exist within the infrastructure.
Original Name | New Name | Group | Owner |
---|---|---|---|
NR POC Network 9 | Public_VDC_VNet | Public_VDC | PublicVDCAdmin |
NR POC Network 8 | DMZ_VDC_VNet | DMZ_VDC | DMZVDCAdmin |
NR POC Network 7 | Private_VDC_VNet | Private_VDC | PrivateVDCAdmin |
- Select Network --> Virtual Networks.
- Select NR POC Network 9.
- Click the edit icon next to the Virtual Network's Name, change the name to Public_VDC_VNet, and click away from the entry field to save the change.
- Click the edit icon to change the Owner and Group to PublicVDCAdmin and Public_VDC, respectively, using the dropdown menus.
- The Information and Ownership for the VNet will resemble the image below:
Follow the steps above to update the other two VNets' information; the result should resemble the image below:
3.2.2 Virtual Data Centers
Once the Groups, Users, and Virtual Networks have been set up, the VDCs can be created. The instructions to create the DMZ VDC are detailed below:
- Select System --> VDCs.
- Click the green + button and edit the information under the General tab.
- Input the Name DMZ_VDC.
- Use the description "DMZ VDC".
- Switch to the Groups tab and select the DMZ_VDC group.
- Switch to the Resources tab and select the following options:

  Tab | Option(s) |
  ---|---|
  Clusters | Access to all Clusters |
  Hosts | Access to all Hosts |
  VNets | Select the DMZ_VDC_VNet |
  Datastores | Access to all Datastores |

- Click the green Create button to finalize the creation of the VDC.
The mappings of the VDCs are as below:
Name | Group | VNet |
---|---|---|
DMZ_VDC | DMZ_VDC | DMZ_VDC_VNet |
Private_VDC | Private_VDC | Private_VDC_VNet |
Public_VDC | Public_VDC | Public_VDC_VNet |
Make sure you select access to all hosts and datastores for all VDCs.
The completed VDC list will resemble the image below:
3.3 Gateway Appliance
Prior to downloading the appliances (Gateway and Nagios), the SoftIron Community Marketplace must be added following the instructions at SoftIron Marketplaces.
The implementation of the Gateway Appliance is accomplished in two steps: download the Image and then instantiate the server.
3.3.1 Download the Gateway image
After the marketplace has been added and populated with the Apps, follow the steps below to download the Gateway Appliance Image.
- Select Storage --> Apps.
- Search for Gateway.
- Select the Gateway Appliance.
- Select the download icon to download the appliance.
- Name the Image "Gateway Appliance".
- Select the default datastore.

Warning
Make sure the Gateway Image state is set to READY before moving to the next step.

- Select Storage --> Images.
3.3.2 Instantiate the Gateway Appliance
A startup script is needed to configure the appliance correctly. Here is the script that will create the connections between the virtual networks created earlier.
Note
The dashboard public key is the one created in Section 3.1.1. This is not a mandatory addition to the script.
```bash
#!/bin/bash
# mkdir -p /home/root/.ssh
# Dashboard
# echo "dashboard public key" >> root@hyperCloud-dashboard

# Allow Traffic for VRouter function
# The command below can be used via the VNC to verify the routing function
# iptables -L -n -v -t nat

## POC Public Network
iptables -t nat -A POSTROUTING -s 192.168.19.0/24 ! -d 192.168.19.0/24 -j MASQUERADE

## POC DMZ Network
iptables -t nat -A POSTROUTING -s 192.168.18.0/24 ! -d 192.168.18.0/24 -j MASQUERADE

## POC Private Network
iptables -t nat -A POSTROUTING -s 192.168.17.0/24 ! -d 192.168.17.0/24 -j MASQUERADE

exit 0
```
This script is then converted to a base64 format to create the input for the startup script file and will result in the following text, or similar:
IyEvYmluL2Jhc2gKIyMgUE9DIFB1YmxpYyBOZXR3b3JrCmlwdGFibGVzIC10IG5hdCAtQSBQT1NUUk9VVElORyAtcyAxOTIuMTY4LjE5LjAvMjQgISAtZCAxOTIuMTY4LjE5LjAvMjQgLWogTUFTUVVFUkFERQoKIyMgUE9DIERNWiBOZXR3b3JrCmlwdGFibGVzIC10IG5hdCAtQSBQT1NUUk9VVElORyAtcyAxOTIuMTY4LjE4LjAvMjQgISAtZCAxOTIuMTY4LjE4LjAvMjQgLWogTUFTUVVFUkFERQoKIyMgUE9DIFByaXZhdGUgTmV0d29yawppcHRhYmxlcyAtdCBuYXQgLUEgUE9TVFJPVVRJTkcgLXMgMTkyLjE2OC4xNy4wLzI0ICEgLWQgMTkyLjE2OC4xNy4wLzI0IC1qIE1BU1FVRVJBREUKCmV4aXQgMAo=
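The conversion can be performed from any Linux shell; a minimal sketch, assuming the script has been saved locally as gateway-init.sh (a file name chosen here for illustration):

```bash
# Encode the startup script as a single base64 line (no wrapping)
base64 -w 0 gateway-init.sh > gateway-init.b64

# Print it so it can be pasted into the Base64 Encoded Startup Script field
cat gateway-init.b64
```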
The Gateway Appliance can now be instantiated with the steps below:
- Select Instances --> Virtual Routers and click the green + button.
- Enter "Gateway Router" as the Name.
- Select the Gateway Appliance in the Template section.
- Name the Virtual Machine "Gateway Router".
- Add the following networks with the default Security Group:
  - Infrastructure Management Network
  - Public_VDC_VNet
  - Private_VDC_VNet
  - DMZ_VDC_VNet
- Add the base64 text, from the example above, to the Base64 Encoded Startup Script field; the screen will resemble the image below:
The other attributes can be left blank as they are not used in this example. More information about the Gateway can be found here.
- Click the green Create button to create the gateway router. It will take several minutes for this to become active (use the VM VNC viewer to check for a command prompt).
This will create both the Virtual Router and its associated Virtual Machine.
3.4 Virtual machines
A total of three VMs are needed to complete this reference configuration.
Name | VDC | VNet | OS |
---|---|---|---|
Monitoring Front End | DMZ_VDC | DMZ_VDC_VNet | Nagios Appliance |
Application Server | Private_VDC | Private_VDC_VNet | Debian 12 |
User Interface | Public_VDC | Public_VDC_VNet | Debian 12 |
Before these VMs can be instantiated, a template is needed. The template will require an Operating System Image for the operating system source. This section will show how to set up the Debian and Nagios VMs.
3.4.1 Download Images
Two Images are needed for the two different server types:
Debian OS Image
The Debian Image is included in the standard Public Marketplace. This can be downloaded and used to create the Debian 12 Template using the following steps:
- Select Storage --> Apps.
- Search for "Debian - 12".
- Download the Image to the default datastore by clicking the download icon.
- Name the new Image "Debian 12 Image".
- Click the download icon to confirm.
- Select Storage --> Images.
The resulting screen will be displayed to verify the creation of the Image; when the Status is in the READY state, the Image is completely downloaded and ready for use.
Note
In the LOCKED state, the Image is still being acquired from the marketplace. It may take a few minutes to download the Image. Wait until the Image has been downloaded completely and is in the READY state before moving to the next step. Also, wait at least one minute before refreshing the Image list.
Warning
Clicking refresh too frequently or trying to use the Image before it has been fully downloaded may cause a system warning in the bottom right of the screen. This warning can be ignored; it simply indicates that actions are being taken faster than the system can process, or that the resources are not yet ready to be used.
Note
Although the Image is owned by the system administrator, a Template will need to be created so that the VDC Admin accounts can access it.
Nagios Appliance Image
The Nagios Appliance Image is included in the SoftIron Community Marketplace. This can be downloaded and used to create the Nagios Appliance Template using the following steps:
- Select Storage --> Apps.
- Search for "Nagios Appliance".
- Download the Image to the default datastore by clicking the download icon.
- Name the new Image "Nagios Appliance".
- Click the download icon to confirm.
- Select Storage --> Images.
As with the previous template, wait until the Nagios Image is in a READY state before proceeding.
3.4.2 Create the Templates
A single Template is used to create two of the VMs.
Reminder
Since the VMs are being created by the Ref_Cfg_Admin, make sure that this account is still logged into the HyperCloud GUI from Section 3.1.2.
Follow the steps below to create the Debian Template:
- Select Templates --> VMs.
- Click the + button and select Create.
- On the General tab, name the VM Template "Debian Baseline".
- Allocate 2 gigabytes of memory, 0.5 Physical CPU, and 4.0 Virtual CPU.
- Switch to the Storage tab and select the "Debian 12 Image".
- Under the Advanced options, scroll down and set the Size on instantiate to 20 gigabytes.
- Switch to the Network tab and select "Infrastructure Management Network" as the interface with the "default" Security Group.
- Switch to the Input/Output tab and add tablet and usb as the Type and Bus, respectively (make sure to press the add button to confirm the selections).
- Switch to the Context tab and paste:
  useradd -m Ref_Cfg_Admin && echo Ref_Cfg_Admin:RefCfg1234 | chpasswd && sudo usermod -aG sudo Ref_Cfg_Admin
  into the Start Script field and click away to set the startup script for the VMs that will be instantiated from the Template.

Info
- Step 7 adds the Infrastructure Management interface so that the resulting VMs can connect to the external internet.
- Step 8 will allow mouse action in the VNC window.
- Step 9 will add an admin user so that the VM can be accessed via the VNC screen later.

- Click the green Create button to finalize the template creation.
Each VM is created by the Ref_Cfg_Admin using these templates. Initially, the Ref_Cfg_Admin will download the application code to each of the servers before deploying it into its respective VDC. At that point, the Infrastructure Management Interface will be removed and the local VDC VNet will replace it. The VDC admin will also be added as an admin user to the OS so that the application software can be configured.
3.4.3 Application Server
As mentioned earlier, this is initially set up using the Ref_Cfg_Admin account and deployed later. The VM is set up as follows:
- Select Instances --> VMs and click the green + button.
- Select the recently created "Debian Baseline" template from the list.
- Name the VM "Application Server".
- Remove the Infrastructure Management VNet.
- Add "Private_VDC_VNet" as the second network and select the "default" Security Group.
- Click the green Create button.
SSH into the Dashboard and check to see if the VM's OS is running; for example, ping the IP address of the VM (from the dashboard) using:
until ping -c1 xxx.xxx.xxx.xxx >/dev/null 2>&1; do :; done
Replace xxx.xxx.xxx.xxx with the VM's Infrastructure IP address (e.g. 10.199.x.x).
Once a positive ping response has been received, connect to the VM via SSH from the Dashboard (using the same IP as above):
ssh root@<VM_IP>
Using the SSH screen, enter the following commands:
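A minimal sketch of these commands, based on the host name (nagiosagent) and the reboot referenced in the following paragraphs:

```bash
# Set the host name expected by the Nagios configuration (name taken from the text below)
hostnamectl set-hostname nagiosagent

# Reboot so the new host name takes effect; this drops the SSH session
reboot
```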
The reboot operation will break the SSH connection and reset the server's host name. The IP ping command can be used here as well to show when the server is up and running and ready to connect to via SSH:
until ping -c1 xxx.xxx.xxx.xxx >/dev/null 2>&1; do :; done
Finally, SSH back into the VM; the system name should now be nagiosagent. Add the PrivateVDCAdmin as a user on the machine with the following commands:
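A minimal sketch, following the pattern of the Template start script and the credentials from the Users and Groups table:

```bash
# Create the PrivateVDCAdmin account and give it sudo rights (run as root)
useradd -m PrivateVDCAdmin
echo PrivateVDCAdmin:RefCfg1235 | chpasswd
usermod -aG sudo PrivateVDCAdmin
```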
The ownership and access of the VM can be changed with the following steps:
- Select Instances --> VMs.
- Select the "Application Server".
- On the Info tab, change the Owner to PrivateVDCAdmin and the Group to "Private_VDC".
- Add Group Use access.
The resulting screen should resemble the image below:
The VM is now ready for deployment later.
3.4.4 Nagios Front end Server
This VM is also set up using the Ref_Cfg_Admin account; access will be switched to DMZVDCAdmin later. The VM is set up as follows:
- Select Instances --> VMs and click the green + button.
- Select the "Nagios Appliance" template from the list.
- Name the VM "Nagios Front end Server".
- Add the Dashboard IP (Host IP, e.g. 10.127.4.33).
- Attach the DMZ_VDC_VNet NIC with "default" Security Group.
- Attach the Infrastructure Management Network NIC with "default" Security Group.
- Click the green Create button.

Info
- <<Nagios_server_IMN_IP>> is the IP address from the Infrastructure Management Network VNet for the Nagios Front end Server.
- <<App_server_VDC_IP>> is the IP address of the Application Server on the Private_VDC_VNet.

Return to Instances --> VMs and wait until the status of the VM is RUNNING. Make note of the Infrastructure Management Network IP for the VM.
SSH into the Dashboard and check to see if the VM's OS is running; for example, ping the Infrastructure Management Network IP address, e.g. 10.199.0.12, of the VM (from the dashboard) using:
until ping -c1 xxx.xxx.xxx.xxx >/dev/null 2>&1; do :; done
Replace xxx.xxx.xxx.xxx with the VM's Infrastructure IP address (e.g. 10.199.x.x).
Once a positive ping response has been received, from the Dashboard CLI, connect to the VM via the VNC to verify the OS setup is complete and then run the following commands:
The Dashboard interface will request that the server fingerprints are added to the local store, and it may connect you to the target server. Make sure that you are in the Dashboard console, and not one of the servers, by checking the hostname in the command prompt.
```bash
ssh root@<<Nagios_server_IMN_IP>> systemctl stop nagios

# Overwrite existing keys with Dashboard keys
scp /home/root/.ssh/id_rsa root@<<Nagios_server_IMN_IP>>:/root/.ssh/id_rsa
scp /home/root/.ssh/id_rsa.pub root@<<Nagios_server_IMN_IP>>:/root/.ssh/id_rsa.pub
ssh root@<<Nagios_server_IMN_IP>> 'mkdir -p /home/nagios/.ssh/'
scp /home/root/.ssh/id_rsa root@<<Nagios_server_IMN_IP>>:/home/nagios/.ssh/id_rsa
scp /home/root/.ssh/id_rsa.pub root@<<Nagios_server_IMN_IP>>:/home/nagios/.ssh/id_rsa.pub

# Remove HyperCloud server config
ssh root@<<Nagios_server_IMN_IP>> rm /etc/nagios/objects/servers/hypercloud.cfg

# Import fingerprints for the application server
ssh root@<<Nagios_server_IMN_IP>> touch /home/nagios/.ssh/known_hosts >/dev/null 2>&1
ssh root@<<Nagios_server_IMN_IP>> touch /root/.ssh/known_hosts >/dev/null 2>&1
ssh root@<<Nagios_server_IMN_IP>> cat /root/.ssh/known_hosts
ssh root@<<Nagios_server_IMN_IP>> "ssh-keyscan -H <<App_server_VDC_IP>> >> /home/nagios/.ssh/known_hosts" >/dev/null 2>&1
ssh root@<<Nagios_server_IMN_IP>> "ssh-keyscan -H <<App_server_VDC_IP>> -t rsa >> /root/.ssh/known_hosts" >/dev/null 2>&1

# Create the app server config file on the Dashboard
echo "ZGVmaW5lIGhvc3R7Cgl1c2UJCQlnZW5lcmljLWhvc3QKCWhvc3RfbmFtZQkJYXBwLXNlcnZlcgoJYWRkcmVzcwkJCTE5Mi4xNjguMTcuMTEKCWhvc3Rncm91cHMJCWh5cGVyY2xvdWQKCW1heF9jaGVja19hdHRlbXB0cwk1CgljaGVja19wZXJpb2QJCTI0eDcKCW5vdGlmaWNhdGlvbl9pbnRlcnZhbAkwCglub3RpZmljYXRpb25fcGVyaW9kCTI0eDcKCWNvbnRhY3RfZ3JvdXBzCQlhZG1pbnMKCWNoZWNrc19lbmFibGVkCQkxCglhY3RpdmVfY2hlY2tzX2VuYWJsZWQJMQoJY2hlY2tfY29tbWFuZAkJY2hlY2tfaWNtcCExMDAuMCw0MCUhNTAwLjAsNjAlCn0KCmRlZmluZSBzZXJ2aWNlewoJdXNlCQkJZ2VuZXJpYy1zZXJ2aWNlCglob3N0X25hbWUJCWFwcC1zZXJ2ZXIKCXNlcnZpY2VfZGVzY3JpcHRpb24JUElORwoJY2hlY2tfY29tbWFuZAkJY2hlY2tfaWNtcCExMDAuMCw0MCUhNTAwLjAsNjAlCgltYXhfY2hlY2tfYXR0ZW1wdHMJNQoJY2hlY2tfcGVyaW9kCQkyNHg3Cglub3RpZmljYXRpb25faW50ZXJ2YWwJMAoJbm90aWZpY2F0aW9uX3BlcmlvZAkyNHg3Cn0KCgpkZWZpbmUgc2VydmljZXsKICAgICAgICB1c2UgICAgICAgICAgICAgICAgICAgICBnZW5lcmljLXNlcnZpY2UKICAgICAgICBob3N0X25hbWUgICAgICAgICAgICAgICBhcHAtc2VydmVyCiAgICAgICAgc2VydmljZV9kZXNjcmlwdGlvbiAgICAgQXBwIFNlcnZlciBSb290RlMKICAgICAgICBjaGVja19jb21tYW5kICAgICAgICAgICBjaGVja19kaXNrX2xpbnV4IXJvb3QhOTAhOTUKICAgICAgICBtYXhfY2hlY2tfYXR0ZW1wdHMgICAgICA1CiAgICAgICAgY2hlY2tfcGVyaW9kICAgICAgICAgICAgMjR4NwogICAgICAgIG5vdGlmaWNhdGlvbl9pbnRlcnZhbCAgIDAKICAgICAgICBub3RpZmljYXRpb25fcGVyaW9kICAgICAyNHg3Cn0KCgoK " >> app-server.b64
scp app-server.b64 root@<<Nagios_server_IMN_IP>>:/etc/nagios/objects/servers/app-server.b64
ssh root@<<Nagios_server_IMN_IP>> "cat /etc/nagios/objects/servers/app-server.b64 | base64 -d > /etc/nagios/objects/servers/app-server.cfg"

# Start the Nagios server
ssh root@<<Nagios_server_IMN_IP>> systemctl start nagios
```
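Optionally, the generated configuration can be verified on the Nagios server; a minimal sketch, assuming the appliance provides the standard nagios binary and its main configuration at /etc/nagios/nagios.cfg (both assumptions, adjust if the appliance differs):

```bash
# Validate the full Nagios object configuration, including the new app-server.cfg
ssh root@<<Nagios_server_IMN_IP>> "nagios -v /etc/nagios/nagios.cfg"
```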
Finally, assign the server to the DMZ_VDC:
- Select Instances --> VMs.
- Select the "Nagios Appliance" server.
- On the Info tab, change the Owner to DMZVDCAdmin and the Group to "DMZ_VDC".
- Add Group Use access.
The resulting screen should resemble the image below:
3.4.5 Nagios Client Desktop
The Nagios Client Desktop is set up via the steps below:
- Select Instances --> VMs and click the green + button.
- Select the "Debian Baseline" template.
- Name the VM "Client Desktop Server".
- Allocate 4 gigabytes of memory, 1 Physical CPU and 4.0 VCPU.
- Attach the Public_VDC_VNet NIC with "default" Security Group.
- Click the green Create button.
SSH into the Dashboard and check to see if the VM's OS is running; for example, ping the IP address of the VM (from the dashboard) using:
until ping -c1 xxx.xxx.xxx.xxx >/dev/null 2>&1; do :; done
Replace xxx.xxx.xxx.xxx with the VM's Infrastructure IP address (e.g. 10.199.x.x).
Once a positive ping response has been received, connect to the VM via SSH:
ssh root@<VM_IP>
From the terminal, enter the following commands:
Add the PublicVDCAdmin User to the VM with the following commands:
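A minimal sketch, following the same pattern as the earlier user additions (credentials taken from the Users and Groups table):

```bash
# Create the PublicVDCAdmin account and give it sudo rights (run as root)
useradd -m PublicVDCAdmin
echo PublicVDCAdmin:RefCfg1235 | chpasswd
usermod -aG sudo PublicVDCAdmin
```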
Add the PublicVDCUser to the VM with the following commands:
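A minimal sketch for the standard user (no sudo rights assumed):

```bash
# Create the PublicVDCUser account (run as root)
useradd -m PublicVDCUser
echo PublicVDCUser:RefCfg1236 | chpasswd
```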
Install the desktop interface and graphics components with the following commands:
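The exact package set is not specified above; a minimal sketch using Debian 12's GNOME desktop task (any other desktop task would work equally well):

```bash
# Install a desktop environment, then reboot so the graphical login starts (run as root)
apt-get update
apt-get install -y task-gnome-desktop
reboot
```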
After the reboot, the Debian desktop login screen should be visible via the VNC interface.
4.0 Deployment
4.1 Configure the Nagios Front End Server
Remove the "Infrastructure Management Network" NIC by clicking the "x" next to the network, as seen below:
- Select Instances --> VMs.
- Select "Nagios Front End Server" VM.
- Switch to the Network tab.
- Click the "x" and wait for the system to update.
- A prompt will inform you that the NIC will be removed immediately; press OK.
- The screen will refresh and pause further actions until the removal is complete.
4.2 Configure the Client Desktop Server
Log in to the HyperCloud GUI with the Ref_Cfg_Admin account and go to Instances --> VMs. Select the Client Desktop Server VM and on the Info tab complete the following steps:
- Change the Owner to the PublicVDCAdmin.
- Change the Group to the Public_VDC.
- Add Group Use access permissions.
The resulting screen should resemble the image below:
Sign out of the Ref_Cfg_Admin account and log back in as the PublicVDCAdmin account.
Reminder
Return to Section 3.1.3 to recall the information used to set up the account.
Click on the user account dropdown menu in the top-right of the dashboard and change the view to cloud.
At the top of the dashboard screen, click VMs to show the instantiated VMs, of which the "Client Desktop Server" is the only one visible:
Next, remove the Infrastructure Management Network NIC from the Cloud view by switching to the Network tab.
This should leave the Public_VDC_VNet as the only network interface for the Client Desktop.
Sign out of the HyperCloud Dashboard GUI entirely and sign in with the PublicVDCUser account. Connect to the VM's VNC and log into the Debian desktop as the PublicVDCUser.
5.0 Testing the application
The system has now been configured so that the Nagios server monitors the application server. The last step is to view the collected data in the Nagios console.
Select the VM and start the VNC viewer. It may take a few minutes for the desktop to load the first time it is used. Wait until the screen is fully populated and looks like this:
Using the Client Desktop's VNC, open the web browser by selecting the Applications menu in the top left of the screen. This may take a few moments to launch the first time it is used. Once the browser has loaded, navigate to http://192.168.18.11/nagios. You will be prompted to enter credentials; use the default Username: nagiosadmin and Password: nagiosadmin.
Once logged in, select Hosts under Current Status from the left-hand menu.
After clicking Hosts, in the center of the screen there will be a list of the hosts being monitored.
Clicking on the app-server will bring up the screen below:
Clicking on "View Status Detail For This Host" will result in the screen below:
The latest performance data from the application server will be displayed. This shows the results of a ping to verify that the server is operational, plus an agentless disk query showing the application server's drive status.
6.0 Summary
This example shows many of the capabilities of SoftIron HyperCloud that are used to construct a tiered cloud architecture.
Three simple VDCs are constructed, together with their VNets and Users. Virtual machines are instantiated and can be configured via the VNC window that is built into the HyperCloud GUI, or via an SSH connection from the HyperCloud Dashboard, accessed from a terminal CLI.
A connection to the internet will allow applications to be downloaded and installed on the VMs. A desktop can be added to a VM that is available in the VNC viewer.
Finally, specialized applications provided by SoftIron, a gateway and Nagios appliance, can be used to network the VDCs together and monitor the server's performance.