OpenNebula Cloud

To overcome the hardware limitations of external services, we run our own computing cloud based on the OpenNebula platform and 12 Dell PowerEdge FC630 nodes. Each node has an Intel Xeon E5-2630 CPU with 20 cores (40 with hyperthreading) and 768 GB of memory. The cloud resources are connected to a Dell S4048-ON Open Networking switch, which is managed by an OpenDaylight controller. This cluster facilitates SDN and cloud experiments and provides compute resources for demanding simulations and emulations.

Configure new Cluster

Manual configuration (not necessary - done by playbook)

The network interfaces, datastores, primary images, and VM templates are already created during the installation of the OpenNebula core by the Ansible playbook. These are the steps in case it needs to be done manually:

Create Virtual Network Interfaces

Network → Virtual Networks → Plus → Create (a CLI equivalent is sketched after this list)

  1. Chair - Internal Network Interface (Struk DHCP)
    • General - Name: “intern”, Description: “Internal Chair Interface”, Cluster: 0
    • Conf - Bridge: “chair”, Network Mode: “Bridged”, Physical device: leave empty
    • Addresses - AR → Ethernet, First MAC address: leave empty, Size: 256
    • Context - Gateway: 131.159.25.254, Netmask: 255.255.254.0, Network address: 131.159.24.0
  2. MWN - Student Network Interface (Struk DHCP)
    • General - Name: “mwn”, Description: “Student/MWN Interface”, Cluster: 0
    • Conf - Bridge: “chair”, Network Mode: “Bridged”, Physical device: leave empty
    • Addresses - AR → Ethernet, First MAC address: leave empty, Size: 256
    • Context - Gateway: 172.24.25.254, Netmask: 255.255.254.0, Network address: 172.24.24.0
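
The same networks can also be created with the ONE CLI from a virtual network template file. This is a minimal sketch, assuming an authenticated CLI session (see “ONE CLI” below); the file name intern.net is hypothetical and the values mirror the “intern” network above:

    # intern.net - virtual network template for the "intern" network
    NAME        = "intern"
    DESCRIPTION = "Internal Chair Interface"
    VN_MAD      = "bridge"
    BRIDGE      = "chair"
    AR          = [ TYPE = "ETHER", SIZE = "256" ]
    # context attributes handed to the VMs
    NETWORK_ADDRESS = "131.159.24.0"
    NETWORK_MASK    = "255.255.254.0"
    GATEWAY         = "131.159.25.254"

Create the network and assign it to the default cluster:

    onevnet create intern.net
    onecluster addvnet 0 intern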

Create New OS and Template

  1. (ISO) Storage → Images → New Image
    • Name: <os>_<version>_iso (e.g. ubuntu_16.04.5_iso)
    • Type: “Readonly CD-ROM”, Datastore: “ceph_img”
    • Advanced Options: BUS: “SCSI”, Image mapping driver: “raw”
    • Upload the ISO image; after it is ready, add the label “ISO”
  2. (Disk) Storage → Images → New Image
    • Name: <os>_<version>_raw (e.g. ubuntu_16.04.5_raw)
    • Type: “Generic storage datablock”, Datastore: “ceph_img”
    • Check “This image is persistent” - Empty disk image of 5120 MB (= 5 GB)
    • Advanced Options: BUS: “Virtio”, Image mapping driver: “raw”
    • This is the empty disk for the initial OS installation
  3. (Initial Installation Template) Templates → VMs → Plus → Create
    • Try to copy an existing template and adjust “Storage” + “OS Booting” order
    • Otherwise create a new template:
    • Name: “<os> <version> RAW” (e.g. “Ubuntu 16.04.5 RAW”), Hypervisor: “KVM”
    • General - Memory: 4GB, CPU: 2, VCPU: 2
    • Storage - Disk0 = <raw_disk>_raw, Disk1 = <iso_image>_iso
    • OS & CPU - Boot → <iso_image>_iso first, CPU Architecture: “x86_64”
    • Input/Output: VNC, Keymap: “de” (if needed)
    • Leave the rest empty
  4. Instantiate a new VM from the RAW template
    • Templates → Create → On Hold (the VM is not started immediately, leaving time to create the Struk entry)
    • Attach an “intern” network interface (for package downloads, etc.)
    • Create a new Struk entry for a chair/intern IP address
    • Instances → VMs → Select VM → Deploy → Select Host
  5. Use VNC and go through the installer. Afterwards you should have an installed OS on the second disk
  6. Shut down the VM and remove the second disk (with the installed OS) if possible; otherwise delete the VM, since the template needs to be updated anyway. After deletion/removal, make the disk non-persistent (Images → Select disk → Persistent: No).
  7. Change the template → Remove the ISO disk, so only the installed disk remains
  8. Create a new VM from the saved template/VM
  9. Boot the VM from the installed OS on <raw-disk>
  10. Make initial OS configuration
    • Check the interfaces with ip a and start the network interface: dhclient -v ens3
    • Add/change the i11 user:
      • mkdir .ssh, vim .ssh/authorized_keys → KeePassX SSH pub key (ott - bottom level)
      • chmod -R og-rwx .ssh
      • sudo passwd i11 → KeePassX password (ott - bottom level)
    • Adjust the sudoers file; the i11 user must be a member of adm, and adm must be allowed passwordless sudo:
      %adm    ALL=NOPASSWD: ALL
      %il11admin    ALL=NOPASSWD: ALL
    • Install context package
    • Install the latest kernel version (look up the command for the Ubuntu version)
      • sudo apt-get install --install-recommends linux-generic-hwe-18.04
    • Change the one-contextualization for automatic interface DHCP
      • The adjusted network contextualization script is in the ansible-scripts onevm role
      • scp ..ansible-scripts/roles/onevm/files/loc-10-network vm:/etc/one-context.d/
      • After everything is done, detach the network interface
    • Install Ansible packages
      • sudo apt install
  11. Shut down the VM → Storage → Ubuntu Disk → Save as → <os>_<version> (e.g. ubuntu_18.04.3)
  12. Add the label “Template” after the image is ready, set the image owner to “oneadmin”, and add the “Use” permission for the group
  13. Adjust the RAW template to become the final VM template:
    • Storage - Select saved contextualized disk
    • OS & CPU - Boot order: select disk0
    • Context - Unselect “SSH contextualization”, Select “Network contextualization”
    • Context - callback to AWX in “Start Script”; look up the correct URL and config key in AWX!:
      #!/bin/bash
      curl -H "remote-host: $NAME" -H "remote-ip: $(hostname -I)" --noproxy "*" -k -XPOST --data "host_config_key=<AWX-PLAYBOOK-CONFIG-KEY>" <AWX-PLAYBOOK-URL> 
    • Scheduling - Placement: Select Cluster = (0) default
  14. Delete the template “<os> <version> RAW” and change the user and group of the final template to oneadmin:oneadmin (a CLI sketch for these housekeeping steps follows after this list)
  15. The rest is taken care of by the AWX (Ansible) callback (proxy, NTP, LDAP, fail2ban, services, etc.)
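
Several of the housekeeping steps above (non-persistence, owner, permissions) can also be done with the ONE CLI. A minimal sketch, assuming a hypothetical image ID 42 and template ID 7:

    # step 6: make the saved disk image non-persistent
    oneimage nonpersistent 42
    # step 12: set the image owner to oneadmin and grant the group "Use" (octal 640)
    oneimage chown 42 oneadmin
    oneimage chmod 42 640
    # step 14: change user and group of the final template
    onetemplate chown 7 oneadmin oneadmin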

Configure OS

After the one_core playbook has run, all the initial templates and images already exist. These steps are necessary to get a final OS image to actually deploy VMs from. Follow the steps exactly:


Add new host to cluster

Fix Routing on Node Servers

Fix Routing on VMs

Fix Routing on ONE Host access MWN

Add new ONE Node

Installation

Sunstone / Webui

CEPH Datastores backend

Ceph Cluster Setup

New Ceph User

Create Datastores

New Template
We will create two templates: one default template with system files on the local hard disk, and one HA template with system and image files in the Ceph cluster and live-migration capability.
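
The attribute that distinguishes the HA template can be sketched as follows; it pins the VM to a Ceph system datastore so that system files live in Ceph and live migration works. The datastore name ceph_system is an assumption:

    # in the HA VM template: require the Ceph system datastore
    SCHED_DS_REQUIREMENTS = "NAME=ceph_system"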

User Management

LDAP

server 1:
    # Ldap authentication method
    :auth_method: :simple

    # Ldap server
    :host: ldap.informatik.tu-muenchen.de
    :port: 389

    # base hierarchy where to search for users and groups
    :base: 'ou=Personen,ou=IN,o=TUM,c=DE'

    # group the users need to belong to. If not set any user will do
    #:group: 'cn=il11,ou=Gruppen,ou=IN,o=TUM,c=DE'

    # field that holds the user name, if not set 'cn' will be used
    :user_field: 'uid'

    # field name for group membership, by default it is 'member'
    :group_field: 'memberUid'

    # user field that is in the group's group_field, if not set 'dn' will be used
    :user_group_field: 'cn'

    # Generate mapping file from group template info
    :mapping_generate: true

    # Seconds a mapping file remain untouched until the next regeneration
    :mapping_timeout: 300

    # Name of the mapping file in the OpenNebula var directory
    :mapping_filename: server1.yaml

    # Key from the OpenNebula template to map to an AD group
    :mapping_key: GROUP_DN

    # Default group ID used for users in an AD group not mapped
    :mapping_default: 1
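
With :mapping_generate: true, the mapping file is generated from the OpenNebula group templates. As a minimal sketch, an OpenNebula group is mapped to an LDAP group by adding the DN under the :mapping_key: attribute to the group's template (the DN below mirrors the commented :group: entry above):

    # attribute in the OpenNebula group template
    GROUP_DN = "cn=il11,ou=Gruppen,ou=IN,o=TUM,c=DE"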

SSL Certificates

/etc/nginx/sites-available/one

#### OpenNebula Sunstone upstream
upstream sunstone {
      server 127.0.0.1:9869;
}

upstream appserver {
      server 127.0.0.1:29877; # appserver_ip:ws_port
}

map $http_upgrade $connection_upgrade {
      default upgrade;
      '' close;
}

#### cloudserver.org HTTP virtual host
server {
      listen 80;
      server_name one.cm.in.tum.de;
      ### Permanent redirect to HTTPS (optional)
      return 301 https://one.cm.in.tum.de:443;
}

#### cloudserver.org HTTPS virtual host
server {
      listen 443;
      server_name one.cm.in.tum.de;
      ### SSL Parameters
      ssl on;
      ssl_certificate /etc/ssl/certs/emu10.fullchain.cert.pem;
      ssl_certificate_key /etc/ssl/private/emu10.private.key;
      ### Proxy requests to upstream
      location / {
               proxy_pass http://sunstone;
      }
}

server {
      listen 29876;
      server_name one.cm.in.tum.de;
      ### SSL Parameters
      ssl on;
      ssl_certificate /etc/ssl/certs/emu10.fullchain.cert.pem;
      ssl_certificate_key /etc/ssl/private/emu10.private.key;
      ### Proxy requests to upstream
      location / {
               proxy_pass http://appserver;
               proxy_http_version 1.1;
               proxy_set_header Upgrade $http_upgrade;
               proxy_set_header Connection $connection_upgrade;
      }
}
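
To activate the virtual host, the usual nginx workflow applies; a sketch assuming the Debian/Ubuntu sites-available/sites-enabled layout:

    sudo ln -s /etc/nginx/sites-available/one /etc/nginx/sites-enabled/one
    # check the configuration and reload
    sudo nginx -t && sudo systemctl reload nginx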

/etc/one/sunstone-server.conf

… only the VNC part …

    :vnc_proxy_port: 29876
    :vnc_proxy_support_wss: yes
    :vnc_proxy_cert: /etc/ssl/certs/emu10.fullchain.cert.pem
    :vnc_proxy_key: /etc/ssl/private/emu10.private.key
    :vnc_proxy_ipv6: false
    :vnc_request_password: false

…
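
For the changes to take effect, Sunstone has to be restarted; a sketch assuming a systemd-based installation:

    sudo systemctl restart opennebula-sunstone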

ONE CLI
Log in to emu10 and use the OpenNebula commands to perform certain tasks. The documentation of the available commands can be found here: https://docs.opennebula.org/5.6/operation/references/cli.html To use the commands, you need to perform the following steps:
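
A minimal sketch of the standard ONE CLI authentication, assuming you authenticate as oneadmin (<password> is a placeholder); the CLI reads credentials from $ONE_AUTH or ~/.one/one_auth:

    # as the user running the CLI on emu10; <password> is a placeholder
    mkdir -p ~/.one
    echo 'oneadmin:<password>' > ~/.one/one_auth
    chmod 600 ~/.one/one_auth
    # verify that the authentication works
    oneuser show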

Import other images (KVM/Virtualbox)
You can also import other images and boot them directly. OpenNebula uses KVM as its hypervisor, therefore all KVM-compatible images can be used. If you have a VirtualBox image, you can convert it to a raw image with this command:

VBoxManage clonehd --format RAW debian.vdi debian.img
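
Alternatively (a sketch, not part of the original workflow), qemu-img can perform the same conversion:

    qemu-img convert -f vdi -O raw debian.vdi debian.img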

To import it into OpenNebula, copy the image to the Sunstone host (emu10) into the directory /var/tmp/. The directory is important, as images can only be imported from trusted/safe directories. Now use the ONE CLI to import the image. First authenticate as described above in “ONE CLI”, then run:

oneimage create -d ceph_img --name gbs_image --path /var/tmp/gbs.img --prefix hd --type OS --driver raw --description "Virtualbox GBS Image"

to import it.
Make sure that the access rights are correct (go+r) when copying it to /var/tmp/, otherwise the import will fail.
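
For example, for the image used above:

    chmod go+r /var/tmp/gbs.img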