Servers

Here you can find all the server information.

Server Room:

-01.01.038

The keys for the server room are in the secretary's office, in the locker behind the door, in a small wooden box. They are labeled “Server Raum” (server room). Please turn off the lights and lock the door when leaving the server room.

Server Name | Type | Service Tag | OS | iDrac-IP | Chair-IP | Host Name | Note
Devimg01 | Dell PowerEdge R530 | 31ZMQ92 | Ubuntu 14.04.3 | 10.0.0.2 | 131.159.24.137 | devimg01-cm | Development and Image Server
Testbed01 | Dell PowerEdge R530 | 303SQ92 | Ubuntu 14.04.3 | 10.100.0.1 | 131.159.24.142 | testbed01-cm | Testbed for RIFE project
Testbed02 | Dell PowerEdge R530 | 304MQ92 | Ubuntu 14.04.3 | 10.100.0.2 | 131.159.24.150 | testbed02-cm | Testbed for LLCM / SSICLOPS
Net01 | Dell PowerEdge R730 | 31VSQ92 | FreeBSD 10.2 + Ubuntu 14.04.3 | 10.200.0.1 | 131.159.24.163 | net01-cm | Networking / Cloud / Performance Tests
Net02 | Dell PowerEdge R730 | 31XPQ92 | FreeBSD 10.2 + Ubuntu 14.04.3 | 10.200.0.2 | 131.159.24.151 | net02-cm | Networking / Cloud / Performance Tests
Net03 | Dell PowerEdge R730 | CWKTQ92 | FreeBSD 10.2 + Ubuntu 14.04.3 | 10.200.0.3 | 131.159.24.166 | net03-cm | Networking / Cloud / Performance Tests
Net04 | Dell PowerEdge R730 | 1JXFQK2 | Ubuntu 16.04 | 10.200.0.4 | - | net04.cm | SSICLOPS Project - FPGA Offloading Tests
Net05 | Dell PowerEdge R730 | 1JW8QK2 | Ubuntu 16.04 | 10.200.0.5 | - | net05.cm | SSICLOPS Project - FPGA Offloading Tests
Net06 | Dell PowerEdge R750 | 60Q52V3 | Ubuntu 22.04.01 | 10.30.0.1 | 131.159.25.63 | net-06 | -
Net07 | Dell PowerEdge R750 | B44GZT3 | Ubuntu 22.04.01 | 10.30.0.2 | 131.159.25.64 | net-07 | -
Sim01 | Dell PowerEdge R730 | 49P50D2 | Ubuntu 16.04 LTS Server | 10.250.0.1 | 131.159.24.15 | sim01-cm | Simulation Server
Emu01 | Dell PowerEdge R630 | DWX40D2 | Ubuntu 16.04 LTS Desktop | 10.150.0.1 | 131.159.24.18 | emu01-cm | Emulation Server
Emu02 | Dell PowerEdge R730 | 9DG49F2 | - | 10.150.0.2 | 131.159.24.21 | emu02-cm | Server Dell S4048 Switch Controller
FX2-1 | Dell PowerEdge FX2 Chassis | 49WZ8F2 | Firmware 1.3 | 10.150.10.1 | - | emu-fx-1 | Chassis Controller Management Emu03-Emu06
Emu03 | Dell PowerEdge FC630 | 49L19F2 | Ubuntu 16.04 LTS Server | 10.150.0.3 | 131.159.24.20 | emu03-cm | Emulation Servercluster
Emu04 | Dell PowerEdge FC630 | 49M09F2 | Ubuntu 16.04 LTS Server | 10.150.0.4 | - | emu04-cm | Emulation Servercluster
Emu05 | Dell PowerEdge FC630 | 49M59F2 | Ubuntu 16.04 LTS Server | 10.150.0.5 | - | emu05-cm | Emulation Servercluster
Emu06 | Dell PowerEdge FC630 | 49MZ8F2 | Ubuntu 16.04 LTS Server | 10.150.0.6 | - | emu06-cm | Emulation Servercluster
FX2-2 | Dell PowerEdge FX2 Chassis | 49LZ8F2 | Firmware 1.3 | 10.150.10.2 | - | emu-fx-2 | Chassis Controller Management Emu07-Emu10
Emu07 | Dell PowerEdge FC630 | 49R69F2 | Ubuntu 16.04 LTS Server | 10.150.0.7 | - | emu07-cm | Emulation Servercluster
Emu08 | Dell PowerEdge FC630 | 49S29F2 | Ubuntu 16.04 LTS Server | 10.150.0.8 | - | emu08-cm | Emulation Servercluster
Emu09 | Dell PowerEdge FC630 | 49S69F2 | Ubuntu 16.04 LTS Server | 10.150.0.9 | - | emu09-cm | Emulation Servercluster
Emu10 | Dell PowerEdge FC630 | 49T19F2 | Ubuntu 16.04 LTS Server | 10.150.0.10 | - | emu10-cm | Emulation Servercluster
FX2-3 | Dell PowerEdge FX2 Chassis | 48M59F2 | Firmware 1.3 | 10.150.10.3 | - | emu-fx-3 | Chassis Controller Management Emu11-Emu14
Emu11 | Dell PowerEdge FC630 | 48359F2 | Ubuntu 16.04 LTS Server | 10.150.0.11 | - | emu11-cm | Emulation Servercluster
Emu12 | Dell PowerEdge FC630 | 48529F2 | Ubuntu 16.04 LTS Server | 10.150.0.12 | - | emu12-cm | Emulation Servercluster
Emu13 | Dell PowerEdge FC630 | 486Y8F2 | Ubuntu 16.04 LTS Server | 10.150.0.13 | - | emu13-cm | Emulation Servercluster
Emu14 | Dell PowerEdge FC630 | 487Z8F2 | Ubuntu 16.04 LTS Server | 10.150.0.14 | - | emu14-cm | Emulation Servercluster
Mon01 | Dell PowerEdge R430 | 6L0T7J2 | Ubuntu 16.04 LTS Server | 10.25.0.1 | - | mon01-cm | Storage Monitoring Server
Sto01 | Dell PowerEdge R730xd | 6KKN7J2 | Ubuntu 16.04 LTS Server | 10.50.0.1 | - | sto01-cm | Storage Server
Sto02 | Dell PowerEdge R730xd | 6KLQ7J2 | Ubuntu 16.04 LTS Server | 10.50.0.2 | - | sto02-cm | Storage Server
cmp01 | Dell PowerEdge R6525 | 88WS6J3 | Ubuntu 20.04.03 LTS Server | 10.10.0.1 | 131.159.25.22 | cmp-01 | Compute Server
cmp02 | Dell PowerEdge R6525 | 68WS6J3 | Ubuntu 20.04.03 LTS Server | 10.10.0.2 | 131.159.25.21 | cmp-02 | Compute Server
cmp03 | Dell PowerEdge R6525 | 58WS6J3 | Ubuntu 20.04.03 LTS Server | 10.10.0.3 | 131.159.25.23 | cmp-03 | Compute Server
cmp04 | Dell PowerEdge R6525 | 48WS6J3 | Ubuntu 20.04.03 LTS Server | 10.10.0.4 | 131.159.25.24 | cmp-04 | Compute Server
cmp05 | Dell PowerEdge R6525 | 78WS6J3 | Ubuntu 20.04.03 LTS Server | 10.10.0.5 | 131.159.25.25 | cmp-05 | Compute Server
cmp06 | Dell PowerEdge R6525 | 98WS6J3 | Ubuntu 20.04.03 LTS Server | 10.10.0.6 | 131.159.25.26 | cmp-06 | Compute Server
gpu01 | Dell PowerEdge R7525 | GV8BYJ3 | Ubuntu 20.04.05 LTS Server | 10.20.0.1 | 131.159.25.18 | gpu-01 | GPU Server
gpu02 | Dell PowerEdge R7525 | FV8BYJ3 | Ubuntu 20.04.05 LTS Server | 10.20.0.2 | 131.159.25.19 | gpu-02 | GPU Server

More specific information about each server.

DevImg01

Purpose: Development and operating system images from other servers

Operating System | Chair-IP | iDrac-IP | Server
Ubuntu 14.04.3 | 131.159.24.137 | 10.0.0.2 | Dell PowerEdge R530

Memory (RAM) | Real CPU Cores (Hyper-threading) | Storage
125GB | 16 (32) | 4x4TB = 16TB with RAID5 (one virtual disk with 11TB)

iDRAC

  • MAC: 14:18:77:45:AE:53
  • IP: 10.0.0.2, Subnet: 255.0.0.0, Gateway: 0.0.0.0, No DNS

Network Interface em1

  • Name: devimg01.cm.in.tum.de
  • Device: Embedded NIC.1-1-1, MAC: 14:18:77:45:AE:4F
  • IP: 131.159.24.137

Network interface 10 Gigabit

  • Device: NIC.Slot.3-1-1, Operating system device: p3p1
  • IP: 10.0.0.1, Subnet: 255.255.0.0
  • Do not use 255.0.0.0, otherwise the LRZ backup node is unreachable
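
For reference, the corresponding static configuration on the host could look roughly like this (a sketch for /etc/network/interfaces only, assuming the OS device name p3p1 from above and ifupdown rather than netplan on this 14.04 machine):

# 10 Gigabit interface towards the internal server network
auto p3p1
iface p3p1 inet static
    address 10.0.0.1
    netmask 255.255.0.0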

Testbed

Purpose: Testbed for the RIFE (testbed01) and LLCM/SSICLOPS (testbed02) projects. We have two testbed servers that are completely identical.

Name | Operating System | Chair-IP | iDrac-IP | Server | Notes
Testbed01-cm | Ubuntu 14.04.3 | 131.159.24.142 | 10.100.0.1 | Dell PowerEdge R530 | RIFE project

Memory (RAM) | Real CPU Cores (Hyper-threading) | Storage
105GB | 16 (32) | 2x2TB = 4TB with RAID1 (one virtual disk with 1.7TB)

Management interface (Dell iDRAC)

  • MAC: 14:18:77:45:A1:CB
  • IP: 10.100.0.1, Subnet: 255.0.0.0, Gateway: 0.0.0.0, No DNS

Network interface em1

  • Name: testbed01.cm.in.tum.de
  • Device: Embedded NIC.1-1-1, MAC: 14:18:77:45:A1:C7
  • IP: 131.159.24.142

Network interface 10 Gigabit

  • Device: NIC.Slot.2-1-1, Operating system device: p2p1
  • IP: 10.0.0.2, Subnet: 255.255.0.0

Virtualization with two machines

Name | Operating System | Chair-IP | iDrac-IP | Server | Notes
Testbed02-cm | Ubuntu 14.04.3 | 131.159.24.150 | 10.100.0.2 | Dell PowerEdge R530 | LLCM/SSICLOPS project

Memory (RAM) | Real CPU Cores (Hyper-threading) | Storage
111GB | 16 (32) | 2x2TB = 4TB with RAID1 (one virtual disk with 1.7TB)

Management interface (Dell iDRAC)

  • MAC: 14:18:77:45:AA:8D
  • IP: 10.100.0.2, Subnet: 255.0.0.0, Gateway: 0.0.0.0, No DNS

Network interface em1

  • Name: testbed02.cm.in.tum.de
  • Device: Embedded NIC.1-1-1, MAC: 14:18:77:45:AA:89
  • IP: 131.159.24.150

Network interface 10 Gigabit

  • Device: NIC.Slot.2-1-1, Operating system device: p2p1
  • IP: 10.0.0.3, Subnet: 255.255.0.0

Net

The first three net servers have an external graphics card for offloading experiments/tests. All three are equipped with the following card:

Graphic-card: AMD FirePro S7000

To check whether the graphics card is installed, use the following command:

sudo lspci -v | grep "S7000" -A 17 -B 2 

There are three servers for network and performance tests.

Name | Operating System | Chair-IP | iDrac-IP | Server | Storage | Notes
Net01-cm | Ubuntu 14.04.3 / FreeBSD 10.2 | 131.159.24.163 | 10.200.0.1 | Dell PowerEdge R730 | 2x2TB = 4TB with RAID1 (one virtual disk with 1.7TB) | -
Net02-cm | Ubuntu 14.04.3 / FreeBSD 10.2 | 131.159.24.151 | 10.200.0.2 | Dell PowerEdge R730 | 2x2TB = 4TB with RAID1 (one virtual disk with 1.7TB) | -
Net03-cm | Ubuntu 14.04.3 / FreeBSD 10.2 | 131.159.24.166 | 10.200.0.3 | Dell PowerEdge R730 | 2x2TB = 4TB with RAID1 (one virtual disk with 1.7TB) | -

Memory (RAM) | Real CPU Cores (Hyper-threading) | Storage
125GB | 12 (24) | 2x2TB = 4TB with RAID1 (one virtual disk with 1.7TB)

The next two net servers have FPGA network cards installed:

1. Mellanox Innova Flex 4 LX EN
2. NetFPGA - SUME


These are the specifications of the two servers dedicated to offloading tests.

Name | Operating System | Service Tag | iDrac-IP | Server | Storage | Notes
Net04.cm | Ubuntu 16.04 (MAAS) | 1JXFQK2 | 10.200.0.4 | Dell PowerEdge R730 | - | Installed with MAAS
Net05.cm | Ubuntu 16.04 (MAAS) | 1JW8QK2 | 10.200.0.5 | Dell PowerEdge R730 | - | Installed with MAAS

Memory (RAM) | Real CPU Cores (Hyper-threading) | Storage
64GB | 12 | 600GB data storage (virtual disk RAID-1) - 120GB SSD system storage (virtual disk RAID-0)

Both servers have two 120GB SSDs. The first thought was to create a RAID-1 with the SSDs, but as mentioned in several discussions there is a very high chance that both SSDs fail at the same time in a RAID-1. So one SSD is currently left unused. If the system SSD fails, we can create a new virtual disk (iDrac) with the remaining SSD and do a quick reinstall with MAAS.

Sim

Purpose: Simulation server

Name | Operating System | Chair-IP | iDrac-IP | Server | Notes
Sim01-cm | Ubuntu 16.04 Server | 131.159.24.15 | 10.250.0.1 | Dell PowerEdge R730 | -

Memory (RAM) | Real CPU Cores (Hyper-threading) | Storage
251GB | 16 (32) | 4x560GB = 2.2TB with RAID1 (one virtual disk with 1.1TB)

Management interface (Dell iDRAC)

  • MAC: 14:18:77:5F:C2:6D
  • IP: 10.250.0.1, Subnet: 255.0.0.0, Gateway: 0.0.0.0, No DNS

Network interface eno3

  • Name: sim01.cm.in.tum.de
  • Device: Embedded NIC.1-3-1, MAC: 24:6E:96:13:9E:5C
  • IP: 131.159.24.15

Emu

Purpose: Emulation server

Emu01

Name | Operating System | Chair-IP | iDrac-IP | Server
Emu01-cm | Ubuntu 16.04 Desktop | 131.159.24.18 | 10.150.0.1 | Dell PowerEdge R630

Memory (RAM) | CPU Cores (Hyper-threading) | Storage
123GB | 32 (64) | 4x280GB = 1080GB RAID5 (one virtual disk with 840GB)

Management interface (Dell iDRAC)

  • MAC: 64:00:6A:C4:4B:84
  • IP: 10.150.0.1, Subnet: 255.0.0.0, Gateway: 0.0.0.0, No DNS

Network interface eno3

  • Name: emu01.cm.in.tum.de
  • Device: Embedded NIC.1-3-1, MAC: 24:6E:96:12:B2:74
  • IP: 131.159.24.18

Emu02

Name | Operating System | Chair-IP | iDrac-IP | Server
Emu02-cm | Ubuntu 16.04.1 Server | 131.159.24.21 | 10.150.0.2 | Dell PowerEdge R730

Memory (RAM) | CPU Cores (Hyper-threading) | Storage
128GB | 16 (32) | RAID1 2×1.8TB (one virtual 200GB, one virtual 1.6TB)

NIC Slot 2: Intel(R) 10G 2P X520 Adapter
NIC 2 Slot 1 Partition 1 - enp4s0f0

  • Network: Brocade Switch - Server
  • Mac-Address: A0:36:9F:B8:BE:30
  • IP-Address: 10.0.0.4

NIC 2 Slot 2 Partition 1 - enp4s0f1

  • Network: Brocade Switch - Chair
  • Mac-Address: A0:36:9F:B8:BE:32
  • IP-Address: 131.159.24.21


NIC 1: Broadcom Gigabit Ethernet BCM5720
NIC 1 Port 1 Partition 1

  • Network: Dell S4048 Switch Controller
  • Mac-Address: 18:66:DA:54:4C:A4
  • IP-Address: 10.20.0.1


FX Cluster NODES

Configuration for each FX Node FC630

Name | Operating System | Server
Emu<nn>-cm | Ubuntu 14.04.5 LTS Server | Dell PowerEdge FC630

Memory (RAM) | CPU Cores (Hyper-threading) | Storage
768GB (24x32GB) | 20 (40) | 3 x 446GB SSD RAID5 (one virtual 893 GB SSD)


FX2 Cluster - 1

Emu03 - Outdated (Xen)

Integrated NIC 1: Intel(R) 10GbE 4P X710-k bNDC
NIC 1 Port 1 Partition 1

  • Network: Dell Switch
  • Mac-Address: 24:6E:96:1C:CD:C0
  • IP-Address: 10.0.1.1

NIC 1 Port 2 Partition 1

  • Network: Dell Switch
  • Mac-Address: 24:6E:96:1C:CD:C2
  • IP-Address: 10.0.1.2

NIC 1 Port 3 Partition 1

  • Network: Brocade Switch - Server
  • Mac-Address: 24:6E:96:1C:CD:C4
  • IP-Address: 10.0.0.5

NIC 1 Port 4 Partition 1

  • Network: Brocade Switch - Chair
  • Mac-Address: 24:6E:96:1C:CD:C6
  • IP-Address: 131.159.24.20

Emu04 - Outdated (Xen)

Integrated NIC 1: Intel(R) 10GbE 4P X710-k bNDC
NIC 1 Port 1 Partition 1

  • Network: Dell Switch
  • Mac-Address: 24:6E:96:1C:CD:A0
  • IP-Address: 10.0.1.3

NIC 1 Port 2 Partition 1

  • Network: Dell Switch
  • Mac-Address: 24:6E:96:1C:CD:A2
  • IP-Address: 10.0.1.4

NIC 1 Port 3 Partition 1

  • Network: Brocade Switch - Server
  • Mac-Address: 24:6E:96:1C:CD:A4
  • IP-Address: 10.0.0.6

NIC 1 Port 4 Partition 1

  • Network: Brocade Switch - Chair
  • Mac-Address: 24:6E:96:1C:CD:A6
  • IP-Address: 131.159.24.

Emu05

Integrated NIC 1: Intel(R) 10GbE 4P X710-k bNDC
NIC 1 Port 1 Partition 1

  • Network: Dell Switch
  • Mac-Address:
  • IP-Address: 10.0.1.

NIC 1 Port 2 Partition 1

  • Network: Dell Switch
  • Mac-Address:
  • IP-Address: 10.0.1.

NIC 1 Port 3 Partition 1

  • Network: Brocade Switch - Server
  • Mac-Address:
  • IP-Address: 10.0.0.

NIC 1 Port 4 Partition 1

  • Network: Brocade Switch - Chair
  • Mac-Address:
  • IP-Address: 131.159.24.

Emu06

Integrated NIC 1: Intel(R) 10GbE 4P X710-k bNDC
NIC 1 Port 1 Partition 1

  • Network: Dell Switch
  • Mac-Address:
  • IP-Address: 10.0.1.

NIC 1 Port 2 Partition 1

  • Network: Dell Switch
  • Mac-Address:
  • IP-Address: 10.0.1.

NIC 1 Port 3 Partition 1

  • Network: Brocade Switch - Server
  • Mac-Address:
  • IP-Address: 10.0.0.

NIC 1 Port 4 Partition 1

  • Network: Brocade Switch - Chair
  • Mac-Address:
  • IP-Address: 131.159.24.


FX2 Cluster - 2

Emu07

Integrated NIC 1: Intel(R) 10GbE 4P X710-k bNDC
NIC 1 Port 1 Partition 1

  • Network: Dell Switch
  • Mac-Address:
  • IP-Address: 10.0.1.

NIC 1 Port 2 Partition 1

  • Network: Dell Switch
  • Mac-Address:
  • IP-Address: 10.0.1.

NIC 1 Port 3 Partition 1

  • Network: Brocade Switch - Server
  • Mac-Address:
  • IP-Address: 10.0.0.

NIC 1 Port 4 Partition 1

  • Network: Brocade Switch - Chair
  • Mac-Address:
  • IP-Address: 131.159.24.

Emu08

Integrated NIC 1: Intel(R) 10GbE 4P X710-k bNDC
NIC 1 Port 1 Partition 1

  • Network: Dell Switch
  • Mac-Address:
  • IP-Address: 10.0.1.

NIC 1 Port 2 Partition 1

  • Network: Dell Switch
  • Mac-Address:
  • IP-Address: 10.0.1.

NIC 1 Port 3 Partition 1

  • Network: Brocade Switch - Server
  • Mac-Address:
  • IP-Address: 10.0.0.

NIC 1 Port 4 Partition 1

  • Network: Brocade Switch - Chair
  • Mac-Address:
  • IP-Address: 131.159.24.

Emu09

Integrated NIC 1: Intel(R) 10GbE 4P X710-k bNDC
NIC 1 Port 1 Partition 1

  • Network: Dell Switch
  • Mac-Address:
  • IP-Address: 10.0.1.

NIC 1 Port 2 Partition 1

  • Network: Dell Switch
  • Mac-Address:
  • IP-Address: 10.0.1.

NIC 1 Port 3 Partition 1

  • Network: Brocade Switch - Server
  • Mac-Address:
  • IP-Address: 10.0.0.

NIC 1 Port 4 Partition 1

  • Network: Brocade Switch - Chair
  • Mac-Address:
  • IP-Address: 131.159.24.

Emu10

Integrated NIC 1: Intel(R) 10GbE 4P X710-k bNDC
NIC 1 Port 1 Partition 1

  • Network: Dell Switch
  • Mac-Address:
  • IP-Address: 10.0.1.

NIC 1 Port 2 Partition 1

  • Network: Dell Switch
  • Mac-Address:
  • IP-Address: 10.0.1.

NIC 1 Port 3 Partition 1

  • Network: Brocade Switch - Server
  • Mac-Address:
  • IP-Address: 10.0.0.

NIC 1 Port 4 Partition 1

  • Network: Brocade Switch - Chair
  • Mac-Address:
  • IP-Address: 131.159.24.


FX2 Cluster - 3

Emu11

Integrated NIC 1: Intel(R) 10GbE 4P X710-k bNDC
NIC 1 Port 1 Partition 1

  • Network: Dell Switch
  • Mac-Address:
  • IP-Address: 10.0.1.

NIC 1 Port 2 Partition 1

  • Network: Dell Switch
  • Mac-Address:
  • IP-Address: 10.0.1.

NIC 1 Port 3 Partition 1

  • Network: Brocade Switch - Server
  • Mac-Address:
  • IP-Address: 10.0.0.

NIC 1 Port 4 Partition 1

  • Network: Brocade Switch - Chair
  • Mac-Address:
  • IP-Address: 131.159.24.

Emu12

Integrated NIC 1: Intel(R) 10GbE 4P X710-k bNDC
NIC 1 Port 1 Partition 1

  • Network: Dell Switch
  • Mac-Address:
  • IP-Address: 10.0.1.

NIC 1 Port 2 Partition 1

  • Network: Dell Switch
  • Mac-Address:
  • IP-Address: 10.0.1.

NIC 1 Port 3 Partition 1

  • Network: Brocade Switch - Server
  • Mac-Address:
  • IP-Address: 10.0.0.

NIC 1 Port 4 Partition 1

  • Network: Brocade Switch - Chair
  • Mac-Address:
  • IP-Address: 131.159.24.

Emu13

Integrated NIC 1: Intel(R) 10GbE 4P X710-k bNDC
NIC 1 Port 1 Partition 1

  • Network: Dell Switch
  • Mac-Address:
  • IP-Address: 10.0.1.

NIC 1 Port 2 Partition 1

  • Network: Dell Switch
  • Mac-Address:
  • IP-Address: 10.0.1.

NIC 1 Port 3 Partition 1

  • Network: Brocade Switch - Server
  • Mac-Address:
  • IP-Address: 10.0.0.

NIC 1 Port 4 Partition 1

  • Network: Brocade Switch - Chair
  • Mac-Address:
  • IP-Address: 131.159.24.

Emu14

Integrated NIC 1: Intel(R) 10GbE 4P X710-k bNDC
NIC 1 Port 1 Partition 1

  • Network: Dell Switch
  • Mac-Address:
  • IP-Address: 10.0.1.

NIC 1 Port 2 Partition 1

  • Network: Dell Switch
  • Mac-Address:
  • IP-Address: 10.0.1.

NIC 1 Port 3 Partition 1

  • Network: Brocade Switch - Server
  • Mac-Address:
  • IP-Address: 10.0.0.

NIC 1 Port 4 Partition 1

  • Network: Brocade Switch - Chair
  • Mac-Address:
  • IP-Address: 131.159.24.

Social Computing

There are also some servers in the chair server room, which are equipped with graphics cards for machine learning purposes etc.

Hostname | IP Address | MAC Address
social1.cm.in.tum.de | 131.159.24.12 | 38:60:77:6a:c6:db
social2.cm.in.tum.de | 131.159.24.134 | c8:60:00:c7:7F:7e
social3.cm.in.tum.de | 131.159.24.238 | 10:bf:48:e2:a7:39
social4.cm.in.tum.de | 131.159.24.13 | b8:ca:3a:82:d6:6f
social5.cm.in.tum.de | 131.159.24.171 | -
social6.cm.in.tum.de | 131.159.24.184 | -

This section describes the network setup.

  • Making server ready for access via the network
  • Find MAC Address in the iDrac

If you go to the server room, make notes about the network configuration / how the cables are patched on the back side. Each port has a small name or number next to it.

  • There is always one iDrac interface - the rbg-network group should patch this interface in our server management network (VLan132, accessible from vmott2)
  • There is always one cable to the chair network - the rbg-network group should patch this one into our chair network (131.159.24.0/23); this cable is often connected to the Integrated Network Card
  • The Integrated Network Device/Card has four Ports
    • On the iDrac webinterface they are called NIC.Integrated - with each Port behind it (e.g. NIC.Integrated.1-1-1, NIC.Integrated.1-2-1, etc.)
    • From these four ports, sometimes the first two ports are replaced with a better network card
    • Very often the chair network cable is plugged into the third Integrated Network Port
  • Any additional network cards or slots have other names in the iDrac interface

After you have found out which port the chair network cable connects to, you can simply look up the MAC address on the iDrac webinterface.

Find MAC Address

Another way to check whether the link is up and to find the MAC address of the device is to connect directly via ssh to the server management interface. For this step the iDrac interface address needs to be set and ping should work.

  1. Connect to vmott2
    ssh paulth@cm-mgmt.in.tum.de
  2. From there connect to the desired iDrac interface (see the table in the Overview chapter)
    #example with sim01
    ssh root@10.250.0.1
    #password is the same as for the webinterface - password safe
  3. Now you can use different commands to check the interfaces
Command | Note
racadm getsysinfo | Full system report
racadm hwinventory nic | Show all network interfaces
racadm ifconfig | Shows all up and running network interfaces
racadm nicstatistics <interface> | Use the interface name from the hwinventory command. Only works when the server is running and/or the operating system is booted

With the nicstatistics command you can find out whether the link is up or not, but it only works if the server is running.
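
Example usage (the interface name is only an illustration; take the exact FQDD from the hwinventory output of the server in question):

racadm hwinventory nic
racadm nicstatistics NIC.Integrated.1-3-1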

Network VLAN Tagging

This section describes how to setup vlan tagging on the servers.

sudo apt install vlan
sudo modprobe 8021q
sudo apt install bridge-utils
sudo vim /etc/network/interfaces
# content of interfaces file (only ubuntu < 18.04)
#--------------------------
# real hardware interface
auto eno1
iface eno1 inet dhcp

# vlan interface
auto eno1.83
iface eno1.83 inet manual
  vlan-raw-device eno1
  vlan_id 83

# bridge interface to vlan
auto chair
iface chair inet dhcp
  bridge_ports eno1.83
  bridge_fd 15
#-------------------------
sudo ifup eno1
sudo ifup eno1.83
sudo ifup chair
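
The interfaces file above only works on Ubuntu releases before 18.04. On newer releases with netplan, a roughly equivalent configuration could look like the following sketch (file name, DHCP usage and the eno1/VLAN 83 names are carried over from above and may need adjusting):

# /etc/netplan/60-vlan83.yaml
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: true
  vlans:
    eno1.83:
      id: 83
      link: eno1
  bridges:
    chair:
      interfaces: [eno1.83]
      dhcp4: true

Apply it with sudo netplan apply; in both variants, ip -d link show eno1.83 should report the interface with "vlan protocol 802.1Q id 83".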

Every server has at least one management network interface. We have our own server management VLAN to administrate the servers. There is one central gateway where admins can log in and from there reach the server management interfaces, via the second network interface of the gateway.

  • VLAN address space: 10.0.0.0/8
  • RBG intern VLAN name: VLan132 il11_management
  • Gateway: vmott2 (131.159.24.136)
    • Name: cm-mgmt.informatik.tu-muenchen.de
    • Login: ssh

Procedure to administer servers, check status, install a new OS, etc.:

  1. New Server only: Go to the server room and set static ip address via buttons on server
    • Choose one IP from the Management Network: 10.0.0.0/8
    • Subnet-Mask: 255.0.0.0
    • Gateway: Not necessary → 0.0.0.0 (not allowed by iDrac, choose 10.0.0.0)
    • DNS: off
  2. Log in on vmott2 via ssh
    ssh paulth@cm-mgmt
  3. Ping the static iDrac interface address. If there is no response:
    • wrong settings on the server (ip, subnet-mask, dns)
    • wrong VLAN on the iDrac interface → ask the rbg network group whether they can patch the management interface into our management network (VLan 132).
  4. Log off from vmott2. Now you can open a SOCKS5 proxy connection to vmott2; from there you have access to the iDrac webinterface of all servers
     ssh -ND 8080 paulth@cm-mgmt.in.tum.de
    • Type the command into your terminal; if the cursor just stops after entering your password, the connection is successful
    • Configure the proxy in your browser, Firefox:Preferences→Advanced→Network→Connection:Settings→Manual Proxy - ONLY Socks Host:localhost Port:8080→ OK
    • Now you can reach every server via
      https://<iDrac-interface-ip-address>
      #example (sim01)
      https://10.250.0.1
    • The login is root and the password can be found in our password safe
  5. Alternative: Log off from vmott2 and create an ssh tunnel to access a specific iDrac webinterface
    sudo ssh -L 443:SERVER-IP:443 -L 5900:SERVER-IP:5900 -L 5901:SERVER-IP:5901 USER@cm-mgmt.in.tum.de 
devimg01:  sudo ssh -L 443:10.0.0.2:443 -L 5900:10.0.0.2:5900 -L 5901:10.0.0.2:5901 paulth@cm-mgmt.in.tum.de
testbed01: sudo ssh -L 443:10.100.0.1:443 -L 5900:10.100.0.1:5900 -L 5901:10.100.0.1:5901 paulth@cm-mgmt.in.tum.de
testbed02: sudo ssh -L 443:10.100.0.2:443 -L 5900:10.100.0.2:5900 -L 5901:10.100.0.2:5901 paulth@cm-mgmt.in.tum.de
net01:     sudo ssh -L 443:10.200.0.1:443 -L 5900:10.200.0.1:5900 -L 5901:10.200.0.1:5901 paulth@cm-mgmt.in.tum.de
net02:     sudo ssh -L 443:10.200.0.2:443 -L 5900:10.200.0.2:5900 -L 5901:10.200.0.2:5901 paulth@cm-mgmt.in.tum.de
net03:     sudo ssh -L 443:10.200.0.3:443 -L 5900:10.200.0.3:5900 -L 5901:10.200.0.3:5901 paulth@cm-mgmt.in.tum.de
sim01:     sudo ssh -L 443:10.250.0.1:443 -L 5900:10.250.0.1:5900 -L 5901:10.250.0.1:5901 paulth@cm-mgmt.in.tum.de
emu01:     sudo ssh -L 443:10.150.0.1:443 -L 5900:10.150.0.1:5900 -L 5901:10.150.0.1:5901 paulth@cm-mgmt.in.tum.de

After that you can open the iDrac web interface by typing: https://localhost into your web browser.

Remote racadm

The racadm tool can be used in two ways:

  1. SSH into the idrac
  2. Install the racadm package on a Dell server and use it locally on that server (local), or control remote iDrac interfaces (remote)

The racadm package throws errors if installed on a non-Dell server, but the racadm binary is still successfully downloaded to /opt/dell/srvadmin/bin/idracadm7. Even though it is called idracadm7 it also works for iDrac8 and is the newest version (9.3.0). Use the following commands to set up a working remote racadm environment on a non-Dell server. The commands work for Ubuntu 18.04 only!! Have a look at the current versions:

sudo su
echo "deb http://linux.dell.com/repo/community/openmanage/930/bionic bionic main" > /etc/apt/sources.list.d/linux.dell.com.sources.list
gpg --keyserver-options http-proxy=http://proxy.in.tum.de:8080 --keyserver pool.sks-keyservers.net --recv-key 1285491434D8786F
gpg -a --export 1285491434D8786F | sudo apt-key add -
sudo apt update
# install packages - libssl required for ssl connection to idrac
sudo apt install libssl-dev srvadmin-idracadm8
sudo cp /opt/dell/srvadmin/bin/idracadm7 /root/
sudo ln -sf /root/idracadm7 /usr/bin/racadm
# remove broken package installation
sudo apt purge srvadmin-base srvadmin-hapi srvadmin-idracadm7
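
Once the binary is in place, racadm can also talk to a remote iDrac directly (a sketch; sim01's iDrac IP is used as the example target, credentials come from the password safe):

# query a remote iDrac over the management network
racadm -r 10.250.0.1 -u root -p '<password-from-safe>' getsysinfo
racadm -r 10.250.0.1 -u root -p '<password-from-safe>' serveraction powerstatus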

MAAS

  1. MAAS can install the operating system with a few clicks
  2. Restart the server with PXE enabled on the network card interface that is connected to the Brocade switch and sits in the internal network, vlan_133
  3. The server should show up on the “Nodes” tab with a randomly selected name
  4. To Commission the server (prepare it) and finally Deploy it (boot and install the OS), the power settings have to be set

Power Configuration

  1. To shut down and start servers, a new user is created on the iDrac
  2. TODO user creation commands iDRAC
  3. After the user is created power settings have to be set in MAAS
  4. In order to reach the iDrac a route has to be established on mon01
    ip route add 10.200.0.0/24 via 131.159.24.136
  5. On vmott2 a NAT needs to be configured to forward packets from MAAS to the according server
    #uncomment in /etc/sysctl.conf
    net.ipv4.ip_forward = 1
    # iptables NAT - rewrite all incoming packets from MAAS to internal vmott2 interface
    sudo iptables -t nat -A POSTROUTING -s 131.159.24.39 -j SNAT --to-source 10.0.0.1 
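
To verify on vmott2 that forwarding is active and the NAT rule is installed (standard checks, shown as a sketch):

# reload sysctl settings and confirm forwarding is enabled
sudo sysctl -p
sysctl net.ipv4.ip_forward
# list the POSTROUTING rules with packet counters
sudo iptables -t nat -L POSTROUTING -n -v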

Alternative Installation

If the server has the enterprise license the image can be mounted virtually from the iDrac interface. With the Express or any other license a boot stick/CD needs to be prepared and mounted manually down in the server room.

  1. Log in on the iDrac webinterface -more information
  2. Virtual console → Start virtual console (next to options) → Java application is downloaded and executed, make sure that you have java runtime environment installed and activated for browser content.
  3. In the console window → Virtual Device → Connect virtual device; then console window again → Virtual Device → Assign DVD/CD → choose the desired image → Assign device
  4. After choosing the image you have to set the boot option: Console window → Next start → virtual DVD/CD/ISO
  5. Now you have to restart/power on the server: Console window → Power → System on

OS Installation

  1. After mounting the boot image and choosing the image as next boot option, restart the server over the iDrac interface
  2. Now the normal operating system installation dialog should show up
  3. Configure the general operating system settings as shown below
  4. After that go to the next section - OS configuration.

ubuntu-16.04-server-amd64

  1. Language: English
  2. Install Ubuntu Server
  3. Select a language: English - English
  4. Country, territory or area: other→Europe→Germany
  5. Country to base default locale settings: United States - en_US.UTF-8
  6. Keyboard Layout: choose as you wish, most of the time English(US)
  7. Choose network interface: select the interface of the configured chair network interface, more information in the server administration chapter
  8. Autoconfiguration
  9. Choose hostname: server name / rbg-hostname (default if autoconfiguration successful) without the cm, example → sim01-cm = sim01
  10. User Full name: i11
  11. User account: i11
  12. Password: at least 12 characters with numbers, upper- and lower-case letters and special characters; store it in the password safe, more in chapter Passwords
  13. Encrypt home directory: No (Depends on purpose)
  14. Setting up clock: Automatically, if not choose timezone
  15. Time zone: Europe/Berlin
  16. Try to unmount disks that are in use: yes
  17. Partition disks: Manual
  18. Partition menu: go down to the disk with free space→create a new partition→size:max→type:primary
  19. Partition settings: use as: ext4, mount point: /, bootable flag: off → Done setting up the partition
  20. Finish partitioning and write changes to disk
  21. No Swap, continue without swap
  22. Write changes to disk: Yes
  23. Installing the system: automatic
  24. System upgrades: Install security updates automatically
  25. Software: System Utilities + OpenSSH Server (select with space, enter to confirm)
  26. After installation unmount iso image and restart server

ubuntu-16.04-desktop-amd64

  1. Language: English
  2. Install Ubuntu
  3. Installation Type: Erase disk and install Ubuntu + Use LVM with the new Ubuntu installation
  4. Where are you: Berlin
  5. Keyboard Layout: English(US) - English(US)
  6. Settings: Your name: i11, Your computer's name: <server-name> (e.g. emu01), Username: i11, Password: at least 12 characters with numbers, upper- and lower-case letters and special characters; store it in the password safe, more in chapter Passwords
  7. After installation unmount iso image and restart server

Operating system configuration

This chapter describes the configuration and integration of a Linux server into our chair environment. The steps should be done in the following order:

  1. Configure autofs for automatic filesystem mounts
  2. LDAP authentication

NAS Share + Home automount

  1. Install autofs
    sudo apt-get install autofs
  2. Edit /etc/auto.master:
    /- /etc/auto.direct
  3. Create the file /etc/auto.direct
    /home       -fstype=nfs,defaults  nasil11.informatik.tu-muenchen.de:/srv/il11/home_il11
    /share      -fstype=nfs,defaults  nasil11.informatik.tu-muenchen.de:/srv/il11/share_il11
  4. If necessary move existing home out of the way
    sudo mv /home /home.old
  5. Reload autofs
    sudo service autofs restart
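
A quick way to verify the automounts (accessing the paths triggers the mount):

ls /home /share
mount | grep nfs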

LDAP authentication

  1. Install LDAP packages
    sudo apt-get install nslcd ldap-utils
  2. Edit /etc/nslcd.conf, replace the respective lines:
    uri ldaps://ldapswitch.in.tum.de:636
    base ou=IN,o=TUM,c=de
    map passwd homeDirectory "/home/$uid"
  3. Edit /etc/nsswitch.conf
    passwd:         files ldap
    group:          files ldap
    shadow:         files ldap
  4. Update the nslcd service
    sudo update-rc.d nslcd enable
  5. Automatically create home folders when logging in for the first time, edit /etc/pam.d/common-session
    session required        pam_mkhomedir.so skel=/etc/skel umask=0022
  6. Add empty .Xauthority to the /etc/skel directory
  7. Restrict access to groups, create a file /etc/login.group.allowed (0644), fill in the groups
    i11
    il11admin
  8. To restrict to groups edit /etc/pam.d/common-auth, add to top of file
    auth    required                        pam_listfile.so onerr=fail item=group sense=allow file=/etc/login.group.allowed
  9. On authentication failure use pam_unix for SSH → first search LDAP, then locally; edit /etc/pam.d/common-auth
    auth    sufficient                      pam_ldap.so try_first_pass
    auth    sufficient                      pam_unix.so nullok use_first_pass
  10. Edit sudoers file for sudo access /etc/sudoers
    # Members of the LDAP group: il11admin get root privileges
    %il11admin ALL=(ALL) ALL 
  11. Allow password change from all servers
    • Edit /etc/pam.d/common-session
      session optional    pam_ldap.so
    • Edit /etc/pam.d/common-auth, remove the use_authtok, already done above
      auth  sufficient  pam_unix.so nullok use_first_pass
  12. In the last step restart all authentication services and hope for the best ;)
    #it will ask if you want to override the local changes -> choose no
    sudo pam-auth-update
    sudo service nscd stop
    sudo service nslcd restart
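
A quick check that LDAP lookups and PAM work (replace <ldap-user> with a real account):

# the user should be resolved via LDAP
getent passwd <ldap-user>
id <ldap-user>
# first login should also create the home directory
su - <ldap-user>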

Listed below are all the files (with their content) that were changed during the process:

Autofs

/etc/auto.direct

/share      -fstype=nfs,defaults  nasil11.informatik.tu-muenchen.de:/srv/il11/share_il11
#virtual machine
/home_i11   -fstype=nfs,defaults  nasil11.informatik.tu-muenchen.de:/srv/il11/home_il11
#server
/home       -fstype=nfs,defaults  nasil11.informatik.tu-muenchen.de:/srv/il11/home_il11

/etc/auto.master

# Sample auto.master file
# This is an automounter map and it has the following format
# key [ -mount-options-separated-by-comma ] location
# For details of the format look at autofs(5).
#
/-      /etc/auto.direct
#
# NOTE: mounts done from a hosts map will be mounted with the
#       "nosuid" and "nodev" options unless the "suid" and "dev"
#       options are explicitly given.
#
#/net   -hosts
#
# Include /etc/auto.master.d/*.autofs
#
+dir:/etc/auto.master.d
#
# Include central master map if it can be found using
# nsswitch sources.
#
# Note that if there are entries for /net or /misc (as
# above) in the included master map any keys that are the
# same will not be seen as the first read key seen takes
# precedence.
#
+auto.master
/mount /etc/auto_mount -nosuid,noquota

Ldap

/etc/login.group.allowed

root
il11admin

/etc/nslcd.conf

# /etc/nslcd.conf
# nslcd configuration file. See nslcd.conf(5)
# for details.

# The user and group nslcd should run as.
uid nslcd
gid nslcd

# The location at which the LDAP server(s) should be reachable.
uri ldap://ldapswitch.informatik.tu-muenchen.de

# The search base that will be used for all queries.
base ou=IN,o=TUM,c=de

map passwd homeDirectory "/home_i11/$uid"

# The LDAP protocol version to use.
#ldap_version 3
...

/etc/nsswitch.conf

# /etc/nsswitch.conf
#
# Example configuration of GNU Name Service Switch functionality.
# If you have the `glibc-doc-reference' and `info' packages installed, try:
# `info libc "Name Service Switch"' for information about this file.

passwd:         files ldap
group:          files ldap
shadow:         files ldap

hosts:          files dns
networks:       files

protocols:      db files
services:       db files
ethers:         db files
rpc:            db files

netgroup:       nis

/etc/pam.d/common-session

...
# here are the per-package modules (the "Primary" block)
session [default=1]                     pam_permit.so
# here's the fallback if no module succeeds
session requisite                       pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
session required                        pam_permit.so
# The pam_umask module will set the umask according to the system default in
# /etc/login.defs and user settings, solving the problem of different
# umask settings with different shells, display managers, remote sessions etc.
# See "man pam_umask".
session optional                        pam_umask.so
# and here are more per-package modules (the "Additional" block)
session required        pam_unix.so
session [success=ok default=ignore]     pam_ldap.so minimum_uid=1000
session optional        pam_systemd.so
session required        pam_mkhomedir.so skel=/etc/skel umask=0022
# end of pam-auth-update config

/etc/pam.d/common-auth

...
# here are the per-package modules (the "Primary" block)
auth    required                        pam_listfile.so onerr=fail item=group sense=allow file=/etc/login.group.allowed

auth    sufficient                      pam_ldap.so try_first_pass
auth    sufficient                      pam_unix.so nullok use_first_pass
# here's the fallback if no module succeeds
auth    requisite                       pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
auth    required                        pam_permit.so
# and here are more per-package modules (the "Additional" block)
auth    optional                        pam_cap.so
# end of pam-auth-update config

/etc/sudoers

#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults        env_reset
Defaults        mail_badpass
Defaults        secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

# Host alias specification

# User alias specification

# Cmnd alias specification

# User privilege specification
root    ALL=(ALL:ALL) ALL

# Members of the LDAP group: il11admin get root privileges
%il11admin ALL=(ALL) ALL

# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL

# Allow members of group sudo to execute any command
%sudo   ALL=(ALL:ALL) ALL

# See sudoers(5) for more information on "#include" directives:

#includedir /etc/sudoers.d
%root   ALL=(ALL) NOPASSWD: ALL

/etc/pam.d/common-password

....
# here are the per-package modules (the "Primary" block)
password        [success=2 default=ignore]      pam_unix.so obscure sha512
password        [success=1 default=ignore]      pam_ldap.so minimum_uid=1000 try_first_pass
# here's the fallback if no module succeeds
password        requisite                       pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
password        required                        pam_permit.so
# and here are more per-package modules (the "Additional" block)
# end of pam-auth-update config

LDAP authentication with local caching

  1. Install ldap-utils and sssd:
    # apt install ldap-utils sssd
  2. Edit the configuration file for sssd in /etc/sssd/sssd.conf:
    [sssd]
    config_file_version = 2
    services = nss, pam
    domains = LDAP
    
    [domain/LDAP]
    cache_credentials = true
    enumerate = false
    
    id_provider = ldap
    auth_provider = ldap
    
    ldap_uri = ldaps://ldap.in.tum.de:636
    ldap_search_base = ou=IN,o=TUM,c=DE
    ldap_network_timeout = 2
    
    entry_cache_timeout = 7776000
  3. Change the file's permissions to 600, otherwise sssd will fail to start
    # chmod 600 /etc/sssd/sssd.conf 
  4. Disable NSCD caching for passwd, group and netgroup, as it would interfere with sssd caching. Change the following lines in /etc/nscd.conf :
    enable-cache    passwd    no
    enable-cache    group     no
    enable-cache    netgroup  no
  5. Configure NSS to get user and group information from sssd. To do this, append sss to the passwd, group, shadow and sudoers lines in /etc/nsswitch.conf:
    passwd:    files sss
    group:     files sss
    shadow:    files sss
    sudoers:   files sss
  6. Put any groups you want to be able to login via LDAP in /etc/login.group.allowed. Do the same thing with individual users in /etc/login.user.allowed. Make sure both files exist, even if one of them may be empty.
    # echo il11 >> /etc/login.group.allowed
    # touch /etc/login.user.allowed
  7. Change the permissions of each file to 0644
    # chmod 0644 /etc/login.group.allowed
    # chmod 0644 /etc/login.user.allowed
  8. Configure PAM to allow LDAP login
    • First edit /etc/pam.d/common-auth
      auth sufficient    pam_unix.so nullok
      auth sufficient    pam_sss.so use_first_pass
      auth requisite     pam_deny.so
    • To restrict users who are allowed to log in via ssh, add the following two lines to /etc/pam.d/sshd immediately before common-account is included:
      account [success=1 new_authtok_reqd=1 default=ignore]    pam_listfile.so onerr=fail item=group sense=allow file=/etc/login.group.allowed
      account required    pam_listfile.so onerr=fail item=user sense=allow file=/etc/login.user.allowed
    • Certain users should be granted sudo privileges upon login. For those, create entries in /etc/security/group.conf
      # members of il11admin are always granted sudo access
      *;*;%il11admin;Al0000-2400;adm
      # For other users create extra entries
      *;*;exampleuser;Al0000-2400;adm
    • In order for these rules to apply add the following line to /etc/pam.d/sshd and /etc/pam.d/login before common-auth is included
      auth optional           pam_group.so
    • Home directories should be created if not present. Add the following to /etc/pam.d/common-session
      session required    pam_mkhomedir.so skel=/etc/skel umask=0022
    • Also create an empty .Xauthority in /etc/skel
      # touch /etc/skel/.Xauthority
  9. Finally update PAM (when asked if you wish to overwrite local changes choose no), restart NSCD and start sssd
    # pam-auth-update
    # service nscd restart
    # service sssd start
  10. LDAP login is now enabled
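
To verify sssd lookups and, if necessary, clear the local cache (sss_cache ships with the sssd packages; the user name is only an example):

getent passwd <ldap-user>
# invalidate all cached entries, e.g. after changes on the LDAP server
sudo sss_cache -E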

Filesystem Setup

  1. Create a data folder under / with permission 777
    sudo mkdir /data && sudo chmod 777 /data

NTP TimeServer

Fail2Ban

Fail2Ban is an intrusion prevention system. It bans IP addresses after too many failed login attempts.

  1. Install the fail2ban package
    sudo apt-get install fail2ban
  2. Copy the template configuration file
    sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
  3. Edit the configuration file and adjust settings
    sudo vim /etc/fail2ban/jail.local
    bantime=3600 # adjust value
    ...
    [sshd]
    enabled=true # add line
  4. Restart fail2ban Service
    sudo service fail2ban restart
  5. If desired, e-mail notification can be enabled by setting parameters in the configuration file; make sure that sendmail is installed
    sudo vim /etc/fail2ban/jail.local
    destemail = root@mailschlichter.informatik.tu-muenchen.de
    ...
    action = %(action_mwl)s
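
To inspect the jail status or to unban an address, the usual fail2ban-client calls can be used (sketch):

sudo fail2ban-client status sshd
sudo fail2ban-client set sshd unbanip 1.2.3.4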

Automatic Security Updates
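
A minimal sketch using Ubuntu's stock unattended-upgrades mechanism (an assumption, adjust to the chair's policy):

sudo apt install unattended-upgrades
# enables the periodic apt / unattended-upgrade jobs
sudo dpkg-reconfigure -plow unattended-upgrades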

Other

- Show system information on ssh login, install landscape-common

sudo apt-get install landscape-common

Passwords

For secure password storage we maintain a KeePass file where all the passwords are stored. Ask the previous/current administrator for the most recent password file.

  • There is an admin user for every server called i11
  • The password is the same for every server purpose - e.g. passwords are the same for all simulation servers, all emulation servers, and so on

Chair Network

  • Network: 131.159.24.0/23
  • Subnet mask: 255.255.254.0
  • Broadcast: 131.159.25.255
  • RBG-Network intern VLAN: VLan83

Default Login iDrac

A new server has default login credentials for the iDrac interface:

  • Login: root
  • Password: calvin
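
The default password should be changed right away, for example via racadm (a sketch; user index 2 is usually the built-in root account on iDRAC8, verify with racadm get iDRAC.Users.2.UserName):

racadm set iDRAC.Users.2.Password '<new-password-from-safe>'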

Service Tag Numbers

Server | Service Tag | License
Devimg01 | 31ZMQ92 | Enterprise
Testbed01 | 303SQ92 | Enterprise
Testbed02 | 304MQ92 | Enterprise
Net01 | 31VSQ92 | Enterprise
Net02 | 31XPQ92 | Enterprise
Net03 | CWKTQ92 | Enterprise
Sim01 | 49P50D2 | Enterprise
Emu01 | DWX40D2 | Enterprise

Naming Scheme

  • Domain Names: <host>.cm.in.tum.de
  • Storage (secure storage, integrated or separate, and computation)
    • st01 – st<nn>
  • Development
    • devimg01 – devimg<nn>
  • Simulation
    • sim01 – sim<nn>
  • Testbed (RIFE, SSICLOPS)
    • testbed01 – testbed<nn>
  • Network Performance tests
    • net01 – net<nn>
  • Emulation (VMs..)
    • emu01 – emu<nn>

TUM Tag + Server Label

A few weeks after the server hardware order, the bill should arrive at the secretary's office together with a TUM label. This label should be put on the respective server, in the same position as on the servers already labeled in the server room. Together with the label, put a paper with the server name (e.g. emu01) on each server: print the server name (e.g. sim01.cm.in.tum.de) on white paper (LibreOffice: DejaVu Sans, 12) and fix it with some sellotape.

  1. Put TUM label on server, same position as on the other servers
  2. Print white paper with server name + cm.in.tum.de (DejaVu Sans, 12) and fix it on the server next to TUM label

Atschlichter3 Shutdown

Commands to restart atschlichter3 when GRUB is corrupted and a restart ends up in a minimal GRUB shell.
Commands:

  1. set root=md0
  2. linux /vmlinuz root=/dev/md0
  3. initrd /initrd.img
  4. boot

The server should boot and is reachable via ssh.
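
Since the commands above only fix a single boot, it may be worth reinstalling GRUB once the system is up so the next reboot works unattended (a sketch; /dev/md0 as root is taken from above, the member disks /dev/sda and /dev/sdb are assumptions):

sudo grub-install /dev/sda
sudo grub-install /dev/sdb
sudo update-grub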

Firmware Updates

The Dell servers can be booted with a platform-specific bootable ISO which updates all the server components automatically. It is recommended to do this once in a while, as hardware issues can also be resolved by updating the firmware (e.g. RAM problems).
Ansible (semi-automatic):

  • Download all bootable ISO images from the Dell support website (see manual process) and save them on the idrac-gtw:/srv/idrac-img/download/ (vmott22)
  • Log in on AWX and execute the “infra - iDrac Update ISOs” playbook
  • After that make sure nobody is using the server in question and execute the “infra - iDrac Tasks” playbook with the limit set to the server and “Update iDrac” set to true.
  • Server will be offline during the update

Manual process:

  • Download the server model specific iso from this website
  • OR go to this website, enter the server Service Tag, choose the category “Systemverwaltung” (systems management), search for “Platform Specific Bootable ISO” and download it
  • Start the server iDrac management tool
  • Mount the ISO image as a virtual CD, select Virtual CD as boot parameter and reboot the server
  • Server should automatically boot into the ISO file and start the updates (can take up to 1 hour or more)
  • After updates are finished boot the server into the same ISO, due to firmware dependencies the update should run twice as sometimes not all updates can be made in the first run
  • After running the ISO twice, it can be unmounted and the server rebooted from the hard drive
  • Done