The article describes two hub-spoke VNets in two different regions, connected by a VNet-to-VNet connection. The VNet-to-VNet connection provides encrypted IPsec communication between the two hub VNets. Each hub VNet contains two Linux VMs configured with IP forwarding (nva11 and nva12 in hub1, nva21 and nva22 in hub2). In each hub VNet an internal standard load balancer (ILB) is deployed and configured with HA ports. The ILB provides high availability for the flows transiting through the NVA VMs. The network diagram is shown below:
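For reference, an HA-ports load-balancing rule in an ARM template looks roughly like the fragment below. This is a hedged sketch, not the exact content of 2hubspoke.json: the rule name and the variables holding the resource IDs are placeholders, and only the HA-ports-relevant properties are shown. With `"protocol": "All"` and frontend/backend port 0, the rule balances all flows on all ports, which is what lets a single rule cover the transit traffic through the NVAs.

```json
{
  "name": "ha-ports-rule",
  "properties": {
    "frontendIPConfiguration": { "id": "[variables('ilbFrontendId')]" },
    "backendAddressPool": { "id": "[variables('ilbBackendPoolId')]" },
    "probe": { "id": "[variables('ilbProbeId')]" },
    "protocol": "All",
    "frontendPort": 0,
    "backendPort": 0
  }
}
```

HA ports require a Standard SKU internal load balancer, which is what the article deploys.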
The ARM template creates the entire environment; IP forwarding on nva11, nva12, nva21 and nva22 must be enabled manually in the OS.
Note
Before spinning up the ARM template you should:
- set the Azure subscription name in the file 2hubspoke.ps1
- set the administrator username and password in the file 2hubspoke.json
sed -i -e '$a\net.ipv4.ip_forward = 1' /etc/sysctl.conf
sysctl -p
sysctl net.ipv4.ip_forward
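To verify that forwarding is active in the running kernel (and not only present in /etc/sysctl.conf, which is applied at boot or by `sysctl -p`), the live value can also be read from /proc; a minimal sketch:

```shell
#!/bin/bash
# Read the live kernel value of net.ipv4.ip_forward from /proc.
v=$(cat /proc/sys/net/ipv4/ip_forward)
if [ "$v" = "1" ]; then
    echo "ip_forward is enabled"
else
    echo "ip_forward is disabled"
fi
```

If this prints "disabled" after editing sysctl.conf, the change was written to the file but not yet applied to the running kernel.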
The Azure internal load balancers (ilb1 and ilb2) require a custom port open on the VMs in the backend pool for the health check. In our ARM template the probes are defined on TCP port 80. httpd needs to be installed and started on nva11, nva12, nva21 and nva22:
yum -y install httpd
systemctl enable httpd
systemctl start httpd
systemctl status httpd
As an alternative to httpd, nginx is available on CentOS VMs in the EPEL repository:
yum install epel-release
yum -y install nginx
systemctl enable nginx
systemctl start nginx
systemctl status nginx
curl 127.0.0.1
To run HTTP queries in parallel, GNU parallel can be used. On CentOS, GNU parallel is available in the EPEL repository:
yum -y install parallel
The Annex reports the procedure to increase the maximum number of open files in the OS (optional step).
vi client.sh
#!/bin/bash
# Issue an HTTP GET and print how long it took.
mycurl() {
  START=$(date +%s)
  # -s silences progress output; the response body is redirected to /dev/null
  curl -s "http://some_url_here/$1" > /dev/null
  END=$(date +%s)
  DIFF=$(( END - START ))
  echo "It took $DIFF seconds"
}
export -f mycurl
# -j0 runs as many jobs in parallel as possible
seq 100000 | parallel -j0 mycurl
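Note that `date +%s` only has one-second resolution, so short requests will often report 0 seconds. The timing pattern itself can be checked offline; a sketch with `sleep` standing in for the HTTP request:

```shell
#!/bin/bash
# Same timing pattern as mycurl, but with a local command in place of curl.
mytime() {
  START=$(date +%s)
  sleep 1                     # stand-in for the HTTP request
  END=$(date +%s)
  DIFF=$(( END - START ))
  echo "It took $DIFF seconds"
}
mytime
```

For sub-second measurements, curl's own `-w '%{time_total}'` write-out variable is more precise than wrapping the call with date.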
client (runs client.sh) | server (runs nginx) | NVA VMs traversed (check flows with tcpdump) | tcpdump command |
---|---|---|---|
vm1-10.0.11.10 | vm2-10.0.12.10 | nva11, nva12, nva21, nva22 | tcpdump -nqt host 10.0.11.10 |
vm1-10.0.11.10 | vm3-10.0.3.10 | nva11, nva12 | tcpdump -nqt host 10.0.11.10 |
vm1-10.0.11.10 | vm4-10.0.4.10 | nva11, nva12, nva21, nva22 | tcpdump -nqt host 10.0.11.10 |
vm2-10.0.12.10 | vm3-10.0.3.10 | nva11, nva12, nva21, nva22 | tcpdump -nqt host 10.0.12.10 |
vm2-10.0.12.10 | vm4-10.0.4.10 | nva21, nva22 | tcpdump -nqt host 10.0.12.10 |
vm3-10.0.3.10 | vm4-10.0.4.10 | nva11, nva12, nva21, nva22 | tcpdump -nqt host 10.0.3.10 |
Path from vm3 to vm4:
Path from vm1 to vm2:
This is an optional configuration (you can skip it if you are not interested in increasing the system parameters). CentOS has a limit on the maximum number of simultaneously open files:
ulimit -Hn
4096
Show the system-wide maximum number of file handles:
cat /proc/sys/fs/file-max
88766
Change the limit in /etc/sysctl.conf (vi /etc/sysctl.conf):
fs.file-max = 200500
Increase the hard and soft limits in /etc/security/limits.conf (vi /etc/security/limits.conf):
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
Check the files under /etc/security/limits.d:
cd /etc/security/limits.d
cat 20-nproc.conf
Content of the file 20-nproc.conf:
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
* soft nproc unlimited
root soft nproc unlimited
Run the command:
sysctl -p
The command prints the new value: fs.file-max = 200500
Log out and log back in for the new limits to take effect.
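After logging back in, the new values can be verified; a small sketch of the checks:

```shell
#!/bin/bash
# Per-process descriptor limits for the current shell: soft, then hard.
ulimit -Sn
ulimit -Hn
# System-wide maximum number of file handles.
cat /proc/sys/fs/file-max
```

With the limits.conf entries above in place, both ulimit values should report 65535, and file-max should report 200500.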