
Open vSwitch* with DPDK*
Hands-on Lab
Clayne Robison
Ashok Emani
July 11, 2016
Lab Objective
After this lab you will be able to:
• Build and install the DPDK
• Build, install, and configure Open vSwitch* with DPDK support in a nested VM environment
• Push network traffic over the Open vSwitch* + DPDK connection
Stretch Objectives:
• Compare the performance of iperf3 with/without DPDK support
Things we assume you already have:
• Experience creating VMs
• Experience compiling code from the command line
• SSH client installed on your laptop

Agenda
1. Get set up
2. Build DPDK
3. Build Open vSwitch* with DPDK support
4. Configure System for Open vSwitch* and DPDK support
5. Configure Open vSwitch* to work with virtual machines
6. (Optional) Run iperf3 with DPDK
7. (Optional) Run iperf3 without DPDK

Setup

Connect to Development Machine
Jump Server
• IP Address: 207.108.8.161
• SSH v2 preferred
• Username: student<1-20>
• Password: student<1-20>
Host Machine
• Hostname: dcomp__
• Username: student<1-20>
• Password: student<1-20>
Development VM
• VM Domain: student__
# sudo virsh domifaddr student__
# ssh user@<ipaddr>
• Username: user (root)
• Password: user (root)

Enable Hugepage support
Enable Hugepages and mount the hugepage TLB
• sudo su
# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

• On NUMA machines
# echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
# echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
• etc. (one echo per NUMA node)

Mount the hugepage TLB


# mkdir /mnt/huge
# mkdir -p /mnt/huge_2mb
# mount -t hugetlbfs nodev /mnt/huge
# mount -t hugetlbfs none /mnt/huge_2mb -o pagesize=2MB
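
A quick sanity check (not part of the original lab steps) that the allocation and mounts took effect:
# grep Huge /proc/meminfo
# mount | grep hugetlbfs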
Build DPDK
DPDK Tarball located in /home/user
# tar -xvf dpdk-16.04.tar.xz
# cd dpdk-16.04
# export DPDK_DIR=`pwd`
# sed 's/CONFIG_RTE_BUILD_COMBINE_LIBS=n/CONFIG_RTE_BUILD_COMBINE_LIBS=y/' -i config/common_linuxapp
# make config T=x86_64-ivshmem-linuxapp-gcc
# cd build
# EXTRA_CFLAGS="-g -Ofast" make -j10
# EXTRA_CFLAGS="-g -Ofast" make install -j10

Build Open vSwitch* with DPDK support
Open vSwitch* source tree already pulled from git repo into
/home/user/ovs
# cd /home/user/ovs
# export OVS_DIR=`pwd`
# ./boot.sh
# ./configure --with-dpdk="$DPDK_DIR/build/" CFLAGS="-g -Ofast"
# make 'CFLAGS=-g -Ofast -march=native' -j10
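
To confirm the Open vSwitch* binaries were produced, print their version strings (a quick check only; it does not by itself prove DPDK was linked in):
# ./vswitchd/ovs-vswitchd --version
# ./utilities/ovs-vsctl --version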

Create the OVS database and start the server
If you’ve never set up ovs on this machine, start here
# mkdir -p /usr/local/etc/openvswitch
# mkdir -p /usr/local/var/run/openvswitch
# cd $OVS_DIR
# ./ovsdb/ovsdb-tool create /usr/local/etc/openvswitch/conf.db \
./vswitchd/vswitch.ovsschema
# ./ovsdb/ovsdb-server \
--remote=punix:/usr/local/var/run/openvswitch/db.sock \
--remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
# ./utilities/ovs-vsctl --no-wait set Open_vSwitch . \
other_config:dpdk-init=true \
other_config:dpdk-lcore-mask=0x3 \
other_config:dpdk-socket-mem=2048
# ./utilities/ovs-vsctl --no-wait init
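An optional sanity check that the DPDK options were stored in the database:
# ./utilities/ovs-vsctl --no-wait get Open_vSwitch . other_config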
Optional: 1GB hugepage support
These steps are only needed if you want 1GB hugepages; 2MB hugepage support doesn't require kernel parameters or a reboot.
• Append the following parameters to GRUB_CMDLINE_LINUX in /etc/default/grub
• default_hugepagesz=1G hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=2048 iommu=pt intel_iommu=on isolcpus=1-13,15-27
• Run the following commands
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot
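
After the reboot, 1GB pages can be mounted alongside the 2MB mount created earlier (a sketch; the mount point name /mnt/huge_1gb is arbitrary):
# mkdir -p /mnt/huge_1gb
# mount -t hugetlbfs nodev /mnt/huge_1gb -o pagesize=1GB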

Optional: load vfio-pci driver
This step is only necessary if you are using a physical NIC (which we are not)
# modprobe vfio-pci
# cp $DPDK_DIR/tools/dpdk_nic_bind.py /usr/bin/.
# dpdk_nic_bind.py --status
# dpdk_nic_bind.py --bind=vfio-pci 05:00.1
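
vfio-pci relies on the IOMMU being enabled (the iommu=pt intel_iommu=on parameters shown on the GRUB slide); a quick check before binding:
# ls /sys/kernel/iommu_groups    # should not be empty when the IOMMU is active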

Start the Open vSwitch* Daemon
# modprobe openvswitch
# $OVS_DIR/vswitchd/ovs-vswitchd \
unix:/usr/local/var/run/openvswitch/db.sock \
--pidfile --detach
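
An optional check that the daemon is running before adding ports:
# pgrep -a ovs-vswitchd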

Add vhost-user ports
These steps set up the bridge and ports used for VM-to-VM communication
# $OVS_DIR/utilities/ovs-vsctl show
# $OVS_DIR/utilities/ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
• NOTE: Only for physical ports (not used in this lab):
# $OVS_DIR/utilities/ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
# $OVS_DIR/utilities/ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
# $OVS_DIR/utilities/ovs-vsctl add-port br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
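
Each dpdkvhostuser port creates a socket that QEMU connects to in the next step; you can confirm the sockets exist:
# ls /usr/local/var/run/openvswitch/ | grep vhost-user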

Generate VMs that use DPDK
The OVS processes take all of our allocated hugepages, so allocate some more for the nested VMs
# echo 1536 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

Create/launch a guest VM that uses our OVS + DPDK infrastructure:


# qemu-system-x86_64 -m 1024 -smp 4 -cpu host -hda <your-boot-image> -boot c -enable-kvm \
-no-reboot -nographic -net none -chardev \
socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user1 \
-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \
-object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem -mem-prealloc

Or, just run these scripts:


# /home/user/createvm1
# /home/user/createvm2

Verify that VMs can communicate
You can just ping. If you have time, run iperf3. If you have even more time, run tcpdump
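
A minimal sequence inside the two guests (the interface name eth0 and the 10.0.0.0/24 addresses are assumptions; substitute whatever the guests actually report):
(VM1) # ip addr add 10.0.0.1/24 dev eth0 && ip link set eth0 up
(VM1) # iperf3 -s
(VM2) # ip addr add 10.0.0.2/24 dev eth0 && ip link set eth0 up
(VM2) # ping -c 3 10.0.0.1
(VM2) # iperf3 -c 10.0.0.1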

Optional: Compare performance without DPDK
Install stock Open vSwitch* from OS repo
# pkill -9 ovs
# dnf install openvswitch
# rm -f /etc/openvswitch/conf.db
# mkdir -p /var/run/openvswitch
# ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema
# ovsdb-server --remote=punix:/var/run/openvswitch/db.sock \
--remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
# ovs-vsctl --no-wait init
# ovs-vswitchd unix:/var/run/openvswitch/db.sock --pidfile --detach
# ovs-vsctl add-br br0
# ovs-vsctl show

Optional: Configure VMs that don't use DPDK
These VMs use tap devices and a non-DPDK OVS bridge
# qemu-system-x86_64 -m 512 -smp 4 -cpu host \
-hda /var/lib/libvirt/images/dpdkfedora23-a.qcow2 -boot c -enable-kvm \
-no-reboot -nographic -net nic,macaddr=00:11:22:EE:EE:EE \
-net tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown

# qemu-system-x86_64 -m 512 -smp 4 -cpu host \
-hda /var/lib/libvirt/images/dpdkfedora23-a.qcow2 -boot c -enable-kvm \
-no-reboot -nographic -net nic,macaddr=00:11:23:EE:EE:EE \
-net tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown
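
The /etc/ovs-ifup and /etc/ovs-ifdown scripts are not included in the lab material; a minimal sketch of their typical contents (assuming the bridge is named br0, as created on the previous slide):

#!/bin/sh
# /etc/ovs-ifup: bring the tap device up and attach it to the OVS bridge
ip link set $1 up
ovs-vsctl add-port br0 $1

#!/bin/sh
# /etc/ovs-ifdown: detach the tap device from the bridge and bring it down
ovs-vsctl del-port br0 $1
ip link set $1 down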

