#contents

* OpenFlow/OpenFlow Tutorial 1 [#b16df4b4]
>
To get an overview of OpenFlow, this article replicates the official tutorial: a virtual network is created with mininet, and its features are tested. The URL of the original tutorial is shown below.~
~
[[http://www.openflow.org/wk/index.php/OpenFlow_Tutorial#Start_Network]]~

** Creating a virtual network environment [#hf85599d]
>
Create a virtual network environment with mininet. The topology is shown below.~
#ref(mininet.jpg,,55%)~

>
To start the virtual network, run the command shown below.~
 $ sudo mn --topo single,3 --mac --switch ovsk --controller remote
 *** Creating network
 *** Adding controller
 Unable to contact the remote controller at 127.0.0.1:6633
 *** Adding hosts:
 h1 h2 h3
 *** Adding switches:
 s1
 *** Adding links:
 (h1, s1) (h2, s1) (h3, s1)
 *** Configuring hosts
 h1 h2 h3
 *** Starting controller
 *** Starting 1 switches
 s1
 *** Starting CLI:
 mininet>

>
This command creates three virtual hosts (h1, h2, and h3) and a software switch in the kernel. Each virtual host receives its own IP address, and because of the --mac option, each host's MAC address matches its host number. The software switch is based on Open vSwitch and has three ports; each virtual host is connected to the switch via a virtual Ethernet link. The OpenFlow switch is configured to connect to the remote controller (c0) over the loopback interface (127.0.0.1:6633).~
~
To check the list of nodes, run the mininet-specific command nodes.~
 mininet> nodes
 available nodes are:
 h2 h3 h1 s1 c0
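~
The net command, which is also mininet-specific, lists the links between the nodes. A sketch of the expected output for this topology is shown below (the exact format depends on the mininet version).~
 mininet> net
 h1 h1-eth0:s1-eth1
 h2 h2-eth0:s1-eth2
 h3 h3-eth0:s1-eth3
 s1 lo:  s1-eth1:h1-eth0 s1-eth2:h2-eth0 s1-eth3:h3-eth0
 c0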
~
To check the IP addresses of the virtual hosts and the switch, display the settings of each node with ifconfig. Note that mininet runs the switch in the root network namespace, so s1 ifconfig also lists the physical interfaces of the host machine.~

>
- h1

>
 mininet> h1 ifconfig
 h1-eth0   Link encap:Ethernet HWaddr  00:00:00:00:00:01
           inet addr:10.0.0.1  Bcast:10.255.255.255  Mask:255.0.0.0
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
 
 lo        Link encap:Local Loopback
           inet addr:127.0.0.1  Mask:255.0.0.0
           UP LOOPBACK RUNNING  MTU:16436  Metric:1
           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:0
           RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

>
- h2

>
 mininet> h2 ifconfig
 h2-eth0   Link encap:Ethernet HWaddr 00:00:00:00:00:02
           inet addr:10.0.0.2  Bcast:10.255.255.255  Mask:255.0.0.0
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
 
 lo        Link encap:Local Loopback
           inet addr:127.0.0.1  Mask:255.0.0.0
           UP LOOPBACK RUNNING  MTU:16436  Metric:1
           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:0
           RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

>
- h3

>
 mininet> h3 ifconfig
 h3-eth0   Link encap:Ethernet HWaddr 00:00:00:00:00:03
           inet addr:10.0.0.3  Bcast:10.255.255.255  Mask:255.0.0.0
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
 
 lo        Link encap:Local Loopback
           inet addr:127.0.0.1  Mask:255.0.0.0
           UP LOOPBACK RUNNING  MTU:16436  Metric:1
           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:0
           RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

>
- s1

>
 mininet> s1 ifconfig
 eth0      Link encap:Ethernet HWaddr 00:23:8b:56:f9:ed
           inet addr:192.168.0.146  Bcast:192.168.0.255  Mask:255.255.255.0
           inet6 addr: 2001:268:321:8000:223:8bff:fe56:f9ed/64 Scope:Global
           inet6 addr: fe80::223:8bff:fe56:f9ed/64 Scope:Link
           inet6 addr: 2001:268:321:8000:4c96:18c9:b0b1:bb7a/64 Scope:Global
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:4906 errors:0 dropped:0 overruns:0 frame:0
           TX packets:670 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:589463 (589.4 KB)  TX bytes:78711 (78.7 KB)
           Interrupt:16
 
 lo        Link encap:Local Loopback
           inet addr:127.0.0.1  Mask:255.0.0.0
           UP LOOPBACK RUNNING  MTU:16436  Metric:1
           RX packets:876 errors:0 dropped:0 overruns:0 frame:0
           TX packets:876 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:0
           RX bytes:58056 (58.0 KB)  TX bytes:58056 (58.0 KB)
 
 s1-eth1   Link encap:Ethernet HWaddr d6:7e:38:49:0f:3d
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
 
 s1-eth2   Link encap:Ethernet HWaddr 9e:62:1c:ea:62:0e
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
 
 s1-eth3   Link encap:Ethernet HWaddr da:ab:31:04:07:d6
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

** Example of how to use dpctl [#ue6de687]
>
Once mininet has booted, execute dpctl from a separate shell. dpctl is a command-line utility included in the OpenFlow reference implementation; it monitors and manages OpenFlow datapaths. The mininet switch listens for these management connections on TCP port 6634 (the controller connection uses 6633), which is why the commands below connect to tcp:127.0.0.1:6634.~
~
Run the show subcommand to connect to the OpenFlow switch and display its current status.~

>
 $ dpctl show tcp:127.0.0.1:6634
 features_reply (xid=0x77393f77): ver:0x1, dpid:1
 n_tables:255, n_buffers:256
 features: capabilities:0xc7, actions:0xfff
  1(s1-eth1): addr:d6:7e:38:49:0f:3d, config: 0, state:0
      current:    10GB-FD COPPER
  2(s1-eth2): addr:9e:62:1c:ea:62:0e, config: 0, state:0
      current:    10GB-FD COPPER
  3(s1-eth3): addr:da:ab:31:04:07:d6, config: 0, state:0
      current:    10GB-FD COPPER
  LOCAL(s1): addr:de:5f:fc:4b:49:4a, config: 0x1, state:0x1
 get_config_reply (xid=0x85b71d0d): miss_send_len=0

>
Run dump-flows to display the current contents of the flow table.~

>
 $ dpctl dump-flows tcp:127.0.0.1:6634
 stats_reply (xid=0x265ded5c): flags=none type=1(flow)

>
Since no OpenFlow controller is running yet, the flow table remains empty.~
~
At the mininet console, try to send a ping from h1 to h2.~

>
 mininet> h1 ping -c3 h2
 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
 From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
 From 10.0.0.1 icmp_seq=2 Destination Host Unreachable
 From 10.0.0.1 icmp_seq=3 Destination Host Unreachable
 
 --- 10.0.0.2 ping statistics ---
 3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 1999ms
 pipe 3

>
Since the flow table of the switch is still empty and no controller is connected to the switch, the switch does not know how to handle this traffic. Therefore, the ping does not go through.~
~
Next, add flows using the dpctl add-flow command.~

>
 $ dpctl add-flow tcp:127.0.0.1:6634 in_port=1,actions=output:2
 $ dpctl add-flow tcp:127.0.0.1:6634 in_port=2,actions=output:1
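
>
Note that dpctl installs these entries with a default idle_timeout of 60 seconds (visible in the dump-flows output below), so they expire if no matching packet arrives for a minute. As a sketch, a permanent entry could be installed by setting idle_timeout=0 explicitly in the flow specification.~

>
 $ dpctl add-flow tcp:127.0.0.1:6634 in_port=1,idle_timeout=0,actions=output:2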

>
From the other shell, check that the flows have been added to the flow table.~

>
 $ dpctl dump-flows tcp:127.0.0.1:6634
 stats_reply (xid=0x2fbc71b7): flags=none type=1(flow)
   cookie=0, duration_sec=14s, duration_nsec=964000000s, table_id=0, priority=32768, n_packets=0, n_bytes=0, idle_timeout=60,hard_timeout=0,in_port=1,actions=output:2
   cookie=0, duration_sec=8s, duration_nsec=934000000s, table_id=0, priority=32768, n_packets=0, n_bytes=0, idle_timeout=60,hard_timeout=0,in_port=2,actions=output:1
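
>
dpctl also provides the dump-ports subcommand, which reports per-port traffic counters; for example:~

>
 $ dpctl dump-ports tcp:127.0.0.1:6634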

>
Then, try to send a ping again.~

>
 mininet> h1 ping -c3 h2
 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
 64 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=1.39 ms
 64 bytes from 10.0.0.2: icmp_req=2 ttl=64 time=0.132 ms
 64 bytes from 10.0.0.2: icmp_req=3 ttl=64 time=0.140 ms
 
 --- 10.0.0.2 ping statistics ---
 3 packets transmitted, 3 received, 0% packet loss, time 2001ms
 rtt min/avg/max/mdev = 0.132/0.557/1.399/0.595 ms

>
The add-flow commands configured the switch to forward packets arriving on port 1 to port 2 and vice versa, so the ping now goes through. Try sending a ping in the opposite direction.~

>
 mininet> h2 ping -c3 h1
 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
 64 bytes from 10.0.0.1: icmp_req=1 ttl=64 time=0.711 ms
 64 bytes from 10.0.0.1: icmp_req=2 ttl=64 time=0.128 ms
 64 bytes from 10.0.0.1: icmp_req=3 ttl=64 time=0.135 ms
 
 --- 10.0.0.1 ping statistics ---
 3 packets transmitted, 3 received, 0% packet loss, time 1998ms
 rtt min/avg/max/mdev = 0.128/0.324/0.711/0.273 ms

>
After sending the pings, check the state of the flow table.~

>
 $ dpctl dump-flows tcp:127.0.0.1:6634
 stats_reply (xid=0x35d81496): flags=none type=1(flow)
   cookie=0, duration_sec=21s, duration_nsec=123000000s, table_id=0, priority=32768, n_packets=8, n_bytes=672, idle_timeout=60,hard_timeout=0,in_port=1,actions=output:2
   cookie=0, duration_sec=19s, duration_nsec=312000000s, table_id=0, priority=32768, n_packets=8, n_bytes=672, idle_timeout=60,hard_timeout=0,in_port=2,actions=output:1

>
The duration, packet, and byte counters of each flow have increased.~
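~
Because flows were added only between port 1 and port 2, h3 is still unreachable. The mininet-specific command pingall, which tests reachability between every pair of hosts, confirms this. A sketch of the expected result is shown below (the exact format depends on the mininet version).~
 mininet> pingall
 *** Ping: testing ping reachability
 h1 -> h2 X
 h2 -> h1 X
 h3 -> X X
 *** Results: 66% dropped (2/6 received)
~
When the test is finished, the flows can be removed with del-flows from the other shell, and exit leaves the mininet console. If mininet terminates abnormally, sudo mn -c cleans up any leftover interfaces and processes.~
 $ dpctl del-flows tcp:127.0.0.1:6634
 mininet> exit
 $ sudo mn -c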

* Revision History [#mf04d0ad]
>
- 2013/08/28 This article was initially uploaded.
