Week 11

I have completed 250 hours of work at PC Systems. For the last four weeks I was involved full time in creating and testing more than 20 test cases, using the Xena and iPerf packet generators with various unicast and multicast policies configured on the Fortinet 1500D series firewall.

All of this information will be reflected in my project report, which I have just started writing and hope to complete in the next couple of weeks.

Week 10

This week I am going to blog about a Fortinet firewall performance test case created using a client-server iPerf PC pair.

Test: Load balancing


Aim: Confirm that the 1500D is capable of load balancing in a variety of configurations.

Test Setup: Two iPerf host test appliances, one connected to a 1Gbps input port on the firewall and one connected to a 1Gbps output port on the firewall.

Configuration: Unicast policy configured to permit TCP traffic from source IP 192.168.1.1 to multiple destinations, with each destination VIP using a different health check and load-balancing method. For VIPs that contain backup servers, the active servers were removed from the iPerf host configuration to confirm that sessions were then sent to the backup servers, which should have become active.
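To make the configuration concrete, below is a minimal sketch of one such server-load-balance VIP in FortiGate CLI form. The object names, addresses, and the round-robin/ping-check combination are hypothetical placeholders; each test case varied the ldb-method and health-check monitor.

config firewall ldb-monitor
    edit "ping-check"
        set type ping
    next
end
config firewall vip
    edit "test-vip-1"
        set type server-load-balance
        set extip 192.168.2.10
        set extintf "port1"
        set server-type tcp
        set extport 8080
        set ldb-method round-robin
        set monitor "ping-check"
        config realservers
            edit 1
                set ip 192.168.2.101
                set port 8080
            next
            edit 2
                set ip 192.168.2.102
                set port 8080
                # backup server used to test failover behaviour
                set status standby
            next
        end
    next
end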


Expected Result: The 1500D should be able to load balance in the same way that our existing load balancers can.
Below is a diagram of the test setup:

Test results are as follows:

All of the above combinations were tested. One limitation is that a real server IP can only exist as a real server for a single VIP. This differs from our current setup, where we are able to use a real server IP address within multiple real server groups, listening on a different port each time.

One issue was encountered: if a VIP contains active and standby real servers and all of the active servers are gracefully disabled, the standby servers do not become active. Also, if a group contains active and standby servers and the active servers are not listening (so the standby servers have become active), but one of the failed servers is then gracefully disabled via the firewall GUI, all of the standby servers that are currently active change mode to standby in the GUI, although open TCP sessions still seem to remain active. What is reported in the GUI does not always appear to be accurate.
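When investigating such behaviour, the real server status can also be checked from the CLI rather than relying on the GUI alone; a sketch using the documented FortiOS diagnostic is below (the output fields vary by firmware version).

# list the health and mode of each real server behind the configured VIPs
diagnose firewall vip realserver list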

Note that the other load-balancing options available were:

  • Weighted
  • First alive
  • Least RTT
  • HTTP host header

These were not tested as we do not currently use any of these methods within our production environments.

Another option that was tested, for HTTP-based VIPs, is that multiple TCP sessions can be multiplexed by the firewall, reducing the number of sessions to the web server, with the ability either to hide clients behind the firewall interface IP or to present the multiplexed session as if it came from the original client IP.
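For illustration, a minimal sketch of the relevant VIP options in FortiGate CLI form is below. The names and addresses are hypothetical placeholders; http-multiplex enables the session multiplexing, and http-ip-header presents the original client IP to the server in a header.

config firewall vip
    edit "web-vip"
        set type server-load-balance
        set extip 192.168.2.20
        set extintf "port1"
        set server-type http
        set extport 80
        # reuse firewall-to-server TCP sessions across multiple clients
        set http-multiplex enable
        # pass the original client IP to the web server in a header
        set http-ip-header enable
        set http-ip-header-name "X-Forwarded-For"
        config realservers
            edit 1
                set ip 192.168.2.110
                set port 80
            next
        end
    next
end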

Week 09

This week I am going to blog about the first Fortinet firewall performance test case, created using the Xena packet generator.

Test: Unicast throughput test


Aim: Confirm the maximum possible unicast throughput for different frame sizes.


Test Setup: Xena connected to a 10Gbps input port on the firewall and a 10Gbps output port on the firewall.


Configuration: Unicast policy configured to permit UDP traffic from source IP 192.168.1.1 to destination IP 192.168.2.1
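For reference, a minimal sketch of such a unicast policy in FortiGate CLI form is shown below; the interface and address object names are hypothetical placeholders.

config firewall address
    edit "xena-src-host"
        set subnet 192.168.1.1 255.255.255.255
    next
    edit "xena-dst-host"
        set subnet 192.168.2.1 255.255.255.255
    next
end
config firewall policy
    edit 1
        set name "xena-unicast-udp"
        set srcintf "port1"
        set dstintf "port2"
        set srcaddr "xena-src-host"
        set dstaddr "xena-dst-host"
        set action accept
        set schedule "always"
        set service "ALL_UDP"
    next
end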


Expected Result: The 1500D should be capable of close to line rate, depending on the frame length.


Below is a diagram of the test setup:

Test results:
The following frame sizes and rates passed without drops:

*Because the VLAN tag and UDP payload are added to the packet, the minimum frame size had to be set to 72 bytes to accommodate the full packet with the embedded Xena sequence number.
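For context, the theoretical line rate for a given frame size follows from the 20 bytes of per-frame overhead (8-byte preamble plus 12-byte inter-frame gap) that occupy the wire in addition to the frame itself. For the 72-byte minimum frame used here:

max frame rate = 10,000,000,000 bit/s / ((72 + 20) bytes x 8 bits/byte)
               = 10,000,000,000 / 736
               ≈ 13.59 million frames per second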

Week 08

This week I am going to explore how to perform a stress test on the Fortinet FortiGate 1500D device. To test the performance of the firewall, there should be a packet generator that can produce packets with the relevant frame sizes. We cannot use a simple PC to generate this traffic, since it adds operating system processing and network card overhead when generating packets.

Xena provides a new class of professional gigabit Ethernet test infrastructure for the Ethernet ecosystem, delivering a breakthrough price performance benchmark for load stress and functional testing of Ethernet equipment and network infrastructure. In addition, the world’s highest density and lowest power consumption per test port delivers a test platform ready for the future.

Developers and providers of Ethernet-based network appliances and services can deploy the Xena Networks test platform as an ideal complement or alternative to existing test equipment solutions, at a price point that makes in-house custom-built test solution projects obsolete.

The Xena test platform provides an open environment where L2-3 traffic can be generated and performance analyzed at wire speed. A user-friendly .NET-based GUI client is provided for test execution and remote management of test equipment located in multiple locations. In addition, an open TCP/IP command-line scripting API allows users to script and automate testing from any software and tool environment.

The Xena Ethernet Test Infrastructure platform provides a suite of test modules with copper and optical interface speeds of 10/100/1000 Mbps, 10 Gbps, 40 Gbps, and 100 Gbps. 

Reference : https://xenanetworks.com/product/vulcancompact/

However, in normal circumstances a PC with the iPerf3 software can be used to generate unicast or multicast traffic for performance testing, with less setup overhead but also less accuracy. Basically, iPerf3 is a tool for active measurements of the maximum achievable bandwidth on IP networks. It supports tuning of various parameters related to timing, buffers, and protocols (TCP, UDP, and SCTP, with IPv4 and IPv6). For each test it reports the bandwidth, loss, and other parameters.

Reference : https://iperf.fr/
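As a simple unicast example (the host names, destination address, and rate here are hypothetical), the receiving PC runs iPerf3 in server mode while the sending PC runs the client:

# on the receiving PC
root@iperfserver:~# iperf3 -s

# on the sending PC: 60-second UDP test at 1 Gbit/s with 1470-byte datagrams
root@iperfclient:~# iperf3 -c 192.168.2.1 -u -b 1G -l 1470 -t 60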

Two PCs were used with the iPerf software: one works as the server, which joins the multicast group and receives the packets, while the other works as the client, which generates and sends them. Based on the iPerf client-server communication, iPerf determines the bandwidth, jitter, and packet loss.

The configuration below shows the iPerf server joining the multicast group 226.94.1.1 and receiving the datagrams:

root@mcastserver:~# iperf -s -u -B 226.94.1.1 -i 1

Server listening on UDP port 5001
Binding to local address 226.94.1.1
Joining multicast group 226.94.1.1
Receiving 1470 byte datagrams

UDP buffer size: 122 KByte (default)

[ 3] local 226.94.1.1 port 5001 connected with 212.11.66.254 port 49525
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[  3]  0.0- 1.0 sec  128 KBytes   1.05 Mbits/sec  0.037 ms    0/   89 (0%)
[  3]  1.0- 2.0 sec  128 KBytes   1.05 Mbits/sec  0.020 ms    0/   89 (0%)
[  3]  2.0- 3.0 sec  128 KBytes   1.05 Mbits/sec  0.021 ms    0/   89 (0%)
[  3]  0.0- 3.0 sec  386 KBytes   1.05 Mbits/sec  0.022 ms    0/  269 (0%)
^C

The configuration below shows the iPerf client sending multicast datagrams destined to 226.94.1.1:

root@mcastclient:~# iperf -c 226.94.1.1 -u -T 32 -t 3 -i 1

Client connecting to 226.94.1.1, UDP port 5001
Sending 1470 byte datagrams
Setting multicast TTL to 32

UDP buffer size: 122 KByte (default)

[ 3] local 212.11.66.254 port 49525 connected with 226.94.1.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 129 KBytes 1.06 Mbits/sec
[ 3] 1.0- 2.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 2.0- 3.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 0.0- 3.0 sec 386 KBytes 1.05 Mbits/sec
[ 3] Sent 269 datagrams

Week 07

Our solution is to design secure multicast application-tier traffic using the Fortinet firewall. This week I am going to look at the design considerations for this firewall setup in production.

The main points of the multicast network design are mentioned below:


• Two application servers (multicast sources) reside in the application tier, and they constantly send application data encapsulated in UDP multicast packets.


• Each server’s multicast stream is carried over a separate VLAN into the application-tier switch, which is also called the trust switch, and the VLANs are extended to the firewall, as the firewall is the next-hop address for these multicast streams.


• Two physical firewalls are to work as a cluster in active-standby mode.


• Protocol Independent Multicast - Sparse Mode (PIM-SM) is to be used in this design, and RPs (Rendezvous Points) are set at three layers of the design. In that way, the multicast domains are segmented. The RP for the application tier resides in the firewall.


• RP redundancy is to be provided with Anycast RP.


• An untrust switch layer is included in this design for scalability. More application tiers can be connected to the untrust switch segment, and this layer will be the single point of attachment for receivers at the southbound.


• The untrust switch tier has its own multicast domain.


• Multicast receivers are connected to the southbound distribution switch, and the untrust switch tier is the boundary for their multicast domain.


• All the switches and the firewall are to be procured from industry-leading vendors with low-latency specifications.


• Packet flow: a multicast packet from a source reaches the first-hop router, which is the firewall, and gets registered with the RP. A multicast receiver at the southbound sends an IGMP join requesting the multicast source, which eventually gets registered at the RP on the distribution switches. Finally, the join reaches the upstream firewall, where the packets are inspected against the multicast policies before the stream is forwarded down to the receiver, as sketched below.
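To make the design concrete, below is a minimal, hypothetical sketch of how PIM-SM, the RP, and a multicast policy might be expressed in FortiGate CLI form. All interface names, object names, and IP addresses are placeholder assumptions, not the production configuration.

config router multicast
    set multicast-routing enable
    config pim-sm-global
        config rp-address
            edit 1
                # application-tier RP address (placeholder)
                set ip-address 10.255.0.1
            next
        end
    end
    config interface
        edit "port1"
            set pim-mode sparse-mode
        next
        edit "port2"
            set pim-mode sparse-mode
        next
    end
end
config firewall multicast-address
    edit "mcast-group"
        set start-ip 226.94.1.1
        set end-ip 226.94.1.1
    next
end
config firewall multicast-policy
    edit 1
        set srcintf "port1"
        set dstintf "port2"
        set srcaddr "app-server-1"
        set dstaddr "mcast-group"
        set action accept
    next
end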

Week 06

My plan this week is to finalize the network schematic for this project.

Customer Requirement

Based on various discussions with the customer and PC Systems, the points below were gathered for their proof of concept:

  • The customer's software applications are installed on bare-metal servers, and the servers are equipped with the industry-leading Mellanox ConnectX family of intelligent data-center network adapters.
  • Two multicast-aware applications reside on these servers, and each server sends data out in its own multicast stream.
  • Low latency is a high priority for multicast stream traversal across the entire network.
  • Each multicast stream from the northbound servers should be filtered at the firewall, and customer enablement needs to be performed based on source IP address at the southbound.
  • The customer also needs the design to follow the principle that downstream hosts wishing to receive a multicast stream do not justify flooding the entire network with periodic multicast traffic; a protocol such as Protocol Independent Multicast - Sparse Mode (PIM-SM) supports this.
  • The customer also needs a test approach and results for the testing of the FortiGate 1500D firewall.
  • Device failover is also to be considered.

Proposed Network Layout

Week 05

This week I mainly focused on building the test bed, which is a multi-vendor network emulator, and testing the required product images.

I have already installed the VMware vSphere hypervisor on the bare-metal server, as taught in Net 701 – Enterprise Infrastructure in Semester 01.

I installed the EVE-NG Pro version as a virtual machine. Detailed instructions on how to install EVE-NG can be found at https://www.eve-ng.net/index.php/documentation/installation/bare-install/. I then installed vendor-specific virtual images to emulate the routing & switching / firewall environment.

Cisco vIOS and IOL images, as well as Arista vEOS, to emulate routing and switching:

Fortinet FortiGate virtual image to emulate the firewall:
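For reference, below is a sketch of the documented EVE-NG procedure for adding a FortiGate qcow2 image; the version string in the folder name is a hypothetical example, and the folder name must follow EVE-NG's qemu image naming convention.

# create the image folder (the name must start with "fortinet-")
mkdir /opt/unetlab/addons/qemu/fortinet-FGT-v6.0.9
cd /opt/unetlab/addons/qemu/fortinet-FGT-v6.0.9
# rename the downloaded FortiGate disk image as EVE-NG expects
mv /tmp/fortios.qcow2 virtioa.qcow2
# fix permissions so EVE-NG can use the new image
/opt/unetlab/wrappers/unl_wrapper -a fixpermissions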

Week 04

This week I am going to carry on my research to find the best virtual network emulator, which will be my test bed for the firewall implementation along with the supporting network environment.

I am going to use my high-end home PC, a Dell Precision Tower 7810 (dual Intel® Xeon® E5-2600 v4 processors with 64 GB of memory).

There are three high-end network emulators on the market; their names and features are listed below. A type 1 hypervisor such as VMware vSphere is required to install them.

  • Cisco VIRL (http://virl.cisco.com) : Cisco Virtual Internet Routing Lab (VIRL) is an extensible network virtualization platform that enables the development of high-fidelity models of real or planned networks.  VIRL includes current virtualized versions of Cisco network operating systems and allows integration with ‘real’ physical / external networks, network elements, and servers.
  • GNS3 (https://www.gns3.com/) : an open-source, free server/client interface meant for virtualization and network emulation. It is a Python-based platform and supports Cisco 1700, 2600, 2691, 3600, 3725, 3745, and 7200 router platforms. GNS3 is an older network emulator which allows the combination of virtual and real devices to simulate complex networks.
  • EVE-NG (https://www.eve-ng.net/) – This is the first clientless multivendor network emulation software that empowers network and security professionals with huge opportunities in the networking world.

Differentiating GNS3 vs Eve-NG vs VIRL :

While comparing GNS3 and EVE-NG, we find that GNS3 is a free, open-source community that has built a well-documented piece of software. It does follow a traditional client/server application model, but the best part is that the server component is easy to configure, deploy, and maintain. EVE-NG, in contrast, comes in both a free community edition and a professional paid edition. One key setback with GNS3 is that you are required to source your own network device software images in order to emulate devices. But this should not be seen as a fault, because bundling Cisco IOS software images with GNS3 would be illegal. Similarly, EVE-NG also requires licensed access in order to obtain the network device software images.


Comparing GNS3, EVE-NG, and VIRL as specialized network emulators, only EVE-NG is clientless. Both VIRL and GNS3 require you to first download and then install an independent application to manipulate the network devices running on the server.
While both VIRL and GNS3 require a separate client application to function, EVE-NG only needs a lightweight terminal application like PuTTY in order to build and modify a network topology. The entire process can easily be accomplished via an HTML5 web client, and it can be used not only on a desktop but also on various mobile devices.

After analyzing the above three products carefully, I have chosen EVE-NG as the network emulator for my project.

Week 03

Based on last week's market research, the company has decided to select a Fortinet FortiGate firewall appliance, since it offers exceptionally low-latency data throughput. Looking at the customer's network requirements, PC Systems selected the Fortinet FortiGate 1500D model for this project. The FortiGate 1500D series delivers high-performance next-generation firewall (NGFW) capabilities for large enterprises and service providers. With multiple high-speed interfaces, high port density, and high throughput, ideal deployments are at the enterprise edge.

Product datasheet can be found on https://www.fortinet.com/content/dam/fortinet/assets/data-sheets/FortiGate_1500D.pdf

Now I need to perform further research on the evaluation testing to be performed on the FortiGate 1500D firewall. This project was supposed to be performed on physical devices, but I will have to work in a virtual environment due to the sudden COVID-19 situation that has arisen in NZ.

PC Systems has agreed that I can perform my research, implementing the firewall device based on the customer requirements with best practices and feasible test cases. Further, I am allowed to work from home due to the sudden COVID-19 lockdown.

PC Systems expects me to provide implementation details and test methodologies once the project is completed.

Week 02

At the beginning of this week I had a meeting with the managing director of PC Systems, Mr Neil Albury, and we discussed my project proposal thoroughly.

PC Systems needs to research a firewall product supporting multicast and unicast traffic. Basically, their customer has SAP application servers which send data out from their network cards as UDP-encapsulated multicast and unicast. The SAP applications are connected to a Layer 2 switch, and the firewall will be placed inline between the SAP application segment and the rest of the distribution network infrastructure.

PC Systems is mainly proposing two firewall products: the WatchGuard Firebox and the Fortinet FortiGate. I had to find the key points for selecting a suitable firewall device for this project, so I analyzed the product datasheets and performance guidelines of both firewall UTM products.

After a preliminary analysis of UDP latency, I found that FortiGate appliances lead the market in transmitting multicast traffic at ultra-low latency.

Source : https://www.fortinet.com/content/dam/fortinet/assets/analyst-reports/nss-labs-2018-ngfw-comparative-report-performance.pdf