Final Solution (Part 2)

Posted on 28 May

Welcome back everybody!! Today's post is the last (but not least) of the course, and in it we present the second part of our case's final solution: the design of a Data Center for a banking institution. In particular, this post covers the network and management modules.

Network module

First of all, we had to decide which architecture we would use. Basically, we could choose between a Layer 2 (L2), a Layer 3 (L3) or an overlay solution. When comparing the advantages of these approaches, L2 architectures offer lower capital expenditure, can be simpler to deploy and manage in small and medium-sized Data Centers, support VM migration without having to change IP addressing, and add less latency to the network. On the other hand, L3 architectures reduce broadcast storms by segmenting broadcast domains, deliver more efficient forwarding (traffic between VLANs does not need to travel to the core to get routed), offer better scalability for large networks, make more efficient use of network uplinks (no loops need to be resolved) and provide better troubleshooting capabilities thanks to segmentation. Finally, overlay technologies let the company enjoy the benefits of an L3 architecture while still providing L2 connectivity for applications and devices within the DC. This is achieved by creating a virtual L2 network encapsulated inside an L3 architecture. In addition to the technologies mentioned for the L3 solution, overlay architectures also use the Virtual Extensible LAN (VXLAN) paradigm to address VM mobility, MAC address table scalability in the network infrastructure, and the VLAN scalability limit. Given all these benefits, we finally opted for an overlay architecture.

[Figure: Overlay architecture]

Secondly, we had to decide which layer model we would implement. The options were a flat (1-tier), a spine-and-leaf (2-tier) or a 3-tier model. The flat one was quickly discarded because of its disadvantages: much less scalability, no segmentation and a waste of bandwidth and computational resources. When deciding between the 2-tier and the 3-tier approach, the key factor was that nowadays most Data Center traffic flows East-West rather than North-South, which is more typical of legacy Data Centers. When comparing the two, experts agree that spine-and-leaf solutions are better suited to East-West traffic patterns. Furthermore, spine-and-leaf architectures are deterministic in the number of hops traffic between servers must take to reach its destination, while 3-tier ones are not (depending on the physical placement, traffic may or may not need to pass through the core layer). This determinism also helps when dimensioning the network's bandwidth, since it makes oversubscription ratios easy to calculate.
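As a quick illustration of that last point, the short sketch below computes the oversubscription ratio of a single leaf switch as server-facing bandwidth divided by spine-facing bandwidth. The port counts and speeds are example figures chosen for the illustration, not our final bill of materials.

```python
# Illustrative oversubscription calculation for a single leaf switch.
# The port counts and speeds below are example figures, not our final design.

def oversubscription_ratio(downlink_ports: int, downlink_gbps: float,
                           uplink_ports: int, uplink_gbps: float) -> float:
    """Ratio of server-facing bandwidth to spine-facing bandwidth on one leaf."""
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

# Example: 48 x 10GbE server-facing ports, 6 x 40GbE uplinks towards the spines.
ratio = oversubscription_ratio(48, 10, 6, 40)
print(f"Oversubscription ratio: {ratio:.1f}:1")   # -> 2.0:1
```

A ratio of 1:1 would make the fabric non-blocking; anything higher means the uplinks can become the bottleneck when East-West traffic peaks.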

The next step was to find technologies that could meet the requirements of redundancy, loop prevention without STP, and load balancing. Redundancy is obviously achieved by connecting each server to more than one leaf switch, and each leaf to more than one spine switch, but this leads to the loop prevention issue. Our provider, HP, mainly considers two solutions to this problem: TRILL (a standard solution) or IRF (a proprietary solution). We decided to use IRF because it adds less latency to the network and also solves the uplink load balancing problem automatically. But what does it consist of? Intelligent Resilient Framework (IRF) is a network virtualisation technology that interconnects multiple devices through physical IRF ports and combines them so that they appear as a single logical device (switch, router, etc.) to the rest of the network. Thanks to this, the devices in an IRF group need to be configured only once, and that configuration is automatically applied to all members. When used with the Link Aggregation Control Protocol (LACP), several parallel links between devices can be bundled, achieving an on-demand, scalable performance boost. Moreover, the virtualisation into a single logical device makes loop prevention and redundancy protocols such as STP, MSTP, RSTP, VRRP, HSRP, etc. unnecessary, because IRF performs these functions itself. Virtualised systems also provide, by definition, load balancing between member devices, thus fully utilising the available bandwidth. Furthermore, IP addressing becomes simpler, because all member devices of an IRF group are managed, and forward packets, behind one single IP address.
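To give an intuition of how a LACP bundle spreads traffic over its member links, here is a toy sketch. Real switches compute this hash in hardware over fields of their own choosing, so the inputs and the number of member links below are just assumptions for illustration.

```python
# Toy illustration of per-flow load balancing over a LACP bundle.
# Real switches compute this hash in hardware over fields of their own choosing;
# the inputs and the number of member links here are just assumptions.
import hashlib

MEMBER_LINKS = 4   # physical links bundled into one logical link


def pick_member_link(src_mac: str, dst_mac: str, src_ip: str, dst_ip: str) -> int:
    """Hash the flow identifiers so that all packets of a flow use the same link."""
    key = f"{src_mac}-{dst_mac}-{src_ip}-{dst_ip}".encode()
    return hashlib.sha256(key).digest()[0] % MEMBER_LINKS


print(pick_member_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02",
                       "10.0.0.10", "10.0.1.20"))   # deterministic link index 0-3
```

Because the hash is deterministic per flow, packets of one conversation never get reordered across links, while many flows statistically spread over the whole bundle.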

[Figure: Spine and leaf model with IRF and LACP]

The next requirement we had to meet was the capability to isolate the application and data networks from the email and Internet access networks, as well as from the management and backup networks. Virtualisation is the key here. The use of virtual machines provides mobility and scalability and lets instances with different purposes share the same physical resources while being segmented into independent L3 networks. When this is combined with an SDN architecture, management of the Data Center becomes much simpler. SDN offers an easier, more dynamic interaction with the network through a "clean" interface obtained by abstracting the control plane, which reduces the complexity of managing, provisioning and changing the network. The convergence of SDN and virtualisation is achieved with the HP DCN solution, which uses VMware NSX. NSX provides complete network virtualisation for the Software-Defined Data Center and helps automate the provisioning of custom, multi-tier network topologies. It creates an overlay network that provisions virtual networking environments without CLI work or manual administrator intervention. The virtual overlay network abstracts network operations from the underlying hardware, just like server virtualisation does for processing power and operating systems. Thanks to this software-defined network, downlink load balancing is also managed in the control plane, without the need for dedicated load-balancer appliances.
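As a hedged sketch of what this looks like in practice, the snippet below provisions a logical overlay network through an SDN controller's REST API instead of touching any switch CLI. The controller address, token and endpoint path are hypothetical placeholders, not the literal NSX API.

```python
# Hypothetical sketch: provisioning a logical overlay network through an SDN
# controller's REST API instead of configuring switches by hand. The controller
# address, token and endpoint path are placeholders, NOT the literal NSX API.
import json
import urllib.request

CONTROLLER = "https://sdn-controller.example.bank"   # placeholder address
TOKEN = "replace-with-session-token"                 # placeholder credential


def create_logical_switch(name: str, vni: int) -> None:
    """POST a new logical switch definition to the controller."""
    payload = json.dumps({"display_name": name, "vni": vni}).encode()
    req = urllib.request.Request(
        f"{CONTROLLER}/api/v1/logical-switches",     # illustrative path
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print("Created:", json.load(resp).get("display_name"))


# create_logical_switch("app-tier-net", vni=5001)
```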

[Figure: VXLAN overview]

The interconnection and isolation of VMs is achieved thanks to the VXLAN protocol, which is used in the overlay architecture we propose. Communication between VMs is established through VXLAN Tunnel Endpoints (VTEPs), which encapsulate Layer 2 Ethernet frames inside UDP. These VXLAN frames carry only a 54-byte overhead and use a 24-bit VXLAN Network Identifier (VNI) to isolate traffic, supporting up to 16 million LAN segments, far more than the 4094 limit imposed by the IEEE 802.1Q VLAN standard. VXLAN tunnels are created dynamically and terminated within vSwitch instances, or on HP switches using either direct OVSDB or SDN-enabled termination. The VMware direct OVSDB method allows HP hardware VTEPs to integrate directly with VMware NSX and bridge VMs on virtual networks to bare-metal physical equipment. This technology, when implemented with the HP DCN solution, lets the company fully integrate its different branches and perform vMotion between them over an MPLS interconnection.
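A minimal sketch of the encapsulation described above: the 8-byte VXLAN header carries the 24-bit VNI, and together with the outer Ethernet (plus 802.1Q tag), IPv4 and UDP headers it adds up to the 54 bytes of overhead mentioned earlier.

```python
# Minimal sketch of the VXLAN encapsulation described above.
import struct


def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags, reserved, 24-bit VNI, reserved."""
    assert 0 <= vni < 2 ** 24        # 24-bit VNI -> ~16 million segments
    flags = 0x08                     # "I" flag set: the VNI field is valid
    return struct.pack("!B3xI", flags, vni << 8)   # VNI sits in the top 3 bytes


# Overhead added by the VTEP, assuming an 802.1Q tag on the outer frame:
# outer Ethernet+VLAN (18) + outer IPv4 (20) + outer UDP (8) + VXLAN (8)
print(18 + 20 + 8 + len(vxlan_header(5001)))   # -> 54 bytes of overhead
print(2 ** 24)                                 # -> 16777216 possible VNIs
```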

[Figure: Branch interconnection]

Finally, the network equipment used for packet forwarding is the HPE FlexFabric 12900E Switch Series as the spine switches and the HPE FlexFabric 5950 Switch Series as the leaf switches. The HPE FlexFabric 12900E is HPE's flagship core data center switching platform for next-generation software-defined data centers. It delivers very high levels of performance, buffering, scale and availability, with high-density 10GbE, 40GbE and 100GbE interfaces, which are requirements for the bank's data center. The switch series includes 4-, 8- and 16-slot chassis and supports full Layer 2 and Layer 3 features as well as advanced data center features. These capabilities allow us to implement the desired L2-over-L3 topology and achieve the speeds we require.

The HPE FlexFabric 5950, on the other hand, is a 25/50/100GbE switch series that provides a high-density, ultra-low-latency solution for business-critical applications. Consisting of a 1U 32-port 100GbE QSFP28 switch, the 5950 brings high density to a small footprint: each 100GbE port can be split into four 25GbE ports, or run at 40GbE and be split into four 10GbE ports, for a total of 128 25/10GbE access ports. Its most important qualities for us are the ultra-low latency and very high port speeds, which are critical in a banking environment, and its modern data center capabilities and fine-grained traffic handling make it well suited to adapting the network rapidly and dynamically to changing requirements. It is also worth noting that both switch series can build Layer 2 fabrics that are flexible, resilient and scalable using VXLAN, TRILL and/or Hewlett Packard Enterprise IRF.
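As a quick sanity check of the port-breakout arithmetic (a trivial sketch, based only on the 32-port 1U figures quoted above):

```python
# Quick check of the leaf port arithmetic quoted above for the 5950 32QSFP28 model.
QSFP28_PORTS = 32          # 1U chassis with 32 x 100GbE QSFP28 ports
BREAKOUT_FACTOR = 4        # 100GbE -> 4 x 25GbE, or 40GbE -> 4 x 10GbE
print(QSFP28_PORTS * BREAKOUT_FACTOR)   # -> 128 access ports at 25GbE or 10GbE
```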

[Figure: HPE 12900E (left) and 5950 (right) switches]

To give a better idea of what our solution looks like, below you can see the physical and logical topologies of the network.

[Figure: Physical network topology]

 

[Figure: Logical network topology]

 

Management module

The first step in this section was to decide whether we were going to use standard solutions, such as DCIM-based products, or proprietary ones. After studying a few DCIM-based solutions, such as Schneider's, we found that they cannot offer the same level of detail and options as proprietary solutions do. In addition, experts in the sector told us that DCIM technology is going through a phase of decline. Another reason to choose proprietary solutions is that we only have two providers instead of three, so we just need two different management tools to gain full control of the Data Center.

For the network and services modules, we chose the HPE OneView and HPE Insight Online tools. HPE OneView is an automation engine that performs efficient infrastructure management based on software-defined intelligence. It helps deploy infrastructure faster by rapidly updating or composing resources using automated templates, which allows a quick response in dynamic environments like banking institutions (where different kinds of tasks are performed depending on the time of day). Its Global Dashboard also simplifies operations: it provides a unified, single view of Data Center management with information on thousands of servers, profiles and more, enhancing the scalability of the overall scenario. Productivity also increases thanks to its automated resource provisioning, and configuration and monitoring become simpler with the unified OneView REST API, enabling further automation and responsive service delivery.

HPE Insight Online, in turn, is an automated remote support tool that lets HP take care of the health of the Data Center in real time, with 24×7 monitoring of the IT infrastructure and environment. When integrated with HPE OneView, alerts and support calls are sent automatically by the equipment itself, all failure information and diagnostics are logged, and reports are automatically generated for the company. According to HPE, using this tool together with OneView achieves up to a 77% reduction in downtime, 55% lower Mean Time To Repair (MTTR) for unplanned downtime and nearly 100% diagnostic accuracy, while offering a single consolidated view of the environment.
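Returning to the OneView REST API mentioned above, here is a minimal sketch of what automation against it can look like. The appliance address, credentials and API version are placeholders, and the endpoints follow the publicly documented OneView REST interface as we understand it.

```python
# Minimal sketch against the HPE OneView REST API. The appliance address,
# credentials and API version are placeholders; endpoints follow the publicly
# documented OneView REST interface as we understand it.
import json
import urllib.request

APPLIANCE = "https://oneview.example.bank"           # placeholder address
HEADERS = {"Content-Type": "application/json", "X-API-Version": "800"}


def login(user: str, password: str) -> str:
    """Open a session on the appliance and return its token."""
    body = json.dumps({"userName": user, "password": password}).encode()
    req = urllib.request.Request(f"{APPLIANCE}/rest/login-sessions",
                                 data=body, headers=HEADERS, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["sessionID"]


def list_server_hardware(token: str) -> list:
    """Fetch the managed server inventory."""
    req = urllib.request.Request(f"{APPLIANCE}/rest/server-hardware",
                                 headers={**HEADERS, "Auth": token})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("members", [])


# token = login("administrator", "placeholder-password")
# for server in list_server_hardware(token):
#     print(server.get("name"), server.get("status"))
```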

[Figure: HPE OneView dashboard screenshot]

 

[Figure: HPE Insight Online devices overview]

 

For the security module, the solution provided by Fortinet is FortiManager. It offers simplified policy management and device provisioning for large-scale Fortinet Enterprise Firewall deployments from one central place, using a hardware or virtual appliance. A common security baseline is enforced and shared among multiple administrative domains (ADOMs). The tool also provides a RESTful API for automation, which reduces the administrative burden, and centralised logging gives superior visibility and insight into events, traffic and threats. With FortiManager, the administrator can review, approve and audit policy changes from a central place, with an automated process to facilitate policy compliance and policy lifecycle management. The tool can centrally manage up to 10,000 FortiGate security appliances, and in our case, given that our whole Data Center solution is based on virtualisation, we have chosen the VM version of FortiManager.
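As an illustration of that automation capability, here is a small sketch that talks to FortiManager's JSON-RPC-style management API to log in and list the configured ADOMs. The address and credentials are placeholders, and the request format follows the documented FortiManager JSON API as we understand it.

```python
# Small sketch against FortiManager's JSON-RPC style management API. The address
# and credentials are placeholders; the request format follows the documented
# FortiManager JSON API as we understand it.
import json
import urllib.request

FMG_URL = "https://fortimanager.example.bank/jsonrpc"   # placeholder address


def rpc(payload: dict) -> dict:
    """Send one JSON-RPC request to the FortiManager and return the parsed reply."""
    req = urllib.request.Request(FMG_URL, data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def login_and_list_adoms(user: str, passwd: str) -> list:
    """Open a session, then fetch the administrative domains (ADOMs)."""
    login = rpc({"id": 1, "method": "exec",
                 "params": [{"url": "/sys/login/user",
                             "data": {"user": user, "passwd": passwd}}]})
    adoms = rpc({"id": 2, "method": "get", "session": login.get("session"),
                 "params": [{"url": "/dvmdb/adom"}]})
    return [a.get("name") for a in adoms["result"][0].get("data", [])]


# print(login_and_list_adoms("apiadmin", "placeholder-password"))
```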

[Figure: The FortiManager VM]

 

That's all, guys! We hope that, like us, you have learnt a lot about Data Centers: what they are, which protocols and technologies are involved, what the trends of new-generation Data Centers are, and what considerations must be kept in mind when designing one. Thank you for following us throughout the course; we hope you liked our case solution and, if you did, please give us a like and share it! See you!
