Engineers from La Salle-URL share the latest news and projects in the field of network solutions in telematic engineering.

29 May 2016 | Posted by Redacción Data Center

Final Resolution

What's up GG's? Today we have our FINAL post! Before starting, I want to thank you for your attention and for having followed our posts. It was a pleasure to do this, and we learnt a lot about data centers, especially data centers for a bank.


Let's move on!

I will only give you a few details about our final Data Center, because otherwise this post would be far too long! But it will be enough to see where we want to go with this architecture!


For the physical topology, we have planned to use spine and leaf, because it suits our virtual environment better, with its large amount of east-west traffic.
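To see why spine and leaf handles east-west traffic so well, here is a minimal sketch of the fabric's connectivity (the device counts and names are illustrative assumptions, not our final design):

```python
from itertools import product

# Assumed fabric for illustration: 2 spines, 4 leaves.
spines = ["spine-1", "spine-2"]
leaves = ["leaf-1", "leaf-2", "leaf-3", "leaf-4"]

# In a spine-and-leaf fabric, every leaf connects to every spine:
# a full bipartite graph.
links = set(product(leaves, spines))

def ecmp_paths(src_leaf, dst_leaf):
    """Any east-west (leaf-to-leaf) flow is always exactly two hops,
    leaf -> spine -> leaf, with one equal-cost path per spine."""
    return [(src_leaf, s, dst_leaf)
            for s in spines
            if (src_leaf, s) in links and (dst_leaf, s) in links]

print(len(links))                           # 8 links (4 leaves x 2 spines)
print(len(ecmp_paths("leaf-1", "leaf-3")))  # 2 equal-cost paths
```

Because every leaf-to-leaf pair always has the same number of equal-cost two-hop paths, east-west traffic between virtual machines on different racks never has to cross a congested "core": it is spread evenly across all the spines.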

The logical topology combines virtual and physical devices: firewalls (virtual and physical), routers, switches and virtual machines.

As for the equipment, we will use the HP ConvergedSystem 250-HC StoreVirtual for our servers. It combines virtualisation, storage and some management tools.

The switches will come from the HPE FlexFabric 12900 Switch Series, for both the spine and the leaf layers. The series supports IRF (which fits spine and leaf well) and HPE's Intelligent Management Center (IMC).

Our router will be the HPE FlexNetwork HSR6804 Router Chassis. It is a really high-performance router and it fits really well with the rest of our equipment.

We have already talked A LOT about security, so here I will just give you some details about the software used. We want to use FortiGate (from Fortinet). FortiGate provides several functions such as firewall, IPS, application control and reputation services.

The HPE Intelligent Management Center Enterprise Software Platform will take care of management. It's a comprehensive wired and wireless network management tool: according to HP, it provides end-to-end business management of IT, scalability, and accommodation of new technologies.

The infrastructure needed for the data center is not really big.
Several rooms of 20 m² should be enough: one to house the CPD itself (the server room), one to handle the cooling and one to power all of this up.
Each of these rooms should have special walls that let the heat out but not in. Of course, we'll need a UPS to protect the servers while the generator starts up whenever the mains power fails.
To house the racks, we'll need air-conditioned cabinets to help cool them off. Cold locations such as Canada, Sweden and Greenland seem to be among the best places to build a data center, since the outside air helps with cooling.
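To give an idea of why the UPS only needs to bridge the gap until the generator takes over, here is a back-of-the-envelope sizing sketch (the load, start-up time and safety factor are illustrative assumptions, not measurements from our design):

```python
# Assumed figures for illustration only.
load_w = 8000           # IT load of the racks, in watts (assumption)
generator_start_s = 60  # time for the generator to take over (assumption)
margin = 2.0            # safety factor for battery ageing and surprises

# Energy the batteries must supply: power x time, with a safety margin.
required_wh = load_w * (generator_start_s / 3600) * margin
print(round(required_wh, 1))  # 266.7 Wh of usable battery capacity
```

Even with a generous margin, the batteries only need to carry the full load for a minute or two, which is why the UPS can stay relatively small compared to the generator.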


Now you know everything about our Data Center! I have nothing more to say about it; I guess this is finally over. Thanks for your attention, and don't forget to stay tuned ... if it's not here with us, it will be on every other possible subject! Take care of yourselves!
