Installing a Blade Server
In this post I want to translate, for you, an article about blade servers written on one of the foreign blogs, and present it as simply as possible and with as few changes as I can manage:
Installation Project: HP BladeSystem (part 1)
Please keep in mind that if I've stumbled anywhere in the translation, go ahead and correct me. Along with my Persian translation I'm also including the original English text, so you can see for yourselves what's what... goodness knows I slip up in translation often enough that you might not notice otherwise.
- 1 x HP BladeSystem c7000 blade enclosure (with 10 fans and 6 power supplies) – the blade server chassis itself
- 8 x HP BL460c Gen9 blade servers – the eight server blades
- 2 x HP VirtualConnect FlexFabric modules – the networking (interconnect) modules, in this case the 'virtual' variety
…all to be used for demo purposes in PROOFMARK portal!
We already have a few blade enclosures in our demo center and all previous blade generations (from G1) but now we are talking about the meanest and the baddest, state-of-the-art Generation 9’s.
About blade servers in general: as far as I know, you pick the enclosure first and then choose your blades. The chassis itself doesn't really tie you down; you can buy a chassis that is running, say, G7 blades, swap them out, drop Gen9 blades into it and carry on using it. A decent market for HP blade servers has formed in Iran, but the blades themselves still aren't stocked here the way they should be.
Unboxing
One of the worst parts of dealing with blade servers, I have to tell you, is their sheer weight; moving one of these around is back-breaking work. In the picture above you only see the chassis, and I can tell you it's one of the heaviest chassis I've seen in my life, heavier even than the DL980 G7 chassis, which, if you remember, I wrote a short article about as well.
The picture above gives you a complete view of what lands on your doorstep after you order a blade server configuration. The projects I've personally been involved in looked a bit different from what you see in these pictures; I have a few photos of my own and if I can dig them up I'll definitely post them.
That particular chassis, by the way, was a c3000-series enclosure that we deployed and installed in one of the provincial cities.
On the other hand all the styrofoam padding has been extremely…hmm…imaginatively designed, to say the least. I have absolutely no explanation for the numerous edges, corners and pointy ends but I’m sure they have an important purpose. Maybe it’s for aerodynamics (just in case)?
At first glance what strikes you is how large this enclosure is, but keep in mind that up close it isn't actually that huge. What you can easily appreciate is that 16 server blades still have to go into it, which means something far heavier than what you see in the pictures.
Be very careful when moving this equipment, and never attempt to move or install it without friends to help and without reading the installation steps first. I didn't expect to be facing a 150 kg server either, but what you see here weighs somewhere in the region of 100 to 150 kg.
If you want to picture the kind of server you'll be facing, just think of a DL380 and the way its drives are arranged: imagine those drives as server blades, and the drive bays as the bays of this enclosure, and you'll see what kind of monster you're dealing with. In the picture above you can see all 16 bays that the blades slot into; in the lower part there's also a small display that is used for configuring and managing the enclosure and giving it an IP address. The smaller empty slots are where the c7000 enclosure's power supplies go.
All of the blades and the power supplies sit at the front of the blade system; to see the interconnects and the fan modules you have to look at the rear of the enclosure, where in the picture below you can see the empty blanks.
If I were you, I'd first pull out all the components such as the fans and power supplies, ending up with what you see in the picture below, and only then install the unit in the rack; if you leave everything fitted you'll be lifting something in the region of 200 kg into the rack.
HP BladeSystem
The chassis components are as follows:
That’s the rear view of the blade enclosure already installed into the rack. From top down we have…
– 5 empty fan slots
– 2 empty interconnect module bays. Interconnect modules are meant for all kinds of switches and other “data transfer modules”. We’ll talk about these things later.
– 6 more empty interconnect module (IC) bays but these ones have the dust covers (blanks) installed. That makes a total of 8 IC module bays in one c7000 enclosure.
– Empty Onboard Administrator tray bay. That’s the brains & logic of the blade enclosure. Seems pretty “lobotomized” at the moment, eh? We’ll change that shortly. (The Onboard Administrator is normally purchased separately, usually as a pair, and it's what you use to manage the whole blade system.)
– 5 more empty fan slots. Making it a maximum of 10 fans in a c7000.
– Finally, 6 single phase power cable connectors to be connected to power outlets.
Six HP power cables connect the enclosure (just the c7000 chassis itself) to the PDUs in your data center.
Oh, by the way, installing the enclosure rack rails using the included rack mount kit is a walk in the park. You don’t even need any tools, screws or anything. Just extend the two-part rails to the correct length and they will snap right in place in any standard rack. Even my mom could do it. Well, not really but you get the picture.
In the pictures below you can see, laid out separately, all the components that have to be installed on the blade system. These should only be assembled onto the enclosure after the c7000 is mounted in your rack, so that the excessive weight of the parts doesn't cause any problems.
The picture above shows all the equipment that should be included in your order. As I said, given how heavy these servers and enclosures are, field engineers (and in many cases yours truly) have to strip all the components off the chassis so that the lighter bare chassis can be lifted into the rack.
Two people are definitely better for this job anyway. You could also strip the chassis completely and assemble everything piece by piece once it's in the rack, but that's a whole story of its own and I don't really recommend it.
And on the table from left to right:
– 6 power supplies
– 8 brand new Generation 9 blade servers with just a tad over 1TB of RAM! We will spend a lot more time talking about these bad boys later!
– 10 fans
– 2 Virtual Connect Modules and an Onboard Administrator tray (with one OA module).
– 6 power cables
– Some random accessories
Fans
Let’s start with the fans. These 10 Active Cool Fans (as HP calls them) cool down almost the whole enclosure centrally: all components including the servers, interconnect modules, Onboard Administrator modules, internal circuit boards and so on. The only modules in the enclosure that have their own fans are the power supplies. This is the whole beauty of the blade server concept in general: we have a chassis which is much like a data center; it has four walls, a roof and a floor. Then we install some fans in the chassis to keep the chassis cool (or big cooling units in a data center) and finally we need power supplies to provide power to the whole enclosure. After that we can start carrying all the geeky stuff in!
I remember one marketing slogan that HP used back in the day describing the blade concept: “HP BladeSystem – Data Center in a box” (or something like that). I like that a lot. Because that’s what it basically is!
HP BladeSystem
The design of the Active Cool Fans is said to be inspired by jet engines and they have some 20 patents. All I know is that they are some pretty damn powerful air movers! Anyone who has tried stressing a fully populated c7000 to the limit knows what I’m talking about.
HP BladeSystem
Rear view. So, jet engine inspired, huh? Yep, and they even say that if you look very carefully, you can see a faint Rolls Royce logo printed inside those blowers. Urban legend? Beats me. I’ve never seen them but that doesn’t prove anything.
HP BladeSystem
(My apologies for a bit blurry photo here)
You can actually get your chassis with fewer than 10 fans to save some schillings but if you do, you need to remember a few population rules. The minimum number of fans you need to have is 4 or the Onboard Administrator won’t start. AND with 4 fans you can only use 2 out of 16 blades. So, you have paid for the mighty 16-slot blade chassis but you decided only to use 2 blades? OK…why? I really can’t think of any good reason to go for 4 fans. Nevertheless, if you do, you must populate the fans in bays 4, 5, 9 and 10, i.e. the rightmost bays. That’s because you’d start populating the servers in the front from left to right.
Here you can see the rest of the recommended best practice fan configurations: 6, 8 and 10 fans. With 6 fans you can have one half (to be specific: left half) of the blades running and with 8 or 10 fans all 16 blades can be run simultaneously (the way it’s meant to be).
Using 10 fans also gives you one extra edge in the form of redundancy: you can lose 2 fans and still have the whole enclosure up&running.
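Since I can never keep these numbers straight, here's a little cheat-sheet in Python summarising the rules above. It's purely my own illustrative sketch of what the text says, not an HP tool; the exact bay layouts for the 6- and 8-fan configurations come from HP's best-practice diagram, so I haven't encoded them here.

```python
# Illustrative summary of the c7000 fan population rules described above.
# My own cheat sheet, not an HP utility -- check the official c7000 setup
# and installation guide before relying on it.

FAN_RULES = {
    4:  {"max_blades": 2,  "bays": [4, 5, 9, 10]},  # bare minimum for the OA to start
    6:  {"max_blades": 8,  "bays": None},  # left half of the blade bays (see HP's diagram)
    8:  {"max_blades": 16, "bays": None},  # all 16 blades allowed
    10: {"max_blades": 16, "bays": None},  # all 16 blades, plus you can lose 2 fans
}

def blades_allowed(fan_count: int) -> int:
    """How many blades the enclosure may run with this many fans installed."""
    if fan_count not in FAN_RULES:
        raise ValueError("Supported fan configurations are 4, 6, 8 or 10 fans")
    return FAN_RULES[fan_count]["max_blades"]

print(blades_allowed(6))   # -> 8
```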
Our chassis came with all 10 fans, so it’s pretty straightforward to install them in the correct bays.
OK, fans are in. Next up, the brains of the chassis: Onboard Administrator (or friendly “OA”) modules.
Onboard Administrator
As mentioned before, Onboard Administrator (OA) is the management module of the whole chassis. You can use OA to set the IP addresses of all the components in the chassis, define power modes, boot-up sequences, e-mail notification settings and a ton of other things. You can access the OA either thru the GUI (web browser), the built-in LCD display (called Insight Display) or the Command Line Interface.
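By the way, if you'd rather script against the OA than click around the web GUI, something along these lines can work. This is only a rough sketch I'm adding for illustration: the IP address and credentials are made up, it assumes SSH access to the OA is enabled and that the paramiko library is installed, and the exact command names should be double-checked against the OA CLI user guide for your firmware version.

```python
# Rough sketch: querying the Onboard Administrator over its SSH CLI instead
# of the web GUI. The host, credentials and even the exact command spellings
# are assumptions -- verify them against the OA CLI user guide.
import paramiko

OA_HOST = "192.168.0.120"      # hypothetical OA management IP
OA_USER = "Administrator"      # account name from the pull-out tag on the OA
OA_PASS = "changeme"           # use the default password printed on that tag

def oa_command(command: str) -> str:
    """Open an SSH session to the OA and return the output of one CLI command."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(OA_HOST, username=OA_USER, password=OA_PASS, look_for_keys=False)
    try:
        _stdin, stdout, _stderr = client.exec_command(command)
        return stdout.read().decode()
    finally:
        client.close()

# Enclosure-wide health plus an inventory of the installed blades:
print(oa_command("SHOW ENCLOSURE INFO"))
print(oa_command("SHOW SERVER LIST ALL"))
```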
The OA hardware entity consists of a couple of different components: OA tray (in the back), OA module itself (front left) and dust cover (in case you only have one OA module). In most production configurations, you’d always have two OA modules for redundancy (side note: wish I had a redundant pair of brains on some certain mornings) but since our chassis is purely for educational and demonstration purposes, we can manage with one OA module and even tolerate a loss of that.
Actually, the chassis can run without the OA modules completely. Can’t boot without, but if all the OA modules fail while the enclosure is up&running, all the fancy optimization logic is gone, removed, head shot and the enclosure falls into survival mode; it makes all the fans blow at warp speed, doesn’t enforce any power limitations and most importantly, makes all the LED’s go to David Guetta mode. It’s fun to watch. Then, when you reinstall the OA modules everything immediately goes back to (boring) normal.
That’s a close-up of the OA tray. In the middle there’s a couple of standard RJ-45 ports. They are called Enclosure Interlinks and they are used to…well, link enclosures together. This way, when you connect to one of the OA modules you can manage all linked enclosures. Handy! The maximum number of enclosures you can link together is, unfortunately, only 4.
OA module itself. Ports from left to right are:
– iLO port. Used to connect to the OA itself plus all the blade servers’ Integrated Lights-Out management chips. So, no 16 separate network cables (as the situation would be with 16 rack mounted servers) but only one.
– USB port for updating the enclosure firmware, uploading/downloading configuration and mounting ISO images as optical drives to the blade servers.
– Serial port. Nuff said? =D Well, not much used anymore. Mostly due to the fact that, for example, I’d have to go to some used computer store to first buy a computer that has a serial port and then to another used computer store to buy a serial cable.
– VGA port for KVM (Keyboard Video Mouse) capabilities since the blades themselves don’t have those ports. Well, actually they kinda do through a special adapter but that’s cheating. Much like my Macbook Air’s Thunderbolt port. “Sure, you have all the ports in the world available”, said the Apple Genius. “Just 49,95€ per port”, the Genius continued.
The OA tray is located just beneath the interconnect modules and takes the whole width of the enclosure. You first need to install the tray and only after it is securely installed can you install the OA modules. The same goes the other way around: you cannot remove the OA tray without first removing both of the OA modules.
See those purple handles? You first push the module from the BODY all the way deep into the enclosure and THEN use that handle just to lock the module in place. NOT to push the module in. Approximately 330 service requests saved there. You can thank me later, HP.
OA tray in place, next up the OA module itself. We know the drill already.
There you go. OA tray, OA module and the dust cover all installed and ready for action. Next, Virtual Connects.
Virtual Connect modules
Well, well, well…where to begin. Virtual Connect is one of my favourite topics to talk about with blades but it’s also so DAMN hard to explain simply and quickly. At the same time it's definitely one of the coolest things data center computing has seen in the past 10 years.
I’m not gonna start lecturing you about Virtual Connect (now) so if you are not very familiar with Virtual Connect, I can warmly recommend this one exceptionally well-written introduction book called HP Virtual Connect for Dummies. It explains all the basic concepts of server-edge virtualization, purpose and advantages of Virtual Connect, different VC modules etc in a very enjoyable fashion and the best part is, it’s only some 60 pages! So, you can easily read it during a summer holiday. What? That’s reasonably fast for me.
This is how a 24-port HP Virtual Connect FlexFabric module looks from the uplink side. We have a total of 8 uplink ports; the first 4 can be selected to function either in FC or in Ethernet mode, while the last 4 are fixed for Ethernet. So, it is pretty much as close to convergence as we can currently get with the existing standards. And DON’T get me started with FCoE/CEE/DCB n’ stuff. We’re not there yet. Soon, but not yet.
And that weird looking white piece of paper on top of the module is just a sticker with default Administrator passwords, MAC addresses etc. You should stick it somewhere with all the other important papers you have. Just in case.
A couple of so-called transceivers or SFPs that we need to plug into the empty port slots in the VC-FF module. These ones happen to be 1Gbit and 8Gbit versions. You can also use standard 1Gbit RJ-45 ports if you feel like it, no problem.
A couple of SFPs installed in place. We are going to use the 4 leftmost ones for FC connectivity to our brand new 3PAR 7200c storage arrays and the 4 rightmost ports are dedicated for Enet.
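Just to make that port split concrete, here's a tiny illustrative Python sketch (mine, not HP's) that encodes which uplinks can run FC and which are Ethernet-only, and sanity-checks the plan we just described. The X1-X8 port labels are simply how I'm referring to the uplinks here; check your module's own labelling.

```python
# Which personality each FlexFabric uplink can take, per the description
# above: the first four uplinks are selectable FC/Ethernet, the last four
# are Ethernet only. Port labels here are my own shorthand.
UPLINK_MODES = {
    "X1": {"FC", "Ethernet"}, "X2": {"FC", "Ethernet"},
    "X3": {"FC", "Ethernet"}, "X4": {"FC", "Ethernet"},
    "X5": {"Ethernet"}, "X6": {"Ethernet"},
    "X7": {"Ethernet"}, "X8": {"Ethernet"},
}

def illegal_ports(plan: dict) -> list:
    """Return the ports whose requested mode the module can't actually provide."""
    return [port for port, mode in plan.items()
            if mode not in UPLINK_MODES.get(port, set())]

# Our plan: first four uplinks to the 3PAR over FC, last four for Ethernet.
plan = {f"X{i}": "FC" for i in range(1, 5)}
plan.update({f"X{i}": "Ethernet" for i in range(5, 9)})
print(illegal_ports(plan))   # -> [] (nothing illegal, good to go)
```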
And this is an internal view of a Virtual Connect FlexFabric module for all of you who are interested in this kind of stuff. Not much to say here but: “Boy, that’s a lot of fancy stuff in a small space!”
A rear view of a Virtual Connect module. This is how all IC modules look from the rear; no matter if we are talking about Virtual Connects, SAS switches or simple pass-thru modules, the way they connect to the signal midplane internally within the chassis is thru this 180-pin port that handles all the traffic from/to all 16 servers in the enclosure.
Those first two adjacent interconnect bays are reserved for our Virtual Connect FlexFabric modules. Whatever modules are installed in IC bays 1 and 2 always connect to the default ports on all the 16 blades in the chassis. So, make sure your interconnect modules match the blade ports they connect to. FC modules don’t communicate very well (read: at all) with Ethernet ports so, careful.
Installation of an interconnect module is pretty much the same as the installation of an OA module: first push the module far in from the body, then lock it in place pushing the purple handle in.
There, both VC-FF modules installed with all the SFPs we’re going to need. The rest of the interconnect bays are reserved for expansion. To use those 6 bays you need to have an expansion card, called a mezzanine card, installed in the blades. Expansion cards can be for example 2-port FC cards, 4-port Enet cards or something else. Then, depending on which slot the mezzanine is installed in on the blades, you need to use a matching IC module in the back.
You can refer to one of the several port mapping documents on the web to learn more about the c7000. Here is at least one quick and simple explanation of c7000 port mapping.
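For what it's worth, here is my from-memory version of that mapping for half-height blades like our BL460c's, written down as a small Python sketch. The bay numbers are my own assumption, so treat this as a starting point only and verify it against the port mapping documents mentioned above before ordering any mezzanine cards.

```python
# From-memory c7000 port mapping cheat sheet for half-height blades.
# The bay numbers below are my own recollection -- verify against the
# official HP port mapping documentation before trusting them.
HALF_HEIGHT_PORT_MAP = {
    "onboard FlexLOM": [1, 2],        # default blade ports -> IC bays 1 & 2 (our VC-FF modules)
    "mezzanine 1":     [3, 4],        # mezz slot 1 -> IC bays 3 & 4
    "mezzanine 2":     [5, 6, 7, 8],  # mezz slot 2 -> IC bays 5-8
}

def ic_bays_needed(populated_slots):
    """Which interconnect bays need a matching module for these blade slots."""
    bays = set()
    for slot in populated_slots:
        bays.update(HALF_HEIGHT_PORT_MAP[slot])
    return sorted(bays)

# Blades with only the onboard ports plus a mezzanine card in slot 1:
print(ic_bays_needed(["onboard FlexLOM", "mezzanine 1"]))   # -> [1, 2, 3, 4]
```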
That’s more or less the rear of the chassis covered, now onto the front side components.
Power Supply modules
We can use a maximum of 6 x 2650W power supplies with a c7000 enclosure. That’s a whopping 15,9kW of total power! More than three times what the heater in my sauna can produce! Hmm, maybe I should swap my current heater for a c7000 blade enclosure…would be super cool. And also a bit disturbing that I find it cool.
That’s what a c7000 power supply looks like. It's a pretty long, narrow unit and, as mentioned before, has its own built-in cooling system. These power supplies are the only components in the enclosure that the 10 Active Cool fans don’t cool down.
That’s power supply #1 going in. Once again, from the body all the way in, then locking it using the purple handle.
Slot numbering is pretty straightforward: from left to right, 1 to 6. But the best practice population is not. The first power supply goes into slot #1 (as in the above picture), the next one goes to slot #4, then slots 2, 5, 3 and finally 6. You can think of the power supplies as two separate “clusters”: left side (slots 1, 2 and 3) and right side (4, 5 and 6). Then you simply start populating both clusters from left to right.
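If that ordering sounds confusing, this tiny sketch (purely illustrative, just restating the rule above) expresses it in code: treat the slots as two clusters and fill both of them left to right, alternating between the sides.

```python
# The recommended c7000 power supply population order described above:
# two "clusters" (slots 1-3 on the left, 4-6 on the right), filled left to
# right while alternating between the clusters.
def psu_population_order(left=(1, 2, 3), right=(4, 5, 6)):
    """Interleave the two PSU clusters to get the recommended install order."""
    order = []
    for left_slot, right_slot in zip(left, right):
        order.extend([left_slot, right_slot])
    return order

print(psu_population_order())   # -> [1, 4, 2, 5, 3, 6]
```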
Oh, by the way, the LCD display (Insight Display) at the bottom in front of the PS slots slides out of the way horizontally if you need to touch power supplies 3 or 4. That Insight Display is one way of managing the OA module.
We have all six power supplies so, once again, installation is pretty easy. Here you see a fully populated power supply system: all 6 power supplies installed and the Insight Display in front of PS 3 and 4.