Lenovo ThinkSystem SD530 Server Video Walkthrough

Hello again. My name is David Watts from Lenovo Press. I have with me today Paul Scaini. How are you doing? Paul is the worldwide segment manager for Dense Optimized Systems. Correct. Today we’ll be talking about the ThinkSystem SD530 and D2 Enclosure. Paul, tell us about these systems and who
they’re aimed at. Sure. The SD530 and D2 Enclosure are a 2U 4-node system. Density-optimized isn’t new to the market. There are a lot of 2U 4-nodes out there. It’s not new to Lenovo either. We have current products in the market today: the SD350 and our NeXtScale product line. What makes this a little different from NeXtScale is that NeXtScale is a 6U-high 12-node and this is a little bit smaller. It’s a 2U 4-node. So you’re getting a more digestible package by reducing it down to a 2U, but you’re getting the same density, where
you’re getting the same amount of compute and storage and stuff that you would have in a 1U server in a half-wide configuration. Right. Four systems in a 2U package. Correct. Who’s the audience, who’s the top customer that’s best suited for this system? The early adopters of the density-optimized
servers were the HPC, or high-performance computing, type. Because they just want to pack as many IOPS and FLOPS and stuff like that as they can into a rack. But recently it’s been expanding quite a
bit. It’s becoming kind of a lead platform for
software-defined storage or Nutanix hyperconverged infrastructure—as well as in the datacenter. Basically, anyone who’s trying to virtualize
and get the most amount of compute in a small amount of space will switch over
to a density-optimized server. We’re even starting to see these for the
ROBO market as well. We do have an 1100-watt power supply that runs on 110V specifically for those customers. Right. Very good. Okay, let’s look at the front of the enclosure
first. There’s space for four nodes. We have one installed and three blanks. You can see on the front, here, that each of the nodes has up to six drive bays. Correct. Which is quite a great feature for this type of system… Yeah. …with this much storage. Most of the 2U 4-node systems in the market will have a JBOD of 24 drives across the front. We’ve designed it slightly differently. Due to some of the space limitations, those others are not full half-wide nodes. So you would have to make some trade-offs when it comes to how much you can fit in there. We were able to make these full half-wide nodes and at the same time not sacrifice any drive density. So you really get up to six drives in the front, plus the internal M.2s give you a total of
eight drives per node. Now this particular node has the KVM module here. Correct. Which takes up one of the drive bays. So it’s optional, right? Correct. It is optional. Now, what does this do for you? For those that are still using crash carts
or need to manage it up close and personal, as opposed to using
the network, we offer this as an available part that you
can add as an option. And what that does is, it’s a proprietary
connector that hooks up to this dongle. Right. So this unit, this cable, has a VGA port, two USB 2.0 ports for your keyboard and mouse and a serial port. So, if you’re looking for local connectivity… Correct. …into the node, then this will provide that. The KVM module also includes a USB 3.0 port. This port, as well as giving you traditional
USB access to the node, also serves as the front connection for XClarity Controller connectivity. If you have a mobile phone or tablet, you can connect it via the USB tether function. That will give you access to the XClarity
Controller, the management processor in the node. That will allow you to do local management
of the system. Correct. And also in front you’ll have typically
what you’ll see on the front of servers. Our power button, and our UID button. Yeah. So looking at the front here, this first one
is the power button and LED. Next to it, the hole, the round hole, is the
NMI button. And next up, the slot hole, that’s the temperature sensor input. And the third one is the ID button and LED. That will allow you, if you’re in a rack,
you press the button and that will light up the blue light at the back as well. You can enable that ID function remotely as
well. So, if you’re going to work on a system
in the data center, you’ll know exactly which system to work
on, and not start working on the wrong one. Now, further, is this pull-out tab. On the production systems, this will have
a little label on it with the networking information for the XClarity
Controller interface. Correct. Let’s take a look under the hood. Sure. So, the nodes come out very easily. Just unlatch there and remove. And put that on top. And remove the cover, like so. There we have the system. So, Paul…? As you can see, in the front of the system
we have our 6-drive backplane that we’re showing here today. That comes in two flavors: an all-SAS/SATA version and one with two AnyBay bays that enable you to use high-speed NVMe drives. Right. Yeah. That’s pretty similar to what we have in the rest of the portfolio. Yeah, so you can have two NVMe drives here, if you wish, if you use the NVMe backplane. Correct. It also supports a 4-drive backplane, if you so desire. All-SAS/SATA, that’s an option as well. Correct. Yep. As you can see, this is a 16-DIMM system. So you’re looking at eight per socket. To be a balanced configuration, we’re looking
at … most customers will be using it as a 12-DIMM system. Yeah, the system is designed for optimized memory performance with 12 DIMMs. However, of course, if your application can take advantage of 16 DIMMs, even if there’s a slight performance drop, then the 16-DIMM capability is available. Due to the density of the system, you have to use smaller heat sinks, and that will limit the processors you can use with a 16-DIMM solution. But it’s certainly there if you need it. Yeah. I would add, too, of course, that these are the new Intel Xeon Processor Scalable Family CPUs. We support CPUs all the way through
to the…165W processors? Correct. Yeah, including the highest core count of 28. Right. So, if you’re looking for a dense offering with significant processor cores, the SD530 is an excellent solution here. So back here we have our hardware RAID controller. Yes, just let me open that up for you. Two SAS connectors to the front drives. Remember there are six drives in the front
bay. So that’s a hardware RAID adapter. It connects to the PCIe connector just there. You’ll see there are additional PCIe connectors. There’s one more here and two farther up here. These are for future use, when we bring in the GPU tray… Correct. …for this system. So, each of those connectors is eight lanes of PCIe, and that gives you a total of sixteen per CPU that
can then be used for the companion tray that gives you the option of adding GPUs. Now this system also supports SATA connections, so if you don’t wish to use hardware RAID or you just want to use SATA drives, for example, then the SATA options are there, too. That will allow you to use SATA SSDs or
hard drives in the bay as well. Now, I would point out, too, Paul, the M.2
adapter here. So as well as the six drives at the front, the server also supports two—one or two—M.2… That’s correct. …drives. Tell us about that. So, we decided, across the portfolio, that we were going to move away from booting internally off SD cards or USB
keys. We just felt it wasn’t truly enterprise-class, it wasn’t robust. So it uses the exact same M.2
module and controller that you would have on any of the other products we have in our entire portfolio. And the one that’s installed here is the
single adapter… Correct. …that has space for one M.2 card. Yes. We also offer the…this is the M.2
dual adapter that has space for one M.2 card on each side. Correct. This is ideal because this adapter supports
RAID 1. So it’s enterprise-grade quality. It can tolerate the failure of one of these
M.2 cards and the system still works as normal. Correct. David, as you might know, there are a few
things missing from this node that you would normally see in a
server. One of them being your LOM, or LAN on motherboard, and the other being your PCIe slots. So what we’ve done in this box is we’ve completely disaggregated the I/O and the PCIe from the node itself and moved it to another part of the server, which we can, I guess, talk about now. Shall we spin it around? Sure. So those…all those connections are at the back of the enclosure. The thing to start with…this here is the External I/O Module, the EIOM,… Correct. …and there are eight connections. These are 10 gig Ethernet connections. And two of them…I’ll show you this label here …two go to each of the nodes. It’s a direct connection to each of the nodes… Correct. …at the front of the enclosure. The EIOM is available both with RJ45 connectors for 10GBASE-T… …or SFP+ cages if you want to
use optical connections. Exactly. Or, if you don’t need them at all, you don’t have to populate it. So, disaggregating the I/O from the node itself gave us added flexibility: a swappable module in the back. So, shall we open up the I/O tray and… Let’s take a look. Yes sir, we can do that. Release the levers…unscrew these handles here and pull this out. Of course, you do it while the power is off. Of course. Put it on top. Okay, so this is the… We’re calling it the Shuttle. The Shuttle. The I/O Shuttle. Correct. Yes. Okay, so tell us…what are we seeing here? The Shuttle itself is the main part that controls all the thermals. So you see all the fans here, as well as the power, because the power supplies are in the back. But what makes this a little bit more unique is the fact that…we talked about our EIOM module here. And then it also takes the PCIe lanes. So, via this connector, there are two risers here that give you, in this configuration, two x8 PCIe slots per node. So you can see on the side here, there’s
1, 2, 3 slots here and there’s a fourth one on the back side. So it’s two PCIe x8 connections for each of the nodes. Direct connection. Exactly. Correct. And we also offer a second shuttle, a different version, which we have over here. Yes. It’s over here. This is our x16 shuttle. Customers that use this want lots of I/O.
Someone who uses this might populate it with a whole bunch of 10 gig cards. For those customers, like the HPC customers, who want a high-speed fabric, like Mellanox InfiniBand or Intel Omni-Path (OPA), they’re going to want these x16 PCIe slots, because that’s what’s required to get that kind of optimal I/O. Now, as well as the difference in lane widths—this is x8 and this is x16—on the x16 shuttle, each of the x16 cards
has its own magazine—own cassette I should say… Correct. Which means you can easily remove the adapter by simply moving the lever across and pulling the card out. Right. This cassette holds each individual adapter. Now the advantage of this is you can
remove the adapter by simply powering off that one node and then removing the
adapter from the system. As opposed to the x8 solution. Because of
the way it’s designed, you have to power off the entire enclosure to get access to those. So, if you’re looking for a high availability
solution, and you don’t need the larger number of slots, certainly the x16 cassettes are a big advantage there. Yup. Now this unit over here… Let me pull this out… This is the SMM, the System Management Module—the management
module for the entire enclosure. Correct. So not only does the chip on here support the fan and power controller for the entire system, what we’ve done is we’ve added a switch on here that connects to the XClarity Controller on each of the nodes individually. What that does, is it gives you a single management port for the fan and power controller and for each of the nodes. So if you think about our NeXtScale System
today, on the NeXtScale System each of the nodes individually on the front has its own
management port. And then there’s a separate
port in the back for the fan and power controller. So in a 6U configuration, you’re looking
at 13 management ports to manage the entire system. Here there’s just one. We wanted to simplify that. So in the 2U system you only need one cable to support all four nodes, plus the fan and power controller. And this also has some LEDs on the back, too, including the ID function for determining which node you’re working on from the back of the system. This is a hot-swappable device, so if you need to you can pull this out while the enclosure and the nodes are still running and replace it if need be. True. And the fans themselves are hot-swappable. There is a translating rail kit that comes with the system that enables you to slide the enclosure out on the rails. And then there’s a hatch that enables you to access them while in the rack. Yes, so without shutting down the enclosure you can perform maintenance on any of these five fans: three 60 mm fans and two 80 mm fans… Correct. …cool the entire enclosure. Right. Power supplies in the back. Two hot-swap power supplies, redundant in most configurations. There are three choices, Paul. 1100W… Correct. …1600… Correct. …and 2000. 2000. That’s right. The 1100 watt, as Paul mentioned in the beginning, supports 110V. All three of them support 220V. Correct. All right, so I think that’s about it. Yep. So, this is, again, the ThinkSystem SD530… Yep. …our density-optimized system, complete with the D2 Enclosure, that supports four of them. It’s a 2U enclosure, ideal for a variety of solutions: HPC, as well as ROBO and virtualization. I hope you found the video useful. Paul, thanks very much. No problem. And we will see you later. Bye. Yes.
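As a practical aside, the remote ID function discussed in the video can typically be driven over the network rather than from the front panel. This is only a minimal sketch, assuming the node’s XClarity Controller has IPMI-over-LAN enabled; the address and credentials below are placeholders, not real defaults:

```shell
# Light the ID (UID) LED remotely so the right node can be found in the rack.
# 192.0.2.10, ADMIN, and 'secret' are placeholder values for illustration only.
ipmitool -I lanplus -H 192.0.2.10 -U ADMIN -P 'secret' chassis identify force

# Turn the ID LED back off once the work is done.
ipmitool -I lanplus -H 192.0.2.10 -U ADMIN -P 'secret' chassis identify 0
```

The `force` keyword keeps the LED lit until it is explicitly turned off, whereas a numeric argument lights it for that many seconds.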
