It’s hard to describe this issue in a one-line title, so let me give some background. I have two sets of VMs: set 1 has VM A and VM B, set 2 has VM C and VM D. Each VM has a vNIC configured with a private IP address, and VM A and VM C each have a second vNIC configured with an L3 (routable) IP address. The private IP addresses are identical in both sets. To avoid confusion I implemented a vRouter VM for each set. Like VM A and VM C, each vRouter has two vNICs: one connected to the L3 network and one connected to the private network. This keeps private network traffic inside its own set, so the two sets should not disturb each other even though they use the same private IP addresses.
These are the IP addresses I set for each VM:
VM A: 192.168.0.11
VM B: 192.168.0.12
VM C: 192.168.0.11
VM D: 192.168.0.12
The problem is that VM A still gets ping replies from 192.168.0.12 after I turn off VM B. I expected the L2 traffic to go to its own vRouter and discover that VM B is offline. But tracert shows the traffic leaving VM A’s L3 network, reaching the vRouter of the second set, and getting an answer from VM D. It looks like the L2 ping packets are being broadcast on the L3 network.
The issue was fixed by enabling a feature on the L3 network called “Enforce Subnet Check for IP Learning” (Cisco later renamed it “Limit IP Learning To Subnet”). It’s a VLAN-level setting that stops the private IP traffic from being broadcast on the L3 network and forces it to stay on the L2 network.
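To illustrate the idea behind that setting, here is a minimal Python sketch of subnet-checked IP learning. The L3 subnet value is an assumption for the example; the real check happens inside the fabric, not in host code.

```python
from ipaddress import ip_address, ip_network

# Assumed L3 subnet of the routable network (placeholder value).
L3_SUBNET = ip_network("10.0.0.0/24")

def should_learn(endpoint_ip: str) -> bool:
    """Model of 'Limit IP Learning To Subnet': the fabric only learns
    endpoint IPs that fall inside the configured subnet; anything else
    (like the sets' private 192.168.0.x addresses) is ignored."""
    return ip_address(endpoint_ip) in L3_SUBNET

print(should_learn("10.0.0.5"))       # inside the L3 subnet: learned
print(should_learn("192.168.0.12"))   # private IP: not learned on L3
```

With the check enabled, the 192.168.0.x traffic can no longer leak across the L3 network to the other set’s vRouter.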
You may need only one IP address for the blade console in Cisco UCS Manager. You can follow Understanding “Management IP” of Cisco UCS Manager to configure it. If you are cleaning up existing management IP pools, you may see the warning “Vlan ‘xxx’ resolved to unsupported VLAN ID” when you delete the existing inband and outband IP pools.
That’s because the blade’s inband IP address has not been cleaned up. Go to “Equipment” -> “Chassis” -> target chassis -> “Servers” -> target server -> “Inventory” tab -> “CIMC” tab -> click “Change Inband Management IP” -> remove the existing VLAN and IP pool.
You will see the Inband IP tab go blank once it’s saved. Please note, the IP address gets reassigned after about a minute if you click “Delete Inband Configuration” instead of “Change Inband Management IP”.
The reason is that the default KeyRing under Admin -> Key Management has expired. It’s not possible to delete or change the KeyRing in the GUI. You have to log in to Cisco UCS Manager over SSH and run the following commands (the strings after “#”):
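A sketch of those commands, assuming the fix is regenerating the default keyring; verify the exact scope names against your UCSM version before running them:

```
UCS-A# scope security
UCS-A /security # scope keyring default
UCS-A /security/keyring # set regenerate yes
UCS-A /security/keyring # commit-buffer
```

Regenerating the keyring creates a new self-signed certificate, so the GUI session may need to be restarted afterwards.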
You may see “The IP address to reach the server is not set” when clicking the KVM console in Cisco UCS Manager. The issue persists even when Cisco UCS Manager has enough management IP addresses, and re-acknowledging or resetting the CIMC does not fix it.
The fix is to go to “Equipment” -> select the server -> “General” tab -> “Server Maintenance” -> “Decommission” the server.
Wait for the decommission to complete, then re-acknowledge the server. An IP address will be assigned to the server after the acknowledgment process finishes.
I used to see memory degradation on Cisco UCS blades, but rarely on HPE blades. I thought it might be a quality control problem in Cisco’s manufacturing. Today I read two articles on the Cisco website that explain why we see memory degradation and how it works. I attached the articles below.
New B200 M4 blades can run on Intel v4 processors. You may see a discovery issue if your UCSM firmware version is lower than 2.2.7c. I hit that problem a few days ago when I installed a new M4 blade: the FSM hung at 58% for a very long time and eventually failed.
Cisco UCS is the best blade system I have used so far; the hardware, software, and support are all excellent, and I recommend it as the primary platform for virtualization. The UCS blade architecture is different from HP’s; it feels more like a network system. Fabric Interconnect (FI) modules exchange data between the uplinks and the internal components, and the IOMs on each chassis control data routing. The architecture is complicated, but it’s powerful for managing a large datacenter.

Speaking of large datacenters, you may have hundreds of chassis or blades. Data goes through FIs, IOMs, and blades, and issues can appear at any layer, so it’s hard to find out exactly where a problem is. UCS Manager provides per-port statistics just like Cisco network switches do: you can show the statistics of a particular port, but it doesn’t tell you when a problem happened or at which layer.

I tested the Cisco UCS adapter for vRealize Operations Manager before I reviewed the NetApp adapter for vRealize Operations Manager; both are developed by the same company, Blue Medora. I’d like to introduce a few features of this product; this is just my personal review.
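For example, you can pull per-port error counters from the FI’s NX-OS shell; the interface ID below is just a placeholder:

```
UCS-A# connect nxos
UCS-A(nxos)# show interface ethernet 1/1 counters errors
```

This gives raw counters for one port at one moment, which is exactly the limitation mentioned above: it does not correlate errors across layers or over time.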
Whatever you configure on the MDS, whatever you configure on the Cisco UCS FIs, and whatever you do with the port channels on both sides, the Cisco UCS uplink ports stay down with the error message “Initialize failed” or “Error disabled”.