output | instruction | input |
---|---|---|
Already found the answer. It is possible to make a local replica, delete the replication session, and then clear the "destination attribute" via UEMCLI on the destination LUN to make it accessible. [URL] | Hi! I'm new to EMC Unity. Is there any way to do a local LUN copy on a Unity system? | Local LUN copy |
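As a rough sketch of those steps: the management address, credentials, and the session/LUN IDs below are placeholders (not from the post), and the exact attribute name should be confirmed in the Unity UEMCLI guide for your code level:

```shell
# 1) Delete the replication session that produced the local copy
#    (placeholder address, credentials, and session ID):
uemcli -d 10.0.0.1 -u Local/admin -p MyPassword /prot/rep/session -id 42_sv_1_sv_2 delete

# 2) Clear the replication-destination flag on the destination LUN so hosts can access it
#    (sv_2 is a placeholder LUN ID):
uemcli -d 10.0.0.1 -u Local/admin -p MyPassword /stor/prov/luns/lun -id sv_2 set -replDest no
```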
I would first double check that the PERC driver is on the latest version, as well as the OpenManage software. If the OpenManage software is on the latest version, you can restart its services by pulling up the Services pane and stopping all 4 DSM services, then restarting them. That can fix a number of issues with OpenManage and will let us verify that the issue isn't a software error. I'm also happy to look at a PERC log if you would like to PM me one. <ADMIN NOTE: Broken link has been removed from this post by Dell> | Hi. We have an R710 with an H700 controller with 5 physical disks in RAID 5 that we are swapping out to allow adding more space. The disks have been swapped by using Offline/remove/replace with new unit/wait for rebuilding to complete, then repeating the steps for each disk. The array currently has a single Virtual Disk in MBR mode - which we plan to leave alone - then add a second Virtual Disk configured with GPT to allow a greater-than-2TB volume with the remaining added space. The old disks were 300GB each (all identical models, either Dell-branded or OEM Seagates). The new ones are 2TB each - again all identical models bought in a batch: Seagate ST2000NM0045 Exos 7E8 Enterprise, 3.5", SAS. The first 4 drives are all on Backplane Connector 0 (so they ID as 0:0:0 through 0:0:3) and all report space as: Capacity = 1,862.50GB, Used RAID Disk Space = 278.88GB, and Available RAID Disk Space = 1,583.62GB. The last disk is on Connector 1 (so it IDs as 1:0:4) and reports the same Capacity and Used RAID space, but it reports the Available as 0.00GB. So I'm being prevented from adding a new Virtual Disk, as the OMSA wizard reports no free space. Our H700 controller details: Firmware Version 12.10.7-0001, Driver Version 6.801.05.00, Storport Driver Version 6.1.7601.18386. I have tried removing and reinserting that last disk, but after the rebuild it still reports the same zero available. Could this relate to it being on the second enclosure backplane (Connector 1)? I'm planning a reboot of the system overnight just in case a hardware-level restart is required - but I'd welcome any other guidance! | H700 last disk Available RAID Disk Space = 0 |
The write cache on Unity is also battery backed. The difference between VNX vaulting and Unity vaulting is the location the data is vaulted to (system drives on VNX, internal M.2 SSD on Unity). Both still need the battery to do the vaulting process in the event of a catastrophic power failure. | Hi, does anyone have experience in disabling battery (BBU) alerts on Unity? Any help much appreciated. Many thanks, Vivek | Disabling alerts - Battery |
I swear the keys were added and appeared when I checked yesterday. But today they weren't there. After I added them (again), access is working as it should be. Sorry for the distraction. | I have installed a valid openssh public key to the iDRAC8 on an R230 server to a new user account (not root). But when running ssh to that new account from the system with that key, I am prompted for a password. What am I missing? I see online that a different Dell system restricts this usage to a specific user account name. Is there a similar restriction for this iDRAC8? | iDRAC8 ssh key-based authentication |
Hello, I wasn't able to find the PowerShell cmdlets on our website, either. I did find something that looks like it may be what you need on the Dell GitHub page, though. Check it out and see if it helps. [URL] | Hi, I can't manage to find the DellPEPowerShellTools module which is normally available here: <ADMIN NOTE: Broken link has been removed from this post by Dell> There aren't any download links on that page. Also, my Google searches come up empty. Any ideas? | Dell PowerEdge PowerShell cmdlets | DellPEPowerShellTools |
We solved it. Internal problem. Closed. | Hey, we have been running OME for a month, and after a while some hosts don't display any information in the Configuration Inventory tab. We also aren't able to create a Compliance Baseline because we don't have information about the devices. Is there a fix or workaround yet? | No Information in Inventory |
Wrong speed/latency, wrong size, wrong layout, wrong type … something is wrong with it. Try a single stick in the first slot. I would probably send it back and buy something you can confirm the specs on. | Hi, I have a Dell PowerEdge T310 with an Intel Xeon X3440 in it. In theory it supports 6 x 4GB RAM. I wanted to upgrade my RAM from 8GB to 24GB. It has two 4GB 1333 registered ECC sticks, so I have bought four more sticks with the same specs. When I installed them, I got an error on the server's LCD screen saying: "Memory is configured but not usable". My BIOS version is the latest possible. I have tried with four 4GB sticks, same error. Any ideas? | Memory is configured but not usable |
SOLVED. I found the cause and resolution to the problem. It seems that during an in-place upgrade of the QLogic Drivers and Management Applications package from 20.20.2.2 to 20.20.3.2, which is a QCS upgrade from version 30 to 40 (drivers remain the same for BCM57xxx in my case), a DLL is not updated, probably due to poor packaging of the DUP. Broken QCS - gamapi_x64.dll version 1.0.27.0 - 982,016 bytes. Working QCS - gamapi_x64.dll version 1.0.27.0 - 990,720 bytes. Notice the same DLL version but different size? Shame. Simply putting the correct version of the DLL in the QCSR\ folder fixes the issue without having to totally remove and re-install the drivers and applications - very annoying if you're using FCoE and/or iSCSI functions. I got the proper DLL from the latest DUP, Network_Driver_F58MD_WN64_34.07.00_A00-00_01.exe. Nice one, Dell. | Hello all, I've recently updated the QLogic drivers on our 12G and 13G systems with the latest DUP, Network_Driver_F58MD_WN64_34.07.00_A00-00.EXE, [URL] but now QCS crashes on launch with the following error: Faulting application name: QCS.exe, version: 40.0.16.0, time stamp: 0x5ab27e63 Faulting module name: QLogic Corporation\QCSR\QCS.exe!GAM_SetVLANTableCfg, version: 6.3.9600.19153, time stamp: 0x5b93ffa7 Exception code: 0xc0000139 Fault offset: 0x00000000000ecf30 Faulting process id: 0xa60c Faulting application start time: 0x01d4abf3646ee005 Faulting application path: C:\Program Files\QLogic Corporation\QCSR\QCS.exe Faulting module path: QLogic Corporation\QCSR\QCS.exe Report Id: a22e7ec6-17e6-11e9-8123-00074329ad08 Faulting package full name: Faulting package-relative application ID: All systems are running Windows 2012 R2, which are kept up to date and patched monthly. Any thoughts? | QLogic Control Suite - crashes on startup |
All, I replaced both power supplies and the server is running with no issue. I guess when the one power supply blew out, it also caused some type of damage to the other. I was hearing a high-pitched squeal in the working one. After I replaced both, I also tried booting with just one of the two and it continued to work. With voltage, sometimes redundancy doesn't work, I guess. In any case, I've saved a server. | Hi All, A bit of background on this server: it had a blown PSU, after which the server would not boot. I pulled it from our data center to troubleshoot and it's a bit of a strange one. The server starts up and POSTs with the working PSU, but after it gets to the iDRAC and finishes POSTing it reboots. I took out CPU 2 and, with just one memory module and CPU 1, the server POSTs and goes into the BIOS. I then took CPU 2 and put it into the CPU 1 socket, and the server boots and gets into the BIOS. It only has issues when both CPUs are in there. I suspect there may be an issue with the other PSU even though the server is booting. I'm going to try to get two good PSUs from another 720xd and see if the issue goes away, to rule out the MB as the cause. If the issue persists with two other PSUs, would the issue then be with the MB? Thanks | R720xd posts but reboots |
I checked again, and it is definitely a license restriction. I deleted the Enterprise license from one of our lab servers and the option to specify the location became grayed out. [IMAGE][IMAGE] | I just received a brand new T440 server. Unfortunately it does not allow me to update the firmware to the latest version. When I log in to the iDRAC web interface of that T440 server and click on "Maintenance" -> "System Update" -> "Manual Update", I only get a grayed-out, disabled "Location Type" drop-down box which is firmly set to "Local". So I cannot select "http", "https", "ftp", etc. [IMAGE] I don't assume this is normal, is it? I have checked the network configuration settings a dozen times. I tried DHCP as well as manual network configuration (with, of course, correct static IP, netmask, gateway and DNS server). But the result is always the same: I can log in to the iDRAC web interface, but the update feature is disabled as shown above. I also checked whether iDRAC actually has a working Internet connection by clicking on "Diagnostics" and then pinging a Dell download server, and that went just fine without any error or packet loss. What I could not check so far was whether iDRAC for some reason has an issue with DNS resolution, because I could not find a diagnostic command that would do that. Is there one? Also, what I found strange: this T440 server was shipped with iDRAC version 3.21.21.21 - that's from June 2018! Is it normal nowadays that Dell servers are no longer shipped with an (at least somewhat) recent firmware version? I mean, Dell sent me this server just days ago. | T440 Firmware Update disabled |
I can verify that if a spare disk fails it will follow all the failure processes of an active disk. This includes a solid amber light on the disk externally. Disk-light blinking is only enabled when you turn it on in DSM. See "Enable or Disable the Disk Indicator Light" on page 284 of the Dell Storage Manager 2016 R3 Administrator's Guide: [URL] | Sorry for the disturbance, but does anyone know the list of HDD/SSD events that will cause the front panel LEDs to indicate them? On a particular SC4020 system I found: - The failure of a single HDD in the hot spare pool changed nothing in the drive cage indicator state (both LEDs remained green); however, the fault was perfectly registered in the web console; - The LED control from the web console works fine, both via the pop-up menu and the HDD replacement wizard; - An HDD failure in a production pool was also not displayed via the HDD cage LEDs. To me this behaviour looks strange, because periodically looking for an 'alarm lamp' remains the easiest way to briefly check the hardware state in the machine room. Could this be an SCOS issue? [IMAGE]SC Screenshots | Compellent SC4020 front LED logic |
Yes, the first-generation board does have limited support for non-130W 5600-series Westmere CPUs. The information I provided previously was not accurate. There is a processor information update in the documentation section of the R710 support page. It has more detailed information. [URL] | Greetings! I recently bought a PE R710 and it appears to be Gen 1, I believe: the DRAC says the System Revision is "I" and labels on the motherboard suggest the motherboard is 0N047H; however, that doesn't match the label on the side of the machine, which is 0H241F. I want to upgrade the CPUs to the fastest, highest-core-count CPUs I can put in it for my needs. As far as I understand, considering it's Gen 1 and can't host 130W TDP CPUs, I can go for the X5675. Is that correct? The current BIOS version is 6.4.0 - can I use it, or is an upgrade needed to host the X5675? I'd rather not upgrade the BIOS because I don't want the Spectre/Meltdown security mitigations - currently performance is more important than security. Side question: in Power Monitoring in the DRAC (as well as on the LCD), the power reading says 245W at idle and DROPS when the system gets pushed hard, which does not make any sense. Measured with an external tool, it is actually 125W at idle. I also upgraded to the latest DRAC FW 2.91 but the readings didn't change. | CPU upgrade PE R710 |
Hello, I think you may be experiencing this issue: [URL] Thanks | When installing the non-GUI Hyper-V Server 2016 on a new server, I never get farther than the Windows logo. After the "Loading files ..." progress bar it just freezes - no spinning circle of dots or other signs of activity. Here's what I've tried: * Updating all firmware/drivers * BIOS * Chipset * RAID controller * iDRAC * Drivers for OS Deployment * Install using Lifecycle Controller OS Deployment (via browser) * With and without Secure Boot * With and without selecting Server 2016 to add drivers during the install * Install from a bootable USB stick with the Hyper-V Server 2016 ISO * Tried front and back ports * Activated Generic USB in the BIOS * Downloaded and recreated the USB stick a couple of times * Let the install run overnight to see if it was just hanging for a very long time. To see if I was completely off course, I tried installing Hyper-V Server 2012 R2, and it worked on the first try using the Lifecycle Controller and selecting Server 2012 for the drivers. Any insight into what I may have done wrong or am overlooking? Any details I need to provide to understand the problem better? Thanks in advance. | PowerEdge R6415 Install of Hyper-V 2016 Server Freezes |
Hello zj0328, What you want to do first is reseat the SPS cable on SPS A to see if it was loose. If you happen to have a spare SPS cable, then you can also try swapping the cable. When you look at the SPS, does it have an amber light, or is it green? If you don't have either SPS working and you lose power, your CX700 can't save the data that was active on the SPs. Please let us know if you have any other questions. | SPS A shows a fault, and SPS B reports "Invalid Multiple Cable". What should I do now - replace the faulted A first, or resolve B first? Also, what are the consequences if both SPS units are faulted? | CX700: Invalid Multiple Cable SPE Fault |
Hello, The iDRAC6 does not have security features that are considered secure by current standards. Modern browsers and plug-ins like Java and ActiveX have increased minimum security requirements. The only two ways I know to get the iDRAC web server and console to function are to either reduce security or use older browsers and plug-ins that are compatible. Reducing security may not be easy; it may require using development tools that are not user friendly. Thanks | Hello, so I bought a new PowerEdge T610, and when I go to launch the remote access console I get an error (seen below). [IMAGE] The settings are: [IMAGE] | Java unable to launch iDRAC application |
You need to use a Dell Update Package from either the iDRAC web interface or the operating system. If the backflash still fails, then it is not permitted. The backflash will not be permitted if conditions are met that would cause the system to be unable to complete POST after the backflash. The conditions are listed on the 1.4.5 download page. No backflash method will work if those conditions are met. | Hello, I am trying to down-rev an R640 BIOS from 1.6.12 to 1.3.7. The BIOS was initially 1.6.11 and I was able to update to 1.6.12, but I cannot down-rev to 1.3.7 from 1.6.11 or 1.6.12. The issue is only seen on one R640, as I can down-rev other R640s. The down-rev BIOS is staged fine but does not update correctly after a reboot. I have tried a power drain and racadm systemerase commands with no success. This is what the LC Log says for the failed BIOS update. | Cannot down-rev R640 BIOS |
Hello, Those are the old, deprecated commands. I think this is the command you are looking for: racadm get/set idrac.ldap.GroupAttributeIsDN That will change the "Use Distinguished Name to Search Group Membership" option that is configurable for LDAP in the web interface. Many of the commands are case insensitive, but if you encounter issues running a racadm command, use the proper case to see if it resolves syntax issues. You can reference the RACADM CLI guide for more information. The latest documentation appears to be on the firmware 2.05 support page. [URL] [IMAGE] Thanks | Hello. I want to configure my iDRAC8 for LDAP connections. On the web page, Configuration > "iDRAC Settings" > "User Authentication" > "Directory Services" tab > "Common Settings", I want to check the box "Use Distinguished Name to Search Group Membership". I want to do that automatically, using a command line. Does anyone know the command (cfgldap, cfgldaprolegroup...) and the syntax to use? Thank you in advance. | How to check "Use Distinguished Name to Search Group Membership" |
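To illustrate the suggested command with remote racadm, a minimal sketch; the iDRAC address and credentials below are placeholders, not values from the post:

```shell
# Read the current value of the LDAP group-membership search setting
# (placeholder address/credentials -- substitute your own):
racadm -r 192.168.0.120 -u root -p calvin get idrac.ldap.GroupAttributeIsDN

# Enable "Use Distinguished Name to Search Group Membership":
racadm -r 192.168.0.120 -u root -p calvin set idrac.ldap.GroupAttributeIsDN Enabled
```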
The settings are under iDRAC Settings > Network, on the Services sub-tab. [IMAGE] However, passphrases are set under User Authentication. Under User Authentication, select your user; as you can see, there is an SNMPv3 column and it is disabled. Then, on the user profile page, select Configure User and click Next. [IMAGE] There you will find the SNMPv3 Enable check box and your passphrase type setting. The passphrase is the password configured for the user. Hope this helps. [URL] [URL] | Hello, I would like to monitor my PowerEdge servers via SNMPv3 with secured authentication and encryption. But I couldn't find where to configure the SHA and AES passphrases in the iDRAC web UI. I can activate both options in the user parameters, but I don't know where to configure the passphrases. I tried making some SNMPv3 requests without these options; it works well, but the data transits in clear text. Model: PowerEdge T330, iDRAC version 2.52.52.52. Is there anyone who can help me on this matter? Thanks very much in advance! Guewen | Passphrases SNMPv3 on iDRAC8 |
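Once the user's SNMPv3 option is enabled, the encrypted query can be tested from a Linux host with net-snmp; the user name, passphrases and address below are placeholders, and the OID is Dell's enterprise subtree:

```shell
# authPriv security level = SHA authentication + AES privacy.
# Both passphrases are the password(s) configured for the iDRAC user
# (all values here are placeholders -- substitute your own).
snmpwalk -v3 -l authPriv \
  -u idracuser -a SHA -A 'AuthPassphrase' \
  -x AES -X 'PrivPassphrase' \
  192.168.0.120 1.3.6.1.4.1.674
```

If this walk returns data while a plain `-v2c` query is refused, authentication and encryption are in effect.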
I am afraid such a configuration is not available in OME. I'll pass this valuable feedback on to the team concerned. | Hi all, I have been receiving numerous Authentication Error alerts from my Domain Controller. After doing some digging, I found that even though I had set up the SNMP service on the DC to accept alerts from the IP of my Essentials server, it was saying I had an unauthorized server sending SNMP requests. This is how I learned that somehow the Essentials server was attempting to connect to the DC via SNMP from a different IP than the IP of the virtual switch configured to communicate with the LAN. It was attempting to use the IP of the switch configured to communicate with my SAN. I was wondering if anyone else has experienced this issue of OME attempting to use the incorrect Ethernet adapter, and if so, what did you do to resolve it? Thanks. | OME Authentication Error |
You can run "ip route add" to add the route to the routing table. If you can then ping a server in the new VLAN, edit the /etc/sysconfig/network/routes file to make the route permanent. You will need to add the route on the DD as well, but that can be done from the DD management GUI. | Hello All, I don't know whether this is the right question to ask, but as a newbie to the Avamar product, I'd like someone to answer my question about adding static routes to Avamar (single-node grid). Is it even possible? We have two networks (network 1, network 2) which are isolated, but recently we decided to back up all the clients from network 2 to network 1. Since both networks are segregated, the networking team created a routing interface for devices in network 1 to talk to devices in network 2 and vice versa. So my question is: is it possible to add a static route on the (Avamar) IDPA so all the devices in network 1 can talk to devices in network 2? Note: the networking team did a ping test from all the routers (16) in network 2 to the Avamar and to the routing interface, and everything is reachable. Even the Avamar in network 1 can reach the routing interface that was created, but cannot reach any of the routers (16) in network 2. There are no firewalls on the routers in network 2. So what changes need to be made on Avamar/DD? Is it even possible? I hope this makes sense. Let me know if you have any questions; I can provide more details if needed. Thanks in advance, PK | Adding static route in Avamar |
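A sketch of those steps, run as root on the Avamar node, with assumed example addresses (network 2 as 10.2.0.0/16 behind a gateway 192.168.1.254 reachable on eth0 -- none of these values come from the post):

```shell
# Temporary route -- takes effect immediately but is lost on reboot:
ip route add 10.2.0.0/16 via 192.168.1.254 dev eth0

# Verify a host in network 2 is now reachable (placeholder address):
ping -c 3 10.2.0.10

# Persist it on the SLES-based node; the routes file format is
# "destination gateway netmask interface" (a CIDR destination with "-"
# as the netmask field is also accepted):
echo "10.2.0.0/16 192.168.1.254 - eth0" >> /etc/sysconfig/network/routes
```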
To summarize, it appears that my power supply was replaced by Dell Part Number: YFG1C. So firmware updates for the legacy power supply are not available. I picked up two of these YFG1C supplies cheaply online and will see how my T610 likes them going forward. Thanks Dell for all the help here. | [I posted this message on Monday, but it appears to have disappeared from the forums for some reason, shrug.] I have a Power Edge T610 with a single 870W power supply (Dell Part Number PT1641). I appear to be suffering from this issue Power Supply Error on Dell R710 The issue being errors reported with temporary communication failures with the power supply. My power supply appears to have firmware version 03.02.50, which is indicated as having this bug. My question is where can I download the firmware update for this power supply? I have tried the following: 1) The Dell support website for my service tag lists a firmware update for the wrong power supply. 2) I attempted to update the firmware via the Lifecycle controller, but after downloading the catalog and verifying it, the LC reports that no updates are available. Thank you. | Seeking Firmware update for T610 Power Supply PT1641 |
Hi Joffer77, I covered the URL location in a previous post here: [URL]#M837 Essentially it looks at [URL], and that in turn points to a tar.gz in the archives directory. I've copied the content of that URL to a webserver on our disconnected network and everything works as expected. I've successfully updated to 3.1.0 without issue (it takes about 15-20 minutes on our virtual appliance). | Hi. We have OpenManage Enterprise 3.0.0 installed. I see that OME v3.1 is out (<ADMIN NOTE: Broken link has been removed from this post by Dell>), but it seems to be an OVF install only, not an update package? Will there be an upgrade ISO or something? I don't see any update if I go to Application Settings > Console Update. I have it set to Automatic Check and Online (repo). Does anyone know which online URL it's trying to connect to, so I can troubleshoot possible connection issues? Isn't there a way to SSH into the appliance to do some local troubleshooting? | Upgrade OME from 3.0 to 3.1? |
Hello, I'm not sure if that is possible on the iDRAC6. The option you are looking for is cfgnicselection. It is covered on page 157 of the manual. The latest manual appears to be on the support page for firmware version 1.95. [URL] I am unable to find anywhere that you can specify the shared LOM interface; the available options I can find are covered on page 157. Being able to specify the LOM interface is likely a feature added in later generations. Thanks | I am working with a PowerEdge R710 and attempting to make some configuration changes to the iDRAC to get it up and running on the network. I have done this with all of our newer servers with ease, but this one does not have a dedicated NIC for the iDRAC. Looking at the configuration, I see it is set to use LOM1: LOM Status: NIC Selection = Shared, Link Detected = Yes, Speed = 1Gb/s, Duplex Mode = Full Duplex, Active LOM in Shared Mode = NIC1. I want to change this to use LOM4 via the RACADM command line, but for the life of me I cannot find the command to do this. Could anyone here offer me any insight? I would sincerely appreciate it. | Looking for a particular iDRAC command |
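For reference, the legacy iDRAC6 syntax for this option looks roughly as follows. The group/object names come from the iDRAC6 RACADM guide, but the numeric codes map only to shared/dedicated/failover modes (not individual LOMs) and vary by model and firmware, so treat the values here as assumptions and confirm against the guide:

```shell
# Show the current NIC selection on the iDRAC6:
racadm getconfig -g cfgLanNetworking -o cfgNicSelection

# Set it (numeric mode code -- the exact mapping of numbers to
# shared/dedicated/failover modes is firmware-specific; check the
# RACADM guide for your version before running this):
racadm config -g cfgLanNetworking -o cfgNicSelection 2
```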
Hello DanPosey19, When you get the physical disk channel error, what you want to do is check to make sure all drives are online and present. Are there any expansion enclosures attached to your MD3000? Here are the commands that need to be run: smcli -n -p password -c "clear allPhysicalDiskChannels stats;" smcli -n -p password -c "set physicalDiskChannel [0] status=optimal;" smcli -n -p password -c "set physicalDiskChannel [1] status=optimal;" smcli -n -p password -c "set physicalDiskChannel [2] status=optimal;" smcli -n -p password -c "set physicalDiskChannel [3] status=optimal;" If after running the commands you are still getting the error, then we will need to review a support bundle from your MD3260i. Please let us know if you have any other questions. | Hello, I have an MD3260 array. I found a Physical Disk Channel Degraded message; all drives report as optimal. What can I try to troubleshoot and resolve this issue? Date/Time: 12/31/18 7:31:42 AM Sequence number: 128471 Event type: 1209 Event priority: Warning Description: Physical disk channel degraded Event specific codes: 0/0/0 Event category: Error Component type: Channel Component location: Physical Disk-side: channel 0 Logged by: RAID Controller Module in slot 0 Raw data: 4d 45 4c 48 03 00 00 00 d7 f5 01 00 00 00 00 00 09 12 16 11 5e 36 2a 5c 00 00 00 00 00 00 00 00 01 10 00 00 01 00 00 00 06 00 00 00 06 00 00 00 01 00 00 00 01 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00 01 00 00 04 80 00 00 00 20 00 47 07 00 00 00 00 02 00 00 00 00 00 00 00 75 0a 4e 5b 3a 36 2a 5c f2 07 cd 0a 2a 1d 01 00 de 19 01 00 20 00 47 87 90 01 00 00 e5 45 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 25 00 00 00 20 00 47 87 08 00 00 00 88 d5 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 10 00 47 87 0e 7a f3 c6 28 7d f3 c6 39 00 00 00 14 00 00 00 | MD3260 Physical Disk Channel Degraded |
Hi, it looks like a memory error that is getting corrected. If it is working now, I would continue to monitor. Rebooting the stack and then updating the firmware would be the next steps. | Hi, I got this error block yesterday (once); the firmware version is 6.3.3.10: <187> Dec 28 21:37:10 10.222.0.10-2 DRIVER[bcmDPC]: trident.c(3115) 12873 %% unit 0 L2_ENTRY_ONLY parity hardware inconsistency <187> Dec 28 21:37:10 10.222.0.10-2 DRIVER[bcmDPC]: trident.c(3115) 12872 %% unit 0 L2_ENTRY_ONLY parity hardware inconsistency <187> Dec 28 21:37:10 10.222.0.10-2 DRIVER[bcmDPC]: trident.c(3115) 12871 %% unit 0 L2_ENTRY_ONLY parity hardware inconsistency <187> Dec 28 21:37:10 10.222.0.10-2 DRIVER[bcmDPC]: trident.c(3115) 12870 %% unit 0 L2_ENTRY_ONLY parity hardware inconsistency <187> Dec 28 21:37:10 10.222.0.10-2 DRIVER[bcmDPC]: trident.c(3115) 12869 %% unit 0 L2_ENTRY_ONLY parity hardware inconsistency <187> Dec 28 21:37:10 10.222.0.10-2 DRIVER[bcmDPC]: trident.c(3115) 12868 %% unit 0 L2_ENTRY_ONLY parity hardware inconsistency <187> Dec 28 21:37:10 10.222.0.10-2 DRIVER[bcmDPC]: trident.c(3115) 12867 %% unit 0 L2_ENTRY_ONLY parity hardware inconsistency <187> Dec 28 21:37:10 10.222.0.10-2 DRIVER[bcmDPC]: ser.c(4655) 12866 %% CLEAR_RESTORE: L2X[3349] blk: ipipe0 index: 75779 : [0][0] <187> Dec 28 21:37:10 10.222.0.10-2 DRIVER[bcmDPC]: ser.c(4276) 12865 %% SER_CORRECTION: reg/mem:3349 btype:-1 sblk:0 at:1 stage:0 addr:0x00000000 port: 0 index: 75779 <187> Dec 28 21:37:10 10.222.0.10-2 DRIVER[bcmDPC]: trident.c(3224) 12864 %% unit 0 L2_ENTRY_ONLY entry 0x7132803 parity error <188> Dec 28 21:37:10 10.222.0.10-2 DRIVER[bcmDPC]: trident.c(3205) 12863 %% L2X entry id:12803 data:0x00000000 0x20000000 0x00000000 0x00000000 . <187> Dec 28 21:37:10 10.222.0.10-2 DRIVER[bcmDPC]: trident.c(3115) 12862 %% unit 0 L2_ENTRY_ONLY parity hardware inconsistency <187> Dec 28 21:37:10 10.222.0.10-2 DRIVER[bcmL2X.0]: mem.c(4979) 12861 %% L2_ENTRY.ipipe0 failed(NAK) The error has not reappeared and, as far as I can tell, everything is working properly. This is a stack of 2 N4032 switches. Neither the switch nor the stack has been rebooted. I have seen some parity errors on this forum, but not the one I got. Does anybody know what it means, or can anyone point me to a manual where these errors are listed? I checked the N4032 documentation but did not find any reference to parity errors. Maybe there is nothing to do, or maybe I should just reboot or update the firmware. I appreciate any input. Thank you | N4032 - Switch parity error |
I found out the root cause (as far as I could take it): no ping propagation through a NIC used by the virtual switch, in either direction. David L. | Dell T310, iDRAC 6 Express, firmware 2.91 (Build 02). I'm unable to ping 1 particular fixed IP address on the same LAN as the controller. I am able to ping that IP address from other PCs and servers. I'm able to ping other fixed and DHCP-assigned IP addresses on the LAN from the iDRAC. Any ideas as to the possible cause? David L. | iDRAC 6 unable to ping 1 IP |
I recommend you take a look at the Pair States section in the SRDF CLI product guide: this will help you understand the R1, link and R2 states with the various SRDF operations. A RESTORE operation makes the data on the R2 device the master, and it will overwrite the R1 data. An ESTABLISH is the opposite: R1 is the master. Perhaps you can familiarise yourself with all of the available SRDF operations, as it sounds like an SRDF SWAP may be useful to you if you want to write at each site from time to time. SWAP will make the DR site the R1, so you will not lose DR protection. | Hi, between two sites we have 2 SANs with SRDF. Can vol1 be in RW mode and vol2 in RO mode? vol1 is the SRDF member at the first site and vol2 at the other site. Can vol1 and vol2 both be in RW mode? | srdf |
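The operations mentioned above can be sketched with SYMCLI; "mydg" is a placeholder device group name, and note that some operations (swap in particular) require the pair to be in a suitable suspended/split state first, so check the pair state and the SRDF CLI guide before running them:

```shell
# Show the current pair state (R1 state, link state, R2 state):
symrdf -g mydg query

# ESTABLISH: R1 is the master; copies R1 -> R2 (resumes normal protection):
symrdf -g mydg establish

# RESTORE: R2 is the master; copies R2 -> R1 and OVERWRITES the R1 data:
symrdf -g mydg restore

# SWAP: exchange R1/R2 personalities so the DR side becomes the R1,
# allowing writes there without losing DR protection:
symrdf -g mydg swap
```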
Hello, Yes, that procedure sounds correct. If a rebuild does not initiate automatically, then just set the replacement drive as a hot spare so it is pulled into the virtual disk. [URL] Thanks | I have a PowerEdge R310 with a PERC H700, with a 3-drive RAID 5 and Physical Disk 0:1 failed. I thought it had hot-swap, but it doesn't. My question is about the procedure for replacing the drive. Since it's not hot-swap, I figure I just shut down the server, replace the bad drive with a good one, and reboot. Then go into OpenManage and make sure it's rebuilding. Is that all, or do I need to initiate the rebuild? I can't find any docs on the subject. Thanks, | Replacing bad hard drive in R310 |
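If the rebuild does not start on its own, the hot-spare assignment can be done from the OMSA command line as well as the GUI; the controller and disk IDs below are examples, so confirm yours with the omreport command first:

```shell
# List physical disks to confirm the ID of the replacement drive
# (controller 0 is an assumption -- check "omreport storage controller"):
omreport storage pdisk controller=0

# Assign the new drive as a global hot spare so the degraded
# virtual disk pulls it in and starts rebuilding:
omconfig storage pdisk action=assignglobalhotspare controller=0 pdisk=0:0:1 assign=yes
```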
I found a solution: this setting can be turned off in the BIOS, so even with the fan errors it just boots right into the OS without any key press. Thanks for the help! | Hey, I have a T610 with a single 870W PSU where the fans always run at max speed. Is there a way to turn them down? | Max fan speed on T610 |
No, the PERC 6 does not support OCE in this context. There are two forms of online capacity expansion: (1) expanding a virtual disk into the free space created by replacing disks with larger ones, and (2) adding disks to the disk group. The PERC 6 only supports #2. On controllers that support #1, the controller is basically performing a retag: it rewrites the virtual disk size parameter. Changing RAID levels is RAID level migration. | Hi folks, and thanks in advance. The other posts I have searched out here have been helpful but have not exactly matched my issue, so here I am. :-) 2950 with a 6/i controller. The 6/i is responsible for 6 x 1TB SATA drives, with a single virtual disk for data storage (not booting an OS). Windows 2012 R2 server on the system, OpenManage admin version 8.5.0, BIOS as up to date as it should be. In an effort to expand the capacity of the system within its physical chassis, I have, one by one, replaced the 1TB drives with new 2TB drives and let the RAID 5 rebuild each time. That has happened successfully, and the virtual drive shows a green light. All 6 of the new drives are identical, right down to the revision number. All 6 show a total capacity of 1862.5GB, used RAID space of 931.0GB, and available capacity of 931.5GB. (The last one replaced showed zero capacity available until I rebooted; now it's fine too.) If I choose Reconfigure on the virtual disk, it shows me all of the drives already selected, and it's grayed out. If I hit Continue, the only option it presents me with is to create a RAID 0 with a max capacity of 5,586GB (which doesn't even really add up, although that is half of 11TB, not far off the 12TB total physical capacity present). I understood that I should be able to extend my VD by this approach, but I am not seeing any options in that regard... Do I need to go into the BIOS for this? Or is there something I am doing incorrectly? Thanks much, in advance! Andrew | Extend single RAID 5 on a PERC 6/i and 2950 |
Hello I suggest removing the hardware that you just added. If this issue started after attaching drives to the controller then disconnect those drives. One of the drives may be faulty or incompatible. Once you correct the issue of the controller being unresponsive you will still likely need to deal with a foreign configuration. You can find information about foreign configurations in the controller manual. [URL] Thanks | Heey, i have a T610 with a H700 raid controller (FW= 12.10.7-0001) i just swaped some HW in it, and booted into OS, then i wanted to add 2 new hard drives, for some reason it failed to add one of them in the configuration, so i just wanted to go on without it, but every time i start the server i get this "Foreign configuration(s) found on adapter. Press any key to continue, or ’C’ to load the configuration utility or ’F’ to import foreign configuration(s) and continue." i can't get past this, no mather what i press before or after, the only thing that works here is CTRL+ALT+DELETE for a reboot, and then it just gets stuck here again, can anyone help me out of this problem? | H700 error and freeze |
Hello Vijay, that depends on the type of RAID that you will be using as well as the type of data. In most cases that I have seen, customers create two VDs instead of one large one. The biggest reason is the rebuild time: with an 18TB VD the rebuild time will be long. Please let us know if you have any other questions. | Hello, I have MD 3220i with 24 sas drive with raid 6 setup. Now i have purchasednew Dell MS 1220 expansion san with 24 sas drive and attached with MD 3220i.So now can you please guide me for setup to get best IO permanence .Spec.HDD Detail : 6 Gbps, 10k, space per disk 1.2 tbDisk pools:1Virtual Disks on Disk Pools: 2Disk groups:0Access virtual disks:1 Thanks, Vijiay | Best practice for expand MD 1220 with MD 3220i San
Hi Patrik, Are you following the same process as described here? <ADMIN NOTE: Broken link has been removed from this post by Dell> If so, I would try restarting the iDRAC to see if that helps. If not, you might try bringing the iDRAC back to something like 2.50.50.50 to see if it is an issue that only seems to affect 2.60.60.60. | When generating a CSR from the webgui, it just generates an empty textfile. When running the command racadm -r -u -p sslcsrgen -g -f cert.txt , the contents of cert.txt is "ERROR: Unable to read CSR.". I have set all fields accordingly, and running racadm -r -u -p getconfig -g cfgracsecurity lists the fields correctly (they only have lower case letters, spaces and a dash ("-") in them, the longest field is 26 characters. The keysize is 2,048 bits. Server is a PowerEdge R720xd, IDRAC version is 7, firmware version is 2.61.60.60 (latest). All help and ideas greatly appreciated. /Patrik | Unable to generate CSR from IDRAC on PowerEdge R720Xd |
Hi Brian, You are correct, you'll need to create another RAID volume for the added drives. Once that is created, Windows should then see them in Disk Management and you'll be able to format the virtual drive for use. In order for Windows to see those drives directly, they'd have to be set to non-RAID, which doesn't seem to be what you're wanting to do. The easiest way to create the new array is to use OpenManage Server Administrator, if you have it installed. This video is a bit dated, but should still be accurate: [URL] | Hi All, I already have a Raid 1 configuration on my first 2 hard drives. now I wanted to add 2 more HDD as a separate Raid 1 configuration as well. I tried plugging the 2 Hard Drives but it was not seen by window.s Should I create another Virtual Disk ? HDD 0 and HDD 1 uses a SAS Drive HDD 2 and HDD 3 uses an SSD Drive HDD 2 and HDD 3 is suppose to be used on Data while HDD 0 and HDD 1 is for our Applications[IMAGE] Looking forward to your responses. Thank you! | Adding an Additional Hard Drive with Raid 1 Configuration?
Looks like I answered my own question. Using this method works.

add-type @"
using System.Net;
using System.Security.Cryptography.X509Certificates;
public class TrustAllCertsPolicy : ICertificatePolicy {
    public bool CheckValidationResult(
        ServicePoint srvPoint, X509Certificate certificate,
        WebRequest request, int certificateProblem) {
        return true;
    }
}
"@
[System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy

$h = "[URL]"
$usr = "admin"
$pwd = "password"
$request = $h + "/fm/systems"
$response = Invoke-WebRequest -Uri $request -Method Get -SessionVariable cas
$form = $response.Forms[0]
$form.fields.username = $usr
$form.fields.password = $pwd
$loginurl = $h + $form.action
$response = Invoke-WebRequest -Uri $loginurl -WebSession $cas -Body $form.fields -Method Post
$response = Invoke-RestMethod -Uri $request -WebSession $cas -Method Get -ContentType "application/xml" | Hi all, I am trying to use powershell to query Vision via REST API. Particularly, I am trying to export XML files via https:// :8443/fm/systems and then drill down my queries from there. I understand that in order to authenticate you need to do the following: 1. Get a TGT from https:// :8443/cas/v1/tickets 2. Get a ST from the TGT from https:// :8443/cas/v1/tickets 3. Get a jsessionid from https:// :8443/fm/auth 4. Finally connect to https:// :8443/fm/systems with the session cookies and cookie header. However, I am not able to get step 4 to work, the core just won't accept the sessionid a valid authentication. Does anyone have a working script using powershell on VCE Vision core that works? The programmer's guide is really bad and it doesn't explain the authentication scheme at all. I have to decode that from the java script examples given in the guide.
BTW, my version is 3.5 (yes, its old, I am waiting for upgrade) Here is how my script works #ignore cert errors add-type @" using System.Net; using System.Security.Cryptography.X509Certificates; public class TrustAllCertsPolicy : ICertificatePolicy { public bool CheckValidationResult( ServicePoint srvPoint, X509Certificate certificate, WebRequest request, int certificateProblem) { return true; } } "@ [System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy #Get TGT $usr = "admin"$pwd ="password" $body = @{ username = $usr; password= $pwd;} $ticketsrv = "[URL]" $r = Invoke-WebRequest -Uri $ticketsrv -Method Post -ContentType "text/plain" -body $body$r -match "TGT-.*vce.com" $tgt = $Matches[0] #get ST from TGT $authsrv = "[URL]"$tgtsrv = $ticketsrv + "/" + $tgt $body2 = @{ service=$authsrv ;}$head = @{"Accept"="application/xml"} $st = Invoke-WebRequest -Uri $tgtsrv -Method Post -ContentType "text/plain" -body $body2 #now grab a jsessionid from the ST $stsrv = $authsrv + "/?" + $st.content $session = Invoke-WebRequest -Uri $stsrv -Method Get -SessionVariable cas -header $head #now login to fm/systems with session cookie to grab the XML file $jsession = $cas.cookies.getcookies($authsrv).value$js = "JSESSIONID=" + $jsession$head = @{"Accept"="application/xml"; "Cookie"=$js} $request = "[URL]" #this don't work, it won't authenticate $response = Invoke-WebRequest -Uri $Request -Method Get -ContentType "application/xml" -Headers $head -websession $cas | How to query VCE Vision /fm/systems using Powershell via REST API? |
.exe files are not what Windows is looking for. These .exe packages need to be extracted so that you have a few files, including .inf and .sys filetypes. You can do this by running the .exe on another machine, or by changing the filetype to .zip, navigating to the payload folder within, and copying the contents to your USB drive. | I have been trying to install windows server 2016 on this R900 and i get as far as where you select what drive you want to install the OS on to. Problem is it will not show any of the drives that are in the machine. Not the HDD in the front cage, not the SSD in the front cage nore the USB drive in the internal socket, I really want to install the OS on the internal USB. It keeps prompting me to install the drivers so it will be able to see them but after spending several hours going through many Dell and Intel drivers, RAID, SAS, non-SAS drivers i am still no where closer. Does anyone know the proper driver I need for this? and if so where can i find it? Thank You | PowerEdge R900 will not recognize HDD, SSD, or internal USB |
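To illustrate the rename-to-.zip route described above, here is a minimal Python sketch. The `payload/` folder name is an assumption; Dell update packages can lay out their contents differently, so list the archive first if unsure:

```python
import zipfile

def extract_payload(package, dest, folder="payload/"):
    """Extract the driver files (.inf/.sys) from a Dell update package
    that has been renamed from .exe to .zip.

    `folder` is an assumption -- inspect the archive contents first.
    """
    with zipfile.ZipFile(package) as zf:
        members = [m for m in zf.namelist() if m.startswith(folder)]
        zf.extractall(dest, members=members)
    return members
```

The extracted files can then be copied onto the USB drive that Windows Setup scans when you click "Load driver".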
Hi RRR, By powershell do you mean UEMCLI? The command you are looking for might be: uemcli -no /stor/prov/luns/lun show -detail Check the line that reads "Host LUN IDs" and the line above that. You can filter also for what you need only. Example: uemcli -no /stor/prov/luns/lun show -filter "ID,Name,LUN access hosts,Host LUN IDs" Sample output: ID = sv_xx Name = Test_LUN LUN access hosts = Host_13, Host_14, Host_9, Host_10 Host LUN IDs = 7, 7, 7, 7 So from here you get what hosts can see a LUN, along with the Host LUN ID they are mapped with. If you need to see it the other way around (from the host side, not LUN), then try the host command: uemcli -no /remote/host show -detail The filter option can also be used here. All these outputs are based on Unity OE 4.4.1, so if you are running dated versions, you may not have some options. Hope this helps. Andre @ Dell EMC If this answered your question, please remember to mark this thread as resolved/answered, so it can help other users. | I'm trying to retrieve a list of LUNs attached to hosts on my Unitys and I need to know the host LUN ids, but I'm getting a lot of information (name, wwn, size), but no host LUN id. Does anyone know how to retrieve this from a Unity system using Powershell? | How do I retrieve host LUN ids using Powershell? |
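As a follow-up to the filter example above: the `Key = value` output from uemcli is easy to consume from a script. A hedged sketch in Python rather than PowerShell (field names are taken from the sample output above; actually invoking uemcli, e.g. via subprocess, is left out):

```python
def parse_uemcli_luns(output):
    """Split 'uemcli /stor/prov/luns/lun show -filter ...' output into
    one dict per LUN; each 'ID = ...' line starts a new record."""
    luns, current = [], None
    for line in output.splitlines():
        key, sep, value = line.partition('=')
        if not sep:
            continue
        key, value = key.strip(), value.strip()
        if key == 'ID':
            current = {}
            luns.append(current)
        if current is not None:
            current[key] = value
    return luns

sample = """ID = sv_xx
Name = Test_LUN
LUN access hosts = Host_13, Host_14, Host_9, Host_10
Host LUN IDs = 7, 7, 7, 7"""
print(parse_uemcli_luns(sample)[0]['Host LUN IDs'])  # prints: 7, 7, 7, 7
```

The same split-on-`=` idea carries over to PowerShell if you prefer to stay there.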
Hi castleknock You can use UEMCLI (or the RestAPI equivalent). Have a look at the "Unisphere Command Line Interface User Guide" for Unity. Page 43 (Perform a system health check). For the RestAPI guides, try this: https:// /apidocs/index.html Thanks Andre @ Dell EMC If this answered your question, please remember to mark this thread as resolved/answered, so it can help other users. | Is there an equivalent commend to the VNX nas_checkup health check ? and if so, can it also be invoked from the REST API ( so a automated check can be created ) | unity equivalent to nas_checkup ? |
Hi [NAME] Apologies for the late reply. I don't think so at the moment. Let me confirm this with the CloudIQ team and I'll let you know. I assume you mean, all users access the CloudIQ portal using different accounts, correct? Thanks | scenario Team of storage engineers with on unity(s) system want to use group metric dashboards so they are all looking at the same things. Can I create a metric dashboard and share to team or do the dashboard need to be replicated for each engineer ? | can cloud iq metric dashboards be shared ? |
for reference. In order to query metrics, you need to enable it by invoking enable_perf_stats():

```python
import storops

# New unity system
unity = storops.UnitySystem('192.168.100.10', 'admin', 'password')

# Enable metric query
unity.enable_perf_stats()
```

Then, you can get the object and metrics: disk = unity.get_disk(name=xxx) You can refer to the snmp-agent project for further usage: [URL] | Hi, Apologies for question, but not sure who to ask. I see in [URL] That supported metrics metrics are exposed. Looking at the methods visible, I see; {"UnitySystem": {"avg_power": 310, "current_power": 313, "existed": true, "hash": 1480349, "health": {"UnityHealth": {"hash": 2129561}}, "id": "0", "internal_model": "DPE OB BDW 25DRV 256GB 14C", "is_auto_failback_enabled": false, "is_eula_accepted": true, "is_upgrade_complete": false, "mac_address": "08:00:1B:FF:B2:AD", "model": "Unity 550F", "name": "dubnas308-spa-mgmt", "platform": "Oberon_DualSP", "serial_number": "CKM00184801131"}} ['__class__', '__delattr__', '__dict__', '__doc__', '__eq__', '__format__', '__getattr__', '__getattribute__', '__getstate__', '__hash__', '__init__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_auto_balance_sp', '_cli', '_default_rsc_list_with_perf_stats', '_get_first_not_none_prop', '_get_name', '_get_parser', '_get_properties', '_get_property_from_raw', '_get_raw_resource', '_get_unity_rsc', '_get_value_by_key', '_id', '_is_updated', '_ntp_server', '_parse_raw', '_parsed_resource', '_preloaded_properties', '_self_cache_', '_self_cache_lock_', '_system_time', 'action', 'add_dns_server', 'add_metric_record', 'add_ntp_server', 'build_nested_properties_obj', 'clear_dns_server', 'clear_ntp_server', 'clz_name', 'create_cg', 'create_host', 'create_io_limit_policy', 'create_iscsi_portal', 'create_nas_server', 'create_pool', 'create_tenant', 'delete', 'disable_perf_stats', 'disable_persist_perf_stats', 'dns_server', 'doc',
'enable_perf_stats', 'enable_persist_perf_stats', 'existed', 'get', 'get_battery', 'get_capability_profile', 'get_cg', 'get_cifs_server', 'get_cifs_share', 'get_dae', 'get_dict_repr', 'get_disk', 'get_disk_group', 'get_dns_server', 'get_doc', 'get_dpe', 'get_ethernet_port', 'get_fan', 'get_fc_port', 'get_feature', 'get_file_interface', 'get_file_port', 'get_filesystem', 'get_host', 'get_id', 'get_index', 'get_initiator', 'get_io_limit_policy', 'get_io_module', 'get_ip_port', 'get_iscsi_node', 'get_iscsi_portal', 'get_lcc', 'get_license', 'get_link_aggregation', 'get_lun', 'get_memory_module', 'get_metric_query_result', 'get_metric_timestamp', 'get_metric_value', 'get_mgmt_interface', 'get_nas_server', 'get_nested_properties', 'get_nfs_server', 'get_nfs_share', 'get_pool', 'get_power_supply', 'get_preloaded_prop_keys', 'get_property_label', 'get_resource_class', 'get_sas_port', 'get_snap', 'get_sp', 'get_ssc', 'get_ssd', 'get_system_capacity', 'get_tenant', 'get_tenant_use_vlan', 'info', 'is_perf_stats_enabled', 'is_perf_stats_persisted', 'json', 'metric_names', 'modify', 'ntp_server', 'parse', 'parse_all', 'parsed_resource', 'property_names', 'remove_dns_server', 'remove_ntp_server', 'resource_class', 'set_cli', 'set_preloaded_properties', 'set_system_time', 'shadow_copy', 'singleton_id', 'system_time', 'system_version', 'update', 'update_name_if_exists', 'upload_license', 'verify'] Where do I get for example utilization as a call to 'get_sp’ give me the static data; {"UnityStorageProcessorList": [{"UnityStorageProcessor": {"bios_firmware_revision": "53.51", "emc_part_number": "110-297-014C-04", "emc_serial_number": "CE8HH184100349", "existed": true, "hash": 1814369, "health": {"UnityHealth": {"hash": 2129621}}, "id": "spa", "is_rescue_mode": false, "manufacturer": "", "memory_size": 131072, "model": "ASSY OB SP BDW 14C 2.0G 128G STM", "name": "SP A", "needs_replacement": false, "parent_dpe": {"UnityDpe": {"hash": 2129513, "id": "dpe"}}, "post_firmware_revision": 
"31.10", "sas_expander_version": "2.26.1", "slot_number": 0, "vendor_part_number": "", "vendor_serial_number": ""}}, {"UnityStorageProcessor": {"bios_firmware_revision": "53.51", "emc_part_number": "110-297-014C-04", "emc_serial_number": "CE8HH184100370", "existed": true, "hash": 1480429, "health": {"UnityHealth": {"hash": 2283289}}, "id": "spb", "is_rescue_mode": false, "manufacturer": "", "memory_size": 0, "model": "ASSY OB SP BDW 14C 2.0G 128G STM", "name": "SP B", "needs_replacement": false, "parent_dpe": {"UnityDpe": {"hash": 2283305, "id": "dpe"}}, "post_firmware_revision": "31.10", "sas_expander_version": "2.26.1", "slot_number": 1, "vendor_part_number": "", "vendor_serial_number": ""}}]} but I assume its some variation of 'get_metric_query_result' to get performance metrics, but I an unsure of syntax of call Any help appreciated; | unity and storops and metrics |
Hello I would not create multiple virtual disks across the drives, if that is what you are referring to. Slicing should only be done if necessary. I'm not sure if storage pools will be possible; I think storage pools are part of Storage Spaces Direct, but I could be wrong on that. Storage Spaces Direct will only work with an HBA, so if S2D is required for pools then I don't think that option will be available for the H710. Also, RAID 6 requires a minimum of four disks. If you only have three disks then you could create a RAID 5. You can find storage controller manuals on this page: [URL] Thanks | Hi, Silly Noob question I'm afraid :manembarrassed:. so sorry in advance. I have windows server 2016, and h710p pecr with 8 x 3tb drives installed. This is my server for a lot of media and photos for which I have backups, so my questions are this. Should I just use OpenManage to make a Raid 6 and also to make virtual disks? Then use them as a partition under windows? or Make Raid 6 in OpenManage then use windows server to see the drive as a storage pool, so as I can then thin provision as I like? and if i do should I set the allocation size under windows the same as dell OpenManage raid strip size. also should I make one big raid 6 array or lots of small ones? underOpenManage? Been looking for best practices on this but can't find anything. | h710p windows server 2016
Hello, Yes you can enable thin provisioning at any time w/o damage. The host is not aware of the change. Re: KB. That article will help regardless of virus or malware scanner. Since ANY read invokes a small write. DBs tend to do a lot of reads. So the SQL engine is going to be doing those reads all the time. Re: 64K NTFS cluster size. That is a long standing recommendation that applies across storage vendors. The 64K stripe size of the EQL array means that if you send less than 64K the read or write will be misaligned from the RAID stripe. On writes this means more steps to get the write done compared to one that aligns on the 64K boundary. Other storage vendors may use a different size. Unfortunately, you can only set that when you format the volume. So you would have to create a new volume format it, then migrate your data. However, if you are not expressing a large load on the array the difference won't be significant. Regards, Don | Hello, I want to enable the snapshot of a lun, but I have some doubts: Q1 - Where are shanpshot stored? In the same volume of the LUN? Or in another area? Q2 - The snapshot reserve (%) is the space provided for snapshot. Does this mean that if I want a 1TB LUN, do I need to create a larger LUN to accommodate this percentage? Does not this percentage interfere with the LUN's useful area space? Q3 - How do I estimate the space required to create a snapshot? If I select to keep two snapshots, do I have to double the amount of space consumed and so on? Q4 - What would these options be: - Make writable snapshots? - Set snapshot online? - Make snapshot read-write? Q5 - If I want to go back to snapshot, can I restore it to hot? Is there any procedure to run that I need to be aware of? Q6 - When I restore the LUN, but before setting it to offline, it appeared another snapshot in the list. Why did this happen? Thank you. | Snapshot Doubts |
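The 64K point above is plain arithmetic: an I/O whose offset or length is not a multiple of the array stripe spans a partial stripe, which costs extra steps on writes. An illustrative check (the 64K stripe size is the EqualLogic figure quoted above; other vendors differ):

```python
STRIPE = 64 * 1024  # EqualLogic RAID stripe size, in bytes

def is_aligned(offset, length, stripe=STRIPE):
    """True when an I/O starts and ends on stripe boundaries."""
    return offset % stripe == 0 and length % stripe == 0

print(is_aligned(0, 64 * 1024))  # True: a 64K NTFS cluster lines up
print(is_aligned(0, 4096))       # False: the default 4K cluster does not
```

This is why the recommendation is to format the volume with a 64K NTFS cluster size up front; it cannot be changed without reformatting.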
From what I've found, it doesn't appear that there is a way to pull it by SNMP, unfortunately. | Hello! Does anyone know how to get the value of the optical signal level on the interface on the switch Dell Force 10 S2410-01-GE-24P using SNMP? At the moment unit is not responding to telnet commands and i dont know why, but it responds to snmpwalk(get) commands of linux servers. | Obtaining optical signal level on S2410-01-GE-24P |
I don't think so. You can ask formally via a service request, and maybe open a product enhancement request. | Hello, I have a customer with a Unity who wants to backup his fileservers on Unity using 2-way NDMP over SAN. Tape Drives are connected to two different fabrics. On Unity onboard CNA ports are installed with FibreChannel SFPs. Additional FibreChannel moduls are not installed. Customer does use sync replication on Unity. From Unity NDMP whitepaper I got information that NDMP 2way backup cannot performed over sync replication ports. Per default Unity spa_fc4, spb_fc4 are configured as replication ports. As we do not get NDMP connections to tape drives that are connected over fabric to these ports I guess that spa_fc4, spb_fc4 do not accept NDMP traffic even if sync replication is not configured. Is that right? If yes, is it possible somehow to disable sync replication feature so that these ports can be used for NDMP backup? Thanks. Robert | 2-way NDMP over Unity CNA ports
I don't think this type of problem can be troubleshot on the community forum. I would suggest opening a service request. | Hello, I have a customer with two Unity 400 systems. Both are configured for file and block access. Filesystems and block luns reside on the same storage pool. Each Unity has one hybrid storage pool. File systems are asynchronous replicated between Unity systems. Replication interval is 10 min. Source Filesystems on Unity are configured with snapshots. Unity code is 4.4.1. Sometimes the following replication Warning occurs: Alert Text: Replication session rep_sess_res_9_res_14_CKM00123456789_CKM00987654321 can not update destination because destination pool is full. About one minute later a notice is displayed that replication is working again properly. File systems are smaller than 8TB with about 75% free space in each filesystem. Storage Pools on Unity (both source and target) have more than 50TB free space in pool. What is wrong here? Robert | Filesystem asynch replication issue: Destination pool full
Sure we do, same as we did before on VNX and Celerra; you can see that in the directory service config in Unisphere. I would suggest looking at the Unity multiprotocol PDF manual and the Unity NAS white paper. | Does unity support uid field being populated in AD and not requiring NIS Background NIS map files are generated from AD as an interirm step to remove NIS dependency. Want to use unity using AD only for uid resolution rather than creating another NIS dependency. | unity unix access post NIS world
Hi aswokei, I'm not sure I understand your question. Please elaborate/clarify if you could. If you mean GUI management: then yes, only via the mgmt ports. You can also monitor / manage a Unity system with other tools remotely such as UEMCLI (Unity CLI), RestAPI, Unisphere Central, and a few more tools. On the service ports, IPMI is also available for access but this is something else (search the Unity support pages for IPMI tool). In any case: the other network interfaces (for iSCSI, NAS, etc) cannot be used to manage the system. You also say: "The Unity has designated management ports on some of the SPs". This is not entirely correct: not "some", but all. Unity has two SPs, and both have a management interface; however, only the primary SP provides access at any given time. Hope this helps Andre @ Dell EMC | The Unity has designated management ports on some of the SPs, but I was wondering if it's possible to manage the Unity inband--using the same ports that are used for storage. Is that possible? What are the management options? Does it have to out of band? I couldn't find anything useful in the manuals. Thank you! | Are the designated management ports the only way you can manage the Unity?
You must use a controller if using a backplane - you cannot plug the backplane into the onboard SATA ports. You would need to find a way to adapt the power to provide for the drives. With your setup, I would replace the PERC 6 with an H200 and run the drives "unconfigured" (non-RAID), to essentially pass them through as individual drives. | Hi My home server is T410, I want to replace the 2x 1TB HDD with SSDs I am interested in model: [URL] However, this is a 1TB SSD, and it is desktop model, will it work on my server? and if I dont want to achieve RAID, do still I need to buy RAID Controller to connect these disks into my motherboard? the RAID Controller currently is PERC6i, and I believe it is 3Gb/s while the SSD disks I want are 6Gb/s thanks, | Can I used normal SSDs in Dell T410 |
The vWitness is part of either Solutions Enabler or Unisphere for VMAX/PowerMax vApp So yes, you can download the latest SE and it will include it. | Hello, Can someone provide insight as to where to download the SRDF vWitness OVA from? I've read the other thread on this forum that calls out an SE OVA that is an older version of SE and all the docs simply state to download it from online support but there is nothing that comes up in any searches or even a product related to SRDF/Metro in the landing page of support.emc.com so I'm not really sure where to download this from. Is it simply the latest SE Virtual Appliance OVA? Or is it a seperate type of OVA download? Thanks for any assistance! -Keith | SRDF vWitness Download |
Thanks for the manual reference, being able to see what you saw is very helpful. The module in question is a pass-through module for network traffic. It doesn't have any logic on it. I would try two other things. First, perform a power drain on the server. Make sure to unplug the power from the server, then hold the power button down for 30 seconds. The idea is to perform a hard restart of the iDRAC. That message leads me to believe that something is doing a soft reset of it, and we can try to clear the iDRAC by restarting it, and disabling the front panel to isolate that. The second thing you may try is flashing the iDRAC firmware. iDRAC 2.61.60: [URL] | There are constant reloads of the module BMС. In the logs, the problem looks like this "Message = The iDRAC firmware was rebooted with the following reason: ID Button." Tried to turn off the server for a while, by completely shutting down and replacing the module with another (good). These actions did not solve the problem - the same error messages appear on the same server (despite of another BMC module). Maybe someone faced a similar problem and can help in solving it? | PowerEdge r430 Problem with BMC module |
Hi castleknock, This is not a Dell EMC tool, so I can't really help myself. But maybe other forum users may know. I would suggest, however, that you contact the author of this tool/utility for troubleshooting steps (author email on the link you posted here). Andre @ Dell EMC | I have installed [URL] and prerequesite uemcli but cannot see a set of config steps for check_mk. Anyone been down this path and care to share ? | monitoring in check_mk steps ? |
I've done some digging around and came to found out that you can still get to the bootable files by going to [URL] >. For example, for the T410, it'll be [URL] (I was looking for the ISOs for a R610 and looking at the wayback machine gave me some insights of what to look for). | generation 11 has been removed from the link bellow does any body have a copy of the bootable media for dell T410? [URL] | PowerEdge Server Generation 11 T410 bootable media / ISO |
Hi cberger[NAME].lu, The new size is reflected on both sides now? vSphere and Unisphere? I have seen this before yes. Sometimes the datastore needs to be resized manually on the vSphere side after a rescan. Have a look at these KBs: [URL] [URL] Andre @ Dell EMC | Go0od morning y'all. Yesterday, I got this error message while extending a datastore on a Unity : [IMAGE] The new size hasn't been displayed on the Unity web GUI, but on the vCenter. I checked this morning again, and it's fine and I don't have any alarm.Did you already experience this behaviour ? | "Manage host file system using block" error while extending a datastore |
Hi, Try the password reset steps. Page 822 [URL] | Dear, My Dell S4128F-ON switch can't log in with admin password. I changed username and password, used that command "username admin password $0$ role sysadmin" . Next log in with new password "admin123" and "$0$ " , that's not work. It showed incorrect password. But "Linux admin" can log in. How can I reset admin password. | Dell S4128F-ON Network Switch Admin Password login Fail |
Hello duy.ipad, you can’t shrink a raid 10 DM. You will need to create a new volume then copy the data to that volume and then destroy the old volume. Please let us know if you have any other questions. | Hi, [IMAGE] - We are using SC4020. - We've run out of SSD tier, so we've changed the profile of volume which size is 10-TB from "Recommend" to our custom. Our custom profile use RAID-6 on both tier SSD and HDD.- The problem is the storage does not shrink the RAID-10-DM allocated space, so all our data is now replicate to HDD tier because of RAID-6 level in SSD tier has not enough free space. - Can we shrink RAID-10-DM allocated space? Thank you very much. | Can we shrink RAID-10-DM allocated disk space? |
You miss one step and that will make you look like a dunce! It was not added to the baseline, therefore it did not know it was outdated. Happy Monday! | I have 3 ESXi hosts that are outdated yet openmamage only shows two of them outdated, I've reset the iDRAC of the one server not showing as needing updates as well as rebooted the openmanage appliance, I also deleted the server from openmanage and re added it with no change, why is this happening? | Openmanage Enterprise |
Hi, There are certain steps and precautions to take before and during installing the port module. For the cisco procedure you can go to: [URL] Check from page I-38 onwards. For DELL EMC procedure contact your local team, and they will assist with the replacement or inserting a new switching module. Best Regards, Ed | Hello All, What steps to follow to diligently add 23 port line card to empty slot in MDS 9509? | How to Add 32 port line card to Cisco MDS 9509? |
Hi, There are certain steps and precautions to take before and during installing the port module. For the cisco procedure you can go to: [URL] Check from page I-38 onwards. For DELL EMC procedure contact your local team, and they will assist with the replacement or inserting a new switching module. Best Regards, Ed | Hello All, What steps to follow to diligently Add 32 port line card to empty slot in 9509 MDS switch? | How to Add 32 port Line card to Cisco MDS 9509 |
I could solve the problem by using uemcli and deleting the old X.509 VASA certificate. | Hi, Last year in first registeration of VASA between vcenter server 6 and emc unity 300 a certificate has been created and assigned for VASA use on EMC .. [URL] Now it's expired and so the registration is broken and failed. changing Unity IP address or hostname renews the management service certificate renewed but no effect on VASA service address. It's still using the old certificate And I cannot find a way to renew the certificate .. stil the old one issued by EMC and expired .. any help would be greatly appreciated. Thanks | VASA Povider certificate expired - Unity 300 inaccessible
Hi, Suggest to open a SR with support, to get the issue looked at. They might be able to escalate it further if the environment is supported / compatible etc. That would give you the best solution. The CMCNE guide 14.4.1 on page 710 and 711, shows the steps how to configure from the CMCNE side of things. Regards, Ed | Hi, I'm trying to use a TACACS solution to authenticate users on the CMCNE tool. When I test the connection with the ACS, I have an error saying that the authorization is failed. On the ACS side, I only see that the authentication is successful. Does anybody know if there are specific configuration to do on the ACS side? Shell profile for example, which kind of attributes? Regards, Guillermo | CMCNE to integrate with CISCO ACS5.2 |
Hello again. I've already found the answer: Access-based enumeration (ABE) is a Microsoft Windows (SMB protocol) feature which allows the users to view only the files and folders to which they have read access when browsing content on the file server. ... ABE controls the user's view of shared folders on mounted file system shares based on the user permissions. Therefore, I must activate ABE on the Isilon with the command: isi smb settings global modify --access-based-share-enum true. Thank you. | Hello Isilon Members: Please, help me to see what´s going up here .. We´re migrating several Windows File Servers to Isilon. So far, everything is fine. But, after that, a share and folders were migrated, one user, accesing to the share, can see all the Folders in there using windows explorer and checking the same user accesing the share in the file server, can only see the folders , which he have access in windows explore. Below, an image, which shows the scenario: [IMAGE] Please, note, that the image to the left, is how the customer can see all the folders, accesing to the share in the ISILON.He only, can see the folders, he doesn´t have access all them. To the right side, is the same user accesing the share in the Windows FileServer, Please, note that only can see the folders, which he have access. Can you say me, what could be the reason for that ? ... Thank you. | User can see folders with not permission acces in a Isilon smb share
You do not need to configure a resource if you use the -y option. Resources are necessary for automatic operation. However, in your case, you specify the retention along with the command anyway. ... -y "7days" ... should do the job as well as ... -y "12/20/2018" ... | Hello together, I need to clone some save sets from the CLI. I am running NetWorker v18.1 on a Windows Server 2012 R2. The cloning target is an LTO8 IBM tape. The source would be a Data Domain. I use the syntax: nsrclone -v -s -y ‘7 Days’ -b CLONE -S -f saveset_clone_list_01.txt I get the message: Invalid retention time: '7 Days' Beforehand I created a Time Policy which is called '7 Days'. When I use the options "-y" and "-w" I get the message: Invalid browse retention time: '7 Days' Since NetWorker v9 there is only a retention policy, which is no longer split into retention and browse policies. I also renamed the Time Policy to something without numbers and a space in between. That also failed. How can I give a manual nsrclone command a different retention time? Cheers, Beacon | Set retention for nsrclone from CLI in v18.1
Hi Nghia, Are you essentially asking if you can set up FC hosts without an FC switch? So direct attach? If so, then yes you can, but it's not best practice. Some hosts may even require a special request (RPQ) in order to be supported. Have a look at the host connectivity guide for Unity for more info. Thanks Andre @ Dell EMC | Dear Everyone, I have a question about the Dell EMC Unity 300 embedded CNA ports. If I buy FC transceivers for the 4 embedded CNA ports, can I connect directly to the FC HBA on a server, or do I need to use an FC switch? I have already read some documents. They just mention that if I buy the FC I/O module with 4 FC ports, I can connect directly to the FC HBA. Please help me answer my question. Thanks & Best Regards, Nghia To Duc | Question About Dell EMC Unity 300 Embedded CNA Port
Hello, I re-read your reply, and I believe I found the issue. In your subject, you mention using the H330, but the picture shows the S130. Enabling RAID mode in the BIOS activates the PERC S130, which may be interfering with the H330. What I would do is toggle RAID mode off and go back to AHCI mode. Then, depending on whether you're in UEFI or BIOS boot mode, watch for the Control + R option to access the H330 configuration utility, or go into F2 and then the Device Settings menu to select the H330, and you can check on your storage from within there. | I have a new Dell PowerEdge T130 with a PERC H330. I have 3 1TB SATA drives in it. The BIOS can see the disks on boot (see screenshot), and I have changed the BIOS to RAID, but when I enter RAID setup I can only see the CD/DVD drive and no disks. Any ideas? I believe we are using the latest FW.[IMAGE]RAID[IMAGE]BIOS[IMAGE]BIOS RAID | Dell PowerEdge T130 with Perc H330 cannot see disks
You've pretty much covered it and generally, yes, it's that easy. Pulling some of your comments in: It pulls one IP address per node added per pool. There are three assigned (int-a, int-b and failover), but it's only using one address from each of the three pools. If you want addresses assigned, then this is correct. You don't need addresses, though, if you don't want them on the network. If you're not assigning addresses, make sure you remove interfaces from any pools that are automatically configured via the pool rules. Power on each node and either use a serial connection or front display to select the option to join the existing cluster. Wait while OneFS on the new nodes is upgraded or downgraded to match the version of the cluster. You can also add them through the WebUI. Other things to watch for: Make sure you do NOT have a Node Firmware Package (NFP) installed. If it is installed and you try to add the node, it will fail. Drive Support Package (DSP) should be fine, but I've had issues a couple of times with that. If you want to go into advanced installation... The recommendation I make to the field support before installing a new node is to stand the node up as a single-node cluster (Infiniband and front-end network cables MUST be unplugged). Once the node is up, install NFP and DSP, update firmware for the node and drives, then reimage it to the base level the cluster is running, boot to the install wizard and plug in the network cables. That way, you don't have to go through the firmware update procedure if it's down-rev from current while the node is active in the cluster. | Having never added a node to an existing cluster, I thought I would check if my thinking is correct here. We have an existing 8-node cluster and are adding some more nodes. These are identical model and spec nodes to the existing ones, so no compatibility issues to worry about there.
But from what I can find it's pretty straightforward, almost too straightforward, hence asking the question. Pre-Implementation/Checks Internal IPs - int-a and int-b each have an IP range of 40 addresses, and I believe you only need 2 per node, so that's all good as we are not going up to 20 nodes. External IP range in subnets - edit these to ensure we have enough IPs, allocated as static or dynamic, so each node can have at least one IP allocated. Implementation Rack and cable up to the Infiniband switches and external network switches. Power on each node and either use a serial connection or the front display to select the option to join the existing cluster. Wait while OneFS on the new nodes is upgraded or downgraded to match the version of the cluster. Post Implementation If I look in the WebGUI, I should see the new nodes listed on the Dashboard after a couple of minutes if all has gone well? The nodes will join the node pools automatically as they are the same model and spec as the existing nodes, won't they? All I would need to do now is edit the IP pool in each external network subnet and add in this node, and that should be it, shouldn't it? Make sure to run either the Multiscan job or the Autobalance & Collect jobs to balance out the disk usage across the cluster. Questions Does that look right? Is it really that simple in theory or have I missed something obvious? And if you were adding multiple nodes to a cluster, I assume you can add them all in one go? So can I power one up, select join the cluster, power on the next, select join the cluster and repeat? If you wouldn't mind letting me know whether I've missed anything obvious, that would be appreciated. Thanks | Check Adding New Node Process
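A quick way to sanity-check the pre-implementation step above is to apply the rule from the answer: each node added consumes one address per pool it participates in (int-a, int-b and failover, plus any external pools with assigned addresses). The sketch below is illustrative only; the pool names and sizes are hypothetical, not taken from any real cluster.

```python
# Sketch: check whether configured IP pools can absorb new nodes.
# Rule applied (from the thread): one address per node, per pool.
# Pool names/sizes below are hypothetical examples.

def addresses_needed(new_nodes, pools):
    """Return {pool_name: shortfall} for pools lacking free addresses.

    `pools` maps a pool name to (pool_size, addresses_in_use).
    """
    shortfalls = {}
    for name, (size, in_use) in pools.items():
        free = size - in_use
        if free < new_nodes:
            shortfalls[name] = new_nodes - free
    return shortfalls

# 8 existing nodes, adding 4 more; internal pools sized for 40 addresses.
pools = {
    "int-a":    (40, 8),
    "int-b":    (40, 8),
    "failover": (40, 8),
    "ext-smb":  (10, 8),   # an external pool that is too small
}
print(addresses_needed(4, pools))  # → {'ext-smb': 2}
```

Any pool reported with a shortfall would need its range extended before the join, which matches the "edit these to ensure we have enough IPs" check in the question.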
Hello Based on your description, it does not sound like you are using an H330 HBA. If you have RAID 1 and RAID 5 virtual disks created on the H330 then it is the H330 RAID controller. We have an article about performance expectations on controllers without cache, especially when using parity. 20MB/s sustained writes with RAID 5, SATA spindle drives, and a controller without cache does not sound abnormal to me. The performance impact of the RAID 5 will be felt on the RAID 1 with SSDs because the controller has to pause all I/O to perform parity calculations once the queue is full. [URL] If you go to the R540 support page and select ESXi 6.5 in the OS drop-down there is a category called Enterprise Solutions. That has the custom images we provide. The important information section lists all of the VIBs that we change in the image. The VIB listed there is the one validated for the PERC in that OS version. [URL] Thanks | On a new install of ESXi 6.5 on a new PowerEdge R540 using an H330 HBA, sustained write speeds to an SSD volume in RAID 1 and a SATA volume in RAID 5 are approximately 20 MB/sec. Symptoms are very similar to those described in this post: [URL] Unfortunately, the only driver available is a native lsi_mr3. After disabling that driver using the ESXi shell, the volumes connected to the H330 are no longer shown. I tried re-enabling the driver, but each time I reboot the driver is disabled, so the volumes are lost. But that is a problem for another day. The big question is: What other options are available to improve performance under ESXi 6.5 to SSD and SATA volumes connected via the H330? Is there a vmklinux driver that could be substituted in place of the native driver? | Poor Write Performance using Dell R540 with H330 ESXi 6.5 |
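The slow sustained writes described in this thread follow from the classic parity write penalty: a random write to RAID 5 costs roughly four back-end I/Os (read data, read parity, write data, write parity), and a cache-less controller cannot hide that cost. The back-of-the-envelope model below is a rule-of-thumb sketch, not a measurement; the 75 IOPS per 7.2K SATA spindle is an assumed typical figure.

```python
# Rough model of random-write capability under RAID write penalties.
# RAID 5 ≈ 4 back-end I/Os per host write; RAID 1 ≈ 2.
# 75 IOPS per 7.2K SATA spindle is an assumed ballpark value.

def host_write_iops(drives, iops_per_drive, write_penalty):
    """Back-end IOPS divided by the per-write I/O amplification."""
    return drives * iops_per_drive // write_penalty

raid5 = host_write_iops(drives=4, iops_per_drive=75, write_penalty=4)
raid1 = host_write_iops(drives=2, iops_per_drive=75, write_penalty=2)
print(raid5, raid1)  # → 75 75
```

The point of the exercise: a 4-spindle SATA RAID 5 behind a controller with no write cache delivers on the order of a single drive's worth of random-write IOPS, which is consistent with the ~20 MB/s the poster measured.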
The team is working on improving the logging. Feature by feature, you should see improvements coming in future releases. | Hello, How can you download the execution details (including results and messages) from a job that has been executed? The export function will only export that the job has been completed, with no details on the target systems or what the message is. Thank you, | Dell OpenManage job execution details
Hi, I downloaded and installed Repository Manager onto my system (Ubuntu 18.10). I initially got the same error as you. However, when I modified the folder permissions to include read and write for other users, I was able to export to my test directory. | I'm using the 3.1.0.468 version of the Dell Repository Manager on an Ubuntu machine, and have trouble creating an ISO file. I tick the check box for the repository I want to update and click EXPORT. Regardless of what I enter as a location for the Smart Bootable ISO, I am served this error message when clicking EXPORT: 'The specified path is not accessible. Kindly check the path or access rights.' I have tried several different directories that I know I have read/write access to, and even tried with root with the same result. Anyone actually tried this with Linux? | Unable to specify acceptable path when creating bootable iso |
I apologize, I pulled the wrong one. I believe the one you need is part Y688K, as seen below.[IMAGE] | I want to remove the PERC 6/i RAID controller from my PowerEdge T310 so that I can connect my storage drives to the motherboard and use a software RAID. There aren't enough power connectors from the power supply to do this now. Is there a cable kit available so that I can do this? Thanks. | Cable kit to remove PERC 6/i, backplane from T310 |
Hello, I'm about 99% sure those are not EQL-qualified drives based on the symptoms you describe. Additionally, Dell EMC doesn't sell array drives for arrays that are out of warranty. So I suspect you have Dell server or other non-EQL drives. You can either try to get the array under contract, which would also allow you to upgrade the array firmware (6.0.x is years old), or you can ask about Dell Extended Maintenance or otherwise try to find a 3rd-party maintainer for proper EQL-compatible drives. Regards, Don | Does the Dell EqualLogic PS4110 support 4Kn drives? | PS4110 and 4kn drives
Hello Yes, the P470F is supported by OpenManage Enterprise. There is a support matrix on the OpenManage Enterprise support page that has more detailed information. [URL] Thanks | Dell EMC VxRail P470F (13th Generation PowerEdge Server) iDRAC 8 (Version 2.52.52.52 (Build 12)) Is "Dell EMC OpenManage™ Enterprise" compatible with the VxRail appliance? Can it monitor the hardware components of the VxRail appliance? | Dell EMC OpenManage Enterprise - Monitoring VxRail Appliance
Just a final update... physical disk replacement, new array, and bare metal restore successful. Thanks again Dylan for all your help. | Hello, I've got a remote site with an out of warranty T610 that went down over the weekend. We had plans to replace the system after the first of the year but we didn't quite make it. Would very much appreciate some guidance in how best to move forward. Error at boot from PERC 6/i: There are two arrays. The first four disks are RAID-10 (if I recall correctly), and the last two are mirrored. All six drive enclosure LED's are solid green. I asked the site contact to hit "C" to load the config utility."VD management" reports only Virtual Disk 1."Physical Disks" only shows drives 04 and 05. What should I do to get this system back up again? Thanks for your help. | PERC 6i "the following vds are missing" |
Hi there, RPVM 5.2.1 which is expected to go GA next month will enable replication to AWS (S3) including orchestration in the cloud using Cloud DR and recovery to VMC. Hope that helps, Idan Kentor Product Technology Engineer - RecoverPoint and RecoverPoint for VMs [NAME] idan.kentor[NAME].com | Hi Experts, Do we support RP4VM on VMware Cloud on Amazon? | RecoverPoint for VM on AWS |
Hello, [URL] Above is the link to the H310 user guide. Based upon a review of that guide, it appears that the additional storage space will be available upon completion of the reconstruction operation. Considering that the drives are not members of the array until after the completion of this operation, I wouldn't expect that capacity to be available because the logical block addressing hasn't been created yet. Once the operation is completed, Windows will still not yet expand the partitions. You would need to expand the partitions after the reconstruction, in order to make that raw storage into usable space. Finally, the speed that you're seeing for the rebuild isn't unusual. This is a result of the controller being slower compared to the H710 or H710P, and of the fact that these are SATA drives. | We have a PowerEdge T320 server running Windows Server 2008 SP2. It is used for running a couple of applications, but is mainly for backups of our main server SAN. These are ShadowProtect full images and incrementals. It has a PERC H310 adapter with 8 drive slots. Slots 0-1 hold 2 560GB SAS disks (Virtual Disk 0) in RAID 1. Slots 4-7 hold 4 4TB SATA disks (Backup) in RAID 5. Due to a lack of space on both our main server and the Backup drive, we have commenced the process of getting a major upgrade with a new SAN for our main data, and the existing SAN to replace the current Backup drive. This is taking some time, however, and in the meantime we need more space. The existing 4 disks in slots 4-7 were Western Digital product # WDC WD40EFRX-68WT0N0, each 3,725.50GB. I added 2 new disks to slots 2-3, which are Seagate Constellation product # ST4000NM0033-9ZM170, each 3,725.50GB. After adding them, I used the "Reconfigure" task in the Virtual Disk section of OpenManage Server Administrator to add the two extra disks to the "Backup" virtual drive. Immediately, the state of the Backup drive changed from "Ready" to "Reconstructing".
The size of the drive remained the same however, 11,176.5 GB. I checked Windows Drive Management, but there was no option to expand the drive space. This is disappointing, as I had hoped that the space would immediately jump by around 6TB. I booted into the PERC BIOS configuration utility and checked the physical disk state there. I have:
DISK ID   TYPE  CAPACITY    STATE   DG  VENDOR
00:01:00  SAS   558.37 GB   ONLINE  00  SEAGATE
00:01:01  SAS   558.37 GB   ONLINE  00  SEAGATE
00:01:02  SATA  3725.50 GB  ONLINE  --  ATA
00:01:03  SATA  3725.50 GB  ONLINE  --  ATA
00:01:04  SATA  3725.50 GB  ONLINE  01  ATA
00:01:05  SATA  3725.50 GB  ONLINE  01  ATA
00:01:06  SATA  3725.50 GB  ONLINE  01  ATA
00:01:07  SATA  3725.50 GB  ONLINE  01  ATA
There appears to be no option to add disks 2-3 to a DG (Disk Group?). The installation was on Friday before lunch, and the reconstruction process has so far reached 16% (73 hours). That suggests that the reconstruction process may take 19 days. Is there anything I can do? Will it add the drives to the DG when reconstruction is complete? | Add HDDs to existing Perc H310 virtual drive.
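The poster's 19-day estimate can be reproduced by straight-line extrapolation of the observed reconstruction progress (16% after 73 hours). A small sketch of that arithmetic:

```python
# Linear extrapolation of a RAID reconstruction ETA from observed progress.
# Assumes progress is roughly linear over the whole operation.

def estimated_total_days(percent_done, hours_elapsed):
    """Project total runtime in days from the fraction completed so far."""
    total_hours = hours_elapsed / (percent_done / 100.0)
    return total_hours / 24.0

# 16% complete after 73 hours, as reported in the question
print(round(estimated_total_days(16, 73), 1))  # → 19.0
```

Reconstruction rates are not perfectly linear in practice (background I/O and controller load vary), so treat the projection as an order-of-magnitude check rather than a deadline.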
I figured this out after speaking with Dell support. This error was because I was trying to map a volume to a cluster where a mapping for one of the servers in said cluster already existed for the volume. I had to remove the individual server mapping before creating the map to the cluster. | I'm getting an error when I try to map a volume to a cluster of servers in Unisphere. I can map the volume to a server within the cluster, just not the cluster itself. [IMAGE] | Error in Unisphere when mapping a volume to cluster |
Hello Christoph, ADAPT or Distributed RAID is similar to Dynamic Disk Pooling (DDP) on the MD3 systems. In essence, DDP functions effectively as another RAID level offering, in addition to the previously available RAID 0, 1, 10, 5, and 6 traditional RAID disk groups. DDP greatly simplifies storage administration because there is no need to manage idle spares or RAID groups. Please let us know if you have any other questions. | Dear Community, I am currently setting up a new ME4024 with 24x SAS disks. It will be used as a backup repository. Usually I would use RAID 5 with a hot-spare disk. But I noticed that there is an "ADAPT" mode for the pool configuration. Can anyone explain how exactly this works? I already read the administrator's guide with the description, but could not find out how it works technically. Maybe there is some kind of picture to see how this mode handles data blocks / stripes on the disks? Thank you! Christoph | ME4024 - ADAPT vs. RAID5
It depends on what you understand as "disrupt connectivity". A reboot, no matter how fast, will always be somewhat disruptive at the lower levels, and a client will have to at least re-establish the TCP connection. The question is more how much of that is visible to the client OS and application. NFS clients using default hard mounts will just see a pause in I/O but no error to applications; the OS and protocol stack will of course re-establish the connection, recover locks, and so on. For CIFS clients it depends on the application and OS. Windows itself will automatically reconnect, and cluster-aware applications that retry internally should be OK, but simple operations like copying files via explorer.exe can stop and show a "Try again" dialog. For those applications that really require transparent failover, like SharePoint or Hyper-V over SMB shares, you can enable SMB CA (Continuous Availability) per share; then they will also just pause and resume I/O, similar to NFS. See the NAS white paper and Microsoft's details about CA in SMB3. Why don't you just try it? All an upgrade is doing is an SP reboot, which you can easily do even from the GUI. If you don't want to use your hardware Unity, the VSA will show the same behaviour. | Hi All, Is the Unity OE upgrade going to disrupt connectivity to the CIFS and NFS shares that are configured on the array? As far as I understand how it is configured, and how it is supposed to work, it will be a disruptive activity. However I cannot find any kind of confirmation for that on the DELL EMC support site, or any KBs, whitepapers that would prove the opposite. Has anyone had any recent experience with upgrading Unity OE on the Unity that has file shares configured? If so, can you please share what exactly happened to the client connections to the file shares and how you handled it, if you had to do anything at all? Appreciate your input. | Upgrading OE on Unity that is configured for file services only
Hello If you perform a transfer of ownership to get the server registered to you then you may be able to recover the license. Once the transfer of ownership is complete you can contact customer care and request the registration code to add the license to your digital locker. Once it is in your digital locker you can download it whenever you like. Transfer of ownership requires account verification of the current registered owner, so it is intended to be performed by the current owner. If you are unable to get the seller to perform the transfer or you don't know the required information to fill out the transfer of ownership, the transfer may be denied. Thanks | Hello, I've purchased an out-of-warranty R720 for my personal learning homelab, and I see the original manifest included the following: Part number: KV4ND Quantity: 1 Description: SERVICE INSTALL MODULE, IDRAC7, ENTERPRISE, PERP The physical iDRAC7 is present, but the server has been competently wiped and so the Enterprise license key is not present. I am hoping that the "PERP" in the Description implies a Perpetual Enterprise license for the hardware. How may I recover the license key? Many Thanks, | iDRAC7 License recovery |
This is correct. The CG needs to reach a Logged Image Access state to enable failover. Rgds, Rich | Hello All, I am trying to test my failover capabilities through RPA. After enabling 'Image access' (physical) and mounting the replica LUN to the DR ESXi 5.5 hosts, I am not able to do the 'Failover Actions - Failover to remote replica' (as shown in the picture below). This options is grayed out. I don't have any SRM and my policy setting for 'Stretch cluster/VMWare SRM support' etc are set to NONE. Without the failover actions, I am not able to replicate back my changes to the Prod_Test site. I need suggestions. Am I missing any policy or any configuration? Also, as seen in the picture below, the storage status in the DR_Test (remote side) is showing 'Enabling logged access'. So does this mean that this is still processing this enabling and that's the reason why it is grayed out? I am however able to mount the replica LUN, add the replicated VM to the inventory, boot it up and browse the replicated VM at my DR site. Thanks in advance! [IMAGE] Regards, Vilas | Recoverpoint v3.5 - 'Failover options' greyed out |
Hi, Thanks for your post. The current implementation of OpenManage Enterprise needs the IP ranges to be distributed into smaller ranges of 10,000 IPs, even if a smaller number of devices is actually present in the range. This is because, irrespective of how many devices are present, the appliance needs to scan through all the IPs, and with a range that large there won't be enough threads available. So, at this point in time, the only feasible solution is to break the IP range into smaller ranges. | Hey, I am running the OM Enterprise Appliance Version 3.0.0 (Build 990). I want to discover two ranges like 10.0.0.0/16, i.e. from 10.0.0.1 to 10.0.255.255. OM Enterprise does not allow me to do this, as this is more than 10,000 IPs. In all there are fewer than 1000 hosts in this range. (Let's please not discuss the sense of this :smileyhappy:) a) How can I discover this range? Adding several small ranges is just nothing I want to do at this size. b) Can I add hosts via the REST API or similar? I checked the API but did not see the option to add hosts via the API. Thanks Thomas | Discover range >10.000 IPs or use API?
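Since the appliance caps a single discovery range at 10,000 IPs, the /16 has to be pre-chopped into sub-ranges. A small sketch using Python's standard ipaddress module to emit start/end pairs that could be entered as separate discovery ranges (the 10,000 limit comes from the thread; everything else here is illustrative):

```python
import ipaddress

# Split a large CIDR block into consecutive (start, end) address ranges
# of at most `max_ips` addresses each, for tools that cap range size.

def split_cidr(cidr, max_ips=10000):
    net = ipaddress.ip_network(cidr)
    first = int(net.network_address)
    last = first + net.num_addresses - 1
    ranges = []
    start = first
    while start <= last:
        end = min(start + max_ips - 1, last)
        ranges.append((str(ipaddress.ip_address(start)),
                       str(ipaddress.ip_address(end))))
        start = end + 1
    return ranges

chunks = split_cidr("10.0.0.0/16")
print(len(chunks), chunks[0])  # → 7 ('10.0.0.0', '10.0.39.15')
```

Seven ranges instead of one is still manual work, but far less than typing dozens of /24s, and the same output could feed a scripted creation of discovery jobs if the REST API gains that capability.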
After re-registering the DAG, the issue is gone. | When trying to expand and browse a DAG client, this message appears: Client refused browse request. 6207 Message intended for Client (CID DAG) received incorrectly at agent on (Node name:IP Node(CID node)) currently registered to (Avamar server). The Avamar server tries to connect to the DAG with the CID of a node in the DAG. When trying to browse any DAG node, no error. Any ideas? | Browse DAG fails
Hi, Some features may work without IPMI but in general it needs to be enabled to work properly. | Hello! Can someone please tell me if WSMAN requires IPMI to be enabled? Thanks. -r | WSMAN and IPMI |
Yes, you can. | It might sound like a stupid question but can SAS cables be disconnected/removed between a MD3260 and a R720 while the server is powered on? I already removed all the accesses between the MD and the server (no more mounts, multipath devices removed, linux devices removed, MD LUNs deleted, MD host mappings deleted, etc). The MD is not used anymore and can be shutdown at any time but the server cannot be shutdown. So I guess it would be safe to disconnect the cables but I don't want to get any surprises while doing it (like some kernel panic or any other malfunction). | Disconnect SAS cables while the server is powered on
Hi Jacopo, Not without a reinstall. This behavior might change in the future so stay tuned. Regards, Idan | Hello, is it possible to remove an RPA from a cluster? The version of RPA is 5.1(c.150) Have a nice day Jacopo | Remove RPA from Cluster RP 5.1(c.150) |
There is no right or wrong way to accomplish what you are doing. The two types of workflows can essentially support both use cases, i.e. protection and repurposing. The protection seems straightforward for you. The service plan can be utilized to keep any number of copies, ready to be utilized for restoring. The time to restore I cannot state, but from working with many other customers, AppSync has reduced those times significantly, all dependent upon the environment of course. For the other use case, repurposing, you can utilize either type of workflow. I suggested service plans, due to the way your applications are laid out. If there are multiple applications within one CG, then you really do not have a choice, unless you can break them up at the storage layer. If you can create unique CGs per application/database, then the repurposing workflows are certainly more conducive for your purposes. Utilizing the repurposing workflows will provide the ability to create 1st and 2nd gen copies. The 1st gen could also be utilized for a restore, so long as it is never mounted, and the 2nd gen copies happen near instantaneously, though the mount operations may take longer. The 2nd gen copies can be divvied out to the developers, who all will have a working copy of the 1st gen. This solution will only affect the application once. This is of course, as we have discussed, dependent upon the layout of the CGs, as the repurposing workflows perform copy management differently than the service plans. | Hey guys, I'm a Database Developer trying to get a handle on AppSync so we can start using it with our XtremIO SAN, and take away some of the hassle from our overworked Sysadmin, who has been mounting Protect (Service Plan) copies, unmounting, and complaining of having to manually unmap and expire the copies.
I'm trying to perform a regular repurpose copy for development, but I am finding when I perform a snapshot repurpose, that it says in the details: The problem here is, although I am on the "User Databases" page of "Copy Management", I am selecting a single database, and I want to just repurpose that single database - and instead, this makes a g1 copy of all the databases on the server. Is there something I'm doing wrong here - why is it choosing All Databases when I am after the one? We're also finding that in the XtremIO utility, it is reporting the copy is the full size of the original - so, for a 50gb database, it'll report that an additional 50gb is used for each copy we make - eg 3 copies is 150gb on our XtremIO. My understanding was that it was a point-in-time diffed copy and it would only accrue additional space as the two copies became different from each other? | Creating a Repurpose SQL snapshot on XtremIO - all databases |
Fixed the issue by shortening the names. | Hello We are getting an error when using Vipr 3.6 and RPA 5 to create a volume. Error 19000: Message: Failed to create RecoverPoint consistency group ViPR-DAL-HAW: Links for RecoverPoint consistency group: ViPR-DAL-HAW failed to become active Description: Create consistency group subtask for RP CG: ViPR-DAL-HAW Additional errors occurred during processing. Each error is listed below. Additional Message: Failed to delete consistency group ViPR-DAL-HAW: Illegal consistency group parameter: ConsistencyGroupUID [id=23378130] Description: Rollback Create consistency group subtask for RP CG: ViPR-DAL-HAW Additional Message: "Error deleting volumes: Failed to delete volumes: DAL-HAW-PWHAWXIO-journal-1:-Internal Error : {"message":"cannot_remove_volume_that_is_mapped","error_code":400}" Rollback encountered problems cleaning up XTREMIO+FNM00161200193+VOLUME+a3b4be2786824eb69dd5a4e527803650 and may require manual clean up Description: Rollback Creating volumes: Volume: DAL-HAW-PWHAWXIO-journal-1 (urn:storageos:Volume:315f4746-7279-4766-9981-d8d1c35e8943:vdc1) Additional Message: "Error deleting volumes: Failed to delete volumes: PWDA-GRID-REP-01-target-PWHAWXIO_829:-Internal Error : {"message":"cannot_remove_volume_that_is_mapped","error_code":400}" Rollback encountered problems cleaning up XTREMIO+FNM00161200193+VOLUME+a99f1f11e3b649a1a341919915d2dd4d and may require manual clean up Description: Rollback Creating volumes: Volume: PWDA-GRID-REP-01-target-PWHAWXIO (urn:storageos:Volume:7109cd52-fd11-49b7-b445-2a55d1387cc7:vdc1) Logs | Failed to create RecoverPoint consistency group
Hello irfon-kim, If your replacement drive doesn’t have an EMC part# on the drive, then it will not be seen by your VNXe3100. In most cases when running an older flare code on your system and using an EMC replacement HDD the drive will show up. If the drive doesn’t have an EMC part# on it, then it doesn’t have the EMC firmware on it so that it can be seen by your VNXe3100. Please let us know if you have any other questions. | Hi! I'm new to VNXe. I've just had a system dropped on me that has had a disk fault. We ordered a replacement disk from our vendor, and they told us that the disk in question was no longer available for purchase, but that they had a replacement that would work. When installing it, the system doesn't see it at all, and thus doesn't rebuild across it. The vendor has told us that their "EMC guys" have suggested that we need to "format and initialize" the disk. I can find no such option during my brief tour through the Unisphere interface, the Unisphere CLI command reference, or the help page when logged in via SSH. We're having a few other oddball problems that might simply be due to slightly older system software (2.4.0.20932), but we can't install upgrades to either the system software or the hard disk firmware while the system doesn't pass a health check, so getting the faulted disk(s) successfully replaced seems to be the top priority. The system is in use, which makes wiping or re-initializing the entire box undesirable. Does such a function exist? If so, how do I find it? Or is the vendor spinning tales? | VNXe3100: Format a disk? |
Hello, To resolve this issue I cleared the CMOS: I removed the BIOS battery and powered on the node, and it works. Don't forget to refit the BIOS battery. Thank you Rached | Hello The node does not want to boot, with 1-5-4-2 beeps and red alarm LEDs. The BIOS is Phoenix. Can you help me locate the problem? Regards. Rached | VxRail G series does not want to boot.
Hello, To resolve this issue I cleared the CMOS: I removed the BIOS battery and powered on the node, and it works. Don't forget to refit the BIOS battery. Thank you Rached | Hello Please can you help me with a hardware issue on the VxRail G series. The node does not want to start, with 1-5-4-2 beeps. The BIOS is Phoenix. Regards, Rached | VxRail does not want to start, with beep errors.
Hi Abdurrahman, It looks like 2666 MHz support was later added to the R730, so the DIMMs should work, but they'll be slowed down to 2133MHz (to match the others). You are correct about the slot population. Thanks, | Hello, I have an R730 with 2x E5-2670 v3 and 2x 16GB 2Rx4 RDIMM 2133 MHz. I am thinking of upgrading the memory with 2x 32GB 2Rx4 RDIMM 2666MHz. In the R730 owner's manual, it is said that the frequency can be 1866, 2133 or 2400. So can this 2666MHz memory module work at 2133MHz alongside the 16GB 2133MHz memory modules without any problem? As for the population of the modules, I think the 32GB should go in slots A1-B1 and the 16GB in A2-B2; if that is wrong please correct me. I am adding the memory module links below. Thanks for your help. [URL] [URL] | R730 Memory Upgrade and Configuration
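The clock-down behaviour in the answer (mixed 2133 MHz and 2666 MHz RDIMMs all running at the slowest common speed, further limited by what the CPU/platform supports) can be stated as a one-line rule of thumb. A tiny illustrative sketch; the 2400 platform cap is an assumed example value, since the actual cap depends on the installed CPU:

```python
# Rule of thumb: mixed-speed DIMMs run at the slowest module's speed,
# capped by the platform/CPU maximum. The 2400 MT/s cap is an assumed
# illustrative value; it varies by CPU model.

def effective_memory_speed(module_speeds, platform_max=2400):
    return min(min(module_speeds), platform_max)

# The question's mix: two 2133 and two 2666 MT/s modules on an R730
print(effective_memory_speed([2133, 2133, 2666, 2666]))  # → 2133
```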
Chris, thanks for helping! Do you think those limits could affect a very simple configuration such as mine, RAID 1 on 2 disks? What may I expect in case of failure? | We've been using a Dell PowerEdge T130 server with Windows Server 2012 R2 for two years, with the onboard RAID controller set to RAID 1 and two disks. Although we have had no problems, as I have read some messages discouraging use of the onboard RAID controller, I am wondering what risk we are exposed to. | Poweredge T130 onboard RAID reliability
If you mean directly connecting hosts via SAS to the storage system: Unity doesn't support that; it offers Fibre Channel and iSCSI for this. There is a new low-end block-only Dell storage system, the PowerVault ME4 series, that supports SAS direct connect (see here). It does use different management tools than the CX or Unity. | Hi, We are going to replace our CX4-240 FC SAN. Due to limited budget, we are considering HPE SMA 202052 SAS Dual Controller SFF Storage. Since we have previous experience with CX4 and VNX SANs, we wonder whether there is any low-end hybrid EMC Unity product for SAS connection. May I ask whether we can use tools similar to the Unisphere client to manage that Unity SAN? Thanks | Low End Unity SAN
Hello, I suggest that you verify the model is correct. You should also check the listed capabilities of the interface within the operating system. You need to troubleshoot to find out whether it is a limitation of the NIC, a result of link-speed negotiation with the switch, or a setting or driver issue. You can try booting to a different operating system, like our Support Live Image, to isolate the fault. [URL] Thanks | I have two nearly identical R610 servers sitting side by side; call them #3 and #4. The only differences are 96 vs 64 GB RAM and slightly different Xeon CPUs. Both are VM hosts with very light duty, #4 the lighter of the two. Both run the same versions/updates of 2012 R2, and both have Broadcom BCM5709C NetXtreme NICs running the same Windows drivers. I have switched LAN cables at the servers; it does not make any difference. OpenManage correctly identifies the NICs on #3 as 1000.0 Mbps, but #4 shows 100.0 Mbps, and that is what it is operating at. I did notice a firmware version difference. #4 (problematic): Driver Name bxnd60a, Driver Image Path C:\Windows\system32\DRIVERS\bxnd60a.sys, Driver Version 7.4.23.2, Firmware Version Family 6.4.5 (b9x2 5.2.3, iSCSI v6.4.3). #3 (operating properly, same driver): Firmware Version Family 5.0.13 (b9x2 5.0.11, iSCSI v4.1.6) | Broadcom 1GB NIC speed only 100 Mbps
Hello Polym, Currently you can't use an ME412 storage expansion directly connected to a server. We are working on adding that for the ME systems in the future. Please let us know if you have any other questions. | Hi. Can the Dell EMC ME412 storage expansion be connected directly to a server via SAS? | Dell EMC ME412 Storage Expansion Enclosure connect to server
Hello, The information Jimmy provided was correct at the time. Non-RAID support was just added with firmware version 50.5.0-1750. The release notes are on the firmware download page. You can find drivers and firmware on the system support page. [URL] Thanks | Hi all, we're going to buy 3 new R740xd servers with the H740P controller. Before buying the new servers, we need to know whether we can create a RAID 1 virtual disk with 2 HDDs and keep the others as non-RAID (JBOD). I've seen a post here where it was said that non-RAID disks are not possible with the H740P ([URL] but the configuration manual explains how to set up a non-RAID disk. Can someone please help us? Here's the link: [URL] (page 59) | Dell R740xd with H740P
For anyone else who is looking for this: OpenManage 9.2 for Windows is out and will connect to the ESXi 6.7 VIB. [URL] | Hi, I have an R610 with the latest VIB for ESXi 6.7. When running OpenManage it returns the message "Server Administrator Web Server Version 9.1.0 is not supported with this Server Instrumentation Version 9.2.0". Is there a newer version of Server Administrator? If so, is anyone able to link me to it? | OpenManage 9.1.0 not connecting to ESXi 6.7 - 9.2.0
As the others have stated, root is not available except via support channels, and you shouldn't need to use it unless it's specifically required. If you want to create users via the CLI, you'll need to install the UEMCLI client for your environment. The user guide for the UEMCLI is here: [URL] This assumes you are using an account that has the correct privileges to create users. uemcli -d {array ip} -u {local or fqdn/user} /user/account create -name {value} -role {roletype} -type {local | ldapuser} -passwd {password} (or -passwdSecure), e.g. uemcli -d 10.0.0.1 -u domain.local/admin -p {password} /user/account create -name 'TestUser' -role operator -type local -passwd Password123! To see the list of user accounts on the array: uemcli -d 10.0.0.1 -u domain.local/admin -p {password} /user/account show This should show the account in the list of users | Hi guys, I want to create a local user using the CLI. I SSH in with the service user but don't have permission: 09:16:28 service@(none) spb:/> su root Password: su: Authentication failure Please supply the default root password. Thanks! | Unity 400 default root password
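If you script user creation against several arrays, it can help to build the uemcli argument list programmatically rather than pasting shell one-liners. This is a minimal sketch, assuming uemcli is on the PATH of the management host; the function name and all sample values (IP, credentials, user name) are illustrative, not part of the UEMCLI itself.

```python
import subprocess


def build_uemcli_create_user(array_ip, admin_user, admin_password,
                             name, role, password):
    """Build the argument list for a 'uemcli /user/account create' call.

    All values passed in are illustrative; adjust to your array and account.
    Returning a list (not a string) avoids shell-quoting problems with
    passwords containing special characters.
    """
    return [
        "uemcli", "-d", array_ip,
        "-u", admin_user, "-p", admin_password,
        "/user/account", "create",
        "-name", name, "-role", role,
        "-type", "local", "-passwd", password,
    ]


cmd = build_uemcli_create_user("10.0.0.1", "domain.local/admin", "AdminPass1!",
                               "TestUser", "operator", "Password123!")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment on a host with uemcli installed
```

The run call is left commented out so the sketch is safe to execute on a machine without the UEMCLI client installed.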
Hey Sean, What I do is rename the .exe to .zip. If you open the archive, you'll find a payload directory; everything you want is in there. | For the life of me I need the .inf file, not the .exe's that are available on the Dell site for the PERC S100 RAID controller. It is for a 2008 R2 x64 server. Please help; I have been banging my head against my desk for several hours now trying to get this server back online. It's not that I need to repair Windows or run the exe in the repair module. I need to boot into EaseUS Partition Manager and extend the C volume and shrink the D volume, because it's so full that boot-up fails before the login screen. | T110 II PERC S100 Driver
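The rename-to-.zip trick in the answer above works because some driver packages are ZIP archives with an .exe extension. A small sketch of the same idea, assuming the package in question really is ZIP-format (the function name is mine): a ZIP reader inspects the file's contents, so the extension doesn't matter at all.

```python
import zipfile
from pathlib import Path


def extract_payload(package_path, dest_dir):
    """Extract a ZIP-format update package, even if it is named .exe.

    zipfile reads the archive structure from the file contents, so no
    rename is actually required. Returns any .inf files found, e.g. the
    driver file needed for a repair environment.
    """
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(package_path) as zf:
        zf.extractall(dest)
    return sorted(dest.rglob("*.inf"))
```

If `zipfile.ZipFile` raises `BadZipFile`, the package is not ZIP-format and the rename trick will not work for that particular download either.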
Hello, The VSM Admin Guide shows which versions you can upgrade to/from. Table 2 of the 4.7 guide only shows 4.6 to 4.7, so you would have to upgrade in steps several times to get to 4.7. It might be easier to just redeploy 4.7. The path looks like 3.5.x -> 4.0.2 -> 4.5.x -> 4.6.x -> 4.7.x. You might be able to go from 4.5.x to 4.7.x, but I don't know that for sure. Regards, Don | Good afternoon ladies and gents, I am currently planning an upgrade of Dell VSM. The current version is 3.5.3.10 and we are looking to go to version 4.7.0 for VMware 6.5 support. I'm struggling to find an upgrade path anywhere. I am aware I could recreate the VM using the OVA, but we are trying to avoid recreating our replication jobs unless absolutely necessary. Is anyone aware of any issues this may cause, or does anyone have experience upgrading across significantly different version numbers? | Upgrade path for Dell VSM 3.5.3.10 > 4.7
This is a normal alert. If there is no follow-up alert that shows Status: Cleared, then you need to check whether your Veth0 or management port is online and can ping your DNS server from within the DD CLI. This alert is raised because, after you restarted the external network, the DD could no longer reach the DNS server; under normal circumstances the alert will auto-clear itself once DNS is accepting requests again. Hope this helps | Hello, when I rebooted my network connectivity, my Data Domain gave me a lot of errors like the following. Does it mean that the Data Domain sends some data to emc.com? What kind of data, and for what reason does it send it? DNSUnresponsive Message: Unable to communicate with configured DNS. Severity: CRITICAL Class: Network Object ID: Event ID: EVT-NETM-00009 Additional Info: DNS-Servers = see output from net show dns Status: Cleared Description: Action: | Data Domain 3300 - Reboot network
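The condition behind this alert can be reproduced from any host with a rough resolver check. This is only a stand-in sketch for what the DD alert monitors (whether the configured DNS servers answer), not Data Domain code; the function name and hostname are illustrative.

```python
import socket


def dns_responsive(hostname="example.com", timeout=5.0):
    """Return True if the system's configured resolver can look up hostname.

    A rough stand-in for what the DDNSUnresponsive alert checks: after a
    network restart, name resolution fails until DNS is reachable again.
    """
    old = socket.getdefaulttimeout()
    socket.setdefaulttimeout(timeout)
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False
    finally:
        socket.setdefaulttimeout(old)
```

On the Data Domain itself, the equivalent check is the one the answer describes: confirm the management interface is up and that the servers listed by `net show dns` respond to ping.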