Channel: High Availability (Clustering) forum

Adding S2D nodes or additional S2D cluster


Hi

We are scaling up our Hyper-V S2D environment. We already have two S2D HCI clusters, each with 6 nodes. We now have 6 additional nodes ready to be brought into production.

So the question pops up: add these 6 nodes to one of the two existing clusters, or create a third cluster with 6 nodes?
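
For reference, a rough sketch of what each option might look like in PowerShell (cluster and node names are placeholders):

# Option A: expand an existing cluster; S2D normally claims the new nodes' eligible drives into the existing pool
Add-ClusterNode -Cluster 'S2D-Cluster1' -Name 'Node13','Node14','Node15','Node16','Node17','Node18'

# Option B: build a third cluster from the 6 new nodes
New-Cluster -Name 'S2D-Cluster3' -Node 'Node13','Node14','Node15','Node16','Node17','Node18' -NoStorage
Enable-ClusterStorageSpacesDirect -CimSession 'S2D-Cluster3'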

S2D supports up to 16 nodes, no issues there, although I find it riskier to have larger clusters. If there is ever an issue at the cluster level, you lose more workloads. Basically the 'don't put all your eggs in one basket' theory.
So what is the sweet spot for the number of nodes? It probably depends on your environment. It's more a gut feeling than solid judgement. Twelve nodes sounds too risky to me, or on the border of risky anyway. Don't know why, ask my gut.

So let's say I choose a 3rd cluster of 6 nodes. Then I get a 3rd S2D storage pool. And subsequently, this cluster will carry its own RDMA/RoCE traffic for this pool on the same 10 Gbit switches as the other 2 clusters. In short, I have 6 more nodes talking RDMA in the same two storage VLANs as the other 2 clusters.
Is this a disadvantage?
Should I separate the storage VLANs for each cluster?

I'm sure all options will work. On paper. So how do I make the right choice?
Storage sizing isn't a factor. We don't use hybrid volumes with dual-parity, only 3-way mirroring. More nodes in one cluster would make the storage sizing more efficient if I were using dual-parity.

Any thoughts ?

Greetz
RW


Un-clustering Hyper-V nodes into standalone hosts


As the title suggests, I have a pair of server 2016 hosts which are currently operating as a 2-node failover cluster.

Due to relocation of resources, I'm looking to break up the nodes into two standalone servers. One of the servers is to be decommissioned from Hyper-V use, the other to remain as an operational stand-alone Hyper-V host.

Is there a known process for un-clustering nodes back to standalone Hyper-V hosts?

I have a number of sizable VMs (storage-wise) which currently sit under the C:\ClusterStorage\ location. I'm trying to figure out whether I need to create a new LUN on my SAN and migrate the VMs onto it, or whether I can reuse the existing LUN currently backing the CSV volume.

Is there scope for me to power down the VMs, un-cluster the node, convert the CSV volume to a normal SAN-based disk, and power the VMs back up on the now standalone host?
*EDIT*
I know there's the option to 'Remove from Cluster Shared Volumes', but I'm not sure what the impact of doing this is.
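
For what it's worth, a rough sketch of the order of operations I would expect, with placeholder VM, resource and node names (test on a lab copy first):

# 1. Shut down the VMs that live on the CSV ('VM1','VM2' are placeholders)
Stop-VM -Name 'VM1','VM2'

# 2. Turn the CSV back into a plain clustered disk ('Cluster Disk 1' is a placeholder name)
Remove-ClusterSharedVolume -Name 'Cluster Disk 1'

# 3. Remove the disk from cluster control entirely so it becomes an ordinary SAN-attached disk
Remove-ClusterResource -Name 'Cluster Disk 1'

# 4. Evict the node being decommissioned, or destroy the cluster once both hosts go standalone
Remove-ClusterNode -Name 'Node2'        # or: Remove-Cluster -CleanupAD

Note that the VM configuration files will still reference C:\ClusterStorage\... paths, so you would likely need to re-create that mount point or re-import the VMs from the volume's new drive letter.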







S2D StoragePool and Virtual disk size


Hi, All.

I'm testing a failover cluster with S2D enabled on WS2019. I have 3 VMs, each with two 5 GB HDDs. I've created the failover cluster, enabled S2D, and created a storage pool.

PS C:\Windows\system32> Get-ClusterS2D

CacheMetadataReserveBytes : 34359738368
CacheModeHDD              : ReadWrite
CacheModeSSD              : WriteOnly
CachePageSizeKBytes       : 16
CacheState                : Disabled
Name                      : s2d-cluster2
ScmUse                    : Cache
State                     : Enabled

PS C:\Windows\system32> Get-StorageSubsystem *cluster* | Get-PhysicalDisk

DeviceId FriendlyName        SerialNumber                     MediaType CanPool OperationalStatus HealthStatus Usage       Size
-------- ------------        ------------                     --------- ------- ----------------- ------------ -----       ----
3002     VMware Virtual disk 6000c29503bcbdb2cf84ec2867ea371b HDD       True    OK                Healthy      Auto-Select 5 GB
1002     VMware Virtual disk 6000c29a55adf0ba2fb870ad3a9dfd32 HDD       True    OK                Healthy      Auto-Select 5 GB
3001     VMware Virtual disk 6000c291b4372413c4f7feaa10fe9beb HDD       True    OK                Healthy      Auto-Select 5 GB
2002     VMware Virtual disk 6000c2993762bd6617b3cd5eef1ff9d0 HDD       True    OK                Healthy      Auto-Select 5 GB
2001     VMware Virtual disk 6000c292a3946d8f1b33bb0f716ecd44 HDD       True    OK                Healthy      Auto-Select 5 GB
1001     VMware Virtual disk 6000c294f797902a11829554da27c6ae HDD       True    OK                Healthy      Auto-Select 5 GB


Questions about space allocation:
1. After creating the pool I see only 26.9 GB of free space. What is AllocatedSize? And where did another 1.6 GB go?

PS C:\Windows\system32> Get-StoragePool

FriendlyName OperationalStatus HealthStatus IsPrimordial IsReadOnly    Size AllocatedSize
------------ ----------------- ------------ ------------ ----------    ---- -------------
Primordial   OK                Healthy      True         False        70 GB       29.9 GB
S2D_4        OK                Healthy      False        False      26.9 GB        1.5 GB
Primordial   OK                Healthy      True         False        70 GB       29.9 GB

2. I try to create a 3 GB virtual disk with mirror. I'm expecting the pool to decrease by 6 GB, but the footprint is actually 8 GB.

PS C:\Windows\system32> New-VirtualDisk -StoragePoolFriendlyName "S2D_4" -FriendlyName disk1 -ResiliencySettingName Mirror -NumberOfDataCopies 2 -ProvisioningType Fixed -Size 3GB

FriendlyName ResiliencySettingName FaultDomainRedundancy OperationalStatus HealthStatus Size FootprintOnPool StorageEfficiency
------------ --------------------- --------------------- ----------------- ------------ ---- --------------- -----------------
disk1        Mirror                1                     OK                Healthy      3 GB            8 GB            37.50%


PS C:\Windows\system32> Get-StoragePool

FriendlyName OperationalStatus HealthStatus IsPrimordial IsReadOnly    Size AllocatedSize
------------ ----------------- ------------ ------------ ----------    ---- -------------
Primordial   OK                Healthy      True         False        70 GB       29.9 GB
S2D_4        OK                Healthy      False        False      26.9 GB        9.5 GB
Primordial   OK                Healthy      True         False        70 GB       29.9 GB


3. Next I try to create a 500 MB virtual disk with mirror. I'm expecting the pool to decrease by 1000 MB, but the footprint is again 8 GB.

PS C:\Windows\system32> New-VirtualDisk -StoragePoolFriendlyName "S2D_4" -FriendlyName disk2 -ResiliencySettingName Mirror -NumberOfDataCopies 2 -ProvisioningType Fixed -Size 500MB

FriendlyName ResiliencySettingName FaultDomainRedundancy OperationalStatus HealthStatus Size FootprintOnPool StorageEfficiency
------------ --------------------- --------------------- ----------------- ------------ ---- --------------- -----------------
disk2        Mirror                1                     OK                Healthy      3 GB            8 GB            37.50%


PS C:\Windows\system32> Get-StoragePool

FriendlyName OperationalStatus HealthStatus IsPrimordial IsReadOnly    Size AllocatedSize
------------ ----------------- ------------ ------------ ----------    ---- -------------
Primordial   OK                Healthy      True         False        70 GB       29.9 GB
S2D_4        OK                Healthy      False        False      26.9 GB       17.5 GB
Primordial   OK                Healthy      True         False        70 GB       29.9 GB

Another question: if I try to create a virtual disk from the GUI, most of the options are absent and I can only set the name and size of the new virtual disk. For example, I can't set the resiliency to two-way mirror.
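
As a workaround sketch, the resiliency can be specified explicitly from PowerShell (the pool name comes from above; the volume name and size are just examples):

# PhysicalDiskRedundancy 1 corresponds to a two-way mirror
New-Volume -StoragePoolFriendlyName 'S2D_4' -FriendlyName 'vol1' -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -PhysicalDiskRedundancy 1 -Size 3GB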

Help please, what did I do wrong?

Validate Windows Firewall Configuration error


Hi

I am getting the following error while validating a three-node Windows failover cluster

The Windows Firewall on node XX-XEV-UKW-DB1.xxdevtest.local is not properly configured for 
failover clustering. In particular, the 'Domain' firewall profile is enabled on
adapter 'XX-XEV-UKW-DB1.xxdevtest.local - Ethernet 2'.
The 'Failover Clusters' rule group is not enabled in firewall profile 'Domain'.
This may prevent some network communication between cluster nodes.



I am still getting the error even with the firewall disabled. How can I resolve it?
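
If it helps, a sketch of re-enabling the rule group the validation report mentions, assuming Server 2012 or later where the NetSecurity cmdlets are available (run on each node):

# Re-enable the built-in 'Failover Clusters' rule group and confirm it took effect
Enable-NetFirewallRule -DisplayGroup 'Failover Clusters'
Get-NetFirewallRule -DisplayGroup 'Failover Clusters' | Select-Object DisplayName, Profile, Enabled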

Thanks

Cluster Resource Name DNS record updated with IP of owner


Hi,

I have a Windows Server 2008 R2 file server cluster to which I recently added a new Client Access Point. It's working, except that every 24 hours the IP on the DNS record is changed to the IP of the owner node, and a few minutes later changed back to the IP address assigned to the CAP. I understand why the process occurs every 24 hours (DNS client registration runs every 24 hours), but not why it happens or what is actually going on.

To give some context, and hints on what I may have done wrong: this new name comes from a consolidated DFS (hosted on another server, now turned off, DNS records deleted), and before adding it as a CAP I tried:
 - Adding it as a CNAME record pointing to the (didn't work).
 - Setting DisableStrictNameChecking (didn't work).
 - Adding a new Service Principal Name (didn't work).

I've since undone all of that and added the new name as a CAP. As I said, it's working except for that issue. I've searched the registry for keys with that name, and the only one I can find is related to the cluster service (which is expected).
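
In case it's useful, a sketch of how to inspect the CAP's Network Name resource settings that influence DNS registration (the resource name is a placeholder):

# 'CAP Network Name' is a placeholder for the client access point's Network Name resource
Get-ClusterResource 'CAP Network Name' | Get-ClusterParameter
# Parameters such as HostRecordTTL and RegisterAllProvidersIP affect how the name is registered in DNS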

Any guesses or hints? Thanks!

High Availability Cluster without Shared Storage


Hi Experts, 

I've been doing some research on how to achieve this goal and what's the best practice.

We are planning to set up a high availability cluster for our server running the following services:

  1. Active Directory
  2. DNS and DHCP
  3. File Server

Currently we have one fully operational Windows Server 2016 machine running on a Dell R530. Since we have two of these Dell servers, we want to configure an HA cluster for downtime protection. We want to set it up so that we have a main server doing all the workload and a backup server that takes over without downtime if the main one fails.

But most of the references I found for this goal involve shared storage. What I want to know is:

  • Why is shared storage recommended?
  • Is it possible to configure HA without shared storage?
  • If possible, what are the risks of not having shared storage?

Thank you in advance experts.


File Server Clustering between Two Domain Controllers


Hi all,

Is it possible to cluster a file server between two Active Directory domain controllers?

As of now our server is still standalone. We will soon add another domain controller to our domain for fault tolerance in case the server fails.

Our current server runs the following services, which we want to make redundant; that's why we want to add a new server.

  • Active Directory
  • DNS
  • DHCP
  • File Server

From my research, Active Directory and DNS high availability will be achieved once we add another domain controller to our current domain, and for DHCP there is the DHCP Failover feature.

But regarding File Server, I haven't found any clear ideas on how to achieve this.

Thanks in advance for your advice.

Error with cluster-aware updating

I currently have one cluster with two nodes. I manually apply updates to the nodes with Cluster-Aware Updating. The last several updates have gone fine on node 1, but I get a "partially failed" error on node 2. The description is "Node "xxx" failed to leave maintenance mode". The node is up and running, and all VMs hosted on it are fine as well. Does anyone have any idea why I'm getting this error and how to resolve it? Nothing has changed on the cluster or the nodes that I'm aware of.
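
A diagnostic sketch that may help narrow it down (cluster and node names are placeholders):

# Per-node results of the last CAU run
Get-CauReport -ClusterName 'Cluster1' -Last -Detailed
# Check whether node 2 is still Paused after the run
Get-ClusterNode -Cluster 'Cluster1'
# If it is, resume it manually so it leaves maintenance mode
Resume-ClusterNode -Name 'Node2' -Cluster 'Cluster1' -Failback Immediate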

Why is clustering domain controllers a bad approach?


Hi Experts,

I would like to ask for your insights on why it is bad to cluster domain controllers and what the risks are. I have read some forums and pages about this, but I can't seem to get a clear picture of it.

I understand that a DC doesn't need to be clustered for a failover environment, but what if there are services in our environment that need to be clustered, such as a file server?

Thanks in advance.

Microsoft NLB not working correctly


Hi,

I have set up an NLB cluster containing 2 servers.

As an example:

cluster called: corpweb (ip:10.0.0.3)

servers called corpweb01 (ip 10.0.0.1), corpweb02 (ip 10.0.0.2)

All three are registered in DNS. If I browse to corpweb, I get to the user GUI as expected. However, if I stop the host corpweb01 in the NLB cluster, I am still able to reach that server using the NLB name. How come? I am not able to reach the server corpweb02, even though it shows green in the NLB GUI. So the cluster solution doesn't seem to work at all, other than letting me use the cluster's DNS name to reach server 1. Any suggestion here would be appreciated.
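
A quick state-check sketch, run on one of the NLB hosts, to confirm what NLB itself thinks is going on (it uses only the names from the example above):

Import-Module NetworkLoadBalancingClusters
Get-NlbCluster          # cluster IP and operation mode (unicast/multicast)
Get-NlbClusterNode      # per-host state: Stopped vs Converged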

[SOLVED] Disk Manager Hanging on Clustered (iSCSI) hosts


Just wanted to let you know a recent problem and fix I was having recently.

Server 2016, Hyper-V hosts, clustered, iSCSI storage

Disk Manager would hang when I opened it. Also, I couldn't "connect" to any VM via Failover Cluster Manager or Hyper-V Manager. If I rebooted both nodes in the 2-node cluster, it would work as expected for a short time. I have 8 of these 2-node clusters, and it was happening on all 8.

After working with MS support, we were able to determine that it was coming from our Avocent IP KVM. The Avocent has a Virtual Disk capability (basically, it has the option to mount a disk via the USB port through the KVM). There was a recursive call in the Virtual Disk Service for this VD. I was able to disable the VD via the KVM management console to fix the problem.

Hyper-V Stretched Cluster with Storage Replica, Windows Server 2016 Node Error


We have four HPE nodes in a Hyper-V stretched cluster with two HPE 2040 storage arrays.

One of the nodes has repeated the error below twice in 9 days. This disturbs the whole cluster, bringing the volumes online and offline and pausing the VMs.

Error      3/8/2019 9:18:19 AM     FailoverClustering           1230       Resource Control Manager

A component on the server did not respond in a timely fashion. This caused the cluster resource 'Virtual Machine XXXXX' (resource type 'Virtual Machine', DLL 'vmclusres.dll') to exceed its time-out threshold. As part of cluster health detection, recovery actions will be taken. The cluster will try to automatically recover by terminating and restarting the Resource Hosting Subsystem (RHS) process that is running this resource. Verify that the underlying infrastructure (such as storage, networking, or services) that are associated with the resource are functioning correctly.

The same error was repeated on 03/17/2019 at 3:08 PM.

Adding new storage to existing cluster


I currently have a Dell VRTX running two blades (Server 2012 R2 on each). The blades are set up for failover clustering and share storage. I have Hyper-V installed on the VRTX. I'm quickly running out of storage and need to add some new hard drives to the shared storage. I've been told that this will "break" my cluster and I will have to rebuild everything from scratch.

Can anyone give me some insight on this?  I was hoping it would be as easy as popping the new drives in and allocating them to the shared storage.  Thanks in advance.


Task or Job Scheduling with a Windows Failover Cluster


I need to create a 2-node cluster where the cluster can use Windows Task Scheduler. If one node goes offline, the jobs should still be able to run on the network.

It is a Windows Server 2016 Standard environment.

I have configured a two-node failover cluster but am not sure what role should be used for this purpose. Any help would be appreciated.
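
One possibility, sketched with placeholder script path, task name and schedule, is a clustered scheduled task registered with the AnyNode type so the task is scheduled on a single available node rather than tied to one server:

# Placeholder script path, task name and schedule
$action  = New-ScheduledTaskAction -Execute 'PowerShell.exe' -Argument '-File C:\Scripts\NightlyJob.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ClusteredScheduledTask -TaskName 'NightlyJob' -TaskType AnyNode -Action $action -Trigger $trigger

TaskType can also be ClusterWide (runs on all nodes) or ResourceSpecific (follows a particular cluster resource).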

DFS share on Cluster


Hi guys,

Is it possible to create a DFS share on a clustered disk?

In my lab I have 2 nodes, which are in a cluster called Cluster01. Both nodes have a drive (E:) connected via iSCSI. On Cluster01 a File Server cluster role is created (File is the name of the role). On this role a share called HomeDir (\\File\HomeDir) is created. The target of this share is 'E:\HomeDir'.

And now that I have created the share, why can't I create a DFS folder pointing to it? The namespace is hosted on domain controller DC-01. Why, when I create a DFS folder 'Test', can I not add the HomeDir share as a folder target?

When I destroy the cluster, everything works like a charm. Does that mean a DFS folder target cannot be created on a clustered disk? Or is it because the share is based on a clustered role?

The specified disk or volume is managed by the Microsoft Failover Clustering component (Server 2012)


In Disk Management a 4 TB disk is showing as Reserved. When I try to bring it online I get the error: "The specified disk or volume is managed by the Microsoft Failover Clustering component. The disk must be in cluster maintenance mode and the cluster resource status must be online to perform this operation."

At present we don't have a cluster; Failover Cluster Manager doesn't even show any nodes. How can I bring the 4 TB disk online with its data intact? Please help.
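
If the disk is being held by a stale SCSI persistent reservation left over from an old cluster, one thing that is sometimes suggested is clearing that reservation. This requires the Failover Clustering feature to be installed, and the disk number below is a placeholder; check it in Disk Management or Get-Disk first:

# Disk number 4 is a placeholder taken from Disk Management
Clear-ClusterDiskReservation -Disk 4 -Force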

Regards

Malli86

S2D pool still has free space, but cannot create more vDisks...


Hi

We have a 6-node HCI S2D cluster with SSD cache and HDD capacity disks. The pool already has 10 LUNs (virtual disks) and still has 40 TB of free space. All these LUNs are 3-way mirror, and they are all 4 TB. In the past, when I created another 4 TB LUN, I knew the footprint would be 3 times as high, so 12 TB, and I would see the pool's free space shrink by 12 TB.

So now I have 40 TB free, which means I should have at least 3 more LUNs to go before I empty out my pool (3 x 12 = 36, so 4 TB left over).

But something weird happens. When I create a 4 TB LUN (virtual disk), I get an error saying the new virtual disk cannot be created.

My first reaction was that I might have some storage jobs still running in the background. I checked, but there were none.
Next I thought I might have a big imbalance of data blocks across my nodes. I used PrettyPool.ps1 and it looked pretty enough to me, but I decided to optimize the pool anyway. After that finished, I retried, but got the same error.

Next I created a 2 TB LUN instead. This worked. Then I created another 2 TB LUN, which also worked. Hang on, since when does 2 + 2 no longer make 4?

So I was thinking columns, fault domains, etc. But I never changed any of those. The fault domain is at the default setting of Node, and I never change any other settings for 3-way mirror either.

When creating a disk, you see both storage tiers and they display their individual remaining footprint sizes, which make up that 40 TB of free pool space. (The screenshot with the yellow highlight isn't included here; it showed 13.382 TB of free space left for 3-way mirror disks.)

This is my first time creating more than 10 LUNs (virtual disks) on a pool. Is there some limit? And even then, why can I not create one 4 TB disk, while at the same time I can create two 2 TB disks?

The pool used to have dual-parity disks, and even a hybrid volume with mirror-accelerated parity. Those are deleted now, and the space has been reclaimed.
I'm also a couple of months (about 4) behind on Windows updates, but I don't remember reading any release notes with a fix for this.

Or am I missing something about the way S2D works ?
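
In case it's useful for comparison, a sketch of how the maximum creatable size per tier can be queried (the tier name is a placeholder; Get-StorageTier lists the actual names):

# List the tiers defined on the pool, then ask what size they can still support
Get-StorageTier | Select-Object FriendlyName, ResiliencySettingName, PhysicalDiskRedundancy
Get-StorageTierSupportedSize -FriendlyName 'Capacity' | Format-List TierSizeMin, TierSizeMax, TierSizeDivisor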

Any help is welcome.

How to get a clustered MSDTC in Azure on Windows Server 2019 with S2D to function


I've been pulling my hair out for the last few days trying to get an FCI working with SQL Server and MSDTC; no matter what I try, I can't get the MSDTC role to come online.

I've followed the instructions here: https://www.ryanjadams.com/2018/07/sql-server-failover-cluster-instance-azure-msdtc/

Current setup:
- Domain-joined servers (ADDS)
- Two CSVs (one for SQL, one for DTC) running on S2D (2 data disks per server = 4 disks in total)
- Windows Server 2019 Datacenter with latest updates applied
- SQL Server 2016 SP2 and latest CU
- Standard IPs (one NIC per server) and a Standard load balancer for the VMs
- Cloud witness for quorum
- Static IP addresses for all servers and the FCI (cluster and roles)
- Pre-staged DNS records allowing authenticated users to update the records
- Running install/configuration with a domain admin account
- Created the failover cluster using ManagementPointNetworkType = Singleton
- Allowed DTC, COM+ Network Access, Network Discovery, File and Printer Sharing plus the SQL port in the firewall
- Configured DTC to allow all options and tested different auth levels

All other settings were made according to the video Ryan posted, but as soon as I set the static IP for the DTC role, the resource fails and I can't get it started.
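
For what it's worth, one step that is easy to miss in Azure is that a clustered role's IP Address resource usually needs a ProbePort matching the load balancer's health probe, since Azure won't answer for the floating IP by itself. A rough sketch with placeholder resource names, addresses and probe port:

# 'IP Address 10.0.0.10' and the values below are placeholders for the DTC role's IP resource
Get-ClusterResource 'IP Address 10.0.0.10' | Set-ClusterParameter -Multiple @{
    'Address'    = '10.0.0.10'          # frontend IP of the internal load balancer
    'SubnetMask' = '255.255.255.255'
    'ProbePort'  = 59999                # must match the Azure LB health probe port
    'EnableDhcp' = 0
}
# Restart the role so the new parameters take effect ('DTC Role' is a placeholder group name)
Stop-ClusterGroup 'DTC Role'; Start-ClusterGroup 'DTC Role'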

Anyone know what I'm missing? Has anyone else successfully configured a similar setup in Azure?

regards
/andreas
