Channel: High Availability (Clustering) forum

RegisterAllProvidersIP


My cluster has a network name with 2 IP addresses.

By default, RegisterAllProvidersIP is 1 when the cluster is set up, but I only see 1 IP address in my DNS records.

I am not sure whether this is a security problem or a network problem. Does the 2nd IP address also have to be registered in DNS?
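For reference, a network name's registration behavior can be inspected and its DNS records refreshed from PowerShell; a sketch, assuming the network name resource is called "MyListener" (a placeholder):

```powershell
# Check how the network name resource registers its IPs.
Get-ClusterResource "MyListener" |
    Get-ClusterParameter RegisterAllProvidersIP, HostRecordTTL

# Force the network name to re-register its DNS records.
Get-ClusterResource "MyListener" | Update-ClusterNetworkNameResource
```

With RegisterAllProvidersIP = 1, all provider IPs should be registered; if only one shows up, a common first thing to check is whether the cluster's computer object has permission to update the records in the DNS zone.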


Thoughts on best practices for clustering a file server with Hyper-V


We have a new Hyper-V failover cluster with several VMs. We also have a Server 2012 R2 file server failover cluster that we want to transition over to Hyper-V. Our new SAN storage is connected via 10Gb iSCSI and is where all of the VMs and the file server storage will reside. My question is: which way is better / makes more sense?

1. Set up two failover cluster nodes as VMs in Hyper-V. Should we use a physical (pass-through) disk connection or cluster shared storage? Is this adding unnecessary complication with the additional machines and layers?

2. Add the file server as another clustered role on the Hyper-V cluster. Should we use a cluster disk or cluster shared storage? Is this a security risk, having this clustered on the same failover cluster as all of the Hyper-V machines?

Thanks in advance for any thoughts/best practices for this.

Three Node Windows Cluster


We have a 3-node Windows 2012 R2 cluster with a file share witness (assume the file share is not located on any of these 3 nodes).

What happens if 2 nodes go down?

Is it necessary to force quorum on the 3rd node?
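If the cluster does go down after losing two nodes at once, it can be started on the surviving node by forcing quorum; a sketch (the node name is a placeholder):

```powershell
# On the surviving node, start the cluster service with forced quorum.
# Only do this when you are sure the two down nodes cannot come back
# and form a competing cluster on their own.
Net Start ClusSvc /ForceQuorum

# Or the PowerShell equivalent:
Start-ClusterNode -Name "Node3" -ForceQuorum
```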

Creating a new Hyper-V Replica Broker on a 2-node cluster crashes the cluster resource host and the role fails to start


Hello,

We have a domainless 2-node Windows 2016 cluster. This was set up for SQL Server availability groups and works fine for that.

I have Hyper-V installed on both nodes and want to replicate the VM guests from one server to the other. I read that I need to add the Hyper-V Replica Broker role to the cluster before configuring the VM guests for replication.

Every time I create the new role, RHS.exe (the cluster Resource Hosting Subsystem) crashes and the role fails to start.

The crashes also affect our SQL Server availability groups.

I have looked at the cluster logs, but there doesn't seem to be an error reason.

Can anyone help? (I don't even know what to post to help find out what the problem is.)

Both servers are up to date with Windows updates. It's Windows 2016, no domain.

Any and all help would be great.
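One way to gather something postable is to collect the cluster debug log and the crash events for RHS.exe; a sketch of the usual collection commands:

```powershell
# Generate the cluster debug log from every node, with local time stamps.
Get-ClusterLog -Destination C:\Temp -UseLocalTime

# Recent failover-clustering errors from the System log.
Get-WinEvent -FilterHashtable @{
    LogName      = 'System'
    ProviderName = 'Microsoft-Windows-FailoverClustering'
    Level        = 2   # Error
} -MaxEvents 20

# Application-crash records (event 1000) mentioning RHS.
Get-WinEvent -FilterHashtable @{
    LogName      = 'Application'
    ProviderName = 'Application Error'
} -MaxEvents 50 | Where-Object { $_.Message -match 'RHS' }
```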

Thanks

Greg 

SMBWitnessClient EventID 8 - Failed to register from Trusted Domain


Hi there!

I am getting errors every 30 seconds on machines that try to connect over SMB to a failover cluster in a trusted domain.

Event ID 8

Error details: Witness Client failed to register with Witness Server TestSRV02 for notification on NetName \\TestSrv with error (The parameter is incorrect.)

I know that to connect to the trusted domain I need to use the full FQDN, but since the server requests the list of witness servers from the failover cluster, the list seems to be returned without FQDNs, so my server cannot connect.
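For diagnosis, the witness registrations the cluster currently holds can be listed from a cluster node; a sketch using the SMB witness cmdlets (the client name is a placeholder):

```powershell
# On a cluster node: list current witness registrations, including which
# client names and netnames are registered and against which node.
Get-SmbWitnessClient

# A registration can also be moved to another node manually.
Move-SmbWitnessClient -ClientName "APPSRV01" -DestinationNode "TestSRV01"
```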


MCSE: Server Infrastructure

Storage Spaces Direct - No disks with supported bus types found to be used for S2D


Hello,

I am trying to set up a 3-node Windows cluster to take advantage of the SQL Server Always On failover feature.

I have 3 VMs running on VMware inside my company's datacenter (not Azure), each with Windows Server 2016 Datacenter installed. I can create the cluster with those 3 nodes; they are not joined to any Active Directory domain (DNS only). I want to use Storage Spaces Direct as shared storage, and this is where I am stuck.

On each of the 3 nodes, I have 4 disks. From the Get-PhysicalDisk output, MediaType is SSD and BusType is SAS for all disks. I have one disk as the boot volume, one disk to store various files, and 2 disks with an unused partition. These last 2 disks are the ones I want to use with S2D, and they are marked as CanPool=True.
When I run Get-PhysicalDisk from the first node, the disks showing up in the list are: the boot disk and the file disk from node 1, and 6 poolable disks (2 disks from each of the 3 nodes).

From the S2D validation report (launched from Failover Cluster Manager), the 6 poolable disks are marked as "eligible for validation = True" with these characteristics:
Disk partition style is MBR. Disk has an Unused Partition. Disk type is BASIC.

while the other disks (boot volume and file disk) report a warning (I am not sure whether it is a problem preventing S2D from being enabled):
Failed to get SCSI page 83h VPD descriptors for physical disk 0.

and have those characteristics :

Disk 1 : Disk is a boot volume. Disk is a system volume. Disk is used for paging files. Disk partition style is MBR. Disk has an Unused Partition. Disk has an IFS Partition. Cannot cluster a disk with an IFS Partition. Disk type is BASIC. The required inquiry data (SCSI page 83h VPD descriptor) was reported as not being supported.

Disk 2 : 
 Disk partition style is MBR. Disk has an Unused Partition. Disk has an IFS Partition. Cannot cluster a disk with an IFS Partition. Disk type is BASIC. The required inquiry data (SCSI page 83h VPD descriptor) was reported as not being supported.  


When I try to enable S2D from a PowerShell command prompt, I receive an error saying "No disks with supported bus types found to be used for S2D", even though the bus type is SAS.
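Since the validation report already flags the missing SCSI page 83h VPD descriptor (the unique disk ID that S2D relies on, and which VMware virtual disks do not always expose), one way to see exactly what the storage stack reports per disk, and why a disk is rejected for pooling, is:

```powershell
# What the storage subsystem reports for each disk; S2D accepts only
# certain bus types (SAS, SATA, NVMe, and SCSI in guest scenarios).
Get-PhysicalDisk | Sort-Object FriendlyName |
    Format-Table FriendlyName, MediaType, BusType, CanPool, Size -AutoSize

# For disks that cannot be pooled, show the reported reason.
Get-PhysicalDisk | Where-Object CanPool -eq $false |
    Select-Object FriendlyName, CannotPoolReason
```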


I am not an expert at managing servers, and I may have overlooked something during the setup. If more information on my setup is needed, I can provide it to the best of my knowledge. I wanted to include some screenshots, but since my account is not verified yet that was impossible, so I have put in as many details as I could.

Thank you for any advice provided.

Record Hyper-V guest parent


In order to satisfy server licensing on our 4 node Windows Server 2012R2 cluster, I need to keep 90 days worth of logs that show which guest vm is hosted by which host.

Is there any way to accurately record this? I tried Get-ClusterLog, but it doesn't seem to show VM affinity, only CSV affinity.
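One approach, sketched below under the assumption that a daily scheduled task is acceptable: snapshot the owner node of each VM group to a dated CSV, and prune files older than 90 days (cluster name and paths are placeholders):

```powershell
# Snapshot which node owns each clustered VM right now.
$date = Get-Date -Format 'yyyy-MM-dd'
Get-ClusterGroup -Cluster "MyCluster" |
    Where-Object GroupType -eq 'VirtualMachine' |
    Select-Object @{ n = 'Date'; e = { $date } }, Name, OwnerNode |
    Export-Csv "C:\Logs\vm-owners-$date.csv" -NoTypeInformation

# Keep only the last 90 days of snapshots.
Get-ChildItem C:\Logs\vm-owners-*.csv |
    Where-Object LastWriteTime -lt (Get-Date).AddDays(-90) |
    Remove-Item
```

Running this more than once a day narrows the window in which an unlogged migration could occur between snapshots.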

Thanks in advance,

Matt

add node to 2016 cluster no longer has validation option


Using RSAT on a 2016 GUI server to remotely administer a 2016 cluster (still running cluster functional level 8) to add the last 2016 node to it before upgrading to level 9.

Going through the Add Node wizard in Failover Cluster Manager, there no longer seems to be an option to run cluster validation, unlike the 2012 R2 Failover Cluster Manager. Is this by design? I know I can run validation after the node is added, but I would prefer to do it beforehand as well.
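As a workaround, validation can be run from PowerShell before the node joins; a sketch with placeholder node and cluster names:

```powershell
# Validate the prospective node alongside the current members, skipping
# the storage tests if shared storage is already in production use.
Test-Cluster -Node Node1, Node2, NewNode `
    -Include "Inventory", "Network", "System Configuration"

# Then add the node once the report is clean.
Add-ClusterNode -Name NewNode -Cluster MyCluster
```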


Building a Two-Node Failover Cluster


I have an issue when I try to create a two-node failover cluster. I get these messages:

    Node SJEDITB41606.corp.sva.com successfully issued call to Persistent Reservation RESERVE for Test Disk 0 which is currently reserved by node SJEDITB41607.corp.sva.com. This call is expected to fail.

    Test Disk 0 does not provide Persistent Reservations support for the mechanisms used by failover clusters. Some storage devices require specific firmware versions or settings to function properly with failover clusters. Please contact your storage administrator or storage vendor to check the configuration of the storage to allow it to function properly with failover clusters.



Storage Replica on Windows Server 2019 Standard: replication is created but this error occurs


The replication is created, but this error occurs:

New-SRPartnership : Unable to synchronize replication group rgteste2, detailed reason: Cannot update state for replication group rgteste2 in the Storage Replica driver.

At line:1 char:1
+ New-SRPartnership -SourceComputerName SR1 -SourceRGName rgteste1 -Sou ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (MSFT_WvrAdminTasks:root/Microsoft/...T_WvrAdminTasks) [New-SRPartnership], CimException
    + FullyQualifiedErrorId : Windows System Error 1395,New-SRPartnership

Can anyone tell me why this error occurs?
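One thing worth checking before retrying: Windows Server Standard edition limits Storage Replica to a single volume of up to 2 TB, and Test-SRTopology will surface any requirements that are not met. A sketch with placeholder computer names, volumes, and paths:

```powershell
# Validate the replication topology and produce an HTML report; the
# report typically names the exact blocking requirement.
Test-SRTopology -SourceComputerName SR1 -SourceVolumeName D: `
    -SourceLogVolumeName L: `
    -DestinationComputerName SR2 -DestinationVolumeName D: `
    -DestinationLogVolumeName L: `
    -DurationInMinutes 5 -ResultPath C:\Temp
```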


Att. Gabriel Luiz

Network Load balancing setup


We are in the process of setting up web servers that will use NLB to share the load.

Two of the servers sit on our internal network and are set up in their own NLB cluster.

The NLB cluster is set to unicast.

Each server has one network adapter. These adapters have been added to the cluster. 

Each server needs to be able to communicate with an SQL server. At this point only one server in the cluster can do so.

If I switch the cluster to multicast each server can communicate as expected individually. However we are unable to communicate with the cluster interface IP thus rendering the cluster ineffective. 

We have two more servers that sit in our DMZ that are on their own NLB cluster as well.

Their NLB cluster is setup the same as the internal cluster.

The DMZ servers need to communicate with the same SQL server as the internal servers. 

We are having the same issue with the DMZ servers.

Any guidance would be appreciated.

Most gentle and proper way to reboot a cluster node?


In a 2012 R2 hyper-v cluster, what are all the different ways to reboot a cluster node and what are the risks factors in each method?

For example, some techs may simply go to the start menu on the node they want to reboot and click Restart, whereas others go through the extra steps of going into Failover cluster manager and doing a "drain and pause" on that host before restarting it.

Are there performance impacts to the various ways of doing it?

Are there stability impacts long term?

Is there a "perfect" way of doing this?
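The "drain and pause" route can be scripted so the steps are done the same way every time; a sketch (the node name is a placeholder):

```powershell
# Move all roles off the node gracefully and wait for the moves to finish.
Suspend-ClusterNode -Name Node1 -Drain -Wait

# Reboot the drained node (or restart locally from the node itself).
Restart-Computer -ComputerName Node1

# Once it is back, resume the node; -Failback Immediate returns the
# roles that were drained off. Omit it to leave them where they are.
Resume-ClusterNode -Name Node1 -Failback Immediate
```

The main risk with a bare Restart is that running roles fail over abruptly instead of being live-migrated off first, which is what the drain avoids.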


Failover

I have never written a PowerShell script. I need a script to point the software at a second server if the first fails. I have to be making this way too hard.
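A minimal sketch of the idea, assuming "reachable on port 443" is a good-enough health check; the server names and port are placeholders:

```powershell
# Pick the first server that answers on the given port.
$servers = 'APPSRV01', 'APPSRV02'
$target  = $servers | Where-Object {
    Test-NetConnection -ComputerName $_ -Port 443 -InformationLevel Quiet
} | Select-Object -First 1

if ($target) {
    Write-Output "Using server: $target"
    # ... point the software at $target here ...
} else {
    Write-Error "No server is reachable."
}
```

For anything beyond a quick check, a proper failover cluster or load balancer is usually the better tool than a hand-rolled script.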

Move Virtual Machine Storage stuck in Loading state: Windows Server 2016 | Workgroup | StarWind VSAN


Hi all,

I've built a 2-node Hyper-V failover cluster using StarWind VSAN Free, but when I try to move virtual machine storage to a CSV, the Move Virtual Machine Storage screen gets stuck saying "Loading". The cluster is configured in a workgroup without a domain, and I've added a DNS suffix for both nodes.

How do you backup data on a CSV


We currently have 2 servers in a cluster using a Cluster Shared Volume, specifically with the File Server role. The point of this is to have a storage location for important data that must be kept online. We are not using it for hosting Hyper-V VMs; it is solely a file server. Before anyone responds with "why not just use DFS": believe me, we have tried that route and it does not work well for our scenario.

The issue now is: how do I back these servers up? We have Veeam for our VM backups, but it says that CSVs are not supported and are skipped during the backup. According to Veeam, this is a limitation of the product and not of CSVs themselves.

This may be a simple question but I have not been able to find a good solution.  There has to be a way to back this data up while doing server backups.  

Anyone have a solution?


SOFS and load balancing


Hey

Just created a cluster running Scale-Out File Server.

I need an active/active system due to the many connections.

When looking at the open files, it seems users only connect to one server in the cluster. (I have enabled continuous availability.)

Why?
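One thing to know: SOFS distributes clients across nodes when they connect rather than rebalancing existing sessions, so with few clients (or everyone connecting at the same time through one name) the spread can look lopsided. The current assignments can be inspected and moved manually; a sketch with a placeholder client name:

```powershell
# On a cluster node: show which node each SMB client is registered against.
Get-SmbWitnessClient

# Move a specific client's session to another node.
Move-SmbWitnessClient -ClientName "CLIENT01" -DestinationNode "Node2"
```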

Mike


Sysadm

VMs Failing to Automatically Migrate

I come in every morning to find a handful of my VMs indicating "Live Migration was canceled." This seems to be happening around 12:00 - 1:00 AM, but I can't find anything configured to tell it to migrate, so I'm not sure why it is happening to begin with.

The event logs are not helpful. Cluster Event ID 1155: "The pending move for the role 'server name' did not complete." The Hyper-V-High-Availability log shows Event ID 21150, "'Virtual Machine Cluster WMI' successfully taken the cluster WMI provider offline," right before Event ID 21111, "Live migration of 'VM Instance Name' failed." It is typically the same VMs, but not always. I see the error on both nodes (2-node cluster, 2 CSVs). The Hyper-V-VMMS log shows 1940, "The WMI provider 'VmmsWmiInstanceAndMethodProvider' has shut down," then 20413, "The Virtual Machine Management service initiated the live migration of virtual machine 'VM Name' to destination host 'Other Node' (VMID)," for each of the VMs running on that node. Some are successful, but a few get 21014, "Virtual machine migration for 'VM Name' was not finished because the operation was canceled. (Virtual machine ID)," and finally 21024, "Virtual machine migration operation for 'VM Name' failed at migration source 'Host Name'. (Virtual machine ID)."

I can manually live migrate all VMs back and forth all day. I have plenty of resources on both nodes (RAM & CPU), and I have turned off the Hyper-V cluster balancer that automatically moves machines. We used to have SCVMM installed, but it was overkill for our small environment, so it was decommissioned. The cluster is not configured with CAU.

While I would like to resolve the failures, I would be happy just knowing what is causing the VMs to migrate in the first place, since it isn't necessary for them to do this every night. Any guidance would be greatly appreciated!!

Hyper-V VMMS service stuck


Hi,

this is the second time we have encountered this issue this year; it also happened some months ago. Suddenly, on a host (it can be any host in the cluster), the VMMS service stops responding. You cannot do anything with the VMs anymore. The ones that are running will keep running, but as soon as you try to reboot a VM (a proper reboot from inside the VM), it gets stuck in the stopping state. Live migrations to other hosts also fail. Last time we forced a reboot of the host, but unfortunately that gave us quite a few headaches with corrupt VHDX files.

After googling a lot, I thought I'd give the TechNet forum a shot.

Any advice? I have an MS case open but, as usual, it takes forever to get some kind of help.

thanks in advance!

regards,

Jeroen

Windows 2016 cluster quorum type


I changed the cluster witness to a file share witness, but the quorum type still shows "Majority", not "Node and File Share Majority". Why?
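The configured quorum can be verified and reset from PowerShell; a sketch (the share path is a placeholder). Note that Windows Server 2016 uses dynamic quorum by default, so the displayed quorum type may not match the older 2012-era names even when the witness is working:

```powershell
# Show the current quorum configuration and witness resource.
Get-ClusterQuorum

# Explicitly (re)configure a file share witness.
Set-ClusterQuorum -FileShareWitness '\\fileserver\witness'
```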

S2D in a lab - questions about node failure.....


Set up a 2 node S2D cluster with nested Hyper-V and was doing a few tests. Live and quick migration work fine, but if I 'pull the power' on one of the S2D nodes (to simulate a node failure), the machine on that node never migrates. In Failover Cluster Manager the role shows as 'Unmonitored' and the VM is dead in the water. I do have a file share witness on a machine not impacted by my testing.

I would think that if I pulled the power on a host, the VMs would 'figure out' that the node is offline and the other node would pick up the load.

Did I miss a configuration step somewhere?

[EDIT] After a few minutes the machine came back online, but as if it had been restarted. Is that expected behavior? I was hoping it would be faster, or that the machine wouldn't be reset. But I may need to adjust my expectations!
