Re: SnapMirror Bi-directional possible?
You can also look at something like DFS or a global namespace like Panzura. I believe they can do multiple copies but have some hardware requirements.
Max volume count per node
Hey there, does anyone know what the max volume count per node setting on the FAS systems really implies? Is there any chance th
RE: Max volume count per node
There is no way to raise it. It's set to the limit based on the max load a node can carry in the event of a failover and is hard coded. There are disc
AW: Max volume count per node
Alright, thanks - Maybe I can find a way to work around the need for distinct volumes by using qtrees or something like that for this specific applica
Re: Max volume count per node
Generally, this is the best practice. If it is for a file share, use qtrees. There's no elegant solution if this is for block LUNs, though. On Thu, J
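For illustration only (the SVM, volume and share names below are made up, not taken from the thread), the qtree approach would look roughly like this in the clustered ONTAP CLI, assuming the volume is junctioned at /vol_shared:

    cluster1::> volume qtree create -vserver svm1 -volume vol_shared -qtree app01 -security-style ntfs
    cluster1::> volume qtree create -vserver svm1 -volume vol_shared -qtree app02 -security-style ntfs
    cluster1::> vserver cifs share create -vserver svm1 -share-name app01 -path /vol_shared/app01

Each share then maps to a qtree inside one volume rather than to its own volume, which keeps the per-node volume count down.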
AW: Max volume count per node
There are several drawbacks to using qtrees, which is why we set up separate volumes here. For example, if you have multiple storage tiers in a system an
mirror one volume read/write across snapmirror?
>>>>> "Scott" == Scott M Gelb via Toasters <toasters@teaparty.net> writes: Scott> From: Scott M Gelb <scottgelb@yahoo.com> Scott> To: "toasters@teapa
RE: SnapMirror Bi-directional possible?
>>>>> "Eric" == Eric Peng <epeng@esri.com> writes: Eric> Thanks all, for confirming that bi-directional SnapMirror on the Eric> *same* volume is not
Retrieving lock status in cDOT (CIFS/NFSv4)
We implemented simple file lock monitoring on our 7-mode filers: a script runs every 5 minutes, retrieving output from the lock status -n command from all
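For reference (not from the thread itself): in clustered ONTAP the rough equivalent of lock status -n is the vserver locks show command, e.g.

    cluster1::> vserver locks show
    cluster1::> vserver locks show -instance

with -fields / -instance available to narrow or expand the output. The exact columns vary by release, so treat this only as a starting point for the same kind of polling.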
Re: Retrieving lock status in cDOT (CIFS/NFSv4)
What are you doing with the lock statuses? What are you ultimately trying to do? On Sun, Aug 5, 2018 at 11:01 PM Jacek <janosldx@gazeta.pl> wrote: >
AW: wafl.vol.walloc.rsv.failmount
Hey Mike, some interesting ideas out of this old post: https://community.netapp.com/t5/Simulator-Discussions/aggr0-full-on-7-3-4-sim/m-p/67205#M
volume move and SFO
Good morning. I have a four-node cluster; nodes 1/2 are SAS/SATA and nodes 3/4 are AFF. I have a long-running volume move going from node 1 to node 4.
Re: wafl.vol.walloc.rsv.failmount
Thanks for the replies on this one, all. The solution was to run wafliron on the aggregate, which very quickly brought it and its volumes back online
Re: volume move and SFO
Ian, just a suggestion (it's been a while, but I think this is how I removed the throttle in 9.1): volume move governor*> ? modify
Re: volume move and SFO
Hi Douglas, thanks for writing. If I am understanding that governor correctly, that is for the number of concurrent moves? In this specific instance,
Re: volume move and SFO
I was thinking throttle, but forgot the exact command. Yes, 400 MB/s is typically what I see with 10G. I'm thinking it'll make you cancel the vol move
Re: volume move and SFO
How long did it take to get to your 60% point that you reference? Some quick math says 30 TB at 400 MB/s should complete in about 22 hours. If you're
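For reference, the arithmetic behind that estimate (assuming binary TB):

    30 TB ≈ 30 × 1024 × 1024 MB ≈ 31,457,280 MB
    31,457,280 MB ÷ 400 MB/s ≈ 78,600 s ≈ 21.8 hours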
Re: volume move and SFO
Hi Mike, somewhat redacted output from a few minutes ago: MyCluster1::> volume move show -instance Vserver Name: mySvm
Re: volume move and SFO
Hi Ian, The good news is that, forgetting about what it is estimating, we've seen that in 24 hours 21 TB has been copied. Hopefully another 30
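Worked out the same way as the earlier 22-hour estimate, 21 TB in 24 hours comes to roughly

    21 × 1024 × 1024 MB ÷ 86,400 s ≈ 255 MB/s

i.e. well below the ~400 MB/s typically seen on 10G, which is consistent with the move taking longer than first estimated.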