Building a Server...Again


Twinpeaksr

Senior member
Aug 9, 2000
386
0
76
This is a good discussion; it gives me more detail about my options between KVM, ESXi, or neither.

I also agree, while 10G Ethernet would be cool, I think the infrastructure may be beyond my budget.

Still working through hardware options. I'm not sure if changing the MB makes sense; I may just use the LSI SATA/SAS controller with the Z77 Extreme.

Thanks!
 

Twinpeaksr

Senior member
Aug 9, 2000
386
0
76
So I have 8 Server Functions on this system:
- Web (Apache)
- Proxy
- Firewall (UFW)
- File Sharing (Samba)
- DHCP (DHCP3)
- DNS
- SQL (MySQL)
- Gateway/Router

I also have a need for a Windows 7 Home VM. If I use KVM/ESXi, how would I structure this? Would I use 2 VMs on it (Ubuntu and Windows 7)? Does the Windows 7 VM need to go under the Ubuntu one? Am I really gaining anything with KVM/ESXi on this setup?

The more I read and learn, the more a hypervisor makes sense on my backup server (Nemesis). This will be:
- File Server (Samba)
- Version Control (SVN or similar)
- Backup (Areca)

It will also be used for development and learning. It seems it may make more sense to use a hypervisor there, where I may be tweaking and changing things more often.

Right now KVM seems more in line with something I would use rather than ESXi, but I am still searching.

Thanks!
 

Knavish

Senior member
May 17, 2002
910
3
81
I second the opinions on getting a low level "server-grade" motherboard. It's not that much more expensive, especially considering the ~$2000 you are spending on hard drives. If you are going all the way with RAIDZ2, doesn't it make sense to protect your data with ECC as well?

Your Z77 motherboard is ~$125 vs. a (randomly picked) ASRock C216 motherboard for ~$190. 16GB of regular memory is around $140 vs. 16GB of unbuffered ECC for $160. Just add a Xeon E3 and you're good to go, for about a $100 total price premium.
 

Twinpeaksr

Senior member
Aug 9, 2000
386
0
76
I agree, but I already have the processor and would lose the backup hardware that is common with my current systems (the same MB and RAM would be in 3 systems, allowing quick swaps to keep the server up and redistribution of RAM as needed).

I am toying with a C216 board. I'm not sure if I want to do the Xeon or not; it would kill the processor upgrade for my main system. It is an option, but I am not sure whether it gets me any great benefit besides ECC.

While I like server-grade equipment (I ran it for several years), ECC will not gain me much since most of the storage is for media (read-only) and the major write cycles will be on my second server/Linux workstation.

Thanks for the ideas, I am still toying with that!
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
So I have 8 Server Functions on this system:
- Web (Apache)
- Proxy
- Firewall (UFW)
- File Sharing (Samba)
- DHCP (DHCP3)
- DNS
- SQL (MySQL)
- Gateway/Router

I also have a need for a Windows 7 Home VM. If I use KVM/ESXi, how would I structure this? Would I use 2 VMs on it (Ubuntu and Windows 7)? Does the Windows 7 VM need to go under the Ubuntu one? Am I really gaining anything with KVM/ESXi on this setup?

The more I read and learn, the more a hypervisor makes sense on my backup server (Nemesis). This will be:
- File Server (Samba)
- Version Control (SVN or similar)
- Backup (Areca)

It will also be used for development and learning. It seems it may make more sense to use a hypervisor there, where I may be tweaking and changing things more often.

Right now KVM seems more in line with something I would use rather than ESXi, but I am still searching.

Thanks!

I think that it makes more sense to use a hypervisor on your main server. The backup server is basically just serving files, which doesn't warrant a hypervisor nearly as much. With KVM, the hypervisor is really just the Linux kernel, so you're not having to install any special cut down OS.
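
As a quick sanity check that a given box can host KVM, a few well-known commands (assuming Ubuntu; kvm-ok comes from the cpu-checker package):

egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero means the CPU has VT-x/AMD-V
kvm-ok                               # confirms KVM acceleration can be used
lsmod | grep kvm                     # shows the kvm and kvm_intel/kvm_amd modules once loaded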

Anyway, I think it makes sense to group similar functions at a VM level. So you would have something like:

Routing VM (bridged to Internet and private network):
- Proxy
- Firewall (UFW)
- Gateway/Router

Core Network Services VM (bridged to private)
- DHCP (DHCP3)
- DNS

LAMP VM (bridged to private)
- Web (Apache)
- SQL (MySQL)

Windows VM
- Whatever Windows is useful for

For file sharing, you'll get best performance from running that right out of the host, but you could also make a VM for it.
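
For the "bridged to private" pieces, here is a minimal sketch of the host-side bridge, assuming Ubuntu with the bridge-utils package (interface names and addresses are placeholders, not from this thread):

# /etc/network/interfaces (fragment)
auto br0
iface br0 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    bridge_ports eth1   # the NIC facing the private network
    bridge_stp off
    bridge_fd 0

Each VM's virtual NIC then attaches to br0; the routing VM would additionally attach to a second bridge on the Internet-facing NIC.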
 

Twinpeaksr

Senior member
Aug 9, 2000
386
0
76
I think I am starting to understand the hypervisor thing! Thanks for the thoughts.

I am specifying two different setups, one with the interchangeable hardware, one with a Xeon/ECC setup.

With the Xeon setup, the open item I have is finding a 2-port SATA III card for the dual mirrored boot SSDs. Any recommendations on one that is Linux-compatible and stable? The MB I have found has 10 SATA III ports; I intend to use those for the RAID-Z2, but need 2 more for the boot drives.

The i3 setup would use the LSI SAS card for additional drives.

Thanks!
 

evilr00t

Member
Nov 5, 2013
29
8
81
The Core i3 also supports ECC if you don't want to spring for a Xeon, but it does lack VT-d. Keep that in mind.
 

Twinpeaksr

Senior member
Aug 9, 2000
386
0
76
I am leaning towards the Xeon over the i3 due to the support for VT-d and other virtualization technologies. I have not found a lot of evidence that ECC will buy me much, but I get a much faster processor, which should be better for the VMs, as well as 2x the cores and significantly more cache.

Now I need to identify the MB/controller combination that will work best at a reasonable cost. I find it odd that Intel does not have any LGA1150 ATX boards.

Thanks!
 

Twinpeaksr

Senior member
Aug 9, 2000
386
0
76
Well, this has been an informative 2 days!

After much thought, research and consideration, I think I have a final solution:

ASRock Q87WS-DL ATX Server Motherboard LGA 1150 Intel Q87
Intel Core i5-4570S Haswell 2.9GHz LGA 1150 65W
G.SKILL Ripjaws X Series 16GB (4 x 4GB) 240-Pin DDR3 SDRAM DDR3 1600
LSI Internal SATA/SAS 9211-4i 6Gb/s PCI-Express 2.0
10 Seagate Desktop HDD.15 ST4000DM000 4TB 64MB Cache (32TB RAIDZ2)
CORSAIR TX Series CMPSU-650TX 650W
Cooler Master HAF 912 Mid Tower Computer Case
Cooler Master 4-in-3 Device Module Cage
3ware .5M Internal (SFF-8087) Serial ATA Breakout Cable
Nippon Labs SATA Adapter Molex 4-Pin PC power cable to 2 x SATA

This will be combined with 2 Virtex4 SSD in a Mirror for boot/VM.

Now the questions of why:
Why not Xeon Processor?
I looked extensively at the Xeon processor as well as ECC. What I found was that the only advantage of Xeon E3-1225V3 over the i5-4570S was ECC and Cache size. The disadvantages were considerable though:
- Cannot use the integrated graphics without a C226 chipset
- Higher power draw
- Limited selection of MB over standard Haswell Core processors

The biggest advantage of the E3-1225V3 over the i3-2120T I have was the speed and VT-x/VT-d extensions for virtualization. These are covered by the i5-4570S in a low power package and with more flexibility in chipsets (not limited to C222, C224, C226). Hence the i5 was the best choice for what I need.

What about ECC?
No doubt, ECC is a reliability improvement. But the improvement is minor when you consider the data out there. ECC memory can only correct 1-bit errors, and when you consider the number of errors that are generated elsewhere in the system, this becomes a trivial part. I have been running non-ECC memory in my current server (and its successor) for the better part of a decade, and I have not had any reliability issues on either of these systems since they launched in 2004. Can it happen? Yes. Will I lose data? Yes. But eliminating the hassle of the Xeon and its chipset/feature support simplified everything and netted a $100 system cost reduction, as well as avoiding several potential incompatibility headaches.

Why Q87 chipset?
This supports all of the virtualization extensions and gives a pseudo-server class board. Z87 and H87 do not support these features. Also the board has Dual NIC, which is a great addition since I need at least 2 NIC (though I may add a third).

Hypervisor?
Yes, mfenn talked me into it. I like the idea of separating the servers into different VM (yes, windows 7 VM as well) which allows me to solidify each function without affecting the others. I still have much research to do on this, but I suspect KVM may beat out ESXi for my application, ZFS being the contributing factor.

Other Comments?
A few other changes include going to the 4-port (1 SAS port) LSI card. Getting the right combination of SATA ports, processor, MB, and network cards was a hassle. In the end I could not find an MB that would give me 12 SATA ports and that I would have confidence in performance-wise. The ASRock Q87 board has eight 6Gb/s ports and the LSI adds another 4, giving the 12 that I need. Two MB ports will be for boot, the rest for the RAID-Z2 array.

I bumped the memory to 16GB, I figure with the VM scheme, this will be a good investment.

I changed the case. This one has a better ability to hold all of the drives, especially with the 2 x 2.5" drive cage it has. Combined with the 4-in-3 drive cage, everything should fit.

Looking at other SATA cards, there seems to be a lot of inconsistent support/success with different cards in Linux. LSI seems to be the best for compatibility, one less thing to hassle with on this build.

Overall, this seems to be the best compromise of compatibility, performance, and capability. I wanted to keep the build simple, having only the features I need and not the ones I don't, for higher reliability.

Feel free to continue to evaluate and discuss, I am always open to others thoughts, but based on the great feedback, I feel better about this setup than the previous.

Thanks!
 

Knavish

Senior member
May 17, 2002
910
3
81
Thanks for the summary. You make some good points. I've been thinking about going with a C216 motherboard, Haswell Pentium (supports ECC, VT-x, not VT-d). The discussion about using KVM might have inspired me to give it a try. I'm also debating as to whether running things like Samba and a LAN webserver in a VM would be a large time sink with no practical benefit...

It looks like you would run KVM something like this:
  • On base machine, run linux server OS of your choice. Configure the RAIDZ2 disk and coordinate its backups at this level. Run KVM here.
  • Within KVM, setup a linux server VM. Run Samba, httpd, etc. This VM will mount the RAIDZ2 share from the base server.
  • Within KVM, setup a second linux server VM. Run more services here (networking, etc.)

How is the network set up within the VMs? If the RAID disk is on the base system and Samba is in a VM, do you take a big performance hit because the computer hardware has to emulate a network layer just to ship bytes out of the box and onto the "real" network?

I guess if you have VT-d support and enough physical ethernet ports, you could allocate a network port to each VM. In this case, your RAIDZ2-to-Samba connection would get limited by a gigabit pipe even though they are in the same system...
 
Last edited:

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,886
3,231
126
Did you confirm with the guys that ESXi will work on a Z87?

The Z77 platform is at best (insert all the nasty rants one can think of)!!!!! when you try to load server OSes on it.
If ESXi will work without a hitch on it, then everything else will be smooth.

However, if I recall, no one will really attempt to install ESXi on a "Z" chipset due to compatibility issues.

I couldn't even get Windows Home Server 2011 to install properly on the Z77 platform due to missing drivers for some of the features on the board.


EDIT: Ahh, I see you changed to the Q87 platform... much better choice...
 
Last edited:

Twinpeaksr

Senior member
Aug 9, 2000
386
0
76
Yep, no ESXi and No Z87.

I am still working on the how for the Software side, Hypervisors are still new to me, and formulating a plan is taking some time.

In my research I did find SmartOS. Not sure if it would work, but may be a good base that I can then Add the 4 VM to.

Right now I have planned:
- VM1: Proxy/Gateway/Router/Firewall
- VM2: DHCP/DNS/Samba
- VM3: SQL/Web
- VM4: Windows 7

Sitting these on SmartOS which would handle the KVM and ZFS portion, sounds so simple ! Too bad it is software and that never works with simple...
 

Twinpeaksr

Senior member
Aug 9, 2000
386
0
76
Another thought I had is how to divide the storage. 4 VMs will take much more than the planned 128GB mirrored SSDs that I was going to use.

Options:
1. Put VM1, VM2 on SSD, VM3, VM4 on ZFS Array
2. Don't Mirror, VM1/VM2 on one SSD, VM3/VM4 on other
3. Use 1 SSD for VM1/VM2/ZFS Caching, Use 1TB 7200RPM Drive (I have this sitting around) for VM3/VM4

I have both the SSD and 1TB drive sitting around, I am trying not to add more cost to this build. I also have 2 500GB drives that I could use.

Thoughts?
 

Knavish

Senior member
May 17, 2002
910
3
81
Another thought I had is how to divide the storage. 4 VMs will take much more than the planned 128GB mirrored SSDs that I was going to use.

Options:
1. Put VM1, VM2 on SSD, VM3, VM4 on ZFS Array
2. Don't Mirror, VM1/VM2 on one SSD, VM3/VM4 on other
3. Use 1 SSD for VM1/VM2/ZFS Caching, Use 1TB 7200RPM Drive (I have this sitting around) for VM3/VM4

I have both the SSD and 1TB drive sitting around, I am trying not to add more cost to this build. I also have 2 500GB drives that I could use.

Thoughts?

I'd have to check, but I expect a basic (no GUI) linux install for one of your server VMs would be much less than 10GB. Perhaps you can put all your linux VMs on the SSD and keep the separate 1TB drive for the Win 7 VM.
 

Twinpeaksr

Senior member
Aug 9, 2000
386
0
76
I think you are right, so maybe I am good with the 128GB Mirror:

10GB VM Host
10GB VM1
10GB VM2
10GB VM3
10GB ZFS Cache
10GB ZFS Cache
20GB Swap
30GB VM4 (Windows)
18GB Free

Maybe I am good. I will know when I get this built in less than 30 days!

Thanks!
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
Yep, no ESXi and No Z87.

I am still working on the how for the Software side, Hypervisors are still new to me, and formulating a plan is taking some time.

In my research I did find SmartOS. Not sure if it would work, but may be a good base that I can then Add the 4 VM to.

Right now I have planned:
- VM1: Proxy/Gateway/Router/Firewall
- VM2: DHCP/DNS/Samba
- VM3: SQL/Web
- VM4: Windows 7

Sitting these on SmartOS which would handle the KVM and ZFS portion, sounds so simple ! Too bad it is software and that never works with simple...

SmartOS is a Solaris derivative which is meant for cloud hosting environments. That means: PXE boot always, read-only root, very limited tools in host, etc. You could certainly make it work, but I don't think it's a good choice for the uninitiated.

Options:
1. Put VM1, VM2 on SSD, VM3, VM4 on ZFS Array
2. Don't Mirror, VM1/VM2 on one SSD, VM3/VM4 on other
3. Use 1 SSD for VM1/VM2/ZFS Caching, Use 1TB 7200RPM Drive (I have this sitting around) for VM3/VM4

I would not do any of these, and certainly not what you listed in your latest post. The idea behind ZFS is that you have pooled, thin-provisioned storage, with DRAM and SSDs transparently working to make a big HDD array fast. If you go and statically slice it up, you're going to end up with a very complex, poorly performing system.

What I would do is:

RAID controller: set up the drives for pass-through (no RAID)

SSD physical partitions:
sd[ab]1 500MB ext4, normal Linux md RAID1 /boot
sd[ab]2 40GB ext4, normal Linux md RAID1 /
sd[ab]3 8GB ZFS (see below for zpool config)
sd[ab]4 Extended
sd[ab]5 80GB ZFS (ditto)

HDD physical partitions
None, give ZFS the whole disk

ZFS zpool:
sd[ab]3 mirrored log
sd[ab]5 cache
sd[c-l] raidz2

Logical:
/zpool1/smb - Normal ZFS filesystem, set a quota to about 70% of overall capacity
/zpool1/vm[1-3] zvol ~40GB quota for Linux
/zpool1/vm4 zvol ~100GB quota for Windows

Everything ends up being thin-provisioned, so you only use space that you actually write to. All writes first go to the SSD before flushing to HDD, giving you SSD-like write performance over the whole array. The most frequently used data sits in DRAM, the next tier sits in your SSD cache. Only the rarely-accessed data actually has to come from disk.
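
Putting that together, a rough sketch of the commands (device names follow the sd[a-l] layout above and are assumptions; adjust for your hardware):

zpool create zpool1 raidz2 sdc sdd sde sdf sdg sdh sdi sdj sdk sdl \
    log mirror sda3 sdb3 \
    cache sda5 sdb5

zfs create -o quota=22T zpool1/smb   # normal filesystem, roughly 70% of usable capacity
zfs create -V 40G zpool1/vm1         # zvol for a Linux VM (repeat for vm2 and vm3)
zfs create -V 100G zpool1/vm4        # zvol for the Windows VM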

Oh yeah, as for the RAID controller, you might as well get the 8i version for future expansion.
 
Last edited:

Twinpeaksr

Senior member
Aug 9, 2000
386
0
76
Thanks for the insight. I never thought of using ZFS on the SSDs; I am still stuck in the old method (a partition for an OS/task, etc.).

I like the idea you have setup, I am still learning about ZFS, this helps a bit.

Just to clarify a few things:
- zpools can contain whole drives and/or partitions? (That is what you seem to be recommending on the SSD, some partitions are part of the Zpool)
- I will still need to have EXT4 partition for Boot and Host VM system?
- Host VM will host KVM and in turn launch the other VM guests (I hope my terminology is correct)?
- Do I mirror the SSDs first and then put the partitions on, or do I just create a boot partition / VM host partition on SSD0, add the balance to ZFS along with SSD1, and then mirror the /boot and / in ZFS (this is my main point of confusion)?

I tend to think in a hardware manner; I'm trying to learn the software lingo (seems to be a necessary evil, but we will not discuss that view here).

I understand the 10 drives go to ZFS; that makes sense. They go to a zpool, and a RAID-Z2 is made.

The 2 SSDs are where I am confused. My mind wants to mirror them, then use them outside of ZFS, mainly because that is easy for my simple mind to understand. But I am starting to see there are advantages to having ZFS manage these as well.

I also understand that booting Linux from ZFS is not really a recommended practice currently, hence the 2 EXT4 partitions (/boot and /). It then sounds like the rest of the drive goes to ZFS (8GB for the log mirror and the remainder as cache for improved performance of the 10-drive array).

Where you lose me is how the EXT4 partitions become mirrored. Will ZFS mirror them? If so, what happens if a drive fails, will I still boot? Or should the SSDs be mirrored with the controller and then the partitions put on top?

It also sounds like another set of logical volumes are created from the Z2 array to support the VM, that makes sense as well.

Sorry for the questions, started reading through the ZFS best practice guide but it takes me a bit to change from the hardware view to the virtual view.

Thanks!
 

Twinpeaksr

Senior member
Aug 9, 2000
386
0
76
I also thought about the 8-port LSI card, but there is no more room in the case for more drives. Figured I would save $100.

Thanks!
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
Overall, yes you do have to change your way of thinking when you start using pooled storage. Think of it this way: the main copy (copies really) of the data always sit on the HDD array. The SSD bits (log and cache) are optional, transparent accelerators. Once they're enabled, you don't have to worry about putting data on them explicitly, ZFS manages all of that for you.

- zpools can contain whole drives and/or partitions? (That is what you seem to be recommending on the SSD, some partitions are part of the Zpool)

Yes, zpools can have any type of block device in them, it doesn't particularly care whether or not it's a partition or a whole disk.

- I will still need to have EXT4 partition for Boot and Host VM system?

Yes, you will want to have ext4 for /boot (GRUB doesn't understand ZFS) and / (the Ubuntu installer doesn't understand ZFS). While you can theoretically work around both of these, it's not worth the effort to do so.

- Host VM will host KVM and in turn launch the other VM guests (I hope my terminology is correct)?

The host is your physical Linux box, it is not a VM. It will then use KVM to launch VMs.

- Do I mirror the SSDs first and then put the partitions on, or do I just create a boot partition / VM host partition on SSD0, add the balance to ZFS along with SSD1, and then mirror the /boot and / in ZFS (this is my main point of confusion)?

You could do it either way, but I recommend partitioning it and then mirroring the partitions. Do not do any mirroring in the RAID controller, just pass them through as individual disks.

What you want to do is partition both SSDs exactly the same. Then use the normal Linux RAID tools (mdadm) to create RAID1 devices for /boot and / . The Ubuntu alternative installer can do this for you. That way, you have a functional system that you can boot even if something happens to ZFS.

Leave the other partitions alone until after the OS is installed. Once you have the OS up and running and ZFS installed, you can then use the zpool command to create the 10 drive RAIDZ2 and attach the log and cache devices.
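
A rough sketch of those steps done by hand (the alternate installer can do the mdadm part for you; device names are assumptions):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # becomes /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # becomes /

Then, once the OS and ZFS are installed, create the pool and attach the SSD partitions (as in the earlier sketch):

zpool create zpool1 raidz2 sdc sdd sde sdf sdg sdh sdi sdj sdk sdl
zpool add zpool1 log mirror sda3 sdb3
zpool add zpool1 cache sda5 sdb5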

It also sounds like another set of logical volumes are created from the Z2 array to support the VM, that makes sense as well.

Once you have a zpool, you can then create as many logical volumes and filesystems as you want. One of the great things about ZFS is that it is all thin-provisioned, so you don't have to allocate storage ahead of time; you just use space as you grow.

Please keep asking questions if any of this doesn't make sense.
 

Twinpeaksr

Senior member
Aug 9, 2000
386
0
76
Thanks for the clarity!

With this and the research from yesterday, I think I understand ZFS at a reasonable level.

Current plans are:
- SSD0/SSD1 (Mirror)
- EXT4, /Boot (1GB)
- EXT4, / (63GB)
- zpool, zLOG (16GB)
- zpool, L2ARC (48GB)

- HDD0-9
- zpool, RAID-Z2, 32TB

With this:
- Host VM (Sounds like this is just KVM, but we will get to that in a minute) on EXT4
- Network VM on EXT4
- Routing VM on EXT4

- LAMP VM on zvol1
- Windows VM on zvol2
- Data Storage on zvol3

I know this is slightly different than your recommendation, but let's see if my reasons make sense (understand I have a bit of OCD, so the sizes are feeding that mental issue):
- EXT4 is needed to boot from for the host VM/KVM (will address this in another minute), but I also like the idea of the Network and Routing VMs residing on the EXT4. This keeps the SSDs (which will be mirrored, probably with mdadm as you recommend) as a separate entity, so if I lose a controller card, the data may be down, but I still have access to all the network services needed to at least go online and order a new card.
- I increased the size of the read cache (L2ARC) because one of the most-used functions of the array will be serving media. All of the DVD/Blu-ray rips are in ISO format, which are rather large. For movies we watch more often, this should push them into the L2ARC, and I want a size large enough that they may actually fit.
- I increased the size of the write cache because, well, 48/16 is a nicer number split than 56/8 (I told you that OCD just kind of comes out and causes issues sometimes!). If I can get past the mental block, I may make this a 60/4 split, since most of the writes (past the initial rips) will be small files and infrequent.
- I put the Windows VM and LAMP VM in the zpool because these are not as critical as the network and routing VMs. The LAMP VM will be me playing and learning more than anything; the Windows VM is for a handful of programs that others need to use on Windows (since my main workstation and laptop will be the only Windows machines in the near future... sorry, I hate Windows 8, and I'm not willing to purchase more licenses for Windows 7 just for 2 or 3 programs).

Now for the Host VM/KVM issue...you have greatly helped me understand ZFS, now I need to understand KVM.

My initial understanding was that I had a host VM that would then launch KVM. This host could then handle the underlying ZFS support and launch the other VMs (3 launched at all times, the Windows one on demand). Your recent wording has led me to believe my understanding is incorrect. Is KVM something that is launched directly from GRUB? If so, can it handle the ZFS? How does that work? How do I set that up? (I know I need to research this more, but I am giving time for all of the research on ZFS from yesterday to sink in; perhaps this weekend I will get more into KVM.)

Thanks again for all the help, hopefully this will also help others that are considering this type of setup in the future.

Thanks!

BTW, starting to order parts that are on sale today...hope to start into the software side on the new server in early December!
 
Last edited:

Knavish

Senior member
May 17, 2002
910
3
81
I'm sure mfenn will comment on this, but I'll give it a try...

KVM, or "Kernel-based Virtual Machine" is software that runs in linux. You have to install some version of linux, including the KVM package(s) to be able to run it. KVM is not a hypervisor like ESXi; it is more similar to VMware Workstation. (For example, you can run VMware Workstation from within Windows to provide a virtualized linux computer at your Windows desktop.)

A hypervisor is a stripped down OS that runs directly on the hardware. It provides functionality to manage and run the VMs that you install within the hypervisor. From what I see on Wikipedia, the original VMware ESX hypervisor was based on a linux kernel.

You will be running a full linux OS directly on the hardware. The RAID system will be configured and loaded within this full linux OS. After linux boots, it will start KVM and load your various VMs. I assume you will use NFS (or something similar?) to make the big RAID disk available to the VM running Samba....(right?)

I'm still curious about the bandwidth / latency of a client accessing a Samba share from a VM vs. a Samba share from the base OS....

Example with no VMs used:
  • client machine requests video on network
  • request received by server OS running Samba
  • server OS pulls data from local disk (RAIDZ2) and sends it to client

versus

Example with VMs:
  • client machine requests video on network
  • request received by base server OS
  • request forwarded (via virtual network) to VM running Samba
  • Samba requests data from base OS's disk over the virtual network
  • Base OS ships data from RAID over virtual network to VM
  • VM sends data to client via the virtual network
  • Base OS takes VM's data on virtual network and ships it out to the client via the physical network.

I know all this happens within the base computer's hardware, but you are hitting the linux network stack a bunch more times for the VM approach. (Or am I missing something here?)
 
Last edited:

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
With this:
- Host VM (Sounds like this is just KVM, but we will get to that in a minute) on EXT4
- Network VM on EXT4
- Routing VM on EXT4

As Knavish explained, there is no such thing as a "host VM" with KVM. The host is just a normal Linux operating system. I assume that by "network VM" you mean DNS, DHCP, etc. and by "routing VM" you mean the actual router, firewall, and proxy.

- LAMP VM on zvol1
- Windows VM on zvol2
- Data Storage on zvol3

There is no real reason to put normal files inside a zvol. Just make a normal ZFS filesystem (zfs create zpool1/myfiles).


- EXT4 is needed to boot from for the host VM/KVM (will address this in another minute), but I also like the idea of the Network and Routing VMs residing on the EXT4. This keeps the SSDs (which will be mirrored, probably with mdadm as you recommend) as a separate entity, so if I lose a controller card, the data may be down, but I still have access to all the network services needed to at least go online and order a new card.

I can understand this somewhat, but in the event of a disaster, you should be able to turn your host OS into a router/NAT box in about 5 minutes. (You will be backing up your network configs, won't you?)

- I increased the size of the read cache (L2ARC) because one of the most-used functions of the array will be serving media. All of the DVD/Blu-ray rips are in ISO format, which are rather large. For movies we watch more often, this should push them into the L2ARC, and I want a size large enough that they may actually fit.

I would like to point out that you actually decreased the size of the cache device from my recommendation. I recommended 80GB, you are specifying 48GB.

- I increased the size of the write cache because, well, 48/16 is a nicer number split than 56/8 (I told you that OCD just kind of comes out and causes issues sometimes!). If I can get past the mental block, I may make this a 60/4 split, since most of the writes (past the initial rips) will be small files and infrequent.

The ZIL log device is not a write cache per se in that it is never read from except in the case of an unexpected crash. Writes are actually cached in DRAM and written out to the log device "just in case". The writes are flushed directly to disk from DRAM.

Anyway, there is no reason to make a log device more than 8GB, even on a huge system. There is certainly no reason to make it equal in size to main memory. This is because the log only needs to keep dirty (written) bits since the last flush to disk. By default, ZFS will sync every ~4 seconds, so the log device only needs to be big enough to keep the last 4 seconds' worth of writes. For an 8GB log, that works out to a write rate of 2GB/s, which you'll never be able to do anyway. So 8GB is already very safe; 16GB is just wasteful.

Now for the Host VM/KVM issue...you have greatly helped me understand ZFS, now I need to understand KVM.

My initial understanding was that I had a host VM that would then launch KVM. This host could then handle the underlying ZFS support and launch the other VMs (3 launched at all times, the Windows one on demand). Your recent wording has led me to believe my understanding is incorrect. Is KVM something that is launched directly from GRUB? If so, can it handle the ZFS? How does that work? How do I set that up? (I know I need to research this more, but I am giving time for all of the research on ZFS from yesterday to sink in; perhaps this weekend I will get more into KVM.)

Knavish explained this well, but to reiterate: There is no such thing as a "host VM" in a KVM setup. The host is just a normal Linux system and the VMs run inside KVM processes on the host. It is similar to something like VirtualBox or VMWare Workstation like Knavish pointed out, except with an important difference: there are tools for managing the VMs so that you don't have to literally open up a graphical application in order to get access.
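
As a concrete (hypothetical) illustration of those tools, creating and managing a guest from the host's shell might look like this, assuming the qemu-kvm, libvirt-bin, and virtinst packages are installed; the VM name, sizes, and paths are placeholders:

virt-install --name lamp-vm --ram 2048 --vcpus 2 \
    --disk path=/dev/zvol/zpool1/vm1 \
    --network bridge=br0 \
    --cdrom /var/lib/libvirt/images/ubuntu-server.iso

virsh list --all        # show all defined VMs and their state
virsh start lamp-vm     # boot a VM
virsh shutdown lamp-vm  # ask a VM to shut down cleanly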

Thanks again for all the help, hopefully this will also help others that are considering this type of setup in the future.

Happy to help.
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
I assume you will use NFS (or something similar?) to make the big RAID disk available to the VM running Samba....(right?)

I would recommend just serving Samba directly out from the host, but if you did it in a VM, it would make more sense to have the Samba files local to the VM (i.e. host the VM in a big ass zvol).

I'm still curious about the bandwidth / latency of a client accessing a Samba share from a VM vs. a Samba share from the base OS....

Short answer: yes, VMs will always introduce more latency than running directly on the host.

Long answer: that latency may not be as much as you expect. Here's what an optimized VM flow would look like:

Frame arrives on physical eth0
Frame gets sent up through the Ethernet stack to a Linux bridge device
Frame gets sent over a tap device to the VM
Frame goes through the TCP/IP stack on the VM
Message is delivered to the application
Application reads data from virtual disk
Read request gets passed to the host
Read is issued against the physical disk(s)
Data is passed back to the host
Data is passed back to VM
Message sent through TCP/IP stack on VM
VM writes frame to tap device
Linux bridge device writes frame to eth0
Frame leaves eth0

Contrast that with a physical host:

Frame arrives on physical eth0
Frame goes through the TCP/IP stack on the host
Message is delivered to the application
Application reads data from filesystem
Read is issued against the physical disk(s)
Message sent through TCP/IP stack on host
Frame leaves eth0

There are a lot more data copies happening, but the full network stack processing still only happens once. Where processing is duplicated is in reading from the virtual disk and then from the physical disk.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Also, Samba shares will typically be mounted async, by default, taking advantage of write caching from both the server and client OS. NFS can do that, but it doesn't always work out (Red Squirrel never could get Kate to behave well over NFS, FI). Sync behavior is fine for simple backups, but not replacing even a slow local disk with a share, when the local disk would be mounted async or flush.
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
Also, Samba shares will typically be mounted async, by default, taking advantage of write caching from both the server and client OS. NFS can do that, but it doesn't always work out (Red Squirrel never could get Kate to behave well over NFS, FI). Sync behavior is fine for simple backups, but not replacing even a slow local disk with a share, when the local disk would be mounted async or flush.

That's a good point, and one reason why it's much easier to get a high-performance Samba share than an NFS one.

The NFS spec mandates that when a write is completed, the data must reside on disk (aka sync). Much pain and gnashing of teeth occurs if you try to mount async, because so many applications expect sync behavior.

However, ZFS does allow you to cheat a bit on this. Even when a sync write comes in, it doesn't actually send the data to the primary disk, it just writes it to the log device and keeps the dirty page in DRAM like normal. It's written to a disk and can be recovered in the case of a crash, so that's good enough. Sync writes on ZFS get SSD latency instead of the DRAM latency that async writes get, but that's better than the HDD latency that non-ZFS shares would give.
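
If you want to see or tune that behavior per dataset, ZFS exposes it as the sync property (dataset names here are placeholders):

zfs get sync zpool1/smb               # "standard" honors sync requests via the log device
zfs set sync=disabled zpool1/scratch  # treat every write as async (faster, but recent writes are lost on a crash)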
 