I assume you will use NFS (or something similar?) to make the big RAID disk available to the VM running Samba....(right?)
I would recommend just serving Samba directly from the host, but if you do run it in a VM, it makes more sense to keep the Samba files local to the VM (i.e. host the VM's disk in a big-ass zvol).
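If you do go the VM route, a zvol-backed disk avoids the NFS hop entirely. A rough sketch, assuming ZFS on the host and QEMU/KVM as the hypervisor (the pool name `tank`, the size, and the VM name are all illustrative):

```shell
# Create a 500G zvol to act as the VM's virtual disk;
# the block device appears under /dev/zvol/
zfs create -V 500G tank/samba-vm

# Hand it to the VM as a raw virtio block device
qemu-system-x86_64 \
  -m 4G -enable-kvm \
  -drive file=/dev/zvol/tank/samba-vm,format=raw,if=virtio
```

The guest then formats that device with its own filesystem, so Samba reads inside the VM are local block I/O rather than NFS round trips over the virtual network.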
I'm still curious about the bandwidth / latency of a client accessing a Samba share from a VM vs. a Samba share from the base OS....
Short answer: yes, VMs will always introduce more latency than running directly on the host.
Long answer: that latency may not be as much as you expect. Here's what an optimized VM flow would look like:
Frame arrives on physical eth0
Frame gets sent up through the Ethernet stack to a Linux bridge device
Frame gets sent over a tap device to the VM
Frame goes through the TCP/IP stack on the VM
Message is delivered to the application
Application reads data from virtual disk
Read request gets passed to the host
Read is issued against the physical disk(s)
Data is passed back to the host
Data is passed back to VM
Response message is sent back through the TCP/IP stack on the VM
VM writes frame to tap device
Linux bridge device writes frame to eth0
Frame leaves eth0
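For reference, the bridge-and-tap plumbing in the flow above is typically set up with iproute2 along these lines (interface names are illustrative, and in practice the hypervisor usually creates the tap device for you):

```shell
# Create a bridge and attach the physical NIC to it
ip link add name br0 type bridge
ip link set eth0 master br0
ip link set br0 up

# Create a tap device for the VM and attach it to the same bridge
ip tuntap add dev tap0 mode tap
ip link set tap0 master br0
ip link set tap0 up
```

Frames arriving on eth0 are then forwarded by the bridge to tap0, which is the VM's end of the pipe.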
Contrast that with a physical host:
Frame arrives on physical eth0
Frame goes through the TCP/IP stack on the host
Message is delivered to the application
Application reads data from filesystem
Read is issued against the physical disk(s)
Response message is sent back through the TCP/IP stack on the host
Frame leaves eth0
In the VM case there are a lot more data copies, but the full network stack processing still only happens once: the bridge and tap just forward frames, and only the VM actually runs a TCP/IP stack. Where processing genuinely gets duplicated is storage, since the read goes through the VM's virtual disk layer and then again through the host's layer against the physical disk(s).
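If you want to answer the original bandwidth/latency question empirically rather than in theory, you can point the same client at both shares and compare; smbclient prints a transfer rate after a `get`. Server, share, user, and file names below are placeholders for your setup:

```shell
# Pull a large file from each Samba share and discard it;
# smbclient reports throughput when the transfer finishes
smbclient //host-server/share -U user -c 'get bigfile /dev/null'
smbclient //vm-server/share -U user -c 'get bigfile /dev/null'

# For raw network latency, compare round-trip times as well
ping -c 10 host-server
ping -c 10 vm-server
```

Run each test a few times to average out caching effects; the first read will come off disk and later reads off the page cache, which matters for exactly the disk-path duplication described above.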