Hello everyone,
I'm currently running ClearOS 6 on a smoldering pile of old Dell hardware that, with just the right amount of shock therapy, continues to run, though quite poorly. I'd like to move my ClearOS installation into my ESXi cluster.
So, a little background:
I'm currently running ClearOS 6 in Gateway mode. Eth0 goes to the internal LAN, eth1 to a residential cable modem with a dynamic address, and eth2 to a business cable modem with a static address. I'm running a few plug-ins: proxy, IDS/IPS, Dynamic VPN, OpenVPN, DNS, and DHCP. Nothing really out of the ordinary. It's all running on an old Dell 490 workstation with 12GB of RAM, booting from an SSD and backing itself up to an external USB drive. Really, kind of straightforward, I think. My only problem is that the hardware is starting to crack; otherwise I'd just leave it alone.
I've got an ESXi HA cluster currently sitting at 4 nodes, totaling 8 processors and 16 cores, with 192GB of RAM (48GB/node) and ~20TB of iSCSI storage over 1Gbit links. Each node has 2x Intel Gig-E NICs integrated on the board, plus I've added an Intel I340-T4 card, for a total of 6 NICs per node. All of the NICs are stand-alone right now; I was thinking about teaming some of them together but just haven't gotten that far yet. The cluster seems to happily digest anything I throw at it and is currently sitting at about 30% CPU and 50% memory usage.
I have virtualized ClearOS onto standalone ESXi hosts with multiple static NICs many times, and it works fine. I can even see in my mind's eye how I could put it all into a cluster if I just had two static interfaces by themselves, and I can kind of imagine how I'd put this installation into the virtual world, but I want to double-check my logic here...
And now, on with the question:
At the physical switch level, what if I put each cable modem on its own VLAN, tag the traffic, and bring everything into the cluster over one Ethernet interface per node? In theory, if I tagged each network individually, I should be able to let the cluster's virtual switch handle the internal tagging and routing, so traffic gets delivered to/from the VM on the correct virtual interface regardless of which physical node it's running on, right?
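To make the idea concrete, here's a rough sketch of what I'm imagining on the ESXi side of each host, using esxcli (the port group names and the VLAN IDs 10/20/30 are placeholders I made up, not anything from a real config). The same port groups would be created identically on every node so the VM lands on the right networks wherever it runs:

```shell
# Placeholder sketch: one VLAN-tagged port group per network, all riding
# the same vSwitch/uplink. Names and VLAN IDs are illustrative only.

# Internal LAN (the VM's eth0 side)
esxcli network vswitch standard portgroup add --portgroup-name=LAN --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=LAN --vlan-id=10

# Residential cable modem (eth1)
esxcli network vswitch standard portgroup add --portgroup-name=WAN-RES --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=WAN-RES --vlan-id=20

# Business cable modem (eth2)
esxcli network vswitch standard portgroup add --portgroup-name=WAN-BIZ --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=WAN-BIZ --vlan-id=30

# Sanity check: list port groups and their VLAN IDs
esxcli network vswitch standard portgroup list
```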
Has anyone out here done such a thing, or something similar? And if not, does anyone have deeper knowledge of physical and virtual switches who can confirm or debunk my theory?
If I don't get the answers I need here, I'm still going to try it, and I'll create a thread to document my configurations along the way.
Any help is greatly appreciated,
thanks,
-brian
Responses (3)
Accepted Answer
Hello Brian,
Let me first not get your hopes up: I have never configured a vSphere Distributed Switch; I use the free version of ESXi. I waited to see if anyone more qualified would answer your posts.
I am going to assume that you do not need to physically separate your LANs, only virtually. I believe the first step is to establish communication over one VLAN to the ClearOS VM, and the rest should hopefully fall into place. I do not know how to configure your particular physical switch, and regrettably everyone making a Layer 2 (and above) switch seems to run their own show on the configuration side. You may ask where the vDS fits into all this; for now it does not. It is best to first get a handle on your equipment. I am going to describe the steps required for proper communication, and you will need to find the right configuration parameters for your own hardware. Let's say we attempt to connect a PC to your ClearOS VM's LAN adapter:
- PC: sends and receives untagged frames.
- Physical switch, access port 1: configure the port to tag incoming frames as (e.g.) VLAN 100; the configuration must also strip the tags on frames going back out to the PC.
- Physical switch, port 2 (trunk toward ESXi): configure a trunk (a port group may or may not be required on your hardware); it must accept frames tagged for VLAN 100 from port 1 and must NOT strip the tags on frames exiting the switch.
- ESXi physical adapter (e.g. vmnic0): frames arrive still tagged.
- ESXi standard vSwitch0: frames are tagged; create a new port group (e.g. named "vlan100") with VLAN ID 100. Tags are automatically removed from frames exiting to the VM's adapter and re-added on the way back.
- ClearOS VM adapter, configured to use port group vlan100: frames are untagged.
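On the switch side, the access-port/trunk arrangement above might look roughly like this in Cisco-style syntax. This is purely illustrative: the port numbers, VLAN ID, and exact commands are assumptions and will differ by vendor, so treat it as a map of intent, not a recipe:

```
! Port 1: access port toward the PC; frames arrive untagged and are
! carried internally as VLAN 100 (untagged again on the way out)
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 100

! Port 2: trunk toward the ESXi host; VLAN 100 leaves the switch tagged
interface GigabitEthernet0/2
 switchport mode trunk
 switchport trunk allowed vlan 100
```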
I made another assumption that we are going to use a trunk between the physical switch and the ESXi server. You could use a non-trunk port, but to keep the logic consistent with a trunk you must configure the port to NOT remove the tags when frames exit the switch on port 2. I noticed that you have a Layer 3 physical switch; do not use its routing capabilities, in other words stick to the Layer 2 features for this configuration. I do not recall changing the network policy on my ESXi vSwitch; I recently built an ESXi 6 server, connected the trunk, and everything worked out of the box. As for which LAN adapter to use on your ClearOS VM: if you are not sure, let me know and we will track down the MAC addresses of the VM's adapters inside the ClearOS server.
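If you want to match them up yourself, here is one way to do it inside the ClearOS VM. This just reads sysfs on any Linux system (nothing ClearOS-specific is assumed), printing each interface name next to its MAC address so you can compare against the MACs shown for each network adapter in the VM's settings in the vSphere client:

```shell
# List every network interface with its MAC address by reading sysfs.
# Compare these MACs to the virtual NICs in the VM's Edit Settings dialog
# to learn which guest ethX maps to which port group.
for i in /sys/class/net/*; do
  printf '%s %s\n' "$(basename "$i")" "$(cat "$i/address")"
done
```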
If you can ping the ClearOS box after all this, we are halfway there.
Accepted Answer
For anyone who cares to hear an update: I'm really struggling with this job and finally resorted to undoing everything I've tried and posting in the VMware vNetwork forums, in hopes of getting help from someone with deeper knowledge of networking and vDS (vSphere Distributed Switches) than me. If you're interested in reading, it's over at: https://communities.vmware.com/message/2551058#2551058
thanks,
-brian