Have you done any performance testing with this?
I have two identical R310s: 4 cores, 4 GB RAM, two on-board NICs and 2x 2 Gb PCIe NICs.
I loaded an identical config onto both, with the aim of bonding the two PCIe NIC interfaces and then speed testing between the two units.
I tested the performance on a single interface between the two units: 419.5 Mbps average (receive). Not impressed with the overhead ESXi puts on the network.
Unfortunately ROS won't see the RAID card in these machines, so I've had to go down the VM route. I need 4 Gbps throughput!
Any suggestions, experiences?
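For reference, the bonding and the speed test on the ROS side are only a few commands. A minimal sketch, assuming the two PCIe ports show up in the VM as ether3/ether4 and the far unit sits at 10.0.0.2 (interface names and addresses are placeholders, adjust to your setup):

```
# On each unit: bond the two PCIe ports. 802.3ad needs LACP on the peer;
# balance-rr is the simplest mode for a direct back-to-back link.
/interface bonding add name=bond1 slaves=ether3,ether4 mode=balance-rr
/ip address add address=10.0.0.1/24 interface=bond1

# Then run a speed test from one unit against the other
# (the bandwidth-test server is enabled by default on ROS)
/tool bandwidth-test address=10.0.0.2 protocol=tcp direction=both
```

Worth noting that balance-rr can reorder TCP segments, so per-stream throughput may not scale linearly with the number of slaves.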
I've been running multiple ROS VMs on Dell, HP and no-name servers since ESXi 4.1, and now on ESXi 5.
The performance is as if ROS were running natively on the hardware.
In general, ESXi's network overhead is minimal IMO. I haven't had any network issues with ROS or any other OS running on ESXi.
I've done bandwidth tests and get 900-950 Mbit/s in/out without breaking a sweat.
The configuration is pretty much the defaults with the exception of adding an IDE virtual disk for ROS to install onto.
Other than that no special configuration was done on the ESXi.
I haven't tried bonding multiple interfaces to achieve speeds over 1gbit to be honest.
Maybe you could try ESXi's bonding (NIC teaming) options instead of Mikrotik's?
I believe ESXi's implementation is 'heavy duty', so it might work better; plus it sits a few layers below the VM's networking stack.
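If you go that route, on ESXi 5 the teaming is set per vSwitch from the CLI. A rough sketch, assuming the two PCIe ports are vmnic2/vmnic3 and the VM sits on vSwitch1 (names are placeholders for your host):

```
# Add both uplinks to the vSwitch
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch1

# IP-hash load balancing with both uplinks active
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 \
    --load-balancing=iphash --active-uplinks=vmnic2,vmnic3
```

Caveat: iphash teaming normally expects a static port-channel on the physical switch side, and like any hash-based teaming it balances per flow, so a single TCP stream still won't exceed one link's speed.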