Here's an interesting article on leveraging LDoms for application servers and databases. It has some good ideas and tools for running the benchmarks, and I might have to borrow those for some tests in the future!
One of the interesting things I saw in the article is the use of multiple VSWs for separating back-end and front-end traffic. This demonstrates one of the strengths of LDoms: the ability to create virtual switches for different purposes. Of note, multiple VSWs were used to isolate a single guest domain and its front-end client for more throughput. Even so, the utilization on the virtual switches easily allowed for more guest domains per virtual switch, as was demonstrated when the number of guest domains was doubled.
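For reference, here's a minimal sketch of how that kind of front-end/back-end separation is set up with the ldm CLI in the control domain (the device, switch, and domain names here are hypothetical):

    # One virtual switch per traffic class, each backed by its own physical NIC:
    ldm add-vsw net-dev=e1000g0 frontend-vsw primary
    ldm add-vsw net-dev=e1000g1 backend-vsw primary

    # Give a guest domain a virtual NIC on each switch:
    ldm add-vnet vnet-front frontend-vsw ldom1
    ldm add-vnet vnet-back backend-vsw ldom1

From there the guest just sees two ordinary vnet interfaces, and front-end and back-end traffic travel over separate physical paths.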
I think this brings up some common misconceptions that I see regularly in the industry. Just as CPU utilization is observed to be less than 10% in many environments, the same holds true for networking. Many shops deploy 1GbE nowadays but are underutilizing the capacity. The bad part is that many servers are provisioned with multiple 1GbE connections for different VLANs, such as production, backups, management, monitoring, etc. This is a huge waste of resources, as ports and cabling increase substantially per server rack.
The use of LDoms can reduce some of this waste by sharing each NIC through VSWs. This allows multiple guest domains to use the same NIC and drives the utilization up. It'll be nice when Link Aggregation and VLAN Tagging become available for LDoms in Solaris 10. That will reduce the need for probe-based IPMP in guest domains and allow further consolidation of networks onto fewer physical links. With the ability to use 10GbE on the T-series servers, this opens many possibilities. Just think how nice it would be to have only two network connections per server, with all of your VLANs and guest domains running together on them.
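Just to sketch what that could look like once the feature arrives (hypothetical device names, and assuming a vsw can eventually be backed by an aggregated link, which isn't supported today):

    # Aggregate two physical ports into aggr1 using the Solaris 10 dladm syntax:
    dladm create-aggr -d e1000g0 -d e1000g1 1

    # Back a virtual switch with the aggregated link instead of a raw NIC:
    ldm add-vsw net-dev=aggr1 primary-vsw0 primary

With link-based failure detection on the aggregation, the guest domains wouldn't need test addresses for probe-based IPMP at all.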
Another interesting item in the article is the need to do performance and capacity planning to arrive at proper consolidation ratios for virtualization. I can't stress enough how important it is to look at the utilization (CPU, memory, network, storage, I/O, etc.) of your servers now, and then test those workloads on a T5220 or T5240. If the application works well with a few threads or cores, you can size your guest domains accordingly. I do like the comments about looking for the resource that will limit the size of your guest domains. Some applications may need more memory than CPU, for example, so the amount of memory can determine the number of guest domains for a given distributed application.
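As a starting point, the stock Solaris stat tools cover most of those resources; a simple baseline capture might look like this (the intervals and sample counts are arbitrary):

    # Sample every 10 seconds for an hour (360 samples):
    vmstat 10 360 > vmstat.out &        # CPU run queue, paging, free memory
    mpstat 10 360 > mpstat.out &        # per-CPU utilization
    iostat -xnz 10 360 > iostat.out &   # disk I/O, skipping idle devices
    netstat -i 10 > netstat.out &       # interface packet rates (runs until killed)

An hour during peak load tells you far more than an idle-weekend average, so pick the sampling window to match the workload you're planning to consolidate.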
One of the common issues I'm seeing is that people have a hard time believing that a V880 can be replaced by a T5220. While single-threaded applications are a challenge for the CMT processors, fewer and fewer commercial applications are single-threaded. With the advent of highly scalable databases and Java applications, multi-threading is becoming the mantra for programmers globally. The other thing to consider is that older-generation servers from Sun used PCI, while the current servers use PCI-E, which is considerably faster: a PCI-E slot can handle 250MB/s per lane in each direction. A dual 1GbE card only needs four lanes, because the Ethernet is moving at Gigabits, not GigaBytes. And given that most network ports are underutilized, you can see that current technology has a lot of headroom for bandwidth. It's really amazing to think of so much bandwidth being available on today's servers. Of course, there are always applications that can push that bandwidth utilization up. Think of the need for pushing high volumes of data over networks or SANs for things like VoIP, video streaming, data mining, backups, etc. Definitely interesting applications that servers like the T5120 through T5240 are designed to handle easily.
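Running the back-of-the-envelope numbers with first-generation PCI-E rates makes the headroom obvious:

    PCI-E x4 slot:   4 lanes x 250 MB/s  = ~1000 MB/s per direction
    Dual 1GbE card:  2 ports x ~125 MB/s =  ~250 MB/s total

That's roughly four times more slot bandwidth than the card can ever use, before you even consider how lightly loaded most ports are.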