Wowza Community

New server - 25X 15K SAS - Best RAID configuration?

I just picked up an HP DL180 G6 like this (http://r.ebay.com/lHyBzS), but with the 3.06GHz / 12MB L3 cache CPUs and 96GB RAM. I am trying to figure out the best way to configure my RAID for reliability of the OS while also maximizing VOD performance. I have around 1TB of storage needs (and growing) and 25x 146GB 15K SAS drives, which gives me 3,650GB to play with.

Based on what I am reading on Wikipedia (http://en.wikipedia.org/wiki/Standard_RAID_levels), I had considered sectioning it out like this:

HP Smart Array P410 Controller

5 drives in RAID 5 - OS Drive (438GB)

10 drives in RAID 0 - Content disk 1

10 drives in RAID 0 - Content disk 2

Because of the 2nd network port, I wasn’t sure if I should break those last 20 drives into separate arrays. Maybe you will tell me I should really run 5 separate RAID 0 arrays for maximum performance and divide my content into multiple applications pointing at multiple content folders, or perhaps I could just put them all on a single RAID 0 if that works out.

I will be connecting to the internet with dual 1Gb connections and hope to serve around 2,000-2,500 VOD users with no transcoding, since we already have all qualities encoded. I also have 6x 300GB 10K SAS and 2x 450GB 10K drives I could use, but wasn’t sure if the performance hit on the 10K drives would be too big; the extra space would be nice for future content.
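As a rough sanity check on the numbers (my own back-of-the-envelope math, assuming roughly 2Gbps of usable throughput across both NICs):

    # Per-viewer bandwidth headroom at the target concurrency:
    #   2,000,000,000 bps / 2,000 users = 1,000,000 bps (~1.0 Mbps per stream)
    #   2,000,000,000 bps / 2,500 users =   800,000 bps (~0.8 Mbps per stream)
    echo $(( 2000000000 / 2500 ))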

===== Side track =====

I was running the load testing tool yesterday on an HP DL360 G5 (32GB RAM, 4x 72GB 15K SAS in RAID 0) and was able to get about 1,100 simultaneous connections with 175 sample files of varying bitrates before the network was saturated. So I am hoping to use two 1Gb ports, and am currently not sure if that means two applications, or if I can just tell it to listen on both IPs in VHost.xml. Was planning on figuring that out next :)
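For my own reference, a quick way to check which local addresses the listener is actually bound to (assuming the default streaming port of 1935):

    # Show listening TCP sockets on port 1935. An address of 0.0.0.0 or *
    # means it is already bound to every local IP, so both NICs are covered.
    ss -ltnp | grep 1935

    # Older systems without ss:
    netstat -ltnp | grep 1935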

One thing I did notice is that when I ran the load testing tool on the last server, RAM usage kept getting higher and higher with each run of the tool and never dropped back down. It didn’t seem to impact the performance of the server, but that was only within 1-2 hours of running different tests; I am not sure what the implications would be for a server running 24x7x365. I tried identifying the application using all the RAM with “top” and “ps aux”, but couldn’t get usage to drop back below 25GB even after restarting all processes (apache, wowza, mysql, memcached).

I still need to do all the performance tuning on the real server; this last one was just a test to see what happened with the CPU and memory with that many users. (If you are wondering, CPU only got up to around 75% usage after 800 users, while memory progressively filled to 98%.) I never could figure out how to determine the load on the hard drives; iotop and iostat didn’t seem to see it at all.
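Note to self: a sketch of how I plan to measure per-disk load on the next round of tests, assuming the sysstat and iotop packages are installed:

    # Extended per-device statistics every 5 seconds; the %util column shows
    # how busy each block device is and await shows average I/O latency.
    iostat -x 5

    # Per-process view, listing only processes actually doing disk I/O.
    iotop -o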

During the setup process I am also spending a lot of time with users on Server Fault to learn from what they know. For fun I set up all 25 SAS drives in a single RAID 0 to measure performance and was surprised by a few things. You can follow my progress here:

http://serverfault.com/questions/655305/data-transfer-speed-hardware-config

http://serverfault.com/questions/655663/new-raid-hdparm-slow

I hope to learn more here as well about the best config for Wowza. I realize that the server would likely keep up fine with dual 1Gb network connections, but I hope to find another hosting provider with 10Gb and want to make sure my hardware never stands in the way of performance.

Anyone?

Thanks Paul, I will certainly look into that. Just as a follow-up since we are talking about RAM: during my tests memory usage ramps up but is never released until the server is rebooted. When the CPU maxes out, it causes an interruption in the network, as you can see in the screenshot.

I am not sure how important the unreleased RAM is; it seems to do OK during the short tests I have done, and the behavior has been consistent across 3 installations on both Ubuntu and CentOS. In the meantime I have upgraded the processors and put in 96GB of RAM for the next round of tests.

Thanks Paul, that makes a lot of sense. I was able to use the “free -m” command to find that it was caching 33GB of “stuff”. I will submit a ticket about the dips in the interface if they persist after upgrading the CPU / memory.

Alan

Hi,

If you’re after optimum performance and you also wish to have redundancy should a disk fail, then RAID 1 will likely provide the best read throughput, or RAID 10 (striped and mirrored) if you have sufficient disks. I’d avoid RAID 5, RAID 6, or basically any level that requires parity, as they add a number of extra writes, though they do provide more usable storage. You could look at implementing MediaCache in a VOD Edge application and, with the amount of RAM you appear to have available, create a ramdisk (see tmpfs etc.) as your store. This will get repopulated should the server hosting the ramdisk restart. You would need to calculate the volume of VOD assets pulled from the MediaCache source to see if this is viable (the MediaCache store is configured with a TTL to stop it retaining stale data).
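As an illustration only (the mount point and size here are examples, not a recommendation), a tmpfs store could be created along these lines:

    # Create a 64GB RAM-backed filesystem to use as the MediaCache store.
    # Contents are lost on reboot, so the cache simply re-warms itself.
    mkdir -p /mnt/mediacache
    mount -t tmpfs -o size=64g tmpfs /mnt/mediacache

    # Optional: recreate the mount automatically at boot.
    echo 'tmpfs /mnt/mediacache tmpfs size=64g 0 0' >> /etc/fstab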

Paul

Hi,

The RAM bar chart showing “high usage” is not uncommon on recent Linux installs. The OS will attempt to use all available RAM for caching, so it can appear to be in use when it’s really just the OS that has grabbed it, and it will be released if requested by running processes. This article has a detailed explanation. I’m not sure about the causes of the dips in your network interface; we would need to see if there is anything in the Wowza logs or any other factors that may be occurring at the same time.
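Purely to illustrate that this is cache rather than a leak (not something you need to run routinely), you can watch the figures and ask the kernel to release the clean page cache:

    # On older procps the figure to watch is the "-/+ buffers/cache" row;
    # newer versions show an "available" column instead.
    free -m

    # Drop clean page cache; the "used" figure should fall straight back down.
    sync && echo 3 > /proc/sys/vm/drop_caches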

Paul

Hi,

Glad it helped. Do please log a call with us at support@wowza.com if you feel there is something happening at the Wowza level that we can investigate. If you do then please also provide as much detail as possible, as explained in this article.

Paul