About Wowza performance + max connections

Hi,

Our server:

CentOS 6.3

Intel® Xeon® CPU E5-2620 0 @ 2.00GHz

16GB RAM

5 x NIC bond ~ 5 Gbps

The server is still on trial and is running 30 live TV channels. We recently benchmarked the server to find the maximum number of concurrent connections (CCU) it can handle.

I noticed that when CCU reached about 1700-1800, the streams stopped playing smoothly and lagged and stalled many times, even though CPU and RAM usage was low (CPU ~14%, RAM ~9 GB/16 GB) and bandwidth was ~2.53 Gbps.

I’ve followed this guide to do performance tuning:

https://www.wowza.com/docs/how-to-do-performance-tuning
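
For reference, the OS-level half of that kind of tuning on CentOS usually comes down to a few sysctl settings along the lines of the sketch below; the values here are illustrative examples, not taken from the guide, and need adjusting for your hardware:

# Raise kernel network limits commonly touched when tuning a streaming server.
# Persist the values in /etc/sysctl.conf once they prove helpful.
sysctl -w net.core.rmem_max=16777216         # max socket receive buffer (bytes)
sysctl -w net.core.wmem_max=16777216         # max socket send buffer (bytes)
sysctl -w net.core.somaxconn=4096            # deeper accept queue for bursts of new viewers
sysctl -w net.ipv4.tcp_max_syn_backlog=8192  # more half-open connections during connect storms
sysctl -w net.core.netdev_max_backlog=5000   # packets queued per NIC before the kernel drops them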

Is there a limit in Wowza’s trial version?

There is no such limit in Wowza. It sounds like there is a bandwidth bottleneck somewhere in your network downstream from the 5 Gbps NIC.

Richard

It could also be a bottleneck at your provider …

Hi all,

When my server reaches ~2000 CCU, the Wowza service goes down (ports 80 and 8086 become unreachable). I have no idea why, because plenty of resources are still free (bandwidth ~2 Gbps of 5 Gbps, CPU ~20%, RAM 10 GB/32 GB).

I’m using version 3.5.2 (the latest), trial edition.

Any help would be greatly appreciated.

The problem sounds more like a socket limit. Make sure the ulimit socket/open-files option is set to a number, not unlimited, say 40000. I would also recommend getting the load test tool. From your posts you seem to be running a live service?
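
For example, the open-file/socket ceiling for the account that runs Wowza can be pinned in /etc/security/limits.conf, or set in the shell that launches the server; the root user below is just an assumption for illustration:

# /etc/security/limits.conf - persistent limit for the user running Wowza
root soft nofile 40000
root hard nofile 40000

# or, per session, before starting the server:
ulimit -n 40000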

Andrew.

Hi Richard,

We’re now using mode 4 bonding. The problem is that the interfaces do not get balanced; this was captured when CCU reached 1100-1200.

I followed this link: http://wiki.centos.org/TipsAndTricks/BondingInterfaces

Could you recommend the best bonding mode for this?

Thank you very much,

Lee

Hi,

I tried changing to mode 6, but it doesn’t help. Some channels just stop while playing when CCU reaches 1100-1200. :(

[root@Wowza-68 WowzaMediaServer]# cat /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bond0 mode=6 miimon=100
[root@Wowza-68 WowzaMediaServer]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: transmit load balancing
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:90:91:db:c7
Slave queue ID: 0
Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:90:65:e5:66
Slave queue ID: 0
Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:90:65:e5:67
Slave queue ID: 0
Slave Interface: eth4
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:90:77:3b:7e
Slave queue ID: 0

Hi,

I’ve changed to mode 4 with this config:

alias bond0 bonding
options bond0 mode=4 miimon=80 lacp_rate=1 xmit_hash_policy=1
options lnet networks=tcp(bond0)

And it seems balanced now.
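
For anyone checking the same thing, per-slave traffic can be confirmed with something like the commands below (sar comes from the sysstat package; the interval and count are arbitrary):

# Watch per-interface throughput to see whether the bond is really balancing:
sar -n DEV 5 3 | grep -E 'bond0|eth[1-4]'

# Or compare the raw transmit byte counters of each slave:
cat /sys/class/net/eth1/statistics/tx_bytes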

But even though Tx per eth is only ~500-600 Mbps, the stream stops many times (every 5-10 minutes).

Hi,

Thank you. I’ve limited the max CCU to 1500 and it has been fine for a week now.

However, sometimes when a channel has a live event, say the server hits the 1500 CCU cap and about 1450 of those connections are watching that one channel, that channel stops many times while playing (for about 15 minutes) while the other channels still play fine.

For example, let say we have:

channel1_2500.stream

channel1_1000.stream

channel2_2500.stream

channel2_1000.stream

(1000 stands for 1 Mbps, 2500 stands for 2.5 Mbps)

My server streams RTMP and HLS:

RTMP for 1 Mbps and 2.5 Mbps (the user can select the profile)

HLS for 1 Mbps only

Problem: when channel1 reaches 1450 viewers at one time, channel1_1000.stream stops many times, while channel1_2500.stream plays fine and channel2 plays fine too.

I don’t think it’s a bandwidth bottleneck, because the server’s bonding is balanced.

Hi Andrew,

Yes, I’m running a live service. I’m also using rtmpdump/flazr to benchmark the server.
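
For reference, a typical rtmpdump load loop looks roughly like this; the server address, application name, stream name, and client count below are placeholders, not my real values:

# Rough rtmpdump load test: each background client pulls a live stream and discards it.
for i in $(seq 1 500); do
  rtmpdump -v -r "rtmp://SERVER/live/channel1_1000.stream" -o /dev/null &
done
wait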

Running the plain ulimit command, the output is unlimited.

This is the output of ulimit -a (as root):

core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 256509
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 65536
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65535
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

Lee

Hi Lee,

If the switch supports it, then mode 6 is the best to use; otherwise, mode 5.

Roger.

Hi,

Mode 6 requires settings on the switch to support it. You may have to contact the data center to see if they will set that up.

I have used it before and it works well.

I haven’t tried mode 5, but it should work similarly to mode 6.
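
For reference, switching the bond to mode 5 (balance-tlb) only needs the modprobe options, in the same style as the config earlier in the thread, and involves no switch-side changes:

# /etc/modprobe.d/bonding.conf - example for mode 5 (balance-tlb)
alias bond0 bonding
options bond0 mode=5 miimon=100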