Wowza Community

AMI 4.5.0 - s3fs package not preinstalled

Hi team, hope you’re all well. I currently have Wowza Streaming Engine deployed on an AWS EC2 instance and I’m attempting to automatically archive live recordings to S3 using the FUSE-based file system. I could do with a helping hand.

I’m following the guide titled “How to use the FUSE-based file system backed by Amazon S3”, which can be found here: https://www.wowza.com/docs/how-to-use-the-fuse-based-file-system-backed-by-amazon-s3

It starts with the line “Wowza media server software Amazon Machine Images (AMIs) include a preinstalled s3fs package”. I used the following AMI: https://aws.amazon.com/marketplace/pp/B013FEULQA?qid=1468331952367 (version 4.5.0)

When I look in both the ‘/usr/local/bin’ and ‘/usr/bin’ directories, there is no s3fs binary.

Has it been left off this image? Has it moved somewhere new? If it is not preinstalled on this image, how do I install it myself?

Thanks in advance,

Bump.

Hi Elliott,

This has been addressed in ticket #184160, and I am copying the response here for other users.

The S3FS package is no longer pre-installed on the 4.5 AMI. The documentation will be updated to reflect this. As a workaround, you can install it by using the following bash script:

#!/bin/bash
# Build and install s3fs from source (run as root)
cd /home/wowza
mkdir s3fs
cd s3fs

echo; echo "install s3fs"
# Install the build dependencies
yum -y install make curl-devel gcc-c++ libxml2-devel openssl-devel fuse-devel > /dev/null || exit 1

# Download the s3fs source tarball and capture wget's output
resp=$( { wget --progress=dot -O s3fs-1.74.tar.gz http://s3fs.googlecode.com/files/s3fs-1.74.tar.gz; } 2>&1 )
# Look for a 200 response
if [[ $resp != *"200"* ]]; then
    echo "$resp"
    echo "Response of 200 not found"
    echo; echo "ABORT!"
    exit 5280
else
    echo; echo "Response good for s3fs-1.74.tar.gz"
fi

# Unpack, build, and install
tar -zxf s3fs-1.74.tar.gz
rm s3fs-1.74.tar.gz
cd s3fs-*
if [[ ! -f configure ]]; then
    echo; echo "s3fs not extracted correctly"
    exit 5280
fi
./configure
make
make install

# Remove the compiler package that was only needed for the build, then confirm it is gone
echo "remove packages with rpm if they exist..."
rpm -e --nodeps gcc-c++ 2>/dev/null
if ! rpm -q gcc-c++ > /dev/null 2>&1; then
    echo "good, gcc-c++ is not installed"
else
    echo "error: gcc-c++ should not be installed"
fi
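
Once the script finishes, mounting a bucket typically looks something like the following; the bucket name, mount point, and credential values below are placeholders for illustration, not values tied to this AMI:

# Store the AWS credentials s3fs should use (format ACCESS_KEY_ID:SECRET_ACCESS_KEY)
echo "YOUR_ACCESS_KEY_ID:YOUR_SECRET_ACCESS_KEY" > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs

# Create a mount point and mount the bucket
mkdir -p /mnt/s3
s3fs your-bucket-name /mnt/s3 -o passwd_file=/etc/passwd-s3fs -o allow_other

# Verify the mount
df -h /mnt/s3

From there you can point your recording/archiving workflow at the mount point as described in the linked article.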

Michelle

Having had quite a bit of experience running S3FS, I would like to throw out a caution. We saw some odd, inconsistent behavior when pushing large video files over S3FS. We still use it for small files, and I actually mount some common shared configurations over it. However, there are some limits on access that you may run into.

If you are going to be pushing large files through that mechanism, I suggest you use the AWS APIs for file movement instead.

Just my $0.02,

Bob

Are you saying to simply use the AWS command-line tool to do multipart uploads to S3? Do you have any good sample scripts to share?

Hello Bob,

Thanks for the feedback on this.

One common issue with large media files is that if the MediaCache store is too small and/or the TTL settings are too long, the store can become full, or fully committed by reserved space. In that scenario, requests will bypass MediaCache and pull directly from the S3 source, leading to poor performance.

Something that many users don’t realize is that when you request a file through MediaCache, it calculates and reserves space within the MediaCache store for the full file. Therefore, with large files, even if a viewer only watches a small portion of the video pulled through MediaCache, that storage remains blocked off for the full file size until the TTL settings allow it to be garbage collected by the system.
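
For example (made-up numbers for illustration): if 20 different 4 GB assets are requested within the TTL window, roughly 80 GB of the store can be reserved at once, even if each viewer only watched a few seconds of each file.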

To avoid this, we normally recommend a large MediaCache store and using the default TTL settings, or shorter ones, with large source files:

maxTimeToLive: 1200000 (milliseconds)

minTimeToLive: 600000 (milliseconds)

This should help to avoid the MediaCache bypassing scenario mentioned above.

Best regards,
Andrew

Thanks so much for the thoughts on the MediaCache settings. I am playing around with one of my periodic updates to my WSE servers, so this will definitely come in handy. As another thought, since I was using S3FS to send large files back to my home region, I re-architected all my scripts to make better use of the AWS CLI to send files to S3. This has allowed me to remove the dependency on the FileMover module within WSE, rely more on scripting to move files, and drop S3FS entirely.
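
The gist of those scripts is just a loop around the AWS CLI, something along these lines (the bucket name and recording path here are placeholders, not my exact setup):

#!/bin/bash
# Move finished recordings from the local content directory up to S3.
# The AWS CLI handles multipart uploads for large files automatically.
SRC=/usr/local/WowzaStreamingEngine/content/recordings
DEST=s3://your-archive-bucket/recordings

for f in "$SRC"/*.mp4; do
    [ -e "$f" ] || continue              # skip if the directory is empty
    aws s3 mv "$f" "$DEST/" --only-show-errors || echo "upload failed: $f"
done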

Now off to update land.