Wowza Community

Record stream directly to S3 with AWS Multipart Upload?

I’m not sure. I have made it work following that guide.

Richard

If it’s mounted with s3fs, within the FTP path, yes, you should be able to see it.

Hi, it’s been a while since I’ve used Wowza server. Where do I find my accessKeyId and secretAccessKey to mount the S3 bucket?

I’m using the init.sh script on startup to mount the S3 bucket.

My recordings are showing up on the EC2 server under “/mnt/s3”, but they are not being automatically moved to my S3 bucket.

What could be the problem?

What’s the easiest way?

When using an FTP client, the /usr/local/bin/s3fs directory doesn’t exist. Is it required to make this work? FTP won’t let me create a new folder.

Also, I didn’t use the /etc/passwd-s3fs file with the keys, because that’s handled by the init.sh file. Is this OK?

I’ve been trying that guide, but I’m having trouble creating the passwd-s3fs file, and I don’t want to have to do this every time I start an EC2 instance. I am trying to set it up using only the startup package and FTP. Can you help me with that?

The problem I’m having is that when I try to edit the “/etc/passwd-s3fs” file it says [read-only] at the bottom, so it won’t let me edit it.

I’m logged in as ec2-user, and it won’t allow me to log in as root.

So every time we start an EC2 instance and want to transfer recordings from EC2 to S3, we have to log in over SSH and follow this procedure, manually typing in the access key and secret access key? Is there no way to automate this?
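One way to avoid typing the keys manually is to have init.sh write the credentials file itself at boot. The following is a minimal sketch (the key values are placeholders): a real init.sh runs as root and would write to /etc/passwd-s3fs, but this demo writes to the current directory so it can be tried anywhere.

```shell
#!/bin/sh
# Sketch: generate the s3fs credentials file from init.sh so nobody has
# to SSH in and type keys by hand. Key values below are placeholders.
ACCESS_KEY="AKIAEXAMPLEKEY"      # placeholder access key id
SECRET_KEY="examplesecretkey"    # placeholder secret access key
PASSWD_FILE="./passwd-s3fs"      # a real init.sh would use /etc/passwd-s3fs

# s3fs expects a single "accessKey:secretKey" line in this file
printf '%s:%s\n' "$ACCESS_KEY" "$SECRET_KEY" > "$PASSWD_FILE"
chmod 600 "$PASSWD_FILE"         # s3fs rejects credential files others can read
```

With /etc/passwd-s3fs in place, the mount command no longer needs the -o accessKeyId and -o secretAccessKey options.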

Got the file to transfer to S3; now I’m having trouble playing the file using vods3.

WARN	server	comment	2012-01-13	00:17:52	-	-	-	-	-	1405.995	-	-	-	-	-	-	-	amazons3/mybucket/testing14.mp4	MediaReaderH264.open[1]: java.io.IOException: MediaCacheRandomAccessReader.open: Item not in cache: amazons3/mybucket/testing14.mp4
WARN	server	comment	2012-01-13	00:24:04	-	-	-	-	-	1778.166	-	-	-	-	-	-	-	amazons3/mybucket/Extremists.flv	open: java.io.IOException: MediaCacheRandomAccessReader.open: Item not in cache: amazons3/mybucket/Extremists.flv

Thanks for getting back to me so quickly, Charlie. You’re good!

I know that S3FS is used to transfer the files from EC2 to S3, but I thought that the current implementation that is bundled with Wowza first records the FLV to the EC2 instance and detects when the recording is complete. When the recording has finished, it transfers the complete FLV to S3.

Is that right? Is there any way this process could be simplified by recording directly to the S3 bucket? Programs then wouldn’t need to record the file and wait while it’s transferred to S3 before playback.

Thanks, Charlie.

Should I be able to see s3 in FileZilla?

It’s mounted with s3fs. Everything is working fine except I can’t see the ‘s3’ directory in FileZilla.

What do you mean within FTP path?

Richard

I’m a rookie, just FYI.

I have followed the instructions above and added an “init.sh” file to the root of my .ZIP archive (startup package – same version you used). I added the following:

#!/bin/sh
mkdir /mnt/s3
/usr/bin/s3fs cogwareplayhouston.s3.amazonaws.com -o accessKeyId=XXXXX -o secretAccessKey=XXXXX -o default_acl=public-read /mnt/s3

Obviously the “XXXXX” would be my keys. The problem I’m having is that when I look at the launched instance I don’t see that a “s3” directory has been created in the “mnt” directory. Isn’t this script supposed to create that directory?

My goal is to simply have a recording automatically transfer from one place and copy over to my s3 bucket once it has completed.

Any suggestions?
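When the mount directory never appears, it helps to check over SSH whether the directory exists at all and whether anything is mounted on it. The helper below is a sketch (not Wowza-specific) that consults /proc/mounts; you would point it at /mnt/s3 on the instance.

```shell
#!/bin/sh
# Sketch: report whether a directory exists and whether a filesystem
# is actually mounted on it, by checking /proc/mounts.
check_mount() {
    dir="$1"
    if [ ! -d "$dir" ]; then
        echo "$dir does not exist (init.sh probably never ran)"
    elif grep -qs " $dir " /proc/mounts; then
        echo "$dir is mounted"
    else
        echo "$dir exists but nothing is mounted on it"
    fi
}

check_mount /          # sanity check: the root filesystem is always mounted
check_mount /mnt/s3    # the directory init.sh should have created
```

If the directory does not exist at all, init.sh never ran, which usually points at a problem with the startup package rather than with s3fs itself.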

I’m using the ami-5453a53d Amazon EC2 Amazon Machine Image ID.

In the startup package (based on the provided ‘startup_2.2’ example) I modify the “startup.xml” file:


<Startup>
	<Commands>
		<Install>
			<Folder>wowza</Folder>
		</Install>
		<Tuning>
			<Script>tuning/tune.sh</Script>
		</Tuning>
		<RunScript>
			<Script>init.sh</Script>
		</RunScript>
	</Commands>
</Startup>


I then create an “init.sh” file and place it at the same level as the “startup.xml” file and the “wowza” and “tuning” directories. The “init.sh” file looks like this (with the XXXXXXXXXX being replaced with my Amazon AWS keys):


#!/bin/sh
mkdir /mnt/s3
/usr/bin/s3fs cogwareplayhouston.s3.amazonaws.com -o accessKeyId=XXXXXXXXXX -o secretAccessKey=XXXXXXXXXX -o default_acl=public-read /mnt/s3


I have an S3 bucket called “cogwareplayhouston” that sits at the top level of my S3 account. I’m assuming that “cogwareplayhouston.s3.amazonaws.com” is the correct way to point the script to this bucket, right?


I then used the instructions, as noted above by acropolis, found here: https://www.wowza.com/docs/how-to-use-the-fuse-based-file-system-backed-by-amazon-s3

I didn’t do the mkdir command or mount the S3 bucket from those instructions, since I assume they are handled by the “init.sh” script.

What I did was modify the Wowza application that records video. In my case, I’m using “rtplive” to stream live video.

I then added the following, exactly, to the end of the “Modules” list and the “Properties” list in the “Application.xml” that sits in “conf/rtplive/”. Since there were three “<Properties>” areas, I assumed it was the one right below the Modules list I just added to (just before “</Application>”). So, the final portion of the “Application.xml” looks like this:


<Module>
	<Name>ModuleMediaWriterFileMover</Name>
	<Description>ModuleMediaWriterFileMover</Description>
	<Class>com.wowza.wms.module.ModuleMediaWriterFileMover</Class>
</Module>

<Property>
	<Name>fileMoverDestinationPath</Name>
	<Value>/mnt/s3</Value>
</Property>
<Property>
	<Name>fileMoverDeleteOriginal</Name>
	<Value>false</Value>
	<Type>Boolean</Type>
</Property>
<Property>
	<Name>fileMoverVersionFile</Name>
	<Value>true</Value>
	<Type>Boolean</Type>
</Property>


I use the LiveStreamRecord example Flash player to “Start Recording” and then “Pause Recording”. When I FTP into the server I can see the “myStream.sdp.flv” file that is recorded, as well as the temporary file being written if I am still ‘recording’. When I’m done recording the temporary file goes away as it becomes part of the “myStream.sdp.flv” file that resides in the “content” folder.

So, the file is correctly being recorded and saved to the “content” folder… but it’s not being moved to the s3 folder – in fact, when I FTP into the server I don’t see a “s3” folder anywhere, not even in the “mnt” directory.

In the “mnt” directory I see “lost+found”, “mediacache” and “WowzaMediaServer” as the only directories.

I’m not sure what I’m missing here. Any help would be greatly appreciated.

Richard

That was the issue. I changed the s3 location to “cogwareplayhouston” and everything worked like magic. Thanks for your help on this!!!
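For anyone landing here later, the working init.sh from this thread ends up looking like the sketch below (keys redacted). The key point is that the bucket is referenced by its plain name, not by the bucket.s3.amazonaws.com hostname.

```shell
#!/bin/sh
# Working init.sh from this thread (keys redacted).
# Note: the bucket is passed by plain name, not as a hostname.
mkdir /mnt/s3
/usr/bin/s3fs cogwareplayhouston -o accessKeyId=XXXXX -o secretAccessKey=XXXXX -o default_acl=public-read /mnt/s3
```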

I was trying to do something like this by creating an init.sh script and referencing it in the startup package.

init.sh contains the following:

#!/bin/sh
mkdir /mnt/s3
/usr/bin/s3fs bucketname -o accessKeyId=accesskey -o secretAccessKey=secretkey -o default_acl=public-read /mnt/s3

I wanted to share and maybe get some feedback. I haven’t been able to get it to work. When I try to launch an instance it’s completely broken. It might be a problem with my zip or the encoding. Using a PHP script… er, trying to.
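One thing worth ruling out when a startup package “completely breaks” an instance: an init.sh saved with Windows (CRLF) line endings, which corrupts the #!/bin/sh line. The sketch below creates a deliberately bad sample file just for the demo, then detects and strips the carriage returns.

```shell
#!/bin/sh
# Demo: create an init.sh with CRLF endings, detect them, strip them.
printf '#!/bin/sh\r\nmkdir /mnt/s3\r\n' > init.sh   # simulated bad file

CR=$(printf '\r')
if grep -q "$CR" init.sh; then
    echo "init.sh has CRLF line endings, fixing"
    tr -d '\r' < init.sh > init.sh.tmp && mv init.sh.tmp init.sh
fi
grep -q "$CR" init.sh || echo "init.sh line endings are clean"
```

Running the same check on the real init.sh before zipping the startup package rules this cause out quickly.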

Thanks for the reply and for testing it out. Nice to put that part to rest. Now, referencing this page: https://www.wowza.com/docs/how-to-use-the-fuse-based-file-system-backed-by-amazon-s3

Great info. However, while it’s clear that one should not write directly to a mounted bucket, does the configuration described in the above article automatically move the file to the bucket? Assuming yes: does this happen as soon as the stream is stopped? How does one know when it’s safe to shut down an instance? In other words, is there a way to get a status on the file move? It would be great if there were an HTTP page that showed the status of filemover activity. I am trying to automate an on-demand webcast process, and having to get on a command line doesn’t work well for that. Any pointers would be great.
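There is no built-in status page that I know of, but one crude way to script “is the move finished?” is to poll the file on the mount until its size stops changing. The following is only a sketch; the paths and timings are examples, and the demo polls a local file so the script can run anywhere.

```shell
#!/bin/sh
# Poll a file until its size is unchanged across two polls, or give up.
# Usage: wait_until_stable <path> <max_polls>
wait_until_stable() {
    prev=-1
    polls=0
    while [ "$polls" -lt "${2:-30}" ]; do
        size=$(stat -c %s "$1" 2>/dev/null || echo -1)
        if [ "$size" -ge 0 ] && [ "$size" -eq "$prev" ]; then
            return 0              # size stable: assume the move is done
        fi
        prev=$size
        polls=$((polls + 1))
        sleep 1
    done
    return 1                      # file never appeared or never settled
}

echo "demo recording" > demo.flv  # stand-in for a finished recording
wait_until_stable demo.flv 10 && echo "demo.flv looks complete"
```

In a real automation flow you would point this at the file under the mount (for example /mnt/s3/myStream.sdp.flv) and only terminate the instance once it returns 0.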

Richard,

Unfortunately, it doesn’t work in our case.

We use EC2 with live-record and the module:

<Module>
	<Name>ModuleMediaWriterFileMover</Name>
	<Description>ModuleMediaWriterFileMover</Description>
	<Class>com.wowza.wms.module.ModuleMediaWriterFileMover</Class>
</Module>

and the properties:

<Property>
	<Name>fileMoverDestinationPath</Name>
	<Value>/mnt/s3</Value>
</Property>
<Property>
	<Name>fileMoverDeleteOriginal</Name>
	<Value>false</Value>
	<Type>Boolean</Type>
</Property>
<Property>
	<Name>fileMoverVersionFile</Name>
	<Value>true</Value>
	<Type>Boolean</Type>
</Property>

It saves the files to /mnt/s3 perfectly well.

Now, if we run “/usr/bin/s3fs mybucket -o accessKeyId=XXXXX -o secretAccessKey=XXXXXX -o default_acl=public-read /mnt/s3” and then try to navigate to the /mnt/s3 folder, it doesn’t allow us to even open the folder.

Because of this, we are not able to use FME to stream. We have to reboot the machine for everything to work.

Could this be a permissions issue, or something else?

Thank you Richard!

I spent all day trying to find the problem…! No EU buckets!
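For the record: s3fs builds of that era defaulted to the US endpoint, which is why EU buckets failed. Depending on the s3fs version, pointing it at the regional endpoint with the url option may work. This is an untested sketch; the region endpoint shown is just an example.

```shell
#!/bin/sh
# Untested sketch: mount an EU bucket by overriding the S3 endpoint.
# Whether -o url is supported depends on the s3fs version installed.
mkdir -p /mnt/s3
/usr/bin/s3fs mybucket -o url=https://s3-eu-west-1.amazonaws.com -o accessKeyId=XXXXX -o secretAccessKey=XXXXX -o default_acl=public-read /mnt/s3
```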