
Record stream directly to S3 with AWS Multipart Upload?

Hi all,

I got an email from Amazon last month announcing S3 Multipart Upload:

http://aws.amazon.com/about-aws/whats-new/2010/11/10/Amazon-S3-Introducing-Multipart-Upload/

Right away I started checking the Wowza forums to see if this would be rolled into the EC2-S3 configuration rather than the current implementation. Is there any chance that will be happening? Would that alleviate some of the problems that can pop up from recording to EC2 then transferring after the recording has completed?

Thanks all!

Ange52

Recording directly to S3 is done through the s3fs file system. s3fs itself would need to adopt this new addition:

http://code.google.com/p/s3fs/

Charlie

We have no plans to add this type of feature. We will continue to leverage s3fs to do recording to S3.

Charlie

I tried using one of the current Wowza AMIs (ami-5453a53d) and it worked.

I used this startup package:

http://wowzamediasystems.s3.amazonaws.com/com/wowza/startup/2.2/default_2.2.zip

  • Download and unzip the package

  • Open startup.xml in a text editor, then add the init.sh script reference so it looks like this:

    <Startup>
    	<Commands>
    		<Install>
    			<Folder>wowza</Folder>
    		</Install>
    		<RunScript>
    			<Script>tuning/tune.sh</Script>
    		</RunScript>
    		<RunScript>
    			<Script>init.sh</Script>
    		</RunScript>
    	</Commands>
    </Startup>
    
    
  • Add your init.sh to the package (I used my own AWS keys and S3 bucket, of course; a credentials-file variation is sketched after this list):

    #!/bin/sh
    # Create the mount point and mount the S3 bucket with s3fs
    mkdir /mnt/s3
    /usr/bin/s3fs bucketname -o accessKeyId=accesskey -o secretAccessKey=secretkey -o default_acl=public-read /mnt/s3
    
    
  • Zip the package back up

  • Start ami-5453a53d in ElasticFox, including the startup package by clicking the Add Binary File button in the Launch dialog

Richard
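As referenced in the list above, here is a variation on that init.sh for keeping the keys off the command line: the Google Code s3fs can also read credentials from /etc/passwd-s3fs, a file that comes up again later in this thread. A minimal sketch, using the same hypothetical placeholder names:

    #!/bin/sh
    # Write the credentials file s3fs looks for (format: accessKeyId:secretAccessKey);
    # init.sh runs as root at startup, so no sudo is needed here
    echo "accesskey:secretkey" > /etc/passwd-s3fs
    chmod 600 /etc/passwd-s3fs
    # Mount the bucket; s3fs picks the keys up from /etc/passwd-s3fs
    mkdir /mnt/s3
    /usr/bin/s3fs bucketname -o default_acl=public-read /mnt/s3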

Yes, using the configuration in that article, a new recording is moved automatically. That happens when the write process completes, not when recording stops.

If you want more control, you can use the IMediaWriterActionNotify interface’s onWriteComplete handler:

https://www.wowza.com/docs/how-to-use-imediawriteractionnotify-to-programmatically-move-and-rename-recordings-of-live-streams

You need the Wowza IDE to build application modules.

The source code for the automated ModuleMediaWriterFileMover is here:

Richard

Copy the command into an SSH console and see if it works. Double-check everything. Did you add the RunScript tag for init.sh to startup.xml? It did work when I tested it.

Richard
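For a quick sanity check over SSH, something along these lines (same placeholder names as the init.sh above) confirms the mount actually took:

    # Run the mount by hand, then verify it is active and writable
    sudo mkdir -p /mnt/s3
    sudo /usr/bin/s3fs bucketname -o accessKeyId=accesskey -o secretAccessKey=secretkey -o default_acl=public-read /mnt/s3
    mount | grep s3fs
    sudo touch /mnt/s3/mount-test && ls -l /mnt/s3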

I think I just used the simple bucket name:

/usr/bin/s3fs cogwareplayhouston -o accessKeyId=XXXXXXXXXX -o secretAccessKey=XXXXXXXXXX -o default_acl=public-read /mnt/s3

Your credentials provide the location, i.e., a bucket belonging to you.

Richard

You’re welcome, and thanks for the update.

Richard

These are associated with your Amazon AWS account.

Richard

Sounds like it was not mounted correctly. You have to start over. You can re-do it on the running instance.

Richard
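Re-doing it on the running instance can be as simple as unmounting and mounting again by hand (hypothetical names, as before):

    # Unmount the stale mount point, then mount the bucket again
    sudo umount /mnt/s3
    sudo /usr/bin/s3fs bucketname -o accessKeyId=accesskey -o secretAccessKey=secretkey -o default_acl=public-read /mnt/s3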

There is a guide here:

https://www.wowza.com/docs/how-to-use-the-fuse-based-file-system-backed-by-amazon-s3

Richard

I’m not sure. I have made it work following that guide.

Richard

If it’s mounted with s3fs within the FTP path, then yes, you should be able to see it.

Hi, it’s been a while since I’ve used Wowza server. Where do I find my accessKeyId and secretAccessKey to mount the S3 bucket?

I’m using the init.sh script on startup to mount the S3 bucket.

My recordings are showing up on the EC2 server under /mnt/s3, but they are not being automatically moved to my S3 bucket.

What could be the problem?

What’s the easiest way?

/usr/local/bin/s3fs: when using an FTP client, this directory doesn’t exist. Is it required to make this work? FTP won’t allow me to make a new folder.

Also, I didn’t use the /etc/passwd-s3fs file with the keys, because it’s done with the init.sh file. Is this OK?

I’ve been trying that guide but am having trouble creating the passwd-s3fs file, and I don’t want to have to do this every time I start an EC2 instance. I am trying to set it up using only the startup package and FTP. Can you help me with that?

The problem I’m having is that when I try to edit the /etc/passwd-s3fs file, it says [read-only] at the bottom, so it won’t let me edit it.

I’m logged in as ec2-user, and it won’t allow me to log in as root.
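A likely fix, assuming the stock Amazon Linux setup where ec2-user has sudo rights (file name as in the guide linked above):

    # Create the credentials file with root privileges via sudo,
    # since ec2-user cannot write to /etc directly
    sudo sh -c 'echo "accesskey:secretkey" > /etc/passwd-s3fs'
    sudo chmod 600 /etc/passwd-s3fs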

So every time we start an EC2 instance and want to transfer recordings from EC2 to S3, we have to log in over SSH and follow this procedure, manually typing in the access key and secret access key? Is there no way to automate this?
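The init.sh startup-package approach earlier in the thread is one way to automate this. Another option, sketched on the assumption that the keys live in /etc/passwd-s3fs as above, is an /etc/fstab entry using the s3fs#bucket form documented for the Google Code s3fs:

    # /etc/fstab line: mount the bucket automatically at boot (hypothetical bucket name)
    s3fs#bucketname /mnt/s3 fuse default_acl=public-read 0 0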

Got the file to transfer to S3; now I’m having trouble playing the file using vods3.

    WARN	server	comment	2012-01-13	00:17:52	-	-	-	-	-	1405.995	-	-	-	-	-	-	-	amazons3/mybucket/testing14.mp4	MediaReaderH264.open[1]: java.io.IOException: MediaCacheRandomAccessReader.open: Item not in cache: amazons3/mybucket/testing14.mp4
    WARN	server	comment	2012-01-13	00:24:04	-	-	-	-	-	1778.166	-	-	-	-	-	-	-	amazons3/mybucket/Extremists.flv	open: java.io.IOException: MediaCacheRandomAccessReader.open: Item not in cache: amazons3/mybucket/Extremists.flv