Wowza Community

Record stream directly to S3 with AWS Multipart Upload?

Copy the command into the SSH console and see whether it works. Double-check everything. Did you add the RunScript tag to startup.xml to run init.sh? It did work when I tested it.
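For reference, the RunScript entry in startup.xml generally looks something like this (a minimal sketch from memory; adjust the script path to wherever init.sh sits in your startup package):

<Startup>
	<Commands>
		<RunScript>
			<Script>init.sh</Script>
		</RunScript>
	</Commands>
</Startup>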

Richard

I think I just used the simple bucket name:

/usr/bin/s3fs cogwareplayhouston -o accessKeyId=XXXXXXXXXX -o secretAccessKey=XXXXXXXXXX -o default_acl=public-read /mnt/s3

Your credentials provide the location, i.e., a bucket belonging to you.

Richard

You’re welcome, thanks for the update

Richard

These are associated with your Amazon AWS account.

Richard

Sounds like it was not mounted correctly. You have to start over. You can re-do it on the running instance.
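For example, something along these lines should remount it on the running instance (a rough sketch; "mybucket" and the keys are placeholders, and the -o options are the same ones used elsewhere in this thread):

# unmount the stale mount point, then mount the bucket again
sudo umount /mnt/s3
sudo mkdir -p /mnt/s3
sudo /usr/bin/s3fs mybucket -o accessKeyId=XXXXXXXXXX -o secretAccessKey=XXXXXXXXXX -o default_acl=public-read /mnt/s3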

Richard

There is a guide here:

https://www.wowza.com/docs/how-to-use-the-fuse-based-file-system-backed-by-amazon-s3

Richard

I’m not sure. I have made it work following that guide.

Richard

If it’s mounted with s3fs and it sits within the FTP path, then yes, you should be able to see it.

Hi, it’s been a while since I’ve used Wowza server. Where do I find my accessKeyId and secretAccessKey to mount the S3 bucket?

I’m using the init.sh script on startup to mount the S3 bucket.

My recordings are showing up on the EC2 server under “/mnt/s3”, but they are not being automatically moved to my S3 bucket.

What could be the problem?
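A quick way to check whether /mnt/s3 is actually an s3fs mount, and not just a plain local directory that the recordings are landing in, is something like:

# if the bucket is mounted, s3fs should show up in the mount table
mount | grep s3fs
df -h /mnt/s3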

What’s the easiest way?

/usr/local/bin/s3fs – when using an FTP client, this directory doesn’t exist. Is it required to make this work? FTP won’t let me create a new folder.

Also, I didn’t use the /etc/passwd-s3fs file with the keys because it’s done in the init.sh file. Is this OK?

I’ve been trying that guide but I’m having trouble creating the passwd-s3fs file, and I don’t want to have to do this every time I start an EC2 instance. I am trying to set it up using only the startup package and FTP. Can you help me with that?

The problem I’m having is that when I try to edit the “/etc/passwd-s3fs” file, it says [read-only] at the bottom, so it won’t let me edit it.

I’m logged in as ec2-user, and it won’t let me log in as root.
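One workaround, assuming the stock Amazon Linux setup where ec2-user has sudo rights: create or edit the file through sudo and tighten its permissions, for example (the key values are placeholders):

# write the credentials file as root, then restrict it (s3fs expects strict permissions)
sudo sh -c 'echo "ACCESSKEYID:SECRETACCESSKEY" > /etc/passwd-s3fs'
sudo chmod 600 /etc/passwd-s3fs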

So every time we start an EC2 instance and want to transfer recordings from EC2 to S3, we have to log in over SSH and follow this procedure, manually typing in the access key ID and secret access key? Is there no way to automate this?
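For what it’s worth, the init.sh approach in the startup package is exactly that automation; a rough, untested sketch (bucket name and keys are placeholders) might look like:

#!/bin/sh
# runs once at instance startup via the RunScript tag in startup.xml;
# the keys could also be written to /etc/passwd-s3fs here instead of being passed inline
mkdir -p /mnt/s3
/usr/bin/s3fs mybucket -o accessKeyId=XXXXXXXXXX -o secretAccessKey=XXXXXXXXXX -o default_acl=public-read /mnt/s3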

Got the file to transfer to S3; now I’m having trouble playing the file using vods3.

WARN	server	comment	2012-01-13	00:17:52	-	-	-	-	-	1405.995	-	-	-	-	-	-	-	amazons3/mybucket/testing14.mp4	MediaReaderH264.open[1]: java.io.IOException: MediaCacheRandomAccessReader.open: Item not in cache: amazons3/mybucket/testing14.mp4
WARN	server	comment	2012-01-13	00:24:04	-	-	-	-	-	1778.166	-	-	-	-	-	-	-	amazons3/mybucket/Extremists.flv	open: java.io.IOException: MediaCacheRandomAccessReader.open: Item not in cache: amazons3/mybucket/Extremists.flv

Thanks for getting back to me so quickly, Charlie. You’re good!

I know that S3FS is used to transfer the files from EC2 to S3, but I thought that the current implementation that is bundled with Wowza first records the FLV to the EC2 instance and detects when the recording is complete. When the recording has finished, it transfers the complete FLV to S3.

Is that right? Is there any way this process could be simplified by recording directly to the S3 bucket? Programs would then not need to record the file and wait while it is transferred to S3 before playback.

Thanks, Charlie.

Should I be able to see s3 in FileZilla?

It’s mounted with s3fs. Everything is working fine, except I can’t see the ‘s3’ directory in FileZilla.

What do you mean by “within the FTP path”?

Richard

I’m a rookie, just FYI.

I have followed the instructions above and added an “init.sh” file to the root of my .ZIP archive (startup package – same version you used). I added the following:

#!/bin/sh

mkdir /mnt/s3

/usr/bin/s3fs cogwareplayhouston.s3.amazonaws.com -o accessKeyId=XXXXX -o secretAccessKey=XXXXX -o default_acl=public-read /mnt/s3

Obviously the “XXXXX” would be my keys. The problem I’m having is that when I look at the launched instance, I don’t see that an “s3” directory has been created in the “mnt” directory. Isn’t this script supposed to create that directory?

My goal is to simply have a recording automatically transfer from one place and copy over to my s3 bucket once it has completed.

Any suggestions?