Wowza Community

S3FS FUSE-based file system

These paths will work in the fileMoverDestinationPath property. If your mount location is at one of those locations, it might be useful.



This would assume you mounted your S3FS bucket here:
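For example, a minimal sketch of the property (the /mnt/s3 mount point and the recordings subfolder are assumptions for illustration; adjust both to your own setup):

```
<Property>
	<Name>fileMoverDestinationPath</Name>
	<Value>/mnt/s3/recordings</Value>
</Property>
```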


It’s not that helpful for you really, I don’t think.


Don’t create folders inside the S3 mount with Linux commands. If you want “subfolders”, create sub-buckets with S3 Organizer or a similar tool.


In Liverepeater (origin/edge), the actual stream name that an edge restreams from an origin is something like:


Some characters are replaced, but otherwise it is saved under that name. You can use this built-in module and its properties to change the name:
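The built-in module referred to here is ModuleMediaWriterFileMover (mentioned further down this thread). A hedged sketch of its Application.xml properties, assuming an s3fs mount at /mnt/s3 (hypothetical path); verify the property names against the module documentation for your Wowza version:

```
<Property>
	<Name>fileMoverDestinationPath</Name>
	<Value>/mnt/s3/recordings</Value>
</Property>
<Property>
	<Name>fileMoverDeleteOriginal</Name>
	<Value>false</Value>
	<Type>Boolean</Type>
</Property>
```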


You will have to build the ModuleWriteListener example at the top of the post instead of using the built-in module, and add code to rename and copy in the onWriteComplete handler.
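The rename-and-copy step could boil down to a plain-JDK helper like the sketch below. In the real module this would be called from the onWriteComplete handler, which receives the finished file from Wowza; the paths and naming scheme here are illustrative assumptions. A copy (rather than a rename) is used deliberately, because a rename cannot cross file systems, e.g. from local disk onto an s3fs mount.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class RecordingMover {
    // Copy a finished recording into a destination directory under a new name.
    // Files.copy is used instead of File.renameTo because renames fail
    // across file systems (local disk -> s3fs mount).
    public static Path copyRenamed(Path source, Path destDir, String newName)
            throws IOException {
        Files.createDirectories(destDir);
        Path dest = destDir.resolve(newName);
        Files.copy(source, dest, StandardCopyOption.REPLACE_EXISTING);
        return dest;
    }

    public static void main(String[] args) throws IOException {
        // Demo with temp files standing in for a recording and an s3fs mount.
        Path src = Files.createTempFile("myStream", ".mp4");
        Path mount = Files.createTempDirectory("s3mount");
        Path moved = copyRenamed(src, mount.resolve("recordings"), "myStream_edge.mp4");
        System.out.println(Files.exists(moved)); // true
    }
}
```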


Sorry, yes, this post.

When using these examples, make sure the name of the file matches the name of the module, and adjust the package path (the line at the top) as necessary.

So, for ModuleWriteListener, the file name must be ModuleWriteListener.java

Trick: you can select the text of the example, copy it, then right-click a package in the IDE and paste; the module will be created with the package path adjusted automatically.


Yes, that is recommended if you want content on S3. You can copy instead of move, so it can still be played back locally. It won’t be available as VOD from local storage until the recording is complete, and it won’t be available to MediaCache edge servers until it finishes being copied to the S3 bucket (using s3fs).


Use Liverepeater (origin/edge) and the Wowza Load Balancer for live streams.

Use MediaCache and the Wowza Load Balancer for VOD.


You can use the nDVR AddOn on live streams from edge servers.

MediaCache caches to disk, but in chunks that you would not recognize. There are more details and documentation in the package.


I don’t think this is useful for your purpose. I answered another post from you about streaming from Android; see if that helps.


Please explain in more detail. What version of Wowza are you using?


I understand the issue with S3. However, if you have mounted an EBS volume, does the same thing apply?

I want to avoid writing to an instance if possible.

I.e., can I record to EBS, then transfer the recorded file to S3?

Streaming anything directly off S3 is not recommended for performance reasons. Better to use vods3/MediaCache.

The “ls” command doesn’t show any results. Any suggestions?

In my case, the ls command hangs the PuTTY terminal :frowning:

The s3fs command seems to work, but after that I just can’t do anything with the mounted bucket …

mkdir test output:

drwxr-xr-x 2 root root 4096 Mar 5 09:45 test

After the s3fs command:

drwxr-xr-x 1 root root 0 Dec 31 1969 test

I tried both ami-47bc9733 and ami-41bc9735; the keys are OK, the bucket exists at root, and s3cmd works correctly. I don’t know what’s going on …

Still not working here. European S3 and EC2.

I’ve installed s3cmd and everything is ok with my buckets …

Actually, it works with a US bucket and an EU AMI.

I created a US bucket from scratch, set the ACL, no distribution. s3fs works; I can ls the directory and copy to it.

I created an EU bucket from scratch, set the ACL, no distribution. s3fs mounts, but ls, copy, etc. fail and PuTTY gets stuck …

Is any fix possible?

No EU bucket support:

Patch available:

Fixes: it changes the way URLs are used, passing from one form to another.

No comment …

About ModuleMediaWriterFileMover: it works great, but moving the file is so slow, and then WMS is blocked in the module.

I’m moving the recording from EC2 EU to S3 US. Could that explain why it takes so long?

I’d like to ask if you could change the module to externalize the file transfer, but the answer will certainly be DIY :wink:

Thank you …
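Externalizing the transfer, as asked above, could be sketched by handing the copy to a single-threaded executor, so the write-complete callback returns immediately instead of blocking Wowza while the file crawls to S3. This is a plain-JDK sketch under that assumption, not the module’s actual code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AsyncFileMover {
    // One worker thread: transfers queue up and run in the background,
    // so the caller (e.g. a write-complete handler) is never blocked.
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public void moveLater(Path source, Path dest) {
        worker.submit(() -> {
            try {
                Files.createDirectories(dest.getParent());
                Files.copy(source, dest, StandardCopyOption.REPLACE_EXISTING);
                Files.delete(source); // remove the local copy once the slow copy finishes
            } catch (IOException e) {
                e.printStackTrace(); // a real module would use Wowza's logger instead
            }
        });
    }

    public boolean shutdownAndWait(long seconds) throws InterruptedException {
        worker.shutdown();
        return worker.awaitTermination(seconds, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        AsyncFileMover mover = new AsyncFileMover();
        Path src = Files.createTempFile("rec", ".mp4");
        Path dst = Files.createTempDirectory("s3mount").resolve("rec.mp4");
        mover.moveLater(src, dst); // returns immediately
        mover.shutdownAndWait(10);
        System.out.println(Files.exists(dst) && !Files.exists(src)); // true
    }
}
```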

For the record, recording directly to S3 using S3FS is a very bad idea. Wowza re-opens the recording file for writing for every streamed chunk. Since S3 does not support appending bytes to an existing object, S3FS re-sends the entire file contents every time a chunk is recorded.

The longer your recording is, the more data will need to be sent for each chunk. For recordings longer than a minute you’ll start experiencing a delay when closing the stream. For a 20 minute recording that delay will be 5+ minutes. Your private network bandwidth will be maxed out.
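The blow-up is quadratic, which a quick back-of-the-envelope calculation makes concrete. The numbers below (a ~2 Mbps stream written in 10-second chunks) are illustrative assumptions, not measurements from this thread:

```java
public class S3fsAppendCost {
    // With no append support, each chunk forces a re-upload of the whole file
    // so far: total bytes sent = chunk * (1 + 2 + ... + n) = chunk * n(n+1)/2.
    public static long totalBytesSent(long chunkBytes, long chunks) {
        return chunkBytes * chunks * (chunks + 1) / 2;
    }

    public static void main(String[] args) {
        long chunk = 2_500_000L;        // ~2 Mbps stream, 10-second chunks
        long chunks = 120;              // a 20-minute recording
        long fileSize = chunk * chunks; // 300 MB final file
        long sent = totalBytesSent(chunk, chunks);
        System.out.println(fileSize);   // 300000000
        System.out.println(sent);       // 18150000000 (about 60x the file size)
    }
}
```

So for these assumed numbers, a 300 MB recording pushes roughly 18 GB over the private network, which is why the delay grows so sharply with recording length.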

A better approach is to simply record to local disk and transfer to the S3 mount once the recording is finished. From my experience, EC2 to S3 bandwidth (for writes) is about 60 Mbps (around 7 MB / sec) on a small instance.

Hi Richard,

After mounting with s3fs, everything works well except the “ls” Linux command.

I can read/write/delete/create any file and list them afterwards with the S3Fox extension, but on the Wowza server I don’t see the files.

The “ls” command doesn’t show any results. Any suggestions?

What version of S3FS is precompiled in the latest EC2 image?