
Thread: S3FS Fuse-based file system

  1. #1
    Join Date
    Dec 2007
    Posts
    28,412

    Default S3FS Fuse-based file system


    This forum post has moved to the following article:

    http://www.wowzamedia.com/forums/content.php?72

    Enjoy,
    The Wowza Team
    Last edited by charlie; 10-04-2010 at 12:46 PM.

  2. #2
    Join Date
    Oct 2009
    Posts
    4

    Default auto-mount s3fs on reboot

    I love the s3fs concept and have been working with it to record audio without issue. I have created a script (called from startup.xml) that auto-mounts an S3 bucket at startup, but the bucket doesn't get remounted if I reboot the EC2 server. Interestingly, if I truncate the wowzamediaserverpro_startup.log file before the reboot, no script output appears after the reboot, which leads me to believe the script doesn't get run a second time.

    Here is the startup.xml file:
    Code:
    <Startup>
     <Commands>
        <RunScript>
          <Script>tuning/mount_s3.sh</Script>
        </RunScript>
        <Install>
          <Folder>wowza</Folder>
        </Install>
        <RunScript>
          <Script>tuning/tune.sh</Script>
        </RunScript>
      </Commands>
    </Startup>


    Here is the mount_s3.sh file (excuse a couple of debugging echos in there):
    Code:
    #!/bin/sh
    S3DIR="/mnt/s3"
    S3BUCKET="bucket-name"
    echo "Unmounting and removing the s3 recording bucket at $S3DIR if it exists"
    echo "Check the contents of /mnt"
    echo "`ls -l /mnt`"
    if [ -d $S3DIR ]; then
      echo "ls -l $S3DIR"
      echo "`ls -l $S3DIR`"
      echo "/bin/fusermount -u $S3DIR"
      /bin/fusermount -u $S3DIR
    else
      echo "$S3DIR doesn't exist"
    fi
    echo "rm -rf $S3DIR"
    rm -rf $S3DIR
    echo "Mounting the s3 recording bucket at $S3DIR"
    echo "Creating /etc/passwd-s3fs"
    echo "access_key:secret_key" > /etc/passwd-s3fs
    echo "mkdir -p $S3DIR"
    mkdir -p $S3DIR
    echo "/usr/bin/s3fs $S3BUCKET $S3DIR"
    /usr/bin/s3fs $S3BUCKET $S3DIR
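
    A possible workaround (untested, and assuming the startup package only runs these scripts at first launch rather than on every boot) would be to also call the mount script from /etc/rc.local so it runs again after a reboot. The path below is a guess; adjust it to wherever the startup package unpacks the tuning folder:
    Code:
    # hypothetical addition to /etc/rc.local; the script location is an assumption
    /opt/working/tuning/mount_s3.sh >> /var/log/mount_s3.log 2>&1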

  3. #3
    Join Date
    Feb 2009
    Posts
    23

    Default

    For the record, recording directly to S3 using S3FS is a very bad idea. Wowza re-opens the recording file for writing for every streamed chunk. Since S3 does not support appending bytes to an existing object, S3FS re-sends the entire file contents every time a chunk is recorded.

    The longer your recording is, the more data will need to be sent for each chunk. For recordings longer than a minute you'll start experiencing a delay when closing the stream. For a 20 minute recording that delay will be 5+ minutes. Your private network bandwidth will be maxed out.

    A better approach is to simply record to local disk and transfer to the S3 mount once the recording is finished. From my experience, EC2 to S3 bandwidth (for writes) is about 60 Mbps (around 7 MB / sec) on a small instance.
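
    A minimal sketch of that approach, assuming recordings land in the default Wowza content folder and the bucket is mounted at /mnt/s3 (both paths are assumptions), run once the stream is closed:
    Code:
    #!/bin/sh
    # hypothetical helper: copy a finished recording to the S3 mount, then remove the local copy
    LOCAL_DIR="/usr/local/WowzaMediaServerPro/content"
    S3DIR="/mnt/s3"
    FILE="$1"   # e.g. myStream.flv
    cp "$LOCAL_DIR/$FILE" "$S3DIR/$FILE" && rm "$LOCAL_DIR/$FILE"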
    Last edited by slegay; 05-19-2010 at 06:54 PM.

  4. #4

    Default Ebs

    I understand the issue with S3. However, if you have mounted an EBS volume, does the same thing apply?
    I want to avoid writing to the instance's local storage if possible.
    I.e., can I record to EBS and then transfer the recorded file to S3?

  5. #5
    Join Date
    Dec 2007
    Posts
    28,412

    Default

    Yes, EBS does not have this problem; it is supposed to have better throughput than the instance's local storage. That's what Amazon says on their EBS page. And it is an excellent way to insulate yourself from instance failure.
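
    A rough sketch of that workflow, assuming the EBS volume is attached as /dev/sdf (device name and mount points are assumptions):
    Code:
    # format the attached EBS volume (first use only), mount it, record there,
    # then copy finished files to the S3 mount
    mkfs -t ext3 /dev/sdf
    mkdir -p /mnt/ebs
    mount /dev/sdf /mnt/ebs
    cp /mnt/ebs/myStream.flv /mnt/s3/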

    Richard

  6. #6

    Default

    Hi Richard,

    After mounting s3fs, everything is working well except the "ls" Linux command.
    I can read/write/delete/create any file, and list it afterwards with the S3Fox extension, but from inside the Wowza server I don't see the files.

    The "ls" command doesn't show any results. Any suggestions?
    What version of s3fs is precompiled in the latest EC2 image?

    Thanks,
    Ale

  7. #7
    Join Date
    Dec 2007
    Posts
    28,412

    Default

    Using the ls command is the test, so if that's not working something is not right.

    Which AMI are you using? I used ami-ff8b6796 yesterday to try out the new ModuleMediaWriterFileMover, and it worked great.

    Check your AWS keys.
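
    A quick way to check, assuming your setup reads credentials from /etc/passwd-s3fs (format: accessKeyId:secretAccessKey) as in the startup script earlier in this thread:
    Code:
    # verify the credentials file, then remount and list by hand
    cat /etc/passwd-s3fs
    chmod 600 /etc/passwd-s3fs   # some s3fs builds reject a world-readable credentials file
    /bin/fusermount -u /mnt/s3
    /usr/bin/s3fs your-bucket /mnt/s3
    ls -l /mnt/s3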

    Richard

  8. #8

    Default

    Hi Richard,

    I'm using the same AMI.

    My mount command is:
    Code:
    /usr/bin/s3fs wmconsulting/wowza -o accessKeyId=ACCESS-KEY -o secretAccessKey=SECRET_KEY -o use_cache=/tmp -o allow_other -o default_acl=public-read /mnt/s3

    If I list the files in S3 (using the s3cmd tool) I get this:

    Code:
    s3cmd ls s3://wmconsulting/wowza/
    Code:
    2010-02-11 13:16       289   s3://wmconsulting/wowza/BigBuckCupertino.smil
    2010-02-11 13:16  57981953   s3://wmconsulting/wowza/BigBuckCupertinoHi.mov
    2010-02-11 13:16  25065590   s3://wmconsulting/wowza/BigBuckCupertinoLo.mov
    2010-02-11 13:16  43695703   s3://wmconsulting/wowza/BigBuckCupertinoMed.mov
    2010-02-10 16:15  22918100   s3://wmconsulting/wowza/Extremists.flv
    2010-02-10 16:15  18261973   s3://wmconsulting/wowza/Extremists.m4v
    2010-02-12 03:28        27   s3://wmconsulting/wowza/radiostation.stream
    2010-02-16 03:48        10   s3://wmconsulting/wowza/test
    2010-02-16 04:28     45040   s3://wmconsulting/wowza/wms-plugin-collection.jar
    but if I list the files at the mount point with the ls command, it doesn't show any files:
    Code:
    [root@ip-10-244-00-00 s3]# pwd
    /mnt/s3
    [root@ip-10-244-00-00 s3]# ls -la
    total 0
    Trying a read/write action, I can see the file:

    Code:
    [root@ip-10-244-00-00 s3]# cat test
    test fuse
    [root@ip-10-244-00-00 s3]# echo 'Fuse test write' >> test
    [root@ip-10-244-00-00 s3]# cat test
    test fuse
    Fuse test write
    Any suggestions?

    BTW, "s3cmd" is a great tools and please if possible built-in by default in the S3 image.

    Thanks in advance
    Alejandro

  9. #9
    Join Date
    Dec 2007
    Posts
    28,412

    Default

    Try it without the sub-bucket and without the extra args. I assume you are replacing ACCESS-KEY and SECRET_KEY with your actual access and secret keys.
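
    Something like this minimal form, with your credentials in /etc/passwd-s3fs as access_key:secret_key (the same form the startup script earlier in this thread uses):
    Code:
    /bin/fusermount -u /mnt/s3
    /usr/bin/s3fs wmconsulting /mnt/s3
    ls -l /mnt/s3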

    Richard

  10. #10

    Default

    Richard,

    I have tried with the basic command, and without the sub-bucket, and the output is:

    Code:
    ls -la 
    total 1
    ---------- 1 root root 0 Feb 10 11:14 wowza_$folder$
    I can't see the subfolders inside this bucket.

    Right now I can only list files that are in the root directory of the bucket, not files inside a folder.

    Code:
    [root@ip-10-244-00-00 s3]# pwd
    /mnt/s3
    [root@ip-10-244-00-00 s3]# ls -la
    total 1
    -rw-r--r-- 1 root root 0 Feb 16 08:35 test.rootdirectory
    ---------- 1 root root 0 Feb 10 11:14 wowza_$folder$
    I read on the s3fs page about problems with subfolders and the S3Fox extension.
    If you create a folder with "mkdir" you can see it on the FUSE file system but not in S3Fox, and if you create the folder with S3Fox, the listing shows "dirname_$folder$":
    Code:
    [root@ip-10-244-00-00 s3]# ll
    total 2
    ---------- 1 root root 0 Feb 16 08:39 test-s3fox_$folder$
    drwxr-xr-x 1 root root 0 Feb 16 08:38 test-sub-bucket
    -rw-r--r-- 1 root root 0 Feb 16 08:35 test.rootdirectory
    ---------- 1 root root 0 Feb 10 11:14 wowza_$folder$
    Do you have any more information about this issue?

    Thanks
    Ale
