I showed you the example we have for that already, “ModuleVideoNameList”. This one:
http://www.wowza.com/community/t/-/237&page=3#22
Richard
This is an AS3 version. If you are using Flex, you can use a Bindable ArrayCollection with a ComboBox. In Flash CS you can change it to an Array.
netconnection.call("getVideoNames", new Responder(videoNamesHandler));

[Bindable]
private var recordedStreamsDP:ArrayCollection = new ArrayCollection();

private function videoNamesHandler(results:Object):void
{
    for (var item:String in results)
    {
        // In Flex, use ObjectProxy to populate a Bindable ArrayCollection.
        var obj:ObjectProxy = new ObjectProxy();
        obj.label = results[item];
        obj.value = item;
        recordedStreamsDP.addItemAt(obj, 0);
    }
}
This should work in Flash AS3 or AS2:
var result:Object = new Object();
result.onResult = function(results:Object)
{
    for (var item:String in results)
    {
        trace(results[item]);
        trace(item);
    }
}
netconnection.call("getVideoNames", result);
Richard
You can’t use an Application module with .NET. You can use an HTTProvider or a Web Service. But this code could be made to work in an HTTProvider and output XML instead of returning an object. You would have to tell it which application is involved.
Since you will be passing data in, start with this post:
http://www.wowza.com/community/t/-/245
And since you will be building XML output, you can look at this post for reference:
http://www.wowza.com/forums/showthread.php?t=597
Richard
I really don’t have that exact thing; these are all the pieces, and you just have to mix them together. The IDE has a wizard for creating a new HTTProvider. The code in that module will work in an HTTProvider, except that where the Application is part of the context of a Module, you will have to pass in the Application name and get a reference to it. There are examples of that in the HTTProvider examples.
Looks like I pasted in the wrong example for the 2nd link. I meant to give you this one:
http://www.wowza.com/community/t/-/60
Richard
If you mean a snapshot of a stream as in that post, that’s what we have for that purpose.
Richard
You can create an HTTProvider version of this application module.
https://www.wowza.com/docs/http-providers
You need the IDE. It would take some understanding of Wowza Application modules and HTTProviders. There is a pretty good example of an HTTProvider version of an application module in the LiveStreamRecord add-on.
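For reference, an HTTProvider is registered per HostPort in conf/VHost.xml. A minimal sketch of such an entry, where the `BaseClass` name and `snapshot*` request filter are hypothetical examples for a provider you would write yourself:

```xml
<!-- Added inside the <HTTPProviders> list of the relevant <HostPort> in VHost.xml -->
<HTTPProvider>
    <!-- Fully qualified name of your provider class (hypothetical example) -->
    <BaseClass>com.mycompany.wms.example.HTTPProviderSnapshot</BaseClass>
    <!-- Requests matching this pattern are routed to the provider -->
    <RequestFilters>snapshot*</RequestFilters>
    <AuthenticationMethod>none</AuthenticationMethod>
</HTTPProvider>
```

The provider class itself would extend Wowza’s HTTProvider2Base and, as Richard notes above, look up the application by the name passed in on the query string.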
Richard
Looking at the code, I think it will, but the output will be a one-frame FLV.
Richard
No, I am pretty sure that is not possible.
Richard
However, on the camera side, try setting the key frame interval to 60 and the FPS to 30. Then you will have a key frame every 2 seconds.
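The arithmetic behind that suggestion, as a quick sketch (the class and method names are just illustrative):

```java
// Keyframe spacing (seconds) = key frame interval (frames) / frame rate (fps).
public class KeyframeSpacing {
    static double spacingSeconds(int keyframeIntervalFrames, int fps) {
        return (double) keyframeIntervalFrames / fps;
    }

    public static void main(String[] args) {
        // Key frame interval 60 at 30 fps -> a key frame every 2.0 seconds.
        System.out.println(spacingSeconds(60, 30));
    }
}
```

So a lower interval or a higher FPS both shrink the gap between key frames, at the cost of bitrate.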
Richard
You could lower the bitrate or use a higher compression setting (if it’s an Axis cam) so you can set the FPS higher. I don’t know of another solution otherwise.
Richard
It opens in FFprobe and should be suitable for making a JPEG from.
Richard
hi,
this should work for you
ffmpeg -i yourvideo.flv -s 160x120 -ss 00:00:02 -f mjpeg -t 0.001 -y yourthumbnail.jpg
hope this helps
koz
In fact this will work better, giving you better compression on the JPEG:
ffmpeg -i yourvideo.flv -vcodec mjpeg -vframes 1 -an -f rawvideo -s 160x120 -ss 00:00:02 -y yourthumbnail.jpg
This takes an image from two seconds into the video and creates a 160x120 JPEG.
hope this helps,
koz
thx, I’m really stupid. I don’t see it
A new problem; it’s not related to the Wowza server but to conversion with FFmpeg. I usually use:
ffmpeg -i file.flv -vcodec png -vframes 1 -ss 0:0:5.000 -an -f rawvideo -y snapshot.png
which usually works fine. The latest version of Wowza now supports H.264, so I tried it and it’s really useful; streaming quality is much better. But when I want to generate snapshots from those videos, the command fails.
FFmpeg version SVN-r12665, Copyright (c) 2000-2008 Fabrice Bellard, et al.
configuration: --enable-gpl --enable-postproc --enable-swscale --enable-avfilter-lavf --enable-pthreads --enable-liba52 --enable-avisynth --enable-libfaac --enable-libfaad --enable-libgsm --enable-memalign-hack --enable-libmp3lame --enable-libnut --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --cpu=i686 --extra-ldflags=-static
libavutil version: 49.6.0
libavcodec version: 51.54.0
libavformat version: 52.13.0
libavdevice version: 52.0.0
built on Apr 2 2008 22:35:11, gcc: 4.2.3
[flv @ 00AA6C70]Unsupported video codec (7)
... (line repeated many times)
Seems stream 0 codec frame rate differs from container frame rate: 1000.00 (1000/1) -> 25.00 (25/1)
Input #0, flv, from 'file.flv':
Duration: 00:00:50.3, start: 0.000000, bitrate: 80 kb/s
Stream #0.0: Video: 0x0007, 25.00 tb(r)
Stream #0.1: Audio: mp3, 22050 Hz, stereo, 80 kb/s
picture size invalid (0x0)
Cannot allocate temp picture, check pix fmt
It seems to me that this FFmpeg build doesn’t support H.264, which is the only thing that has changed. But surprisingly, the FFmpeg documentation and official website clearly say that FLV with H.264 is supported. Has anybody got an idea, or already seen this error?
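For what it’s worth, the `Unsupported video codec (7)` lines point at the FLV codec ID: per the Adobe FLV spec, the first byte of a video tag body packs the frame type in the high nibble and the codec ID in the low nibble, and codec ID 7 is AVC/H.264, which that 2008-era FFmpeg build’s FLV demuxer did not recognize. A small sketch of the byte layout (the class and method names are illustrative, not from any library):

```java
// Decodes the first byte of an FLV video tag body (per the Adobe FLV spec):
// high nibble = frame type (1 = key frame), low nibble = codec ID (7 = AVC/H.264).
public class FlvVideoTagByte {
    static int frameType(int firstByte) {
        return (firstByte >> 4) & 0x0F;
    }

    static int codecId(int firstByte) {
        return firstByte & 0x0F;
    }

    public static void main(String[] args) {
        int b = 0x17; // first byte of an H.264 key frame tag
        System.out.println(frameType(b)); // 1 -> key frame
        System.out.println(codecId(b));   // 7 -> the "video codec (7)" in the log
    }
}
```

So the fix is an FFmpeg build recent enough for its FLV demuxer to know codec ID 7, not a change on the Wowza side.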
Is it possible to show the captured frame in the Flash client?
I want to save snapshots of live streams to the database so that later I can see them from the admin panel, which is a Flex client.
Thanks
I modified the Java code in the first post. See if that does the trick. I did not have time to test it.
Charlie
Hi Charlie,
where do I find the code snippet mentioned above?
Regards,
Markus
Hi Richard,
I use the code you posted on the first page for creating snapshots.
The problem is that this code doesn’t seem to work for recorded H.264 streams.
At the top of page 3 Charlie mentioned that he had modified the code on page 1 so that it worked for H.264 as well.
I’m especially interested in these modifications.
For the sake of completeness, here is the code I use at the moment for creating snapshots (it is a slightly modified version of the createSnapshotVOD(…) code on page 1):
public void createFlvSnapshot(Snapshot snapshot)
{
    long timecodeInMillis = Integer.valueOf(snapshot.getStreamPosInSec()) * 1000;
    if (!snapshot.getSnapshotDirectory().isDirectory())
    {
        snapshot.getSnapshotDirectory().mkdirs();
    }
    if (snapshot.getFlvFile().exists())
    {
        log.info("create flv snapshot from: " + snapshot.getFlvFile().getAbsolutePath() + " dst: " + snapshot.getFlvSnapshotFile().getAbsolutePath());
        AMFPacket lastVideoKeyFrame = null;
        try
        {
            BufferedInputStream is = new BufferedInputStream(new FileInputStream(snapshot.getFlvFile()));
            FLVUtils.readHeader(is);
            AMFPacket amfPacket;
            while ((amfPacket = FLVUtils.readChunk(is)) != null)
            {
                if (lastVideoKeyFrame != null && amfPacket.getTimecode() > timecodeInMillis)
                    break;
                if (amfPacket.getType() != IVHost.CONTENTTYPE_VIDEO)
                    continue;
                if (FLVUtils.getFrameType(amfPacket.getFirstByte()) == FLVUtils.FLV_KFRAME)
                    lastVideoKeyFrame = amfPacket;
            }
            is.close();
        }
        catch (Exception e)
        {
            log.error("Error: createSnapshotVOD: reading flv: " + e.toString());
        }
        if (lastVideoKeyFrame != null)
        {
            try
            {
                BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(snapshot.getFlvSnapshotFile(), false));
                FLVUtils.writeHeader(out, 0, null);
                FLVUtils.writeChunk(out, lastVideoKeyFrame.getDataBuffer(), lastVideoKeyFrame.getSize(), 0, (byte)lastVideoKeyFrame.getType());
                out.close();
            }
            catch (Exception e)
            {
                log.error("Error: createSnapshotVOD: writing flv: " + e.toString());
            }
        }
    }
}
Hi Charlie,
AMFPacket codecConfig = stream.getVideoCodecConfigPacket(packet.getAbsTimecode());
how do I extract such a codecConfig packet from a recorded H.264 file?
Regards
Markus
Hi Charlie,
I want to take a snapshot from an FLV file which was recorded some time ago, so I don’t have an IMediaStream object at the time I want to create the snapshot.
My first step is to “load” the file via:
BufferedInputStream is = new BufferedInputStream(new FileInputStream(snapshot.getFlvFile()));
Then I search for the AMFPacket at the given timecode:
FLVUtils.readHeader(is);
AMFPacket amfPacket;
while ((amfPacket = FLVUtils.readChunk(is)) != null)
{
    if (lastVideoKeyFrame != null && amfPacket.getTimecode() > timecodeInMillis)
        break;
    if (amfPacket.getType() != IVHost.CONTENTTYPE_VIDEO)
        continue;
    if (FLVUtils.getFrameType(amfPacket.getFirstByte()) == FLVUtils.FLV_KFRAME)
        lastVideoKeyFrame = amfPacket;
}
is.close();
In the last step I write this packet (lastVideoKeyFrame) to a new FLV file. The code for this step is:
BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(snapshot.getFlvSnapshotFile(), false));
FLVUtils.writeHeader(out, 0, null);
//FLVUtils.writeChunk(out, lastVideoKeyFrame.getDataBuffer(), lastVideoKeyFrame.getSize(), 0, (byte)lastVideoKeyFrame.getType());
AMFPacket codecConfig = stream.getVideoCodecConfigPacket(packet.getAbsTimecode());
if (codecConfig != null)
{
    FLVUtils.writeChunk(out, codecConfig.getDataBuffer(), codecConfig.getSize(), 0, (byte)codecConfig.getType());
}
FLVUtils.writeChunk(out, lastVideoKeyFrame.getDataBuffer(), lastVideoKeyFrame.getSize(), 0, (byte)lastVideoKeyFrame.getType());
out.close();
Regards,
Markus
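One way to get the codec config without an `IMediaStream`, sketched under the assumption that the recording follows the FLV spec: in an H.264 FLV the decoder configuration (the AVCDecoderConfigurationRecord) travels in a video tag whose second body byte, the AVCPacketType, is 0. So while scanning the recorded file you could keep the first such packet and write it out before the key frame, instead of calling `stream.getVideoCodecConfigPacket(...)`. In Wowza terms the tag body is what `AMFPacket.getDataBuffer()` holds; the helper below is hypothetical and only shows the byte test:

```java
// Identifies the AVC sequence header in an H.264 FLV video tag body:
// byte 0 low nibble = codec ID (7 = AVC/H.264), byte 1 = AVCPacketType
// (0 = sequence header / decoder config, 1 = NALU).
public class AvcSequenceHeader {
    static final int CODEC_ID_AVC = 7;

    static boolean isAvcSequenceHeader(byte[] tagBody) {
        return tagBody.length >= 2
                && (tagBody[0] & 0x0F) == CODEC_ID_AVC
                && tagBody[1] == 0;
    }

    public static void main(String[] args) {
        byte[] seqHeader = { 0x17, 0x00, 0x00, 0x00, 0x00 }; // key frame + AVC, type 0
        byte[] nalu      = { 0x17, 0x01, 0x00, 0x00, 0x00 }; // key frame + AVC, type 1
        System.out.println(isAvcSequenceHeader(seqHeader)); // true
        System.out.println(isAvcSequenceHeader(nalu));      // false
    }
}
```

With that, the scan loop above could remember the first packet passing this test as `codecConfig` and write it with `FLVUtils.writeChunk(...)` before the key frame, which is what the `stream.getVideoCodecConfigPacket(...)` call was standing in for.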