Wowza Community

publishing raw h.264 byte stream

I am trying to stream an h.264 encoded video using its raw h.264 byte stream that I read in from a file (not through RTP).

My code works fine when reading in an FLV file. That is, I have code for FLV files that can read in the data, create an AMFPacket using FLVUtils.readHeader() and FLVUtils.readChunk(), and then publish it to the stream using Publisher.addVideoData(). But I’m not sure what to do when trying to read in raw h.264 bytes. Does FLVUtils provide functions for doing that?
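
For reference, the FLV path is essentially this standard loop (a sketch based on the common Wowza Publisher examples; exact package names and add*Data overloads may differ, and the file path is a placeholder):

import java.io.*;
import com.wowza.wms.amf.*;
import com.wowza.wms.stream.*;
import com.wowza.wms.stream.publish.*;
import com.wowza.wms.vhost.*;

// Sketch: read FLV tags and feed them to an already-created-and-published
// Publisher (this method lives inside a module class)
void pumpFlv(Publisher publisher) throws Exception
{
    BufferedInputStream is = new BufferedInputStream(new FileInputStream("content/sample.flv"));
    FLVUtils.readHeader(is); // skip the FLV file header

    AMFPacket packet;
    while ((packet = FLVUtils.readChunk(is)) != null)
    {
        switch (packet.getType())
        {
        case IVHost.CONTENTTYPE_AUDIO:
            publisher.addAudioData(packet.getData(), packet.getSize(), packet.getAbsTimecode());
            break;
        case IVHost.CONTENTTYPE_VIDEO:
            publisher.addVideoData(packet.getData(), packet.getSize(), packet.getAbsTimecode());
            break;
        case IVHost.CONTENTTYPE_DATA:
            publisher.addDataData(packet.getData(), packet.getSize(), packet.getAbsTimecode());
            break;
        }
    }
    is.close();
}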

I know I can package up the source h.264 data into RTP packets and then have Wowza read them from the RTP port, but that seems like a lot of unnecessary extra steps that would introduce latency, since I have access to the raw h.264 data directly.

I assume Wowza already knows how to stream the raw h.264 data using RTP packets after stripping off the RTP header, but do developers have access to those functions? If so, does anyone have some example code for how to add raw h.264 data using Publisher.addVideoData()?

Thanks for your help!

We do not have any utilities for reading raw h.264 files. If you can do FLV, then it is just a matter of figuring out the file format.

Charlie

I don’t understand what you are asking. The headers are needed as part of the addVideoData and addAudioData calls in the Publisher API. They are also part of the data of an AMFPacket.

Charlie

Here is working code for sending audio. I would get this working first:

https://www.wowza.com/docs/how-to-use-publisher-api-and-jspeex-to-publish-an-audio-stream-voip-integration

I know it is an audio example but it is at least the whole process end to end. BTW, the codec config part is important. H.264 will not play back without it.

Charlie

It needs to have a special, very short header, which I think is described in the server-side API docs for the Publisher API:

Basic packet format:
Audio:
AAC
[1-byte header]
[1-byte codec config indicator (0 - codec config packet, 1 - audio data)]
[n-bytes audio content or codec config data]
All others
[1-byte header]
[n-bytes audio content]
Below is the bit layout of the header byte of data (table goes from least significant bit to most significant bit):
1 bit Number of channels:   
  0    mono 
  1    stereo 
 
1 bit Sample size:   
  0    8 bits per sample 
  1    16 bits per sample 
 
2 bits Sample rate:   
  0    special or 8KHz 
  1    11KHz 
  2    22KHz 
  3    44KHz 
 
4 bits Audio type:   
  0    PCM (big endian) 
  1    PCM (swf - ADPCM) 
  2    MP3 
  3    PCM (little endian) 
  4    Nelly Moser ASAO 16KHz Mono 
  5    Nelly Moser ASAO 8KHz Mono 
  6    Nelly Moser ASAO 
  7    G.711 ALaw 
  8    G.711 MULaw 
  9    Reserved 
  a    AAC 
  b    Speex 
  f    MP3 8Khz 
Note: For AAC the codec config data is generally a two byte packet that describes the stream. It must be published first. Here is the basic code to fill in the codec config data:
AACFrame frame = new AACFrame();
int sampleRate = 22050;
int channels = 2;
frame.setSampleRate(sampleRate);
frame.setRateIndex(AACUtils.sampleRateToIndex(sampleRate));
frame.setChannels(channels);
frame.setChannelIndex(AACUtils.channelCountToIndex(channels));
byte[] codecConfig = new byte[2];
AACUtils.encodeAACCodecConfig(frame, codecConfig, 0);
Note: For AAC the header byte is always 0xaf
Note: For Speex the audio data must be encoded as 16000Hz wide band
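For example, wrapping one frame of AAC data (or the codec config) per the audio layout above might look like this (a sketch; wrapAAC is just an illustrative name):
// Sketch: prepend the 2-byte AAC prefix described above to one frame of
// AAC data (or to the 2-byte codec config, which must be published first)
byte[] wrapAAC(byte[] data, boolean isCodecConfig)
{
    byte[] out = new byte[data.length + 2];
    out[0] = (byte)0xaf;                    // header byte: for AAC always 0xaf
    out[1] = (byte)(isCodecConfig ? 0 : 1); // 0 = codec config, 1 = audio data
    System.arraycopy(data, 0, out, 2, data.length);
    return out;
}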
Video:
H.264
[1-byte header]
[1-byte codec config indicator (1 - video data, 0 - codec config packet)]
[3-byte time difference between dts and pts in milliseconds]
[n-bytes video content or codec config data]
All others
[1-byte header]
[n-bytes video content]
Below is the bit layout of the header byte of data (table goes from least significant bit to most significant bit):
4 bits Video type:   
  2    Sorenson Spark (H.263) 
  3    Screen 
  4    On2 VP6 
  5    On2 VP6A 
  6    Screen2 
  7    H.264 
 
2 bits Frame type:
  1    K frame (key frame) 
  2    P frame 
  3    B frame 
 
Note: H.264 codec config data is the same as the avcC block in a QuickTime container.
Note: All timecode data is in milliseconds
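
In code, the video layout above works out to something like this (a sketch only; the caller has to supply the frame bytes, the keyframe flag, and the pts/dts difference):

// Sketch: build the 5-byte H.264 prefix described above and prepend it
// to one frame (or to the avcC codec config when isCodecConfig is true)
byte[] wrapH264(byte[] data, boolean isKeyFrame, boolean isCodecConfig, int ptsDtsDiffMs)
{
    byte[] out = new byte[data.length + 5];
    int frameType = (isKeyFrame || isCodecConfig) ? 1 : 2; // 1 = K frame, 2 = P frame
    out[0] = (byte)((frameType << 4) | 0x07);              // low 4 bits: video type 7 = H.264
    out[1] = (byte)(isCodecConfig ? 0 : 1);                // 0 = codec config, 1 = video data
    out[2] = (byte)((ptsDtsDiffMs >> 16) & 0xff);          // 3-byte pts/dts difference in
    out[3] = (byte)((ptsDtsDiffMs >> 8) & 0xff);           //   milliseconds (big endian, as
    out[4] = (byte)(ptsDtsDiffMs & 0xff);                  //   in an FLV composition time)
    System.arraycopy(data, 0, out, 5, data.length);
    return out;
}

The avcC codec config goes through the same path with the indicator byte set to 0 and must be the first video packet published; nothing will play without it.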

Charlie

Look at the Publisher class info in the server-side API document that comes with Wowza. Also, more carefully study the Speex publishing API example. There is an additional 1-byte header that is needed for Speex publishing. Also, Speex is only playable in Flash; it will not work on the iPhone. H.264 requires an additional 5-byte header and AAC has an additional 2-byte header.

Also, in the most general case you cannot just send the raw RTP packets into this API. Many of the RTP packetization schemes have additional data along with the 12-byte RTP header that needs to be properly removed. I can’t remember how Speex over RTP works, but I believe there are a few extra bytes that need to be properly dealt with. The Publisher API expects raw encoded media frames with the proper 1, 2 (AAC audio), or 5 (H.264 video) byte header.
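
For example, even the fixed part of the RTP header is variable length (a sketch per RFC 3550; the packetization-mode data, e.g. the H.264 FU-A/STAP-A bytes, still follows it and also has to be handled):

// Sketch: offset of the RTP payload within one packet, per RFC 3550.
// This is only the fixed part; payload-format headers still follow it.
int rtpPayloadOffset(byte[] pkt)
{
    int offset = 12 + 4 * (pkt[0] & 0x0f);    // fixed header + CSRC list (CC field)
    if ((pkt[0] & 0x10) != 0)                 // X bit: header extension present
    {
        int extWords = ((pkt[offset + 2] & 0xff) << 8) | (pkt[offset + 3] & 0xff);
        offset += 4 + 4 * extWords;           // extension header + its 32-bit words
    }
    return offset;
}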

Charlie

This API is really intended for folks who understand H.264 at a very, very deep level. There is just too much to explain without this prior knowledge.

The other big issue is that we do not support MP4V-ES. Wowza, Flash, and Silverlight only support H.264 (MPEG-4 Part 10) video; MP4V-ES (MPEG-4 Part 2) data is not supported.

Charlie

It is constructed from the PPS and SPS NAL units. It is the same block that is written to an MP4 file as the avcC block.
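
Concretely, it can be packed like this (a sketch assuming a single SPS and a single PPS passed in as raw NAL units without start codes; buildAvcC is just an illustrative name):

// Sketch: build the avcC block (AVCDecoderConfigurationRecord) from one
// SPS and one PPS NAL unit
byte[] buildAvcC(byte[] sps, byte[] pps)
{
    byte[] cfg = new byte[11 + sps.length + pps.length];
    int i = 0;
    cfg[i++] = 1;                       // configurationVersion
    cfg[i++] = sps[1];                  // AVCProfileIndication (from the SPS)
    cfg[i++] = sps[2];                  // profile_compatibility
    cfg[i++] = sps[3];                  // AVCLevelIndication
    cfg[i++] = (byte)0xff;              // 6 reserved bits + lengthSizeMinusOne = 3
    cfg[i++] = (byte)0xe1;              // 3 reserved bits + numOfSequenceParameterSets = 1
    cfg[i++] = (byte)(sps.length >> 8); // SPS length, 16-bit big endian
    cfg[i++] = (byte)(sps.length & 0xff);
    System.arraycopy(sps, 0, cfg, i, sps.length);
    i += sps.length;
    cfg[i++] = 1;                       // numOfPictureParameterSets
    cfg[i++] = (byte)(pps.length >> 8); // PPS length, 16-bit big endian
    cfg[i++] = (byte)(pps.length & 0xff);
    System.arraycopy(pps, 0, cfg, i, pps.length);
    return cfg;
}

Note that lengthSizeMinusOne = 3 here means the video frames themselves are then expected as 4-byte length-prefixed NAL units, not Annex-B start-code data.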

Charlie

That is correct.

Charlie

Yes, if you are using Baseline profile or have no B-frames, you are most likely safe setting them to all zeros.

BTW, nothing will play unless you get the codec config thing right.

Charlie

Thanks for the reply.

I guess I don’t really understand what addVideoData() is expecting to receive. The way I currently understand the system, when I create a Publisher instance and then call setFileExtension(“flv”), the data that I add using addVideoData() will be output to the end user in an FLV container. Likewise, if I were to use setFileExtension(“mp4”), the data I add would be output in an MP4 container. Is that a correct understanding?

If so, doesn’t that mean that addVideoData() is just expecting the raw encoded data? In my case, that raw data would be encoded with h.264. Or does the data that gets sent to addVideoData have to be in a special format that Wowza understands? Are you saying that I need to parse the h.264 encoded data into an AMFPacket and then give the AMFPacket data to addVideoData() so that it gets output in the MP4 container?

I’ve searched the documentation over, but I can’t seem to find anything that describes all of this. I appreciate your help.

Are those headers part of the AMFPacket and auto-generated if I create a new AMFPacket and add the h.264 data, or do they have nothing to do with AMFPacket (i.e., do I have to create the headers as a new byte[] and set them manually based on each frame in the raw h.264)?

Thanks again!

OK, let me back up. I have an exe that outputs raw h264 into a named pipe. It is a continuous stream of data that I want Wowza to publish. So, in my onServerInit() function, I want to sit in a loop, reading that data from the pipe and giving it to addVideoData().

With FLV files it is no problem. I use FLVUtils.readChunk() and pass the resulting AMFPacket.getData() to addVideoData() and everything works great.

However, I haven’t gotten raw h264 data to work. I send the raw h264 in packets through the named pipe, and then add the payload of each packet to addVideoData(). Per your info, I also tried having the exe include the 5-byte header data in the payload of the packet, but it doesn’t seem to be working. That is, I do not see anything being published in the Flash client, nor do I see any errors or logging going on after I pass the bytes to addVideoData(). So, basically, I can’t tell what the problem is, nor do I have a working model to compare to.

Do you have any example code that I could use that accomplishes what I’m trying to do, or any debug tools that might help me figure out what is going on?

Thank you!

Hi,

I’m investigating the Publisher API and the ability to stream video generated by third-party software.

The only relevant info about the Publisher API is this thread; the documentation is very poor in this specific area.

First, I want to investigate how the Publisher API works.

From this thread and from the Speex audio publishing example, I assume that the methods

publisher.addVideoData
publisher.addAudioData

should be used with the binary data contained in an RTP packet. By binary data I mean the RTP packet payload, without the RTP header or any other (UDP or IP) header info.

To make things simple, I’m trying to pass to publisher.addVideoData binary data that was captured by WireShark while watching streamed video from an IP camera. I omitted the RTSP and RTSP/SDP packets and used only the RTP packets. The camera was streaming video encoded in MP4V-ES/90000 format.

I’ve exported the payload data and timestamps from the captured packets and saved them to text files. WireShark also gave me info about the delays between received packets, so I exported and saved that info into a text file as well.

I took the SpeexPublisher class from the Speex audio publishing example and modified it in the following way:

  1. in the while loop, I get the next payload data from the text file and convert it to binary

  2. I get the next timestamp from the text file and convert it to a Long

  3. I get the delay time from the text file.

  4. I call publisher.addVideoData, passing the payload data and timestamp to it.

  5. if the delay is long enough (greater than 1 ms), I sleep the current thread for the number of milliseconds captured earlier.

Here is my code (only the run method; the rest is the same as in the SpeexPublisher class):

public void run()
{
    String basePath = Bootstrap.getServerHome(Bootstrap.APPHOME) + "/content/";
    ArrayList<String> payloadStrings = ReadLines(basePath + "RTPPayload.txt");
    ArrayList<String> timecodeStrings = ReadLines(basePath + "RTPTimestamp.txt");
    ArrayList<String> delayStrings = ReadLines(basePath + "RTPDelay.txt");

    try
    {
        IVHost vhost = VHostSingleton.getInstance(VHost.VHOST_DEFAULT);
        Publisher publisher = Publisher.createInstance(vhost, "live");
        publisher.setFileExtension("mp4");
        publisher.setStreamType("live");
        publisher.publish("myBinaryStream", "live");

        int repeatCount = 1;
        int packetNumber = 0;
        Long currentTimecode;
        while (true)
        {
            // wrap around at the end of the capture; stop when all passes are done
            if (packetNumber > payloadStrings.size() - 1)
            {
                packetNumber = 0;
                repeatCount--;
                if (repeatCount < 0)
                    break;
            }

            // hex payload string -> binary RTP payload
            byte[] binaryData = HexStringToByteArray(payloadStrings.get(packetNumber));

            // inter-packet delay (seconds in the file) -> milliseconds
            Float delayMilliseconds = Float.parseFloat(delayStrings.get(packetNumber)) * 1000;
            currentTimecode = Long.decode("0x" + timecodeStrings.get(packetNumber));

            publisher.addVideoData(binaryData, binaryData.length, currentTimecode);

            // reproduce the inter-packet timing from the capture
            if (delayMilliseconds > 1)
                Thread.sleep(delayMilliseconds.longValue());

            packetNumber++;
        }

        publisher.unpublish();
        publisher.close();
    }
    catch (Exception e)
    {
        System.out.println("Error: " + e.toString());
    }
}

Despite all that, I can’t get any video playing in the Flash client.

The console output from the Wowza IDE gives me something like:

ServerListenerSpeexPublisher.onServerInit
SpeexPublisher.create
INFO application app-start _definst_ live/_definst_
INFO stream create - -
INFO stream publish myBinaryStream -
INFO server comment - LiveStreamPacketizerCupertino.init[live/_definst_/myBinaryStream]: chunkDurationTarget: 10000
INFO server comment - LiveStreamPacketizerCupertino.init[live/_definst_/myBinaryStream]: chunkDurationTolerance: 500
INFO server comment - LiveStreamPacketizerCupertino.init[live/_definst_/myBinaryStream]: audioGroupCount: 3
INFO server comment - LiveStreamPacketizerCupertino.init[live/_definst_/myBinaryStream]: playlistChunkCount:3
INFO server comment - MediaStreamMap.getLiveStreamPacketizer: Create live stream packetizer: cupertinostreamingpacketizer:myBinaryStream
INFO server comment - CupertinoPacketHandler.startStream[live/_definst_/myBinaryStream]
INFO server comment - LiveStreamPacketizerCupertino.handlePacket: Video codec: UNKNOWN
INFO server comment - MediaStreamMap.getLiveStreamPacketizer: Create live stream packetizer: smoothstreamingpacketizer:myBinaryStream
INFO server comment - LiveStreamPacketizerSmoothStreaming.startStream[live/_definst_/myBinaryStream]
INFO stream unpublish myBinaryStream -
INFO server comment - CupertinoPacketHandler.resetStream[live/_definst_/myBinaryStream]
INFO server comment - LiveStreamPacketizerSmoothStreaming.resetStream[live/_definst_/myBinaryStream]
INFO stream destroy myBinaryStream -
INFO server comment - MediaStreamMap.removeLiveStreamPacketizer[live/_definst_/myBinaryStream]: Destroy live stream packetizer: smoothstreamingpacketizer
INFO server comment - MediaStreamMap.removeLiveStreamPacketizer[live/_definst_/myBinaryStream]: Destroy live stream packetizer: cupertinostreamingpacketizer

The following line:

INFO server comment - LiveStreamPacketizerCupertino.handlePacket: Video codec: UNKNOWN

suggests to me that I’m doing something wrong.

Can someone help me with this?

Any info on the Publisher API would be appreciated.

Thanks for your reply, Charlie

I’ve examined the server-side API; in the Publisher class example there are lines like:

AMFPacket amfPacket;
// read packet from audio, video, data source
// amfPacket = readPacketFromSomewhere();

So I assumed that I should intercept that packet from somewhere. Please correct me if I’m wrong.

You wrote

The Publisher API expects raw encoded media frames with the proper 1, 2 (AAC audio), or 5 (H.264 video) byte header

I’ve tried that too; actually, I made another test case for it.

I’m getting the raw RTP data (payload) from the packets and concatenating that data according to timestamps into a single byte[] array (assuming I’ll get a single frame as a result).

It takes from 2 up to 7 packets for a single frame.

After I’ve got a complete frame, I send it to the Publisher via

publisher.addVideoData

After reading about RTP timestamps, especially this post: http://stackoverflow.com/questions/2439096/h264-rtp-timestamp

I’m using the current date converted to milliseconds as the timestamp. I believe it should work, because I wait some amount of milliseconds before publishing the next frame.
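
In code, the timecode is simply this (a sketch, with frame being the reassembled frame):

// Sketch: wall-clock milliseconds as the timecode for each frame
long timecode = System.currentTimeMillis();
publisher.addVideoData(frame, frame.length, timecode);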

Once again, you wrote about headers:

The Publisher API expects raw encoded media frames with the proper 1, 2 (AAC audio), or 5 (H.264 video) byte header

I suppose the frame that I’m sending contains all the necessary header bytes; the original packets were captured from a live stream, and all I did was recover the original frame from multiple packets. Please correct me if I’m wrong.

Anyway - I can’t get any video playing.

After enabling DEBUG in the log4j.properties file, I can see some debug data from Wowza, and it looks strange to me. This output was produced while I was publishing frames, after I clicked the “Play” button in the Flash player. Can you please take a look at it?

DEBUG server comment - checkFlush[false,false,75]: tc:1285510496282>1285510496285 || rt:1285510496283>1285510496285
DEBUG server comment - checkFlush[false,false,75]: tc:1285510496283>1285510496285 || rt:1285510496284>1285510496285
DEBUG server comment - send[823773096]: size:0:0 filter:7 time:253 tOffset:0 rwrt:true
DEBUG server comment - checkFlush[true,false,75]: tc:1285510496317>1285510496285 || rt:1285510496317>1285510496285
DEBUG server comment - flush: notify:false tSize:38 dataObjs:2750 time:107 tOffset:0
DEBUG server comment - checkFlush[false,false,75]: tc:1285510496321>1285510496392 || rt:1285510496321>1285510496392
DEBUG server comment - checkFlush[false,false,75]: tc:1285510496323>1285510496392 || rt:1285510496323>1285510496392

Why do you think I get this line

DEBUG server comment - send[823773096]: size:0:0 filter:7 time:253 tOffset:0 rwrt:true

with size:0:0?

I really don’t know where I should go from this point.

The only thing I can think of is to capture a stream in h264; currently I’m experimenting with a stream captured in MP4V-ES/90000 format.

Charlie,

Many of the RTP packetization schemes have addtional data along with the 12 byte RTP header that needs to be properly removed

I’ve just read your post once again and realized how flawed my approach is.

Just ignore my previous post, I’ll get raw frames from somewhere and I’ll come back with results.

Thanks again for pointing me in the right direction.

Ok,

I’ve done a bit of self-education here, and I’ve got raw h264 frames.

My next question is:

What is the codec config data from the server-side API documentation?

H.264
[1-byte header]
[1-byte codec config indicator (1 - video data, 0 - codec config packet)]
[3-byte time difference between dts and pts in milliseconds]
[n-bytes video content or codec config data]

Is it the SPS (sequence parameter set) and PPS (picture parameter set) NAL units?

If it is not, please give me some clue where to find it.

Thanks

Sorry for the dumb question,

but can anyone confirm that I’m going in the right direction?

I want to create the header as described below:

Below is the bit layout of the header byte of data (table goes from least significant bit to most significant bit):

4 bits Video type:

2 Sorenson Spark (H.263)

3 Screen

4 On2 VP6

5 On2 VP6A

6 Screen2

7 H.264

2 bit Frame type:

1 K frame (key frame)

2 P frame

3 B frame

I have to form a byte according to this rule,

so we have 8 bits to set

XXXX XXXX

I’ve got H264 video and a key frame,

“7” in binary representation is “111”; as written above, it has to be 4 bits, so I add one zero at the beginning and get “0111”

so our byte looks like

XXXX 0111

I have a key frame, so the next 2 bits will be “01”. Adding two zeros at the beginning, I get “0001”

byte looks like

0001 0111

In hexadecimal that’s 0x17; in decimal, 23.
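
Or the same computation in code (a sketch; the constant names are mine):

// Sketch: frame type in bits 4-5, video type in bits 0-3
final int VIDEOTYPE_H264 = 7; // 4-bit video type, low nibble
final int FRAMETYPE_KEY = 1;  // 2-bit frame type (1 = key, 2 = P, 3 = B)

byte header = (byte)((FRAMETYPE_KEY << 4) | VIDEOTYPE_H264); // 0001 0111 = 0x17 = 23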

Am I doing everything right?

Thanks for your reply.

Anyway, I have a few more questions; can you please help me with them?

It’s about the header for h264 again; the documentation says that I need:

[3-byte time difference between dts and pts in milliseconds]

I’ve found an explanation of DTS and PTS here.

It says the following:

The Decode Time Stamp (DTS) indicates the time at which an access unit should be instantaneously removed from the receiver buffer and decoded. It differs from the Presentation Time Stamp (PTS) only when picture reordering is used for B pictures. If DTS is used, PTS must also be provided in the bit stream.

If that definition is relevant here, and as long as I’m not using picture reordering for B pictures, can I set the above-mentioned 3 bytes in the header to zeros?

(I’m asking because I’ve written test code that constructs that kind of header, but I can’t get anything through to the Flash client. Everything looks good in the debug console: no errors, and the codec is properly recognized, but somehow it doesn’t work. I’m trying to localize my error.)

Charlie,

I believe I’ve got the codec config thing right, but I’m still having problems. Can you take a look at my results?

I’m creating a live stream and publishing frames via the Publisher API.

I’m passing valid headers and the “codec config” to the Publisher API.

I’m testing with the VLC player connected to Wowza via RTSP.

Currently I can get an RTSP/SDP packet with valid “Media format specific parameters”, and WireShark seems to understand it properly. (As far as I can see, this section is exactly what I pass as the “codec config” to the Publisher API.)
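
(Side note: the two base64 strings in sprop-parameter-sets below are the raw SPS and PPS NAL units, so the codec config can be built straight from them; a sketch, where buildAvcC is a hypothetical helper that packs the avcC layout Charlie described:)

import java.util.Base64; // Java 8+; older JDKs need another base64 decoder

// Sketch: decode the SDP's sprop-parameter-sets into raw SPS/PPS NAL units
byte[] sps = Base64.getDecoder().decode("J0JAH5WgLASbAQCA"); // NAL type 7 (SPS)
byte[] pps = Base64.getDecoder().decode("KM4BryA=");         // NAL type 8 (PPS)
byte[] codecConfig = buildAvcC(sps, pps); // then wrap with the 5-byte video header, indicator byte 0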

Here are the “Media format specific parameters” packet details, exported by WireShark:

 <field name="sdp.fmtp.parameter" showname="Media format specific parameters: profile-level-id=42401F" size="23" pos="563" show="profile-level-id=42401F" value="70726f66696c652d6c6576656c2d69643d343234303146">
          <field name="h264.profile" showname="Profile: 42401f" size="3" pos="0" show="42:40:1f" value="42401f">
            <field name="h264.profile_idc" showname="0100 0010 = Profile_idc: Baseline profile (66)" size="1" pos="0" show="66" value="42" unmaskedvalue="42"/>
            <field name="h264.constraint_set0_flag" showname="0... .... = Constraint_set0_flag: 0" size="1" pos="1" show="0" value="0" unmaskedvalue="40"/>
            <field name="h264.constraint_set1_flag" showname=".1.. .... = Constraint_set1_flag: 1" size="1" pos="1" show="1" value="1" unmaskedvalue="40"/>
            <field name="h264.constraint_set2_flag" showname="..0. .... = Constraint_set2_flag: 0" size="1" pos="1" show="0" value="0" unmaskedvalue="40"/>
            <field name="h264.constraint_set3_flag" showname="...0 .... = Constraint_set3_flag: 0" size="1" pos="1" show="0" value="0" unmaskedvalue="40"/>
            <field name="h264.reserved_zero_4bits" showname=".... 0000 = Reserved_zero_4bits: 0" size="1" pos="1" show="0" value="0" unmaskedvalue="40"/>
            <field name="h264.level_id" showname="0001 1111 = Level_id: 31 [Level 3.1 14 Mb/s]" size="1" pos="2" show="31" value="1F" unmaskedvalue="1f"/>
          </field>
        </field>
        <field name="sdp.fmtp.parameter" showname="Media format specific parameters: sprop-parameter-sets=J0JAH5WgLASbAQCA,KM4BryA=" size="46" pos="587" show="sprop-parameter-sets=J0JAH5WgLASbAQCA,KM4BryA=" value="7370726f702d706172616d657465722d736574733d4a304a41483557674c415362415143412c4b4d34427279413d">
          <field name="" show="NAL unit 1 string: J0JAH5WgLASbAQCA" size="16" pos="608" value="4a304a41483557674c41536241514341"/>
          <field name="h264.nal_unit" showname="NAL unit: 2742401f95a02c049b010080" size="12" pos="0" show="27:42:40:1f:95:a0:2c:04:9b:01:00:80" value="2742401f95a02c049b010080">
            <field name="h264.forbidden_zero_bit" showname="0... .... = Forbidden_zero_bit: 0" size="1" pos="0" show="0" value="0" unmaskedvalue="27"/>
            <field name="h264.nal_ref_idc" showname=".01. .... = Nal_ref_idc: 1" size="1" pos="0" show="1" value="1" unmaskedvalue="27"/>
            <field name="h264.nal_unit_type" showname="...0 0111 = Nal_unit_type: Sequence parameter set (7)" size="1" pos="0" show="7" value="7" unmaskedvalue="27"/>
            <field name="h264.profile_idc" showname="0100 0010 = Profile_idc: Baseline profile (66)" size="1" pos="1" show="66" value="42" unmaskedvalue="42"/>
            <field name="h264.constraint_set0_flag" showname="0... .... = Constraint_set0_flag: 0" size="1" pos="2" show="0" value="0" unmaskedvalue="40"/>
            <field name="h264.constraint_set1_flag" showname=".1.. .... = Constraint_set1_flag: 1" size="1" pos="2" show="1" value="1" unmaskedvalue="40"/>
            <field name="h264.constraint_set2_flag" showname="..0. .... = Constraint_set2_flag: 0" size="1" pos="2" show="0" value="0" unmaskedvalue="40"/>
            <field name="h264.constraint_set3_flag" showname="...0 .... = Constraint_set3_flag: 0" size="1" pos="2" show="0" value="0" unmaskedvalue="40"/>
            <field name="h264.reserved_zero_4bits" showname=".... 0000 = Reserved_zero_4bits: 0" size="1" pos="2" show="0" value="0" unmaskedvalue="40"/>
            <field name="h264.level_id" showname="0001 1111 = Level_id: 31 [Level 3.1 14 Mb/s]" size="1" pos="3" show="31" value="1F" unmaskedvalue="1f"/>
            <field name="h264.seq_parameter_set_id" showname="1... .... = seq_parameter_set_id: 0" size="1" pos="4" show="0" value="95"/>
            <field name="h264.log2_max_frame_num_minus4" showname=".001 01.. = log2_max_frame_num_minus4: 4" size="1" pos="4" show="4" value="95"/>
            <field name="h264.pic_order_cnt_type" showname=".... ..01  1... .... = pic_order_cnt_type: 2" size="1" pos="4" show="2" value="95"/>
            <field name="h264.num_ref_frames" showname=".010 .... = num_ref_frames: 1" size="1" pos="5" show="1" value="a0"/>
            <field name="h264.gaps_in_frame_num_value_allowed_flag" showname=".... 0... = gaps_in_frame_num_value_allowed_flag: 0" size="1" pos="5" show="0" value="a0"/>
            <field name="h264.pic_width_in_mbs_minus1" showname=".... .000  0010 1100 = pic_width_in_mbs_minus1: 43" size="1" pos="5" show="43" value="a0"/>
            <field name="h264.pic_height_in_map_units_minus1" showname="0000 0100  100. .... = pic_height_in_map_units_minus1: 35" size="1" pos="7" show="35" value="04"/>
            <field name="h264.frame_mbs_only_flag" showname="...1 .... = frame_mbs_only_flag: 1" size="1" pos="8" show="1" value="9b"/>
            <field name="h264.direct_8x8_inference_flag" showname=".... 1... = direct_8x8_inference_flag: 1" size="1" pos="8" show="1" value="9b"/>
            <field name="h264.frame_cropping_flag" showname=".... .0.. = frame_cropping_flag: 0" size="1" pos="8" show="0" value="9b"/>
            <field name="h264.vui_parameters_present_flag" showname=".... ..1. = vui_parameters_present_flag: 1" size="1" pos="8" show="1" value="9b"/>
            <field name="h264.aspect_ratio_info_present_flag" showname=".... ...1 = aspect_ratio_info_present_flag: 1" size="1" pos="8" show="1" value="9b"/>
            <field name="h264.aspect_ratio_idc" showname="0000 0001 = aspect_ratio_idc: 1" size="1" pos="9" show="1" value="01"/>
            <field name="h264.overscan_info_present_flag" showname="0... .... = overscan_info_present_flag: 0" size="1" pos="10" show="0" value="00"/>
            <field name="h264.video_signal_type_present_flag" showname=".0.. .... = video_signal_type_present_flag: 0" size="1" pos="10" show="0" value="00"/>
            <field name="h264.chroma_loc_info_present_flag" showname="..0. .... = chroma_loc_info_present_flag: 0" size="1" pos="10" show="0" value="00"/>
            <field name="h264.timing_info_present_flag" showname="...0 .... = timing_info_present_flag: 0" size="1" pos="10" show="0" value="00"/>
            <field name="h264.nal_hrd_parameters_present_flag" showname=".... 0... = nal_hrd_parameters_present_flag: 0" size="1" pos="10" show="0" value="00"/>
            <field name="h264.vcl_hrd_parameters_present_flag" showname=".... .0.. = vcl_hrd_parameters_present_flag: 0" size="1" pos="10" show="0" value="00"/>
            <field name="h264.pic_struct_present_flag" showname=".... ..0. = pic_struct_present_flag: 0" size="1" pos="10" show="0" value="00"/>
            <field name="h264.bitstream_restriction_flag" showname=".... ...0 = bitstream_restriction_flag: 0" size="1" pos="10" show="0" value="00"/>
            <field name="h264.rbsp_stop_bit" showname="1... .... = rbsp_stop_bit: 1" size="1" pos="11" show="1" value="80"/>
            <field name="h264.rbsp_trailing_bits" showname=".000 0000 = rbsp_trailing_bits: 0" size="1" pos="11" show="0" value="80"/>
          </field>
          <field name="" show="NAL unit 2 string: KM4BryA=" size="8" pos="625" value="4b4d34427279413d"/>
          <field name="h264.nal_unit" showname="NAL unit: 28ce01af20" size="5" pos="0" show="28:ce:01:af:20" value="28ce01af20">
            <field name="h264.forbidden_zero_bit" showname="0... .... = Forbidden_zero_bit: 0" size="1" pos="0" show="0" value="0" unmaskedvalue="28"/>
            <field name="h264.nal_ref_idc" showname=".01. .... = Nal_ref_idc: 1" size="1" pos="0" show="1" value="1" unmaskedvalue="28"/>
            <field name="h264.nal_unit_type" showname="...0 1000 = Nal_unit_type: Picture parameter set (8)" size="1" pos="0" show="8" value="8" unmaskedvalue="28"/>
            <field name="h264.pic_parameter_set_id" showname="1... .... = pic_parameter_set_id: 0" size="1" pos="1" show="0" value="ce"/>
            <field name="h264.seq_parameter_set_id" showname=".1.. .... = seq_parameter_set_id: 0" size="1" pos="1" show="0" value="ce"/>
            <field name="h264.entropy_coding_mode_flag" showname="..0. .... = entropy_coding_mode_flag: 0" size="1" pos="1" show="0" value="ce"/>
            <field name="h264.pic_order_present_flag" showname="...0 .... = pic_order_present_flag: 0" size="1" pos="1" show="0" value="ce"/>
            <field name="h264.num_slice_groups_minus1" showname=".... 1... = num_slice_groups_minus1: 0" size="1" pos="1" show="0" value="ce"/>
            <field name="h264.num_ref_idx_l0_active_minus1" showname=".... .1.. = num_ref_idx_l0_active_minus1: 0" size="1" pos="1" show="0" value="ce"/>
            <field name="h264.num_ref_idx_l1_active_minus1" showname=".... ..1. = num_ref_idx_l1_active_minus1: 0" size="1" pos="1" show="0" value="ce"/>
            <field name="h264.weighted_pred_flag" showname=".... ...0 = weighted_pred_flag: 0" size="1" pos="1" show="0" value="ce"/>
            <field name="h264.weighted_bipred_idc" showname="00.. .... = weighted_bipred_idc: 0" size="1" pos="2" show="0" value="01"/>
            <field name="h264.pic_init_qp_minus26" showname="..00 0001  1010 1... = pic_init_qp_minus26(se(v)): -26" size="1" pos="2" show="52" value="01"/>
            <field name="h264.pic_init_qs_minus26" showname=".... .1.. = pic_init_qs_minus26: 0" size="1" pos="3" show="0" value="af"/>
            <field name="h264.chroma_qp_index_offset" showname=".... ..1. = chroma_qp_index_offset: 0" size="1" pos="3" show="0" value="af"/>
            <field name="h264.deblocking_filter_control_present_flag" showname=".... ...1 = deblocking_filter_control_present_flag: 1" size="1" pos="3" show="1" value="af"/>
            <field name="h264.constrained_intra_pred_flag" showname="0... .... = constrained_intra_pred_flag: 0" size="1" pos="4" show="0" value="20"/>
            <field name="h264.redundant_pic_cnt_present_flag" showname=".0.. .... = redundant_pic_cnt_present_flag: 0" size="1" pos="4" show="0" value="20"/>
            <field name="h264.rbsp_stop_bit" showname="..1. .... = rbsp_stop_bit: 1" size="1" pos="4" show="1" value="20"/>
            <field name="h264.rbsp_trailing_bits" showname="...0 0000 = rbsp_trailing_bits: 0" size="1" pos="4" show="0" value="20"/>
          </field>
        </field>

Despite all this, it doesn’t work.

The next RTP packets are very short, and all carry the same payload: “21”. Not 21 bytes, but just one byte with “21” in it.

Here is an example of such an RTP packet:

<proto name="rtp" showname="Real-Time Transport Protocol" size="13" pos="42">
    <field name="rtp.setup" showname="Stream setup by RTSP (frame 125)" size="0" pos="42" show="">
      <field name="rtp.setup-frame" showname="Setup frame: 125" size="0" pos="42" show="125"/>
      <field name="rtp.setup-method" showname="Setup Method: RTSP" size="0" pos="42" show="RTSP"/>
    </field>
    <field name="rtp.version" showname="10.. .... = Version: RFC 1889 Version (2)" size="1" pos="42" show="2" value="2" unmaskedvalue="80"/>
    <field name="rtp.padding" showname="..0. .... = Padding: False" size="1" pos="42" show="0" value="0" unmaskedvalue="80"/>
    <field name="rtp.ext" showname="...0 .... = Extension: False" size="1" pos="42" show="0" value="0" unmaskedvalue="80"/>
    <field name="rtp.cc" showname=".... 0000 = Contributing source identifiers count: 0" size="1" pos="42" show="0" value="0" unmaskedvalue="80"/>
    <field name="rtp.marker" showname="1... .... = Marker: True" size="1" pos="43" show="1" value="1" unmaskedvalue="e1"/>
    <field name="rtp.p_type" showname="Payload type: DynamicRTP-Type-97 (97)" size="1" pos="43" show="97" value="61" unmaskedvalue="e1"/>
    <field name="rtp.seq" showname="Sequence number: 2" size="2" pos="44" show="2" value="0002"/>
    <field name="rtp.extseq" showname="Extended sequence number: 65538" size="2" pos="44" show="65538" value="0002"/>
    <field name="rtp.timestamp" showname="Timestamp: 349200" size="4" pos="46" show="349200" value="00055410"/>
    <field name="rtp.ssrc" showname="Synchronization Source identifier: 0x738557df (1938118623)" size="4" pos="50" show="0x738557df" value="738557df"/>
    <field name="rtp.payload" showname="Payload: 21" size="1" pos="54" show="21" value="21"/>
  </proto>

What could be wrong? I’m passing a valid content length to the

publisher.addVideoData

method.

Do you have any clue why it’s sending such short RTP packets?

Thanks

P.S. All RTP packets are also “Marked”.