Use the intended sequence for re-using elements:
* EOS
* STREAM_START if element is to be re-used
This avoids elements (such as queue/multiqueue/queue2) failing to
reset themselves properly.
When delaying EOS propagation (because we want to wait until all
streams of a group are done, for example), we re-trigger the elements
by first sending the cached STREAM_START and then EOS (which causes
them to reset themselves if needed and accept new buffers/events).
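A rough sketch of that re-trigger (the pad and cached-event variable
names are illustrative, not the actual element code):

    /* Re-activate downstream elements before signalling EOS again. */
    gst_pad_push_event (srcpad, gst_event_ref (cached_stream_start));
    gst_pad_push_event (srcpad, gst_event_new_eos ());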
https://bugzilla.gnome.org/show_bug.cgi?id=785951
It is forwarding messages to the playbin bus, and is thus forwarding
messages that contain a floating reference to the application. This
generally makes bindings unhappy; we must not leak floating references
to them.
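A minimal sketch of the usual remedy, assuming the forwarded object is
a plain GObject (this is illustrative, not necessarily the exact fix):

    /* Sink the floating reference before the object reaches the
     * application, so bindings never see a floating ref. */
    if (g_object_is_floating (obj))
      g_object_ref_sink (obj);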
Crossfading is a bit more complex than just having two pads with the
right keyframes, as the blending is not exactly the same.
The difference is in the way we compute the alpha channel: in the case
of crossfading, we have to compute an additive operation between the
destination and the source (factored by both the input pad's alpha
property and the crossfading ratio), basically so that the crossfade
result of 2 opaque frames is also fully opaque at any point in the
crossfading process, avoiding bleed-through from the layer blending.
Some rationale can be found in https://phabricator.freedesktop.org/T7773.
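A rough illustration of that additive combination (variable names and
the exact factoring are assumptions, not the compositor code):

    gdouble a = pad_alpha_a * (1.0 - ratio);  /* outgoing frame    */
    gdouble b = pad_alpha_b * ratio;          /* incoming frame    */
    gdouble out_alpha = MIN (a + b, 1.0);     /* additive, clamped */
    /* Two fully opaque inputs give out_alpha == 1.0 for any ratio,
     * so nothing from the layers below bleeds through. */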
https://bugzilla.gnome.org/show_bug.cgi?id=784827
channels=1 is always mono, having it 'unpositioned' does not make
sense.
This fixes pipelines such as:
gst-validate-1.0 audiotestsrc ! audio/x-raw,channels=2,rate=44100,layout=interleaved ! audioconvert ! audioresample ! audio/x-raw, rate=44100, channels=1 ! avenc_mp2 ! fakesink
https://bugzilla.gnome.org/show_bug.cgi?id=785407
Do not remove other parsebins' input streams. Doing so causes
unexpected removal of input streams in multi-parsebin use cases.
Basically, the purpose of blocking buffers is similar to checking
no-more-pads of a chain/group. That is, it gives a hint about when to
remove old (EOSed) streams of that parsebin and to add/reuse slots
for new input streams. But that doesn't mean we need to remove another
parsebin's EOSed streams. Each parsebin most likely has its own
streaming thread, so the time at which EOS is reached can differ a lot
(e.g., a subtitle-only parsebin can reach EOS much earlier).
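For reference, such a blocking probe on a parsebin output pad could
look roughly like this (callback and variable names are hypothetical):

    static GstPadProbeReturn
    block_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
    {
      /* keep the pad blocked until we explicitly remove the probe */
      return GST_PAD_PROBE_OK;
    }

    gulong probe_id = gst_pad_add_probe (parsebin_srcpad,
        GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM, block_cb, dbin, NULL);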
https://bugzilla.gnome.org/show_bug.cgi?id=785120
Fields related to stream handling (input_streams,
output_streams, slots, guint slot_id) were used completely unprotected
until now.
This led to several races, especially when playing back RTSP streams.
To protect those fields, the OBJECT_LOCK cannot be used, as we
sometimes need to be able to post messages on the bus while holding it.
decodebin3 already has a lock to manage stream selection, and in the end
it makes sense to protect all the stream management fields with the same
lock which is why we reuse the SELECTION_LOCK here.
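Illustrative usage (macro arguments and field access are assumptions
about the decodebin3 internals):

    SELECTION_LOCK (dbin);
    /* stream-management fields are only touched with the lock held */
    dbin->input_streams = g_list_append (dbin->input_streams, stream);
    SELECTION_UNLOCK (dbin);

Unlike the OBJECT_LOCK, bus messages can still be posted while this
lock is held.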
https://bugzilla.gnome.org/show_bug.cgi?id=784012
decodebin3 checks its input streams and pushes EOS if all of them are
EOSed; if not, a fake EOS is pushed to the corresponding slot.
When adaptivedemux is used in a multi-track configuration, it never
pushes EOS to a non-selected track, because the streaming thread for
that slot stops with a not-linked flow return.
So decodebin3 should generate the EOS itself to finish playback.
https://bugzilla.gnome.org/show_bug.cgi?id=777735
The linked input of a slot can be an old input, so urisourcebin should
check its EOS state to figure out whether it is a new one or not.
Otherwise, urisourcebin never forwards EOS downstream at the end of the
presentation, because the old input is still there and never removed.
https://bugzilla.gnome.org/show_bug.cgi?id=777735
The group-id in the stream-start event might be updated in
parse_chain_output_probe (). This causes the stream-start event to be
pushed twice with identical stream-id and seq-num; only the group-id
differs. Even though nothing else has changed, that stream-start event
will be followed by the first buffer.
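For reference, the group-id is an attribute of the stream-start event
itself (values here are illustrative):

    GstEvent *event = gst_event_new_stream_start ("demux-stream-0");
    gst_event_set_group_id (event, gst_util_group_id_next ());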
https://bugzilla.gnome.org/show_bug.cgi?id=771088
This makes it possible for GstDiscoverer to work with sources that
have multiple source pads, such as rtspsrc, and hence trigger the
creation of multiple decodebin instances.
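A minimal, assumed usage sketch (the URI and timeout are placeholders;
needs <gst/pbutils/pbutils.h> and gst_init() beforehand):

    GError *err = NULL;
    GstDiscoverer *disco = gst_discoverer_new (10 * GST_SECOND, &err);
    GstDiscovererInfo *info =
        gst_discoverer_discover_uri (disco, "rtsp://example.com/stream", &err);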
Based on the work of Vineeth TM <vineeth.tm@samsung.com>
https://bugzilla.gnome.org/show_bug.cgi?id=754178
The base class is trying to align the processed data, but it ends up
removing the GstVideoMeta. That caused wrong results. Instead, just
copy from the process function with the appropriate alignment.
https://bugzilla.gnome.org/show_bug.cgi?id=781204
And only set low-percent/high-percent if not using downloadbuffer,
just like the old uridecodebin did. Using the watermark-based buffering
with downloadbuffer causes playback to hang and never finish buffering.
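Roughly what this amounts to (property names as on queue2; the
variables and the 10/99 values are shown for illustration only):

    if (!use_downloadbuffer)
      g_object_set (queue, "low-percent", 10, "high-percent", 99, NULL);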
With both audiorate and videorate, it seems more sensible to apply rate
adjustments after the first buffer appears. For example, with v4l2src,
there is often a small delay before the first video buffer turns up, and
this can cause a stuttery start because of videorate trying to ensure a
perfect stream.
Those multiqueues are the ones dealing with adaptive demuxers. They should
have a time limit set so that they don't end up buffering too much data.
They would previously be set with no limits at all, which would cause them
to grow indefinitely until downstream blocks.
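An assumed sketch of what such a limit looks like (the 2 second value
is illustrative, not the value actually chosen):

    g_object_set (multiqueue, "max-size-time", 2 * GST_SECOND, NULL);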
gst_video_rate_flush_prev() ensures that the pushed buffer is writable
by calling gst_buffer_make_writable() on videorate->prevbuf.
In drop-only mode we always push buffers directly when they are received
from GstBaseTransform (gst_video_rate_transform_ip()) and do not keep them
around. GstBaseTransform already ensures that those buffers are
writable so there is no need to do it twice.
This change saves us from copying buffers in drop-only mode, as we no
longer call gst_buffer_make_writable() on a buffer having a refcount
of 2 (one ref owned by GstBaseTransform and one in videorate->prevbuf).
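For context, gst_buffer_make_writable() only copies when it is not the
sole owner of the buffer; a small illustration:

    GstBuffer *buf = gst_buffer_new ();        /* refcount == 1          */
    GstBuffer *extra = gst_buffer_ref (buf);   /* refcount == 2          */
    buf = gst_buffer_make_writable (buf);      /* not writable -> copied */
    gst_buffer_unref (extra);
    gst_buffer_unref (buf);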
https://bugzilla.gnome.org/show_bug.cgi?id=780767
When the caps change while streaming, the new caps were getting
processed immediately in videoaggregator, but the next buffer in the
queue that corresponds to these new caps was not necessarily used
immediately, which sometimes resulted in using an old buffer with the
new caps. Of course there used to be a separate buffer_vinfo for
mapping the buffer with its own caps, but in compositor the
GstVideoConverter was still using the wrong info, which resulted in
invalid reads and corrupt output.
The approach here is safer: we delay using the new caps until we
actually select the next buffer in the queue for use. This way we also
eliminate the need for buffer_vinfo, since pad->info is always in sync
with the format of the selected buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=780682
Instead, go backwards before segment.stop based on the framerate or
the next buffer's end timestamp. Otherwise the first buffer will
usually be dropped because it is outside the segment.
https://bugzilla.gnome.org/show_bug.cgi?id=781899