gst_sparse_file_read() is supposed to set @error when returning 0 but
in some cases was not.
Hopefully fix a crash in gst_download_buffer_read_buffer() which is
checking error->code when 0 is returned.
I'm not totally sure when this happens as I debugged this from a post-mortem
crash, but returning a generic error here seems the safe thing to do.
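A minimal sketch of the intended pattern, using plain GLib and an illustrative
reader (not the actual gstsparsefile code): every path that returns 0 also
fills in @error, so callers such as gst_download_buffer_read_buffer() can
safely look at error->code.

```c
#include <glib.h>
#include <stdio.h>

/* Illustrative stand-in for gst_sparse_file_read(): returns the number
 * of bytes read, and guarantees *error is set whenever 0 is returned. */
static gsize
sparse_read_sketch (FILE * f, guint8 * dest, gsize size, GError ** error)
{
  gsize ret = fread (dest, 1, size, f);

  if (ret == 0) {
    /* No more specific cause is known here, so report a generic error
     * instead of returning 0 with *error left unset. */
    g_set_error (error, G_FILE_ERROR, G_FILE_ERROR_FAILED,
        "unexpected short read");
  }

  return ret;
}
```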
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8321>
A JSON configuration file is generated for core plugins, which maps
plugin names to the sources to parse for docstrings.
The file is then opened by the configuration generator script, which
will now favor explicitly listed files over the (usually wildcarded)
paths passed on its command line.
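The exact schema is not reproduced here; purely as an illustration of the
mapping (plugin name to source files, the layout below is hypothetical), the
generated file could look like:

```json
{
  "coreelements": [
    "plugins/elements/gstqueue.c",
    "plugins/elements/gsttee.c"
  ]
}
```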
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8231>
We do not own any ref to queries when running them.
If we end up processing the query from the streaming thread, it means that it was
a serialized query which the sinkpad streaming thread is waiting on to be
processed, and that thread owns the reference.
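A minimal sketch of the ownership rule, as a generic sinkpad query handler
(illustrative only, not the element's actual code): the query is answered in
place and never unreffed, because the waiting thread still owns it.

```c
#include <gst/gst.h>

static gboolean
sketch_query (GstPad * pad, GstObject * parent, GstQuery * query)
{
  switch (GST_QUERY_TYPE (query)) {
    case GST_QUERY_LATENCY:
      /* Fill the answer into the caller-owned query; no ref/unref. */
      gst_query_set_latency (query, TRUE, 0, GST_CLOCK_TIME_NONE);
      return TRUE;
    default:
      return gst_pad_query_default (pad, parent, query);
  }
}
```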
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7767>
This is documented as:
> You can query how many buffers are queued by reading the
> #GstQueue:current-level-buffers property. You can track changes
> by connecting to the notify::current-level-buffers signal (which
> like all signals will be emitted from the streaming thread). The same
> applies to the #GstQueue:current-level-time and
> #GstQueue:current-level-bytes properties.
... but was not implemented.
This also respects the `silent` property for these notify signals, so they
can be made less intrusive.
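For example, an application can now track the fill level as the documentation
describes (a minimal sketch, assuming a queue element already exists in the
pipeline):

```c
#include <gst/gst.h>

static void
on_level_changed (GObject * queue, GParamSpec * pspec, gpointer user_data)
{
  guint buffers;

  /* Called from the streaming thread, like all queue signals. */
  g_object_get (queue, "current-level-buffers", &buffers, NULL);
  g_print ("queue now holds %u buffers\n", buffers);
}

static void
watch_queue_level (GstElement * queue)
{
  g_signal_connect (queue, "notify::current-level-buffers",
      G_CALLBACK (on_level_changed), NULL);
}
```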
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7486>
If the first buffers have no timestamp then the sink position would be
initialized to 0. The source pad might output this buffer, which would then
initialize the source position to 0 too.
Afterwards two buffers with a valid but huge timestamp might arrive before any
of them are output on the source pad. The first one would set the sink position
to a huge value, the second one would notice that the difference between the
huge value and 0 is certainly larger than max-size-time and consider the queue
as full.
Instead, simply don't update the times from buffers without timestamps and
assume whatever was set before is still valid, i.e. the buffer has the same
timestamp as the previous one.
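A minimal sketch of the resulting guard, with illustrative names (not the
actual queue2 code): buffers without a timestamp simply keep the previously
tracked position.

```c
#include <gst/gst.h>

static void
update_sink_time_sketch (GstBuffer * buf, GstClockTime * last_position)
{
  GstClockTime ts = GST_BUFFER_DTS_OR_PTS (buf);

  /* No timestamp: assume the buffer continues at the previous position
   * instead of resetting it (and the resulting level) towards 0. */
  if (!GST_CLOCK_TIME_IS_VALID (ts))
    return;

  *last_position = ts;
}
```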
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7071>
_get_osfhandle() expects a valid fd and the CRT will abort the program
if the given parameter is invalid. The fd can be invalidated in various
ways, for example if the file was deleted by another process after we
opened it. To avoid this, our own exception handler must be installed
so that _get_osfhandle() can return INVALID_HANDLE_VALUE if the fd is
invalid.
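The actual fix may use a different mechanism; as a Windows-only sketch of the
same idea, the CRT's thread-local invalid parameter handler can be swapped in
around the call so an invalid fd produces an error return instead of abort():

```c
#include <windows.h>
#include <io.h>
#include <stdint.h>
#include <stdlib.h>

static void
ignore_invalid_parameter (const wchar_t * expr, const wchar_t * func,
    const wchar_t * file, unsigned int line, uintptr_t reserved)
{
  /* Swallow the error; _get_osfhandle() then returns -1. */
}

static HANDLE
get_osfhandle_safe (int fd)
{
  _invalid_parameter_handler old;
  intptr_t h;

  old = _set_thread_local_invalid_parameter_handler (ignore_invalid_parameter);
  h = _get_osfhandle (fd);
  _set_thread_local_invalid_parameter_handler (old);

  return (h == -1) ? INVALID_HANDLE_VALUE : (HANDLE) h;
}
```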
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6877>
In the case where the queue shrinks due to a property change and the queue
becomes full, we would set the waiting_del flag, which would prevent posting the
100% buffering message on the bus. Since the pipeline is not aware of the new
buffering value, in the common case where the pipeline is paused during
buffering, it would never resume.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6165>
Remove the percent_changed check to determine whether a buffering message should
be posted. The check on the last posted buffering value is sufficient, and the
removal doesn't introduce additional complexity.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6165>
It is racy and may cause us to accidentally keep forwarding data past
the EOS. The only reason to stop dropping would be when we encounter a
stream-start, segment, or segment-done event, either in push_one
(already queued) or in the sink pad's event function.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5766>
Use gst_data_queue_push_force() for most events so they
are immediately enqueued. Only gap events and actual buffer
data will now block when the queue is full.
This fixes a problem with non-flushing seek handling
where events following a segment-done event would block
if they precede the SEGMENT event, since only SEGMENT
events would clear the 'eos' state of the multiqueue
queue.
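A minimal sketch of the dispatch, assuming a GstDataQueueItem wrapping the
object has already been prepared (illustrative, not the actual multiqueue
code):

```c
#include <gst/gst.h>
#include <gst/base/gstdataqueue.h>

static gboolean
enqueue_sketch (GstDataQueue * queue, GstDataQueueItem * item,
    GstMiniObject * object)
{
  if (GST_IS_EVENT (object)
      && GST_EVENT_TYPE (GST_EVENT_CAST (object)) != GST_EVENT_GAP) {
    /* Events other than GAP are enqueued immediately, even when full. */
    return gst_data_queue_push_force (queue, item);
  }

  /* Buffers, buffer lists and GAP events block while the queue is full. */
  return gst_data_queue_push (queue, item);
}
```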
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5801>
Do not update the time level on segment events. A segment by itself tells us
nothing about the amount of buffered time in the element; buffer
timestamps/durations are required to measure the actually buffered time.
Moreover, at the time a new segment is applied to the sink/src pad,
segment.position would point to a random value, so calculating a running time
from that value makes no sense and results in a wrong time-level report.
This patch updates queue/queue2's time-level measuring logic so that it is
only updated on buffer/buffer-list/gap-event flow.
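A minimal sketch of the resulting rule, with illustrative names (not the
actual queue/queue2 code): the time level is only advanced from objects that
carry timing.

```c
#include <gst/gst.h>

static void
maybe_update_time_level (GstSegment * segment, GstMiniObject * obj,
    guint64 * level)
{
  GstClockTime ts = GST_CLOCK_TIME_NONE;
  GstClockTime duration = GST_CLOCK_TIME_NONE;

  if (GST_IS_BUFFER (obj)) {
    ts = GST_BUFFER_DTS_OR_PTS (GST_BUFFER_CAST (obj));
    duration = GST_BUFFER_DURATION (GST_BUFFER_CAST (obj));
  } else if (GST_IS_EVENT (obj)
      && GST_EVENT_TYPE (GST_EVENT_CAST (obj)) == GST_EVENT_GAP) {
    gst_event_parse_gap (GST_EVENT_CAST (obj), &ts, &duration);
  } else {
    /* Segments and other events carry no buffered duration: no update.
     * Buffer lists would be walked buffer by buffer in the same way. */
    return;
  }

  if (GST_CLOCK_TIME_IS_VALID (ts)) {
    if (GST_CLOCK_TIME_IS_VALID (duration))
      ts += duration;
    *level = gst_segment_to_running_time (segment, GST_FORMAT_TIME, ts);
  }
}
```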
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5430>
This fixes an infinitely blocked upstream streaming thread if
* upstream has a fixed-size buffer pool (some H/W decoders, for example)
* downstream returned a flow error without releasing the buffer
When the fixed-size buffer pool hits its configured max-buffers and
downstream of the queue returned a flow error without releasing the
corresponding buffer, upstream has no chance to run its next processing loop
because it will be blocked in acquire_buffer(), and therefore the
downstream flow return will never be propagated to upstream.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5023>
We don't need to obtain the mutex to ensure that `sq` is non-NULL. `sq`
is assigned immediately after the pads are created and not destroyed
until the pads are finalized.
Use the pad direction to determine which internal peer we need.
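A minimal sketch of the direction-based lookup, with illustrative names (not
the actual multiqueue code):

```c
#include <gst/gst.h>

static GstPad *
get_internal_peer_sketch (GstPad * pad, GstPad * sq_sinkpad, GstPad * sq_srcpad)
{
  /* The single-queue pads live as long as @pad, so no lock is needed;
   * the pad's direction tells us which internal peer to hand out. */
  if (gst_pad_get_direction (pad) == GST_PAD_SINK)
    return gst_object_ref (sq_srcpad);

  return gst_object_ref (sq_sinkpad);
}
```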
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/888>
Due to the dynamic nature of multiqueue, when `use-interleave` is used we can't
report a maximum tolerated latency (when queried) since it is calculated
dynamically.
In such live pipelines, we need to make sure multiqueue can handle the
lowest global latency (provided by this event). Failing to do so would
result in not providing enough buffering for a realtime pipeline.
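A minimal sketch of what that means, assuming the event in question is the
LATENCY event (illustrative, not the actual multiqueue code): the announced
latency becomes a lower bound for the interleave the queues must be able to
absorb.

```c
#include <gst/gst.h>

static void
handle_latency_event_sketch (GstEvent * event, GstClockTime * min_interleave)
{
  GstClockTime latency;

  if (GST_EVENT_TYPE (event) != GST_EVENT_LATENCY)
    return;

  gst_event_parse_latency (event, &latency);

  /* Never buffer less than the pipeline's global latency. */
  if (GST_CLOCK_TIME_IS_VALID (latency) && latency > *min_interleave)
    *min_interleave = latency;
}
```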
Fixes #1732
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3772>
Pads are activated automatically when they are added if the element
state is >=PAUSED, so it's not necessary to activate them manually
anymore.
This patch removes manual pad activation from gstaggregator, gstconcat,
gstfunnel, and gstinputselector.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3636>
Commit d3a66f9851ea introduced a potential deadlock with two parallel release_pad
calls, where one could release the main multiqueue lock (qlock) while still
holding the reconf_lock and then calling other routines which in some conditions
may try to acquire qlock again. The second release_pad could already acquire the
qlock and then start waiting on reconf_lock, which may never be possible because
the first one isn't releasing it until it can acquire qlock.
Fix it by holding reconf_lock for the whole duration qlock is held, making this
particular deadlock impossible.
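A minimal sketch of the resulting lock ordering (mutex names follow the commit
message, the surrounding code is illustrative): qlock is only ever taken while
reconf_lock is already held, so a thread waiting on reconf_lock can never be
holding qlock.

```c
#include <glib.h>

static void
release_pad_sketch (GMutex * reconf_lock, GMutex * qlock)
{
  /* Consistent ordering: reconf_lock first, and it stays held for the
   * whole time qlock is held. */
  g_mutex_lock (reconf_lock);
  g_mutex_lock (qlock);

  /* ... reconfigure and remove the pad ... */

  g_mutex_unlock (qlock);
  g_mutex_unlock (reconf_lock);
}
```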
Fixes #1642
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3571>