
media updates for v4.20-rc1

-----BEGIN PGP SIGNATURE-----
 
 iQIcBAABAgAGBQJb2F9AAAoJEAhfPr2O5OEVm5YP/Ak53aAEI1oJNequwdTYKc+/
 2xWRpYWREa1g+x4MlqWO+szlPWbGXCUVwye8ii2N/xihLapsKVrLCr/dDd5khsvw
 bDux33BzpU3Ug/ncQKD6ZZv4vVRzG8DMPcpkOwSs0OoboJns6AkHVGCugR32qZsH
 3SH/r1aJce0oK1rrzgbYYZHTvaPshvY2IOLPKrtFmO+73iCVRhpSdWjFsY+q2Alp
 +3Ho/06iQYB2i+enXrwoIKHAYoXArXYbxS2dhaNz+NURrOAytmgfMisvvt67heHx
 IEilE0AcSjjlN/eyOxp+WCZrg9JLXVzZLX6ZnqqM2OEu1AS/XBultJBsGaN0hOiV
 dir2enoHNNOStI40hNSdbumg9I0Txmag2jtpaGyaBnnGmGRJ/JIYegCPRVMLygAf
 HHFHjR4fnRnqZrlh9OGAHaqc9RNlUgFVdlyqFtdyIah+aNeuij3o69mWM35QMLhw
 /0dTXBUXw9aD1dEg1cZ6PdzLWJgDd7n1gIdfzzzzLnzmBwmmhqxW8+evu9qSAXsP
 rnEZuE77HYKVfiacWMwpZK6+lT51STAE8ouo3N8fmaC+4RQmpq0dYXtR8RnlcSUD
 hKpJ6UsIIb5A6xKX7ed8x6FxV14TEEaa042A4eclxsAFiqqkNfWSozqV0vfW5vCD
 2lrsuN3knpfh7XDBSr0y
 =V4X4
 -----END PGP SIGNATURE-----

Merge tag 'media/v4.20-2' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media

Pull new experimental media request API from Mauro Carvalho Chehab:
 "A new media request API

  This API is needed to support device drivers that can dynamically
   change their parameters for each new frame. The latest versions of the
   Google camera and codec HALs depend on such a feature.

  At this stage, it supports only stateless codecs.

   It has been discussed for a long time (at least over the last 3-4
   years), and we finally reached something that seems to work.

   This series contains both the API and core changes required to support
   it, and a new m2m decoder driver (cedrus).

   As the current API is still experimental, the only real driver using
   it (cedrus) was added in staging[1]. We intend to keep it there for a
   while, in order to test the API. Only when we're sure that this API
   works for other cases (like encoders) will we move this driver out of
   staging and set the API in stone.

   [1] We added support for the vivid virtual driver (used only for
   testing) to it too, as it makes it easier to test the API for those
   who don't have the cedrus hardware"

* tag 'media/v4.20-2' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media: (53 commits)
  media: dt-bindings: Document the Rockchip VPU bindings
  media: platform: Add Cedrus VPU decoder driver
  media: dt-bindings: media: Document bindings for the Cedrus VPU driver
  media: v4l: Add definition for the Sunxi tiled NV12 format
  media: v4l: Add definitions for MPEG-2 slice format and metadata
  media: videobuf2-core: Rework and rename helper for request buffer count
  media: v4l2-ctrls.c: initialize an error return code with zero
  media: v4l2-compat-ioctl32.c: add missing documentation for a field
  media: media-request: update documentation
  media: media-request: EPERM -> EACCES/EBUSY
  media: v4l2-ctrls: improve media_request_(un)lock_for_update
  media: v4l2-ctrls: use media_request_(un)lock_for_access
  media: media-request: add media_request_(un)lock_for_access
  media: vb2: set reqbufs/create_bufs capabilities
  media: videodev2.h: add new capabilities for buffer types
  media: buffer.rst: only set V4L2_BUF_FLAG_REQUEST_FD for QBUF
  media: v4l2-ctrls: return -EACCES if request wasn't completed
  media: media-request: return -EINVAL for invalid request_fds
  media: vivid: add request support
  media: vivid: add mc
  ...
Linus Torvalds 2018-10-31 10:53:29 -07:00
commit b3491d8430
102 changed files with 6165 additions and 384 deletions


@@ -0,0 +1,54 @@
Device-tree bindings for the VPU found in Allwinner SoCs, referred to as the
Video Engine (VE) in Allwinner literature.
The VPU can only access the first 256 MiB of DRAM, which are DMA-mapped starting
from the DRAM base. This requires specific memory allocation and handling.
Required properties:
- compatible : must be one of the following compatibles:
- "allwinner,sun4i-a10-video-engine"
- "allwinner,sun5i-a13-video-engine"
- "allwinner,sun7i-a20-video-engine"
- "allwinner,sun8i-a33-video-engine"
- "allwinner,sun8i-h3-video-engine"
- reg : register base and length of VE;
- clocks : list of clock specifiers, corresponding to entries in
the clock-names property;
- clock-names : should contain "ahb", "mod" and "ram" entries;
- resets : phandle for reset;
- interrupts : VE interrupt number;
- allwinner,sram : SRAM region to use with the VE.
Optional properties:
- memory-region : CMA pool to use for buffers allocation instead of the
default CMA pool.
Example:
reserved-memory {
#address-cells = <1>;
#size-cells = <1>;
ranges;
/* Address must be kept in the lower 256 MiBs of DRAM for VE. */
cma_pool: cma@4a000000 {
compatible = "shared-dma-pool";
size = <0x6000000>;
alloc-ranges = <0x4a000000 0x6000000>;
reusable;
linux,cma-default;
};
};
video-codec@1c0e000 {
compatible = "allwinner,sun7i-a20-video-engine";
reg = <0x01c0e000 0x1000>;
clocks = <&ccu CLK_AHB_VE>, <&ccu CLK_VE>,
<&ccu CLK_DRAM_VE>;
clock-names = "ahb", "mod", "ram";
resets = <&ccu RST_VE>;
interrupts = <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>;
allwinner,sram = <&ve_sram 1>;
};


@@ -0,0 +1,29 @@
Device-tree bindings for Rockchip VPU codec
The Rockchip VPU (Video Processing Unit) is present in various Rockchip
platforms, such as the RK3288 and RK3399.
Required properties:
- compatible: value should be one of the following
"rockchip,rk3288-vpu";
"rockchip,rk3399-vpu";
- interrupts: encoding and decoding interrupt specifiers
- interrupt-names: should be "vepu" and "vdpu"
- clocks: phandle to VPU aclk, hclk clocks
- clock-names: should be "aclk" and "hclk"
- power-domains: phandle to power domain node
- iommus: phandle to an iommu node
Example:
SoC-specific DT entry:
vpu: video-codec@ff9a0000 {
compatible = "rockchip,rk3288-vpu";
reg = <0x0 0xff9a0000 0x0 0x800>;
interrupts = <GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "vepu", "vdpu";
clocks = <&cru ACLK_VCODEC>, <&cru HCLK_VCODEC>;
clock-names = "aclk", "hclk";
power-domains = <&power RK3288_PD_VIDEO>;
iommus = <&vpu_mmu>;
};


@@ -262,3 +262,5 @@ in the end provide a way to use driver-specific callbacks.
.. kernel-doc:: include/media/media-devnode.h
.. kernel-doc:: include/media/media-entity.h
.. kernel-doc:: include/media/media-request.h


@@ -21,6 +21,7 @@ Part IV - Media Controller API
media-controller-intro
media-controller-model
media-types
request-api
media-funcs
media-header


@@ -16,3 +16,9 @@ Function Reference
media-ioc-enum-entities
media-ioc-enum-links
media-ioc-setup-link
media-ioc-request-alloc
request-func-close
request-func-ioctl
request-func-poll
media-request-ioc-queue
media-request-ioc-reinit


@@ -0,0 +1,66 @@
.. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections
.. _media_ioc_request_alloc:
*****************************
ioctl MEDIA_IOC_REQUEST_ALLOC
*****************************
Name
====
MEDIA_IOC_REQUEST_ALLOC - Allocate a request
Synopsis
========
.. c:function:: int ioctl( int fd, MEDIA_IOC_REQUEST_ALLOC, int *argp )
:name: MEDIA_IOC_REQUEST_ALLOC
Arguments
=========
``fd``
File descriptor returned by :ref:`open() <media-func-open>`.
``argp``
Pointer to an integer.
Description
===========
If the media device supports :ref:`requests <media-request-api>`, then
this ioctl can be used to allocate a request. If it is not supported, then
``errno`` is set to ``ENOTTY``. A request is accessed through a file descriptor
that is returned in ``*argp``.
If the request was successfully allocated, then the request file descriptor
can be passed to the :ref:`VIDIOC_QBUF <VIDIOC_QBUF>`,
:ref:`VIDIOC_G_EXT_CTRLS <VIDIOC_G_EXT_CTRLS>`,
:ref:`VIDIOC_S_EXT_CTRLS <VIDIOC_G_EXT_CTRLS>` and
:ref:`VIDIOC_TRY_EXT_CTRLS <VIDIOC_G_EXT_CTRLS>` ioctls.
In addition, the request can be queued by calling
:ref:`MEDIA_REQUEST_IOC_QUEUE` and re-initialized by calling
:ref:`MEDIA_REQUEST_IOC_REINIT`.
Finally, the file descriptor can be :ref:`polled <request-func-poll>` to wait
for the request to complete.
The request will remain allocated until all the file descriptors associated
with it are closed by :ref:`close() <request-func-close>` and the driver no
longer uses the request internally. See also
:ref:`here <media-request-life-time>` for more information.
Return Value
============
On success 0 is returned, on error -1 and the ``errno`` variable is set
appropriately. The generic error codes are described at the
:ref:`Generic Error Codes <gen-errors>` chapter.
ENOTTY
The driver has no support for requests.


@@ -0,0 +1,78 @@
.. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections
.. _media_request_ioc_queue:
*****************************
ioctl MEDIA_REQUEST_IOC_QUEUE
*****************************
Name
====
MEDIA_REQUEST_IOC_QUEUE - Queue a request
Synopsis
========
.. c:function:: int ioctl( int request_fd, MEDIA_REQUEST_IOC_QUEUE )
:name: MEDIA_REQUEST_IOC_QUEUE
Arguments
=========
``request_fd``
File descriptor returned by :ref:`MEDIA_IOC_REQUEST_ALLOC`.
Description
===========
If the media device supports :ref:`requests <media-request-api>`, then
this request ioctl can be used to queue a previously allocated request.
If the request was successfully queued, then the file descriptor can be
:ref:`polled <request-func-poll>` to wait for the request to complete.
If the request was already queued before, then ``EBUSY`` is returned.
Other errors can be returned if the contents of the request contained
invalid or inconsistent data, see the next section for a list of
common error codes. On error both the request and driver state are unchanged.
Once a request is queued, then the driver is required to gracefully handle
errors that occur when the request is applied to the hardware. The
exception is the ``EIO`` error which signals a fatal error that requires
the application to stop streaming to reset the hardware state.
It is not allowed to mix queuing requests with queuing buffers directly
(without a request). ``EBUSY`` will be returned if the first buffer was
queued directly and you next try to queue a request, or vice versa.
A request must contain at least one buffer, otherwise this ioctl will
return an ``ENOENT`` error.
Return Value
============
On success 0 is returned, on error -1 and the ``errno`` variable is set
appropriately. The generic error codes are described at the
:ref:`Generic Error Codes <gen-errors>` chapter.
EBUSY
The request was already queued or the application queued the first
buffer directly, but later attempted to use a request. It is not permitted
to mix the two APIs.
ENOENT
The request did not contain any buffers. All requests are required
to have at least one buffer. This can also be returned if some required
configuration is missing in the request.
ENOMEM
Out of memory when allocating internal data structures for this
request.
EINVAL
The request has invalid data.
EIO
The hardware is in a bad state. To recover, the application needs to
stop streaming to reset the hardware state and then try to restart
streaming.


@@ -0,0 +1,51 @@
.. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections
.. _media_request_ioc_reinit:
******************************
ioctl MEDIA_REQUEST_IOC_REINIT
******************************
Name
====
MEDIA_REQUEST_IOC_REINIT - Re-initialize a request
Synopsis
========
.. c:function:: int ioctl( int request_fd, MEDIA_REQUEST_IOC_REINIT )
:name: MEDIA_REQUEST_IOC_REINIT
Arguments
=========
``request_fd``
File descriptor returned by :ref:`MEDIA_IOC_REQUEST_ALLOC`.
Description
===========
If the media device supports :ref:`requests <media-request-api>`, then
this request ioctl can be used to re-initialize a previously allocated
request.
Re-initializing a request will clear any existing data from the request.
This avoids having to :ref:`close() <request-func-close>` a completed
request and allocate a new request. Instead the completed request can just
be re-initialized and it is ready to be used again.
A request can only be re-initialized if it either has not been queued
yet, or if it was queued and completed. Otherwise it will set ``errno``
to ``EBUSY``. No other error codes can be returned.
Return Value
============
On success 0 is returned, on error -1 and the ``errno`` variable is set
appropriately.
EBUSY
The request is queued but not yet completed.


@@ -0,0 +1,252 @@
.. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections
.. _media-request-api:
Request API
===========
The Request API has been designed to allow V4L2 to deal with requirements of
modern devices (stateless codecs, complex camera pipelines, ...) and APIs
(Android Codec v2). One such requirement is the ability for devices belonging to
the same pipeline to reconfigure and collaborate closely on a per-frame basis.
Another is support of stateless codecs, which require controls to be applied
to specific frames (aka 'per-frame controls') in order to be used efficiently.
While the initial use-case was V4L2, it can be extended to other subsystems
as well, as long as they use the media controller.
Supporting these features without the Request API is not always possible and if
it is, it is terribly inefficient: user-space would have to flush all activity
on the media pipeline, reconfigure it for the next frame, queue the buffers to
be processed with that configuration, and wait until they are all available for
dequeuing before considering the next frame. This defeats the purpose of having
buffer queues since in practice only one buffer would be queued at a time.
The Request API allows a specific configuration of the pipeline (media
controller topology + configuration for each media entity) to be associated with
specific buffers. This allows user-space to schedule several tasks ("requests")
with different configurations in advance, knowing that the configuration will be
applied when needed to get the expected result. Configuration values at the time
of request completion are also available for reading.
Usage
=====
The Request API extends the Media Controller API and cooperates with
subsystem-specific APIs to support request usage. At the Media Controller
level, requests are allocated from the supporting Media Controller device
node. Their life cycle is then managed through the request file descriptors in
an opaque way. Configuration data, buffer handles and processing results
stored in requests are accessed through subsystem-specific APIs extended for
request support, such as V4L2 APIs that take an explicit ``request_fd``
parameter.
Request Allocation
------------------
User-space allocates requests using :ref:`MEDIA_IOC_REQUEST_ALLOC`
for the media device node. This returns a file descriptor representing the
request. Typically, several such requests will be allocated.
Request Preparation
-------------------
Standard V4L2 ioctls can then receive a request file descriptor to express the
fact that the ioctl is part of said request, and is not to be applied
immediately. See :ref:`MEDIA_IOC_REQUEST_ALLOC` for a list of ioctls that
support this. Configurations set with a ``request_fd`` parameter are stored
instead of being immediately applied, and buffers queued to a request do not
enter the regular buffer queue until the request itself is queued.
Request Submission
------------------
Once the configuration and buffers of the request are specified, it can be
queued by calling :ref:`MEDIA_REQUEST_IOC_QUEUE` on the request file descriptor.
A request must contain at least one buffer, otherwise ``ENOENT`` is returned.
A queued request cannot be modified anymore.
.. caution::
For :ref:`memory-to-memory devices <codec>` you can use requests only for
output buffers, not for capture buffers. Attempting to add a capture buffer
to a request will result in an ``EACCES`` error.
If the request contains configurations for multiple entities, individual drivers
may synchronize so the requested pipeline's topology is applied before the
buffers are processed. Media controller drivers do a best effort implementation
since perfect atomicity may not be possible due to hardware limitations.
.. caution::
It is not allowed to mix queuing requests with directly queuing buffers:
whichever method is used first locks this in place until
:ref:`VIDIOC_STREAMOFF <VIDIOC_STREAMON>` is called or the device is
:ref:`closed <func-close>`. Attempts to directly queue a buffer when earlier
a buffer was queued via a request or vice versa will result in an ``EBUSY``
error.
Controls can still be set without a request and are applied immediately,
regardless of whether a request is in use or not.
.. caution::
Setting the same control through a request and also directly can lead to
undefined behavior!
User-space can :ref:`poll() <request-func-poll>` a request file descriptor in
order to wait until the request completes. A request is considered complete
once all its associated buffers are available for dequeuing and all the
associated controls have been updated with the values at the time of completion.
Note that user-space does not need to wait for the request to complete to
dequeue its buffers: buffers that are available halfway through a request can
be dequeued independently of the request's state.
A completed request contains the state of the device after the request was
executed. User-space can query that state by calling
:ref:`ioctl VIDIOC_G_EXT_CTRLS <VIDIOC_G_EXT_CTRLS>` with the request file
descriptor. Calling :ref:`ioctl VIDIOC_G_EXT_CTRLS <VIDIOC_G_EXT_CTRLS>` for a
request that has been queued but not yet completed will return ``EBUSY``
since the control values might be changed at any time by the driver while the
request is in flight.
.. _media-request-life-time:
Recycling and Destruction
-------------------------
Finally, a completed request can either be discarded or be reused. Calling
:ref:`close() <request-func-close>` on a request file descriptor will make
that file descriptor unusable and the request will be freed once it is no
longer in use by the kernel. That is, if the request is queued and then the
file descriptor is closed, then it won't be freed until the driver has
completed the request.
The :ref:`MEDIA_REQUEST_IOC_REINIT` ioctl will clear a request's state and make it
available again. No state is retained by this operation: the request is as
if it had just been allocated.
Example for a Codec Device
--------------------------
For use-cases such as :ref:`codecs <codec>`, the request API can be used
to associate specific controls to
be applied by the driver for the OUTPUT buffer, allowing user-space
to queue many such buffers in advance. It can also take advantage of requests'
ability to capture the state of controls when the request completes to read back
information that may be subject to change.
Put into code, after obtaining a request, user-space can assign controls and one
OUTPUT buffer to it:
.. code-block:: c
struct v4l2_buffer buf;
struct v4l2_ext_controls ctrls;
int req_fd;
...
if (ioctl(media_fd, MEDIA_IOC_REQUEST_ALLOC, &req_fd))
return errno;
...
ctrls.which = V4L2_CTRL_WHICH_REQUEST_VAL;
ctrls.request_fd = req_fd;
if (ioctl(codec_fd, VIDIOC_S_EXT_CTRLS, &ctrls))
return errno;
...
buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
buf.flags |= V4L2_BUF_FLAG_REQUEST_FD;
buf.request_fd = req_fd;
if (ioctl(codec_fd, VIDIOC_QBUF, &buf))
return errno;
Note that it is not allowed to use the Request API for CAPTURE buffers
since there are no per-frame settings to report there.
Once the request is fully prepared, it can be queued to the driver:
.. code-block:: c
if (ioctl(req_fd, MEDIA_REQUEST_IOC_QUEUE))
return errno;
User-space can then either wait for the request to complete by calling poll() on
its file descriptor, or start dequeuing CAPTURE buffers. Most likely, it will
want to get CAPTURE buffers as soon as possible and this can be done using a
regular :ref:`VIDIOC_DQBUF <VIDIOC_QBUF>`:
.. code-block:: c
struct v4l2_buffer buf;
memset(&buf, 0, sizeof(buf));
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
if (ioctl(codec_fd, VIDIOC_DQBUF, &buf))
return errno;
Note that this example assumes for simplicity that for every OUTPUT buffer
there will be one CAPTURE buffer, but this does not have to be the case.
We can then, after ensuring that the request is completed via polling the
request file descriptor, query control values at the time of its completion via
a call to :ref:`VIDIOC_G_EXT_CTRLS <VIDIOC_G_EXT_CTRLS>`.
This is particularly useful for volatile controls for which we want to
query values as soon as the capture buffer is produced.
.. code-block:: c
struct pollfd pfd = { .events = POLLPRI, .fd = req_fd };
poll(&pfd, 1, -1);
...
ctrls.which = V4L2_CTRL_WHICH_REQUEST_VAL;
ctrls.request_fd = req_fd;
if (ioctl(codec_fd, VIDIOC_G_EXT_CTRLS, &ctrls))
return errno;
Once we don't need the request anymore, we can either recycle it for reuse with
:ref:`MEDIA_REQUEST_IOC_REINIT`...
.. code-block:: c
if (ioctl(req_fd, MEDIA_REQUEST_IOC_REINIT))
return errno;
... or close its file descriptor to completely dispose of it.
.. code-block:: c
close(req_fd);
Example for a Simple Capture Device
-----------------------------------
With a simple capture device, requests can be used to specify controls to apply
for a given CAPTURE buffer.
.. code-block:: c
struct v4l2_buffer buf;
struct v4l2_ext_controls ctrls;
int req_fd;
...
if (ioctl(media_fd, MEDIA_IOC_REQUEST_ALLOC, &req_fd))
return errno;
...
ctrls.which = V4L2_CTRL_WHICH_REQUEST_VAL;
ctrls.request_fd = req_fd;
if (ioctl(camera_fd, VIDIOC_S_EXT_CTRLS, &ctrls))
return errno;
...
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buf.flags |= V4L2_BUF_FLAG_REQUEST_FD;
buf.request_fd = req_fd;
if (ioctl(camera_fd, VIDIOC_QBUF, &buf))
return errno;
Once the request is fully prepared, it can be queued to the driver:
.. code-block:: c
if (ioctl(req_fd, MEDIA_REQUEST_IOC_QUEUE))
return errno;
User-space can then dequeue buffers, wait for the request completion, query
controls and recycle the request as in the M2M example above.


@@ -0,0 +1,49 @@
.. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections
.. _request-func-close:
***************
request close()
***************
Name
====
request-close - Close a request file descriptor
Synopsis
========
.. code-block:: c
#include <unistd.h>
.. c:function:: int close( int fd )
:name: req-close
Arguments
=========
``fd``
File descriptor returned by :ref:`MEDIA_IOC_REQUEST_ALLOC`.
Description
===========
Closes the request file descriptor. Resources associated with the request
are freed once all file descriptors associated with the request are closed
and the driver has completed the request.
See :ref:`here <media-request-life-time>` for more information.
Return Value
============
:ref:`close() <request-func-close>` returns 0 on success. On error, -1 is
returned, and ``errno`` is set appropriately. Possible error codes are:
EBADF
``fd`` is not a valid open file descriptor.


@@ -0,0 +1,67 @@
.. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections
.. _request-func-ioctl:
***************
request ioctl()
***************
Name
====
request-ioctl - Control a request file descriptor
Synopsis
========
.. code-block:: c
#include <sys/ioctl.h>
.. c:function:: int ioctl( int fd, int cmd, void *argp )
:name: req-ioctl
Arguments
=========
``fd``
File descriptor returned by :ref:`MEDIA_IOC_REQUEST_ALLOC`.
``cmd``
The request ioctl command code as defined in the media.h header file, for
example :ref:`MEDIA_REQUEST_IOC_QUEUE`.
``argp``
Pointer to a request-specific structure.
Description
===========
The :ref:`ioctl() <request-func-ioctl>` function manipulates request
parameters. The argument ``fd`` must be an open file descriptor.
The ioctl ``cmd`` code specifies the request function to be called. It
has encoded in it whether the argument is an input, output or read/write
parameter, and the size of the argument ``argp`` in bytes.
Macros and structures definitions specifying request ioctl commands and
their parameters are located in the media.h header file. All request ioctl
commands, their respective function and parameters are specified in
:ref:`media-user-func`.
Return Value
============
On success 0 is returned, on error -1 and the ``errno`` variable is set
appropriately. The generic error codes are described at the
:ref:`Generic Error Codes <gen-errors>` chapter.
Command-specific error codes are listed in the individual command
descriptions.
When an ioctl that takes an output or read/write parameter fails, the
parameter remains unmodified.


@@ -0,0 +1,77 @@
.. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections
.. _request-func-poll:
**************
request poll()
**************
Name
====
request-poll - Wait for some event on a file descriptor
Synopsis
========
.. code-block:: c
#include <sys/poll.h>
.. c:function:: int poll( struct pollfd *ufds, unsigned int nfds, int timeout )
:name: request-poll
Arguments
=========
``ufds``
List of file descriptor events to be watched
``nfds``
Number of file descriptor events in the \*ufds array
``timeout``
Timeout to wait for events
Description
===========
With the :c:func:`poll() <request-func-poll>` function applications can wait
for a request to complete.
On success :c:func:`poll() <request-func-poll>` returns the number of file
descriptors that have been selected (that is, file descriptors for which the
``revents`` field of the respective struct :c:type:`pollfd`
is non-zero). Request file descriptors set the ``POLLPRI`` flag in ``revents``
when the request has completed. When the function times out it returns
a value of zero, on failure it returns -1 and the ``errno`` variable is
set appropriately.
Attempting to poll for a request that is not yet queued will
set the ``POLLERR`` flag in ``revents``.
Return Value
============
On success, :c:func:`poll() <request-func-poll>` returns the number of
structures which have non-zero ``revents`` fields, or zero if the call
timed out. On error -1 is returned, and the ``errno`` variable is set
appropriately:
``EBADF``
One or more of the ``ufds`` members specify an invalid file
descriptor.
``EFAULT``
``ufds`` references an inaccessible memory area.
``EINTR``
The call was interrupted by a signal.
``EINVAL``
The ``nfds`` value exceeds the ``RLIMIT_NOFILE`` value. Use
``getrlimit()`` to obtain this value.


@@ -306,10 +306,23 @@ struct v4l2_buffer
- A place holder for future extensions. Drivers and applications
must set this to 0.
* - __u32
- ``request_fd``
- The file descriptor of the request to queue the buffer to. If the flag
``V4L2_BUF_FLAG_REQUEST_FD`` is set, then the buffer will be
queued to this request. If the flag is not set, then this field will
be ignored.
The ``V4L2_BUF_FLAG_REQUEST_FD`` flag and this field are only used by
:ref:`ioctl VIDIOC_QBUF <VIDIOC_QBUF>` and ignored by other ioctls that
take a :c:type:`v4l2_buffer` as argument.
Applications should not set ``V4L2_BUF_FLAG_REQUEST_FD`` for any ioctls
other than :ref:`VIDIOC_QBUF <VIDIOC_QBUF>`.
If the device does not support requests, then ``EACCES`` will be returned.
If requests are supported but an invalid request file descriptor is
given, then ``EINVAL`` will be returned.
@@ -514,6 +527,11 @@ Buffer Flags
streaming may continue as normal and the buffer may be reused
normally. Drivers set this flag when the ``VIDIOC_DQBUF`` ioctl is
called.
* .. _`V4L2-BUF-FLAG-IN-REQUEST`:
- ``V4L2_BUF_FLAG_IN_REQUEST``
- 0x00000080
- This buffer is part of a request that hasn't been queued yet.
* .. _`V4L2-BUF-FLAG-KEYFRAME`:
- ``V4L2_BUF_FLAG_KEYFRAME``
@@ -589,6 +607,11 @@ Buffer Flags
the format. Any subsequent call to the
:ref:`VIDIOC_DQBUF <VIDIOC_QBUF>` ioctl will not block anymore,
but return an ``EPIPE`` error code.
* .. _`V4L2-BUF-FLAG-REQUEST-FD`:
- ``V4L2_BUF_FLAG_REQUEST_FD``
- 0x00800000
- The ``request_fd`` field contains a valid file descriptor.
* .. _`V4L2-BUF-FLAG-TIMESTAMP-MASK`:
- ``V4L2_BUF_FLAG_TIMESTAMP_MASK``


@@ -1497,6 +1497,182 @@ enum v4l2_mpeg_video_h264_hierarchical_coding_type -
.. _v4l2-mpeg-mpeg2:
``V4L2_CID_MPEG_VIDEO_MPEG2_SLICE_PARAMS (struct)``
Specifies the slice parameters (as extracted from the bitstream) for the
associated MPEG-2 slice data. This includes the necessary parameters for
configuring a stateless hardware decoding pipeline for MPEG-2.
The bitstream parameters are defined according to :ref:`mpeg2part2`.
.. c:type:: v4l2_ctrl_mpeg2_slice_params
.. cssclass:: longtable
.. flat-table:: struct v4l2_ctrl_mpeg2_slice_params
:header-rows: 0
:stub-columns: 0
:widths: 1 1 2
* - __u32
- ``bit_size``
- Size (in bits) of the current slice data.
* - __u32
- ``data_bit_offset``
- Offset (in bits) to the video data in the current slice data.
* - struct :c:type:`v4l2_mpeg2_sequence`
- ``sequence``
- Structure with MPEG-2 sequence metadata, merging relevant fields from
the sequence header and sequence extension parts of the bitstream.
* - struct :c:type:`v4l2_mpeg2_picture`
- ``picture``
- Structure with MPEG-2 picture metadata, merging relevant fields from
the picture header and picture coding extension parts of the bitstream.
* - __u8
- ``quantiser_scale_code``
- Code used to determine the quantization scale to use for the IDCT.
* - __u8
- ``backward_ref_index``
- Index for the V4L2 buffer to use as backward reference, used with
B-coded and P-coded frames.
* - __u8
- ``forward_ref_index``
- Index for the V4L2 buffer to use as forward reference, used with
B-coded frames.
.. c:type:: v4l2_mpeg2_sequence
.. cssclass:: longtable
.. flat-table:: struct v4l2_mpeg2_sequence
:header-rows: 0
:stub-columns: 0
:widths: 1 1 2
* - __u16
- ``horizontal_size``
- The width of the displayable part of the frame's luminance component.
* - __u16
- ``vertical_size``
- The height of the displayable part of the frame's luminance component.
* - __u32
- ``vbv_buffer_size``
- Used to calculate the required size of the video buffering verifier,
defined (in bits) as: 16 * 1024 * vbv_buffer_size.
* - __u8
- ``profile_and_level_indication``
- The current profile and level indication as extracted from the
bitstream.
* - __u8
- ``progressive_sequence``
- Indication that all the frames for the sequence are progressive instead
of interlaced.
* - __u8
- ``chroma_format``
- The chrominance sub-sampling format (1: 4:2:0, 2: 4:2:2, 3: 4:4:4).
.. c:type:: v4l2_mpeg2_picture
.. cssclass:: longtable
.. flat-table:: struct v4l2_mpeg2_picture
:header-rows: 0
:stub-columns: 0
:widths: 1 1 2
* - __u8
- ``picture_coding_type``
- Picture coding type for the frame covered by the current slice
(V4L2_MPEG2_PICTURE_CODING_TYPE_I, V4L2_MPEG2_PICTURE_CODING_TYPE_P or
V4L2_MPEG2_PICTURE_CODING_TYPE_B).
* - __u8
- ``f_code[2][2]``
- Motion vector codes.
* - __u8
- ``intra_dc_precision``
- Precision of Discrete Cosine transform (0: 8 bits precision,
1: 9 bits precision, 2: 10 bits precision, 3: 11 bits precision).
* - __u8
- ``picture_structure``
- Picture structure (1: interlaced top field, 2: interlaced bottom field,
3: progressive frame).
* - __u8
- ``top_field_first``
- If set to 1 and interlaced stream, top field is output first.
* - __u8
- ``frame_pred_frame_dct``
- If set to 1, only frame-DCT and frame prediction are used.
* - __u8
- ``concealment_motion_vectors``
- If set to 1, motion vectors are coded for intra macroblocks.
* - __u8
- ``q_scale_type``
- This flag affects the inverse quantization process.
* - __u8
- ``intra_vlc_format``
- This flag affects the decoding of transform coefficient data.
* - __u8
- ``alternate_scan``
- This flag affects the decoding of transform coefficient data.
* - __u8
- ``repeat_first_field``
- This flag affects the decoding process of progressive frames.
* - __u8
- ``progressive_frame``
- Indicates whether the current frame is progressive.
``V4L2_CID_MPEG_VIDEO_MPEG2_QUANTIZATION (struct)``
Specifies quantization matrices (as extracted from the bitstream) for the
associated MPEG-2 slice data.
.. c:type:: v4l2_ctrl_mpeg2_quantization
.. cssclass:: longtable
.. flat-table:: struct v4l2_ctrl_mpeg2_quantization
:header-rows: 0
:stub-columns: 0
:widths: 1 1 2
* - __u8
- ``load_intra_quantiser_matrix``
- One bit to indicate whether to load the ``intra_quantiser_matrix`` data.
* - __u8
- ``load_non_intra_quantiser_matrix``
- One bit to indicate whether to load the ``non_intra_quantiser_matrix``
data.
* - __u8
- ``load_chroma_intra_quantiser_matrix``
- One bit to indicate whether to load the
``chroma_intra_quantiser_matrix`` data, only relevant for non-4:2:0 YUV
formats.
* - __u8
- ``load_chroma_non_intra_quantiser_matrix``
- One bit to indicate whether to load the
``chroma_non_intra_quantiser_matrix`` data, only relevant for non-4:2:0
YUV formats.
* - __u8
- ``intra_quantiser_matrix[64]``
- The quantization matrix coefficients for intra-coded frames, in zigzag
scanning order. It is relevant for both luma and chroma components,
although it can be superseded by the chroma-specific matrix for
non-4:2:0 YUV formats.
* - __u8
- ``non_intra_quantiser_matrix[64]``
- The quantization matrix coefficients for non-intra-coded frames, in
zigzag scanning order. It is relevant for both luma and chroma
components, although it can be superseded by the chroma-specific matrix
for non-4:2:0 YUV formats.
* - __u8
- ``chroma_intra_quantiser_matrix[64]``
- The quantization matrix coefficients for the chrominance component of
intra-coded frames, in zigzag scanning order. Only relevant for
non-4:2:0 YUV formats.
* - __u8
- ``chroma_non_intra_quantiser_matrix[64]``
- The quantization matrix coefficients for the chrominance component of
non-intra-coded frames, in zigzag scanning order. Only relevant for
non-4:2:0 YUV formats.
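All four matrices above are expected in zigzag scanning order. As an illustration only (this helper is not part of the V4L2 API), reordering a raster-order 8x8 matrix into zigzag scanning order could be sketched as:

```c
#include <assert.h>

/* Hypothetical helper: reorder a raster-order 8x8 quantization matrix
 * into the zigzag scanning order expected by the
 * (chroma_)(non_)intra_quantiser_matrix[64] fields. Illustrative only.
 */
static void zigzag_reorder(const unsigned char raster[64],
			   unsigned char zigzag[64])
{
	int x = 0, y = 0, i;

	for (i = 0; i < 64; i++) {
		zigzag[i] = raster[y * 8 + x];
		if ((x + y) % 2 == 0) {		/* walking up-right */
			if (x == 7)
				y++;
			else if (y == 0)
				x++;
			else {
				x++;
				y--;
			}
		} else {			/* walking down-left */
			if (y == 7)
				x++;
			else if (x == 0)
				y++;
			else {
				x--;
				y++;
			}
		}
	}
}
```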
MFC 5.1 MPEG Controls
---------------------

View File

@ -60,6 +60,22 @@ Compressed Formats
- ``V4L2_PIX_FMT_MPEG2``
- 'MPG2'
- MPEG2 video elementary stream.
* .. _V4L2-PIX-FMT-MPEG2-SLICE:
- ``V4L2_PIX_FMT_MPEG2_SLICE``
- 'MG2S'
- MPEG-2 parsed slice data, as extracted from the MPEG-2 bitstream.
This format is adapted for stateless video decoders that implement an
MPEG-2 pipeline (using the :ref:`codec` and :ref:`media-request-api`).
Metadata associated with the frame to decode is required to be passed
through the ``V4L2_CID_MPEG_VIDEO_MPEG2_SLICE_PARAMS`` control and
quantization matrices can optionally be specified through the
``V4L2_CID_MPEG_VIDEO_MPEG2_QUANTIZATION`` control.
See the :ref:`associated Codec Control IDs <v4l2-mpeg-mpeg2>`.
Exactly one output and one capture buffer must be provided for use with
this pixel format. The output buffer must contain the appropriate number
of macroblocks to decode a full corresponding frame to the matching
capture buffer.
* .. _V4L2-PIX-FMT-MPEG4:
- ``V4L2_PIX_FMT_MPEG4``

View File

@ -243,7 +243,20 @@ please make a proposal on the linux-media mailing list.
It is an opaque intermediate format and the MDP hardware must be
used to convert ``V4L2_PIX_FMT_MT21C`` to ``V4L2_PIX_FMT_NV12M``,
``V4L2_PIX_FMT_YUV420M`` or ``V4L2_PIX_FMT_YVU420``.
* .. _V4L2-PIX-FMT-SUNXI-TILED-NV12:
- ``V4L2_PIX_FMT_SUNXI_TILED_NV12``
- 'ST12'
- Two-planar NV12-based format used by the video engine found on Allwinner
(codenamed sunxi) platforms, with 32x32 tiles for the luminance plane
and 32x64 tiles for the chrominance plane. The data in each tile is
stored in linear order, within the tile bounds. Each tile follows the
previous one linearly in memory (from left to right, top to bottom).
The associated buffer dimensions are aligned to match an integer number
of tiles, resulting in 32-aligned resolutions for the luminance plane
and 16-aligned resolutions for the chrominance plane (with 2x2
subsampling).
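As a sketch of the tiled layout described above (assuming a luma plane whose aligned width is a multiple of the 32-pixel tile width), the byte offset of a luma sample could be derived as:

```c
#include <stdint.h>
#include <assert.h>

/* Illustrative only: byte offset of luma sample (x, y) in a
 * V4L2_PIX_FMT_SUNXI_TILED_NV12 luma plane. 32x32 tiles are stored
 * linearly within their bounds and laid out left to right, top to
 * bottom; `aligned_width` is the 32-aligned plane width in pixels.
 */
static uint32_t sunxi_luma_offset(uint32_t aligned_width,
				  uint32_t x, uint32_t y)
{
	uint32_t tiles_per_row = aligned_width / 32;
	uint32_t tile = (y / 32) * tiles_per_row + (x / 32);

	/* whole tiles before this one, then the linear offset inside */
	return tile * 32 * 32 + (y % 32) * 32 + (x % 32);
}
```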
.. tabularcolumns:: |p{6.6cm}|p{2.2cm}|p{8.7cm}|

View File

@ -102,7 +102,19 @@ than the number requested.
- ``format``
- Filled in by the application, preserved by the driver.
* - __u32
- ``reserved``\ [8]
- ``capabilities``
- Set by the driver. If 0, then the driver doesn't support
capabilities. In that case all you know is that the driver is
guaranteed to support ``V4L2_MEMORY_MMAP`` and *might* support
other :c:type:`v4l2_memory` types. It will not support any other
capabilities. See :ref:`here <v4l2-buf-capabilities>` for a list of the
capabilities.
If you want to just query the capabilities without making any
other changes, then set ``count`` to 0, ``memory`` to
``V4L2_MEMORY_MMAP`` and ``format.type`` to the buffer type.
* - __u32
- ``reserved``\ [7]
- A placeholder for future extensions. Drivers and applications
must set the array to zero.

View File

@ -95,6 +95,25 @@ appropriate. In the first case the new value is set in struct
is inappropriate (e.g. the given menu index is not supported by the menu
control), then this will also result in an ``EINVAL`` error code.
If ``request_fd`` is set to a not-yet-queued :ref:`request <media-request-api>`
file descriptor and ``which`` is set to ``V4L2_CTRL_WHICH_REQUEST_VAL``,
then the controls are not applied immediately when calling
:ref:`VIDIOC_S_EXT_CTRLS <VIDIOC_G_EXT_CTRLS>`, but instead are applied by
the driver for the buffer associated with the same request.
If the device does not support requests, then ``EACCES`` will be returned.
If requests are supported but an invalid request file descriptor is given,
then ``EINVAL`` will be returned.
An attempt to call :ref:`VIDIOC_S_EXT_CTRLS <VIDIOC_G_EXT_CTRLS>` for a
request that has already been queued will result in an ``EBUSY`` error.
If ``request_fd`` is specified and ``which`` is set to
``V4L2_CTRL_WHICH_REQUEST_VAL`` during a call to
:ref:`VIDIOC_G_EXT_CTRLS <VIDIOC_G_EXT_CTRLS>`, then it will return the
values of the controls at the time of request completion.
If the request is not yet completed, then this will result in an
``EACCES`` error.
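The error rules above can be summarized as follows. This is an illustrative sketch of the documented validation semantics using plain flags in place of real request state, not kernel code:

```c
#include <errno.h>
#include <stdbool.h>
#include <assert.h>

/* Illustrative sketch of the documented request-related error rules for
 * VIDIOC_G/S_EXT_CTRLS with which == V4L2_CTRL_WHICH_REQUEST_VAL. The
 * boolean parameters stand in for state the kernel tracks internally.
 */
static int request_ctrls_check(bool dev_supports_requests,
			       bool request_fd_valid,
			       bool request_queued,
			       bool request_completed,
			       bool is_get)
{
	if (!dev_supports_requests)
		return -EACCES;	/* device does not support requests */
	if (!request_fd_valid)
		return -EINVAL;	/* invalid request file descriptor */
	if (is_get && !request_completed)
		return -EACCES;	/* G_EXT_CTRLS before completion */
	if (!is_get && request_queued)
		return -EBUSY;	/* S_EXT_CTRLS on a queued request */
	return 0;
}
```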
The driver will only set/get these controls if all control values are
correct. This prevents the situation where only some of the controls
were set/get. Only low-level errors (e.g. a failed i2c command) can
@ -209,13 +228,17 @@ still cause this situation.
- ``which``
- Which value of the control to get/set/try.
``V4L2_CTRL_WHICH_CUR_VAL`` will return the current value of the
control and ``V4L2_CTRL_WHICH_DEF_VAL`` will return the default
value of the control.
control, ``V4L2_CTRL_WHICH_DEF_VAL`` will return the default
value of the control and ``V4L2_CTRL_WHICH_REQUEST_VAL`` indicates that
these controls have to be retrieved from a request or tried/set for
a request. In the latter case the ``request_fd`` field contains the
file descriptor of the request that should be used. If the device
does not support requests, then ``EACCES`` will be returned.
.. note::
You can only get the default value of the control,
you cannot set or try it.
When using ``V4L2_CTRL_WHICH_DEF_VAL`` be aware that you can only
get the default value of the control, you cannot set or try it.
For backwards compatibility you can also use a control class here
(see :ref:`ctrl-class`). In that case all controls have to
@ -272,8 +295,15 @@ still cause this situation.
then you can call :ref:`VIDIOC_TRY_EXT_CTRLS <VIDIOC_G_EXT_CTRLS>` to try to discover the
actual control that failed the validation step. Unfortunately,
there is no ``TRY`` equivalent for :ref:`VIDIOC_G_EXT_CTRLS <VIDIOC_G_EXT_CTRLS>`.
* - __s32
- ``request_fd``
- File descriptor of the request to be used by this operation. Only
valid if ``which`` is set to ``V4L2_CTRL_WHICH_REQUEST_VAL``.
If the device does not support requests, then ``EACCES`` will be returned.
If requests are supported but an invalid request file descriptor is
given, then ``EINVAL`` will be returned.
* - __u32
- ``reserved``\ [2]
- ``reserved``\ [1]
- Reserved for future extensions.
Drivers and applications must set the array to zero.
@ -347,11 +377,14 @@ appropriately. The generic error codes are described at the
EINVAL
The struct :c:type:`v4l2_ext_control` ``id`` is
invalid, the struct :c:type:`v4l2_ext_controls`
invalid, or the struct :c:type:`v4l2_ext_controls`
``which`` is invalid, or the struct
:c:type:`v4l2_ext_control` ``value`` was
inappropriate (e.g. the given menu index is not supported by the
driver). This error code is also returned by the
driver), or the ``which`` field was set to ``V4L2_CTRL_WHICH_REQUEST_VAL``
but the given ``request_fd`` was invalid or ``V4L2_CTRL_WHICH_REQUEST_VAL``
is not supported by the kernel.
This error code is also returned by the
:ref:`VIDIOC_S_EXT_CTRLS <VIDIOC_G_EXT_CTRLS>` and :ref:`VIDIOC_TRY_EXT_CTRLS <VIDIOC_G_EXT_CTRLS>` ioctls if two or
more control values are in conflict.
@ -362,7 +395,9 @@ ERANGE
EBUSY
The control is temporarily not changeable, possibly because another
application took over control of the device function this control
belongs to.
belongs to, or (if the ``which`` field was set to
``V4L2_CTRL_WHICH_REQUEST_VAL``) the request was queued but not yet
completed.
ENOSPC
The space reserved for the control's payload is insufficient. The
@ -370,5 +405,9 @@ ENOSPC
and this error code is returned.
EACCES
Attempt to try or set a read-only control or to get a write-only
control.
Attempt to try or set a read-only control, or to get a write-only
control, or to get a control from a request that has not yet been
completed.
Or the ``which`` field was set to ``V4L2_CTRL_WHICH_REQUEST_VAL`` but the
device does not support requests.

View File

@ -65,7 +65,7 @@ To enqueue a :ref:`memory mapped <mmap>` buffer applications set the
with a pointer to this structure the driver sets the
``V4L2_BUF_FLAG_MAPPED`` and ``V4L2_BUF_FLAG_QUEUED`` flags and clears
the ``V4L2_BUF_FLAG_DONE`` flag in the ``flags`` field, or it returns an
EINVAL error code.
``EINVAL`` error code.
To enqueue a :ref:`user pointer <userp>` buffer applications set the
``memory`` field to ``V4L2_MEMORY_USERPTR``, the ``m.userptr`` field to
@ -98,6 +98,28 @@ dequeued, until the :ref:`VIDIOC_STREAMOFF <VIDIOC_STREAMON>` or
:ref:`VIDIOC_REQBUFS` ioctl is called, or until the
device is closed.
The ``request_fd`` field can be used with the ``VIDIOC_QBUF`` ioctl to specify
the file descriptor of a :ref:`request <media-request-api>`, if requests are
in use. Setting it means that the buffer will not be passed to the driver
until the request itself is queued. Also, the driver will apply any
settings associated with the request for this buffer. This field will
be ignored unless the ``V4L2_BUF_FLAG_REQUEST_FD`` flag is set.
If the device does not support requests, then ``EACCES`` will be returned.
If requests are supported but an invalid request file descriptor is given,
then ``EINVAL`` will be returned.
.. caution::
It is not allowed to mix queuing requests with queuing buffers directly.
``EBUSY`` will be returned if the first buffer was queued directly and
then the application tries to queue a request, or vice versa. After
the file descriptor is closed, or after
:ref:`VIDIOC_STREAMOFF <VIDIOC_STREAMON>` or :ref:`VIDIOC_REQBUFS` is
called, this check is reset.
For :ref:`memory-to-memory devices <codec>` you can specify the
``request_fd`` only for output buffers, not for capture buffers. Attempting
to specify this for a capture buffer will result in an ``EACCES`` error.
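The "no mixing" rule above can be sketched as follows; this is illustrative only, with two flags standing in for the per-queue state the kernel keeps internally (compare the ``uses_qbuf``/``uses_requests`` handling in the vb2 core changes further down):

```c
#include <errno.h>
#include <stdbool.h>
#include <assert.h>

/* Illustrative sketch: once a queue has seen a direct VIDIOC_QBUF it
 * rejects request-based queuing with EBUSY, and vice versa, until the
 * state is reset (close, STREAMOFF or REQBUFS).
 */
struct queue_mode {
	bool uses_qbuf;
	bool uses_requests;
};

static int queue_buffer_mode(struct queue_mode *q, bool with_request)
{
	if ((with_request && q->uses_qbuf) ||
	    (!with_request && q->uses_requests))
		return -EBUSY;	/* mixing the two APIs is not allowed */
	if (with_request)
		q->uses_requests = true;
	else
		q->uses_qbuf = true;
	return 0;
}
```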
Applications call the ``VIDIOC_DQBUF`` ioctl to dequeue a filled
(capturing) or displayed (output) buffer from the driver's outgoing
queue. They just set the ``type``, ``memory`` and ``reserved`` fields of
@ -133,7 +155,9 @@ EAGAIN
EINVAL
The buffer ``type`` is not supported, or the ``index`` is out of
bounds, or no buffers have been allocated yet, or the ``userptr`` or
``length`` are invalid.
``length`` are invalid, or the ``V4L2_BUF_FLAG_REQUEST_FD`` flag was
set but the given ``request_fd`` was invalid, or ``m.fd`` was
an invalid DMABUF file descriptor.
EIO
``VIDIOC_DQBUF`` failed due to an internal error. Can also indicate
@ -153,3 +177,12 @@ EPIPE
``VIDIOC_DQBUF`` returns this on an empty capture queue for mem2mem
codecs if a buffer with the ``V4L2_BUF_FLAG_LAST`` was already
dequeued and no new buffers are expected to become available.
EACCES
The ``V4L2_BUF_FLAG_REQUEST_FD`` flag was set but the device does not
support requests for the given buffer type.
EBUSY
The first buffer was queued via a request, but the application now tries
to queue it directly, or vice versa (it is not permitted to mix the two
APIs).

View File

@ -424,8 +424,18 @@ See also the examples in :ref:`control`.
- any
- An unsigned 32-bit valued control ranging from minimum to maximum
inclusive. The step value indicates the increment between values.
* - ``V4L2_CTRL_TYPE_MPEG2_SLICE_PARAMS``
- n/a
- n/a
- n/a
- A struct :c:type:`v4l2_ctrl_mpeg2_slice_params`, containing MPEG-2
slice parameters for stateless video decoders.
* - ``V4L2_CTRL_TYPE_MPEG2_QUANTIZATION``
- n/a
- n/a
- n/a
- A struct :c:type:`v4l2_ctrl_mpeg2_quantization`, containing MPEG-2
quantization matrices for stateless video decoders.
.. tabularcolumns:: |p{6.6cm}|p{2.2cm}|p{8.7cm}|

View File

@ -88,10 +88,50 @@ any DMA in progress, an implicit
``V4L2_MEMORY_DMABUF`` or ``V4L2_MEMORY_USERPTR``. See
:c:type:`v4l2_memory`.
* - __u32
- ``reserved``\ [2]
- ``capabilities``
- Set by the driver. If 0, then the driver doesn't support
capabilities. In that case all you know is that the driver is
guaranteed to support ``V4L2_MEMORY_MMAP`` and *might* support
other :c:type:`v4l2_memory` types. It will not support any other
capabilities.
If you want to query the capabilities with a minimum of side-effects,
then this can be called with ``count`` set to 0, ``memory`` set to
``V4L2_MEMORY_MMAP`` and ``type`` set to the buffer type. This will
free any previously allocated buffers, so this is typically something
that will be done at the start of the application.
* - __u32
- ``reserved``\ [1]
- A placeholder for future extensions. Drivers and applications
must set the array to zero.
.. tabularcolumns:: |p{6.1cm}|p{2.2cm}|p{8.7cm}|
.. _v4l2-buf-capabilities:
.. _V4L2-BUF-CAP-SUPPORTS-MMAP:
.. _V4L2-BUF-CAP-SUPPORTS-USERPTR:
.. _V4L2-BUF-CAP-SUPPORTS-DMABUF:
.. _V4L2-BUF-CAP-SUPPORTS-REQUESTS:
.. cssclass:: longtable
.. flat-table:: V4L2 Buffer Capabilities Flags
:header-rows: 0
:stub-columns: 0
:widths: 3 1 4
* - ``V4L2_BUF_CAP_SUPPORTS_MMAP``
- 0x00000001
- This buffer type supports the ``V4L2_MEMORY_MMAP`` streaming mode.
* - ``V4L2_BUF_CAP_SUPPORTS_USERPTR``
- 0x00000002
- This buffer type supports the ``V4L2_MEMORY_USERPTR`` streaming mode.
* - ``V4L2_BUF_CAP_SUPPORTS_DMABUF``
- 0x00000004
- This buffer type supports the ``V4L2_MEMORY_DMABUF`` streaming mode.
* - ``V4L2_BUF_CAP_SUPPORTS_REQUESTS``
- 0x00000008
- This buffer type supports :ref:`requests <media-request-api>`.
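Given the flag values in the table above, an application could check for request support after :ref:`VIDIOC_REQBUFS` roughly as follows. The constants are duplicated here for illustration; in real code they come from the UAPI header:

```c
#include <stdbool.h>
#include <stdint.h>
#include <assert.h>

/* Flag values as listed in the table above; normally provided by
 * <linux/videodev2.h>. */
#define V4L2_BUF_CAP_SUPPORTS_MMAP	0x00000001
#define V4L2_BUF_CAP_SUPPORTS_USERPTR	0x00000002
#define V4L2_BUF_CAP_SUPPORTS_DMABUF	0x00000004
#define V4L2_BUF_CAP_SUPPORTS_REQUESTS	0x00000008

/* capabilities == 0 means the driver predates this field: only
 * V4L2_MEMORY_MMAP is guaranteed and requests are not supported. */
static bool queue_supports_requests(uint32_t capabilities)
{
	return (capabilities & V4L2_BUF_CAP_SUPPORTS_REQUESTS) != 0;
}
```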
Return Value
============

View File

@ -131,6 +131,8 @@ replace symbol V4L2_CTRL_TYPE_STRING :c:type:`v4l2_ctrl_type`
replace symbol V4L2_CTRL_TYPE_U16 :c:type:`v4l2_ctrl_type`
replace symbol V4L2_CTRL_TYPE_U32 :c:type:`v4l2_ctrl_type`
replace symbol V4L2_CTRL_TYPE_U8 :c:type:`v4l2_ctrl_type`
replace symbol V4L2_CTRL_TYPE_MPEG2_SLICE_PARAMS :c:type:`v4l2_ctrl_type`
replace symbol V4L2_CTRL_TYPE_MPEG2_QUANTIZATION :c:type:`v4l2_ctrl_type`
# V4L2 capability defines
replace define V4L2_CAP_VIDEO_CAPTURE device-capabilities
@ -517,6 +519,7 @@ ignore define V4L2_CTRL_DRIVER_PRIV
ignore define V4L2_CTRL_MAX_DIMS
ignore define V4L2_CTRL_WHICH_CUR_VAL
ignore define V4L2_CTRL_WHICH_DEF_VAL
ignore define V4L2_CTRL_WHICH_REQUEST_VAL
ignore define V4L2_OUT_CAP_CUSTOM_TIMINGS
ignore define V4L2_CID_MAX_CTRLS

View File

@ -671,6 +671,13 @@ L: linux-crypto@vger.kernel.org
S: Maintained
F: drivers/crypto/sunxi-ss/
ALLWINNER VPU DRIVER
M: Maxime Ripard <maxime.ripard@bootlin.com>
M: Paul Kocialkowski <paul.kocialkowski@bootlin.com>
L: linux-media@vger.kernel.org
S: Maintained
F: drivers/staging/media/sunxi/cedrus/
ALPHA PORT
M: Richard Henderson <rth@twiddle.net>
M: Ivan Kokshaysky <ink@jurassic.park.msu.ru>

View File

@ -3,7 +3,8 @@
# Makefile for the kernel multimedia device drivers.
#
media-objs := media-device.o media-devnode.o media-entity.o
media-objs := media-device.o media-devnode.o media-entity.o \
media-request.o
#
# I2C drivers should come before other drivers, otherwise they'll fail

View File

@ -356,6 +356,8 @@ static int __vb2_queue_alloc(struct vb2_queue *q, enum vb2_memory memory,
vb->planes[plane].length = plane_sizes[plane];
vb->planes[plane].min_length = plane_sizes[plane];
}
call_void_bufop(q, init_buffer, vb);
q->bufs[vb->index] = vb;
/* Allocate video buffer memory for the MMAP type */
@ -497,8 +499,9 @@ static int __vb2_queue_free(struct vb2_queue *q, unsigned int buffers)
pr_info(" buf_init: %u buf_cleanup: %u buf_prepare: %u buf_finish: %u\n",
vb->cnt_buf_init, vb->cnt_buf_cleanup,
vb->cnt_buf_prepare, vb->cnt_buf_finish);
pr_info(" buf_queue: %u buf_done: %u\n",
vb->cnt_buf_queue, vb->cnt_buf_done);
pr_info(" buf_queue: %u buf_done: %u buf_request_complete: %u\n",
vb->cnt_buf_queue, vb->cnt_buf_done,
vb->cnt_buf_request_complete);
pr_info(" alloc: %u put: %u prepare: %u finish: %u mmap: %u\n",
vb->cnt_mem_alloc, vb->cnt_mem_put,
vb->cnt_mem_prepare, vb->cnt_mem_finish,
@ -683,7 +686,7 @@ int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory,
}
/*
* Call queue_cancel to clean up any buffers in the PREPARED or
* Call queue_cancel to clean up any buffers in the
* QUEUED state which is possible if buffers were prepared or
* queued without ever calling STREAMON.
*/
@ -930,6 +933,7 @@ void vb2_buffer_done(struct vb2_buffer *vb, enum vb2_buffer_state state)
/* sync buffers */
for (plane = 0; plane < vb->num_planes; ++plane)
call_void_memop(vb, finish, vb->planes[plane].mem_priv);
vb->synced = false;
}
spin_lock_irqsave(&q->done_lock, flags);
@ -942,6 +946,14 @@ void vb2_buffer_done(struct vb2_buffer *vb, enum vb2_buffer_state state)
vb->state = state;
}
atomic_dec(&q->owned_by_drv_count);
if (vb->req_obj.req) {
/* This is not supported at the moment */
WARN_ON(state == VB2_BUF_STATE_REQUEUEING);
media_request_object_unbind(&vb->req_obj);
media_request_object_put(&vb->req_obj);
}
spin_unlock_irqrestore(&q->done_lock, flags);
trace_vb2_buf_done(q, vb);
@ -976,20 +988,19 @@ EXPORT_SYMBOL_GPL(vb2_discard_done);
/*
* __prepare_mmap() - prepare an MMAP buffer
*/
static int __prepare_mmap(struct vb2_buffer *vb, const void *pb)
static int __prepare_mmap(struct vb2_buffer *vb)
{
int ret = 0;
if (pb)
ret = call_bufop(vb->vb2_queue, fill_vb2_buffer,
vb, pb, vb->planes);
ret = call_bufop(vb->vb2_queue, fill_vb2_buffer,
vb, vb->planes);
return ret ? ret : call_vb_qop(vb, buf_prepare, vb);
}
/*
* __prepare_userptr() - prepare a USERPTR buffer
*/
static int __prepare_userptr(struct vb2_buffer *vb, const void *pb)
static int __prepare_userptr(struct vb2_buffer *vb)
{
struct vb2_plane planes[VB2_MAX_PLANES];
struct vb2_queue *q = vb->vb2_queue;
@ -1000,12 +1011,10 @@ static int __prepare_userptr(struct vb2_buffer *vb, const void *pb)
memset(planes, 0, sizeof(planes[0]) * vb->num_planes);
/* Copy relevant information provided by the userspace */
if (pb) {
ret = call_bufop(vb->vb2_queue, fill_vb2_buffer,
vb, pb, planes);
if (ret)
return ret;
}
ret = call_bufop(vb->vb2_queue, fill_vb2_buffer,
vb, planes);
if (ret)
return ret;
for (plane = 0; plane < vb->num_planes; ++plane) {
/* Skip the plane if already verified */
@ -1105,7 +1114,7 @@ err:
/*
* __prepare_dmabuf() - prepare a DMABUF buffer
*/
static int __prepare_dmabuf(struct vb2_buffer *vb, const void *pb)
static int __prepare_dmabuf(struct vb2_buffer *vb)
{
struct vb2_plane planes[VB2_MAX_PLANES];
struct vb2_queue *q = vb->vb2_queue;
@ -1116,12 +1125,10 @@ static int __prepare_dmabuf(struct vb2_buffer *vb, const void *pb)
memset(planes, 0, sizeof(planes[0]) * vb->num_planes);
/* Copy relevant information provided by the userspace */
if (pb) {
ret = call_bufop(vb->vb2_queue, fill_vb2_buffer,
vb, pb, planes);
if (ret)
return ret;
}
ret = call_bufop(vb->vb2_queue, fill_vb2_buffer,
vb, planes);
if (ret)
return ret;
for (plane = 0; plane < vb->num_planes; ++plane) {
struct dma_buf *dbuf = dma_buf_get(planes[plane].m.fd);
@ -1250,9 +1257,10 @@ static void __enqueue_in_driver(struct vb2_buffer *vb)
call_void_vb_qop(vb, buf_queue, vb);
}
static int __buf_prepare(struct vb2_buffer *vb, const void *pb)
static int __buf_prepare(struct vb2_buffer *vb)
{
struct vb2_queue *q = vb->vb2_queue;
enum vb2_buffer_state orig_state = vb->state;
unsigned int plane;
int ret;
@ -1261,26 +1269,31 @@ static int __buf_prepare(struct vb2_buffer *vb, const void *pb)
return -EIO;
}
if (vb->prepared)
return 0;
WARN_ON(vb->synced);
vb->state = VB2_BUF_STATE_PREPARING;
switch (q->memory) {
case VB2_MEMORY_MMAP:
ret = __prepare_mmap(vb, pb);
ret = __prepare_mmap(vb);
break;
case VB2_MEMORY_USERPTR:
ret = __prepare_userptr(vb, pb);
ret = __prepare_userptr(vb);
break;
case VB2_MEMORY_DMABUF:
ret = __prepare_dmabuf(vb, pb);
ret = __prepare_dmabuf(vb);
break;
default:
WARN(1, "Invalid queue type\n");
ret = -EINVAL;
break;
}
if (ret) {
dprintk(1, "buffer preparation failed: %d\n", ret);
vb->state = VB2_BUF_STATE_DEQUEUED;
vb->state = orig_state;
return ret;
}
@ -1288,11 +1301,98 @@ static int __buf_prepare(struct vb2_buffer *vb, const void *pb)
for (plane = 0; plane < vb->num_planes; ++plane)
call_void_memop(vb, prepare, vb->planes[plane].mem_priv);
vb->state = VB2_BUF_STATE_PREPARED;
vb->synced = true;
vb->prepared = true;
vb->state = orig_state;
return 0;
}
static int vb2_req_prepare(struct media_request_object *obj)
{
struct vb2_buffer *vb = container_of(obj, struct vb2_buffer, req_obj);
int ret;
if (WARN_ON(vb->state != VB2_BUF_STATE_IN_REQUEST))
return -EINVAL;
mutex_lock(vb->vb2_queue->lock);
ret = __buf_prepare(vb);
mutex_unlock(vb->vb2_queue->lock);
return ret;
}
static void __vb2_dqbuf(struct vb2_buffer *vb);
static void vb2_req_unprepare(struct media_request_object *obj)
{
struct vb2_buffer *vb = container_of(obj, struct vb2_buffer, req_obj);
mutex_lock(vb->vb2_queue->lock);
__vb2_dqbuf(vb);
vb->state = VB2_BUF_STATE_IN_REQUEST;
mutex_unlock(vb->vb2_queue->lock);
WARN_ON(!vb->req_obj.req);
}
int vb2_core_qbuf(struct vb2_queue *q, unsigned int index, void *pb,
struct media_request *req);
static void vb2_req_queue(struct media_request_object *obj)
{
struct vb2_buffer *vb = container_of(obj, struct vb2_buffer, req_obj);
mutex_lock(vb->vb2_queue->lock);
vb2_core_qbuf(vb->vb2_queue, vb->index, NULL, NULL);
mutex_unlock(vb->vb2_queue->lock);
}
static void vb2_req_unbind(struct media_request_object *obj)
{
struct vb2_buffer *vb = container_of(obj, struct vb2_buffer, req_obj);
if (vb->state == VB2_BUF_STATE_IN_REQUEST)
call_void_bufop(vb->vb2_queue, init_buffer, vb);
}
static void vb2_req_release(struct media_request_object *obj)
{
struct vb2_buffer *vb = container_of(obj, struct vb2_buffer, req_obj);
if (vb->state == VB2_BUF_STATE_IN_REQUEST)
vb->state = VB2_BUF_STATE_DEQUEUED;
}
static const struct media_request_object_ops vb2_core_req_ops = {
.prepare = vb2_req_prepare,
.unprepare = vb2_req_unprepare,
.queue = vb2_req_queue,
.unbind = vb2_req_unbind,
.release = vb2_req_release,
};
bool vb2_request_object_is_buffer(struct media_request_object *obj)
{
return obj->ops == &vb2_core_req_ops;
}
EXPORT_SYMBOL_GPL(vb2_request_object_is_buffer);
unsigned int vb2_request_buffer_cnt(struct media_request *req)
{
struct media_request_object *obj;
unsigned long flags;
unsigned int buffer_cnt = 0;
spin_lock_irqsave(&req->lock, flags);
list_for_each_entry(obj, &req->objects, list)
if (vb2_request_object_is_buffer(obj))
buffer_cnt++;
spin_unlock_irqrestore(&req->lock, flags);
return buffer_cnt;
}
EXPORT_SYMBOL_GPL(vb2_request_buffer_cnt);
int vb2_core_prepare_buf(struct vb2_queue *q, unsigned int index, void *pb)
{
struct vb2_buffer *vb;
@ -1304,8 +1404,12 @@ int vb2_core_prepare_buf(struct vb2_queue *q, unsigned int index, void *pb)
vb->state);
return -EINVAL;
}
if (vb->prepared) {
dprintk(1, "buffer already prepared\n");
return -EINVAL;
}
ret = __buf_prepare(vb, pb);
ret = __buf_prepare(vb);
if (ret)
return ret;
@ -1314,7 +1418,7 @@ int vb2_core_prepare_buf(struct vb2_queue *q, unsigned int index, void *pb)
dprintk(2, "prepare of buffer %d succeeded\n", vb->index);
return ret;
return 0;
}
EXPORT_SYMBOL_GPL(vb2_core_prepare_buf);
@ -1381,7 +1485,8 @@ static int vb2_start_streaming(struct vb2_queue *q)
return ret;
}
int vb2_core_qbuf(struct vb2_queue *q, unsigned int index, void *pb)
int vb2_core_qbuf(struct vb2_queue *q, unsigned int index, void *pb,
struct media_request *req)
{
struct vb2_buffer *vb;
int ret;
@ -1393,13 +1498,57 @@ int vb2_core_qbuf(struct vb2_queue *q, unsigned int index, void *pb)
vb = q->bufs[index];
switch (vb->state) {
case VB2_BUF_STATE_DEQUEUED:
ret = __buf_prepare(vb, pb);
if ((req && q->uses_qbuf) ||
(!req && vb->state != VB2_BUF_STATE_IN_REQUEST &&
q->uses_requests)) {
dprintk(1, "queue in wrong mode (qbuf vs requests)\n");
return -EBUSY;
}
if (req) {
int ret;
q->uses_requests = 1;
if (vb->state != VB2_BUF_STATE_DEQUEUED) {
dprintk(1, "buffer %d not in dequeued state\n",
vb->index);
return -EINVAL;
}
media_request_object_init(&vb->req_obj);
/* Make sure the request is in a safe state for updating. */
ret = media_request_lock_for_update(req);
if (ret)
return ret;
break;
case VB2_BUF_STATE_PREPARED:
ret = media_request_object_bind(req, &vb2_core_req_ops,
q, true, &vb->req_obj);
media_request_unlock_for_update(req);
if (ret)
return ret;
vb->state = VB2_BUF_STATE_IN_REQUEST;
/* Fill buffer information for the userspace */
if (pb) {
call_void_bufop(q, copy_timestamp, vb, pb);
call_void_bufop(q, fill_user_buffer, vb, pb);
}
dprintk(2, "qbuf of buffer %d succeeded\n", vb->index);
return 0;
}
if (vb->state != VB2_BUF_STATE_IN_REQUEST)
q->uses_qbuf = 1;
switch (vb->state) {
case VB2_BUF_STATE_DEQUEUED:
case VB2_BUF_STATE_IN_REQUEST:
if (!vb->prepared) {
ret = __buf_prepare(vb);
if (ret)
return ret;
}
break;
case VB2_BUF_STATE_PREPARING:
dprintk(1, "buffer still being prepared\n");
@ -1600,6 +1749,11 @@ static void __vb2_dqbuf(struct vb2_buffer *vb)
call_void_memop(vb, unmap_dmabuf, vb->planes[i].mem_priv);
vb->planes[i].dbuf_mapped = 0;
}
if (vb->req_obj.req) {
media_request_object_unbind(&vb->req_obj);
media_request_object_put(&vb->req_obj);
}
call_void_bufop(q, init_buffer, vb);
}
int vb2_core_dqbuf(struct vb2_queue *q, unsigned int *pindex, void *pb,
@ -1625,6 +1779,7 @@ int vb2_core_dqbuf(struct vb2_queue *q, unsigned int *pindex, void *pb,
}
call_void_vb_qop(vb, buf_finish, vb);
vb->prepared = false;
if (pindex)
*pindex = vb->index;
@ -1688,6 +1843,8 @@ static void __vb2_queue_cancel(struct vb2_queue *q)
q->start_streaming_called = 0;
q->queued_count = 0;
q->error = 0;
q->uses_requests = 0;
q->uses_qbuf = 0;
/*
* Remove all buffers from videobuf's list...
@ -1712,19 +1869,38 @@ static void __vb2_queue_cancel(struct vb2_queue *q)
*/
for (i = 0; i < q->num_buffers; ++i) {
struct vb2_buffer *vb = q->bufs[i];
struct media_request *req = vb->req_obj.req;
if (vb->state == VB2_BUF_STATE_PREPARED ||
vb->state == VB2_BUF_STATE_QUEUED) {
/*
* If a request is associated with this buffer, then
* call buf_request_complete() to give the driver a chance to complete()
* related request objects. Otherwise those objects would
* never complete.
*/
if (req) {
enum media_request_state state;
unsigned long flags;
spin_lock_irqsave(&req->lock, flags);
state = req->state;
spin_unlock_irqrestore(&req->lock, flags);
if (state == MEDIA_REQUEST_STATE_QUEUED)
call_void_vb_qop(vb, buf_request_complete, vb);
}
if (vb->synced) {
unsigned int plane;
for (plane = 0; plane < vb->num_planes; ++plane)
call_void_memop(vb, finish,
vb->planes[plane].mem_priv);
vb->synced = false;
}
if (vb->state != VB2_BUF_STATE_DEQUEUED) {
vb->state = VB2_BUF_STATE_PREPARED;
if (vb->prepared) {
call_void_vb_qop(vb, buf_finish, vb);
vb->prepared = false;
}
__vb2_dqbuf(vb);
}
@ -2281,7 +2457,7 @@ static int __vb2_init_fileio(struct vb2_queue *q, int read)
* Queue all buffers.
*/
for (i = 0; i < q->num_buffers; i++) {
ret = vb2_core_qbuf(q, i, NULL);
ret = vb2_core_qbuf(q, i, NULL, NULL);
if (ret)
goto err_reqbufs;
fileio->bufs[i].queued = 1;
@ -2460,7 +2636,7 @@ static size_t __vb2_perform_fileio(struct vb2_queue *q, char __user *data, size_
if (copy_timestamp)
b->timestamp = ktime_get_ns();
ret = vb2_core_qbuf(q, index, NULL);
ret = vb2_core_qbuf(q, index, NULL, NULL);
dprintk(5, "vb2_dbuf result: %d\n", ret);
if (ret)
return ret;
@ -2563,7 +2739,7 @@ static int vb2_thread(void *data)
if (copy_timestamp)
vb->timestamp = ktime_get_ns();
if (!threadio->stop)
ret = vb2_core_qbuf(q, vb->index, NULL);
ret = vb2_core_qbuf(q, vb->index, NULL, NULL);
call_void_qop(q, wait_prepare, q);
if (ret || threadio->stop)
break;

View File

@ -25,6 +25,7 @@
#include <linux/kthread.h>
#include <media/v4l2-dev.h>
#include <media/v4l2-device.h>
#include <media/v4l2-fh.h>
#include <media/v4l2-event.h>
#include <media/v4l2-common.h>
@ -40,10 +41,12 @@ module_param(debug, int, 0644);
pr_info("vb2-v4l2: %s: " fmt, __func__, ## arg); \
} while (0)
/* Flags that are set by the vb2 core */
/* Flags that are set by us */
#define V4L2_BUFFER_MASK_FLAGS (V4L2_BUF_FLAG_MAPPED | V4L2_BUF_FLAG_QUEUED | \
V4L2_BUF_FLAG_DONE | V4L2_BUF_FLAG_ERROR | \
V4L2_BUF_FLAG_PREPARED | \
V4L2_BUF_FLAG_IN_REQUEST | \
V4L2_BUF_FLAG_REQUEST_FD | \
V4L2_BUF_FLAG_TIMESTAMP_MASK)
/* Output buffer flags that should be passed on to the driver */
#define V4L2_BUFFER_OUT_FLAGS (V4L2_BUF_FLAG_PFRAME | V4L2_BUF_FLAG_BFRAME | \
@ -118,6 +121,16 @@ static int __verify_length(struct vb2_buffer *vb, const struct v4l2_buffer *b)
return 0;
}
/*
* __init_v4l2_vb2_buffer() - initialize the v4l2_vb2_buffer struct
*/
static void __init_v4l2_vb2_buffer(struct vb2_buffer *vb)
{
struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
vbuf->request_fd = -1;
}
static void __copy_timestamp(struct vb2_buffer *vb, const void *pb)
{
const struct v4l2_buffer *b = pb;
@ -154,9 +167,181 @@ static void vb2_warn_zero_bytesused(struct vb2_buffer *vb)
pr_warn("use the actual size instead.\n");
}
static int vb2_queue_or_prepare_buf(struct vb2_queue *q, struct v4l2_buffer *b,
const char *opname)
static int vb2_fill_vb2_v4l2_buffer(struct vb2_buffer *vb, struct v4l2_buffer *b)
{
struct vb2_queue *q = vb->vb2_queue;
struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
struct vb2_plane *planes = vbuf->planes;
unsigned int plane;
int ret;
ret = __verify_length(vb, b);
if (ret < 0) {
dprintk(1, "plane parameters verification failed: %d\n", ret);
return ret;
}
if (b->field == V4L2_FIELD_ALTERNATE && q->is_output) {
/*
* If the format's field is ALTERNATE, then the buffer's field
* should be either TOP or BOTTOM, not ALTERNATE since that
* makes no sense. The driver has to know whether the
* buffer represents a top or a bottom field in order to
* program any DMA correctly. Using ALTERNATE is wrong, since
* that just says that it is either a top or a bottom field,
* but not which of the two it is.
*/
dprintk(1, "the field is incorrectly set to ALTERNATE for an output buffer\n");
return -EINVAL;
}
vbuf->sequence = 0;
vbuf->request_fd = -1;
if (V4L2_TYPE_IS_MULTIPLANAR(b->type)) {
switch (b->memory) {
case VB2_MEMORY_USERPTR:
for (plane = 0; plane < vb->num_planes; ++plane) {
planes[plane].m.userptr =
b->m.planes[plane].m.userptr;
planes[plane].length =
b->m.planes[plane].length;
}
break;
case VB2_MEMORY_DMABUF:
for (plane = 0; plane < vb->num_planes; ++plane) {
planes[plane].m.fd =
b->m.planes[plane].m.fd;
planes[plane].length =
b->m.planes[plane].length;
}
break;
default:
for (plane = 0; plane < vb->num_planes; ++plane) {
planes[plane].m.offset =
vb->planes[plane].m.offset;
planes[plane].length =
vb->planes[plane].length;
}
break;
}
/* Fill in driver-provided information for OUTPUT types */
if (V4L2_TYPE_IS_OUTPUT(b->type)) {
/*
* Will have to go up to b->length when API starts
* accepting variable number of planes.
*
* If bytesused == 0 for the output buffer, then fall
* back to the full buffer size. In that case
* userspace clearly never bothered to set it and
* it's a safe assumption that they really meant to
* use the full plane sizes.
*
* Some drivers, e.g. old codec drivers, use bytesused == 0
* as a way to indicate that streaming is finished.
* In that case, the driver should use the
* allow_zero_bytesused flag to keep old userspace
* applications working.
*/
for (plane = 0; plane < vb->num_planes; ++plane) {
struct vb2_plane *pdst = &planes[plane];
struct v4l2_plane *psrc = &b->m.planes[plane];
if (psrc->bytesused == 0)
vb2_warn_zero_bytesused(vb);
if (vb->vb2_queue->allow_zero_bytesused)
pdst->bytesused = psrc->bytesused;
else
pdst->bytesused = psrc->bytesused ?
psrc->bytesused : pdst->length;
pdst->data_offset = psrc->data_offset;
}
}
} else {
/*
* Single-planar buffers do not use planes array,
* so fill in relevant v4l2_buffer struct fields instead.
* In videobuf we use our internal v4l2_planes struct for
* single-planar buffers as well, for simplicity.
*
* If bytesused == 0 for the output buffer, then fall back
* to the full buffer size as that's a sensible default.
*
* Some drivers, e.g. old codec drivers, use bytesused == 0 as
* a way to indicate that streaming is finished. In that case,
* the driver should use the allow_zero_bytesused flag to keep
* old userspace applications working.
*/
switch (b->memory) {
case VB2_MEMORY_USERPTR:
planes[0].m.userptr = b->m.userptr;
planes[0].length = b->length;
break;
case VB2_MEMORY_DMABUF:
planes[0].m.fd = b->m.fd;
planes[0].length = b->length;
break;
default:
planes[0].m.offset = vb->planes[0].m.offset;
planes[0].length = vb->planes[0].length;
break;
}
planes[0].data_offset = 0;
if (V4L2_TYPE_IS_OUTPUT(b->type)) {
if (b->bytesused == 0)
vb2_warn_zero_bytesused(vb);
if (vb->vb2_queue->allow_zero_bytesused)
planes[0].bytesused = b->bytesused;
else
planes[0].bytesused = b->bytesused ?
b->bytesused : planes[0].length;
} else
planes[0].bytesused = 0;
}
/* Zero flags that we handle */
vbuf->flags = b->flags & ~V4L2_BUFFER_MASK_FLAGS;
if (!vb->vb2_queue->copy_timestamp || !V4L2_TYPE_IS_OUTPUT(b->type)) {
/*
* Non-COPY timestamps and non-OUTPUT queues will get
* their timestamp and timestamp source flags from the
* queue.
*/
vbuf->flags &= ~V4L2_BUF_FLAG_TSTAMP_SRC_MASK;
}
if (V4L2_TYPE_IS_OUTPUT(b->type)) {
/*
* For output buffers mask out the timecode flag:
* this will be handled later in vb2_qbuf().
* The 'field' is valid metadata for this output buffer
* and so it needs to be copied here.
*/
vbuf->flags &= ~V4L2_BUF_FLAG_TIMECODE;
vbuf->field = b->field;
} else {
/* Zero any output buffer flags as this is a capture buffer */
vbuf->flags &= ~V4L2_BUFFER_OUT_FLAGS;
/* Zero last flag, this is a signal from driver to userspace */
vbuf->flags &= ~V4L2_BUF_FLAG_LAST;
}
return 0;
}
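The bytesused handling described in the comments above boils down to a small policy: for output buffers, a zero bytesused from userspace falls back to the full plane length unless the queue explicitly allows zero (for legacy end-of-stream signalling). A minimal sketch of that policy, using a hypothetical helper name rather than any kernel API:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical sketch of the bytesused fallback policy described in the
 * comments above: if userspace left bytesused at 0 for an output buffer
 * and the queue does not set allow_zero_bytesused, fall back to the full
 * plane length.
 */
static unsigned int effective_bytesused(unsigned int bytesused,
					unsigned int plane_length,
					bool allow_zero_bytesused)
{
	if (allow_zero_bytesused)
		return bytesused;	/* legacy drivers: 0 may mean EOS */
	return bytesused ? bytesused : plane_length;
}
```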
static int vb2_queue_or_prepare_buf(struct vb2_queue *q, struct media_device *mdev,
struct v4l2_buffer *b,
const char *opname,
struct media_request **p_req)
{
struct media_request *req;
struct vb2_v4l2_buffer *vbuf;
struct vb2_buffer *vb;
int ret;
if (b->type != q->type) {
dprintk(1, "%s: invalid buffer type\n", opname);
return -EINVAL;
@@ -178,7 +363,82 @@ static int vb2_queue_or_prepare_buf(struct vb2_queue *q, struct v4l2_buffer *b,
return -EINVAL;
}
return __verify_planes_array(q->bufs[b->index], b);
vb = q->bufs[b->index];
vbuf = to_vb2_v4l2_buffer(vb);
ret = __verify_planes_array(vb, b);
if (ret)
return ret;
if (!vb->prepared) {
/* Copy relevant information provided by the userspace */
memset(vbuf->planes, 0,
sizeof(vbuf->planes[0]) * vb->num_planes);
ret = vb2_fill_vb2_v4l2_buffer(vb, b);
if (ret)
return ret;
}
if (!(b->flags & V4L2_BUF_FLAG_REQUEST_FD)) {
if (q->uses_requests) {
dprintk(1, "%s: queue uses requests\n", opname);
return -EBUSY;
}
return 0;
} else if (!q->supports_requests) {
dprintk(1, "%s: queue does not support requests\n", opname);
return -EACCES;
} else if (q->uses_qbuf) {
dprintk(1, "%s: queue does not use requests\n", opname);
return -EBUSY;
}
/*
* For proper locking when queueing a request you need to be able
* to lock access to the vb2 queue, so check that there is a lock
* that we can use. In addition p_req must be non-NULL.
*/
if (WARN_ON(!q->lock || !p_req))
return -EINVAL;
/*
* Make sure this op is implemented by the driver. It's easy to forget
* this callback, but it is important when canceling a buffer in a
* queued request.
*/
if (WARN_ON(!q->ops->buf_request_complete))
return -EINVAL;
if (vb->state != VB2_BUF_STATE_DEQUEUED) {
dprintk(1, "%s: buffer is not in dequeued state\n", opname);
return -EINVAL;
}
if (b->request_fd < 0) {
dprintk(1, "%s: request_fd < 0\n", opname);
return -EINVAL;
}
req = media_request_get_by_fd(mdev, b->request_fd);
if (IS_ERR(req)) {
dprintk(1, "%s: invalid request_fd\n", opname);
return PTR_ERR(req);
}
/*
* Early sanity check. This is checked again when the buffer
* is bound to the request in vb2_core_qbuf().
*/
if (req->state != MEDIA_REQUEST_STATE_IDLE &&
req->state != MEDIA_REQUEST_STATE_UPDATING) {
dprintk(1, "%s: request is not idle\n", opname);
media_request_put(req);
return -EBUSY;
}
*p_req = req;
vbuf->request_fd = b->request_fd;
return 0;
}
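The gating at the top of vb2_queue_or_prepare_buf() enforces that a queue commits to one mode: once buffers have been queued with requests it must keep using them (-EBUSY without a request fd), a queue that never declared request support rejects request fds (-EACCES), and a queue already used via plain qbuf cannot switch to requests (-EBUSY). A sketch of that decision table with a hypothetical helper, not a kernel function:

```c
#include <assert.h>
#include <stdbool.h>
#include <errno.h>

/*
 * Hypothetical condensation of the request-fd gating logic in
 * vb2_queue_or_prepare_buf(); the four booleans mirror the
 * V4L2_BUF_FLAG_REQUEST_FD flag and the vb2_queue state bits.
 */
static int check_request_usage(bool has_request_fd, bool supports_requests,
			       bool uses_requests, bool uses_qbuf)
{
	if (!has_request_fd)
		return uses_requests ? -EBUSY : 0;
	if (!supports_requests)
		return -EACCES;
	if (uses_qbuf)
		return -EBUSY;
	return 0;
}
```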
/*
@@ -204,7 +464,7 @@ static void __fill_v4l2_buffer(struct vb2_buffer *vb, void *pb)
b->timecode = vbuf->timecode;
b->sequence = vbuf->sequence;
b->reserved2 = 0;
b->reserved = 0;
b->request_fd = 0;
if (q->is_multiplanar) {
/*
@@ -261,15 +521,15 @@ static void __fill_v4l2_buffer(struct vb2_buffer *vb, void *pb)
case VB2_BUF_STATE_ACTIVE:
b->flags |= V4L2_BUF_FLAG_QUEUED;
break;
case VB2_BUF_STATE_IN_REQUEST:
b->flags |= V4L2_BUF_FLAG_IN_REQUEST;
break;
case VB2_BUF_STATE_ERROR:
b->flags |= V4L2_BUF_FLAG_ERROR;
/* fall through */
case VB2_BUF_STATE_DONE:
b->flags |= V4L2_BUF_FLAG_DONE;
break;
case VB2_BUF_STATE_PREPARED:
b->flags |= V4L2_BUF_FLAG_PREPARED;
break;
case VB2_BUF_STATE_PREPARING:
case VB2_BUF_STATE_DEQUEUED:
case VB2_BUF_STATE_REQUEUEING:
@@ -277,8 +537,17 @@ static void __fill_v4l2_buffer(struct vb2_buffer *vb, void *pb)
break;
}
if ((vb->state == VB2_BUF_STATE_DEQUEUED ||
vb->state == VB2_BUF_STATE_IN_REQUEST) &&
vb->synced && vb->prepared)
b->flags |= V4L2_BUF_FLAG_PREPARED;
if (vb2_buffer_in_use(q, vb))
b->flags |= V4L2_BUF_FLAG_MAPPED;
if (vbuf->request_fd >= 0) {
b->flags |= V4L2_BUF_FLAG_REQUEST_FD;
b->request_fd = vbuf->request_fd;
}
if (!q->is_output &&
b->flags & V4L2_BUF_FLAG_DONE &&
@@ -291,158 +560,28 @@ static void __fill_v4l2_buffer(struct vb2_buffer *vb, void *pb)
* v4l2_buffer by the userspace. It also verifies that struct
* v4l2_buffer has a valid number of planes.
*/
static int __fill_vb2_buffer(struct vb2_buffer *vb,
const void *pb, struct vb2_plane *planes)
static int __fill_vb2_buffer(struct vb2_buffer *vb, struct vb2_plane *planes)
{
struct vb2_queue *q = vb->vb2_queue;
const struct v4l2_buffer *b = pb;
struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
unsigned int plane;
int ret;
ret = __verify_length(vb, b);
if (ret < 0) {
dprintk(1, "plane parameters verification failed: %d\n", ret);
return ret;
}
if (b->field == V4L2_FIELD_ALTERNATE && q->is_output) {
/*
* If the format's field is ALTERNATE, then the buffer's field
* should be either TOP or BOTTOM, not ALTERNATE since that
* makes no sense. The driver has to know whether the
* buffer represents a top or a bottom field in order to
* program any DMA correctly. Using ALTERNATE is wrong, since
* that just says that it is either a top or a bottom field,
* but not which of the two it is.
*/
dprintk(1, "the field is incorrectly set to ALTERNATE for an output buffer\n");
return -EINVAL;
}
vb->timestamp = 0;
vbuf->sequence = 0;
if (!vb->vb2_queue->is_output || !vb->vb2_queue->copy_timestamp)
vb->timestamp = 0;
if (V4L2_TYPE_IS_MULTIPLANAR(b->type)) {
if (b->memory == VB2_MEMORY_USERPTR) {
for (plane = 0; plane < vb->num_planes; ++plane) {
planes[plane].m.userptr =
b->m.planes[plane].m.userptr;
planes[plane].length =
b->m.planes[plane].length;
}
for (plane = 0; plane < vb->num_planes; ++plane) {
if (vb->vb2_queue->memory != VB2_MEMORY_MMAP) {
planes[plane].m = vbuf->planes[plane].m;
planes[plane].length = vbuf->planes[plane].length;
}
if (b->memory == VB2_MEMORY_DMABUF) {
for (plane = 0; plane < vb->num_planes; ++plane) {
planes[plane].m.fd =
b->m.planes[plane].m.fd;
planes[plane].length =
b->m.planes[plane].length;
}
}
/* Fill in driver-provided information for OUTPUT types */
if (V4L2_TYPE_IS_OUTPUT(b->type)) {
/*
* Will have to go up to b->length when API starts
* accepting variable number of planes.
*
* If bytesused == 0 for the output buffer, then fall
* back to the full buffer size. In that case
* userspace clearly never bothered to set it and
* it's a safe assumption that they really meant to
* use the full plane sizes.
*
* Some drivers, e.g. old codec drivers, use bytesused == 0
* as a way to indicate that streaming is finished.
* In that case, the driver should use the
* allow_zero_bytesused flag to keep old userspace
* applications working.
*/
for (plane = 0; plane < vb->num_planes; ++plane) {
struct vb2_plane *pdst = &planes[plane];
struct v4l2_plane *psrc = &b->m.planes[plane];
if (psrc->bytesused == 0)
vb2_warn_zero_bytesused(vb);
if (vb->vb2_queue->allow_zero_bytesused)
pdst->bytesused = psrc->bytesused;
else
pdst->bytesused = psrc->bytesused ?
psrc->bytesused : pdst->length;
pdst->data_offset = psrc->data_offset;
}
}
} else {
/*
* Single-planar buffers do not use planes array,
* so fill in relevant v4l2_buffer struct fields instead.
* In videobuf we use our internal v4l2_planes struct for
* single-planar buffers as well, for simplicity.
*
* If bytesused == 0 for the output buffer, then fall back
* to the full buffer size as that's a sensible default.
*
* Some drivers, e.g. old codec drivers, use bytesused == 0 as
* a way to indicate that streaming is finished. In that case,
* the driver should use the allow_zero_bytesused flag to keep
* old userspace applications working.
*/
if (b->memory == VB2_MEMORY_USERPTR) {
planes[0].m.userptr = b->m.userptr;
planes[0].length = b->length;
}
if (b->memory == VB2_MEMORY_DMABUF) {
planes[0].m.fd = b->m.fd;
planes[0].length = b->length;
}
if (V4L2_TYPE_IS_OUTPUT(b->type)) {
if (b->bytesused == 0)
vb2_warn_zero_bytesused(vb);
if (vb->vb2_queue->allow_zero_bytesused)
planes[0].bytesused = b->bytesused;
else
planes[0].bytesused = b->bytesused ?
b->bytesused : planes[0].length;
} else
planes[0].bytesused = 0;
planes[plane].bytesused = vbuf->planes[plane].bytesused;
planes[plane].data_offset = vbuf->planes[plane].data_offset;
}
/* Zero flags that the vb2 core handles */
vbuf->flags = b->flags & ~V4L2_BUFFER_MASK_FLAGS;
if (!vb->vb2_queue->copy_timestamp || !V4L2_TYPE_IS_OUTPUT(b->type)) {
/*
* Non-COPY timestamps and non-OUTPUT queues will get
* their timestamp and timestamp source flags from the
* queue.
*/
vbuf->flags &= ~V4L2_BUF_FLAG_TSTAMP_SRC_MASK;
}
if (V4L2_TYPE_IS_OUTPUT(b->type)) {
/*
* For output buffers mask out the timecode flag:
* this will be handled later in vb2_qbuf().
* The 'field' is valid metadata for this output buffer
* and so it needs to be copied here.
*/
vbuf->flags &= ~V4L2_BUF_FLAG_TIMECODE;
vbuf->field = b->field;
} else {
/* Zero any output buffer flags as this is a capture buffer */
vbuf->flags &= ~V4L2_BUFFER_OUT_FLAGS;
/* Zero last flag, this is a signal from driver to userspace */
vbuf->flags &= ~V4L2_BUF_FLAG_LAST;
}
return 0;
}
static const struct vb2_buf_ops v4l2_buf_ops = {
.verify_planes_array = __verify_planes_array_core,
.init_buffer = __init_v4l2_vb2_buffer,
.fill_user_buffer = __fill_v4l2_buffer,
.fill_vb2_buffer = __fill_vb2_buffer,
.copy_timestamp = __copy_timestamp,
@@ -483,15 +622,30 @@ int vb2_querybuf(struct vb2_queue *q, struct v4l2_buffer *b)
}
EXPORT_SYMBOL(vb2_querybuf);
static void fill_buf_caps(struct vb2_queue *q, u32 *caps)
{
*caps = 0;
if (q->io_modes & VB2_MMAP)
*caps |= V4L2_BUF_CAP_SUPPORTS_MMAP;
if (q->io_modes & VB2_USERPTR)
*caps |= V4L2_BUF_CAP_SUPPORTS_USERPTR;
if (q->io_modes & VB2_DMABUF)
*caps |= V4L2_BUF_CAP_SUPPORTS_DMABUF;
if (q->supports_requests)
*caps |= V4L2_BUF_CAP_SUPPORTS_REQUESTS;
}
int vb2_reqbufs(struct vb2_queue *q, struct v4l2_requestbuffers *req)
{
int ret = vb2_verify_memory_type(q, req->memory, req->type);
fill_buf_caps(q, &req->capabilities);
return ret ? ret : vb2_core_reqbufs(q, req->memory, &req->count);
}
EXPORT_SYMBOL_GPL(vb2_reqbufs);
int vb2_prepare_buf(struct vb2_queue *q, struct v4l2_buffer *b)
int vb2_prepare_buf(struct vb2_queue *q, struct media_device *mdev,
struct v4l2_buffer *b)
{
int ret;
@@ -500,7 +654,10 @@ int vb2_prepare_buf(struct vb2_queue *q, struct v4l2_buffer *b)
return -EBUSY;
}
ret = vb2_queue_or_prepare_buf(q, b, "prepare_buf");
if (b->flags & V4L2_BUF_FLAG_REQUEST_FD)
return -EINVAL;
ret = vb2_queue_or_prepare_buf(q, mdev, b, "prepare_buf", NULL);
return ret ? ret : vb2_core_prepare_buf(q, b->index, b);
}
@@ -514,6 +671,7 @@ int vb2_create_bufs(struct vb2_queue *q, struct v4l2_create_buffers *create)
int ret = vb2_verify_memory_type(q, create->memory, f->type);
unsigned i;
fill_buf_caps(q, &create->capabilities);
create->index = q->num_buffers;
if (create->count == 0)
return ret != -EBUSY ? ret : 0;
@@ -560,8 +718,10 @@ int vb2_create_bufs(struct vb2_queue *q, struct v4l2_create_buffers *create)
}
EXPORT_SYMBOL_GPL(vb2_create_bufs);
int vb2_qbuf(struct vb2_queue *q, struct v4l2_buffer *b)
int vb2_qbuf(struct vb2_queue *q, struct media_device *mdev,
struct v4l2_buffer *b)
{
struct media_request *req = NULL;
int ret;
if (vb2_fileio_is_active(q)) {
@@ -569,8 +729,13 @@ int vb2_qbuf(struct vb2_queue *q, struct v4l2_buffer *b)
return -EBUSY;
}
ret = vb2_queue_or_prepare_buf(q, b, "qbuf");
return ret ? ret : vb2_core_qbuf(q, b->index, b);
ret = vb2_queue_or_prepare_buf(q, mdev, b, "qbuf", &req);
if (ret)
return ret;
ret = vb2_core_qbuf(q, b->index, b, req);
if (req)
media_request_put(req);
return ret;
}
EXPORT_SYMBOL_GPL(vb2_qbuf);
@@ -714,6 +879,7 @@ int vb2_ioctl_reqbufs(struct file *file, void *priv,
struct video_device *vdev = video_devdata(file);
int res = vb2_verify_memory_type(vdev->queue, p->memory, p->type);
fill_buf_caps(vdev->queue, &p->capabilities);
if (res)
return res;
if (vb2_queue_is_busy(vdev, file))
@@ -735,6 +901,7 @@ int vb2_ioctl_create_bufs(struct file *file, void *priv,
p->format.type);
p->index = vdev->queue->num_buffers;
fill_buf_caps(vdev->queue, &p->capabilities);
/*
* If count == 0, then just check if memory and type are valid.
* Any -EBUSY result from vb2_verify_memory_type can be mapped to 0.
@@ -760,7 +927,7 @@ int vb2_ioctl_prepare_buf(struct file *file, void *priv,
if (vb2_queue_is_busy(vdev, file))
return -EBUSY;
return vb2_prepare_buf(vdev->queue, p);
return vb2_prepare_buf(vdev->queue, vdev->v4l2_dev->mdev, p);
}
EXPORT_SYMBOL_GPL(vb2_ioctl_prepare_buf);
@@ -779,7 +946,7 @@ int vb2_ioctl_qbuf(struct file *file, void *priv, struct v4l2_buffer *p)
if (vb2_queue_is_busy(vdev, file))
return -EBUSY;
return vb2_qbuf(vdev->queue, p);
return vb2_qbuf(vdev->queue, vdev->v4l2_dev->mdev, p);
}
EXPORT_SYMBOL_GPL(vb2_ioctl_qbuf);
@@ -961,6 +1128,57 @@ void vb2_ops_wait_finish(struct vb2_queue *vq)
}
EXPORT_SYMBOL_GPL(vb2_ops_wait_finish);
/*
* Note that this function is called during validation time and
* thus the req_queue_mutex is held to ensure no request objects
* can be added or deleted while validating. So there is no need
* to protect the objects list.
*/
int vb2_request_validate(struct media_request *req)
{
struct media_request_object *obj;
int ret = 0;
if (!vb2_request_buffer_cnt(req))
return -ENOENT;
list_for_each_entry(obj, &req->objects, list) {
if (!obj->ops->prepare)
continue;
ret = obj->ops->prepare(obj);
if (ret)
break;
}
if (ret) {
list_for_each_entry_continue_reverse(obj, &req->objects, list)
if (obj->ops->unprepare)
obj->ops->unprepare(obj);
return ret;
}
return 0;
}
EXPORT_SYMBOL_GPL(vb2_request_validate);
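The unwind pattern in vb2_request_validate() — prepare each object in order and, on failure, unprepare only the ones already prepared, in reverse — is a common kernel idiom (list_for_each_entry_continue_reverse). A sketch of the same control flow with plain arrays standing in for the object list; names and types here are hypothetical:

```c
#include <assert.h>

#define N_OBJS 4

/*
 * Prepare objects 0..N_OBJS-1; a hypothetical failure at index fail_at
 * triggers a reverse rollback of everything prepared so far, mirroring
 * the error path of vb2_request_validate().
 */
static int prepare_all(int fail_at, int prepared[N_OBJS])
{
	int i;

	for (i = 0; i < N_OBJS; i++) {
		if (i == fail_at)
			break;		/* obj->ops->prepare() failed */
		prepared[i] = 1;	/* obj->ops->prepare() succeeded */
	}
	if (i == N_OBJS)
		return 0;
	/* roll back in reverse order, undoing only what was prepared */
	while (i-- > 0)
		prepared[i] = 0;
	return -1;
}
```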
void vb2_request_queue(struct media_request *req)
{
struct media_request_object *obj, *obj_safe;
/*
* Queue all objects. Note that buffer objects are at the end of the
* objects list, after all other object types. Once buffer objects
* are queued, the driver might delete them immediately (if the driver
* processes the buffer at once), so we have to use
* list_for_each_entry_safe() to handle the case where the object we
* queue is deleted.
*/
list_for_each_entry_safe(obj, obj_safe, &req->objects, list)
if (obj->ops->queue)
obj->ops->queue(obj);
}
EXPORT_SYMBOL_GPL(vb2_request_queue);
MODULE_DESCRIPTION("Driver helper framework for Video for Linux 2");
MODULE_AUTHOR("Pawel Osciak <pawel@osciak.com>, Marek Szyprowski");
MODULE_LICENSE("GPL");


@@ -146,8 +146,7 @@ static void _fill_dmx_buffer(struct vb2_buffer *vb, void *pb)
dprintk(3, "[%s]\n", ctx->name);
}
static int _fill_vb2_buffer(struct vb2_buffer *vb,
const void *pb, struct vb2_plane *planes)
static int _fill_vb2_buffer(struct vb2_buffer *vb, struct vb2_plane *planes)
{
struct dvb_vb2_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
@@ -385,7 +384,7 @@ int dvb_vb2_qbuf(struct dvb_vb2_ctx *ctx, struct dmx_buffer *b)
{
int ret;
ret = vb2_core_qbuf(&ctx->vb_q, b->index, b);
ret = vb2_core_qbuf(&ctx->vb_q, b->index, b, NULL);
if (ret) {
dprintk(1, "[%s] index=%d errno=%d\n", ctx->name,
b->index, ret);


@@ -1394,7 +1394,8 @@ static int rtl2832_sdr_probe(struct platform_device *pdev)
case RTL2832_SDR_TUNER_E4000:
v4l2_ctrl_handler_init(&dev->hdl, 9);
if (subdev)
v4l2_ctrl_add_handler(&dev->hdl, subdev->ctrl_handler, NULL);
v4l2_ctrl_add_handler(&dev->hdl, subdev->ctrl_handler,
NULL, true);
break;
case RTL2832_SDR_TUNER_R820T:
case RTL2832_SDR_TUNER_R828D:
@@ -1423,7 +1424,7 @@ static int rtl2832_sdr_probe(struct platform_device *pdev)
v4l2_ctrl_handler_init(&dev->hdl, 2);
if (subdev)
v4l2_ctrl_add_handler(&dev->hdl, subdev->ctrl_handler,
NULL);
NULL, true);
break;
default:
v4l2_ctrl_handler_init(&dev->hdl, 0);


@@ -30,6 +30,7 @@
#include <media/media-device.h>
#include <media/media-devnode.h>
#include <media/media-entity.h>
#include <media/media-request.h>
#ifdef CONFIG_MEDIA_CONTROLLER
@@ -377,10 +378,19 @@ static long media_device_get_topology(struct media_device *mdev, void *arg)
return ret;
}
static long media_device_request_alloc(struct media_device *mdev,
int *alloc_fd)
{
if (!mdev->ops || !mdev->ops->req_validate || !mdev->ops->req_queue)
return -ENOTTY;
return media_request_alloc(mdev, alloc_fd);
}
static long copy_arg_from_user(void *karg, void __user *uarg, unsigned int cmd)
{
/* All media IOCTLs are _IOWR() */
if (copy_from_user(karg, uarg, _IOC_SIZE(cmd)))
if ((_IOC_DIR(cmd) & _IOC_WRITE) &&
copy_from_user(karg, uarg, _IOC_SIZE(cmd)))
return -EFAULT;
return 0;
@@ -388,8 +398,8 @@ static long copy_arg_from_user(void *karg, void __user *uarg, unsigned int cmd)
static long copy_arg_to_user(void __user *uarg, void *karg, unsigned int cmd)
{
/* All media IOCTLs are _IOWR() */
if (copy_to_user(uarg, karg, _IOC_SIZE(cmd)))
if ((_IOC_DIR(cmd) & _IOC_READ) &&
copy_to_user(uarg, karg, _IOC_SIZE(cmd)))
return -EFAULT;
return 0;
@@ -425,6 +435,7 @@ static const struct media_ioctl_info ioctl_info[] = {
MEDIA_IOC(ENUM_LINKS, media_device_enum_links, MEDIA_IOC_FL_GRAPH_MUTEX),
MEDIA_IOC(SETUP_LINK, media_device_setup_link, MEDIA_IOC_FL_GRAPH_MUTEX),
MEDIA_IOC(G_TOPOLOGY, media_device_get_topology, MEDIA_IOC_FL_GRAPH_MUTEX),
MEDIA_IOC(REQUEST_ALLOC, media_device_request_alloc, 0),
};
static long media_device_ioctl(struct file *filp, unsigned int cmd,
@@ -691,9 +702,13 @@ void media_device_init(struct media_device *mdev)
INIT_LIST_HEAD(&mdev->pads);
INIT_LIST_HEAD(&mdev->links);
INIT_LIST_HEAD(&mdev->entity_notify);
mutex_init(&mdev->req_queue_mutex);
mutex_init(&mdev->graph_mutex);
ida_init(&mdev->entity_internal_idx);
atomic_set(&mdev->request_id, 0);
dev_dbg(mdev->dev, "Media device initialized\n");
}
EXPORT_SYMBOL_GPL(media_device_init);
@@ -704,6 +719,7 @@ void media_device_cleanup(struct media_device *mdev)
mdev->entity_internal_idx_max = 0;
media_graph_walk_cleanup(&mdev->pm_count_walk);
mutex_destroy(&mdev->graph_mutex);
mutex_destroy(&mdev->req_queue_mutex);
}
EXPORT_SYMBOL_GPL(media_device_cleanup);


@@ -0,0 +1,501 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Media device request objects
*
* Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
* Copyright (C) 2018 Intel Corporation
* Copyright (C) 2018 Google, Inc.
*
* Author: Hans Verkuil <hans.verkuil@cisco.com>
* Author: Sakari Ailus <sakari.ailus@linux.intel.com>
*/
#include <linux/anon_inodes.h>
#include <linux/file.h>
#include <linux/refcount.h>
#include <media/media-device.h>
#include <media/media-request.h>
static const char * const request_state[] = {
[MEDIA_REQUEST_STATE_IDLE] = "idle",
[MEDIA_REQUEST_STATE_VALIDATING] = "validating",
[MEDIA_REQUEST_STATE_QUEUED] = "queued",
[MEDIA_REQUEST_STATE_COMPLETE] = "complete",
[MEDIA_REQUEST_STATE_CLEANING] = "cleaning",
[MEDIA_REQUEST_STATE_UPDATING] = "updating",
};
static const char *
media_request_state_str(enum media_request_state state)
{
BUILD_BUG_ON(ARRAY_SIZE(request_state) != NR_OF_MEDIA_REQUEST_STATE);
if (WARN_ON(state >= ARRAY_SIZE(request_state)))
return "invalid";
return request_state[state];
}
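media_request_state_str() pairs a designated-initializer name table with a bounds check so an out-of-range state can never index past the array. The same lookup shape, reproduced outside the kernel with the state names copied from the table above (the helper name is made up):

```c
#include <assert.h>
#include <string.h>

/* Bounds-checked enum-to-string lookup, mirroring the shape of
 * media_request_state_str(); state names copied from the driver. */
static const char *const sk_state_names[] = {
	"idle", "validating", "queued", "complete", "cleaning", "updating",
};

static const char *sk_state_str(unsigned int state)
{
	if (state >= sizeof(sk_state_names) / sizeof(sk_state_names[0]))
		return "invalid";	/* like the WARN_ON() fallback */
	return sk_state_names[state];
}
```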
static void media_request_clean(struct media_request *req)
{
struct media_request_object *obj, *obj_safe;
/* Just a sanity check. No other code path is allowed to change this. */
WARN_ON(req->state != MEDIA_REQUEST_STATE_CLEANING);
WARN_ON(req->updating_count);
WARN_ON(req->access_count);
list_for_each_entry_safe(obj, obj_safe, &req->objects, list) {
media_request_object_unbind(obj);
media_request_object_put(obj);
}
req->updating_count = 0;
req->access_count = 0;
WARN_ON(req->num_incomplete_objects);
req->num_incomplete_objects = 0;
wake_up_interruptible_all(&req->poll_wait);
}
static void media_request_release(struct kref *kref)
{
struct media_request *req =
container_of(kref, struct media_request, kref);
struct media_device *mdev = req->mdev;
dev_dbg(mdev->dev, "request: release %s\n", req->debug_str);
/* No other users, no need for a spinlock */
req->state = MEDIA_REQUEST_STATE_CLEANING;
media_request_clean(req);
if (mdev->ops->req_free)
mdev->ops->req_free(req);
else
kfree(req);
}
void media_request_put(struct media_request *req)
{
kref_put(&req->kref, media_request_release);
}
EXPORT_SYMBOL_GPL(media_request_put);
static int media_request_close(struct inode *inode, struct file *filp)
{
struct media_request *req = filp->private_data;
media_request_put(req);
return 0;
}
static __poll_t media_request_poll(struct file *filp,
struct poll_table_struct *wait)
{
struct media_request *req = filp->private_data;
unsigned long flags;
__poll_t ret = 0;
if (!(poll_requested_events(wait) & EPOLLPRI))
return 0;
spin_lock_irqsave(&req->lock, flags);
if (req->state == MEDIA_REQUEST_STATE_COMPLETE) {
ret = EPOLLPRI;
goto unlock;
}
if (req->state != MEDIA_REQUEST_STATE_QUEUED) {
ret = EPOLLERR;
goto unlock;
}
poll_wait(filp, &req->poll_wait, wait);
unlock:
spin_unlock_irqrestore(&req->lock, flags);
return ret;
}
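The poll logic above maps request state to exactly three outcomes: EPOLLPRI when the request has completed, EPOLLERR when it is not even queued (polling an idle request is a usage error), and "keep waiting" otherwise. A state-to-event sketch, with stand-in constants rather than the real EPOLL values:

```c
#include <assert.h>

/* Hypothetical condensation of the decision in media_request_poll();
 * the enum and event values stand in for the kernel constants. */
enum sk_req_state { SK_IDLE, SK_VALIDATING, SK_QUEUED, SK_COMPLETE };

#define SK_EPOLLPRI 0x2
#define SK_EPOLLERR 0x8

static int sk_poll_events(enum sk_req_state state)
{
	if (state == SK_COMPLETE)
		return SK_EPOLLPRI;	/* request done, wake the poller */
	if (state != SK_QUEUED)
		return SK_EPOLLERR;	/* nothing in flight to wait for */
	return 0;			/* caller poll_wait()s and retries */
}
```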
static long media_request_ioctl_queue(struct media_request *req)
{
struct media_device *mdev = req->mdev;
enum media_request_state state;
unsigned long flags;
int ret;
dev_dbg(mdev->dev, "request: queue %s\n", req->debug_str);
/*
* Ensure the request that is validated will be the one that gets queued
* next by serialising the queueing process. This mutex is also used
* to serialize with canceling a vb2 queue and with setting values such
* as controls in a request.
*/
mutex_lock(&mdev->req_queue_mutex);
media_request_get(req);
spin_lock_irqsave(&req->lock, flags);
if (req->state == MEDIA_REQUEST_STATE_IDLE)
req->state = MEDIA_REQUEST_STATE_VALIDATING;
state = req->state;
spin_unlock_irqrestore(&req->lock, flags);
if (state != MEDIA_REQUEST_STATE_VALIDATING) {
dev_dbg(mdev->dev,
"request: unable to queue %s, request in state %s\n",
req->debug_str, media_request_state_str(state));
media_request_put(req);
mutex_unlock(&mdev->req_queue_mutex);
return -EBUSY;
}
ret = mdev->ops->req_validate(req);
/*
* If the req_validate was successful, then we mark the state as QUEUED
* and call req_queue. The reason we set the state first is that this
* allows req_queue to unbind or complete the queued objects in case
* they are immediately 'consumed'. State changes from QUEUED to another
* state can only happen if either the driver changes the state or if
* the user cancels the vb2 queue. The driver can only change the state
* after each object is queued through the req_queue op (and note that
* that op cannot fail), so setting the state to QUEUED up front is
* safe.
*
* The other reason for changing the state is if the vb2 queue is
* canceled, and that uses the req_queue_mutex which is still locked
* while req_queue is called, so that's safe as well.
*/
spin_lock_irqsave(&req->lock, flags);
req->state = ret ? MEDIA_REQUEST_STATE_IDLE
: MEDIA_REQUEST_STATE_QUEUED;
spin_unlock_irqrestore(&req->lock, flags);
if (!ret)
mdev->ops->req_queue(req);
mutex_unlock(&mdev->req_queue_mutex);
if (ret) {
dev_dbg(mdev->dev, "request: can't queue %s (%d)\n",
req->debug_str, ret);
media_request_put(req);
}
return ret;
}
static long media_request_ioctl_reinit(struct media_request *req)
{
struct media_device *mdev = req->mdev;
unsigned long flags;
spin_lock_irqsave(&req->lock, flags);
if (req->state != MEDIA_REQUEST_STATE_IDLE &&
req->state != MEDIA_REQUEST_STATE_COMPLETE) {
dev_dbg(mdev->dev,
"request: %s not in idle or complete state, cannot reinit\n",
req->debug_str);
spin_unlock_irqrestore(&req->lock, flags);
return -EBUSY;
}
if (req->access_count) {
dev_dbg(mdev->dev,
"request: %s is being accessed, cannot reinit\n",
req->debug_str);
spin_unlock_irqrestore(&req->lock, flags);
return -EBUSY;
}
req->state = MEDIA_REQUEST_STATE_CLEANING;
spin_unlock_irqrestore(&req->lock, flags);
media_request_clean(req);
spin_lock_irqsave(&req->lock, flags);
req->state = MEDIA_REQUEST_STATE_IDLE;
spin_unlock_irqrestore(&req->lock, flags);
return 0;
}
static long media_request_ioctl(struct file *filp, unsigned int cmd,
unsigned long arg)
{
struct media_request *req = filp->private_data;
switch (cmd) {
case MEDIA_REQUEST_IOC_QUEUE:
return media_request_ioctl_queue(req);
case MEDIA_REQUEST_IOC_REINIT:
return media_request_ioctl_reinit(req);
default:
return -ENOIOCTLCMD;
}
}
static const struct file_operations request_fops = {
.owner = THIS_MODULE,
.poll = media_request_poll,
.unlocked_ioctl = media_request_ioctl,
.release = media_request_close,
};
struct media_request *
media_request_get_by_fd(struct media_device *mdev, int request_fd)
{
struct file *filp;
struct media_request *req;
if (!mdev || !mdev->ops ||
!mdev->ops->req_validate || !mdev->ops->req_queue)
return ERR_PTR(-EACCES);
filp = fget(request_fd);
if (!filp)
goto err_no_req_fd;
if (filp->f_op != &request_fops)
goto err_fput;
req = filp->private_data;
if (req->mdev != mdev)
goto err_fput;
/*
* Note: as long as someone has an open filehandle of the request,
* the request can never be released. The fget() above ensures that
* even if userspace closes the request filehandle, the release()
* fop won't be called, so the media_request_get() always succeeds
* and there is no race condition where the request was released
* before media_request_get() is called.
*/
media_request_get(req);
fput(filp);
return req;
err_fput:
fput(filp);
err_no_req_fd:
dev_dbg(mdev->dev, "cannot find request_fd %d\n", request_fd);
return ERR_PTR(-EINVAL);
}
EXPORT_SYMBOL_GPL(media_request_get_by_fd);
int media_request_alloc(struct media_device *mdev, int *alloc_fd)
{
struct media_request *req;
struct file *filp;
int fd;
int ret;
/* Either both are NULL or both are non-NULL */
if (WARN_ON(!mdev->ops->req_alloc ^ !mdev->ops->req_free))
return -ENOMEM;
fd = get_unused_fd_flags(O_CLOEXEC);
if (fd < 0)
return fd;
filp = anon_inode_getfile("request", &request_fops, NULL, O_CLOEXEC);
if (IS_ERR(filp)) {
ret = PTR_ERR(filp);
goto err_put_fd;
}
if (mdev->ops->req_alloc)
req = mdev->ops->req_alloc(mdev);
else
req = kzalloc(sizeof(*req), GFP_KERNEL);
if (!req) {
ret = -ENOMEM;
goto err_fput;
}
filp->private_data = req;
req->mdev = mdev;
req->state = MEDIA_REQUEST_STATE_IDLE;
req->num_incomplete_objects = 0;
kref_init(&req->kref);
INIT_LIST_HEAD(&req->objects);
spin_lock_init(&req->lock);
init_waitqueue_head(&req->poll_wait);
req->updating_count = 0;
req->access_count = 0;
*alloc_fd = fd;
snprintf(req->debug_str, sizeof(req->debug_str), "%u:%d",
atomic_inc_return(&mdev->request_id), fd);
dev_dbg(mdev->dev, "request: allocated %s\n", req->debug_str);
fd_install(fd, filp);
return 0;
err_fput:
fput(filp);
err_put_fd:
put_unused_fd(fd);
return ret;
}
static void media_request_object_release(struct kref *kref)
{
struct media_request_object *obj =
container_of(kref, struct media_request_object, kref);
struct media_request *req = obj->req;
if (WARN_ON(req))
media_request_object_unbind(obj);
obj->ops->release(obj);
}
struct media_request_object *
media_request_object_find(struct media_request *req,
const struct media_request_object_ops *ops,
void *priv)
{
struct media_request_object *obj;
struct media_request_object *found = NULL;
unsigned long flags;
if (WARN_ON(!ops || !priv))
return NULL;
spin_lock_irqsave(&req->lock, flags);
list_for_each_entry(obj, &req->objects, list) {
if (obj->ops == ops && obj->priv == priv) {
media_request_object_get(obj);
found = obj;
break;
}
}
spin_unlock_irqrestore(&req->lock, flags);
return found;
}
EXPORT_SYMBOL_GPL(media_request_object_find);
void media_request_object_put(struct media_request_object *obj)
{
kref_put(&obj->kref, media_request_object_release);
}
EXPORT_SYMBOL_GPL(media_request_object_put);
void media_request_object_init(struct media_request_object *obj)
{
obj->ops = NULL;
obj->req = NULL;
obj->priv = NULL;
obj->completed = false;
INIT_LIST_HEAD(&obj->list);
kref_init(&obj->kref);
}
EXPORT_SYMBOL_GPL(media_request_object_init);
int media_request_object_bind(struct media_request *req,
const struct media_request_object_ops *ops,
void *priv, bool is_buffer,
struct media_request_object *obj)
{
unsigned long flags;
int ret = -EBUSY;
if (WARN_ON(!ops->release))
return -EACCES;
spin_lock_irqsave(&req->lock, flags);
if (WARN_ON(req->state != MEDIA_REQUEST_STATE_UPDATING))
goto unlock;
obj->req = req;
obj->ops = ops;
obj->priv = priv;
if (is_buffer)
list_add_tail(&obj->list, &req->objects);
else
list_add(&obj->list, &req->objects);
req->num_incomplete_objects++;
ret = 0;
unlock:
spin_unlock_irqrestore(&req->lock, flags);
return ret;
}
EXPORT_SYMBOL_GPL(media_request_object_bind);
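The is_buffer branch above is what guarantees the ordering that vb2_request_queue() relies on: buffer objects go to the tail (list_add_tail), everything else to the head (list_add), so buffers are always queued after the control objects. A sketch of that ordering with an int array standing in for the object list (1 = buffer, 0 = other); all names here are illustrative:

```c
#include <assert.h>

#define MAX_OBJS 8

struct sk_list {
	int objs[MAX_OBJS];
	int n;
};

/* Mirror of the bind ordering in media_request_object_bind(). */
static void sk_bind(struct sk_list *l, int is_buffer)
{
	if (is_buffer) {
		l->objs[l->n++] = 1;		/* list_add_tail() */
	} else {
		for (int i = l->n; i > 0; i--)	/* list_add(): push front */
			l->objs[i] = l->objs[i - 1];
		l->objs[0] = 0;
		l->n++;
	}
}
```

However buffers and controls are interleaved at bind time, the buffers always end up last.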
void media_request_object_unbind(struct media_request_object *obj)
{
struct media_request *req = obj->req;
unsigned long flags;
bool completed = false;
if (WARN_ON(!req))
return;
spin_lock_irqsave(&req->lock, flags);
list_del(&obj->list);
obj->req = NULL;
if (req->state == MEDIA_REQUEST_STATE_COMPLETE)
goto unlock;
if (WARN_ON(req->state == MEDIA_REQUEST_STATE_VALIDATING))
goto unlock;
if (req->state == MEDIA_REQUEST_STATE_CLEANING) {
if (!obj->completed)
req->num_incomplete_objects--;
goto unlock;
}
if (WARN_ON(!req->num_incomplete_objects))
goto unlock;
req->num_incomplete_objects--;
if (req->state == MEDIA_REQUEST_STATE_QUEUED &&
!req->num_incomplete_objects) {
req->state = MEDIA_REQUEST_STATE_COMPLETE;
completed = true;
wake_up_interruptible_all(&req->poll_wait);
}
unlock:
spin_unlock_irqrestore(&req->lock, flags);
if (obj->ops->unbind)
obj->ops->unbind(obj);
if (completed)
media_request_put(req);
}
EXPORT_SYMBOL_GPL(media_request_object_unbind);
void media_request_object_complete(struct media_request_object *obj)
{
struct media_request *req = obj->req;
unsigned long flags;
bool completed = false;
spin_lock_irqsave(&req->lock, flags);
if (obj->completed)
goto unlock;
obj->completed = true;
if (WARN_ON(!req->num_incomplete_objects) ||
WARN_ON(req->state != MEDIA_REQUEST_STATE_QUEUED))
goto unlock;
if (!--req->num_incomplete_objects) {
req->state = MEDIA_REQUEST_STATE_COMPLETE;
wake_up_interruptible_all(&req->poll_wait);
completed = true;
}
unlock:
spin_unlock_irqrestore(&req->lock, flags);
if (completed)
media_request_put(req);
}
EXPORT_SYMBOL_GPL(media_request_object_complete);
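The accounting in media_request_object_complete() has two properties worth noting: completing an object twice is a no-op, and the request transitions to COMPLETE exactly when the last incomplete object finishes. A sketch of that counting logic with a hypothetical struct (the spinlock and wakeups are elided):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-in for the fields of struct media_request that the
 * completion path touches. */
struct sk_req {
	unsigned int num_incomplete;
	bool complete;
};

static void sk_complete_obj(struct sk_req *req, bool *obj_completed)
{
	if (*obj_completed)
		return;			/* double completion: ignored */
	*obj_completed = true;
	if (--req->num_incomplete == 0)
		req->complete = true;	/* wake pollers, drop queue ref */
}
```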


@@ -4210,7 +4210,7 @@ static int bttv_probe(struct pci_dev *dev, const struct pci_device_id *pci_id)
/* register video4linux + input */
if (!bttv_tvcards[btv->c.type].no_video) {
v4l2_ctrl_add_handler(&btv->radio_ctrl_handler, hdl,
-v4l2_ctrl_radio_filter);
+v4l2_ctrl_radio_filter, false);
if (btv->radio_ctrl_handler.error) {
result = btv->radio_ctrl_handler.error;
goto fail2;


@ -1527,7 +1527,7 @@ int cx23885_417_register(struct cx23885_dev *dev)
dev->cxhdl.priv = dev;
dev->cxhdl.func = cx23885_api_func;
cx2341x_handler_set_50hz(&dev->cxhdl, tsport->height == 576);
-v4l2_ctrl_add_handler(&dev->ctrl_handler, &dev->cxhdl.hdl, NULL);
+v4l2_ctrl_add_handler(&dev->ctrl_handler, &dev->cxhdl.hdl, NULL, false);
/* Allocate and initialize V4L video device */
dev->v4l_device = cx23885_video_dev_alloc(tsport,


@ -1183,7 +1183,7 @@ static int cx8802_blackbird_probe(struct cx8802_driver *drv)
err = cx2341x_handler_init(&dev->cxhdl, 36);
if (err)
goto fail_core;
-v4l2_ctrl_add_handler(&dev->cxhdl.hdl, &core->video_hdl, NULL);
+v4l2_ctrl_add_handler(&dev->cxhdl.hdl, &core->video_hdl, NULL, false);
/* blackbird stuff */
pr_info("cx23416 based mpeg encoder (blackbird reference design)\n");


@ -1378,7 +1378,7 @@ static int cx8800_initdev(struct pci_dev *pci_dev,
if (vc->id == V4L2_CID_CHROMA_AGC)
core->chroma_agc = vc;
}
-v4l2_ctrl_add_handler(&core->video_hdl, &core->audio_hdl, NULL);
+v4l2_ctrl_add_handler(&core->video_hdl, &core->audio_hdl, NULL, false);
/* load and configure helper modules */


@ -265,9 +265,9 @@ static int empress_init(struct saa7134_dev *dev)
"%s empress (%s)", dev->name,
saa7134_boards[dev->board].name);
v4l2_ctrl_handler_init(hdl, 21);
-v4l2_ctrl_add_handler(hdl, &dev->ctrl_handler, empress_ctrl_filter);
+v4l2_ctrl_add_handler(hdl, &dev->ctrl_handler, empress_ctrl_filter, false);
if (dev->empress_sd)
-v4l2_ctrl_add_handler(hdl, dev->empress_sd->ctrl_handler, NULL);
+v4l2_ctrl_add_handler(hdl, dev->empress_sd->ctrl_handler, NULL, true);
if (hdl->error) {
video_device_release(dev->empress_dev);
return hdl->error;


@ -2137,7 +2137,7 @@ int saa7134_video_init1(struct saa7134_dev *dev)
hdl = &dev->radio_ctrl_handler;
v4l2_ctrl_handler_init(hdl, 2);
v4l2_ctrl_add_handler(hdl, &dev->ctrl_handler,
-v4l2_ctrl_radio_filter);
+v4l2_ctrl_radio_filter, false);
if (hdl->error)
return hdl->error;
}


@ -1424,7 +1424,7 @@ static int fimc_link_setup(struct media_entity *entity,
return 0;
return v4l2_ctrl_add_handler(&vc->ctx->ctrls.handler,
-sensor->ctrl_handler, NULL);
+sensor->ctrl_handler, NULL, true);
}
static const struct media_entity_operations fimc_sd_media_ops = {


@ -940,7 +940,7 @@ isp_video_qbuf(struct file *file, void *fh, struct v4l2_buffer *b)
int ret;
mutex_lock(&video->queue_lock);
-ret = vb2_qbuf(&vfh->queue, b);
+ret = vb2_qbuf(&vfh->queue, video->video.v4l2_dev->mdev, b);
mutex_unlock(&video->queue_lock);
return ret;
@ -1028,7 +1028,7 @@ static int isp_video_check_external_subdevs(struct isp_video *video,
ctrls.count = 1;
ctrls.controls = &ctrl;
-ret = v4l2_g_ext_ctrls(pipe->external->ctrl_handler, &ctrls);
+ret = v4l2_g_ext_ctrls(pipe->external->ctrl_handler, NULL, &ctrls);
if (ret < 0) {
dev_warn(isp->dev, "no pixel rate control in subdev %s\n",
pipe->external->name);


@ -475,7 +475,7 @@ static int rvin_parallel_subdevice_attach(struct rvin_dev *vin,
return ret;
ret = v4l2_ctrl_add_handler(&vin->ctrl_handler, subdev->ctrl_handler,
-NULL);
+NULL, true);
if (ret < 0) {
v4l2_ctrl_handler_free(&vin->ctrl_handler);
return ret;


@ -1164,7 +1164,7 @@ static int rcar_drif_notify_complete(struct v4l2_async_notifier *notifier)
}
ret = v4l2_ctrl_add_handler(&sdr->ctrl_hdl,
-sdr->ep.subdev->ctrl_handler, NULL);
+sdr->ep.subdev->ctrl_handler, NULL, true);
if (ret) {
rdrif_err(sdr, "failed: ctrl add hdlr ret %d\n", ret);
goto error;


@ -943,7 +943,7 @@ static int s3c_camif_qbuf(struct file *file, void *priv,
if (vp->owner && vp->owner != priv)
return -EBUSY;
-return vb2_qbuf(&vp->vb_queue, buf);
+return vb2_qbuf(&vp->vb_queue, vp->vdev.v4l2_dev->mdev, buf);
}
static int s3c_camif_dqbuf(struct file *file, void *priv,
@ -981,7 +981,7 @@ static int s3c_camif_prepare_buf(struct file *file, void *priv,
struct v4l2_buffer *b)
{
struct camif_vp *vp = video_drvdata(file);
-return vb2_prepare_buf(&vp->vb_queue, b);
+return vb2_prepare_buf(&vp->vb_queue, vp->vdev.v4l2_dev->mdev, b);
}
static int s3c_camif_g_selection(struct file *file, void *priv,


@ -632,9 +632,9 @@ static int vidioc_qbuf(struct file *file, void *priv, struct v4l2_buffer *buf)
return -EIO;
}
if (buf->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE)
-return vb2_qbuf(&ctx->vq_src, buf);
+return vb2_qbuf(&ctx->vq_src, NULL, buf);
else if (buf->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE)
-return vb2_qbuf(&ctx->vq_dst, buf);
+return vb2_qbuf(&ctx->vq_dst, NULL, buf);
return -EINVAL;
}


@ -1621,9 +1621,9 @@ static int vidioc_qbuf(struct file *file, void *priv, struct v4l2_buffer *buf)
mfc_err("Call on QBUF after EOS command\n");
return -EIO;
}
-return vb2_qbuf(&ctx->vq_src, buf);
+return vb2_qbuf(&ctx->vq_src, NULL, buf);
} else if (buf->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE) {
-return vb2_qbuf(&ctx->vq_dst, buf);
+return vb2_qbuf(&ctx->vq_dst, NULL, buf);
}
return -EINVAL;
}


@ -394,7 +394,7 @@ static int soc_camera_qbuf(struct file *file, void *priv,
if (icd->streamer != file)
return -EBUSY;
-return vb2_qbuf(&icd->vb2_vidq, p);
+return vb2_qbuf(&icd->vb2_vidq, NULL, p);
}
static int soc_camera_dqbuf(struct file *file, void *priv,
@ -430,7 +430,7 @@ static int soc_camera_prepare_buf(struct file *file, void *priv,
{
struct soc_camera_device *icd = file->private_data;
-return vb2_prepare_buf(&icd->vb2_vidq, b);
+return vb2_prepare_buf(&icd->vb2_vidq, NULL, b);
}
static int soc_camera_expbuf(struct file *file, void *priv,
@ -1181,7 +1181,8 @@ static int soc_camera_probe_finish(struct soc_camera_device *icd)
v4l2_subdev_call(sd, video, g_tvnorms, &icd->vdev->tvnorms);
-ret = v4l2_ctrl_add_handler(&icd->ctrl_handler, sd->ctrl_handler, NULL);
+ret = v4l2_ctrl_add_handler(&icd->ctrl_handler, sd->ctrl_handler,
+NULL, true);
if (ret < 0)
return ret;


@ -3,7 +3,8 @@
*
* This is a virtual device driver for testing mem-to-mem videobuf framework.
* It simulates a device that uses memory buffers for both source and
-* destination, processes the data and issues an "irq" (simulated by a timer).
+* destination, processes the data and issues an "irq" (simulated by a delayed
+* workqueue).
* The device is capable of multi-instance, multi-buffer-per-transaction
* operation (via the mem2mem framework).
*
@ -19,7 +20,6 @@
#include <linux/module.h>
#include <linux/delay.h>
#include <linux/fs.h>
-#include <linux/timer.h>
#include <linux/sched.h>
#include <linux/slab.h>
@ -148,7 +148,7 @@ struct vim2m_dev {
struct mutex dev_mutex;
spinlock_t irqlock;
-struct timer_list timer;
+struct delayed_work work_run;
struct v4l2_m2m_dev *m2m_dev;
};
@ -336,12 +336,6 @@ static int device_process(struct vim2m_ctx *ctx,
return 0;
}
-static void schedule_irq(struct vim2m_dev *dev, int msec_timeout)
-{
-dprintk(dev, "Scheduling a simulated irq\n");
-mod_timer(&dev->timer, jiffies + msecs_to_jiffies(msec_timeout));
-}
/*
* mem2mem callbacks
*/
@ -385,15 +379,24 @@ static void device_run(void *priv)
src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
/* Apply request controls if any */
v4l2_ctrl_request_setup(src_buf->vb2_buf.req_obj.req,
&ctx->hdl);
device_process(ctx, src_buf, dst_buf);
-/* Run a timer, which simulates a hardware irq */
-schedule_irq(dev, ctx->transtime);
+/* Complete request controls if any */
+v4l2_ctrl_request_complete(src_buf->vb2_buf.req_obj.req,
+&ctx->hdl);
+/* Run delayed work, which simulates a hardware irq */
+schedule_delayed_work(&dev->work_run, msecs_to_jiffies(ctx->transtime));
}
-static void device_isr(struct timer_list *t)
+static void device_work(struct work_struct *w)
{
-struct vim2m_dev *vim2m_dev = from_timer(vim2m_dev, t, timer);
+struct vim2m_dev *vim2m_dev =
+container_of(w, struct vim2m_dev, work_run.work);
struct vim2m_ctx *curr_ctx;
struct vb2_v4l2_buffer *src_vb, *dst_vb;
unsigned long flags;
@ -805,6 +808,7 @@ static void vim2m_stop_streaming(struct vb2_queue *q)
struct vb2_v4l2_buffer *vbuf;
unsigned long flags;
flush_scheduled_work();
for (;;) {
if (V4L2_TYPE_IS_OUTPUT(q->type))
vbuf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
@ -812,12 +816,21 @@ static void vim2m_stop_streaming(struct vb2_queue *q)
vbuf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
if (vbuf == NULL)
return;
v4l2_ctrl_request_complete(vbuf->vb2_buf.req_obj.req,
&ctx->hdl);
spin_lock_irqsave(&ctx->dev->irqlock, flags);
v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_ERROR);
spin_unlock_irqrestore(&ctx->dev->irqlock, flags);
}
}
static void vim2m_buf_request_complete(struct vb2_buffer *vb)
{
struct vim2m_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
v4l2_ctrl_request_complete(vb->req_obj.req, &ctx->hdl);
}
static const struct vb2_ops vim2m_qops = {
.queue_setup = vim2m_queue_setup,
.buf_prepare = vim2m_buf_prepare,
@ -826,6 +839,7 @@ static const struct vb2_ops vim2m_qops = {
.stop_streaming = vim2m_stop_streaming,
.wait_prepare = vb2_ops_wait_prepare,
.wait_finish = vb2_ops_wait_finish,
.buf_request_complete = vim2m_buf_request_complete,
};
static int queue_init(void *priv, struct vb2_queue *src_vq, struct vb2_queue *dst_vq)
@ -841,6 +855,7 @@ static int queue_init(void *priv, struct vb2_queue *src_vq, struct vb2_queue *ds
src_vq->mem_ops = &vb2_vmalloc_memops;
src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
src_vq->lock = &ctx->dev->dev_mutex;
src_vq->supports_requests = true;
ret = vb2_queue_init(src_vq);
if (ret)
@ -992,6 +1007,11 @@ static const struct v4l2_m2m_ops m2m_ops = {
.job_abort = job_abort,
};
static const struct media_device_ops m2m_media_ops = {
.req_validate = vb2_request_validate,
.req_queue = vb2_m2m_request_queue,
};
static int vim2m_probe(struct platform_device *pdev)
{
struct vim2m_dev *dev;
@ -1015,6 +1035,7 @@ static int vim2m_probe(struct platform_device *pdev)
vfd = &dev->vfd;
vfd->lock = &dev->dev_mutex;
vfd->v4l2_dev = &dev->v4l2_dev;
INIT_DELAYED_WORK(&dev->work_run, device_work);
ret = video_register_device(vfd, VFL_TYPE_GRABBER, 0);
if (ret) {
@ -1026,7 +1047,6 @@ static int vim2m_probe(struct platform_device *pdev)
v4l2_info(&dev->v4l2_dev,
"Device registered as /dev/video%d\n", vfd->num);
-timer_setup(&dev->timer, device_isr, 0);
platform_set_drvdata(pdev, dev);
dev->m2m_dev = v4l2_m2m_init(&m2m_ops);
@ -1040,6 +1060,7 @@ static int vim2m_probe(struct platform_device *pdev)
dev->mdev.dev = &pdev->dev;
strscpy(dev->mdev.model, "vim2m", sizeof(dev->mdev.model));
media_device_init(&dev->mdev);
dev->mdev.ops = &m2m_media_ops;
dev->v4l2_dev.mdev = &dev->mdev;
ret = v4l2_m2m_register_media_controller(dev->m2m_dev,
@ -1083,7 +1104,6 @@ static int vim2m_remove(struct platform_device *pdev)
media_device_cleanup(&dev->mdev);
#endif
v4l2_m2m_release(dev->m2m_dev);
-del_timer_sync(&dev->timer);
video_unregister_device(&dev->vfd);
v4l2_device_unregister(&dev->v4l2_dev);
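The vim2m changes above swap the timer-based fake interrupt for a delayed work item, since the completion path now calls `v4l2_ctrl_request_complete()`, which may need a sleepable context; `vim2m_stop_streaming()` then relies on `flush_scheduled_work()` to guarantee any pending "irq" has run before buffers are torn down. A tiny single-slot userspace model of that schedule/flush ordering contract (hypothetical `model_*` names, no real workqueue or timing):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hypothetical one-slot model of schedule_delayed_work()/flush_scheduled_work():
 * scheduling only records the callback; it runs when the queue is flushed.
 */
struct model_work {
	void (*fn)(void *arg);
	void *arg;
};

static struct model_work pending;	/* the single "workqueue" slot */

static void model_schedule_work(void (*fn)(void *), void *arg)
{
	pending.fn = fn;
	pending.arg = arg;
}

/* Runs any pending work exactly once, like flushing before teardown. */
static void model_flush_work(void)
{
	if (pending.fn) {
		void (*fn)(void *) = pending.fn;

		pending.fn = NULL;
		fn(pending.arg);
	}
}

/* Stands in for device_work() finishing the current job. */
static void fake_irq(void *arg)
{
	(*(int *)arg)++;
}
```

After a flush, the callback is guaranteed to have fired exactly once and a second flush is a no-op, which is the property stop_streaming depends on.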


@ -627,6 +627,13 @@ static void vivid_dev_release(struct v4l2_device *v4l2_dev)
kfree(dev);
}
#ifdef CONFIG_MEDIA_CONTROLLER
static const struct media_device_ops vivid_media_ops = {
.req_validate = vb2_request_validate,
.req_queue = vb2_request_queue,
};
#endif
static int vivid_create_instance(struct platform_device *pdev, int inst)
{
static const struct v4l2_dv_timings def_dv_timings =
@ -657,6 +664,16 @@ static int vivid_create_instance(struct platform_device *pdev, int inst)
dev->inst = inst;
#ifdef CONFIG_MEDIA_CONTROLLER
dev->v4l2_dev.mdev = &dev->mdev;
/* Initialize media device */
strlcpy(dev->mdev.model, VIVID_MODULE_NAME, sizeof(dev->mdev.model));
dev->mdev.dev = &pdev->dev;
media_device_init(&dev->mdev);
dev->mdev.ops = &vivid_media_ops;
#endif
/* register v4l2_device */
snprintf(dev->v4l2_dev.name, sizeof(dev->v4l2_dev.name),
"%s-%03d", VIVID_MODULE_NAME, inst);
@ -1060,6 +1077,7 @@ static int vivid_create_instance(struct platform_device *pdev, int inst)
q->min_buffers_needed = 2;
q->lock = &dev->mutex;
q->dev = dev->v4l2_dev.dev;
q->supports_requests = true;
ret = vb2_queue_init(q);
if (ret)
@ -1080,6 +1098,7 @@ static int vivid_create_instance(struct platform_device *pdev, int inst)
q->min_buffers_needed = 2;
q->lock = &dev->mutex;
q->dev = dev->v4l2_dev.dev;
q->supports_requests = true;
ret = vb2_queue_init(q);
if (ret)
@ -1100,6 +1119,7 @@ static int vivid_create_instance(struct platform_device *pdev, int inst)
q->min_buffers_needed = 2;
q->lock = &dev->mutex;
q->dev = dev->v4l2_dev.dev;
q->supports_requests = true;
ret = vb2_queue_init(q);
if (ret)
@ -1120,6 +1140,7 @@ static int vivid_create_instance(struct platform_device *pdev, int inst)
q->min_buffers_needed = 2;
q->lock = &dev->mutex;
q->dev = dev->v4l2_dev.dev;
q->supports_requests = true;
ret = vb2_queue_init(q);
if (ret)
@ -1139,6 +1160,7 @@ static int vivid_create_instance(struct platform_device *pdev, int inst)
q->min_buffers_needed = 8;
q->lock = &dev->mutex;
q->dev = dev->v4l2_dev.dev;
q->supports_requests = true;
ret = vb2_queue_init(q);
if (ret)
@ -1174,6 +1196,13 @@ static int vivid_create_instance(struct platform_device *pdev, int inst)
vfd->lock = &dev->mutex;
video_set_drvdata(vfd, dev);
#ifdef CONFIG_MEDIA_CONTROLLER
dev->vid_cap_pad.flags = MEDIA_PAD_FL_SINK;
ret = media_entity_pads_init(&vfd->entity, 1, &dev->vid_cap_pad);
if (ret)
goto unreg_dev;
#endif
#ifdef CONFIG_VIDEO_VIVID_CEC
if (in_type_counter[HDMI]) {
struct cec_adapter *adap;
@ -1226,6 +1255,13 @@ static int vivid_create_instance(struct platform_device *pdev, int inst)
vfd->lock = &dev->mutex;
video_set_drvdata(vfd, dev);
#ifdef CONFIG_MEDIA_CONTROLLER
dev->vid_out_pad.flags = MEDIA_PAD_FL_SOURCE;
ret = media_entity_pads_init(&vfd->entity, 1, &dev->vid_out_pad);
if (ret)
goto unreg_dev;
#endif
#ifdef CONFIG_VIDEO_VIVID_CEC
for (i = 0; i < dev->num_outputs; i++) {
struct cec_adapter *adap;
@ -1275,6 +1311,13 @@ static int vivid_create_instance(struct platform_device *pdev, int inst)
vfd->tvnorms = tvnorms_cap;
video_set_drvdata(vfd, dev);
#ifdef CONFIG_MEDIA_CONTROLLER
dev->vbi_cap_pad.flags = MEDIA_PAD_FL_SINK;
ret = media_entity_pads_init(&vfd->entity, 1, &dev->vbi_cap_pad);
if (ret)
goto unreg_dev;
#endif
ret = video_register_device(vfd, VFL_TYPE_VBI, vbi_cap_nr[inst]);
if (ret < 0)
goto unreg_dev;
@ -1300,6 +1343,13 @@ static int vivid_create_instance(struct platform_device *pdev, int inst)
vfd->tvnorms = tvnorms_out;
video_set_drvdata(vfd, dev);
#ifdef CONFIG_MEDIA_CONTROLLER
dev->vbi_out_pad.flags = MEDIA_PAD_FL_SOURCE;
ret = media_entity_pads_init(&vfd->entity, 1, &dev->vbi_out_pad);
if (ret)
goto unreg_dev;
#endif
ret = video_register_device(vfd, VFL_TYPE_VBI, vbi_out_nr[inst]);
if (ret < 0)
goto unreg_dev;
@ -1323,6 +1373,13 @@ static int vivid_create_instance(struct platform_device *pdev, int inst)
vfd->lock = &dev->mutex;
video_set_drvdata(vfd, dev);
#ifdef CONFIG_MEDIA_CONTROLLER
dev->sdr_cap_pad.flags = MEDIA_PAD_FL_SINK;
ret = media_entity_pads_init(&vfd->entity, 1, &dev->sdr_cap_pad);
if (ret)
goto unreg_dev;
#endif
ret = video_register_device(vfd, VFL_TYPE_SDR, sdr_cap_nr[inst]);
if (ret < 0)
goto unreg_dev;
@ -1369,12 +1426,25 @@ static int vivid_create_instance(struct platform_device *pdev, int inst)
video_device_node_name(vfd));
}
#ifdef CONFIG_MEDIA_CONTROLLER
/* Register the media device */
ret = media_device_register(&dev->mdev);
if (ret) {
dev_err(dev->mdev.dev,
"media device register failed (err=%d)\n", ret);
goto unreg_dev;
}
#endif
/* Now that everything is fine, let's add it to device list */
vivid_devs[inst] = dev;
return 0;
unreg_dev:
#ifdef CONFIG_MEDIA_CONTROLLER
media_device_unregister(&dev->mdev);
#endif
video_unregister_device(&dev->radio_tx_dev);
video_unregister_device(&dev->radio_rx_dev);
video_unregister_device(&dev->sdr_cap_dev);
@ -1445,6 +1515,10 @@ static int vivid_remove(struct platform_device *pdev)
if (!dev)
continue;
#ifdef CONFIG_MEDIA_CONTROLLER
media_device_unregister(&dev->mdev);
#endif
if (dev->has_vid_cap) {
v4l2_info(&dev->v4l2_dev, "unregistering %s\n",
video_device_node_name(&dev->vid_cap_dev));


@ -136,6 +136,14 @@ struct vivid_cec_work {
struct vivid_dev {
unsigned inst;
struct v4l2_device v4l2_dev;
#ifdef CONFIG_MEDIA_CONTROLLER
struct media_device mdev;
struct media_pad vid_cap_pad;
struct media_pad vid_out_pad;
struct media_pad vbi_cap_pad;
struct media_pad vbi_out_pad;
struct media_pad sdr_cap_pad;
#endif
struct v4l2_ctrl_handler ctrl_hdl_user_gen;
struct v4l2_ctrl_handler ctrl_hdl_user_vid;
struct v4l2_ctrl_handler ctrl_hdl_user_aud;


@ -1662,59 +1662,59 @@ int vivid_create_controls(struct vivid_dev *dev, bool show_ccs_cap,
v4l2_ctrl_auto_cluster(2, &dev->autogain, 0, true);
if (dev->has_vid_cap) {
-v4l2_ctrl_add_handler(hdl_vid_cap, hdl_user_gen, NULL);
-v4l2_ctrl_add_handler(hdl_vid_cap, hdl_user_vid, NULL);
-v4l2_ctrl_add_handler(hdl_vid_cap, hdl_user_aud, NULL);
-v4l2_ctrl_add_handler(hdl_vid_cap, hdl_streaming, NULL);
-v4l2_ctrl_add_handler(hdl_vid_cap, hdl_sdtv_cap, NULL);
-v4l2_ctrl_add_handler(hdl_vid_cap, hdl_loop_cap, NULL);
-v4l2_ctrl_add_handler(hdl_vid_cap, hdl_fb, NULL);
+v4l2_ctrl_add_handler(hdl_vid_cap, hdl_user_gen, NULL, false);
+v4l2_ctrl_add_handler(hdl_vid_cap, hdl_user_vid, NULL, false);
+v4l2_ctrl_add_handler(hdl_vid_cap, hdl_user_aud, NULL, false);
+v4l2_ctrl_add_handler(hdl_vid_cap, hdl_streaming, NULL, false);
+v4l2_ctrl_add_handler(hdl_vid_cap, hdl_sdtv_cap, NULL, false);
+v4l2_ctrl_add_handler(hdl_vid_cap, hdl_loop_cap, NULL, false);
+v4l2_ctrl_add_handler(hdl_vid_cap, hdl_fb, NULL, false);
if (hdl_vid_cap->error)
return hdl_vid_cap->error;
dev->vid_cap_dev.ctrl_handler = hdl_vid_cap;
}
if (dev->has_vid_out) {
-v4l2_ctrl_add_handler(hdl_vid_out, hdl_user_gen, NULL);
-v4l2_ctrl_add_handler(hdl_vid_out, hdl_user_aud, NULL);
-v4l2_ctrl_add_handler(hdl_vid_out, hdl_streaming, NULL);
-v4l2_ctrl_add_handler(hdl_vid_out, hdl_fb, NULL);
+v4l2_ctrl_add_handler(hdl_vid_out, hdl_user_gen, NULL, false);
+v4l2_ctrl_add_handler(hdl_vid_out, hdl_user_aud, NULL, false);
+v4l2_ctrl_add_handler(hdl_vid_out, hdl_streaming, NULL, false);
+v4l2_ctrl_add_handler(hdl_vid_out, hdl_fb, NULL, false);
if (hdl_vid_out->error)
return hdl_vid_out->error;
dev->vid_out_dev.ctrl_handler = hdl_vid_out;
}
if (dev->has_vbi_cap) {
-v4l2_ctrl_add_handler(hdl_vbi_cap, hdl_user_gen, NULL);
-v4l2_ctrl_add_handler(hdl_vbi_cap, hdl_streaming, NULL);
-v4l2_ctrl_add_handler(hdl_vbi_cap, hdl_sdtv_cap, NULL);
-v4l2_ctrl_add_handler(hdl_vbi_cap, hdl_loop_cap, NULL);
+v4l2_ctrl_add_handler(hdl_vbi_cap, hdl_user_gen, NULL, false);
+v4l2_ctrl_add_handler(hdl_vbi_cap, hdl_streaming, NULL, false);
+v4l2_ctrl_add_handler(hdl_vbi_cap, hdl_sdtv_cap, NULL, false);
+v4l2_ctrl_add_handler(hdl_vbi_cap, hdl_loop_cap, NULL, false);
if (hdl_vbi_cap->error)
return hdl_vbi_cap->error;
dev->vbi_cap_dev.ctrl_handler = hdl_vbi_cap;
}
if (dev->has_vbi_out) {
-v4l2_ctrl_add_handler(hdl_vbi_out, hdl_user_gen, NULL);
-v4l2_ctrl_add_handler(hdl_vbi_out, hdl_streaming, NULL);
+v4l2_ctrl_add_handler(hdl_vbi_out, hdl_user_gen, NULL, false);
+v4l2_ctrl_add_handler(hdl_vbi_out, hdl_streaming, NULL, false);
if (hdl_vbi_out->error)
return hdl_vbi_out->error;
dev->vbi_out_dev.ctrl_handler = hdl_vbi_out;
}
if (dev->has_radio_rx) {
-v4l2_ctrl_add_handler(hdl_radio_rx, hdl_user_gen, NULL);
-v4l2_ctrl_add_handler(hdl_radio_rx, hdl_user_aud, NULL);
+v4l2_ctrl_add_handler(hdl_radio_rx, hdl_user_gen, NULL, false);
+v4l2_ctrl_add_handler(hdl_radio_rx, hdl_user_aud, NULL, false);
if (hdl_radio_rx->error)
return hdl_radio_rx->error;
dev->radio_rx_dev.ctrl_handler = hdl_radio_rx;
}
if (dev->has_radio_tx) {
-v4l2_ctrl_add_handler(hdl_radio_tx, hdl_user_gen, NULL);
-v4l2_ctrl_add_handler(hdl_radio_tx, hdl_user_aud, NULL);
+v4l2_ctrl_add_handler(hdl_radio_tx, hdl_user_gen, NULL, false);
+v4l2_ctrl_add_handler(hdl_radio_tx, hdl_user_aud, NULL, false);
if (hdl_radio_tx->error)
return hdl_radio_tx->error;
dev->radio_tx_dev.ctrl_handler = hdl_radio_tx;
}
if (dev->has_sdr_cap) {
-v4l2_ctrl_add_handler(hdl_sdr_cap, hdl_user_gen, NULL);
-v4l2_ctrl_add_handler(hdl_sdr_cap, hdl_streaming, NULL);
+v4l2_ctrl_add_handler(hdl_sdr_cap, hdl_user_gen, NULL, false);
+v4l2_ctrl_add_handler(hdl_sdr_cap, hdl_streaming, NULL, false);
if (hdl_sdr_cap->error)
return hdl_sdr_cap->error;
dev->sdr_cap_dev.ctrl_handler = hdl_sdr_cap;


@ -703,6 +703,8 @@ static void vivid_thread_vid_cap_tick(struct vivid_dev *dev, int dropped_bufs)
goto update_mv;
if (vid_cap_buf) {
v4l2_ctrl_request_setup(vid_cap_buf->vb.vb2_buf.req_obj.req,
&dev->ctrl_hdl_vid_cap);
/* Fill buffer */
vivid_fillbuff(dev, vid_cap_buf);
dprintk(dev, 1, "filled buffer %d\n",
@ -713,6 +715,8 @@ static void vivid_thread_vid_cap_tick(struct vivid_dev *dev, int dropped_bufs)
dev->fb_cap.fmt.pixelformat == dev->fmt_cap->fourcc)
vivid_overlay(dev, vid_cap_buf);
v4l2_ctrl_request_complete(vid_cap_buf->vb.vb2_buf.req_obj.req,
&dev->ctrl_hdl_vid_cap);
vb2_buffer_done(&vid_cap_buf->vb.vb2_buf, dev->dqbuf_error ?
VB2_BUF_STATE_ERROR : VB2_BUF_STATE_DONE);
dprintk(dev, 2, "vid_cap buffer %d done\n",
@ -720,10 +724,14 @@ static void vivid_thread_vid_cap_tick(struct vivid_dev *dev, int dropped_bufs)
}
if (vbi_cap_buf) {
v4l2_ctrl_request_setup(vbi_cap_buf->vb.vb2_buf.req_obj.req,
&dev->ctrl_hdl_vbi_cap);
if (dev->stream_sliced_vbi_cap)
vivid_sliced_vbi_cap_process(dev, vbi_cap_buf);
else
vivid_raw_vbi_cap_process(dev, vbi_cap_buf);
v4l2_ctrl_request_complete(vbi_cap_buf->vb.vb2_buf.req_obj.req,
&dev->ctrl_hdl_vbi_cap);
vb2_buffer_done(&vbi_cap_buf->vb.vb2_buf, dev->dqbuf_error ?
VB2_BUF_STATE_ERROR : VB2_BUF_STATE_DONE);
dprintk(dev, 2, "vbi_cap %d done\n",
@ -891,6 +899,8 @@ void vivid_stop_generating_vid_cap(struct vivid_dev *dev, bool *pstreaming)
buf = list_entry(dev->vid_cap_active.next,
struct vivid_buffer, list);
list_del(&buf->list);
v4l2_ctrl_request_complete(buf->vb.vb2_buf.req_obj.req,
&dev->ctrl_hdl_vid_cap);
vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
dprintk(dev, 2, "vid_cap buffer %d done\n",
buf->vb.vb2_buf.index);
@ -904,6 +914,8 @@ void vivid_stop_generating_vid_cap(struct vivid_dev *dev, bool *pstreaming)
buf = list_entry(dev->vbi_cap_active.next,
struct vivid_buffer, list);
list_del(&buf->list);
v4l2_ctrl_request_complete(buf->vb.vb2_buf.req_obj.req,
&dev->ctrl_hdl_vbi_cap);
vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
dprintk(dev, 2, "vbi_cap buffer %d done\n",
buf->vb.vb2_buf.index);


@ -75,6 +75,10 @@ static void vivid_thread_vid_out_tick(struct vivid_dev *dev)
return;
if (vid_out_buf) {
v4l2_ctrl_request_setup(vid_out_buf->vb.vb2_buf.req_obj.req,
&dev->ctrl_hdl_vid_out);
v4l2_ctrl_request_complete(vid_out_buf->vb.vb2_buf.req_obj.req,
&dev->ctrl_hdl_vid_out);
vid_out_buf->vb.sequence = dev->vid_out_seq_count;
if (dev->field_out == V4L2_FIELD_ALTERNATE) {
/*
@ -92,6 +96,10 @@ static void vivid_thread_vid_out_tick(struct vivid_dev *dev)
}
if (vbi_out_buf) {
v4l2_ctrl_request_setup(vbi_out_buf->vb.vb2_buf.req_obj.req,
&dev->ctrl_hdl_vbi_out);
v4l2_ctrl_request_complete(vbi_out_buf->vb.vb2_buf.req_obj.req,
&dev->ctrl_hdl_vbi_out);
if (dev->stream_sliced_vbi_out)
vivid_sliced_vbi_out_process(dev, vbi_out_buf);
@ -262,6 +270,8 @@ void vivid_stop_generating_vid_out(struct vivid_dev *dev, bool *pstreaming)
buf = list_entry(dev->vid_out_active.next,
struct vivid_buffer, list);
list_del(&buf->list);
v4l2_ctrl_request_complete(buf->vb.vb2_buf.req_obj.req,
&dev->ctrl_hdl_vid_out);
vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
dprintk(dev, 2, "vid_out buffer %d done\n",
buf->vb.vb2_buf.index);
@ -275,6 +285,8 @@ void vivid_stop_generating_vid_out(struct vivid_dev *dev, bool *pstreaming)
buf = list_entry(dev->vbi_out_active.next,
struct vivid_buffer, list);
list_del(&buf->list);
v4l2_ctrl_request_complete(buf->vb.vb2_buf.req_obj.req,
&dev->ctrl_hdl_vbi_out);
vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
dprintk(dev, 2, "vbi_out buffer %d done\n",
buf->vb.vb2_buf.index);


@ -102,6 +102,10 @@ static void vivid_thread_sdr_cap_tick(struct vivid_dev *dev)
if (sdr_cap_buf) {
sdr_cap_buf->vb.sequence = dev->sdr_cap_seq_count;
v4l2_ctrl_request_setup(sdr_cap_buf->vb.vb2_buf.req_obj.req,
&dev->ctrl_hdl_sdr_cap);
v4l2_ctrl_request_complete(sdr_cap_buf->vb.vb2_buf.req_obj.req,
&dev->ctrl_hdl_sdr_cap);
vivid_sdr_cap_process(dev, sdr_cap_buf);
sdr_cap_buf->vb.vb2_buf.timestamp =
ktime_get_ns() + dev->time_wrap_offset;
@ -272,6 +276,8 @@ static int sdr_cap_start_streaming(struct vb2_queue *vq, unsigned count)
list_for_each_entry_safe(buf, tmp, &dev->sdr_cap_active, list) {
list_del(&buf->list);
v4l2_ctrl_request_complete(buf->vb.vb2_buf.req_obj.req,
&dev->ctrl_hdl_sdr_cap);
vb2_buffer_done(&buf->vb.vb2_buf,
VB2_BUF_STATE_QUEUED);
}
@ -293,6 +299,8 @@ static void sdr_cap_stop_streaming(struct vb2_queue *vq)
buf = list_entry(dev->sdr_cap_active.next,
struct vivid_buffer, list);
list_del(&buf->list);
v4l2_ctrl_request_complete(buf->vb.vb2_buf.req_obj.req,
&dev->ctrl_hdl_sdr_cap);
vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
}
@ -303,12 +311,20 @@ static void sdr_cap_stop_streaming(struct vb2_queue *vq)
mutex_lock(&dev->mutex);
}
static void sdr_cap_buf_request_complete(struct vb2_buffer *vb)
{
struct vivid_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
v4l2_ctrl_request_complete(vb->req_obj.req, &dev->ctrl_hdl_sdr_cap);
}
const struct vb2_ops vivid_sdr_cap_qops = {
.queue_setup = sdr_cap_queue_setup,
.buf_prepare = sdr_cap_buf_prepare,
.buf_queue = sdr_cap_buf_queue,
.start_streaming = sdr_cap_start_streaming,
.stop_streaming = sdr_cap_stop_streaming,
.buf_request_complete = sdr_cap_buf_request_complete,
.wait_prepare = vb2_ops_wait_prepare,
.wait_finish = vb2_ops_wait_finish,
};


@ -204,6 +204,8 @@ static int vbi_cap_start_streaming(struct vb2_queue *vq, unsigned count)
list_for_each_entry_safe(buf, tmp, &dev->vbi_cap_active, list) {
list_del(&buf->list);
v4l2_ctrl_request_complete(buf->vb.vb2_buf.req_obj.req,
&dev->ctrl_hdl_vbi_cap);
vb2_buffer_done(&buf->vb.vb2_buf,
VB2_BUF_STATE_QUEUED);
}
@ -220,12 +222,20 @@ static void vbi_cap_stop_streaming(struct vb2_queue *vq)
vivid_stop_generating_vid_cap(dev, &dev->vbi_cap_streaming);
}
static void vbi_cap_buf_request_complete(struct vb2_buffer *vb)
{
struct vivid_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
v4l2_ctrl_request_complete(vb->req_obj.req, &dev->ctrl_hdl_vbi_cap);
}
const struct vb2_ops vivid_vbi_cap_qops = {
.queue_setup = vbi_cap_queue_setup,
.buf_prepare = vbi_cap_buf_prepare,
.buf_queue = vbi_cap_buf_queue,
.start_streaming = vbi_cap_start_streaming,
.stop_streaming = vbi_cap_stop_streaming,
.buf_request_complete = vbi_cap_buf_request_complete,
.wait_prepare = vb2_ops_wait_prepare,
.wait_finish = vb2_ops_wait_finish,
};


@ -96,6 +96,8 @@ static int vbi_out_start_streaming(struct vb2_queue *vq, unsigned count)
list_for_each_entry_safe(buf, tmp, &dev->vbi_out_active, list) {
list_del(&buf->list);
v4l2_ctrl_request_complete(buf->vb.vb2_buf.req_obj.req,
&dev->ctrl_hdl_vbi_out);
vb2_buffer_done(&buf->vb.vb2_buf,
VB2_BUF_STATE_QUEUED);
}
@ -115,12 +117,20 @@ static void vbi_out_stop_streaming(struct vb2_queue *vq)
dev->vbi_out_have_cc[1] = false;
}
static void vbi_out_buf_request_complete(struct vb2_buffer *vb)
{
struct vivid_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
v4l2_ctrl_request_complete(vb->req_obj.req, &dev->ctrl_hdl_vbi_out);
}
const struct vb2_ops vivid_vbi_out_qops = {
.queue_setup = vbi_out_queue_setup,
.buf_prepare = vbi_out_buf_prepare,
.buf_queue = vbi_out_buf_queue,
.start_streaming = vbi_out_start_streaming,
.stop_streaming = vbi_out_stop_streaming,
.buf_request_complete = vbi_out_buf_request_complete,
.wait_prepare = vb2_ops_wait_prepare,
.wait_finish = vb2_ops_wait_finish,
};


@ -243,6 +243,8 @@ static int vid_cap_start_streaming(struct vb2_queue *vq, unsigned count)
list_for_each_entry_safe(buf, tmp, &dev->vid_cap_active, list) {
list_del(&buf->list);
v4l2_ctrl_request_complete(buf->vb.vb2_buf.req_obj.req,
&dev->ctrl_hdl_vid_cap);
vb2_buffer_done(&buf->vb.vb2_buf,
VB2_BUF_STATE_QUEUED);
}
@ -260,6 +262,13 @@ static void vid_cap_stop_streaming(struct vb2_queue *vq)
dev->can_loop_video = false;
}
static void vid_cap_buf_request_complete(struct vb2_buffer *vb)
{
struct vivid_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
v4l2_ctrl_request_complete(vb->req_obj.req, &dev->ctrl_hdl_vid_cap);
}
const struct vb2_ops vivid_vid_cap_qops = {
.queue_setup = vid_cap_queue_setup,
.buf_prepare = vid_cap_buf_prepare,
@ -267,6 +276,7 @@ const struct vb2_ops vivid_vid_cap_qops = {
.buf_queue = vid_cap_buf_queue,
.start_streaming = vid_cap_start_streaming,
.stop_streaming = vid_cap_stop_streaming,
.buf_request_complete = vid_cap_buf_request_complete,
.wait_prepare = vb2_ops_wait_prepare,
.wait_finish = vb2_ops_wait_finish,
};


@ -162,6 +162,8 @@ static int vid_out_start_streaming(struct vb2_queue *vq, unsigned count)
list_for_each_entry_safe(buf, tmp, &dev->vid_out_active, list) {
list_del(&buf->list);
v4l2_ctrl_request_complete(buf->vb.vb2_buf.req_obj.req,
&dev->ctrl_hdl_vid_out);
vb2_buffer_done(&buf->vb.vb2_buf,
VB2_BUF_STATE_QUEUED);
}
@ -179,12 +181,20 @@ static void vid_out_stop_streaming(struct vb2_queue *vq)
dev->can_loop_video = false;
}
static void vid_out_buf_request_complete(struct vb2_buffer *vb)
{
struct vivid_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
v4l2_ctrl_request_complete(vb->req_obj.req, &dev->ctrl_hdl_vid_out);
}
const struct vb2_ops vivid_vid_out_qops = {
.queue_setup = vid_out_queue_setup,
.buf_prepare = vid_out_buf_prepare,
.buf_queue = vid_out_buf_queue,
.start_streaming = vid_out_start_streaming,
.stop_streaming = vid_out_stop_streaming,
.buf_request_complete = vid_out_buf_request_complete,
.wait_prepare = vb2_ops_wait_prepare,
.wait_finish = vb2_ops_wait_finish,
};


@ -949,7 +949,7 @@ static int cpia2_dqbuf(struct file *file, void *fh, struct v4l2_buffer *buf)
buf->m.offset = cam->buffers[buf->index].data - cam->frame_buffer;
buf->length = cam->frame_size;
buf->reserved2 = 0;
-buf->reserved = 0;
+buf->request_fd = 0;
memset(&buf->timecode, 0, sizeof(buf->timecode));
DBG("DQBUF #%d status:%d seq:%d length:%d\n", buf->index,


@ -1992,7 +1992,7 @@ int cx231xx_417_register(struct cx231xx *dev)
dev->mpeg_ctrl_handler.ops = &cx231xx_ops;
if (dev->sd_cx25840)
v4l2_ctrl_add_handler(&dev->mpeg_ctrl_handler.hdl,
-dev->sd_cx25840->ctrl_handler, NULL);
+dev->sd_cx25840->ctrl_handler, NULL, false);
if (dev->mpeg_ctrl_handler.hdl.error) {
err = dev->mpeg_ctrl_handler.hdl.error;
dprintk(3, "%s: can't add cx25840 controls\n", dev->name);


@ -2204,10 +2204,10 @@ int cx231xx_register_analog_devices(struct cx231xx *dev)
if (dev->sd_cx25840) {
v4l2_ctrl_add_handler(&dev->ctrl_handler,
-dev->sd_cx25840->ctrl_handler, NULL);
+dev->sd_cx25840->ctrl_handler, NULL, true);
v4l2_ctrl_add_handler(&dev->radio_ctrl_handler,
dev->sd_cx25840->ctrl_handler,
-v4l2_ctrl_radio_filter);
+v4l2_ctrl_radio_filter, true);
}
if (dev->ctrl_handler.error)


@ -1278,7 +1278,7 @@ static int msi2500_probe(struct usb_interface *intf,
}
/* currently all controls are from subdev */
-v4l2_ctrl_add_handler(&dev->hdl, sd->ctrl_handler, NULL);
+v4l2_ctrl_add_handler(&dev->hdl, sd->ctrl_handler, NULL, true);
dev->v4l2_dev.ctrl_handler = &dev->hdl;
dev->vdev.v4l2_dev = &dev->v4l2_dev;


@ -1627,7 +1627,7 @@ int tm6000_v4l2_register(struct tm6000_core *dev)
v4l2_ctrl_new_std(&dev->ctrl_handler, &tm6000_ctrl_ops,
V4L2_CID_HUE, -128, 127, 1, 0);
v4l2_ctrl_add_handler(&dev->ctrl_handler,
-&dev->radio_ctrl_handler, NULL);
+&dev->radio_ctrl_handler, NULL, false);
if (dev->radio_ctrl_handler.error)
ret = dev->radio_ctrl_handler.error;


@ -300,12 +300,13 @@ int uvc_create_buffers(struct uvc_video_queue *queue,
return ret;
}
-int uvc_queue_buffer(struct uvc_video_queue *queue, struct v4l2_buffer *buf)
+int uvc_queue_buffer(struct uvc_video_queue *queue,
+struct media_device *mdev, struct v4l2_buffer *buf)
{
int ret;
mutex_lock(&queue->mutex);
-ret = vb2_qbuf(&queue->queue, buf);
+ret = vb2_qbuf(&queue->queue, mdev, buf);
mutex_unlock(&queue->mutex);
return ret;


@@ -751,7 +751,8 @@ static int uvc_ioctl_qbuf(struct file *file, void *fh, struct v4l2_buffer *buf)
 	if (!uvc_has_privileges(handle))
 		return -EBUSY;
-	return uvc_queue_buffer(&stream->queue, buf);
+	return uvc_queue_buffer(&stream->queue,
+				stream->vdev.v4l2_dev->mdev, buf);
 }
 static int uvc_ioctl_expbuf(struct file *file, void *fh,


@@ -700,6 +700,7 @@ int uvc_query_buffer(struct uvc_video_queue *queue,
 int uvc_create_buffers(struct uvc_video_queue *queue,
 		       struct v4l2_create_buffers *v4l2_cb);
 int uvc_queue_buffer(struct uvc_video_queue *queue,
+		     struct media_device *mdev,
 		     struct v4l2_buffer *v4l2_buf);
 int uvc_export_buffer(struct uvc_video_queue *queue,
 		      struct v4l2_exportbuffer *exp);


@@ -244,6 +244,7 @@ struct v4l2_format32 {
  *		return: number of created buffers
  * @memory:	buffer memory type
  * @format:	frame format, for which buffers are requested
+ * @capabilities: capabilities of this buffer type.
  * @reserved:	future extensions
  */
 struct v4l2_create_buffers32 {
@@ -251,7 +252,8 @@ struct v4l2_create_buffers32 {
 	__u32			count;
 	__u32			memory;	/* enum v4l2_memory */
 	struct v4l2_format32	format;
-	__u32			reserved[8];
+	__u32			capabilities;
+	__u32			reserved[7];
 };
 static int __bufsize_v4l2_format(struct v4l2_format32 __user *p32, u32 *size)
@@ -411,6 +413,7 @@ static int put_v4l2_create32(struct v4l2_create_buffers __user *p64,
 	if (!access_ok(VERIFY_WRITE, p32, sizeof(*p32)) ||
 	    copy_in_user(p32, p64,
 			 offsetof(struct v4l2_create_buffers32, format)) ||
+	    assign_in_user(&p32->capabilities, &p64->capabilities) ||
 	    copy_in_user(p32->reserved, p64->reserved, sizeof(p64->reserved)))
 		return -EFAULT;
 	return __put_v4l2_format32(&p64->format, &p32->format);
@@ -482,7 +485,7 @@ struct v4l2_buffer32 {
 	} m;
 	__u32			length;
 	__u32			reserved2;
-	__u32			reserved;
+	__s32			request_fd;
 };
 static int get_v4l2_plane32(struct v4l2_plane __user *p64,
@@ -581,6 +584,7 @@ static int get_v4l2_buffer32(struct v4l2_buffer __user *p64,
 {
 	u32 type;
 	u32 length;
+	s32 request_fd;
 	enum v4l2_memory memory;
 	struct v4l2_plane32 __user *uplane32;
 	struct v4l2_plane __user *uplane;
@@ -595,7 +599,9 @@ static int get_v4l2_buffer32(struct v4l2_buffer __user *p64,
 	    get_user(memory, &p32->memory) ||
 	    put_user(memory, &p64->memory) ||
 	    get_user(length, &p32->length) ||
-	    put_user(length, &p64->length))
+	    put_user(length, &p64->length) ||
+	    get_user(request_fd, &p32->request_fd) ||
+	    put_user(request_fd, &p64->request_fd))
 		return -EFAULT;
 	if (V4L2_TYPE_IS_OUTPUT(type))
@@ -699,7 +705,7 @@ static int put_v4l2_buffer32(struct v4l2_buffer __user *p64,
 	    copy_in_user(&p32->timecode, &p64->timecode, sizeof(p64->timecode)) ||
 	    assign_in_user(&p32->sequence, &p64->sequence) ||
 	    assign_in_user(&p32->reserved2, &p64->reserved2) ||
-	    assign_in_user(&p32->reserved, &p64->reserved) ||
+	    assign_in_user(&p32->request_fd, &p64->request_fd) ||
 	    get_user(length, &p64->length) ||
 	    put_user(length, &p32->length))
 		return -EFAULT;
@@ -834,7 +840,8 @@ struct v4l2_ext_controls32 {
 	__u32 which;
 	__u32 count;
 	__u32 error_idx;
-	__u32 reserved[2];
+	__s32 request_fd;
+	__u32 reserved[1];
 	compat_caddr_t controls; /* actually struct v4l2_ext_control32 * */
 };
@@ -909,6 +916,7 @@ static int get_v4l2_ext_controls32(struct file *file,
 	    get_user(count, &p32->count) ||
 	    put_user(count, &p64->count) ||
 	    assign_in_user(&p64->error_idx, &p32->error_idx) ||
+	    assign_in_user(&p64->request_fd, &p32->request_fd) ||
 	    copy_in_user(p64->reserved, p32->reserved, sizeof(p64->reserved)))
 		return -EFAULT;
@@ -974,6 +982,7 @@ static int put_v4l2_ext_controls32(struct file *file,
 	    get_user(count, &p64->count) ||
 	    put_user(count, &p32->count) ||
 	    assign_in_user(&p32->error_idx, &p64->error_idx) ||
+	    assign_in_user(&p32->request_fd, &p64->request_fd) ||
 	    copy_in_user(p32->reserved, p64->reserved, sizeof(p32->reserved)) ||
 	    get_user(kcontrols, &p64->controls))
 		return -EFAULT;


@@ -37,8 +37,8 @@
 struct v4l2_ctrl_helper {
 	/* Pointer to the control reference of the master control */
 	struct v4l2_ctrl_ref *mref;
-	/* The control corresponding to the v4l2_ext_control ID field. */
-	struct v4l2_ctrl *ctrl;
+	/* The control ref corresponding to the v4l2_ext_control ID field. */
+	struct v4l2_ctrl_ref *ref;
 	/* v4l2_ext_control index of the next control belonging to the
 	   same cluster, or 0 if there isn't any. */
 	u32 next;
@@ -844,6 +844,8 @@ const char *v4l2_ctrl_get_name(u32 id)
 	case V4L2_CID_MPEG_VIDEO_MV_V_SEARCH_RANGE:	return "Vertical MV Search Range";
 	case V4L2_CID_MPEG_VIDEO_REPEAT_SEQ_HEADER:	return "Repeat Sequence Header";
 	case V4L2_CID_MPEG_VIDEO_FORCE_KEY_FRAME:	return "Force Key Frame";
+	case V4L2_CID_MPEG_VIDEO_MPEG2_SLICE_PARAMS:	return "MPEG-2 Slice Parameters";
+	case V4L2_CID_MPEG_VIDEO_MPEG2_QUANTIZATION:	return "MPEG-2 Quantization Matrices";
 	/* VPX controls */
 	case V4L2_CID_MPEG_VIDEO_VPX_NUM_PARTITIONS:	return "VPX Number of Partitions";
@@ -1292,6 +1294,12 @@ void v4l2_ctrl_fill(u32 id, const char **name, enum v4l2_ctrl_type *type,
 	case V4L2_CID_RDS_TX_ALT_FREQS:
 		*type = V4L2_CTRL_TYPE_U32;
 		break;
+	case V4L2_CID_MPEG_VIDEO_MPEG2_SLICE_PARAMS:
+		*type = V4L2_CTRL_TYPE_MPEG2_SLICE_PARAMS;
+		break;
+	case V4L2_CID_MPEG_VIDEO_MPEG2_QUANTIZATION:
+		*type = V4L2_CTRL_TYPE_MPEG2_QUANTIZATION;
+		break;
 	default:
 		*type = V4L2_CTRL_TYPE_INTEGER;
 		break;
@@ -1550,6 +1558,7 @@ static void std_log(const struct v4l2_ctrl *ctrl)
 static int std_validate(const struct v4l2_ctrl *ctrl, u32 idx,
 			union v4l2_ctrl_ptr ptr)
 {
+	struct v4l2_ctrl_mpeg2_slice_params *p_mpeg2_slice_params;
 	size_t len;
 	u64 offset;
 	s64 val;
@@ -1612,6 +1621,54 @@ static int std_validate(const struct v4l2_ctrl *ctrl, u32 idx,
 			return -ERANGE;
 		return 0;
+	case V4L2_CTRL_TYPE_MPEG2_SLICE_PARAMS:
+		p_mpeg2_slice_params = ptr.p;
+		switch (p_mpeg2_slice_params->sequence.chroma_format) {
+		case 1: /* 4:2:0 */
+		case 2: /* 4:2:2 */
+		case 3: /* 4:4:4 */
+			break;
+		default:
+			return -EINVAL;
+		}
+		switch (p_mpeg2_slice_params->picture.intra_dc_precision) {
+		case 0: /* 8 bits */
+		case 1: /* 9 bits */
+		case 11: /* 11 bits */
+			break;
+		default:
+			return -EINVAL;
+		}
+		switch (p_mpeg2_slice_params->picture.picture_structure) {
+		case 1: /* interlaced top field */
+		case 2: /* interlaced bottom field */
+		case 3: /* progressive */
+			break;
+		default:
+			return -EINVAL;
+		}
+		switch (p_mpeg2_slice_params->picture.picture_coding_type) {
+		case V4L2_MPEG2_PICTURE_CODING_TYPE_I:
+		case V4L2_MPEG2_PICTURE_CODING_TYPE_P:
+		case V4L2_MPEG2_PICTURE_CODING_TYPE_B:
+			break;
+		default:
+			return -EINVAL;
+		}
+		if (p_mpeg2_slice_params->backward_ref_index >= VIDEO_MAX_FRAME ||
+		    p_mpeg2_slice_params->forward_ref_index >= VIDEO_MAX_FRAME)
+			return -EINVAL;
+		return 0;
+	case V4L2_CTRL_TYPE_MPEG2_QUANTIZATION:
+		return 0;
 	default:
 		return -EINVAL;
 	}
@@ -1668,6 +1725,13 @@ static int new_to_user(struct v4l2_ext_control *c,
 	return ptr_to_user(c, ctrl, ctrl->p_new);
 }
+/* Helper function: copy the request value back to the caller */
+static int req_to_user(struct v4l2_ext_control *c,
+		       struct v4l2_ctrl_ref *ref)
+{
+	return ptr_to_user(c, ref->ctrl, ref->p_req);
+}
 /* Helper function: copy the initial control value back to the caller */
 static int def_to_user(struct v4l2_ext_control *c, struct v4l2_ctrl *ctrl)
 {
@@ -1787,6 +1851,26 @@ static void cur_to_new(struct v4l2_ctrl *ctrl)
 	ptr_to_ptr(ctrl, ctrl->p_cur, ctrl->p_new);
 }
+/* Copy the new value to the request value */
+static void new_to_req(struct v4l2_ctrl_ref *ref)
+{
+	if (!ref)
+		return;
+	ptr_to_ptr(ref->ctrl, ref->ctrl->p_new, ref->p_req);
+	ref->req = ref;
+}
+/* Copy the request value to the new value */
+static void req_to_new(struct v4l2_ctrl_ref *ref)
+{
+	if (!ref)
+		return;
+	if (ref->req)
+		ptr_to_ptr(ref->ctrl, ref->req->p_req, ref->ctrl->p_new);
+	else
+		ptr_to_ptr(ref->ctrl, ref->ctrl->p_cur, ref->ctrl->p_new);
+}
 /* Return non-zero if one or more of the controls in the cluster has a new
    value that differs from the current value. */
 static int cluster_changed(struct v4l2_ctrl *master)
@@ -1896,11 +1980,15 @@ int v4l2_ctrl_handler_init_class(struct v4l2_ctrl_handler *hdl,
 	lockdep_set_class_and_name(hdl->lock, key, name);
 	INIT_LIST_HEAD(&hdl->ctrls);
 	INIT_LIST_HEAD(&hdl->ctrl_refs);
+	INIT_LIST_HEAD(&hdl->requests);
+	INIT_LIST_HEAD(&hdl->requests_queued);
+	hdl->request_is_queued = false;
 	hdl->nr_of_buckets = 1 + nr_of_controls_hint / 8;
 	hdl->buckets = kvmalloc_array(hdl->nr_of_buckets,
 				      sizeof(hdl->buckets[0]),
 				      GFP_KERNEL | __GFP_ZERO);
 	hdl->error = hdl->buckets ? 0 : -ENOMEM;
+	media_request_object_init(&hdl->req_obj);
 	return hdl->error;
 }
 EXPORT_SYMBOL(v4l2_ctrl_handler_init_class);
@@ -1915,6 +2003,14 @@ void v4l2_ctrl_handler_free(struct v4l2_ctrl_handler *hdl)
 	if (hdl == NULL || hdl->buckets == NULL)
 		return;
+	if (!hdl->req_obj.req && !list_empty(&hdl->requests)) {
+		struct v4l2_ctrl_handler *req, *next_req;
+		list_for_each_entry_safe(req, next_req, &hdl->requests, requests) {
+			media_request_object_unbind(&req->req_obj);
+			media_request_object_put(&req->req_obj);
+		}
+	}
 	mutex_lock(hdl->lock);
 	/* Free all nodes */
 	list_for_each_entry_safe(ref, next_ref, &hdl->ctrl_refs, node) {
@@ -2016,13 +2112,19 @@ EXPORT_SYMBOL(v4l2_ctrl_find);
 /* Allocate a new v4l2_ctrl_ref and hook it into the handler. */
 static int handler_new_ref(struct v4l2_ctrl_handler *hdl,
-			   struct v4l2_ctrl *ctrl)
+			   struct v4l2_ctrl *ctrl,
+			   struct v4l2_ctrl_ref **ctrl_ref,
+			   bool from_other_dev, bool allocate_req)
 {
 	struct v4l2_ctrl_ref *ref;
 	struct v4l2_ctrl_ref *new_ref;
 	u32 id = ctrl->id;
 	u32 class_ctrl = V4L2_CTRL_ID2WHICH(id) | 1;
 	int bucket = id % hdl->nr_of_buckets; /* which bucket to use */
+	unsigned int size_extra_req = 0;
+	if (ctrl_ref)
+		*ctrl_ref = NULL;
 	/*
 	 * Automatically add the control class if it is not yet present and
@@ -2036,10 +2138,16 @@ static int handler_new_ref(struct v4l2_ctrl_handler *hdl,
 	if (hdl->error)
 		return hdl->error;
-	new_ref = kzalloc(sizeof(*new_ref), GFP_KERNEL);
+	if (allocate_req)
+		size_extra_req = ctrl->elems * ctrl->elem_size;
+	new_ref = kzalloc(sizeof(*new_ref) + size_extra_req, GFP_KERNEL);
 	if (!new_ref)
 		return handler_set_err(hdl, -ENOMEM);
 	new_ref->ctrl = ctrl;
+	new_ref->from_other_dev = from_other_dev;
+	if (size_extra_req)
+		new_ref->p_req.p = &new_ref[1];
 	if (ctrl->handler == hdl) {
 		/* By default each control starts in a cluster of its own.
 		   new_ref->ctrl is basically a cluster array with one
@@ -2079,6 +2187,8 @@ insert_in_hash:
 	/* Insert the control node in the hash */
 	new_ref->next = hdl->buckets[bucket];
 	hdl->buckets[bucket] = new_ref;
+	if (ctrl_ref)
+		*ctrl_ref = new_ref;
 unlock:
 	mutex_unlock(hdl->lock);
@@ -2133,6 +2243,12 @@ static struct v4l2_ctrl *v4l2_ctrl_new(struct v4l2_ctrl_handler *hdl,
 	case V4L2_CTRL_TYPE_U32:
 		elem_size = sizeof(u32);
 		break;
+	case V4L2_CTRL_TYPE_MPEG2_SLICE_PARAMS:
+		elem_size = sizeof(struct v4l2_ctrl_mpeg2_slice_params);
+		break;
+	case V4L2_CTRL_TYPE_MPEG2_QUANTIZATION:
+		elem_size = sizeof(struct v4l2_ctrl_mpeg2_quantization);
+		break;
 	default:
 		if (type < V4L2_CTRL_COMPOUND_TYPES)
 			elem_size = sizeof(s32);
@@ -2220,7 +2336,7 @@ static struct v4l2_ctrl *v4l2_ctrl_new(struct v4l2_ctrl_handler *hdl,
 			ctrl->type_ops->init(ctrl, idx, ctrl->p_new);
 	}
-	if (handler_new_ref(hdl, ctrl)) {
+	if (handler_new_ref(hdl, ctrl, NULL, false, false)) {
 		kvfree(ctrl);
 		return NULL;
 	}
@@ -2389,7 +2505,8 @@ EXPORT_SYMBOL(v4l2_ctrl_new_int_menu);
 /* Add the controls from another handler to our own. */
 int v4l2_ctrl_add_handler(struct v4l2_ctrl_handler *hdl,
 			  struct v4l2_ctrl_handler *add,
-			  bool (*filter)(const struct v4l2_ctrl *ctrl))
+			  bool (*filter)(const struct v4l2_ctrl *ctrl),
+			  bool from_other_dev)
 {
 	struct v4l2_ctrl_ref *ref;
 	int ret = 0;
@@ -2412,7 +2529,7 @@ int v4l2_ctrl_add_handler(struct v4l2_ctrl_handler *hdl,
 		/* Filter any unwanted controls */
 		if (filter && !filter(ctrl))
 			continue;
-		ret = handler_new_ref(hdl, ctrl);
+		ret = handler_new_ref(hdl, ctrl, NULL, from_other_dev, false);
 		if (ret)
 			break;
 	}
@@ -2815,6 +2932,148 @@ int v4l2_querymenu(struct v4l2_ctrl_handler *hdl, struct v4l2_querymenu *qm)
 }
 EXPORT_SYMBOL(v4l2_querymenu);
+static int v4l2_ctrl_request_clone(struct v4l2_ctrl_handler *hdl,
+				   const struct v4l2_ctrl_handler *from)
+{
+	struct v4l2_ctrl_ref *ref;
+	int err = 0;
+	if (WARN_ON(!hdl || hdl == from))
+		return -EINVAL;
+	if (hdl->error)
+		return hdl->error;
+	WARN_ON(hdl->lock != &hdl->_lock);
+	mutex_lock(from->lock);
+	list_for_each_entry(ref, &from->ctrl_refs, node) {
+		struct v4l2_ctrl *ctrl = ref->ctrl;
+		struct v4l2_ctrl_ref *new_ref;
+		/* Skip refs inherited from other devices */
+		if (ref->from_other_dev)
+			continue;
+		/* And buttons */
+		if (ctrl->type == V4L2_CTRL_TYPE_BUTTON)
+			continue;
+		err = handler_new_ref(hdl, ctrl, &new_ref, false, true);
+		if (err)
+			break;
+	}
+	mutex_unlock(from->lock);
+	return err;
+}
+static void v4l2_ctrl_request_queue(struct media_request_object *obj)
+{
+	struct v4l2_ctrl_handler *hdl =
+		container_of(obj, struct v4l2_ctrl_handler, req_obj);
+	struct v4l2_ctrl_handler *main_hdl = obj->priv;
+	struct v4l2_ctrl_handler *prev_hdl = NULL;
+	struct v4l2_ctrl_ref *ref_ctrl, *ref_ctrl_prev = NULL;
+	if (list_empty(&main_hdl->requests_queued))
+		goto queue;
+	prev_hdl = list_last_entry(&main_hdl->requests_queued,
+				   struct v4l2_ctrl_handler, requests_queued);
+	/*
+	 * Note: prev_hdl and hdl must contain the same list of control
+	 * references, so if any differences are detected then that is a
+	 * driver bug and the WARN_ON is triggered.
+	 */
+	mutex_lock(prev_hdl->lock);
+	ref_ctrl_prev = list_first_entry(&prev_hdl->ctrl_refs,
+					 struct v4l2_ctrl_ref, node);
+	list_for_each_entry(ref_ctrl, &hdl->ctrl_refs, node) {
+		if (ref_ctrl->req)
+			continue;
+		while (ref_ctrl_prev->ctrl->id < ref_ctrl->ctrl->id) {
+			/* Should never happen, but just in case... */
+			if (list_is_last(&ref_ctrl_prev->node,
+					 &prev_hdl->ctrl_refs))
+				break;
+			ref_ctrl_prev = list_next_entry(ref_ctrl_prev, node);
+		}
+		if (WARN_ON(ref_ctrl_prev->ctrl->id != ref_ctrl->ctrl->id))
+			break;
+		ref_ctrl->req = ref_ctrl_prev->req;
+	}
+	mutex_unlock(prev_hdl->lock);
+queue:
+	list_add_tail(&hdl->requests_queued, &main_hdl->requests_queued);
+	hdl->request_is_queued = true;
+}
+static void v4l2_ctrl_request_unbind(struct media_request_object *obj)
+{
+	struct v4l2_ctrl_handler *hdl =
+		container_of(obj, struct v4l2_ctrl_handler, req_obj);
+	list_del_init(&hdl->requests);
+	if (hdl->request_is_queued) {
+		list_del_init(&hdl->requests_queued);
+		hdl->request_is_queued = false;
+	}
+}
+static void v4l2_ctrl_request_release(struct media_request_object *obj)
+{
+	struct v4l2_ctrl_handler *hdl =
+		container_of(obj, struct v4l2_ctrl_handler, req_obj);
+	v4l2_ctrl_handler_free(hdl);
+	kfree(hdl);
+}
+static const struct media_request_object_ops req_ops = {
+	.queue = v4l2_ctrl_request_queue,
+	.unbind = v4l2_ctrl_request_unbind,
+	.release = v4l2_ctrl_request_release,
+};
+struct v4l2_ctrl_handler *v4l2_ctrl_request_hdl_find(struct media_request *req,
+					struct v4l2_ctrl_handler *parent)
+{
+	struct media_request_object *obj;
+	if (WARN_ON(req->state != MEDIA_REQUEST_STATE_VALIDATING &&
+		    req->state != MEDIA_REQUEST_STATE_QUEUED))
+		return NULL;
+	obj = media_request_object_find(req, &req_ops, parent);
+	if (obj)
+		return container_of(obj, struct v4l2_ctrl_handler, req_obj);
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(v4l2_ctrl_request_hdl_find);
+struct v4l2_ctrl *
+v4l2_ctrl_request_hdl_ctrl_find(struct v4l2_ctrl_handler *hdl, u32 id)
+{
+	struct v4l2_ctrl_ref *ref = find_ref_lock(hdl, id);
+	return (ref && ref->req == ref) ? ref->ctrl : NULL;
+}
+EXPORT_SYMBOL_GPL(v4l2_ctrl_request_hdl_ctrl_find);
+static int v4l2_ctrl_request_bind(struct media_request *req,
+				  struct v4l2_ctrl_handler *hdl,
+				  struct v4l2_ctrl_handler *from)
+{
+	int ret;
+	ret = v4l2_ctrl_request_clone(hdl, from);
+	if (!ret) {
+		ret = media_request_object_bind(req, &req_ops,
+						from, false, &hdl->req_obj);
+		if (!ret)
+			list_add_tail(&hdl->requests, &from->requests);
+	}
+	return ret;
+}
 /* Some general notes on the atomic requirements of VIDIOC_G/TRY/S_EXT_CTRLS:
@@ -2876,6 +3135,7 @@ static int prepare_ext_ctrls(struct v4l2_ctrl_handler *hdl,
 		if (cs->which &&
 		    cs->which != V4L2_CTRL_WHICH_DEF_VAL &&
+		    cs->which != V4L2_CTRL_WHICH_REQUEST_VAL &&
 		    V4L2_CTRL_ID2WHICH(id) != cs->which)
 			return -EINVAL;
@@ -2886,6 +3146,7 @@ static int prepare_ext_ctrls(struct v4l2_ctrl_handler *hdl,
 		ref = find_ref_lock(hdl, id);
 		if (ref == NULL)
 			return -EINVAL;
+		h->ref = ref;
 		ctrl = ref->ctrl;
 		if (ctrl->flags & V4L2_CTRL_FLAG_DISABLED)
 			return -EINVAL;
@@ -2908,7 +3169,6 @@ static int prepare_ext_ctrls(struct v4l2_ctrl_handler *hdl,
 		}
 		/* Store the ref to the master control of the cluster */
 		h->mref = ref;
-		h->ctrl = ctrl;
 		/* Initially set next to 0, meaning that there is no other
 		   control in this helper array belonging to the same
 		   cluster */
@@ -2955,15 +3215,15 @@ static int prepare_ext_ctrls(struct v4l2_ctrl_handler *hdl,
    whether there are any controls at all. */
 static int class_check(struct v4l2_ctrl_handler *hdl, u32 which)
 {
-	if (which == 0 || which == V4L2_CTRL_WHICH_DEF_VAL)
+	if (which == 0 || which == V4L2_CTRL_WHICH_DEF_VAL ||
+	    which == V4L2_CTRL_WHICH_REQUEST_VAL)
 		return 0;
 	return find_ref_lock(hdl, which | 1) ? 0 : -EINVAL;
 }
 /* Get extended controls. Allocates the helpers array if needed. */
-int v4l2_g_ext_ctrls(struct v4l2_ctrl_handler *hdl, struct v4l2_ext_controls *cs)
+static int v4l2_g_ext_ctrls_common(struct v4l2_ctrl_handler *hdl,
+				   struct v4l2_ext_controls *cs)
 {
 	struct v4l2_ctrl_helper helper[4];
 	struct v4l2_ctrl_helper *helpers = helper;
@@ -2993,7 +3253,7 @@ int v4l2_g_ext_ctrls(struct v4l2_ctrl_handler *hdl, struct v4l2_ext_controls *cs
 	cs->error_idx = cs->count;
 	for (i = 0; !ret && i < cs->count; i++)
-		if (helpers[i].ctrl->flags & V4L2_CTRL_FLAG_WRITE_ONLY)
+		if (helpers[i].ref->ctrl->flags & V4L2_CTRL_FLAG_WRITE_ONLY)
 			ret = -EACCES;
 	for (i = 0; !ret && i < cs->count; i++) {
@@ -3027,8 +3287,12 @@ int v4l2_g_ext_ctrls(struct v4l2_ctrl_handler *hdl, struct v4l2_ext_controls *cs
 			u32 idx = i;
 			do {
-				ret = ctrl_to_user(cs->controls + idx,
-						   helpers[idx].ctrl);
+				if (helpers[idx].ref->req)
+					ret = req_to_user(cs->controls + idx,
+							  helpers[idx].ref->req);
+				else
+					ret = ctrl_to_user(cs->controls + idx,
+							   helpers[idx].ref->ctrl);
 				idx = helpers[idx].next;
 			} while (!ret && idx);
 		}
@@ -3039,6 +3303,91 @@ int v4l2_g_ext_ctrls(struct v4l2_ctrl_handler *hdl, struct v4l2_ext_controls *cs
 	kvfree(helpers);
 	return ret;
 }
+static struct media_request_object *
+v4l2_ctrls_find_req_obj(struct v4l2_ctrl_handler *hdl,
+			struct media_request *req, bool set)
+{
+	struct media_request_object *obj;
+	struct v4l2_ctrl_handler *new_hdl;
+	int ret;
+	if (IS_ERR(req))
+		return ERR_CAST(req);
+	if (set && WARN_ON(req->state != MEDIA_REQUEST_STATE_UPDATING))
+		return ERR_PTR(-EBUSY);
+	obj = media_request_object_find(req, &req_ops, hdl);
+	if (obj)
+		return obj;
+	if (!set)
+		return ERR_PTR(-ENOENT);
+	new_hdl = kzalloc(sizeof(*new_hdl), GFP_KERNEL);
+	if (!new_hdl)
+		return ERR_PTR(-ENOMEM);
+	obj = &new_hdl->req_obj;
+	ret = v4l2_ctrl_handler_init(new_hdl, (hdl->nr_of_buckets - 1) * 8);
+	if (!ret)
+		ret = v4l2_ctrl_request_bind(req, new_hdl, hdl);
+	if (ret) {
+		kfree(new_hdl);
+		return ERR_PTR(ret);
+	}
+	media_request_object_get(obj);
+	return obj;
+}
+int v4l2_g_ext_ctrls(struct v4l2_ctrl_handler *hdl, struct media_device *mdev,
+		     struct v4l2_ext_controls *cs)
+{
+	struct media_request_object *obj = NULL;
+	struct media_request *req = NULL;
+	int ret;
+	if (cs->which == V4L2_CTRL_WHICH_REQUEST_VAL) {
+		if (!mdev || cs->request_fd < 0)
+			return -EINVAL;
+		req = media_request_get_by_fd(mdev, cs->request_fd);
+		if (IS_ERR(req))
+			return PTR_ERR(req);
+		if (req->state != MEDIA_REQUEST_STATE_COMPLETE) {
+			media_request_put(req);
+			return -EACCES;
+		}
+		ret = media_request_lock_for_access(req);
+		if (ret) {
+			media_request_put(req);
+			return ret;
+		}
+		obj = v4l2_ctrls_find_req_obj(hdl, req, false);
+		if (IS_ERR(obj)) {
+			media_request_unlock_for_access(req);
+			media_request_put(req);
+			return PTR_ERR(obj);
+		}
+		hdl = container_of(obj, struct v4l2_ctrl_handler,
+				   req_obj);
+	}
+	ret = v4l2_g_ext_ctrls_common(hdl, cs);
+	if (obj) {
+		media_request_unlock_for_access(req);
+		media_request_object_put(obj);
+		media_request_put(req);
+	}
+	return ret;
+}
 EXPORT_SYMBOL(v4l2_g_ext_ctrls);
 /* Helper function to get a single control */
@@ -3180,7 +3529,7 @@ static int validate_ctrls(struct v4l2_ext_controls *cs,
 	cs->error_idx = cs->count;
 	for (i = 0; i < cs->count; i++) {
-		struct v4l2_ctrl *ctrl = helpers[i].ctrl;
+		struct v4l2_ctrl *ctrl = helpers[i].ref->ctrl;
 		union v4l2_ctrl_ptr p_new;
 		cs->error_idx = i;
@@ -3227,9 +3576,9 @@ static void update_from_auto_cluster(struct v4l2_ctrl *master)
 }
 /* Try or try-and-set controls */
-static int try_set_ext_ctrls(struct v4l2_fh *fh, struct v4l2_ctrl_handler *hdl,
-			     struct v4l2_ext_controls *cs,
-			     bool set)
+static int try_set_ext_ctrls_common(struct v4l2_fh *fh,
+				    struct v4l2_ctrl_handler *hdl,
+				    struct v4l2_ext_controls *cs, bool set)
 {
 	struct v4l2_ctrl_helper helper[4];
 	struct v4l2_ctrl_helper *helpers = helper;
@@ -3292,7 +3641,7 @@ static int try_set_ext_ctrls(struct v4l2_fh *fh, struct v4l2_ctrl_handler *hdl,
 			do {
 				/* Check if the auto control is part of the
 				   list, and remember the new value. */
-				if (helpers[tmp_idx].ctrl == master)
+				if (helpers[tmp_idx].ref->ctrl == master)
 					new_auto_val = cs->controls[tmp_idx].value;
 				tmp_idx = helpers[tmp_idx].next;
 			} while (tmp_idx);
@@ -3305,7 +3654,7 @@ static int try_set_ext_ctrls(struct v4l2_fh *fh, struct v4l2_ctrl_handler *hdl,
 		/* Copy the new caller-supplied control values.
 		   user_to_new() sets 'is_new' to 1. */
 		do {
-			struct v4l2_ctrl *ctrl = helpers[idx].ctrl;
+			struct v4l2_ctrl *ctrl = helpers[idx].ref->ctrl;
 			ret = user_to_new(cs->controls + idx, ctrl);
 			if (!ret && ctrl->is_ptr)
@@ -3314,14 +3663,23 @@ static int try_set_ext_ctrls(struct v4l2_fh *fh, struct v4l2_ctrl_handler *hdl,
 		} while (!ret && idx);
 		if (!ret)
-			ret = try_or_set_cluster(fh, master, set, 0);
+			ret = try_or_set_cluster(fh, master,
+						 !hdl->req_obj.req && set, 0);
+		if (!ret && hdl->req_obj.req && set) {
+			for (j = 0; j < master->ncontrols; j++) {
+				struct v4l2_ctrl_ref *ref =
+					find_ref(hdl, master->cluster[j]->id);
+				new_to_req(ref);
+			}
+		}
 		/* Copy the new values back to userspace. */
 		if (!ret) {
 			idx = i;
 			do {
 				ret = new_to_user(cs->controls + idx,
-						  helpers[idx].ctrl);
+						  helpers[idx].ref->ctrl);
 				idx = helpers[idx].next;
 			} while (!ret && idx);
 		}
@@ -3333,16 +3691,60 @@ static int try_set_ext_ctrls(struct v4l2_fh *fh, struct v4l2_ctrl_handler *hdl,
 	return ret;
 }
-int v4l2_try_ext_ctrls(struct v4l2_ctrl_handler *hdl, struct v4l2_ext_controls *cs)
+static int try_set_ext_ctrls(struct v4l2_fh *fh,
+			     struct v4l2_ctrl_handler *hdl, struct media_device *mdev,
+			     struct v4l2_ext_controls *cs, bool set)
 {
-	return try_set_ext_ctrls(NULL, hdl, cs, false);
+	struct media_request_object *obj = NULL;
+	struct media_request *req = NULL;
+	int ret;
+	if (cs->which == V4L2_CTRL_WHICH_REQUEST_VAL) {
+		if (!mdev || cs->request_fd < 0)
+			return -EINVAL;
+		req = media_request_get_by_fd(mdev, cs->request_fd);
+		if (IS_ERR(req))
+			return PTR_ERR(req);
+		ret = media_request_lock_for_update(req);
+		if (ret) {
+			media_request_put(req);
+			return ret;
+		}
+		obj = v4l2_ctrls_find_req_obj(hdl, req, set);
+		if (IS_ERR(obj)) {
+			media_request_unlock_for_update(req);
+			media_request_put(req);
+			return PTR_ERR(obj);
+		}
+		hdl = container_of(obj, struct v4l2_ctrl_handler,
+				   req_obj);
+	}
+	ret = try_set_ext_ctrls_common(fh, hdl, cs, set);
+	if (obj) {
+		media_request_unlock_for_update(req);
+		media_request_object_put(obj);
+		media_request_put(req);
+	}
+	return ret;
+}
+int v4l2_try_ext_ctrls(struct v4l2_ctrl_handler *hdl, struct media_device *mdev,
+		       struct v4l2_ext_controls *cs)
+{
+	return try_set_ext_ctrls(NULL, hdl, mdev, cs, false);
 }
 EXPORT_SYMBOL(v4l2_try_ext_ctrls);
 int v4l2_s_ext_ctrls(struct v4l2_fh *fh, struct v4l2_ctrl_handler *hdl,
-		     struct v4l2_ext_controls *cs)
+		     struct media_device *mdev, struct v4l2_ext_controls *cs)
 {
-	return try_set_ext_ctrls(fh, hdl, cs, true);
+	return try_set_ext_ctrls(fh, hdl, mdev, cs, true);
 }
 EXPORT_SYMBOL(v4l2_s_ext_ctrls);
@@ -3441,6 +3843,162 @@ int __v4l2_ctrl_s_ctrl_string(struct v4l2_ctrl *ctrl, const char *s)
 }
 EXPORT_SYMBOL(__v4l2_ctrl_s_ctrl_string);
+void v4l2_ctrl_request_complete(struct media_request *req,
+				struct v4l2_ctrl_handler *main_hdl)
+{
+	struct media_request_object *obj;
+	struct v4l2_ctrl_handler *hdl;
+	struct v4l2_ctrl_ref *ref;
+	if (!req || !main_hdl)
+		return;
+	/*
+	 * Note that it is valid if nothing was found. It means
+	 * that this request doesn't have any controls and so just
+	 * wants to leave the controls unchanged.
+	 */
+	obj = media_request_object_find(req, &req_ops, main_hdl);
+	if (!obj)
+		return;
+	hdl = container_of(obj, struct v4l2_ctrl_handler, req_obj);
+	list_for_each_entry(ref, &hdl->ctrl_refs, node) {
+		struct v4l2_ctrl *ctrl = ref->ctrl;
+		struct v4l2_ctrl *master = ctrl->cluster[0];
+		unsigned int i;
+		if (ctrl->flags & V4L2_CTRL_FLAG_VOLATILE) {
+			ref->req = ref;
+			v4l2_ctrl_lock(master);
+			/* g_volatile_ctrl will update the current control values */
+			for (i = 0; i < master->ncontrols; i++)
+				cur_to_new(master->cluster[i]);
+			call_op(master, g_volatile_ctrl);
+			new_to_req(ref);
+			v4l2_ctrl_unlock(master);
+			continue;
+		}
+		if (ref->req == ref)
+			continue;
+		v4l2_ctrl_lock(ctrl);
+		if (ref->req)
+			ptr_to_ptr(ctrl, ref->req->p_req, ref->p_req);
+		else
+			ptr_to_ptr(ctrl, ctrl->p_cur, ref->p_req);
+		v4l2_ctrl_unlock(ctrl);
+	}
+	WARN_ON(!hdl->request_is_queued);
+	list_del_init(&hdl->requests_queued);
+	hdl->request_is_queued = false;
+	media_request_object_complete(obj);
+	media_request_object_put(obj);
+}
+EXPORT_SYMBOL(v4l2_ctrl_request_complete);
+void v4l2_ctrl_request_setup(struct media_request *req,
+			     struct v4l2_ctrl_handler *main_hdl)
+{
+	struct media_request_object *obj;
+	struct v4l2_ctrl_handler *hdl;
+	struct v4l2_ctrl_ref *ref;
+	if (!req || !main_hdl)
+		return;
+	if (WARN_ON(req->state != MEDIA_REQUEST_STATE_QUEUED))
+		return;
+	/*
+	 * Note that it is valid if nothing was found. It means
+	 * that this request doesn't have any controls and so just
+	 * wants to leave the controls unchanged.
+	 */
+	obj = media_request_object_find(req, &req_ops, main_hdl);
+	if (!obj)
+		return;
+	if (obj->completed) {
+		media_request_object_put(obj);
+		return;
+	}
+	hdl = container_of(obj, struct v4l2_ctrl_handler, req_obj);
+	list_for_each_entry(ref, &hdl->ctrl_refs, node)
+		ref->req_done = false;
+	list_for_each_entry(ref, &hdl->ctrl_refs, node) {
+		struct v4l2_ctrl *ctrl = ref->ctrl;
+		struct v4l2_ctrl *master = ctrl->cluster[0];
+		bool have_new_data = false;
+		int i;
+		/*
+		 * Skip if this control was already handled by a cluster.
+		 * Skip button controls and read-only controls.
+		 */
+		if (ref->req_done || ctrl->type == V4L2_CTRL_TYPE_BUTTON ||
+		    (ctrl->flags & V4L2_CTRL_FLAG_READ_ONLY))
+			continue;
+		v4l2_ctrl_lock(master);
+		for (i = 0; i < master->ncontrols; i++) {
+			if (master->cluster[i]) {
+				struct v4l2_ctrl_ref *r =
+					find_ref(hdl, master->cluster[i]->id);
+				if (r->req && r == r->req) {
+					have_new_data = true;
+					break;
+				}
+			}
+		}
+		if (!have_new_data) {
+			v4l2_ctrl_unlock(master);
+			continue;
+		}
+		for (i = 0; i < master->ncontrols; i++) {
+			if (master->cluster[i]) {
+				struct v4l2_ctrl_ref *r =
+					find_ref(hdl, master->cluster[i]->id);
+				req_to_new(r);
+				master->cluster[i]->is_new = 1;
+				r->req_done = true;
+			}
+		}
+		/*
+		 * For volatile autoclusters that are currently in auto mode
+		 * we need to discover if it will be set to manual mode.
+		 * If so, then we have to copy the current volatile values
+		 * first since those will become the new manual values (which
+		 * may be overwritten by explicit new values from this set
+		 * of controls).
+		 */
+		if (master->is_auto && master->has_volatiles &&
+		    !is_cur_manual(master)) {
+			s32 new_auto_val = *master->p_new.p_s32;
+			/*
+			 * If the new value == the manual value, then copy
+			 * the current volatile values.
+			 */
+			if (new_auto_val == master->manual_mode_value)
+				update_from_auto_cluster(master);
+		}
+		try_or_set_cluster(NULL, master, true, 0);
+		v4l2_ctrl_unlock(master);
+	}
+	media_request_object_put(obj);
+}
+EXPORT_SYMBOL(v4l2_ctrl_request_setup);
 void v4l2_ctrl_notify(struct v4l2_ctrl *ctrl, v4l2_ctrl_notify_fnc notify, void *priv)
 {
 	if (ctrl == NULL)


@@ -444,8 +444,22 @@ static int v4l2_release(struct inode *inode, struct file *filp)
 	struct video_device *vdev = video_devdata(filp);
 	int ret = 0;
-	if (vdev->fops->release)
-		ret = vdev->fops->release(filp);
+	/*
+	 * We need to serialize the release() with queueing new requests.
+	 * The release() may trigger the cancellation of a streaming
+	 * operation, and that should not be mixed with queueing a new
+	 * request at the same time.
+	 */
+	if (vdev->fops->release) {
+		if (v4l2_device_supports_requests(vdev->v4l2_dev)) {
+			mutex_lock(&vdev->v4l2_dev->mdev->req_queue_mutex);
+			ret = vdev->fops->release(filp);
+			mutex_unlock(&vdev->v4l2_dev->mdev->req_queue_mutex);
+		} else {
+			ret = vdev->fops->release(filp);
+		}
+	}
 	if (vdev->dev_debug & V4L2_DEV_DEBUG_FOP)
 		dprintk("%s: release\n",
 			video_device_node_name(vdev));


@@ -178,7 +178,8 @@ int v4l2_device_register_subdev(struct v4l2_device *v4l2_dev,
 	sd->v4l2_dev = v4l2_dev;
 	/* This just returns 0 if either of the two args is NULL */
-	err = v4l2_ctrl_add_handler(v4l2_dev->ctrl_handler, sd->ctrl_handler, NULL);
+	err = v4l2_ctrl_add_handler(v4l2_dev->ctrl_handler, sd->ctrl_handler,
+				    NULL, true);
 	if (err)
 		goto error_module;


@@ -474,13 +474,13 @@ static void v4l_print_buffer(const void *arg, bool write_only)
 	const struct v4l2_plane *plane;
 	int i;
-	pr_cont("%02ld:%02d:%02d.%08ld index=%d, type=%s, flags=0x%08x, field=%s, sequence=%d, memory=%s",
+	pr_cont("%02ld:%02d:%02d.%08ld index=%d, type=%s, request_fd=%d, flags=0x%08x, field=%s, sequence=%d, memory=%s",
 		p->timestamp.tv_sec / 3600,
 		(int)(p->timestamp.tv_sec / 60) % 60,
 		(int)(p->timestamp.tv_sec % 60),
 		(long)p->timestamp.tv_usec,
 		p->index,
-		prt_names(p->type, v4l2_type_names),
+		prt_names(p->type, v4l2_type_names), p->request_fd,
 		p->flags, prt_names(p->field, v4l2_field_names),
 		p->sequence, prt_names(p->memory, v4l2_memory_names));
@@ -590,8 +590,8 @@ static void v4l_print_ext_controls(const void *arg, bool write_only)
 	const struct v4l2_ext_controls *p = arg;
 	int i;
-	pr_cont("which=0x%x, count=%d, error_idx=%d",
-		p->which, p->count, p->error_idx);
+	pr_cont("which=0x%x, count=%d, error_idx=%d, request_fd=%d",
+		p->which, p->count, p->error_idx, p->request_fd);
 	for (i = 0; i < p->count; i++) {
 		if (!p->controls[i].size)
 			pr_cont(", id/val=0x%x/0x%x",
@@ -907,7 +907,7 @@ static int check_ext_ctrls(struct v4l2_ext_controls *c, int allow_priv)
 	__u32 i;
 	/* zero the reserved fields */
-	c->reserved[0] = c->reserved[1] = 0;
+	c->reserved[0] = 0;
 	for (i = 0; i < c->count; i++)
 		c->controls[i].reserved2[0] = 0;
@@ -1309,6 +1309,7 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
 	case V4L2_PIX_FMT_H263:		descr = "H.263"; break;
 	case V4L2_PIX_FMT_MPEG1:	descr = "MPEG-1 ES"; break;
 	case V4L2_PIX_FMT_MPEG2:	descr = "MPEG-2 ES"; break;
+	case V4L2_PIX_FMT_MPEG2_SLICE:	descr = "MPEG-2 Parsed Slice Data"; break;
 	case V4L2_PIX_FMT_MPEG4:	descr = "MPEG-4 part 2 ES"; break;
 	case V4L2_PIX_FMT_XVID:		descr = "Xvid"; break;
 	case V4L2_PIX_FMT_VC1_ANNEX_G:	descr = "VC-1 (SMPTE 412M Annex G)"; break;
@@ -1336,6 +1337,7 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
 	case V4L2_PIX_FMT_SE401:	descr = "GSPCA SE401"; break;
 	case V4L2_PIX_FMT_S5C_UYVY_JPG:	descr = "S5C73MX interleaved UYVY/JPEG"; break;
 	case V4L2_PIX_FMT_MT21C:	descr = "Mediatek Compressed Format"; break;
+	case V4L2_PIX_FMT_SUNXI_TILED_NV12: descr = "Sunxi Tiled NV12 Format"; break;
 	default:
 		WARN(1, "Unknown pixelformat 0x%08x\n", fmt->pixelformat);
 		if (fmt->description[0])
@ -1877,7 +1879,7 @@ static int v4l_reqbufs(const struct v4l2_ioctl_ops *ops,
if (ret)
return ret;
CLEAR_AFTER_FIELD(p, memory);
CLEAR_AFTER_FIELD(p, capabilities);
return ops->vidioc_reqbufs(file, fh, p);
}
@ -1918,7 +1920,7 @@ static int v4l_create_bufs(const struct v4l2_ioctl_ops *ops,
if (ret)
return ret;
CLEAR_AFTER_FIELD(create, format);
CLEAR_AFTER_FIELD(create, capabilities);
v4l_sanitize_format(&create->format);
@ -2109,9 +2111,9 @@ static int v4l_g_ext_ctrls(const struct v4l2_ioctl_ops *ops,
p->error_idx = p->count;
if (vfh && vfh->ctrl_handler)
return v4l2_g_ext_ctrls(vfh->ctrl_handler, p);
return v4l2_g_ext_ctrls(vfh->ctrl_handler, vfd->v4l2_dev->mdev, p);
if (vfd->ctrl_handler)
return v4l2_g_ext_ctrls(vfd->ctrl_handler, p);
return v4l2_g_ext_ctrls(vfd->ctrl_handler, vfd->v4l2_dev->mdev, p);
if (ops->vidioc_g_ext_ctrls == NULL)
return -ENOTTY;
return check_ext_ctrls(p, 0) ? ops->vidioc_g_ext_ctrls(file, fh, p) :
@ -2128,9 +2130,9 @@ static int v4l_s_ext_ctrls(const struct v4l2_ioctl_ops *ops,
p->error_idx = p->count;
if (vfh && vfh->ctrl_handler)
return v4l2_s_ext_ctrls(vfh, vfh->ctrl_handler, p);
return v4l2_s_ext_ctrls(vfh, vfh->ctrl_handler, vfd->v4l2_dev->mdev, p);
if (vfd->ctrl_handler)
return v4l2_s_ext_ctrls(NULL, vfd->ctrl_handler, p);
return v4l2_s_ext_ctrls(NULL, vfd->ctrl_handler, vfd->v4l2_dev->mdev, p);
if (ops->vidioc_s_ext_ctrls == NULL)
return -ENOTTY;
return check_ext_ctrls(p, 0) ? ops->vidioc_s_ext_ctrls(file, fh, p) :
@@ -2147,9 +2149,9 @@ static int v4l_try_ext_ctrls(const struct v4l2_ioctl_ops *ops,
p->error_idx = p->count;
if (vfh && vfh->ctrl_handler)
return v4l2_try_ext_ctrls(vfh->ctrl_handler, p);
return v4l2_try_ext_ctrls(vfh->ctrl_handler, vfd->v4l2_dev->mdev, p);
if (vfd->ctrl_handler)
return v4l2_try_ext_ctrls(vfd->ctrl_handler, p);
return v4l2_try_ext_ctrls(vfd->ctrl_handler, vfd->v4l2_dev->mdev, p);
if (ops->vidioc_try_ext_ctrls == NULL)
return -ENOTTY;
return check_ext_ctrls(p, 0) ? ops->vidioc_try_ext_ctrls(file, fh, p) :
@@ -2780,6 +2782,7 @@ static long __video_do_ioctl(struct file *file,
unsigned int cmd, void *arg)
{
struct video_device *vfd = video_devdata(file);
struct mutex *req_queue_lock = NULL;
struct mutex *lock; /* ioctl serialization mutex */
const struct v4l2_ioctl_ops *ops = vfd->ioctl_ops;
bool write_only = false;
@@ -2799,10 +2802,27 @@ static long __video_do_ioctl(struct file *file,
if (test_bit(V4L2_FL_USES_V4L2_FH, &vfd->flags))
vfh = file->private_data;
/*
* We need to serialize streamon/off with queueing new requests.
* These ioctls may trigger the cancellation of a streaming
* operation, and that should not be mixed with queueing a new
* request at the same time.
*/
if (v4l2_device_supports_requests(vfd->v4l2_dev) &&
(cmd == VIDIOC_STREAMON || cmd == VIDIOC_STREAMOFF)) {
req_queue_lock = &vfd->v4l2_dev->mdev->req_queue_mutex;
if (mutex_lock_interruptible(req_queue_lock))
return -ERESTARTSYS;
}
lock = v4l2_ioctl_get_lock(vfd, vfh, cmd, arg);
if (lock && mutex_lock_interruptible(lock))
if (lock && mutex_lock_interruptible(lock)) {
if (req_queue_lock)
mutex_unlock(req_queue_lock);
return -ERESTARTSYS;
}
if (!video_is_registered(vfd)) {
ret = -ENODEV;
@@ -2861,6 +2881,8 @@ done:
unlock:
if (lock)
mutex_unlock(lock);
if (req_queue_lock)
mutex_unlock(req_queue_lock);
return ret;
}

@@ -387,7 +387,7 @@ static void v4l2_m2m_cancel_job(struct v4l2_m2m_ctx *m2m_ctx)
spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
if (m2m_dev->m2m_ops->job_abort)
m2m_dev->m2m_ops->job_abort(m2m_ctx->priv);
dprintk("m2m_ctx %p running, will wait to complete", m2m_ctx);
dprintk("m2m_ctx %p running, will wait to complete\n", m2m_ctx);
wait_event(m2m_ctx->finished,
!(m2m_ctx->job_flags & TRANS_RUNNING));
} else if (m2m_ctx->job_flags & TRANS_QUEUED) {
@@ -473,12 +473,19 @@ EXPORT_SYMBOL_GPL(v4l2_m2m_querybuf);
int v4l2_m2m_qbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
struct v4l2_buffer *buf)
{
struct video_device *vdev = video_devdata(file);
struct vb2_queue *vq;
int ret;
vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
ret = vb2_qbuf(vq, buf);
if (!ret)
if (!V4L2_TYPE_IS_OUTPUT(vq->type) &&
(buf->flags & V4L2_BUF_FLAG_REQUEST_FD)) {
dprintk("%s: requests cannot be used with capture buffers\n",
__func__);
return -EPERM;
}
ret = vb2_qbuf(vq, vdev->v4l2_dev->mdev, buf);
if (!ret && !(buf->flags & V4L2_BUF_FLAG_IN_REQUEST))
v4l2_m2m_try_schedule(m2m_ctx);
return ret;
@@ -498,15 +505,11 @@ EXPORT_SYMBOL_GPL(v4l2_m2m_dqbuf);
int v4l2_m2m_prepare_buf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
struct v4l2_buffer *buf)
{
struct video_device *vdev = video_devdata(file);
struct vb2_queue *vq;
int ret;
vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
ret = vb2_prepare_buf(vq, buf);
if (!ret)
v4l2_m2m_try_schedule(m2m_ctx);
return ret;
return vb2_prepare_buf(vq, vdev->v4l2_dev->mdev, buf);
}
EXPORT_SYMBOL_GPL(v4l2_m2m_prepare_buf);
@@ -950,6 +953,52 @@ void v4l2_m2m_buf_queue(struct v4l2_m2m_ctx *m2m_ctx,
}
EXPORT_SYMBOL_GPL(v4l2_m2m_buf_queue);
void vb2_m2m_request_queue(struct media_request *req)
{
struct media_request_object *obj, *obj_safe;
struct v4l2_m2m_ctx *m2m_ctx = NULL;
/*
* Queue all objects. Note that buffer objects are at the end of the
* objects list, after all other object types. Once buffer objects
* are queued, the driver might delete them immediately (if the driver
* processes the buffer at once), so we have to use
* list_for_each_entry_safe() to handle the case where the object we
* queue is deleted.
*/
list_for_each_entry_safe(obj, obj_safe, &req->objects, list) {
struct v4l2_m2m_ctx *m2m_ctx_obj;
struct vb2_buffer *vb;
if (!obj->ops->queue)
continue;
if (vb2_request_object_is_buffer(obj)) {
/* Sanity checks */
vb = container_of(obj, struct vb2_buffer, req_obj);
WARN_ON(!V4L2_TYPE_IS_OUTPUT(vb->vb2_queue->type));
m2m_ctx_obj = container_of(vb->vb2_queue,
struct v4l2_m2m_ctx,
out_q_ctx.q);
WARN_ON(m2m_ctx && m2m_ctx_obj != m2m_ctx);
m2m_ctx = m2m_ctx_obj;
}
/*
* The buffer we queue here can in theory be immediately
* unbound, hence the use of list_for_each_entry_safe()
* above and why we call the queue op last.
*/
obj->ops->queue(obj);
}
WARN_ON(!m2m_ctx);
if (m2m_ctx)
v4l2_m2m_try_schedule(m2m_ctx);
}
EXPORT_SYMBOL_GPL(vb2_m2m_request_queue);
/* Videobuf2 ioctl helpers */
int v4l2_m2m_ioctl_reqbufs(struct file *file, void *priv,

@@ -222,17 +222,20 @@ static long subdev_do_ioctl(struct file *file, unsigned int cmd, void *arg)
case VIDIOC_G_EXT_CTRLS:
if (!vfh->ctrl_handler)
return -ENOTTY;
return v4l2_g_ext_ctrls(vfh->ctrl_handler, arg);
return v4l2_g_ext_ctrls(vfh->ctrl_handler,
sd->v4l2_dev->mdev, arg);
case VIDIOC_S_EXT_CTRLS:
if (!vfh->ctrl_handler)
return -ENOTTY;
return v4l2_s_ext_ctrls(vfh, vfh->ctrl_handler, arg);
return v4l2_s_ext_ctrls(vfh, vfh->ctrl_handler,
sd->v4l2_dev->mdev, arg);
case VIDIOC_TRY_EXT_CTRLS:
if (!vfh->ctrl_handler)
return -ENOTTY;
return v4l2_try_ext_ctrls(vfh->ctrl_handler, arg);
return v4l2_try_ext_ctrls(vfh->ctrl_handler,
sd->v4l2_dev->mdev, arg);
case VIDIOC_DQEVENT:
if (!(sd->flags & V4L2_SUBDEV_FL_HAS_EVENTS))

@@ -31,6 +31,8 @@ source "drivers/staging/media/mt9t031/Kconfig"
source "drivers/staging/media/omap4iss/Kconfig"
source "drivers/staging/media/sunxi/Kconfig"
source "drivers/staging/media/tegra-vde/Kconfig"
source "drivers/staging/media/zoran/Kconfig"

@@ -5,5 +5,6 @@ obj-$(CONFIG_SOC_CAMERA_IMX074) += imx074/
obj-$(CONFIG_SOC_CAMERA_MT9T031) += mt9t031/
obj-$(CONFIG_VIDEO_DM365_VPFE) += davinci_vpfe/
obj-$(CONFIG_VIDEO_OMAP4) += omap4iss/
obj-$(CONFIG_VIDEO_SUNXI) += sunxi/
obj-$(CONFIG_TEGRA_VDE) += tegra-vde/
obj-$(CONFIG_VIDEO_ZORAN) += zoran/

@@ -1135,10 +1135,6 @@ static int vpfe_buffer_prepare(struct vb2_buffer *vb)
v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_buffer_prepare\n");
if (vb->state != VB2_BUF_STATE_ACTIVE &&
vb->state != VB2_BUF_STATE_PREPARED)
return 0;
/* Initialize buffer */
vb2_set_plane_payload(vb, 0, video->fmt.fmt.pix.sizeimage);
if (vb2_plane_vaddr(vb, 0) &&
@@ -1429,7 +1425,8 @@ static int vpfe_qbuf(struct file *file, void *priv,
return -EACCES;
}
return vb2_qbuf(&video->buffer_queue, p);
return vb2_qbuf(&video->buffer_queue,
video->video_dev.v4l2_dev->mdev, p);
}
/*

@@ -350,7 +350,7 @@ static int imx_media_inherit_controls(struct imx_media_dev *imxmd,
ret = v4l2_ctrl_add_handler(vfd->ctrl_handler,
sd->ctrl_handler,
NULL);
NULL, true);
if (ret)
return ret;
}

@@ -463,7 +463,7 @@ int imx_media_fim_add_controls(struct imx_media_fim *fim)
{
/* add the FIM controls to the calling subdev ctrl handler */
return v4l2_ctrl_add_handler(fim->sd->ctrl_handler,
&fim->ctrl_handler, NULL);
&fim->ctrl_handler, NULL, false);
}
EXPORT_SYMBOL_GPL(imx_media_fim_add_controls);

@@ -802,9 +802,10 @@ iss_video_querybuf(struct file *file, void *fh, struct v4l2_buffer *b)
static int
iss_video_qbuf(struct file *file, void *fh, struct v4l2_buffer *b)
{
struct iss_video *video = video_drvdata(file);
struct iss_video_fh *vfh = to_iss_video_fh(fh);
return vb2_qbuf(&vfh->queue, b);
return vb2_qbuf(&vfh->queue, video->video.v4l2_dev->mdev, b);
}
static int

@@ -0,0 +1,15 @@
config VIDEO_SUNXI
bool "Allwinner sunXi family Video Devices"
depends on ARCH_SUNXI || COMPILE_TEST
help
If you have an Allwinner SoC based on the sunXi family, say Y.
Note that this option doesn't include new drivers in the
kernel: saying N will just cause Kconfig to skip all the
questions about Allwinner media devices.
if VIDEO_SUNXI
source "drivers/staging/media/sunxi/cedrus/Kconfig"
endif

@@ -0,0 +1 @@
obj-$(CONFIG_VIDEO_SUNXI_CEDRUS) += cedrus/

@@ -0,0 +1,14 @@
config VIDEO_SUNXI_CEDRUS
tristate "Allwinner Cedrus VPU driver"
depends on VIDEO_DEV && VIDEO_V4L2 && MEDIA_CONTROLLER
depends on HAS_DMA
depends on OF
select SUNXI_SRAM
select VIDEOBUF2_DMA_CONTIG
select V4L2_MEM2MEM_DEV
help
Support for the VPU found in Allwinner SoCs, also known as the Cedar
video engine.
To compile this driver as a module, choose M here: the module
will be called sunxi-cedrus.

@@ -0,0 +1,3 @@
obj-$(CONFIG_VIDEO_SUNXI_CEDRUS) += sunxi-cedrus.o
sunxi-cedrus-y = cedrus.o cedrus_video.o cedrus_hw.o cedrus_dec.o cedrus_mpeg2.o

@@ -0,0 +1,7 @@
Before this stateless decoder driver can leave the staging area:
* The Request API needs to be stabilized;
* The codec-specific controls need to be thoroughly reviewed to ensure they
cover all intended use cases;
* Userspace support for the Request API needs to be reviewed;
* Another stateless decoder driver should be submitted;
* At least one stateless encoder driver should be submitted.

@@ -0,0 +1,431 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Cedrus VPU driver
*
* Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
* Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
* Copyright (C) 2018 Bootlin
*
* Based on the vim2m driver, that is:
*
* Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
* Pawel Osciak, <pawel@osciak.com>
* Marek Szyprowski, <m.szyprowski@samsung.com>
*/
#include <linux/platform_device.h>
#include <linux/module.h>
#include <linux/of.h>
#include <media/v4l2-device.h>
#include <media/v4l2-ioctl.h>
#include <media/v4l2-ctrls.h>
#include <media/v4l2-mem2mem.h>
#include "cedrus.h"
#include "cedrus_video.h"
#include "cedrus_dec.h"
#include "cedrus_hw.h"
static const struct cedrus_control cedrus_controls[] = {
{
.id = V4L2_CID_MPEG_VIDEO_MPEG2_SLICE_PARAMS,
.elem_size = sizeof(struct v4l2_ctrl_mpeg2_slice_params),
.codec = CEDRUS_CODEC_MPEG2,
.required = true,
},
{
.id = V4L2_CID_MPEG_VIDEO_MPEG2_QUANTIZATION,
.elem_size = sizeof(struct v4l2_ctrl_mpeg2_quantization),
.codec = CEDRUS_CODEC_MPEG2,
.required = false,
},
};
#define CEDRUS_CONTROLS_COUNT ARRAY_SIZE(cedrus_controls)
void *cedrus_find_control_data(struct cedrus_ctx *ctx, u32 id)
{
unsigned int i;
for (i = 0; ctx->ctrls[i]; i++)
if (ctx->ctrls[i]->id == id)
return ctx->ctrls[i]->p_cur.p;
return NULL;
}
static int cedrus_init_ctrls(struct cedrus_dev *dev, struct cedrus_ctx *ctx)
{
struct v4l2_ctrl_handler *hdl = &ctx->hdl;
struct v4l2_ctrl *ctrl;
unsigned int ctrl_size;
unsigned int i;
v4l2_ctrl_handler_init(hdl, CEDRUS_CONTROLS_COUNT);
if (hdl->error) {
v4l2_err(&dev->v4l2_dev,
"Failed to initialize control handler\n");
return hdl->error;
}
ctrl_size = sizeof(ctrl) * (CEDRUS_CONTROLS_COUNT + 1);
ctx->ctrls = kzalloc(ctrl_size, GFP_KERNEL);
if (!ctx->ctrls) {
v4l2_ctrl_handler_free(hdl);
return -ENOMEM;
}
for (i = 0; i < CEDRUS_CONTROLS_COUNT; i++) {
struct v4l2_ctrl_config cfg = { 0 };
cfg.elem_size = cedrus_controls[i].elem_size;
cfg.id = cedrus_controls[i].id;
ctrl = v4l2_ctrl_new_custom(hdl, &cfg, NULL);
if (hdl->error) {
v4l2_err(&dev->v4l2_dev,
"Failed to create new custom control\n");
v4l2_ctrl_handler_free(hdl);
kfree(ctx->ctrls);
return hdl->error;
}
ctx->ctrls[i] = ctrl;
}
ctx->fh.ctrl_handler = hdl;
v4l2_ctrl_handler_setup(hdl);
return 0;
}
static int cedrus_request_validate(struct media_request *req)
{
struct media_request_object *obj;
struct v4l2_ctrl_handler *parent_hdl, *hdl;
struct cedrus_ctx *ctx = NULL;
struct v4l2_ctrl *ctrl_test;
unsigned int count;
unsigned int i;
list_for_each_entry(obj, &req->objects, list) {
struct vb2_buffer *vb;
if (vb2_request_object_is_buffer(obj)) {
vb = container_of(obj, struct vb2_buffer, req_obj);
ctx = vb2_get_drv_priv(vb->vb2_queue);
break;
}
}
if (!ctx)
return -ENOENT;
count = vb2_request_buffer_cnt(req);
if (!count) {
v4l2_info(&ctx->dev->v4l2_dev,
"No buffer was provided with the request\n");
return -ENOENT;
} else if (count > 1) {
v4l2_info(&ctx->dev->v4l2_dev,
"More than one buffer was provided with the request\n");
return -EINVAL;
}
parent_hdl = &ctx->hdl;
hdl = v4l2_ctrl_request_hdl_find(req, parent_hdl);
if (!hdl) {
v4l2_info(&ctx->dev->v4l2_dev, "Missing codec control(s)\n");
return -ENOENT;
}
for (i = 0; i < CEDRUS_CONTROLS_COUNT; i++) {
if (cedrus_controls[i].codec != ctx->current_codec ||
!cedrus_controls[i].required)
continue;
ctrl_test = v4l2_ctrl_request_hdl_ctrl_find(hdl,
cedrus_controls[i].id);
if (!ctrl_test) {
v4l2_info(&ctx->dev->v4l2_dev,
"Missing required codec control\n");
return -ENOENT;
}
}
v4l2_ctrl_request_hdl_put(hdl);
return vb2_request_validate(req);
}
static int cedrus_open(struct file *file)
{
struct cedrus_dev *dev = video_drvdata(file);
struct cedrus_ctx *ctx = NULL;
int ret;
if (mutex_lock_interruptible(&dev->dev_mutex))
return -ERESTARTSYS;
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
if (!ctx) {
mutex_unlock(&dev->dev_mutex);
return -ENOMEM;
}
v4l2_fh_init(&ctx->fh, video_devdata(file));
file->private_data = &ctx->fh;
ctx->dev = dev;
ret = cedrus_init_ctrls(dev, ctx);
if (ret)
goto err_free;
ctx->fh.m2m_ctx = v4l2_m2m_ctx_init(dev->m2m_dev, ctx,
&cedrus_queue_init);
if (IS_ERR(ctx->fh.m2m_ctx)) {
ret = PTR_ERR(ctx->fh.m2m_ctx);
goto err_ctrls;
}
v4l2_fh_add(&ctx->fh);
mutex_unlock(&dev->dev_mutex);
return 0;
err_ctrls:
v4l2_ctrl_handler_free(&ctx->hdl);
err_free:
kfree(ctx);
mutex_unlock(&dev->dev_mutex);
return ret;
}
static int cedrus_release(struct file *file)
{
struct cedrus_dev *dev = video_drvdata(file);
struct cedrus_ctx *ctx = container_of(file->private_data,
struct cedrus_ctx, fh);
mutex_lock(&dev->dev_mutex);
v4l2_fh_del(&ctx->fh);
v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
v4l2_ctrl_handler_free(&ctx->hdl);
kfree(ctx->ctrls);
v4l2_fh_exit(&ctx->fh);
kfree(ctx);
mutex_unlock(&dev->dev_mutex);
return 0;
}
static const struct v4l2_file_operations cedrus_fops = {
.owner = THIS_MODULE,
.open = cedrus_open,
.release = cedrus_release,
.poll = v4l2_m2m_fop_poll,
.unlocked_ioctl = video_ioctl2,
.mmap = v4l2_m2m_fop_mmap,
};
static const struct video_device cedrus_video_device = {
.name = CEDRUS_NAME,
.vfl_dir = VFL_DIR_M2M,
.fops = &cedrus_fops,
.ioctl_ops = &cedrus_ioctl_ops,
.minor = -1,
.release = video_device_release_empty,
.device_caps = V4L2_CAP_VIDEO_M2M | V4L2_CAP_STREAMING,
};
static const struct v4l2_m2m_ops cedrus_m2m_ops = {
.device_run = cedrus_device_run,
};
static const struct media_device_ops cedrus_m2m_media_ops = {
.req_validate = cedrus_request_validate,
.req_queue = vb2_m2m_request_queue,
};
static int cedrus_probe(struct platform_device *pdev)
{
struct cedrus_dev *dev;
struct video_device *vfd;
int ret;
dev = devm_kzalloc(&pdev->dev, sizeof(*dev), GFP_KERNEL);
if (!dev)
return -ENOMEM;
dev->vfd = cedrus_video_device;
dev->dev = &pdev->dev;
dev->pdev = pdev;
ret = cedrus_hw_probe(dev);
if (ret) {
dev_err(&pdev->dev, "Failed to probe hardware\n");
return ret;
}
dev->dec_ops[CEDRUS_CODEC_MPEG2] = &cedrus_dec_ops_mpeg2;
mutex_init(&dev->dev_mutex);
spin_lock_init(&dev->irq_lock);
ret = v4l2_device_register(&pdev->dev, &dev->v4l2_dev);
if (ret) {
dev_err(&pdev->dev, "Failed to register V4L2 device\n");
return ret;
}
vfd = &dev->vfd;
vfd->lock = &dev->dev_mutex;
vfd->v4l2_dev = &dev->v4l2_dev;
snprintf(vfd->name, sizeof(vfd->name), "%s", cedrus_video_device.name);
video_set_drvdata(vfd, dev);
dev->m2m_dev = v4l2_m2m_init(&cedrus_m2m_ops);
if (IS_ERR(dev->m2m_dev)) {
v4l2_err(&dev->v4l2_dev,
"Failed to initialize V4L2 M2M device\n");
ret = PTR_ERR(dev->m2m_dev);
goto err_video;
}
dev->mdev.dev = &pdev->dev;
strscpy(dev->mdev.model, CEDRUS_NAME, sizeof(dev->mdev.model));
media_device_init(&dev->mdev);
dev->mdev.ops = &cedrus_m2m_media_ops;
dev->v4l2_dev.mdev = &dev->mdev;
ret = v4l2_m2m_register_media_controller(dev->m2m_dev, vfd,
MEDIA_ENT_F_PROC_VIDEO_DECODER);
if (ret) {
v4l2_err(&dev->v4l2_dev,
"Failed to initialize V4L2 M2M media controller\n");
goto err_m2m;
}
ret = video_register_device(vfd, VFL_TYPE_GRABBER, 0);
if (ret) {
v4l2_err(&dev->v4l2_dev, "Failed to register video device\n");
goto err_v4l2;
}
v4l2_info(&dev->v4l2_dev,
"Device registered as /dev/video%d\n", vfd->num);
ret = media_device_register(&dev->mdev);
if (ret) {
v4l2_err(&dev->v4l2_dev, "Failed to register media device\n");
goto err_m2m_mc;
}
platform_set_drvdata(pdev, dev);
return 0;
err_m2m_mc:
v4l2_m2m_unregister_media_controller(dev->m2m_dev);
err_m2m:
v4l2_m2m_release(dev->m2m_dev);
err_video:
video_unregister_device(&dev->vfd);
err_v4l2:
v4l2_device_unregister(&dev->v4l2_dev);
return ret;
}
static int cedrus_remove(struct platform_device *pdev)
{
struct cedrus_dev *dev = platform_get_drvdata(pdev);
if (media_devnode_is_registered(dev->mdev.devnode)) {
media_device_unregister(&dev->mdev);
v4l2_m2m_unregister_media_controller(dev->m2m_dev);
media_device_cleanup(&dev->mdev);
}
v4l2_m2m_release(dev->m2m_dev);
video_unregister_device(&dev->vfd);
v4l2_device_unregister(&dev->v4l2_dev);
cedrus_hw_remove(dev);
return 0;
}
static const struct cedrus_variant sun4i_a10_cedrus_variant = {
/* No particular capability. */
};
static const struct cedrus_variant sun5i_a13_cedrus_variant = {
/* No particular capability. */
};
static const struct cedrus_variant sun7i_a20_cedrus_variant = {
/* No particular capability. */
};
static const struct cedrus_variant sun8i_a33_cedrus_variant = {
.capabilities = CEDRUS_CAPABILITY_UNTILED,
};
static const struct cedrus_variant sun8i_h3_cedrus_variant = {
.capabilities = CEDRUS_CAPABILITY_UNTILED,
};
static const struct of_device_id cedrus_dt_match[] = {
{
.compatible = "allwinner,sun4i-a10-video-engine",
.data = &sun4i_a10_cedrus_variant,
},
{
.compatible = "allwinner,sun5i-a13-video-engine",
.data = &sun5i_a13_cedrus_variant,
},
{
.compatible = "allwinner,sun7i-a20-video-engine",
.data = &sun7i_a20_cedrus_variant,
},
{
.compatible = "allwinner,sun8i-a33-video-engine",
.data = &sun8i_a33_cedrus_variant,
},
{
.compatible = "allwinner,sun8i-h3-video-engine",
.data = &sun8i_h3_cedrus_variant,
},
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, cedrus_dt_match);
static struct platform_driver cedrus_driver = {
.probe = cedrus_probe,
.remove = cedrus_remove,
.driver = {
.name = CEDRUS_NAME,
.owner = THIS_MODULE,
.of_match_table = of_match_ptr(cedrus_dt_match),
},
};
module_platform_driver(cedrus_driver);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Florent Revest <florent.revest@free-electrons.com>");
MODULE_AUTHOR("Paul Kocialkowski <paul.kocialkowski@bootlin.com>");
MODULE_AUTHOR("Maxime Ripard <maxime.ripard@bootlin.com>");
MODULE_DESCRIPTION("Cedrus VPU driver");

@@ -0,0 +1,167 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Cedrus VPU driver
*
* Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
* Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
* Copyright (C) 2018 Bootlin
*
* Based on the vim2m driver, that is:
*
* Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
* Pawel Osciak, <pawel@osciak.com>
* Marek Szyprowski, <m.szyprowski@samsung.com>
*/
#ifndef _CEDRUS_H_
#define _CEDRUS_H_
#include <media/v4l2-ctrls.h>
#include <media/v4l2-device.h>
#include <media/v4l2-mem2mem.h>
#include <media/videobuf2-v4l2.h>
#include <media/videobuf2-dma-contig.h>
#include <linux/platform_device.h>
#define CEDRUS_NAME "cedrus"
#define CEDRUS_CAPABILITY_UNTILED BIT(0)
enum cedrus_codec {
CEDRUS_CODEC_MPEG2,
CEDRUS_CODEC_LAST,
};
enum cedrus_irq_status {
CEDRUS_IRQ_NONE,
CEDRUS_IRQ_ERROR,
CEDRUS_IRQ_OK,
};
struct cedrus_control {
u32 id;
u32 elem_size;
enum cedrus_codec codec;
unsigned char required:1;
};
struct cedrus_mpeg2_run {
const struct v4l2_ctrl_mpeg2_slice_params *slice_params;
const struct v4l2_ctrl_mpeg2_quantization *quantization;
};
struct cedrus_run {
struct vb2_v4l2_buffer *src;
struct vb2_v4l2_buffer *dst;
union {
struct cedrus_mpeg2_run mpeg2;
};
};
struct cedrus_buffer {
struct v4l2_m2m_buffer m2m_buf;
};
struct cedrus_ctx {
struct v4l2_fh fh;
struct cedrus_dev *dev;
struct v4l2_pix_format src_fmt;
struct v4l2_pix_format dst_fmt;
enum cedrus_codec current_codec;
struct v4l2_ctrl_handler hdl;
struct v4l2_ctrl **ctrls;
struct vb2_buffer *dst_bufs[VIDEO_MAX_FRAME];
};
struct cedrus_dec_ops {
void (*irq_clear)(struct cedrus_ctx *ctx);
void (*irq_disable)(struct cedrus_ctx *ctx);
enum cedrus_irq_status (*irq_status)(struct cedrus_ctx *ctx);
void (*setup)(struct cedrus_ctx *ctx, struct cedrus_run *run);
int (*start)(struct cedrus_ctx *ctx);
void (*stop)(struct cedrus_ctx *ctx);
void (*trigger)(struct cedrus_ctx *ctx);
};
struct cedrus_variant {
unsigned int capabilities;
};
struct cedrus_dev {
struct v4l2_device v4l2_dev;
struct video_device vfd;
struct media_device mdev;
struct media_pad pad[2];
struct platform_device *pdev;
struct device *dev;
struct v4l2_m2m_dev *m2m_dev;
struct cedrus_dec_ops *dec_ops[CEDRUS_CODEC_LAST];
/* Device file mutex */
struct mutex dev_mutex;
/* Interrupt spinlock */
spinlock_t irq_lock;
void __iomem *base;
struct clk *mod_clk;
struct clk *ahb_clk;
struct clk *ram_clk;
struct reset_control *rstc;
unsigned int capabilities;
};
extern struct cedrus_dec_ops cedrus_dec_ops_mpeg2;
static inline void cedrus_write(struct cedrus_dev *dev, u32 reg, u32 val)
{
writel(val, dev->base + reg);
}
static inline u32 cedrus_read(struct cedrus_dev *dev, u32 reg)
{
return readl(dev->base + reg);
}
static inline dma_addr_t cedrus_buf_addr(struct vb2_buffer *buf,
struct v4l2_pix_format *pix_fmt,
unsigned int plane)
{
dma_addr_t addr = vb2_dma_contig_plane_dma_addr(buf, 0);
return addr + (pix_fmt ? (dma_addr_t)pix_fmt->bytesperline *
pix_fmt->height * plane : 0);
}
static inline dma_addr_t cedrus_dst_buf_addr(struct cedrus_ctx *ctx,
unsigned int index,
unsigned int plane)
{
struct vb2_buffer *buf = ctx->dst_bufs[index];
return buf ? cedrus_buf_addr(buf, &ctx->dst_fmt, plane) : 0;
}
static inline struct cedrus_buffer *
vb2_v4l2_to_cedrus_buffer(const struct vb2_v4l2_buffer *p)
{
return container_of(p, struct cedrus_buffer, m2m_buf.vb);
}
static inline struct cedrus_buffer *
vb2_to_cedrus_buffer(const struct vb2_buffer *p)
{
return vb2_v4l2_to_cedrus_buffer(to_vb2_v4l2_buffer(p));
}
void *cedrus_find_control_data(struct cedrus_ctx *ctx, u32 id);
#endif

@@ -0,0 +1,70 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Cedrus VPU driver
*
* Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
* Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
* Copyright (C) 2018 Bootlin
*
* Based on the vim2m driver, that is:
*
* Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
* Pawel Osciak, <pawel@osciak.com>
* Marek Szyprowski, <m.szyprowski@samsung.com>
*/
#include <media/v4l2-device.h>
#include <media/v4l2-ioctl.h>
#include <media/v4l2-event.h>
#include <media/v4l2-mem2mem.h>
#include "cedrus.h"
#include "cedrus_dec.h"
#include "cedrus_hw.h"
void cedrus_device_run(void *priv)
{
struct cedrus_ctx *ctx = priv;
struct cedrus_dev *dev = ctx->dev;
struct cedrus_run run = { 0 };
struct media_request *src_req;
unsigned long flags;
run.src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
run.dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
/* Apply request(s) controls if needed. */
src_req = run.src->vb2_buf.req_obj.req;
if (src_req)
v4l2_ctrl_request_setup(src_req, &ctx->hdl);
spin_lock_irqsave(&ctx->dev->irq_lock, flags);
switch (ctx->src_fmt.pixelformat) {
case V4L2_PIX_FMT_MPEG2_SLICE:
run.mpeg2.slice_params = cedrus_find_control_data(ctx,
V4L2_CID_MPEG_VIDEO_MPEG2_SLICE_PARAMS);
run.mpeg2.quantization = cedrus_find_control_data(ctx,
V4L2_CID_MPEG_VIDEO_MPEG2_QUANTIZATION);
break;
default:
break;
}
dev->dec_ops[ctx->current_codec]->setup(ctx, &run);
spin_unlock_irqrestore(&ctx->dev->irq_lock, flags);
/* Complete request(s) controls if needed. */
if (src_req)
v4l2_ctrl_request_complete(src_req, &ctx->hdl);
spin_lock_irqsave(&ctx->dev->irq_lock, flags);
dev->dec_ops[ctx->current_codec]->trigger(ctx);
spin_unlock_irqrestore(&ctx->dev->irq_lock, flags);
}

@@ -0,0 +1,27 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Cedrus VPU driver
*
* Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
* Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
* Copyright (C) 2018 Bootlin
*
* Based on the vim2m driver, that is:
*
* Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
* Pawel Osciak, <pawel@osciak.com>
* Marek Szyprowski, <m.szyprowski@samsung.com>
*/
#ifndef _CEDRUS_DEC_H_
#define _CEDRUS_DEC_H_
extern const struct v4l2_ioctl_ops cedrus_ioctl_ops;
void cedrus_device_work(struct work_struct *work);
void cedrus_device_run(void *priv);
int cedrus_queue_init(void *priv, struct vb2_queue *src_vq,
struct vb2_queue *dst_vq);
#endif

@@ -0,0 +1,327 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Cedrus VPU driver
*
* Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
* Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
* Copyright (C) 2018 Bootlin
*
* Based on the vim2m driver, that is:
*
* Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
* Pawel Osciak, <pawel@osciak.com>
* Marek Szyprowski, <m.szyprowski@samsung.com>
*/
#include <linux/platform_device.h>
#include <linux/of_reserved_mem.h>
#include <linux/of_device.h>
#include <linux/dma-mapping.h>
#include <linux/interrupt.h>
#include <linux/clk.h>
#include <linux/regmap.h>
#include <linux/reset.h>
#include <linux/soc/sunxi/sunxi_sram.h>
#include <media/videobuf2-core.h>
#include <media/v4l2-mem2mem.h>
#include "cedrus.h"
#include "cedrus_hw.h"
#include "cedrus_regs.h"
int cedrus_engine_enable(struct cedrus_dev *dev, enum cedrus_codec codec)
{
u32 reg = 0;
/*
* FIXME: This is only valid on 32-bit DDRs; we should test
* it on the A13/A33.
*/
reg |= VE_MODE_REC_WR_MODE_2MB;
reg |= VE_MODE_DDR_MODE_BW_128;
switch (codec) {
case CEDRUS_CODEC_MPEG2:
reg |= VE_MODE_DEC_MPEG;
break;
default:
return -EINVAL;
}
cedrus_write(dev, VE_MODE, reg);
return 0;
}
void cedrus_engine_disable(struct cedrus_dev *dev)
{
cedrus_write(dev, VE_MODE, VE_MODE_DISABLED);
}
void cedrus_dst_format_set(struct cedrus_dev *dev,
struct v4l2_pix_format *fmt)
{
unsigned int width = fmt->width;
unsigned int height = fmt->height;
u32 chroma_size;
u32 reg;
switch (fmt->pixelformat) {
case V4L2_PIX_FMT_NV12:
chroma_size = ALIGN(width, 16) * ALIGN(height, 16) / 2;
reg = VE_PRIMARY_OUT_FMT_NV12;
cedrus_write(dev, VE_PRIMARY_OUT_FMT, reg);
reg = VE_CHROMA_BUF_LEN_SDRT(chroma_size / 2);
cedrus_write(dev, VE_CHROMA_BUF_LEN, reg);
reg = chroma_size / 2;
cedrus_write(dev, VE_PRIMARY_CHROMA_BUF_LEN, reg);
reg = VE_PRIMARY_FB_LINE_STRIDE_LUMA(ALIGN(width, 16)) |
VE_PRIMARY_FB_LINE_STRIDE_CHROMA(ALIGN(width, 16) / 2);
cedrus_write(dev, VE_PRIMARY_FB_LINE_STRIDE, reg);
break;
case V4L2_PIX_FMT_SUNXI_TILED_NV12:
default:
reg = VE_PRIMARY_OUT_FMT_TILED_32_NV12;
cedrus_write(dev, VE_PRIMARY_OUT_FMT, reg);
reg = VE_SECONDARY_OUT_FMT_TILED_32_NV12;
cedrus_write(dev, VE_CHROMA_BUF_LEN, reg);
break;
}
}
static irqreturn_t cedrus_bh(int irq, void *data)
{
struct cedrus_dev *dev = data;
struct cedrus_ctx *ctx;
ctx = v4l2_m2m_get_curr_priv(dev->m2m_dev);
if (!ctx) {
v4l2_err(&dev->v4l2_dev,
"Instance released before the end of transaction\n");
return IRQ_HANDLED;
}
v4l2_m2m_job_finish(ctx->dev->m2m_dev, ctx->fh.m2m_ctx);
return IRQ_HANDLED;
}
static irqreturn_t cedrus_irq(int irq, void *data)
{
struct cedrus_dev *dev = data;
struct cedrus_ctx *ctx;
struct vb2_v4l2_buffer *src_buf, *dst_buf;
enum vb2_buffer_state state;
enum cedrus_irq_status status;
unsigned long flags;
spin_lock_irqsave(&dev->irq_lock, flags);
ctx = v4l2_m2m_get_curr_priv(dev->m2m_dev);
if (!ctx) {
v4l2_err(&dev->v4l2_dev,
"Instance released before the end of transaction\n");
spin_unlock_irqrestore(&dev->irq_lock, flags);
return IRQ_NONE;
}
status = dev->dec_ops[ctx->current_codec]->irq_status(ctx);
if (status == CEDRUS_IRQ_NONE) {
spin_unlock_irqrestore(&dev->irq_lock, flags);
return IRQ_NONE;
}
dev->dec_ops[ctx->current_codec]->irq_disable(ctx);
dev->dec_ops[ctx->current_codec]->irq_clear(ctx);
src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
dst_buf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
if (!src_buf || !dst_buf) {
v4l2_err(&dev->v4l2_dev,
"Missing source and/or destination buffers\n");
spin_unlock_irqrestore(&dev->irq_lock, flags);
return IRQ_HANDLED;
}
if (status == CEDRUS_IRQ_ERROR)
state = VB2_BUF_STATE_ERROR;
else
state = VB2_BUF_STATE_DONE;
v4l2_m2m_buf_done(src_buf, state);
v4l2_m2m_buf_done(dst_buf, state);
spin_unlock_irqrestore(&dev->irq_lock, flags);
return IRQ_WAKE_THREAD;
}
int cedrus_hw_probe(struct cedrus_dev *dev)
{
const struct cedrus_variant *variant;
struct resource *res;
int irq_dec;
int ret;
variant = of_device_get_match_data(dev->dev);
if (!variant)
return -EINVAL;
dev->capabilities = variant->capabilities;
irq_dec = platform_get_irq(dev->pdev, 0);
if (irq_dec <= 0) {
v4l2_err(&dev->v4l2_dev, "Failed to get IRQ\n");
return irq_dec;
}
ret = devm_request_threaded_irq(dev->dev, irq_dec, cedrus_irq,
cedrus_bh, 0, dev_name(dev->dev),
dev);
if (ret) {
v4l2_err(&dev->v4l2_dev, "Failed to request IRQ\n");
return ret;
}
/*
* The VPU is only able to handle bus addresses, so we have to subtract
* the RAM offset from the physical addresses.
*
* This information will eventually be obtained from device-tree.
*/
#ifdef PHYS_PFN_OFFSET
dev->dev->dma_pfn_offset = PHYS_PFN_OFFSET;
#endif
ret = of_reserved_mem_device_init(dev->dev);
if (ret && ret != -ENODEV) {
v4l2_err(&dev->v4l2_dev, "Failed to reserve memory\n");
return ret;
}
ret = sunxi_sram_claim(dev->dev);
if (ret) {
v4l2_err(&dev->v4l2_dev, "Failed to claim SRAM\n");
goto err_mem;
}
dev->ahb_clk = devm_clk_get(dev->dev, "ahb");
if (IS_ERR(dev->ahb_clk)) {
v4l2_err(&dev->v4l2_dev, "Failed to get AHB clock\n");
ret = PTR_ERR(dev->ahb_clk);
goto err_sram;
}
dev->mod_clk = devm_clk_get(dev->dev, "mod");
if (IS_ERR(dev->mod_clk)) {
v4l2_err(&dev->v4l2_dev, "Failed to get MOD clock\n");
ret = PTR_ERR(dev->mod_clk);
goto err_sram;
}
dev->ram_clk = devm_clk_get(dev->dev, "ram");
if (IS_ERR(dev->ram_clk)) {
v4l2_err(&dev->v4l2_dev, "Failed to get RAM clock\n");
ret = PTR_ERR(dev->ram_clk);
goto err_sram;
}
dev->rstc = devm_reset_control_get(dev->dev, NULL);
if (IS_ERR(dev->rstc)) {
v4l2_err(&dev->v4l2_dev, "Failed to get reset control\n");
ret = PTR_ERR(dev->rstc);
goto err_sram;
}
res = platform_get_resource(dev->pdev, IORESOURCE_MEM, 0);
dev->base = devm_ioremap_resource(dev->dev, res);
if (IS_ERR(dev->base)) {
v4l2_err(&dev->v4l2_dev, "Failed to map registers\n");
ret = PTR_ERR(dev->base);
goto err_sram;
}
ret = clk_set_rate(dev->mod_clk, CEDRUS_CLOCK_RATE_DEFAULT);
if (ret) {
v4l2_err(&dev->v4l2_dev, "Failed to set clock rate\n");
goto err_sram;
}
ret = clk_prepare_enable(dev->ahb_clk);
if (ret) {
v4l2_err(&dev->v4l2_dev, "Failed to enable AHB clock\n");
goto err_sram;
}
ret = clk_prepare_enable(dev->mod_clk);
if (ret) {
v4l2_err(&dev->v4l2_dev, "Failed to enable MOD clock\n");
goto err_ahb_clk;
}
ret = clk_prepare_enable(dev->ram_clk);
if (ret) {
v4l2_err(&dev->v4l2_dev, "Failed to enable RAM clock\n");
goto err_mod_clk;
}
ret = reset_control_reset(dev->rstc);
if (ret) {
v4l2_err(&dev->v4l2_dev, "Failed to apply reset\n");
goto err_ram_clk;
}
return 0;
err_ram_clk:
clk_disable_unprepare(dev->ram_clk);
err_mod_clk:
clk_disable_unprepare(dev->mod_clk);
err_ahb_clk:
clk_disable_unprepare(dev->ahb_clk);
err_sram:
sunxi_sram_release(dev->dev);
err_mem:
of_reserved_mem_device_release(dev->dev);
return ret;
}
void cedrus_hw_remove(struct cedrus_dev *dev)
{
reset_control_assert(dev->rstc);
clk_disable_unprepare(dev->ram_clk);
clk_disable_unprepare(dev->mod_clk);
clk_disable_unprepare(dev->ahb_clk);
sunxi_sram_release(dev->dev);
of_reserved_mem_device_release(dev->dev);
}

@@ -0,0 +1,30 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Cedrus VPU driver
*
* Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
* Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
* Copyright (C) 2018 Bootlin
*
* Based on the vim2m driver, that is:
*
* Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
* Pawel Osciak, <pawel@osciak.com>
* Marek Szyprowski, <m.szyprowski@samsung.com>
*/
#ifndef _CEDRUS_HW_H_
#define _CEDRUS_HW_H_
#define CEDRUS_CLOCK_RATE_DEFAULT 320000000
int cedrus_engine_enable(struct cedrus_dev *dev, enum cedrus_codec codec);
void cedrus_engine_disable(struct cedrus_dev *dev);
void cedrus_dst_format_set(struct cedrus_dev *dev,
struct v4l2_pix_format *fmt);
int cedrus_hw_probe(struct cedrus_dev *dev);
void cedrus_hw_remove(struct cedrus_dev *dev);
#endif

@@ -0,0 +1,246 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Cedrus VPU driver
*
* Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
* Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
* Copyright (C) 2018 Bootlin
*/
#include <media/videobuf2-dma-contig.h>
#include "cedrus.h"
#include "cedrus_hw.h"
#include "cedrus_regs.h"
/* Default MPEG-2 quantization coefficients, from the specification. */
static const u8 intra_quantization_matrix_default[64] = {
8, 16, 16, 19, 16, 19, 22, 22,
22, 22, 22, 22, 26, 24, 26, 27,
27, 27, 26, 26, 26, 26, 27, 27,
27, 29, 29, 29, 34, 34, 34, 29,
29, 29, 27, 27, 29, 29, 32, 32,
34, 34, 37, 38, 37, 35, 35, 34,
35, 38, 38, 40, 40, 40, 48, 48,
46, 46, 56, 56, 58, 69, 69, 83
};
static const u8 non_intra_quantization_matrix_default[64] = {
16, 16, 16, 16, 16, 16, 16, 16,
16, 16, 16, 16, 16, 16, 16, 16,
16, 16, 16, 16, 16, 16, 16, 16,
16, 16, 16, 16, 16, 16, 16, 16,
16, 16, 16, 16, 16, 16, 16, 16,
16, 16, 16, 16, 16, 16, 16, 16,
16, 16, 16, 16, 16, 16, 16, 16,
16, 16, 16, 16, 16, 16, 16, 16
};
static enum cedrus_irq_status cedrus_mpeg2_irq_status(struct cedrus_ctx *ctx)
{
struct cedrus_dev *dev = ctx->dev;
u32 reg;
reg = cedrus_read(dev, VE_DEC_MPEG_STATUS);
reg &= VE_DEC_MPEG_STATUS_CHECK_MASK;
if (!reg)
return CEDRUS_IRQ_NONE;
if (reg & VE_DEC_MPEG_STATUS_CHECK_ERROR ||
!(reg & VE_DEC_MPEG_STATUS_SUCCESS))
return CEDRUS_IRQ_ERROR;
return CEDRUS_IRQ_OK;
}
static void cedrus_mpeg2_irq_clear(struct cedrus_ctx *ctx)
{
struct cedrus_dev *dev = ctx->dev;
cedrus_write(dev, VE_DEC_MPEG_STATUS, VE_DEC_MPEG_STATUS_CHECK_MASK);
}
static void cedrus_mpeg2_irq_disable(struct cedrus_ctx *ctx)
{
struct cedrus_dev *dev = ctx->dev;
u32 reg = cedrus_read(dev, VE_DEC_MPEG_CTRL);
reg &= ~VE_DEC_MPEG_CTRL_IRQ_MASK;
cedrus_write(dev, VE_DEC_MPEG_CTRL, reg);
}
static void cedrus_mpeg2_setup(struct cedrus_ctx *ctx, struct cedrus_run *run)
{
const struct v4l2_ctrl_mpeg2_slice_params *slice_params;
const struct v4l2_mpeg2_sequence *sequence;
const struct v4l2_mpeg2_picture *picture;
const struct v4l2_ctrl_mpeg2_quantization *quantization;
dma_addr_t src_buf_addr, dst_luma_addr, dst_chroma_addr;
dma_addr_t fwd_luma_addr, fwd_chroma_addr;
dma_addr_t bwd_luma_addr, bwd_chroma_addr;
struct cedrus_dev *dev = ctx->dev;
const u8 *matrix;
unsigned int i;
u32 reg;
slice_params = run->mpeg2.slice_params;
sequence = &slice_params->sequence;
picture = &slice_params->picture;
quantization = run->mpeg2.quantization;
/* Activate MPEG engine. */
cedrus_engine_enable(dev, CEDRUS_CODEC_MPEG2);
/* Set intra quantization matrix. */
if (quantization && quantization->load_intra_quantiser_matrix)
matrix = quantization->intra_quantiser_matrix;
else
matrix = intra_quantization_matrix_default;
for (i = 0; i < 64; i++) {
reg = VE_DEC_MPEG_IQMINPUT_WEIGHT(i, matrix[i]);
reg |= VE_DEC_MPEG_IQMINPUT_FLAG_INTRA;
cedrus_write(dev, VE_DEC_MPEG_IQMINPUT, reg);
}
/* Set non-intra quantization matrix. */
if (quantization && quantization->load_non_intra_quantiser_matrix)
matrix = quantization->non_intra_quantiser_matrix;
else
matrix = non_intra_quantization_matrix_default;
for (i = 0; i < 64; i++) {
reg = VE_DEC_MPEG_IQMINPUT_WEIGHT(i, matrix[i]);
reg |= VE_DEC_MPEG_IQMINPUT_FLAG_NON_INTRA;
cedrus_write(dev, VE_DEC_MPEG_IQMINPUT, reg);
}
/* Set MPEG picture header. */
reg = VE_DEC_MPEG_MP12HDR_SLICE_TYPE(picture->picture_coding_type);
reg |= VE_DEC_MPEG_MP12HDR_F_CODE(0, 0, picture->f_code[0][0]);
reg |= VE_DEC_MPEG_MP12HDR_F_CODE(0, 1, picture->f_code[0][1]);
reg |= VE_DEC_MPEG_MP12HDR_F_CODE(1, 0, picture->f_code[1][0]);
reg |= VE_DEC_MPEG_MP12HDR_F_CODE(1, 1, picture->f_code[1][1]);
reg |= VE_DEC_MPEG_MP12HDR_INTRA_DC_PRECISION(picture->intra_dc_precision);
reg |= VE_DEC_MPEG_MP12HDR_INTRA_PICTURE_STRUCTURE(picture->picture_structure);
reg |= VE_DEC_MPEG_MP12HDR_TOP_FIELD_FIRST(picture->top_field_first);
reg |= VE_DEC_MPEG_MP12HDR_FRAME_PRED_FRAME_DCT(picture->frame_pred_frame_dct);
reg |= VE_DEC_MPEG_MP12HDR_CONCEALMENT_MOTION_VECTORS(picture->concealment_motion_vectors);
reg |= VE_DEC_MPEG_MP12HDR_Q_SCALE_TYPE(picture->q_scale_type);
reg |= VE_DEC_MPEG_MP12HDR_INTRA_VLC_FORMAT(picture->intra_vlc_format);
reg |= VE_DEC_MPEG_MP12HDR_ALTERNATE_SCAN(picture->alternate_scan);
reg |= VE_DEC_MPEG_MP12HDR_FULL_PEL_FORWARD_VECTOR(0);
reg |= VE_DEC_MPEG_MP12HDR_FULL_PEL_BACKWARD_VECTOR(0);
cedrus_write(dev, VE_DEC_MPEG_MP12HDR, reg);
/* Set frame dimensions. */
reg = VE_DEC_MPEG_PICCODEDSIZE_WIDTH(sequence->horizontal_size);
reg |= VE_DEC_MPEG_PICCODEDSIZE_HEIGHT(sequence->vertical_size);
cedrus_write(dev, VE_DEC_MPEG_PICCODEDSIZE, reg);
reg = VE_DEC_MPEG_PICBOUNDSIZE_WIDTH(ctx->src_fmt.width);
reg |= VE_DEC_MPEG_PICBOUNDSIZE_HEIGHT(ctx->src_fmt.height);
cedrus_write(dev, VE_DEC_MPEG_PICBOUNDSIZE, reg);
/* Forward and backward prediction reference buffers. */
fwd_luma_addr = cedrus_dst_buf_addr(ctx,
slice_params->forward_ref_index,
0);
fwd_chroma_addr = cedrus_dst_buf_addr(ctx,
slice_params->forward_ref_index,
1);
cedrus_write(dev, VE_DEC_MPEG_FWD_REF_LUMA_ADDR, fwd_luma_addr);
cedrus_write(dev, VE_DEC_MPEG_FWD_REF_CHROMA_ADDR, fwd_chroma_addr);
bwd_luma_addr = cedrus_dst_buf_addr(ctx,
slice_params->backward_ref_index,
0);
bwd_chroma_addr = cedrus_dst_buf_addr(ctx,
slice_params->backward_ref_index,
1);
cedrus_write(dev, VE_DEC_MPEG_BWD_REF_LUMA_ADDR, bwd_luma_addr);
cedrus_write(dev, VE_DEC_MPEG_BWD_REF_CHROMA_ADDR, bwd_chroma_addr);
/* Destination luma and chroma buffers. */
dst_luma_addr = cedrus_dst_buf_addr(ctx, run->dst->vb2_buf.index, 0);
dst_chroma_addr = cedrus_dst_buf_addr(ctx, run->dst->vb2_buf.index, 1);
cedrus_write(dev, VE_DEC_MPEG_REC_LUMA, dst_luma_addr);
cedrus_write(dev, VE_DEC_MPEG_REC_CHROMA, dst_chroma_addr);
/* Source offset and length in bits. */
cedrus_write(dev, VE_DEC_MPEG_VLD_OFFSET,
slice_params->data_bit_offset);
reg = slice_params->bit_size - slice_params->data_bit_offset;
cedrus_write(dev, VE_DEC_MPEG_VLD_LEN, reg);
/* Source beginning and end addresses. */
src_buf_addr = vb2_dma_contig_plane_dma_addr(&run->src->vb2_buf, 0);
reg = VE_DEC_MPEG_VLD_ADDR_BASE(src_buf_addr);
reg |= VE_DEC_MPEG_VLD_ADDR_VALID_PIC_DATA;
reg |= VE_DEC_MPEG_VLD_ADDR_LAST_PIC_DATA;
reg |= VE_DEC_MPEG_VLD_ADDR_FIRST_PIC_DATA;
cedrus_write(dev, VE_DEC_MPEG_VLD_ADDR, reg);
reg = src_buf_addr + DIV_ROUND_UP(slice_params->bit_size, 8);
cedrus_write(dev, VE_DEC_MPEG_VLD_END_ADDR, reg);
/* Macroblock address: start at the beginning. */
reg = VE_DEC_MPEG_MBADDR_Y(0) | VE_DEC_MPEG_MBADDR_X(0);
cedrus_write(dev, VE_DEC_MPEG_MBADDR, reg);
/* Clear previous errors. */
cedrus_write(dev, VE_DEC_MPEG_ERROR, 0);
/* Clear correct macroblocks register. */
cedrus_write(dev, VE_DEC_MPEG_CRTMBADDR, 0);
/* Enable appropriate interrupts and components. */
reg = VE_DEC_MPEG_CTRL_IRQ_MASK | VE_DEC_MPEG_CTRL_MC_NO_WRITEBACK |
VE_DEC_MPEG_CTRL_MC_CACHE_EN;
cedrus_write(dev, VE_DEC_MPEG_CTRL, reg);
}
static void cedrus_mpeg2_trigger(struct cedrus_ctx *ctx)
{
struct cedrus_dev *dev = ctx->dev;
u32 reg;
/* Trigger MPEG engine. */
reg = VE_DEC_MPEG_TRIGGER_HW_MPEG_VLD | VE_DEC_MPEG_TRIGGER_MPEG2 |
VE_DEC_MPEG_TRIGGER_MB_BOUNDARY;
cedrus_write(dev, VE_DEC_MPEG_TRIGGER, reg);
}
struct cedrus_dec_ops cedrus_dec_ops_mpeg2 = {
.irq_clear = cedrus_mpeg2_irq_clear,
.irq_disable = cedrus_mpeg2_irq_disable,
.irq_status = cedrus_mpeg2_irq_status,
.setup = cedrus_mpeg2_setup,
.trigger = cedrus_mpeg2_trigger,
};

@@ -0,0 +1,235 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Cedrus VPU driver
*
* Copyright (c) 2013-2016 Jens Kuske <jenskuske@gmail.com>
* Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
* Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
*/
#ifndef _CEDRUS_REGS_H_
#define _CEDRUS_REGS_H_
/*
* Common acronyms and contractions used in register descriptions:
* * VLD : Variable-Length Decoder
* * IQ: Inverse Quantization
* * IDCT: Inverse Discrete Cosine Transform
* * MC: Motion Compensation
* * STCD: Start Code Detect
* * SDRT: Scale Down and Rotate
*/
#define VE_ENGINE_DEC_MPEG 0x100
#define VE_ENGINE_DEC_H264 0x200
#define VE_MODE 0x00
#define VE_MODE_REC_WR_MODE_2MB (0x01 << 20)
#define VE_MODE_REC_WR_MODE_1MB (0x00 << 20)
#define VE_MODE_DDR_MODE_BW_128 (0x03 << 16)
#define VE_MODE_DDR_MODE_BW_256 (0x02 << 16)
#define VE_MODE_DISABLED (0x07 << 0)
#define VE_MODE_DEC_H265 (0x04 << 0)
#define VE_MODE_DEC_H264 (0x01 << 0)
#define VE_MODE_DEC_MPEG (0x00 << 0)
#define VE_PRIMARY_CHROMA_BUF_LEN 0xc4
#define VE_PRIMARY_FB_LINE_STRIDE 0xc8
#define VE_PRIMARY_FB_LINE_STRIDE_CHROMA(s) (((s) << 16) & GENMASK(31, 16))
#define VE_PRIMARY_FB_LINE_STRIDE_LUMA(s) (((s) << 0) & GENMASK(15, 0))
#define VE_CHROMA_BUF_LEN 0xe8
#define VE_SECONDARY_OUT_FMT_TILED_32_NV12 (0x00 << 30)
#define VE_SECONDARY_OUT_FMT_EXT (0x01 << 30)
#define VE_SECONDARY_OUT_FMT_YU12 (0x02 << 30)
#define VE_SECONDARY_OUT_FMT_YV12 (0x03 << 30)
#define VE_CHROMA_BUF_LEN_SDRT(l) ((l) & GENMASK(27, 0))
#define VE_PRIMARY_OUT_FMT 0xec
#define VE_PRIMARY_OUT_FMT_TILED_32_NV12 (0x00 << 4)
#define VE_PRIMARY_OUT_FMT_TILED_128_NV12 (0x01 << 4)
#define VE_PRIMARY_OUT_FMT_YU12 (0x02 << 4)
#define VE_PRIMARY_OUT_FMT_YV12 (0x03 << 4)
#define VE_PRIMARY_OUT_FMT_NV12 (0x04 << 4)
#define VE_PRIMARY_OUT_FMT_NV21 (0x05 << 4)
#define VE_SECONDARY_OUT_FMT_EXT_TILED_32_NV12 (0x00 << 0)
#define VE_SECONDARY_OUT_FMT_EXT_TILED_128_NV12 (0x01 << 0)
#define VE_SECONDARY_OUT_FMT_EXT_YU12 (0x02 << 0)
#define VE_SECONDARY_OUT_FMT_EXT_YV12 (0x03 << 0)
#define VE_SECONDARY_OUT_FMT_EXT_NV12 (0x04 << 0)
#define VE_SECONDARY_OUT_FMT_EXT_NV21 (0x05 << 0)
#define VE_VERSION 0xf0
#define VE_VERSION_SHIFT 16
#define VE_DEC_MPEG_MP12HDR (VE_ENGINE_DEC_MPEG + 0x00)
#define VE_DEC_MPEG_MP12HDR_SLICE_TYPE(t) (((t) << 28) & GENMASK(30, 28))
#define VE_DEC_MPEG_MP12HDR_F_CODE_SHIFT(x, y) (24 - 4 * (y) - 8 * (x))
#define VE_DEC_MPEG_MP12HDR_F_CODE(__x, __y, __v) \
(((__v) & GENMASK(3, 0)) << VE_DEC_MPEG_MP12HDR_F_CODE_SHIFT(__x, __y))
#define VE_DEC_MPEG_MP12HDR_INTRA_DC_PRECISION(p) \
(((p) << 10) & GENMASK(11, 10))
#define VE_DEC_MPEG_MP12HDR_INTRA_PICTURE_STRUCTURE(s) \
(((s) << 8) & GENMASK(9, 8))
#define VE_DEC_MPEG_MP12HDR_TOP_FIELD_FIRST(v) \
((v) ? BIT(7) : 0)
#define VE_DEC_MPEG_MP12HDR_FRAME_PRED_FRAME_DCT(v) \
((v) ? BIT(6) : 0)
#define VE_DEC_MPEG_MP12HDR_CONCEALMENT_MOTION_VECTORS(v) \
((v) ? BIT(5) : 0)
#define VE_DEC_MPEG_MP12HDR_Q_SCALE_TYPE(v) \
((v) ? BIT(4) : 0)
#define VE_DEC_MPEG_MP12HDR_INTRA_VLC_FORMAT(v) \
((v) ? BIT(3) : 0)
#define VE_DEC_MPEG_MP12HDR_ALTERNATE_SCAN(v) \
((v) ? BIT(2) : 0)
#define VE_DEC_MPEG_MP12HDR_FULL_PEL_FORWARD_VECTOR(v) \
((v) ? BIT(1) : 0)
#define VE_DEC_MPEG_MP12HDR_FULL_PEL_BACKWARD_VECTOR(v) \
((v) ? BIT(0) : 0)
#define VE_DEC_MPEG_PICCODEDSIZE (VE_ENGINE_DEC_MPEG + 0x08)
#define VE_DEC_MPEG_PICCODEDSIZE_WIDTH(w) \
((DIV_ROUND_UP((w), 16) << 8) & GENMASK(15, 8))
#define VE_DEC_MPEG_PICCODEDSIZE_HEIGHT(h) \
((DIV_ROUND_UP((h), 16) << 0) & GENMASK(7, 0))
#define VE_DEC_MPEG_PICBOUNDSIZE (VE_ENGINE_DEC_MPEG + 0x0c)
#define VE_DEC_MPEG_PICBOUNDSIZE_WIDTH(w) (((w) << 16) & GENMASK(27, 16))
#define VE_DEC_MPEG_PICBOUNDSIZE_HEIGHT(h) (((h) << 0) & GENMASK(11, 0))
#define VE_DEC_MPEG_MBADDR (VE_ENGINE_DEC_MPEG + 0x10)
#define VE_DEC_MPEG_MBADDR_X(w) (((w) << 8) & GENMASK(15, 8))
#define VE_DEC_MPEG_MBADDR_Y(h) (((h) << 0) & GENMASK(7, 0))
#define VE_DEC_MPEG_CTRL (VE_ENGINE_DEC_MPEG + 0x14)
#define VE_DEC_MPEG_CTRL_MC_CACHE_EN BIT(31)
#define VE_DEC_MPEG_CTRL_SW_VLD BIT(27)
#define VE_DEC_MPEG_CTRL_SW_IQ_IS BIT(17)
#define VE_DEC_MPEG_CTRL_QP_AC_DC_OUT_EN BIT(14)
#define VE_DEC_MPEG_CTRL_ROTATE_SCALE_OUT_EN BIT(8)
#define VE_DEC_MPEG_CTRL_MC_NO_WRITEBACK BIT(7)
#define VE_DEC_MPEG_CTRL_ROTATE_IRQ_EN BIT(6)
#define VE_DEC_MPEG_CTRL_VLD_DATA_REQ_IRQ_EN BIT(5)
#define VE_DEC_MPEG_CTRL_ERROR_IRQ_EN BIT(4)
#define VE_DEC_MPEG_CTRL_FINISH_IRQ_EN BIT(3)
#define VE_DEC_MPEG_CTRL_IRQ_MASK \
(VE_DEC_MPEG_CTRL_FINISH_IRQ_EN | VE_DEC_MPEG_CTRL_ERROR_IRQ_EN | \
VE_DEC_MPEG_CTRL_VLD_DATA_REQ_IRQ_EN)
#define VE_DEC_MPEG_TRIGGER (VE_ENGINE_DEC_MPEG + 0x18)
#define VE_DEC_MPEG_TRIGGER_MB_BOUNDARY BIT(31)
#define VE_DEC_MPEG_TRIGGER_CHROMA_FMT_420 (0x00 << 27)
#define VE_DEC_MPEG_TRIGGER_CHROMA_FMT_411 (0x01 << 27)
#define VE_DEC_MPEG_TRIGGER_CHROMA_FMT_422 (0x02 << 27)
#define VE_DEC_MPEG_TRIGGER_CHROMA_FMT_444 (0x03 << 27)
#define VE_DEC_MPEG_TRIGGER_CHROMA_FMT_422T (0x04 << 27)
#define VE_DEC_MPEG_TRIGGER_MPEG1 (0x01 << 24)
#define VE_DEC_MPEG_TRIGGER_MPEG2 (0x02 << 24)
#define VE_DEC_MPEG_TRIGGER_JPEG (0x03 << 24)
#define VE_DEC_MPEG_TRIGGER_MPEG4 (0x04 << 24)
#define VE_DEC_MPEG_TRIGGER_VP62 (0x05 << 24)
#define VE_DEC_MPEG_TRIGGER_VP62_AC_GET_BITS BIT(7)
#define VE_DEC_MPEG_TRIGGER_STCD_VC1 (0x02 << 4)
#define VE_DEC_MPEG_TRIGGER_STCD_MPEG2 (0x01 << 4)
#define VE_DEC_MPEG_TRIGGER_STCD_AVC (0x00 << 4)
#define VE_DEC_MPEG_TRIGGER_HW_MPEG_VLD (0x0f << 0)
#define VE_DEC_MPEG_TRIGGER_HW_JPEG_VLD (0x0e << 0)
#define VE_DEC_MPEG_TRIGGER_HW_MB (0x0d << 0)
#define VE_DEC_MPEG_TRIGGER_HW_ROTATE (0x0c << 0)
#define VE_DEC_MPEG_TRIGGER_HW_VP6_VLD (0x0b << 0)
#define VE_DEC_MPEG_TRIGGER_HW_MAF (0x0a << 0)
#define VE_DEC_MPEG_TRIGGER_HW_STCD_END (0x09 << 0)
#define VE_DEC_MPEG_TRIGGER_HW_STCD_BEGIN (0x08 << 0)
#define VE_DEC_MPEG_TRIGGER_SW_MC (0x07 << 0)
#define VE_DEC_MPEG_TRIGGER_SW_IQ (0x06 << 0)
#define VE_DEC_MPEG_TRIGGER_SW_IDCT (0x05 << 0)
#define VE_DEC_MPEG_TRIGGER_SW_SCALE (0x04 << 0)
#define VE_DEC_MPEG_TRIGGER_SW_VP6 (0x03 << 0)
#define VE_DEC_MPEG_TRIGGER_SW_VP62_AC_GET_BITS (0x02 << 0)
#define VE_DEC_MPEG_STATUS (VE_ENGINE_DEC_MPEG + 0x1c)
#define VE_DEC_MPEG_STATUS_START_DETECT_BUSY BIT(27)
#define VE_DEC_MPEG_STATUS_VP6_BIT BIT(26)
#define VE_DEC_MPEG_STATUS_VP6_BIT_BUSY BIT(25)
#define VE_DEC_MPEG_STATUS_MAF_BUSY BIT(23)
#define VE_DEC_MPEG_STATUS_VP6_MVP_BUSY BIT(22)
#define VE_DEC_MPEG_STATUS_JPEG_BIT_END BIT(21)
#define VE_DEC_MPEG_STATUS_JPEG_RESTART_ERROR BIT(20)
#define VE_DEC_MPEG_STATUS_JPEG_MARKER BIT(19)
#define VE_DEC_MPEG_STATUS_ROTATE_BUSY BIT(18)
#define VE_DEC_MPEG_STATUS_DEBLOCKING_BUSY BIT(17)
#define VE_DEC_MPEG_STATUS_SCALE_DOWN_BUSY BIT(16)
#define VE_DEC_MPEG_STATUS_IQIS_BUF_EMPTY BIT(15)
#define VE_DEC_MPEG_STATUS_IDCT_BUF_EMPTY BIT(14)
#define VE_DEC_MPEG_STATUS_VE_BUSY BIT(13)
#define VE_DEC_MPEG_STATUS_MC_BUSY BIT(12)
#define VE_DEC_MPEG_STATUS_IDCT_BUSY BIT(11)
#define VE_DEC_MPEG_STATUS_IQIS_BUSY BIT(10)
#define VE_DEC_MPEG_STATUS_DCAC_BUSY BIT(9)
#define VE_DEC_MPEG_STATUS_VLD_BUSY BIT(8)
#define VE_DEC_MPEG_STATUS_ROTATE_SUCCESS BIT(3)
#define VE_DEC_MPEG_STATUS_VLD_DATA_REQ BIT(2)
#define VE_DEC_MPEG_STATUS_ERROR BIT(1)
#define VE_DEC_MPEG_STATUS_SUCCESS BIT(0)
#define VE_DEC_MPEG_STATUS_CHECK_MASK \
(VE_DEC_MPEG_STATUS_SUCCESS | VE_DEC_MPEG_STATUS_ERROR | \
VE_DEC_MPEG_STATUS_VLD_DATA_REQ)
#define VE_DEC_MPEG_STATUS_CHECK_ERROR \
(VE_DEC_MPEG_STATUS_ERROR | VE_DEC_MPEG_STATUS_VLD_DATA_REQ)
#define VE_DEC_MPEG_VLD_ADDR (VE_ENGINE_DEC_MPEG + 0x28)
#define VE_DEC_MPEG_VLD_ADDR_FIRST_PIC_DATA BIT(30)
#define VE_DEC_MPEG_VLD_ADDR_LAST_PIC_DATA BIT(29)
#define VE_DEC_MPEG_VLD_ADDR_VALID_PIC_DATA BIT(28)
#define VE_DEC_MPEG_VLD_ADDR_BASE(a) \
({ \
u32 _tmp = (a); \
u32 _lo = _tmp & GENMASK(27, 4); \
u32 _hi = (_tmp >> 28) & GENMASK(3, 0); \
(_lo | _hi); \
})
#define VE_DEC_MPEG_VLD_OFFSET (VE_ENGINE_DEC_MPEG + 0x2c)
#define VE_DEC_MPEG_VLD_LEN (VE_ENGINE_DEC_MPEG + 0x30)
#define VE_DEC_MPEG_VLD_END_ADDR (VE_ENGINE_DEC_MPEG + 0x34)
#define VE_DEC_MPEG_REC_LUMA (VE_ENGINE_DEC_MPEG + 0x48)
#define VE_DEC_MPEG_REC_CHROMA (VE_ENGINE_DEC_MPEG + 0x4c)
#define VE_DEC_MPEG_FWD_REF_LUMA_ADDR (VE_ENGINE_DEC_MPEG + 0x50)
#define VE_DEC_MPEG_FWD_REF_CHROMA_ADDR (VE_ENGINE_DEC_MPEG + 0x54)
#define VE_DEC_MPEG_BWD_REF_LUMA_ADDR (VE_ENGINE_DEC_MPEG + 0x58)
#define VE_DEC_MPEG_BWD_REF_CHROMA_ADDR (VE_ENGINE_DEC_MPEG + 0x5c)
#define VE_DEC_MPEG_IQMINPUT (VE_ENGINE_DEC_MPEG + 0x80)
#define VE_DEC_MPEG_IQMINPUT_FLAG_INTRA (0x01 << 14)
#define VE_DEC_MPEG_IQMINPUT_FLAG_NON_INTRA (0x00 << 14)
#define VE_DEC_MPEG_IQMINPUT_WEIGHT(i, v) \
(((v) & GENMASK(7, 0)) | (((i) << 8) & GENMASK(13, 8)))
#define VE_DEC_MPEG_ERROR (VE_ENGINE_DEC_MPEG + 0xc4)
#define VE_DEC_MPEG_CRTMBADDR (VE_ENGINE_DEC_MPEG + 0xc8)
#define VE_DEC_MPEG_ROT_LUMA (VE_ENGINE_DEC_MPEG + 0xcc)
#define VE_DEC_MPEG_ROT_CHROMA (VE_ENGINE_DEC_MPEG + 0xd0)
#endif

@@ -0,0 +1,542 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Cedrus VPU driver
*
* Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
* Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
* Copyright (C) 2018 Bootlin
*
* Based on the vim2m driver, that is:
*
* Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
* Pawel Osciak, <pawel@osciak.com>
* Marek Szyprowski, <m.szyprowski@samsung.com>
*/
#include <media/videobuf2-dma-contig.h>
#include <media/v4l2-device.h>
#include <media/v4l2-ioctl.h>
#include <media/v4l2-event.h>
#include <media/v4l2-mem2mem.h>
#include "cedrus.h"
#include "cedrus_video.h"
#include "cedrus_dec.h"
#include "cedrus_hw.h"
#define CEDRUS_DECODE_SRC BIT(0)
#define CEDRUS_DECODE_DST BIT(1)
#define CEDRUS_MIN_WIDTH 16U
#define CEDRUS_MIN_HEIGHT 16U
#define CEDRUS_MAX_WIDTH 3840U
#define CEDRUS_MAX_HEIGHT 2160U
static struct cedrus_format cedrus_formats[] = {
{
.pixelformat = V4L2_PIX_FMT_MPEG2_SLICE,
.directions = CEDRUS_DECODE_SRC,
},
{
.pixelformat = V4L2_PIX_FMT_SUNXI_TILED_NV12,
.directions = CEDRUS_DECODE_DST,
},
{
.pixelformat = V4L2_PIX_FMT_NV12,
.directions = CEDRUS_DECODE_DST,
.capabilities = CEDRUS_CAPABILITY_UNTILED,
},
};
#define CEDRUS_FORMATS_COUNT ARRAY_SIZE(cedrus_formats)
static inline struct cedrus_ctx *cedrus_file2ctx(struct file *file)
{
return container_of(file->private_data, struct cedrus_ctx, fh);
}
static struct cedrus_format *cedrus_find_format(u32 pixelformat, u32 directions,
unsigned int capabilities)
{
struct cedrus_format *fmt;
unsigned int i;
for (i = 0; i < CEDRUS_FORMATS_COUNT; i++) {
fmt = &cedrus_formats[i];
if (fmt->capabilities && (fmt->capabilities & capabilities) !=
fmt->capabilities)
continue;
if (fmt->pixelformat == pixelformat &&
(fmt->directions & directions) != 0)
break;
}
if (i == CEDRUS_FORMATS_COUNT)
return NULL;
return &cedrus_formats[i];
}
static bool cedrus_check_format(u32 pixelformat, u32 directions,
unsigned int capabilities)
{
return cedrus_find_format(pixelformat, directions, capabilities);
}
static void cedrus_prepare_format(struct v4l2_pix_format *pix_fmt)
{
unsigned int width = pix_fmt->width;
unsigned int height = pix_fmt->height;
unsigned int sizeimage = pix_fmt->sizeimage;
unsigned int bytesperline = pix_fmt->bytesperline;
pix_fmt->field = V4L2_FIELD_NONE;
/* Limit to hardware min/max. */
width = clamp(width, CEDRUS_MIN_WIDTH, CEDRUS_MAX_WIDTH);
height = clamp(height, CEDRUS_MIN_HEIGHT, CEDRUS_MAX_HEIGHT);
switch (pix_fmt->pixelformat) {
case V4L2_PIX_FMT_MPEG2_SLICE:
/* Zero bytes per line for encoded source. */
bytesperline = 0;
break;
case V4L2_PIX_FMT_SUNXI_TILED_NV12:
/* 32-aligned stride. */
bytesperline = ALIGN(width, 32);
/* 32-aligned height. */
height = ALIGN(height, 32);
/* Luma plane size. */
sizeimage = bytesperline * height;
/* Chroma plane size. */
sizeimage += bytesperline * height / 2;
break;
case V4L2_PIX_FMT_NV12:
/* 16-aligned stride. */
bytesperline = ALIGN(width, 16);
/* 16-aligned height. */
height = ALIGN(height, 16);
/* Luma plane size. */
sizeimage = bytesperline * height;
/* Chroma plane size. */
sizeimage += bytesperline * height / 2;
break;
}
pix_fmt->width = width;
pix_fmt->height = height;
pix_fmt->bytesperline = bytesperline;
pix_fmt->sizeimage = sizeimage;
}
static int cedrus_querycap(struct file *file, void *priv,
struct v4l2_capability *cap)
{
strscpy(cap->driver, CEDRUS_NAME, sizeof(cap->driver));
strscpy(cap->card, CEDRUS_NAME, sizeof(cap->card));
snprintf(cap->bus_info, sizeof(cap->bus_info),
"platform:%s", CEDRUS_NAME);
return 0;
}
static int cedrus_enum_fmt(struct file *file, struct v4l2_fmtdesc *f,
u32 direction)
{
struct cedrus_ctx *ctx = cedrus_file2ctx(file);
struct cedrus_dev *dev = ctx->dev;
unsigned int capabilities = dev->capabilities;
struct cedrus_format *fmt;
unsigned int i, index;
/* Index among formats that match the requested direction. */
index = 0;
for (i = 0; i < CEDRUS_FORMATS_COUNT; i++) {
fmt = &cedrus_formats[i];
if (fmt->capabilities && (fmt->capabilities & capabilities) !=
fmt->capabilities)
continue;
if (!(cedrus_formats[i].directions & direction))
continue;
if (index == f->index)
break;
index++;
}
/* Matched format. */
if (i < CEDRUS_FORMATS_COUNT) {
f->pixelformat = cedrus_formats[i].pixelformat;
return 0;
}
return -EINVAL;
}
static int cedrus_enum_fmt_vid_cap(struct file *file, void *priv,
struct v4l2_fmtdesc *f)
{
return cedrus_enum_fmt(file, f, CEDRUS_DECODE_DST);
}
static int cedrus_enum_fmt_vid_out(struct file *file, void *priv,
struct v4l2_fmtdesc *f)
{
return cedrus_enum_fmt(file, f, CEDRUS_DECODE_SRC);
}
static int cedrus_g_fmt_vid_cap(struct file *file, void *priv,
struct v4l2_format *f)
{
struct cedrus_ctx *ctx = cedrus_file2ctx(file);
/* Fall back to a dummy default in the absence of hardware configuration. */
if (!ctx->dst_fmt.width || !ctx->dst_fmt.height) {
f->fmt.pix.pixelformat = V4L2_PIX_FMT_SUNXI_TILED_NV12;
cedrus_prepare_format(&f->fmt.pix);
return 0;
}
f->fmt.pix = ctx->dst_fmt;
return 0;
}
static int cedrus_g_fmt_vid_out(struct file *file, void *priv,
struct v4l2_format *f)
{
struct cedrus_ctx *ctx = cedrus_file2ctx(file);
/* Fall back to a dummy default in the absence of hardware configuration. */
if (!ctx->dst_fmt.width || !ctx->dst_fmt.height) {
f->fmt.pix.pixelformat = V4L2_PIX_FMT_MPEG2_SLICE;
f->fmt.pix.sizeimage = SZ_1K;
cedrus_prepare_format(&f->fmt.pix);
return 0;
}
f->fmt.pix = ctx->src_fmt;
return 0;
}
static int cedrus_try_fmt_vid_cap(struct file *file, void *priv,
struct v4l2_format *f)
{
struct cedrus_ctx *ctx = cedrus_file2ctx(file);
struct cedrus_dev *dev = ctx->dev;
struct v4l2_pix_format *pix_fmt = &f->fmt.pix;
if (!cedrus_check_format(pix_fmt->pixelformat, CEDRUS_DECODE_DST,
dev->capabilities))
return -EINVAL;
cedrus_prepare_format(pix_fmt);
return 0;
}
static int cedrus_try_fmt_vid_out(struct file *file, void *priv,
struct v4l2_format *f)
{
struct cedrus_ctx *ctx = cedrus_file2ctx(file);
struct cedrus_dev *dev = ctx->dev;
struct v4l2_pix_format *pix_fmt = &f->fmt.pix;
if (!cedrus_check_format(pix_fmt->pixelformat, CEDRUS_DECODE_SRC,
dev->capabilities))
return -EINVAL;
/* Source image size has to be provided by userspace. */
if (pix_fmt->sizeimage == 0)
return -EINVAL;
cedrus_prepare_format(pix_fmt);
return 0;
}
static int cedrus_s_fmt_vid_cap(struct file *file, void *priv,
struct v4l2_format *f)
{
struct cedrus_ctx *ctx = cedrus_file2ctx(file);
struct cedrus_dev *dev = ctx->dev;
int ret;
ret = cedrus_try_fmt_vid_cap(file, priv, f);
if (ret)
return ret;
ctx->dst_fmt = f->fmt.pix;
cedrus_dst_format_set(dev, &ctx->dst_fmt);
return 0;
}
static int cedrus_s_fmt_vid_out(struct file *file, void *priv,
struct v4l2_format *f)
{
struct cedrus_ctx *ctx = cedrus_file2ctx(file);
int ret;
ret = cedrus_try_fmt_vid_out(file, priv, f);
if (ret)
return ret;
ctx->src_fmt = f->fmt.pix;
/* Propagate colorspace information to capture. */
ctx->dst_fmt.colorspace = f->fmt.pix.colorspace;
ctx->dst_fmt.xfer_func = f->fmt.pix.xfer_func;
ctx->dst_fmt.ycbcr_enc = f->fmt.pix.ycbcr_enc;
ctx->dst_fmt.quantization = f->fmt.pix.quantization;
return 0;
}
const struct v4l2_ioctl_ops cedrus_ioctl_ops = {
.vidioc_querycap = cedrus_querycap,
.vidioc_enum_fmt_vid_cap = cedrus_enum_fmt_vid_cap,
.vidioc_g_fmt_vid_cap = cedrus_g_fmt_vid_cap,
.vidioc_try_fmt_vid_cap = cedrus_try_fmt_vid_cap,
.vidioc_s_fmt_vid_cap = cedrus_s_fmt_vid_cap,
.vidioc_enum_fmt_vid_out = cedrus_enum_fmt_vid_out,
.vidioc_g_fmt_vid_out = cedrus_g_fmt_vid_out,
.vidioc_try_fmt_vid_out = cedrus_try_fmt_vid_out,
.vidioc_s_fmt_vid_out = cedrus_s_fmt_vid_out,
.vidioc_reqbufs = v4l2_m2m_ioctl_reqbufs,
.vidioc_querybuf = v4l2_m2m_ioctl_querybuf,
.vidioc_qbuf = v4l2_m2m_ioctl_qbuf,
.vidioc_dqbuf = v4l2_m2m_ioctl_dqbuf,
.vidioc_prepare_buf = v4l2_m2m_ioctl_prepare_buf,
.vidioc_create_bufs = v4l2_m2m_ioctl_create_bufs,
.vidioc_expbuf = v4l2_m2m_ioctl_expbuf,
.vidioc_streamon = v4l2_m2m_ioctl_streamon,
.vidioc_streamoff = v4l2_m2m_ioctl_streamoff,
.vidioc_subscribe_event = v4l2_ctrl_subscribe_event,
.vidioc_unsubscribe_event = v4l2_event_unsubscribe,
};
static int cedrus_queue_setup(struct vb2_queue *vq, unsigned int *nbufs,
unsigned int *nplanes, unsigned int sizes[],
struct device *alloc_devs[])
{
struct cedrus_ctx *ctx = vb2_get_drv_priv(vq);
struct cedrus_dev *dev = ctx->dev;
struct v4l2_pix_format *pix_fmt;
u32 directions;
if (V4L2_TYPE_IS_OUTPUT(vq->type)) {
directions = CEDRUS_DECODE_SRC;
pix_fmt = &ctx->src_fmt;
} else {
directions = CEDRUS_DECODE_DST;
pix_fmt = &ctx->dst_fmt;
}
if (!cedrus_check_format(pix_fmt->pixelformat, directions,
dev->capabilities))
return -EINVAL;
if (*nplanes) {
if (sizes[0] < pix_fmt->sizeimage)
return -EINVAL;
} else {
sizes[0] = pix_fmt->sizeimage;
*nplanes = 1;
}
return 0;
}
static void cedrus_queue_cleanup(struct vb2_queue *vq, u32 state)
{
struct cedrus_ctx *ctx = vb2_get_drv_priv(vq);
struct vb2_v4l2_buffer *vbuf;
unsigned long flags;
for (;;) {
spin_lock_irqsave(&ctx->dev->irq_lock, flags);
if (V4L2_TYPE_IS_OUTPUT(vq->type))
vbuf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
else
vbuf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
spin_unlock_irqrestore(&ctx->dev->irq_lock, flags);
if (!vbuf)
return;
v4l2_ctrl_request_complete(vbuf->vb2_buf.req_obj.req,
&ctx->hdl);
v4l2_m2m_buf_done(vbuf, state);
}
}
static int cedrus_buf_init(struct vb2_buffer *vb)
{
struct vb2_queue *vq = vb->vb2_queue;
struct cedrus_ctx *ctx = vb2_get_drv_priv(vq);
if (!V4L2_TYPE_IS_OUTPUT(vq->type))
ctx->dst_bufs[vb->index] = vb;
return 0;
}
static void cedrus_buf_cleanup(struct vb2_buffer *vb)
{
struct vb2_queue *vq = vb->vb2_queue;
struct cedrus_ctx *ctx = vb2_get_drv_priv(vq);
if (!V4L2_TYPE_IS_OUTPUT(vq->type))
ctx->dst_bufs[vb->index] = NULL;
}
static int cedrus_buf_prepare(struct vb2_buffer *vb)
{
struct vb2_queue *vq = vb->vb2_queue;
struct cedrus_ctx *ctx = vb2_get_drv_priv(vq);
struct v4l2_pix_format *pix_fmt;
if (V4L2_TYPE_IS_OUTPUT(vq->type))
pix_fmt = &ctx->src_fmt;
else
pix_fmt = &ctx->dst_fmt;
if (vb2_plane_size(vb, 0) < pix_fmt->sizeimage)
return -EINVAL;
vb2_set_plane_payload(vb, 0, pix_fmt->sizeimage);
return 0;
}
static int cedrus_start_streaming(struct vb2_queue *vq, unsigned int count)
{
struct cedrus_ctx *ctx = vb2_get_drv_priv(vq);
struct cedrus_dev *dev = ctx->dev;
int ret = 0;
switch (ctx->src_fmt.pixelformat) {
case V4L2_PIX_FMT_MPEG2_SLICE:
ctx->current_codec = CEDRUS_CODEC_MPEG2;
break;
default:
return -EINVAL;
}
if (V4L2_TYPE_IS_OUTPUT(vq->type) &&
dev->dec_ops[ctx->current_codec]->start)
ret = dev->dec_ops[ctx->current_codec]->start(ctx);
if (ret)
cedrus_queue_cleanup(vq, VB2_BUF_STATE_QUEUED);
return ret;
}
static void cedrus_stop_streaming(struct vb2_queue *vq)
{
struct cedrus_ctx *ctx = vb2_get_drv_priv(vq);
struct cedrus_dev *dev = ctx->dev;
if (V4L2_TYPE_IS_OUTPUT(vq->type) &&
dev->dec_ops[ctx->current_codec]->stop)
dev->dec_ops[ctx->current_codec]->stop(ctx);
cedrus_queue_cleanup(vq, VB2_BUF_STATE_ERROR);
}
static void cedrus_buf_queue(struct vb2_buffer *vb)
{
struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
struct cedrus_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, vbuf);
}
static void cedrus_buf_request_complete(struct vb2_buffer *vb)
{
struct cedrus_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
v4l2_ctrl_request_complete(vb->req_obj.req, &ctx->hdl);
}
static const struct vb2_ops cedrus_qops = {
.queue_setup = cedrus_queue_setup,
.buf_prepare = cedrus_buf_prepare,
.buf_init = cedrus_buf_init,
.buf_cleanup = cedrus_buf_cleanup,
.buf_queue = cedrus_buf_queue,
.buf_request_complete = cedrus_buf_request_complete,
.start_streaming = cedrus_start_streaming,
.stop_streaming = cedrus_stop_streaming,
.wait_prepare = vb2_ops_wait_prepare,
.wait_finish = vb2_ops_wait_finish,
};
int cedrus_queue_init(void *priv, struct vb2_queue *src_vq,
struct vb2_queue *dst_vq)
{
struct cedrus_ctx *ctx = priv;
int ret;
src_vq->type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
src_vq->io_modes = VB2_MMAP | VB2_DMABUF;
src_vq->drv_priv = ctx;
src_vq->buf_struct_size = sizeof(struct cedrus_buffer);
src_vq->min_buffers_needed = 1;
src_vq->ops = &cedrus_qops;
src_vq->mem_ops = &vb2_dma_contig_memops;
src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
src_vq->lock = &ctx->dev->dev_mutex;
src_vq->dev = ctx->dev->dev;
src_vq->supports_requests = true;
ret = vb2_queue_init(src_vq);
if (ret)
return ret;
dst_vq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
dst_vq->io_modes = VB2_MMAP | VB2_DMABUF;
dst_vq->drv_priv = ctx;
dst_vq->buf_struct_size = sizeof(struct cedrus_buffer);
dst_vq->min_buffers_needed = 1;
dst_vq->ops = &cedrus_qops;
dst_vq->mem_ops = &vb2_dma_contig_memops;
dst_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
dst_vq->lock = &ctx->dev->dev_mutex;
dst_vq->dev = ctx->dev->dev;
return vb2_queue_init(dst_vq);
}

@@ -0,0 +1,30 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Cedrus VPU driver
*
* Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
* Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
* Copyright (C) 2018 Bootlin
*
* Based on the vim2m driver, that is:
*
* Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
* Pawel Osciak, <pawel@osciak.com>
* Marek Szyprowski, <m.szyprowski@samsung.com>
*/
#ifndef _CEDRUS_VIDEO_H_
#define _CEDRUS_VIDEO_H_
struct cedrus_format {
u32 pixelformat;
u32 directions;
unsigned int capabilities;
};
extern const struct v4l2_ioctl_ops cedrus_ioctl_ops;
int cedrus_queue_init(void *priv, struct vb2_queue *src_vq,
struct vb2_queue *dst_vq);
#endif

@@ -166,7 +166,7 @@ int uvcg_queue_buffer(struct uvc_video_queue *queue, struct v4l2_buffer *buf)
unsigned long flags;
int ret;
-ret = vb2_qbuf(&queue->queue, buf);
+ret = vb2_qbuf(&queue->queue, NULL, buf);
if (ret < 0)
return ret;

@@ -27,6 +27,7 @@
struct ida;
struct device;
struct media_device;
/**
* struct media_entity_notify - Media Entity Notify
@@ -50,10 +51,32 @@ struct media_entity_notify {
* struct media_device_ops - Media device operations
* @link_notify: Link state change notification callback. This callback is
* called with the graph_mutex held.
* @req_alloc: Allocate a request. Set this if you need to allocate a struct
* larger than struct media_request. @req_alloc and @req_free must
* either both be set or both be NULL.
* @req_free: Free a request. Set this if @req_alloc was set as well, leave
* to NULL otherwise.
* @req_validate: Validate a request, but do not queue yet. The req_queue_mutex
* lock is held when this op is called.
* @req_queue: Queue a validated request, cannot fail. If something goes
* wrong when queueing this request then it should be marked
* as such internally in the driver and any related buffers
* must eventually return to vb2 with state VB2_BUF_STATE_ERROR.
* The req_queue_mutex lock is held when this op is called.
* It is important that vb2 buffer objects are queued last after
* all other object types are queued: queueing a buffer kickstarts
* the request processing, so all other objects related to the
* request (and thus the buffer) must be available to the driver.
* And once a buffer is queued, then the driver can complete
* or delete objects from the request before req_queue exits.
*/
struct media_device_ops {
int (*link_notify)(struct media_link *link, u32 flags,
unsigned int notification);
struct media_request *(*req_alloc)(struct media_device *mdev);
void (*req_free)(struct media_request *req);
int (*req_validate)(struct media_request *req);
void (*req_queue)(struct media_request *req);
};
/**
@@ -88,6 +111,9 @@ struct media_device_ops {
* @disable_source: Disable Source Handler function pointer
*
* @ops: Operation handler callbacks
* @req_queue_mutex: Serialise the MEDIA_REQUEST_IOC_QUEUE ioctl w.r.t.
* other operations that stop or start streaming.
* @request_id: Used to generate unique request IDs
*
* This structure represents an abstract high-level media device. It allows easy
* access to entities and provides basic media device-level support. The
@@ -158,6 +184,9 @@ struct media_device {
void (*disable_source)(struct media_entity *entity);
const struct media_device_ops *ops;
struct mutex req_queue_mutex;
atomic_t request_id;
};
/* We don't need to include pci.h or usb.h here */

@@ -0,0 +1,442 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Media device request objects
*
* Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
* Copyright (C) 2018 Intel Corporation
*
* Author: Hans Verkuil <hans.verkuil@cisco.com>
* Author: Sakari Ailus <sakari.ailus@linux.intel.com>
*/
#ifndef MEDIA_REQUEST_H
#define MEDIA_REQUEST_H
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/refcount.h>
#include <media/media-device.h>
/**
* enum media_request_state - media request state
*
* @MEDIA_REQUEST_STATE_IDLE: Idle
* @MEDIA_REQUEST_STATE_VALIDATING: Validating the request, no state changes
* allowed
* @MEDIA_REQUEST_STATE_QUEUED: Queued
* @MEDIA_REQUEST_STATE_COMPLETE: Completed, the request is done
* @MEDIA_REQUEST_STATE_CLEANING: Cleaning, the request is being re-inited
* @MEDIA_REQUEST_STATE_UPDATING: The request is being updated, i.e.
* request objects are being added,
* modified or removed
* @NR_OF_MEDIA_REQUEST_STATE: The number of media request states, used
* internally for sanity check purposes
*/
enum media_request_state {
MEDIA_REQUEST_STATE_IDLE,
MEDIA_REQUEST_STATE_VALIDATING,
MEDIA_REQUEST_STATE_QUEUED,
MEDIA_REQUEST_STATE_COMPLETE,
MEDIA_REQUEST_STATE_CLEANING,
MEDIA_REQUEST_STATE_UPDATING,
NR_OF_MEDIA_REQUEST_STATE,
};
struct media_request_object;
/**
* struct media_request - Media device request
* @mdev: Media device this request belongs to
* @kref: Reference count
* @debug_str: Prefix for debug messages (process name:fd)
* @state: The state of the request
* @updating_count: count the number of request updates that are in progress
* @access_count: count the number of request accesses that are in progress
* @objects: List of @struct media_request_object request objects
* @num_incomplete_objects: The number of incomplete objects in the request
* @poll_wait: Wait queue for poll
* @lock: Serializes access to this struct
*/
struct media_request {
struct media_device *mdev;
struct kref kref;
char debug_str[TASK_COMM_LEN + 11];
enum media_request_state state;
unsigned int updating_count;
unsigned int access_count;
struct list_head objects;
unsigned int num_incomplete_objects;
struct wait_queue_head poll_wait;
spinlock_t lock;
};
#ifdef CONFIG_MEDIA_CONTROLLER
/**
* media_request_lock_for_access - Lock the request to access its objects
*
* @req: The media request
*
* Use before accessing a completed request. A reference to the request must
* be held during the access. This usually takes place automatically through
* a file handle. Use @media_request_unlock_for_access when done.
*/
static inline int __must_check
media_request_lock_for_access(struct media_request *req)
{
unsigned long flags;
int ret = -EBUSY;
spin_lock_irqsave(&req->lock, flags);
if (req->state == MEDIA_REQUEST_STATE_COMPLETE) {
req->access_count++;
ret = 0;
}
spin_unlock_irqrestore(&req->lock, flags);
return ret;
}
/**
* media_request_unlock_for_access - Unlock a request previously locked for
* access
*
* @req: The media request
*
* Unlock a request that has previously been locked using
* @media_request_lock_for_access.
*/
static inline void media_request_unlock_for_access(struct media_request *req)
{
unsigned long flags;
spin_lock_irqsave(&req->lock, flags);
if (!WARN_ON(!req->access_count))
req->access_count--;
spin_unlock_irqrestore(&req->lock, flags);
}
/**
* media_request_lock_for_update - Lock the request for updating its objects
*
* @req: The media request
*
* Use before updating a request, i.e. adding, modifying or removing a request
* object in it. A reference to the request must be held during the update. This
* usually takes place automatically through a file handle. Use
* @media_request_unlock_for_update when done.
*/
static inline int __must_check
media_request_lock_for_update(struct media_request *req)
{
unsigned long flags;
int ret = 0;
spin_lock_irqsave(&req->lock, flags);
if (req->state == MEDIA_REQUEST_STATE_IDLE ||
req->state == MEDIA_REQUEST_STATE_UPDATING) {
req->state = MEDIA_REQUEST_STATE_UPDATING;
req->updating_count++;
} else {
ret = -EBUSY;
}
spin_unlock_irqrestore(&req->lock, flags);
return ret;
}
/**
* media_request_unlock_for_update - Unlock a request previously locked for
* update
*
* @req: The media request
*
* Unlock a request that has previously been locked using
* @media_request_lock_for_update.
*/
static inline void media_request_unlock_for_update(struct media_request *req)
{
unsigned long flags;
spin_lock_irqsave(&req->lock, flags);
WARN_ON(req->updating_count <= 0);
if (!--req->updating_count)
req->state = MEDIA_REQUEST_STATE_IDLE;
spin_unlock_irqrestore(&req->lock, flags);
}
/**
* media_request_get - Get the media request
*
* @req: The media request
*
* Get the media request.
*/
static inline void media_request_get(struct media_request *req)
{
kref_get(&req->kref);
}
/**
* media_request_put - Put the media request
*
* @req: The media request
*
* Put the media request. The media request will be released
* when the refcount reaches 0.
*/
void media_request_put(struct media_request *req);
/**
* media_request_get_by_fd - Get a media request by fd
*
* @mdev: Media device this request belongs to
* @request_fd: The file descriptor of the request
*
* Get the request represented by @request_fd that is owned
* by the media device.
*
* Return a -EACCES error pointer if requests are not supported
* by this driver. Return -EINVAL if the request was not found.
* Return the pointer to the request if found: the caller will
* have to call @media_request_put when it finished using the
* request.
*/
struct media_request *
media_request_get_by_fd(struct media_device *mdev, int request_fd);
/**
* media_request_alloc - Allocate the media request
*
* @mdev: Media device this request belongs to
* @alloc_fd: Store the request's file descriptor in this int
*
* Allocate the media request and store the fd in @alloc_fd.
*/
int media_request_alloc(struct media_device *mdev,
int *alloc_fd);
#else
static inline void media_request_get(struct media_request *req)
{
}
static inline void media_request_put(struct media_request *req)
{
}
static inline struct media_request *
media_request_get_by_fd(struct media_device *mdev, int request_fd)
{
return ERR_PTR(-EACCES);
}
#endif
/**
* struct media_request_object_ops - Media request object operations
* @prepare: Validate and prepare the request object, optional.
* @unprepare: Unprepare the request object, optional.
* @queue: Queue the request object, optional.
* @unbind: Unbind the request object, optional.
* @release: Release the request object, required.
*/
struct media_request_object_ops {
int (*prepare)(struct media_request_object *object);
void (*unprepare)(struct media_request_object *object);
void (*queue)(struct media_request_object *object);
void (*unbind)(struct media_request_object *object);
void (*release)(struct media_request_object *object);
};
/**
* struct media_request_object - An opaque object that belongs to a media
* request
*
* @ops: object's operations
* @priv: object's priv pointer
* @req: the request this object belongs to (can be NULL)
* @list: List entry of the object for @struct media_request
* @kref: Reference count of the object, acquire before releasing req->lock
* @completed: If true, then this object was completed.
*
* An object related to the request. This struct is always embedded in
* another struct that contains the actual data for this request object.
*/
struct media_request_object {
const struct media_request_object_ops *ops;
void *priv;
struct media_request *req;
struct list_head list;
struct kref kref;
bool completed;
};
#ifdef CONFIG_MEDIA_CONTROLLER
/**
* media_request_object_get - Get a media request object
*
* @obj: The object
*
* Get a media request object.
*/
static inline void media_request_object_get(struct media_request_object *obj)
{
kref_get(&obj->kref);
}
/**
* media_request_object_put - Put a media request object
*
* @obj: The object
*
* Put a media request object. Once all references are gone, the
* object's memory is released.
*/
void media_request_object_put(struct media_request_object *obj);
/**
* media_request_object_find - Find an object in a request
*
* @req: The media request
* @ops: Find an object with this ops value
* @priv: Find an object with this priv value
*
* Both @ops and @priv must be non-NULL.
*
* Returns the object pointer or NULL if not found. The caller must
* call media_request_object_put() once it finished using the object.
*
* Since this function needs to walk the list of objects it takes
* the @req->lock spin lock to make this safe.
*/
struct media_request_object *
media_request_object_find(struct media_request *req,
const struct media_request_object_ops *ops,
void *priv);
/**
* media_request_object_init - Initialise a media request object
*
* @obj: The object
*
* Initialise a media request object. The object will be released using the
* release callback of the ops once it has no references (this function
* initialises the reference count to one).
*/
void media_request_object_init(struct media_request_object *obj);
/**
* media_request_object_bind - Bind a media request object to a request
*
* @req: The media request
* @ops: The object ops for this object
* @priv: A driver-specific priv pointer associated with this object
* @is_buffer: Set to true if the object is a buffer object.
* @obj: The object
*
* Bind this object to the request and set the ops and priv values of
* the object so it can be found later with media_request_object_find().
*
* Every bound object must be unbound or completed by the kernel at some
* point in time, otherwise the request will never complete. When the
* request is released all completed objects will be unbound by the
* request core code.
*
* Buffer objects will be added to the end of the request's object
* list, non-buffer objects will be added to the front of the list.
* This ensures that all buffer objects are at the end of the list
* and that all non-buffer objects that they depend on are processed
* first.
*/
int media_request_object_bind(struct media_request *req,
const struct media_request_object_ops *ops,
void *priv, bool is_buffer,
struct media_request_object *obj);
/**
* media_request_object_unbind - Unbind a media request object
*
* @obj: The object
*
* Unbind the media request object from the request.
*/
void media_request_object_unbind(struct media_request_object *obj);
/**
* media_request_object_complete - Mark the media request object as complete
*
* @obj: The object
*
* Mark the media request object as complete. Only bound objects can
* be completed.
*/
void media_request_object_complete(struct media_request_object *obj);
#else
static inline int __must_check
media_request_lock_for_access(struct media_request *req)
{
return -EINVAL;
}
static inline void media_request_unlock_for_access(struct media_request *req)
{
}
static inline int __must_check
media_request_lock_for_update(struct media_request *req)
{
return -EINVAL;
}
static inline void media_request_unlock_for_update(struct media_request *req)
{
}
static inline void media_request_object_get(struct media_request_object *obj)
{
}
static inline void media_request_object_put(struct media_request_object *obj)
{
}
static inline struct media_request_object *
media_request_object_find(struct media_request *req,
const struct media_request_object_ops *ops,
void *priv)
{
return NULL;
}
static inline void media_request_object_init(struct media_request_object *obj)
{
obj->ops = NULL;
obj->req = NULL;
}
static inline int media_request_object_bind(struct media_request *req,
const struct media_request_object_ops *ops,
void *priv, bool is_buffer,
struct media_request_object *obj)
{
return 0;
}
static inline void media_request_object_unbind(struct media_request_object *obj)
{
}
static inline void media_request_object_complete(struct media_request_object *obj)
{
}
#endif
#endif

@@ -20,6 +20,7 @@
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/videodev2.h>
#include <media/media-request.h>
/* forward references */
struct file;
@@ -34,13 +35,15 @@ struct poll_table_struct;
/**
* union v4l2_ctrl_ptr - A pointer to a control value.
* @p_s32: Pointer to a 32-bit signed value.
* @p_s64: Pointer to a 64-bit signed value.
* @p_u8: Pointer to a 8-bit unsigned value.
* @p_u16: Pointer to a 16-bit unsigned value.
* @p_u32: Pointer to a 32-bit unsigned value.
* @p_char: Pointer to a string.
* @p: Pointer to a compound value.
* @p_s32: Pointer to a 32-bit signed value.
* @p_s64: Pointer to a 64-bit signed value.
* @p_u8: Pointer to a 8-bit unsigned value.
* @p_u16: Pointer to a 16-bit unsigned value.
* @p_u32: Pointer to a 32-bit unsigned value.
* @p_char: Pointer to a string.
* @p_mpeg2_slice_params: Pointer to a MPEG2 slice parameters structure.
* @p_mpeg2_quantization: Pointer to a MPEG2 quantization data structure.
* @p: Pointer to a compound value.
*/
union v4l2_ctrl_ptr {
s32 *p_s32;
@@ -49,6 +52,8 @@ union v4l2_ctrl_ptr {
u16 *p_u16;
u32 *p_u32;
char *p_char;
struct v4l2_ctrl_mpeg2_slice_params *p_mpeg2_slice_params;
struct v4l2_ctrl_mpeg2_quantization *p_mpeg2_quantization;
void *p;
};
@@ -247,6 +252,19 @@ struct v4l2_ctrl {
* @ctrl: The actual control information.
* @helper: Pointer to helper struct. Used internally in
* ``prepare_ext_ctrls`` function at ``v4l2-ctrl.c``.
* @from_other_dev: If true, then @ctrl was defined in another
* device than the &struct v4l2_ctrl_handler.
* @req_done: Internal flag: if the control handler containing this control
* reference is bound to a media request, then this is set when
* the control has been applied. This prevents applying controls
* from a cluster with multiple controls twice (when the first
* control of a cluster is applied, they all are).
* @req: If set, this refers to another request that sets this control.
* @p_req: If the control handler containing this control reference
* is bound to a media request, then this points to the
* value of the control that should be applied when the request
* is executed, or to the value of the control at the time
* that the request was completed.
*
* Each control handler has a list of these refs. The list_head is used to
* keep a sorted-by-control-ID list of all controls, while the next pointer
@@ -257,6 +275,10 @@ struct v4l2_ctrl_ref {
struct v4l2_ctrl_ref *next;
struct v4l2_ctrl *ctrl;
struct v4l2_ctrl_helper *helper;
bool from_other_dev;
bool req_done;
struct v4l2_ctrl_ref *req;
union v4l2_ctrl_ptr p_req;
};
/**
@@ -280,6 +302,17 @@ struct v4l2_ctrl_ref {
* @notify_priv: Passed as argument to the v4l2_ctrl notify callback.
* @nr_of_buckets: Total number of buckets in the array.
* @error: The error code of the first failed control addition.
* @request_is_queued: True if the request was queued.
* @requests: List to keep track of open control handler request objects.
* For the parent control handler (@req_obj.req == NULL) this
* is the list header. When the parent control handler is
* removed, it has to unbind and put all these requests since
* they refer to the parent.
* @requests_queued: List of the queued requests. This determines the order
* in which these controls are applied. Once the request is
* completed it is removed from this list.
* @req_obj: The &struct media_request_object, used to link into a
* &struct media_request. This request object has a refcount.
*/
struct v4l2_ctrl_handler {
struct mutex _lock;
@@ -292,6 +325,10 @@ struct v4l2_ctrl_handler {
void *notify_priv;
u16 nr_of_buckets;
int error;
bool request_is_queued;
struct list_head requests;
struct list_head requests_queued;
struct media_request_object req_obj;
};
/**
@@ -633,6 +670,8 @@ typedef bool (*v4l2_ctrl_filter)(const struct v4l2_ctrl *ctrl);
* @add: The control handler whose controls you want to add to
* the @hdl control handler.
* @filter: This function will filter which controls should be added.
* @from_other_dev: If true, then the controls in @add were defined in another
* device than @hdl.
*
* Does nothing if either of the two handlers is a NULL pointer.
* If @filter is NULL, then all controls are added. Otherwise only those
@@ -642,7 +681,8 @@ typedef bool (*v4l2_ctrl_filter)(const struct v4l2_ctrl *ctrl);
*/
int v4l2_ctrl_add_handler(struct v4l2_ctrl_handler *hdl,
struct v4l2_ctrl_handler *add,
v4l2_ctrl_filter filter);
v4l2_ctrl_filter filter,
bool from_other_dev);
/**
* v4l2_ctrl_radio_filter() - Standard filter for radio controls.
@@ -1070,6 +1110,84 @@ int v4l2_ctrl_subscribe_event(struct v4l2_fh *fh,
*/
__poll_t v4l2_ctrl_poll(struct file *file, struct poll_table_struct *wait);
/**
* v4l2_ctrl_request_setup - helper function to apply control values in a request
*
* @req: The request
* @parent: The parent control handler ('priv' in media_request_object_find())
*
* This is a helper function to call the control handler's s_ctrl callback with
* the control values contained in the request. Do note that this approach of
* applying control values in a request is only applicable to memory-to-memory
* devices.
*/
void v4l2_ctrl_request_setup(struct media_request *req,
struct v4l2_ctrl_handler *parent);
/**
* v4l2_ctrl_request_complete - Complete a control handler request object
*
* @req: The request
* @parent: The parent control handler ('priv' in media_request_object_find())
*
* This function is to be called on each control handler that may have had a
* request object associated with it, i.e. control handlers of a driver that
* supports requests.
*
* The function first obtains the values of any volatile controls in the control
* handler and attaches them to the request. Then, the function completes the
* request object.
*/
void v4l2_ctrl_request_complete(struct media_request *req,
struct v4l2_ctrl_handler *parent);
/**
* v4l2_ctrl_request_hdl_find - Find the control handler in the request
*
* @req: The request
* @parent: The parent control handler ('priv' in media_request_object_find())
*
* This function finds the control handler in the request. It may return
* NULL if not found. When done, you must call v4l2_ctrl_request_put_hdl()
* with the returned handler pointer.
*
* If the request is not in state VALIDATING or QUEUED, then this function
* will always return NULL.
*
* Note that in state VALIDATING the req_queue_mutex is held, so
* no objects can be added or deleted from the request.
*
* In state QUEUED it is the driver that will have to ensure this.
*/
struct v4l2_ctrl_handler *v4l2_ctrl_request_hdl_find(struct media_request *req,
struct v4l2_ctrl_handler *parent);
/**
* v4l2_ctrl_request_hdl_put - Put the control handler
*
* @hdl: Put this control handler
*
* This function releases the control handler previously obtained from
* v4l2_ctrl_request_hdl_find().
*/
static inline void v4l2_ctrl_request_hdl_put(struct v4l2_ctrl_handler *hdl)
{
if (hdl)
media_request_object_put(&hdl->req_obj);
}
/**
* v4l2_ctrl_request_hdl_ctrl_find() - Find a control with the given ID.
*
* @hdl: The control handler from the request.
* @id: The ID of the control to find.
*
* This function returns a pointer to the control if this control is
* part of the request or NULL otherwise.
*/
struct v4l2_ctrl *
v4l2_ctrl_request_hdl_ctrl_find(struct v4l2_ctrl_handler *hdl, u32 id);
/* Helpers for ioctl_ops */
/**
@@ -1136,11 +1254,12 @@ int v4l2_s_ctrl(struct v4l2_fh *fh, struct v4l2_ctrl_handler *hdl,
* :ref:`VIDIOC_G_EXT_CTRLS <vidioc_g_ext_ctrls>` ioctl
*
* @hdl: pointer to &struct v4l2_ctrl_handler
* @mdev: pointer to &struct media_device
* @c: pointer to &struct v4l2_ext_controls
*
* If hdl == NULL then they will all return -EINVAL.
*/
int v4l2_g_ext_ctrls(struct v4l2_ctrl_handler *hdl,
int v4l2_g_ext_ctrls(struct v4l2_ctrl_handler *hdl, struct media_device *mdev,
struct v4l2_ext_controls *c);
/**
@@ -1148,11 +1267,13 @@ int v4l2_g_ext_ctrls(struct v4l2_ctrl_handler *hdl,
* :ref:`VIDIOC_TRY_EXT_CTRLS <vidioc_g_ext_ctrls>` ioctl
*
* @hdl: pointer to &struct v4l2_ctrl_handler
* @mdev: pointer to &struct media_device
* @c: pointer to &struct v4l2_ext_controls
*
* If hdl == NULL then they will all return -EINVAL.
*/
int v4l2_try_ext_ctrls(struct v4l2_ctrl_handler *hdl,
struct media_device *mdev,
struct v4l2_ext_controls *c);
/**
@@ -1161,11 +1282,13 @@ int v4l2_try_ext_ctrls(struct v4l2_ctrl_handler *hdl,
*
* @fh: pointer to &struct v4l2_fh
* @hdl: pointer to &struct v4l2_ctrl_handler
* @mdev: pointer to &struct media_device
* @c: pointer to &struct v4l2_ext_controls
*
* If hdl == NULL then they will all return -EINVAL.
*/
int v4l2_s_ext_ctrls(struct v4l2_fh *fh, struct v4l2_ctrl_handler *hdl,
struct media_device *mdev,
struct v4l2_ext_controls *c);
/**

@@ -211,6 +211,17 @@ static inline void v4l2_subdev_notify(struct v4l2_subdev *sd,
sd->v4l2_dev->notify(sd, notification, arg);
}
/**
* v4l2_device_supports_requests - Test if requests are supported.
*
* @v4l2_dev: pointer to struct v4l2_device
*/
static inline bool v4l2_device_supports_requests(struct v4l2_device *v4l2_dev)
{
return v4l2_dev->mdev && v4l2_dev->mdev->ops &&
v4l2_dev->mdev->ops->req_queue;
}
/* Helper macros to iterate over all subdevs. */
/**

@@ -622,6 +622,10 @@ v4l2_m2m_dst_buf_remove_by_idx(struct v4l2_m2m_ctx *m2m_ctx, unsigned int idx)
return v4l2_m2m_buf_remove_by_idx(&m2m_ctx->cap_q_ctx, idx);
}
/* v4l2 request helper */
void vb2_m2m_request_queue(struct media_request *req);
/* v4l2 ioctl helpers */
int v4l2_m2m_ioctl_reqbufs(struct file *file, void *priv,

@@ -17,6 +17,7 @@
#include <linux/poll.h>
#include <linux/dma-buf.h>
#include <linux/bitops.h>
#include <media/media-request.h>
#define VB2_MAX_FRAME (32)
#define VB2_MAX_PLANES (8)
@@ -203,8 +204,8 @@
/**
* enum vb2_buffer_state - current video buffer state.
* @VB2_BUF_STATE_DEQUEUED: buffer under userspace control.
* @VB2_BUF_STATE_IN_REQUEST: buffer is queued in media request.
* @VB2_BUF_STATE_PREPARING: buffer is being prepared in videobuf.
* @VB2_BUF_STATE_PREPARED: buffer prepared in videobuf and by the driver.
* @VB2_BUF_STATE_QUEUED: buffer queued in videobuf, but not in driver.
* @VB2_BUF_STATE_REQUEUEING: re-queue a buffer to the driver.
* @VB2_BUF_STATE_ACTIVE: buffer queued in driver and possibly used
@@ -217,8 +218,8 @@
*/
enum vb2_buffer_state {
VB2_BUF_STATE_DEQUEUED,
VB2_BUF_STATE_IN_REQUEST,
VB2_BUF_STATE_PREPARING,
VB2_BUF_STATE_PREPARED,
VB2_BUF_STATE_QUEUED,
VB2_BUF_STATE_REQUEUEING,
VB2_BUF_STATE_ACTIVE,
@@ -238,6 +239,8 @@ struct vb2_queue;
* @num_planes: number of planes in the buffer
* on an internal driver queue.
* @timestamp: frame timestamp in ns.
* @req_obj: used to bind this buffer to a request. This
* request object has a refcount.
*/
struct vb2_buffer {
struct vb2_queue *vb2_queue;
@@ -246,10 +249,17 @@ struct vb2_buffer {
unsigned int memory;
unsigned int num_planes;
u64 timestamp;
struct media_request_object req_obj;
/* private: internal use only
*
* state: current buffer state; do not change
* synced: this buffer has been synced for DMA, i.e. the
* 'prepare' memop was called. It is cleared again
* after the 'finish' memop is called.
* prepared: this buffer has been prepared, i.e. the
* buf_prepare op was called. It is cleared again
* after the 'buf_finish' op is called.
* queued_entry: entry on the queued buffers list, which holds
* all buffers queued from userspace
* done_entry: entry on the list that stores all buffers ready
@@ -257,6 +267,8 @@ struct vb2_buffer {
* vb2_plane: per-plane information; do not change
*/
enum vb2_buffer_state state;
bool synced;
bool prepared;
struct vb2_plane planes[VB2_MAX_PLANES];
struct list_head queued_entry;
@@ -287,6 +299,7 @@ struct vb2_buffer {
u32 cnt_buf_finish;
u32 cnt_buf_cleanup;
u32 cnt_buf_queue;
u32 cnt_buf_request_complete;
/* This counts the number of calls to vb2_buffer_done() */
u32 cnt_buf_done;
@@ -380,6 +393,11 @@ struct vb2_buffer {
* ioctl; might be called before @start_streaming callback
* if user pre-queued buffers before calling
* VIDIOC_STREAMON().
* @buf_request_complete: a buffer that was never queued to the driver but is
* associated with a queued request was canceled.
* The driver will have to mark associated objects in the
* request as completed; required if requests are
* supported.
*/
struct vb2_ops {
int (*queue_setup)(struct vb2_queue *q,
@@ -398,6 +416,8 @@ struct vb2_ops {
void (*stop_streaming)(struct vb2_queue *q);
void (*buf_queue)(struct vb2_buffer *vb);
void (*buf_request_complete)(struct vb2_buffer *vb);
};
/**
@@ -406,6 +426,9 @@ struct vb2_ops {
* @verify_planes_array: Verify that a given user space structure contains
* enough planes for the buffer. This is called
* for each dequeued buffer.
* @init_buffer: given a &vb2_buffer initialize the extra data after
* struct vb2_buffer.
* For V4L2 this is a &struct vb2_v4l2_buffer.
* @fill_user_buffer: given a &vb2_buffer fill in the userspace structure.
* For V4L2 this is a &struct v4l2_buffer.
* @fill_vb2_buffer: given a userspace structure, fill in the &vb2_buffer.
@@ -416,9 +439,9 @@
*/
struct vb2_buf_ops {
int (*verify_planes_array)(struct vb2_buffer *vb, const void *pb);
void (*init_buffer)(struct vb2_buffer *vb);
void (*fill_user_buffer)(struct vb2_buffer *vb, void *pb);
int (*fill_vb2_buffer)(struct vb2_buffer *vb, const void *pb,
struct vb2_plane *planes);
int (*fill_vb2_buffer)(struct vb2_buffer *vb, struct vb2_plane *planes);
void (*copy_timestamp)(struct vb2_buffer *vb, const void *pb);
};
@@ -449,6 +472,13 @@ struct vb2_buf_ops {
* @quirk_poll_must_check_waiting_for_buffers: Return %EPOLLERR at poll when QBUF
* has not been called. This is a vb1 idiom that has been adopted
* also by vb2.
* @supports_requests: this queue supports the Request API.
* @uses_qbuf: qbuf was used directly for this queue. Set to 1 the first
* time this is called. Set to 0 when the queue is canceled.
* If this is 1, then you cannot queue buffers from a request.
* @uses_requests: requests are used for this queue. Set to 1 the first time
* a request is queued. Set to 0 when the queue is canceled.
* If this is 1, then you cannot queue buffers directly.
* @lock: pointer to a mutex that protects the &struct vb2_queue. The
* driver can set this to a mutex to let the v4l2 core serialize
* the queuing ioctls. If the driver wants to handle locking
@@ -516,6 +546,9 @@ struct vb2_queue {
unsigned fileio_write_immediately:1;
unsigned allow_zero_bytesused:1;
unsigned quirk_poll_must_check_waiting_for_buffers:1;
unsigned supports_requests:1;
unsigned uses_qbuf:1;
unsigned uses_requests:1;
struct mutex *lock;
void *owner;
@@ -752,12 +785,17 @@ int vb2_core_prepare_buf(struct vb2_queue *q, unsigned int index, void *pb);
* @index: id number of the buffer
* @pb: buffer structure passed from userspace to
* v4l2_ioctl_ops->vidioc_qbuf handler in driver
* @req: pointer to &struct media_request, may be NULL.
*
* Videobuf2 core helper to implement VIDIOC_QBUF() operation. It is called
* internally by VB2 by an API-specific handler, like ``videobuf2-v4l2.h``.
*
* This function:
*
* #) If @req is non-NULL, then the buffer will be bound to this
* media request and it returns. The buffer will be prepared and
* queued to the driver (i.e. the next two steps) when the request
* itself is queued.
* #) if necessary, calls &vb2_ops->buf_prepare callback in the driver
* (if provided), in which driver-specific buffer initialization can
* be performed;
@@ -766,7 +804,8 @@ int vb2_core_prepare_buf(struct vb2_queue *q, unsigned int index, void *pb);
*
* Return: returns zero on success; an error code otherwise.
*/
int vb2_core_qbuf(struct vb2_queue *q, unsigned int index, void *pb);
int vb2_core_qbuf(struct vb2_queue *q, unsigned int index, void *pb,
struct media_request *req);
/**
* vb2_core_dqbuf() - Dequeue a buffer to the userspace
@@ -1143,4 +1182,19 @@ bool vb2_buffer_in_use(struct vb2_queue *q, struct vb2_buffer *vb);
*/
int vb2_verify_memory_type(struct vb2_queue *q,
enum vb2_memory memory, unsigned int type);
/**
* vb2_request_object_is_buffer() - return true if the object is a buffer
*
* @obj: the request object.
*/
bool vb2_request_object_is_buffer(struct media_request_object *obj);
/**
* vb2_request_buffer_cnt() - return the number of buffers in the request
*
* @req: the request.
*/
unsigned int vb2_request_buffer_cnt(struct media_request *req);
#endif /* _MEDIA_VIDEOBUF2_CORE_H */

@@ -32,6 +32,8 @@
* &enum v4l2_field.
* @timecode: frame timecode.
* @sequence: sequence count of this frame.
* @request_fd: the request_fd associated with this buffer
* @planes: plane information (userptr/fd, length, bytesused, data_offset).
*
* Should contain enough information to be able to cover all the fields
* of &struct v4l2_buffer at ``videodev2.h``.
@@ -43,6 +45,8 @@ struct vb2_v4l2_buffer {
__u32 field;
struct v4l2_timecode timecode;
__u32 sequence;
__s32 request_fd;
struct vb2_plane planes[VB2_MAX_PLANES];
};
/*
@@ -77,6 +81,7 @@ int vb2_create_bufs(struct vb2_queue *q, struct v4l2_create_buffers *create);
* vb2_prepare_buf() - Pass ownership of a buffer from userspace to the kernel
*
* @q: pointer to &struct vb2_queue with videobuf2 queue.
* @mdev: pointer to &struct media_device, may be NULL.
* @b: buffer structure passed from userspace to
* &v4l2_ioctl_ops->vidioc_prepare_buf handler in driver
*
@@ -88,15 +93,19 @@ int vb2_create_bufs(struct vb2_queue *q, struct v4l2_create_buffers *create);
* #) verifies the passed buffer,
* #) calls &vb2_ops->buf_prepare callback in the driver (if provided),
* in which driver-specific buffer initialization can be performed.
* #) if @b->request_fd is non-zero and @mdev->ops->req_queue is set,
* then bind the prepared buffer to the request.
*
* The return values from this function are intended to be directly returned
* from &v4l2_ioctl_ops->vidioc_prepare_buf handler in driver.
*/
int vb2_prepare_buf(struct vb2_queue *q, struct v4l2_buffer *b);
int vb2_prepare_buf(struct vb2_queue *q, struct media_device *mdev,
struct v4l2_buffer *b);
/**
* vb2_qbuf() - Queue a buffer from userspace
* @q: pointer to &struct vb2_queue with videobuf2 queue.
* @mdev: pointer to &struct media_device, may be NULL.
* @b: buffer structure passed from userspace to
* &v4l2_ioctl_ops->vidioc_qbuf handler in driver
*
@@ -105,6 +114,8 @@ int vb2_prepare_buf(struct vb2_queue *q, struct v4l2_buffer *b);
* This function:
*
* #) verifies the passed buffer;
* #) if @b->request_fd is non-zero and @mdev->ops->req_queue is set,
* then bind the buffer to the request.
* #) if necessary, calls &vb2_ops->buf_prepare callback in the driver
* (if provided), in which driver-specific buffer initialization can
* be performed;
@@ -114,7 +125,8 @@ int vb2_prepare_buf(struct vb2_queue *q, struct v4l2_buffer *b);
* The return values from this function are intended to be directly returned
* from &v4l2_ioctl_ops->vidioc_qbuf handler in driver.
*/
int vb2_qbuf(struct vb2_queue *q, struct v4l2_buffer *b);
int vb2_qbuf(struct vb2_queue *q, struct media_device *mdev,
struct v4l2_buffer *b);
/**
* vb2_expbuf() - Export a buffer as a file descriptor
@@ -291,4 +303,8 @@ void vb2_ops_wait_prepare(struct vb2_queue *vq);
*/
void vb2_ops_wait_finish(struct vb2_queue *vq);
struct media_request;
int vb2_request_validate(struct media_request *req);
void vb2_request_queue(struct media_request *req);
#endif /* _MEDIA_VIDEOBUF2_V4L2_H */

@@ -369,6 +369,14 @@ struct media_v2_topology {
#define MEDIA_IOC_ENUM_LINKS _IOWR('|', 0x02, struct media_links_enum)
#define MEDIA_IOC_SETUP_LINK _IOWR('|', 0x03, struct media_link_desc)
#define MEDIA_IOC_G_TOPOLOGY _IOWR('|', 0x04, struct media_v2_topology)
#define MEDIA_IOC_REQUEST_ALLOC _IOR ('|', 0x05, int)
/*
* These ioctls are called on the request file descriptor as returned
* by MEDIA_IOC_REQUEST_ALLOC.
*/
#define MEDIA_REQUEST_IOC_QUEUE _IO('|', 0x80)
#define MEDIA_REQUEST_IOC_REINIT _IO('|', 0x81)
#ifndef __KERNEL__
