Commit Graph

146 Commits (735f463af70e9601881ec879961ec42aef051733)

Author SHA1 Message Date
Chris Wilson 636918f130 drm/i915: Mark the GT as busy before idling the previous request
In a synchronous setup, we may retire the last request before we
complete allocating the next request. As the last request is retired, we
queue a timer to mark the device as idle, and promptly have to execute
and cancel that timer once we complete allocating the request and need to
keep the device awake. If we rearrange the mark_busy() to occur before
we retire the previous request, we can skip this ping-pong.
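
The reordering in miniature (a hedged sketch; mark_busy() and
retire_previous_request() are hypothetical stand-ins for the driver
internals around i915_gem_request_alloc()):

    /* Sketch: hypothetical helpers modelling the allocation path. */
    static void request_alloc_old(struct intel_engine_cs *engine)
    {
            retire_previous_request(engine); /* may arm the idle timer... */
            mark_busy(engine);               /* ...only to cancel it at once */
    }

    static void request_alloc_new(struct intel_engine_cs *engine)
    {
            mark_busy(engine);               /* declared busy up front */
            retire_previous_request(engine); /* idle timer never armed */
    }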

v2: Joonas pointed out that unreserve_seqno() was now doing more than
seqno handling and should be renamed to reflect its wider purpose.
That also highlighted the new asymmetry with reserve_seqno(), so fixup
that and rename both to [un]reserve_engine().

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20170817144719.10968-1-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
2017-08-18 12:01:41 +01:00
Chris Wilson 535275d323 drm/i915: Squelch reset messages during selftests
During our selftests, we try resetting the GPU tens of thousands of
times, flooding the dmesg with our reset spam and drowning out any
potential warnings. Add an option to i915_reset()/i915_reset_engine() to
specify a quiet reset for selftesting.
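
A sketch of how such an option can be threaded through (the flag name
below is an assumption, not confirmed by this log):

    #define I915_RESET_QUIET (1 << 0)       /* assumed flag name */

    static void i915_reset_sketch(struct drm_i915_private *i915,
                                  unsigned int flags)
    {
            if (!(flags & I915_RESET_QUIET))
                    dev_notice(i915->drm.dev,
                               "Resetting chip after gpu hang\n");
            /* ... perform the actual reset ... */
    }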

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20170721123238.16428-19-chris@chris-wilson.co.uk
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2017-07-27 09:38:57 +02:00
Chris Wilson 77b25a972b drm/i915: Make i915_gem_context_mark_guilty() safe for unlocked updates
Since we may call i915_gem_context_mark_guilty() concurrently when
resetting different engines in parallel, we need to make sure that our
updates are safe for unlocked access.
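
A sketch of the kind of change implied, assuming the counters become
atomics (field and function names here are illustrative, not the
patch's):

    #include <linux/atomic.h>

    struct ctx_ban_sketch {
            atomic_t guilty_count;  /* illustrative: was a plain counter */
            atomic_t ban_score;
    };

    static bool mark_guilty_sketch(struct ctx_ban_sketch *s)
    {
            atomic_inc(&s->guilty_count);
            /* atomic RMW keeps the score coherent across parallel resets */
            return atomic_add_return(10, &s->ban_score) > 40;
    }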

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michel Thierry <michel.thierry@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20170721123238.16428-12-chris@chris-wilson.co.uk
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2017-07-27 09:38:47 +02:00
Chris Wilson 4d53568cca drm/i915: Move idle checks before intel_engine_init_global_seqno()
intel_engine_init_global_seqno() may be called from an uncontrolled
set-wedged path where we have given up waiting for broken hw and declare
it defunct. Along that path, any sanity checks that the hw is idle
before we adjust its state will expectedly fail, so we simply cannot
assert them there. Instead of asserting inside init_global_seqno, we
move the checks to the normal caller, reset_all_global_seqno(), as it
handles runtime seqno wraparound.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20170721123238.16428-8-chris@chris-wilson.co.uk
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2017-07-27 09:38:46 +02:00
Chris Wilson 7b92c1bd05 drm/i915: Avoid keeping waitboost active for signaling threads
Once a client has requested a waitboost, we keep that waitboost active
until all clients are no longer waiting. This is because we don't
distinguish which waiter deserves the boost. However, with the advent of
fence signaling, the signaler threads appear as waiters to the RPS
interrupt handler. So instead of using a single boolean to track when to
keep the waitboost active, use a counter of all outstanding waitboosted
requests.

At this point, I have removed all vestiges of the rate limiting on
clients. Whilst this means that compositors should remain more fluid,
it also means that boosts are more prevalent. See commit b29c19b645
("drm/i915: Boost RPS frequency for CPU stalls") for a longer discussion
on the pros and cons of both approaches.

A drawback of this implementation is that it requires constant request
submission to keep the waitboost trimmed (as it is now cancelled when the
request is completed). This will be fine for a busy system, but near
idle the boosts may be kept for longer than desired (effectively tens of
vblanks worst case) and there is a reliance on rc6 instead.
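
The gist, as a sketch (names illustrative): a single boolean latched by
the first waiter becomes a counter of outstanding boosted requests,
trimmed as each request completes:

    #include <linux/atomic.h>

    struct rps_sketch {
            atomic_t num_waiters;   /* outstanding waitboosted requests */
    };

    static void client_boost_sketch(struct rps_sketch *rps)
    {
            if (atomic_inc_return(&rps->num_waiters) == 1) {
                    /* first booster: raise the GPU frequency */
            }
    }

    static void request_retired_sketch(struct rps_sketch *rps)
    {
            if (atomic_dec_and_test(&rps->num_waiters)) {
                    /* last booster gone: let the frequency relax */
            }
    }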

v2: Remove defunct rps.client_lock

Reported-by: Michał Winiarski <michal.winiarski@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170628123548.9236-1-chris@chris-wilson.co.uk
2017-06-28 15:23:12 +01:00
Chris Wilson 6b567085c1 drm/i915: Remove the spin-request during execbuf await_request
Originally we would enable and disable the breadcrumb interrupt
immediately on demand. This was slow enough to have a large impact
(>30%) on tasks that hopped between engines. However, by using a shadow
to keep the irq alive for an extra interrupt (see commit 67b807a892
("drm/i915: Delay disabling the user interrupt for breadcrumbs")) and
by recently reducing the cost in adding ourselves to the signal tree, we
no longer need to spin-request during await_request to avoid delays in
throughput tests. Without the earlier patches to stop the wakeup when
signaling if the irq was already active, we saw no improvement in
execbuf overhead (and corresponding contention in other clients) despite
the removal of the spinner in a simple test like glxgears. This means
there will be scenarios where now we spend longer enabling the interrupt
than we would have spent spinning, but these are not likely to have as
noticeable an impact as the high frequency test cases (where there
should not be any regression).

Ulterior motive: generalising the engine->sync_to to handle different
types of semaphores and non-semaphores.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170608111405.16466-4-chris@chris-wilson.co.uk
2017-06-08 12:33:09 +01:00
Chris Wilson 9f90ff3928 drm/i915: Check signaled state after enabling signaling
Setting up the irq to signal the request completion takes a finite
amount of time, during which it is possible that the request already
completed. Check afterwards, just in case, so that we can respond
immediately.
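
The pattern in outline (a sketch; enable_breadcrumb_irq() is a
hypothetical stand-in for the breadcrumb machinery):

    static bool enable_signaling_sketch(struct i915_gem_request *rq)
    {
            enable_breadcrumb_irq(rq);      /* takes a finite time... */

            /* ...during which the request may already have completed;
             * re-check so we can signal immediately instead of waiting
             * for an interrupt that will never come.
             */
            return i915_gem_request_completed(rq);
    }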

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170608111405.16466-1-chris@chris-wilson.co.uk
2017-06-08 12:33:08 +01:00
Michał Winiarski 1d5fd00313 drm/i915: Remove misleading comment in request_alloc
Passing NULL ctx to request_alloc would lead to null-ptr-deref.

v2: Let's not replace the comment with a BUG_ON

Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170523102400.9614-1-michal.winiarski@intel.com
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
2017-05-23 11:53:22 +01:00
Chris Wilson 6c067579e6 drm/i915: Split execlist priority queue into rbtree + linked list
All the requests at the same priority are executed in FIFO order. They
do not need to be stored in the rbtree themselves, as they are a simple
list within a level. If we move the requests at one priority into a list,
we can then reduce the rbtree to the set of priorities. This should keep
the height of the rbtree small, as the number of active priorities cannot
exceed the number of active requests and should be typically only a few.

Currently, we have ~2k possible different priority levels, that may
increase to allow even more fine grained selection. Allocating those in
advance seems a waste (and may be impossible), so we opt for allocating
upon first use, and freeing after its requests are depleted. To avoid
the possibility of an allocation failure causing us to lose a request,
we preallocate the default priority (0) and bump any request to that
priority if we fail to allocate the appropriate plist for it. Having a
request (that is ready to run, so not leading to corruption) execute
out-of-order is better than leaking the request (and its dependency
tree) entirely.
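
The resulting shape, roughly (a sketch mirroring the description above:
one rbtree node per priority level, each carrying a FIFO of requests):

    struct priolist_sketch {
            struct rb_node node;            /* keyed by priority */
            struct list_head requests;      /* FIFO within this level */
            int priority;
    };

Dequeue then pops from the head of requests, so the rbtree height tracks
the number of distinct active priorities rather than the number of
requests.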

There should be a benefit to reducing execlists_dequeue() to principally
using a simple list (and reducing the frequency of both rbtree iteration
and balancing on erase) but for typical workloads, request coalescing
should be small enough that we don't notice any change. The main gain is
from improving PI calls to schedule, and the explicit list within a
level should make request unwinding simpler (we just need to insert at
the head of the list rather than the tail and not have to make the
rbtree search more complicated).

v2: Avoid use-after-free when deleting a depleted priolist

v3: Michał found the solution to handling the allocation failure
gracefully. If we disable all priority scheduling following the
allocation failure, those requests will be executed in FIFO order, and we
ensure that this request and its dependencies remain in strict FIFO
(even though the scheduler does not realise it is operating over a
single list). Normal scheduling is restored once we know the device is
idle, until the next failure!
Suggested-by: Michał Wajdeczko <michal.wajdeczko@intel.com>

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170517121007.27224-8-chris@chris-wilson.co.uk
2017-05-17 13:38:09 +01:00
Chris Wilson 266a240bf0 drm/i915: Use engine->context_pin() to report the intel_ring
Since unifying ringbuffer/execlist submission to use
engine->context_pin(), we ensure that the intel_ring is available before
we start constructing the request. We can therefore move the assignment
of the request->ring to the central i915_gem_request_alloc() and not
require it in every engine->request_alloc() callback. Another small step
towards simplification (of the core, but at a cost of handling error
pointers in less important callers of engine->context_pin()).

v2: Rearrange a few branches to reduce impact of PTR_ERR() on gcc's code
generation.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Oscar Mateo <oscar.mateo@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Oscar Mateo <oscar.mateo@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170504093308.4137-1-chris@chris-wilson.co.uk
2017-05-04 11:54:43 +01:00
Chris Wilson 49f08598bf drm/i915: Switch the global i915.semaphores check to a local predicate
Rather than use a global modparam, we can just check to see if the
engine has semaphores configured upon it.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170503093924.5320-7-chris@chris-wilson.co.uk
2017-05-03 11:08:53 +01:00
Chris Wilson fc9d4d2b6f drm/i915: Do not record a successful syncpoint for a dma-await
As we may unwind the requests, even though the request we are awaiting
has a global_seqno, that seqno may be revoked during the await, and so we
cannot reliably use it as a barrier for all future awaits on the same
timeline.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170503093924.5320-6-chris@chris-wilson.co.uk
2017-05-03 11:08:53 +01:00
Chris Wilson 7e8894e97d drm/i915: Rename intel_timeline.sync_seqno[] to .global_sync[]
With the addition of the inter-context intel_timeline.sync map, having a
very similar sync_seqno[] is confusing. Aid the reader by denoting that
this is a pre-allocated array for storing semaphore sync points with
respect to the global seqno.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170503093924.5320-5-chris@chris-wilson.co.uk
2017-05-03 11:08:52 +01:00
Chris Wilson 4797948071 drm/i915: Squash repeated awaits on the same fence
Track the latest fence waited upon on each context, and only add a new
asynchronous wait if the new fence is more recent than the recorded
fence for that context. This requires us to filter out unordered
timelines, which are noted by DMA_FENCE_NO_CONTEXT. However, in the
absence of a universal identifier, we have to use our own
i915->mm.unordered_timeline token.
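
A sketch of the filtering (intel_timeline_sync_is_later() is named in
the v7 note below; the surrounding helper and exact signatures are
assumptions):

    static int await_dma_fence_sketch(struct i915_gem_request *rq,
                                      struct dma_fence *fence)
    {
            /* Unordered timelines can never be squashed. */
            if (fence->context != rq->i915->mm.unordered_timeline &&
                intel_timeline_sync_is_later(rq->timeline, fence))
                    return 0;  /* already await an equal-or-later fence */

            return emit_await_sketch(rq, fence);    /* hypothetical */
    }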

v2: Throw around the debug crutches
v3: Inline the likely case of the pre-allocation cache being full.
v4: Drop the pre-allocation support, we can lose the most recent fence
in case of allocation failure -- it just means we may emit more awaits
than strictly necessary but will not break.
v5: Trim allocation size for leaf nodes, they only need an array of u32
not pointers.
v6: Create mock_timeline to tidy selftest writing
v7: s/intel_timeline_sync_get/intel_timeline_sync_is_later/ (Tvrtko)
v8: Prune the stale sync points when we idle.
v9: Include a small benchmark in the kselftests
v10: Separate the idr implementation into its own compartment. (Tvrtko)
v11: Refactor igt_sync kselftests to avoid deep nesting (Tvrtko)
v12: __sync_leaf_idx() to assert that p->height is 0 when checking leaves
v13: kselftests to investigate struct i915_syncmap itself (Tvrtko)
v14: Foray into ascii art graphs
v15: Take into account that the random lookup/insert does 2 prng calls,
not 1, when benchmarking, and use for_each_set_bit() (Tvrtko)
v16: Improved ascii art

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170503093924.5320-4-chris@chris-wilson.co.uk
2017-05-03 11:08:48 +01:00
Chris Wilson ceae14bd4c drm/i915: Lift timeline ordering to await_dma_fence
Currently we filter out repeated use of the same timeline in the low
level i915_gem_request_await_request(), after having added the
dependency on the old request. However, we can lift this to
i915_gem_request_await_dma_fence() (before the dependency is added)
using the observation that requests along the same timeline are
explicitly ordered via i915_add_request (along with the dependencies).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170503093924.5320-3-chris@chris-wilson.co.uk
2017-05-03 11:08:47 +01:00
Chris Wilson 29ef3fa987 drm/i915: Unwrap top level fence-array
By first unwrapping an incoming fence-array into its child fences, we
can simplify the internal branching, and so avoid triggering a potential
bug in the next patch when not squashing the child fences on the same
timeline.

It will also have the advantage of keeping the (top-level) fence arrays
out of any fence/timeline caching since these are unordered timelines
but with a random context id.
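
In outline, the unwrap uses the standard dma-fence-array accessors (a
sketch; await_single_sketch() is a hypothetical per-fence helper):

    #include <linux/dma-fence-array.h>

    static int await_fence_sketch(struct i915_gem_request *rq,
                                  struct dma_fence *fence)
    {
            if (dma_fence_is_array(fence)) {
                    struct dma_fence_array *array = to_dma_fence_array(fence);
                    unsigned int i;
                    int ret;

                    for (i = 0; i < array->num_fences; i++) {
                            ret = await_single_sketch(rq, array->fences[i]);
                            if (ret)
                                    return ret;
                    }
                    return 0;
            }

            return await_single_sketch(rq, fence);
    }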

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170503093924.5320-2-chris@chris-wilson.co.uk
2017-05-03 11:08:46 +01:00
Chris Wilson f7b02a529a drm/i915: Skip waking the signaler when enabling before request submission
If we are enabling the breadcrumbs signaling prior to submitting the
request, we know that we cannot have missed the interrupt and can
therefore skip immediately waking the signaler to check.

This reduces a significant chunk of the __i915_gem_request_submit()
overhead for inter-engine synchronisation, for example in gem_exec_whisper.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170426080659.28771-1-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2017-04-26 11:51:31 +01:00
Chris Wilson e6ba9992de drm/i915: Differentiate between sw write location into ring and last hw read
We need to keep track of the last location we ask the hw to read up to
(RING_TAIL) separately from our last write location into the ring, so
that in the event of a GPU reset we do not tell the HW to proceed into
a partially written request (which can happen if that request is waiting
for an external signal before being executed).
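
The distinction, sketched as a structure (member names are illustrative;
the point is that the software write location may run ahead of what the
hardware is told):

    struct ring_pointers_sketch {
            u32 head;  /* up to where requests have been retired */
            u32 tail;  /* last location the hw was asked to read up to */
            u32 emit;  /* sw write location; may be ahead of tail while a
                        * request still waits on an external signal */
    };

On reset, RING_TAIL is restored from tail, never from the software write
location, so the hardware cannot run into a partially written request.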

v2: Refactor intel_ring_reset() (Mika)

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100144
Testcase: igt/gem_exec_fence/await-hang
Fixes: 821ed7df6e ("drm/i915: Update reset path to fix incomplete requests")
Fixes: d55ac5bf97 ("drm/i915: Defer transfer onto execution timeline to actual hw submission")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170425130049.26147-1-chris@chris-wilson.co.uk
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
2017-04-25 15:33:22 +01:00
Chris Wilson ade0b0c965 drm/i915: Confirm the request is still active before adding it to the await
Although we do check the completion-status of the request before
actually adding a wait on it (either to its submit fence or its
completion dma-fence), we currently do not check before adding it to the
dependency lists.

In fact, without checking for a completed request we may try to use the
signaler after it has been retired and its dependency tree freed:

[   60.044057] BUG: KASAN: use-after-free in __list_add_valid+0x1d/0xd0 at addr ffff880348c9e6a0
[   60.044118] Read of size 8 by task gem_exec_fence/530
[   60.044164] CPU: 1 PID: 530 Comm: gem_exec_fence Tainted: G            E   4.11.0-rc7+ #46
[   60.044226] Hardware name: <unreadable DMI vendor/board strings>, BIOS RYBDWi35.86A.0246.2
[   60.044290] Call Trace:
[   60.044337]  dump_stack+0x4d/0x6a
[   60.044383]  kasan_object_err+0x21/0x70
[   60.044435]  kasan_report+0x225/0x4e0
[   60.044488]  ? __list_add_valid+0x1d/0xd0
[   60.044534]  ? kasan_kmalloc+0xad/0xe0
[   60.044587]  __asan_load8+0x5e/0x70
[   60.044639]  __list_add_valid+0x1d/0xd0
[   60.044788]  __i915_priotree_add_dependency+0x67/0x130 [i915]
[   60.044895]  i915_gem_request_await_request+0xa8/0x370 [i915]
[   60.044974]  i915_gem_request_await_dma_fence+0x129/0x140 [i915]
[   60.045049]  i915_gem_do_execbuffer.isra.37+0xb0a/0x26b0 [i915]
[   60.045077]  ? save_stack+0xb1/0xd0
[   60.045105]  ? save_stack_trace+0x1b/0x20
[   60.045132]  ? save_stack+0x46/0xd0
[   60.045158]  ? kasan_kmalloc+0xad/0xe0
[   60.045184]  ? __kmalloc+0xd8/0x670
[   60.045229]  ? drm_ioctl+0x359/0x640 [drm]
[   60.045256]  ? SyS_ioctl+0x41/0x70
[   60.045330]  ? i915_vma_move_to_active+0x540/0x540 [i915]
[   60.045360]  ? tty_insert_flip_string_flags+0xa1/0xf0
[   60.045387]  ? tty_flip_buffer_push+0x63/0x70
[   60.045414]  ? remove_wait_queue+0xa9/0xc0
[   60.045441]  ? kasan_unpoison_shadow+0x35/0x50
[   60.045467]  ? kasan_kmalloc+0xad/0xe0
[   60.045494]  ? kasan_check_write+0x14/0x20
[   60.045568]  i915_gem_execbuffer2+0xdb/0x2a0 [i915]
[   60.045616]  drm_ioctl+0x359/0x640 [drm]
[   60.045705]  ? i915_gem_execbuffer+0x5a0/0x5a0 [i915]
[   60.045751]  ? drm_version+0x150/0x150 [drm]
[   60.045778]  ? compat_start_thread+0x60/0x60
[   60.045805]  ? plist_del+0xda/0x1a0
[   60.045833]  do_vfs_ioctl+0x12e/0x910
[   60.045860]  ? ioctl_preallocate+0x130/0x130
[   60.045886]  ? pci_mmcfg_check_reserved+0xc0/0xc0
[   60.045913]  ? vfs_write+0x196/0x240
[   60.045939]  ? __fget_light+0xa7/0xc0
[   60.045965]  SyS_ioctl+0x41/0x70
[   60.045991]  entry_SYSCALL_64_fastpath+0x17/0x98
[   60.046017] RIP: 0033:0x7feb2baefc47
[   60.046042] RSP: 002b:00007fff56d28e58 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[   60.046075] RAX: ffffffffffffffda RBX: 00007fff56d290a8 RCX: 00007feb2baefc47
[   60.046102] RDX: 00007fff56d29050 RSI: 00000000c0406469 RDI: 0000000000000003
[   60.046129] RBP: 00007fff56d29050 R08: 000055ecc4cd27d0 R09: 00007feb2bda8600
[   60.046154] R10: 0000000000000073 R11: 0000000000000246 R12: 00000000c0406469
[   60.046177] R13: 0000000000000003 R14: 000000000000000f R15: 0000000000000099
[   60.046203] Object at ffff880348c9e680, in cache i915_dependency size: 64
[   60.046225] Allocated:
[   60.046246] PID = 530
[   60.046269]  save_stack_trace+0x1b/0x20
[   60.046292]  save_stack+0x46/0xd0
[   60.046318]  kasan_kmalloc+0xad/0xe0
[   60.046343]  kasan_slab_alloc+0x12/0x20
[   60.046368]  kmem_cache_alloc+0xab/0x650
[   60.046445]  i915_gem_request_await_request+0x88/0x370 [i915]
[   60.046559]  i915_gem_request_await_dma_fence+0x129/0x140 [i915]
[   60.046705]  i915_gem_do_execbuffer.isra.37+0xb0a/0x26b0 [i915]
[   60.046849]  i915_gem_execbuffer2+0xdb/0x2a0 [i915]
[   60.046936]  drm_ioctl+0x359/0x640 [drm]
[   60.046987]  do_vfs_ioctl+0x12e/0x910
[   60.047038]  SyS_ioctl+0x41/0x70
[   60.047090]  entry_SYSCALL_64_fastpath+0x17/0x98
[   60.047139] Freed:
[   60.047179] PID = 530
[   60.047223]  save_stack_trace+0x1b/0x20
[   60.047269]  save_stack+0x46/0xd0
[   60.047317]  kasan_slab_free+0x72/0xc0
[   60.047366]  kmem_cache_free+0x39/0x160
[   60.047512]  i915_gem_request_retire+0x83f/0x930 [i915]
[   60.047657]  i915_gem_request_alloc+0x166/0x600 [i915]
[   60.047799]  i915_gem_do_execbuffer.isra.37+0xad8/0x26b0 [i915]
[   60.047897]  i915_gem_execbuffer2+0xdb/0x2a0 [i915]
[   60.047942]  drm_ioctl+0x359/0x640 [drm]
[   60.047968]  do_vfs_ioctl+0x12e/0x910
[   60.047993]  SyS_ioctl+0x41/0x70
[   60.048019]  entry_SYSCALL_64_fastpath+0x17/0x98
[   60.048044] Memory state around the buggy address:
[   60.048066]  ffff880348c9e580: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
[   60.048105]  ffff880348c9e600: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
[   60.048138] >ffff880348c9e680: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   60.048170]                                ^
[   60.048191]  ffff880348c9e700: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
[   60.048225]  ffff880348c9e780: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc

Note that to hit the use-after-free we must be passed back a request
via a fence-array, that is from explicit fencing accumulated into a
sync-file fence-array.
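
The shape of the fix, as a sketch (the completed-request check guards
the dependency-list manipulation; add_dependency_sketch() approximates
the priotree helper seen in the trace above):

    static int await_request_sketch(struct i915_gem_request *to,
                                    struct i915_gem_request *from)
    {
            /* A completed request may already be retired and its
             * dependency tree freed -- do not touch its priotree at all.
             */
            if (i915_gem_request_completed(from))
                    return 0;

            return add_dependency_sketch(&to->priotree, &from->priotree);
    }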

Fixes: 52e5420907 ("drm/i915/scheduler: Record all dependencies upon request construction")
Testcase: igt/gem_exec_fence/expired-history
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170422081537.6468-1-chris@chris-wilson.co.uk
2017-04-22 09:47:44 +01:00
Chris Wilson b0fd47adc6 drm/i915: Copy user requested buffers into the error state
Introduce a new execobject.flag (EXEC_OBJECT_CAPTURE) that userspace may
use to indicate that it wants the contents of this buffer preserved in
the error state (/sys/class/drm/cardN/error) following a GPU hang
involving this batch.

Use this at your discretion: the contents of the error state, although
compressed, are allocated with GFP_ATOMIC (i.e. limited) and kept for all
eternity (until the error state is destroyed).
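
From userspace this is a per-object flag in the execbuf; a minimal
sketch (assuming the uapi flag lands in <drm/i915_drm.h> as
EXEC_OBJECT_CAPTURE):

    #include <drm/i915_drm.h>

    static void mark_for_capture(struct drm_i915_gem_exec_object2 *obj)
    {
            /* Ask the kernel to snapshot this BO into the error state
             * (/sys/class/drm/cardN/error) if a hang involves this batch.
             */
            obj->flags |= EXEC_OBJECT_CAPTURE;
    }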

Based on an earlier patch by Ben Widawsky <ben@bwidawsk.net>
Testcase: igt/gem_exec_capture
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <ben@bwidawsk.net>
Cc: Matt Turner <mattst88@gmail.com>
Acked-by: Ben Widawsky <ben@bwidawsk.net>
Acked-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170415093902.22581-1-chris@chris-wilson.co.uk
2017-04-15 12:39:57 +01:00
Chris Wilson cbb60b4b98 drm/i915: Advance ring->head fully when idle
When we retire the last request on the ring, before we ever access that
ring again we know it will be completely idle and so we can advance the
ring->head fully to the end (i.e. ring->tail) and not just to the start
of the breadcrumb. This allows us to skip re-emitting the breadcrumb
after resetting the GPU if the ring was entirely idle. This prevents us
from overwriting a seqno wraparound by re-executing a stale breadcrumb,
i.e.
	submit_request(1)
	intel_engine_init_global_seqno(0)
	i915_reset()
would then leave 1 in the HWS, but the next request to execute would
also be with seqno 1. The sanity checks upon submission detect this as a
timewarp and explode. By setting the ring as empty, upon reset the HWS
is left as 0, leaving it consistent with the timeline.
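
In outline (a sketch using list_is_last(), per the v3 note; field names
approximate this era's intel_ring bookkeeping):

    static void advance_ring_sketch(struct intel_ring *ring,
                                    struct i915_gem_request *rq)
    {
            if (list_is_last(&rq->ring_link, &ring->request_list)) {
                    /* Ring is completely idle: consume everything,
                     * including the breadcrumb, so a reset leaves the
                     * HWS at 0.
                     */
                    ring->head = ring->tail;
            } else {
                    ring->head = rq->postfix; /* start of the breadcrumb */
            }
    }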

v2: Fix check for deleting last element of list. We know that this
request is always the first element of the ring, so only if next
points back to the start will this be the only request in flight.
v3: Remove opencoding of list_is_last()
v4: Move the block to its own function for some clarity.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100144
Testcase: igt/gem_exec_whisper/hang-*
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170406170028.26871-1-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
2017-04-07 10:10:58 +01:00
Chris Wilson 2ca9faa551 drm/i915: Assert the engine is idle before overwriting the HWS
When we update the global seqno (on the engine timeline), we modify HW
state (both registers and mapped pages). As we do this, we should be
sure that the HW is idle and we are not causing a conflict. The caller
is supposed to wait_for_idle before calling us to update the seqno, so
let's assert they have and the engine is indeed idle.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170405153055.28123-1-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
2017-04-07 10:10:13 +01:00
Chris Wilson ae351beb1f drm/i915: Combine reset_all_global_seqno() loops into one
We can merge the pair of loops over the engines and their timelines into
a single loop, making it easier to read and more consistent with the
commentary.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170330145041.9005-6-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
2017-03-31 12:10:53 +01:00
Chris Wilson 7a453fb82f drm/i915: Remove redundant wait for each engine to idle from seqno wrap
Having added the wait upon each engine to idle into the central
i915_gem_wait_for_idle(), we can remove the now redundant wait from
reset_all_global_seqno(). This has the advantage of removing the late
detection of an error (an engine still busy) which left the seqno reset
only partially complete (though it should be safe enough!).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170330145041.9005-5-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
2017-03-31 12:09:46 +01:00
Chris Wilson 72022a705e drm/i915: Move retire-requests into i915_gem_wait_for_idle()
As we now distinguish everywhere that can call
i915_gem_retire_requests() following a successful wait_for_idle, we can
remove the duplication by moving that call into i915_gem_wait_for_idle()
itself.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170330145041.9005-3-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
2017-03-31 12:03:46 +01:00
Chris Wilson 05506b5be0 drm/i915: Use a dummy timeline name for a signaled fence
Michał Winiarski pointed out that the debugging infrastructure (such as
trace_dma_fence_release) likes to pretty print the timeline name, long
after we have freed the timeline. Our timelines currently live as part of
the GTT (due to the strict ordering we currently use through each), which
belongs to the context. We aim to free the context and release its
hardware resources as soon as we are able to (i.e. when the last
fence/request using it has been signaled and retired). As the
.get_timeline_name is purely a debug feature, rather than extending the
lifetime of the context, or splitting it into many different release
phases just to keep the name around, replace the timeline name with a
constant after the fence has been signaled. This avoids the potential
use-after-free.
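
The fix in miniature (a sketch of a .get_timeline_name callback; the
constant string and the fallback helper are illustrative):

    #include <linux/dma-fence.h>

    static const char *timeline_name_sketch(struct dma_fence *fence)
    {
            /* Once signaled, the request may outlive its timeline (and
             * the context that owns the name), so report a constant.
             */
            if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
                    return "signaled";

            return real_timeline_name_sketch(fence); /* hypothetical */
    }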

Reported-by: Krzysztof Olinski <krzysztof.e.olinski@intel.com>
Fixes: 80b204bce8 ("drm/i915: Enable multiple timelines")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: <stable@vger.kernel.org> # v4.10+
Link: http://patchwork.freedesktop.org/patch/msgid/20170330111614.29757-1-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
2017-03-30 14:18:38 +01:00
Chris Wilson fe085f13c7 drm/i915: Remove intel_ring.last_retired_head
Storing the position of the breadcrumb of the last retired request as
a separate last_retired_head is superfluous as we always copy that into
head prior to recalculation of the intel_ring.space.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170321102552.24357-1-chris@chris-wilson.co.uk
2017-03-21 14:21:50 +00:00
Chris Wilson 8c185ecaf4 drm/i915: Split I915_RESET_IN_PROGRESS into two flags
I915_RESET_IN_PROGRESS is being used for both signaling the requirement
to i915_mutex_lock_interruptible() to avoid taking the struct_mutex and
to instruct a waiter (already holding the struct_mutex) to perform the
reset. To allow for a little more coordination, split these two meanings
into a pair of distinct flags. I915_RESET_BACKOFF tells
i915_mutex_lock_interruptible() not to acquire the mutex and
I915_RESET_HANDOFF tells the waiter to call i915_reset().
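
Sketched as bits on the gpu_error flags word (bit positions are
illustrative):

    #define I915_RESET_BACKOFF 0  /* sketch: don't take struct_mutex */
    #define I915_RESET_HANDOFF 1  /* sketch: mutex holder should reset */

    static int lock_interruptible_sketch(struct i915_gpu_error *error)
    {
            if (test_bit(I915_RESET_BACKOFF, &error->flags))
                    return -EAGAIN; /* back off; reset in progress */
            return 0;
    }

    static void waiter_sketch(struct drm_i915_private *i915,
                              struct i915_gpu_error *error)
    {
            if (test_and_clear_bit(I915_RESET_HANDOFF, &error->flags))
                    i915_reset(i915); /* holder performs the reset */
    }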

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170316171305.12972-1-chris@chris-wilson.co.uk
2017-03-16 17:17:10 +00:00
Dave Airlie 2e16101780 Merge tag 'drm-intel-next-2017-03-06' of git://anongit.freedesktop.org/git/drm-intel into drm-next
4 weeks' worth of stuff since I was traveling & lazy:

- lspcon improvements (Imre)
- proper atomic state for cdclk handling (Ville)
- gpu reset improvements (Chris)
- lots and lots of polish around fences, requests, waiting and
  everything related all over (both gem and modeset code), from Chris
- atomic by default on gen5+ minus byt/bsw (Maarten did the patch to
  flip the default, really this is a massive joint team effort)
- moar power domains, now 64bit (Ander)
- big pile of in-kernel unit tests for various gem subsystems (Chris),
  including simple mock objects for the i915 device and the ggtt
  manager.
- i915_gpu_info in debugfs, for taking a snapshot of the current gpu
  state. Same thing as i915_error_state, but useful if the kernel didn't
  notice something is stuck. From Chris.
- bxt dsi fixes (Umar Shankar)
- bxt w/a updates (Jani)
- no more struct_mutex for gem object unreference (Chris)
- some execlist refactoring (Tvrtko)
- color manager support for glk (Ander)
- improve the power-well sync code to better take over from the
  firmware (Imre)
- gem tracepoint polish (Tvrtko)
- lots of glk fixes all around (Ander)
- ctx switch improvements (Chris)
- glk dsi support&fixes (Deepak M)
- dsi fixes for vlv and cleanups, lots of them (Hans de Goede)
- switch to i915.ko types in lots of our internal modeset code (Ander)
- byt/bsw atomic wm update code, yay (Ville)

* tag 'drm-intel-next-2017-03-06' of git://anongit.freedesktop.org/git/drm-intel: (432 commits)
  drm/i915: Update DRIVER_DATE to 20170306
  drm/i915: Don't use enums for hardware engine id
  drm/i915: Split breadcrumbs spinlock into two
  drm/i915: Refactor wakeup of the next breadcrumb waiter
  drm/i915: Take reference for signaling the request from hardirq
  drm/i915: Add FIFO underrun tracepoints
  drm/i915: Add cxsr toggle tracepoint
  drm/i915: Add VLV/CHV watermark/FIFO programming tracepoints
  drm/i915: Add plane update/disable tracepoints
  drm/i915: Kill level 0 wm hack for VLV/CHV
  drm/i915: Workaround VLV/CHV sprite1->sprite0 enable underrun
  drm/i915: Sanitize VLV/CHV watermarks properly
  drm/i915: Only use update_wm_{pre,post} for pre-ilk platforms
  drm/i915: Nuke crtc->wm.cxsr_allowed
  drm/i915: Compute proper intermediate wms for vlv/cvh
  drm/i915: Skip useless watermark/FIFO related work on VLV/CHV when not needed
  drm/i915: Compute vlv/chv wms the atomic way
  drm/i915: Compute VLV/CHV FIFO sizes based on the PM2 watermarks
  drm/i915: Plop vlv/chv fifo sizes into crtc state
  drm/i915: Plop vlv wm state into crtc_state
  ...
2017-03-08 12:41:47 +10:00
Chris Wilson 5400367a86 drm/i915: Ensure the engine is idle before manually changing HWS
During reset_all_global_seqno() on seqno rollover, we have to update the
HWS. This requires all in-flight requests to be completed, so first we
wait. However, we were only waiting for the requests themselves to be
completed and clearing out the waiter rbtrees - what I had missed was
the extra reference in execlists->port[]. Since commit fe9ae7a3bf
("drm/i915/execlists: Detect an out-of-order context switch") we can
detect when the request is retired before the context switch interrupt
is completed. The impact should be negligible outside of debugging.

Testcase: igt/gem_exec_whisper
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170303121947.20482-1-chris@chris-wilson.co.uk
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
2017-03-03 13:08:04 +00:00
Chris Wilson c8659efac5 drm/i915: Drop spinlocks around adding to the client request list
We can add to the tail of the client request list without locking, as
the only other user is the throttle ioctl, which iterates forwards over
the list. It only needs protection against deletion of a request as it
reads it; it simply won't see a new request added to the end of the
list, or it would be too early and rejected. We can further reduce the
number of spinlocks
required when throttling by removing stale requests from the client_list
as we throttle.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170302122525.19675-1-chris@chris-wilson.co.uk
2017-03-02 22:33:41 +00:00
Chris Wilson 675204153e drm/i915: s/assert_spin_locked/lockdep_assert_held/
assert_spin_locked() becomes an unconditionally compiled BUG_ON(),
adding debug code right into the heart of critical routines like
interrupt handlers.

   text	   data	    bss	    dec	    hex
1296480	  19944	   2272	1318696	 141f28	before (lockdep disabled)
1295984	  19944	   2272	1318200	 141d38	after

1336261	  21139	   3208	1360608	 14c2e0	before (lockdep enabled)
1339920	  21139	   3208	1364267	 14d12b	after

Small saving for release; hopefully more instructive in debug.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170302132801.599-1-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
2017-03-02 15:18:55 +00:00
Chris Wilson e60a870d7f drm/i915: Assert that fence->lock is held in an irq-safe manner
Every time we take the fence->lock (aka request->lock), we must do so
with irqs disabled since it may be used from within a hardirq context.
As sometimes we are taking the lock in a nested manner, assert that the
caller did disable the irqs for us.
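
The assertion in outline (a sketch; lockdep_assert_held() and
irqs_disabled() are standard kernel helpers, GEM_BUG_ON() the driver's
debug assert):

    static void assert_nested_lock_sketch(struct dma_fence *fence)
    {
            /* The caller must have disabled irqs before taking our lock
             * from a nested context; verify rather than trust.
             */
            GEM_BUG_ON(!irqs_disabled());
            lockdep_assert_held(fence->lock);
    }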

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170302115130.28434-1-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
2017-03-02 15:18:55 +00:00
Ingo Molnar f361bf4a66 sched/headers: Prepare for the reduction of <linux/sched.h>'s signal API dependency
Instead of including the full <linux/signal.h>, we are going to include the
types-only <linux/signal_types.h> header in <linux/sched.h>, to further
decouple the scheduler header from the signal headers.

This means that various files which relied on the full <linux/signal.h> need
to be updated to gain an explicit dependency on it.

Update the code that relies on sched.h's inclusion of the <linux/signal.h> header.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-03-02 08:42:37 +01:00
Ingo Molnar e601757102 sched/headers: Prepare for new header dependencies before moving code to <linux/sched/clock.h>
We are going to split <linux/sched/clock.h> out of <linux/sched.h>, which
will have to be picked up from other headers and .c files.

Create a trivial placeholder <linux/sched/clock.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.

Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-03-02 08:42:27 +01:00
Chris Wilson 56299fb7d9 drm/i915: Signal first fence from irq handler if complete
As execlists and other non-semaphore multi-engine devices coordinate
between engines using interrupts, we can shave off a few tens of
microseconds of scheduling latency by doing the fence signaling from the
interrupt as opposed to an RT kthread. (Realistically the delay adds
about 1% to an individual cross-engine workload.) We only signal the
first fence in order to limit the amount of work we move into the
interrupt handler. We also have to remember that our breadcrumbs may be
unordered with respect to the interrupt and so we still require the
waiter process to perform some heavyweight coherency fixups, as well as
traversing the tree of waiters.

v2: No need for early exit in irq handler - it breaks the flow between
patches and prevents the tracepoint
v3: Restore rcu hold across irq signaling of request

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170227205850.2828-2-chris@chris-wilson.co.uk
2017-02-27 21:57:20 +00:00
Chris Wilson 0f2f61d4a8 drm/i915: Remove one level of indentation from wait-for-execute
Now that the code is getting simpler, we can reduce the indentation when
waiting for the global_seqno.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170223074422.4125-17-chris@chris-wilson.co.uk
2017-02-23 14:56:36 +00:00
Chris Wilson 24f417ec0c drm/i915: Immediately process a reset before starting waiting
As we handoff the GPU reset to the waiter, we need to check we don't
miss a wakeup if it has already been sent prior to us starting the wait.

v2: Tweak checking for reset to be clear to the need before sleeping
after changing the task state.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170223074422.4125-16-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2017-02-23 14:56:08 +00:00
Chris Wilson e07051142c drm/i915: Refactor direct GPU reset from request waiters
Combine the common code for the pair of waiters into a single function.

v2: Rename reset_request to wait_request_check_and_reset

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170223074422.4125-15-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2017-02-23 14:51:26 +00:00
Chris Wilson a49625f906 drm/i915: Replace reset_wait_queue with default_wake_function
If we change the wait_queue_t from using the autoremove_wake_function to
the default_wake_function, we no longer have to restore the wait_queue_t
entry on the wait_queue_head_t list after being woken up by it, as we
are unusual in sleeping multiple times on the same wait_queue_t.
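
The change of idiom, roughly (a sketch; DEFINE_WAIT_FUNC() is the
standard <linux/wait.h> helper):

    static void multi_sleep_sketch(wait_queue_head_t *q)
    {
            /* default_wake_function leaves the entry queued on wakeup,
             * unlike autoremove_wake_function, so one entry can be
             * slept on repeatedly without re-adding it each time.
             */
            DEFINE_WAIT_FUNC(wait, default_wake_function);

            add_wait_queue(q, &wait);
            /* ...sleep and be woken as many times as needed... */
            remove_wait_queue(q, &wait);
    }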

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170223074422.4125-14-chris@chris-wilson.co.uk
2017-02-23 14:51:02 +00:00
Chris Wilson d6a2289d9d drm/i915: Remove the preempted request from the execution queue
After the request is cancelled, we then need to remove it from the
global execution timeline and return it to the context timeline, the
inverse of submit_request().

v2: Move manipulation of struct intel_wait to helpers

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170223074422.4125-12-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2017-02-23 14:50:07 +00:00
Chris Wilson 754c9fd576 drm/i915: Protect the request->global_seqno with the engine->timeline lock
A request is assigned a global seqno only when it is on the hardware
execution queue. The global seqno can be used to maintain a list of
requests on the same engine in retirement order, for example for
constructing a priority queue for waiting. Prior to its execution, or
if it is subsequently removed in the event of preemption, its global
seqno is zero. As both insertion and removal from the execution queue
may operate in IRQ context, it is not guarded by the usual struct_mutex
BKL. Instead those relying on the global seqno must be prepared for its
value to change between reads. Only when the request is complete can
the global seqno be stable (due to the memory barriers on submitting
the commands to the hardware to write the breadcrumb: if the HWS shows
that it has passed the global seqno and the global seqno is unchanged
after the read, it is indeed complete).
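
The completion test described above, as a sketch (i915_seqno_passed()
and intel_engine_get_seqno() existed in this era; the double read guards
against preemption revoking the seqno between checks):

    static bool completed_sketch(const struct i915_gem_request *rq)
    {
            u32 seqno = READ_ONCE(rq->global_seqno);

            if (!seqno)
                    return false; /* not (or no longer) on the hw queue */

            /* Complete only if the HWS has passed seqno AND seqno was
             * stable across the check (i.e. not preempted meanwhile).
             */
            return i915_seqno_passed(intel_engine_get_seqno(rq->engine),
                                     seqno) &&
                   seqno == READ_ONCE(rq->global_seqno);
    }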

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170223074422.4125-9-chris@chris-wilson.co.uk
2017-02-23 14:49:32 +00:00
Chris Wilson fe49789fab drm/i915: Deconstruct execute fence
On reflection, we are only using the execute fence as a waitqueue on the
global_seqno and not using it for dependency tracking between fences
(unlike the submit and dma fences). By only treating it as a waitqueue,
we can then treat it similar to the other waitqueues during submit,
making the code simpler.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170223074422.4125-8-chris@chris-wilson.co.uk
2017-02-23 14:49:31 +00:00
Chris Wilson 541ca6ed79 drm/i915: Inline __i915_gem_request_wait_for_execute()
It had only one callsite and existed to keep the code clearer. Now
having shared the wait-on-error between phases and with plans to change
the wait-for-execute in the next few patches, remove the out of line
wait loop and move it into the main body of i915_wait_request.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170223074422.4125-7-chris@chris-wilson.co.uk
2017-02-23 14:49:30 +00:00
Chris Wilson 7de53bf7e6 drm/i915: Add ourselves to the gpu error waitqueue for the entire wait
Add ourselves to the gpu error waitqueue earlier on, even before we
determine we have to wait on the seqno. This is so that we can then
share the waitqueue between stages in subsequent patches.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170223074422.4125-6-chris@chris-wilson.co.uk
2017-02-23 14:49:29 +00:00
Chris Wilson 4b36b2e506 drm/i915: Use a local to shorten req->i915->gpu_error.wait_queue
Use a local variable to avoid having to type out the full name of the
gpu_error wait_queue.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170223074422.4125-5-chris@chris-wilson.co.uk
2017-02-23 14:49:29 +00:00
Chris Wilson 12d3173b2e drm/i915: Move reserve_seqno() next to unreserve_seqno()
Move the companion functions next to each other.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170223074422.4125-4-chris@chris-wilson.co.uk
2017-02-23 14:49:28 +00:00
Chris Wilson 9b6586ae9f drm/i915: Keep a global seqno per-engine
Replace the global device seqno with one for each engine, and account
for in-flight seqno on each separately. This is consistent with
dma-fence as each timeline has separate fence-contexts for each engine
and a seqno is only ordered within a fence-context (i.e. seqnos do not
need to be ordered with respect to other engines, just within a single
engine). This is required to enable request rewinding for preemption on
individual engines (we have to rewind the global seqno to avoid
overflow, and we do not have to rewind all engines just to preempt one.)

v2: Rename active_seqno to inflight_seqnos to more clearly indicate that
it is a counter and not equivalent to the existing seqno. Update
functions that operated on active_seqno similarly.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170223074422.4125-3-chris@chris-wilson.co.uk
2017-02-23 14:49:26 +00:00
Tvrtko Ursulin 354d036fcf drm/i915/tracepoints: Add request submit and execute tracepoints
These new tracepoints are emitted once the request is ready to
be submitted to the GPU and once it is actually about to be
submitted, respectively.

The former triggers as soon as all the fences and
dependencies have been resolved, and the latter once the
backend is about to submit the request to the GPU.

The new tracepoints are enabled via the new
DRM_I915_LOW_LEVEL_TRACEPOINTS Kconfig option, which is disabled
by default to alleviate performance impact concerns.

v2: Move execute tracepoint to __i915_gem_request_submit.
    (Chris Wilson)

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
2017-02-21 13:18:14 +00:00
Tvrtko Ursulin 9369250298 drm/i915/tracepoints: Tidy i915_gem_request_wait_begin
Provide the same information as the other request event classes.

v2: Pass in flags so we can properly report the blocking status.
    (Chris Wilson)

v3: Log hex with 0x prefix for clarity.

v4: Derive blocking status from flags. (Chris Wilson)

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
2017-02-21 13:18:04 +00:00