alistair23-linux/drivers/gpu/drm/i915/gt
Chris Wilson 6d06779e86 drm/i915: Load balancing across a virtual engine
Having allowed the user to define the set of engines they wish to use,
we go one step further and allow them to bind those engines into a
single virtual instance. Submitting a batch to the virtual engine will
then forward it to any one of the set, in whatever way best distributes
the load. The virtual engine has a single timeline across all engines
(it operates as a single queue), so it cannot concurrently run batches
across multiple engines by itself; that is left to the user, who may
submit multiple concurrent batches to multiple queues. Multiple users
will be load balanced across the system.
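
As a concrete illustration, here is a hedged userspace sketch (not part
of this patch) of binding two video engines into one virtual instance
via the I915_CONTEXT_ENGINES_EXT_LOAD_BALANCE extension introduced
here. The function name, the fd/ctx_id parameters and the vcs0/vcs1
pairing are assumptions for the example; error handling is omitted.

    /*
     * Sketch only: slot 0 of the context's engine map becomes a virtual
     * engine balancing vcs0 and vcs1, while slots 1-2 keep the physical
     * engines individually addressable. fd/ctx_id are assumed to be a
     * valid DRM fd and GEM context id.
     */
    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    static void bind_virtual_vcs(int fd, uint32_t ctx_id)
    {
            struct i915_engine_class_instance vcs0 = {
                    I915_ENGINE_CLASS_VIDEO, 0,
            };
            struct i915_engine_class_instance vcs1 = {
                    I915_ENGINE_CLASS_VIDEO, 1,
            };

            I915_DEFINE_CONTEXT_ENGINES_LOAD_BALANCE(balancer, 2);
            I915_DEFINE_CONTEXT_PARAM_ENGINES(engines, 3);

            memset(&balancer, 0, sizeof(balancer));
            balancer.base.name = I915_CONTEXT_ENGINES_EXT_LOAD_BALANCE;
            balancer.engine_index = 0; /* virtual engine lives in slot 0 */
            balancer.num_siblings = 2;
            balancer.engines[0] = vcs0;
            balancer.engines[1] = vcs1;

            memset(&engines, 0, sizeof(engines));
            engines.extensions = (uintptr_t)&balancer.base;
            /* slot 0: placeholder that the extension above replaces */
            engines.engines[0].engine_class = I915_ENGINE_CLASS_INVALID;
            engines.engines[0].engine_instance =
                    I915_ENGINE_CLASS_INVALID_NONE;
            engines.engines[1] = vcs0; /* still reachable directly */
            engines.engines[2] = vcs1;

            struct drm_i915_gem_context_param param = {
                    .ctx_id = ctx_id,
                    .param = I915_CONTEXT_PARAM_ENGINES,
                    .size = sizeof(engines),
                    .value = (uintptr_t)&engines,
            };
            ioctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &param);
    }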

The mechanism used for load balancing in this patch is a late greedy
balancer. When a request is ready for execution, it is added to each
engine's queue, and when an engine is ready for its next request it
claims it from the virtual engine. The first engine to do so wins, i.e.
the request is executed at the earliest opportunity (idle moment) in the
system.
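
To picture the claim step, here is a deliberately simplified,
hypothetical sketch; all names below are invented. The real code in
intel_lrc.c instead links the virtual engine into each sibling's
execlists and resolves the race under the engine locks, but the
winner-takes-the-request shape is the same.

    /* Kernel-style illustration only; names hypothetical. */
    #include <linux/list.h>
    #include <linux/spinlock.h>

    struct vrequest {
            struct list_head link;
    };

    struct vengine {
            spinlock_t lock;
            struct list_head queue; /* single timeline shared by siblings */
    };

    /* Called by each ready sibling; first to take the lock dequeues. */
    static struct vrequest *claim_vrequest(struct vengine *ve)
    {
            struct vrequest *rq;

            spin_lock(&ve->lock);
            rq = list_first_entry_or_null(&ve->queue,
                                          struct vrequest, link);
            if (rq)
                    list_del(&rq->link); /* this engine won the race */
            spin_unlock(&ve->lock);

            return rq; /* NULL: another sibling already claimed it */
    }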

As not all HW is created equal, the user is still able to skip the
virtual engine and execute the batch on a specific engine, all within
the same queue. It will then be executed in order on that engine, with
any virtual-engine work being migrated away from it by the load
detection.
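
Continuing the hypothetical three-slot map sketched earlier: once a
user engine map is installed, the execbuffer ring selector becomes an
index into that map, so the same context can either let the balancer
pick a sibling or pin a batch to one engine. The exec objects are
assumed to be set up elsewhere.

    /* Fragment continuing the earlier sketch; same includes assumed. */
    static void submit_two_ways(int fd, uint32_t ctx_id,
                                struct drm_i915_gem_exec_object2 *objects,
                                uint32_t nobjects)
    {
            struct drm_i915_gem_execbuffer2 eb = {
                    .buffers_ptr = (uintptr_t)objects,
                    .buffer_count = nobjects,
                    .rsvd1 = ctx_id, /* context carrying the engine map */
                    .flags = 0,      /* slot 0: virtual engine balances */
            };

            ioctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &eb);

            eb.flags = 1;            /* slot 1: pin this batch to vcs0 */
            ioctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &eb);
    }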

A few areas for potential improvement remain:

- The virtual engine always takes priority over equal-priority tasks.
This is mostly broken up by applying FQ_CODEL rules for prioritising
new clients, and hopefully the virtual and real engines are not then
congested (i.e. all work is via virtual engines, or all work is to the
real engine).

- We require the breadcrumb irq around every virtual engine request. For
normal engines, we eliminate the need for the slow round trip via
interrupt by using the submit fence and queueing in order. For virtual
engines, we have to allow any job to transfer to a new ring, and cannot
coalesce the submissions, so we require the completion fence instead,
forcing the persistent use of interrupts.

- We only drip-feed single requests through each virtual engine and
onto the physical engines, even if there is enough work to fill all
ELSP slots, leaving small stalls with an idle CS event at the end of
every request. Could we be greedy and fill both slots? Being lazy is
virtuous for load distribution on less-than-full workloads, though.

Other areas of improvement are more general, such as reducing lock
contention, reducing dispatch overhead, and looking at direct
submission rather than bouncing around tasklets.

sseu: Lift the restriction to allow sseu to be reconfigured on virtual
engines composed of RENDER_CLASS (rcs).
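
As a hedged sketch of what that lifted restriction permits: the flag
and struct below are the existing I915_CONTEXT_PARAM_SSEU uapi, but
the map layout (a virtual engine of rcs instances assumed at slot 0),
the mask values, and the treatment of engine_class in index mode are
assumptions of this example, not statements about the driver.

    /* Illustrative values only; not advice for any platform. */
    struct drm_i915_gem_context_param_sseu sseu = {
            .engine = {
                    .engine_class = 0,    /* assumed ignored by index */
                    .engine_instance = 0, /* index into the engine map */
            },
            .flags = I915_CONTEXT_SSEU_FLAG_ENGINE_INDEX,
            .slice_mask = 0x1,
            .subslice_mask = 0x1,
            .min_eus_per_subslice = 1,
            .max_eus_per_subslice = 8,
    };
    struct drm_i915_gem_context_param param = {
            .ctx_id = ctx_id, /* assumed valid context */
            .param = I915_CONTEXT_PARAM_SSEU,
            .size = sizeof(sseu),
            .value = (uintptr_t)&sseu,
    };

    ioctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &param);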

v2: macroize check_user_mbz()
v3: Cancel virtual engines on wedging
v4: Commence commenting
v5: Replace 64b sibling_mask with a list of class:instance
v6: Drop the one-element array in the uabi
v7: Assert it is a virtual engine in to_virtual_engine()
v8: Skip over holes in [class][inst] so we can selftest with (vcs0, vcs2)

Link: https://github.com/intel/media-driver/pull/283
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190521211134.16117-6-chris@chris-wilson.co.uk
2019-05-22 08:40:38 +01:00
Makefile
Makefile.header-test
intel_breadcrumbs.c drm/i915: Seal races between async GPU cancellation, retirement and signaling 2019-05-08 16:02:41 +01:00
intel_context.c drm/i915: Disable semaphore busywaits on saturated systems 2019-05-04 09:18:02 +01:00
intel_context.h drm/i915: Switch back to an array of logical per-engine HW contexts 2019-04-26 18:32:11 +01:00
intel_context_types.h drm/i915: Disable semaphore busywaits on saturated systems 2019-05-04 09:18:02 +01:00
intel_engine.h drm/i915/hangcheck: Replace hangcheck.seqno with RING_HEAD 2019-05-08 15:06:35 +01:00
intel_engine_cs.c drm/i915/hangcheck: Replace hangcheck.seqno with RING_HEAD 2019-05-08 15:06:35 +01:00
intel_engine_pm.c drm/i915/execlists: Flush the tasklet on parking 2019-05-03 11:35:31 +01:00
intel_engine_pm.h drm/i915/execlists: Flush the tasklet on parking 2019-05-03 11:35:31 +01:00
intel_engine_types.h drm/i915: Load balancing across a virtual engine 2019-05-22 08:40:38 +01:00
intel_gpu_commands.h
intel_gt_pm.c drm/i915: Invert the GEM wakeref hierarchy 2019-04-24 22:26:49 +01:00
intel_gt_pm.h drm/i915: Invert the GEM wakeref hierarchy 2019-04-24 22:26:49 +01:00
intel_hangcheck.c drm/i915/hangcheck: Replace hangcheck.seqno with RING_HEAD 2019-05-08 15:06:35 +01:00
intel_lrc.c drm/i915: Load balancing across a virtual engine 2019-05-22 08:40:38 +01:00
intel_lrc.h drm/i915: Load balancing across a virtual engine 2019-05-22 08:40:38 +01:00
intel_lrc_reg.h
intel_mocs.c
intel_mocs.h
intel_reset.c drm/i915: Reboot CI if forcewake fails 2019-05-08 13:58:31 +01:00
intel_reset.h drm/i915: Invert the GEM wakeref hierarchy 2019-04-24 22:26:49 +01:00
intel_ringbuffer.c drm/i915/hangcheck: Replace hangcheck.seqno with RING_HEAD 2019-05-08 15:06:35 +01:00
intel_sseu.c
intel_sseu.h
intel_workarounds.c drm/i915/icl: Whitelist GEN9_SLICE_COMMON_ECO_CHICKEN1 2019-04-30 07:50:58 +01:00
intel_workarounds.h
intel_workarounds_types.h
mock_engine.c drm/i915: Switch back to an array of logical per-engine HW contexts 2019-04-26 18:32:11 +01:00
mock_engine.h drm/i915: Split engine setup/init into two phases 2019-04-26 18:32:07 +01:00
selftest_engine_cs.c
selftest_hangcheck.c drm/i915: Move i915_request_alloc into selftests/ 2019-04-26 18:32:20 +01:00
selftest_lrc.c drm/i915: Load balancing across a virtual engine 2019-05-22 08:40:38 +01:00
selftest_workarounds.c drm/i915: Move i915_request_alloc into selftests/ 2019-04-26 18:32:20 +01:00