Commit Graph

194 Commits (c20f5b9633bb0953bd2422f0f1430a2028cdbd0a)

Author SHA1 Message Date
Andy Grover c20f5b9633 RDS/IB: Use SLAB_HWCACHE_ALIGN flag for kmem_cache_create()
We are *definitely* counting cycles as closely as DaveM, so
ensure hwcache alignment for our recv ring control structs.
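
A minimal sketch of what this implies at cache-creation time (the
rds_ib_incoming cache from the recv path; exact call site assumed):

    /* cache-line align recv ring control structs to avoid false sharing */
    rds_ib_incoming_slab = kmem_cache_create("rds_ib_incoming",
                                             sizeof(struct rds_ib_incoming),
                                             0, SLAB_HWCACHE_ALIGN, NULL);
    if (!rds_ib_incoming_slab)
        return -ENOMEM;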

Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:16:38 -07:00
Zach Brown d455ab6409 RDS/IB: always process recv completions
The recv refill path was leaking fragments because the recv event handler had
marked a ring element as free without freeing its frag.  This was happening
because it wasn't processing receives when the conn wasn't marked up or
connecting, as can be the case if it races with rmmod.

Two observations support always processing receives in the callback.

First, buildup should only post receives, thus triggering recv event handler
calls, once it has built up all the state to handle them.  Teardown should
destroy the CQ and drain the ring before tearing down the state needed to
process recvs.  Both appear to be true today.

Second, this test was fundamentally racy.  There is nothing to stop rmmod and
connection destruction from swooping in the moment after the conn state was
sampled but before real receive processing starts.

Signed-off-by: Zach Brown <zach.brown@oracle.com>
2010-09-08 18:16:36 -07:00
Zach Brown 80c51be56f RDS: return to a single-threaded krdsd
We were seeing very nasty bugs due to a fundamental assumption the current code
makes about concurrent work struct processing.  The code simply isn't able to
handle concurrent connection shutdown work function execution today, for
example, which became very much possible once a multi-threaded krdsd was
introduced.  The problem compounds as additional work structs are added to the
mix.

krdsd is no longer performance critical now that send and receive posting and
FMR flushing are done elsewhere, so the safest fix is to move back to the
single threaded krdsd that the current code was built around.
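
As a sketch, the fix amounts to a one-call change at init time (assuming the
queue is created in the rds threads setup):

    /* one worker thread: work structs queued here never run concurrently,
     * which is the serialization the shutdown path was built around */
    rds_wq = create_singlethread_workqueue("krdsd");
    if (!rds_wq)
        return -ENOMEM;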

Signed-off-by: Zach Brown <zach.brown@oracle.com>
2010-09-08 18:16:35 -07:00
Zach Brown 515e079dab RDS/IB: create a work queue for FMR flushing
This patch moves the FMR flushing work into its own multi-threaded work queue.
This is to maintain performance in preparation for returning the main krdsd
work queue back to a single threaded work queue to avoid deep-rooted
concurrency bugs.

This is also good because it further separates FMRs, which might be removed
some day, from the rest of the code base.
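
A sketch of the split (queue name assumed):

    /* a separate, multi-threaded queue keeps FMR flushes concurrent even
     * though krdsd itself is going back to a single thread */
    rds_ib_fmr_wq = create_workqueue("rds_fmr_flushd");
    if (!rds_ib_fmr_wq)
        return -ENOMEM;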

Signed-off-by: Zach Brown <zach.brown@oracle.com>
2010-09-08 18:16:34 -07:00
Zach Brown 8aeb1ba663 RDS/IB: destroy connections on rmmod
IB connections were not being destroyed during rmmod.

First, the IB device removal callback was recently changed to disconnect
connections that used the removed device rather than destroying them.  So
connections that still had devices at rmmod time were not being destroyed.

Second, rds_ib_destroy_nodev_conns() was being called before connections were
disassociated from their devices.  It would almost never find connections in the
nodev list.

We first get rid of rds_ib_destroy_conns(), which is no longer called, refactor
the existing caller into the main body of the function, and get rid of the list
and lock wrappers.

Then we call rds_ib_destroy_nodev_conns() *after* ib_unregister_client() has
removed the IB device from all the conns and put the conns on the nodev list.

The result is that IB connections are destroyed by rmmod.

Signed-off-by: Zach Brown <zach.brown@oracle.com>
2010-09-08 18:16:33 -07:00
Zach Brown 24fa163a4b RDS/IB: wait for IB dev freeing work to finish during rmmod
The RDS IB client removal callback can queue work to drop the final reference
to an IB device.  We have to make sure that this function has returned before
we complete rmmod or the work threads can try to execute freed code.
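
A sketch of the ordering this implies in module teardown (names from the RDS
IB client code; exact body assumed):

    void rds_ib_exit(void)
    {
        ib_unregister_client(&rds_ib_client);
        /* .remove may have queued work that drops the final device ref;
         * wait for it so the worker can't execute freed module text */
        flush_workqueue(rds_wq);
    }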

Signed-off-by: Zach Brown <zach.brown@oracle.com>
2010-09-08 18:16:32 -07:00
Andy Grover b6fb0df12d RDS/IB: Make ib_recv_refill return void
Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:16:31 -07:00
Andy Grover fbf4d7e3d0 RDS: Remove unused XLIST_PTR_TAIL and xlist_protect()
Not used.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:16:06 -07:00
Andy Grover c9455d9996 RDS: whitespace 2010-09-08 18:15:32 -07:00
Chris Mason 7a0ff5dbdd RDS: use delayed work for the FMR flushes
Using a delayed work queue helps us make sure a healthy number of FMRs
have queued up over the limit.  It makes for a large improvement in RDMA
iops.
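
A sketch of the deferral (pool field names assumed; the delay is in jiffies
and the value is illustrative):

    /* defer the flush so more free FMRs can batch up above the limit */
    queue_delayed_work(rds_wq, &pool->flush_worker, 10);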

Signed-off-by: Chris Mason <chris.mason@oracle.com>
2010-09-08 18:15:30 -07:00
Chris Mason eabb732279 rds: more FMRs are faster
When we add more FMRs, we flush them less often and so we go faster.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
2010-09-08 18:15:29 -07:00
Chris Mason 6fa70da608 rds: recycle FMRs through lockless lists
FMR allocation and recycling is performance critical and fairly lock
intensive.  The current code has a per connection lock that all
processes bang on and it becomes a major bottleneck on large systems.

This changes things to use a number of cmpxchg based lists instead,
allowing us to go through the whole FMR lifecycle without locking inside
RDS.

Zach Brown pointed out that our usage of cmpxchg for xlist removal is
racy if someone manages to remove and add back an FMR struct into the list
while another CPU can see the FMR's address at the head of the list.

The second CPU might assume the list hasn't changed when in fact any
number of operations might have happened in between the deletion and
reinsertion.

This commit maintains a per-CPU count of CPUs that are currently
in xlist removal, and establishes a grace period to make sure that
nobody can see an entry we have just removed from the list.
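
A sketch of the push half of the lockless scheme (types simplified); the pop
side is where the grace period described above comes in:

    struct xlist_head {
        struct xlist_head *next;
    };

    /* push a node by swinging the head pointer with cmpxchg; retry if
     * another CPU won the race and the head changed underneath us */
    static void xlist_add(struct xlist_head *node, struct xlist_head *head)
    {
        struct xlist_head *cur, *check;

        do {
            cur = head->next;
            node->next = cur;
            check = cmpxchg(&head->next, cur, node);
        } while (check != cur);
    }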

Signed-off-by: Chris Mason <chris.mason@oracle.com>
2010-09-08 18:15:28 -07:00
Zach Brown 0f4b1c7e89 rds: fix rds_send_xmit() serialization
rds_send_xmit() was changed to hold an interrupt masking spinlock instead of a
mutex so that it could be called from the IB receive tasklet path.  This broke
the TCP transport because its xmit method can block and masks and unmasks
interrupts.

This patch serializes callers to rds_send_xmit() with a simple bit instead of
the current spinlock or previous mutex.  This enables rds_send_xmit() to be
called from any context and to call functions which block.  Getting rid of the
c_send_lock exposes the bare c_lock acquisitions which are changed to block
interrupts.

A waitqueue is added so that rds_conn_shutdown() can wait for callers to leave
rds_send_xmit() before tearing down partial send state.  This lets us get rid
of c_senders.

rds_send_xmit() is changed to check the conn state after acquiring the
RDS_IN_XMIT bit to resolve races with the shutdown path.  Previously both
worked with the conn state and then the lock in the same order, allowing them
to race and execute the paths concurrently.

rds_send_reset() isn't racing with rds_send_xmit() now that rds_conn_shutdown()
properly ensures that rds_send_xmit() can't start once the conn state has been
changed.  We can remove its previous use of the spinlock.

Finally, c_send_generation is redundant.  Callers can race to test the c_flags
bit by simply retrying instead of racing to test the c_send_generation atomic.
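
A sketch of the bit-based serialization described here (helper names assumed):

    /* only one caller may be inside rds_send_xmit() at a time */
    static int acquire_in_xmit(struct rds_connection *conn)
    {
        return !test_and_set_bit(RDS_IN_XMIT, &conn->c_flags);
    }

    static void release_in_xmit(struct rds_connection *conn)
    {
        clear_bit(RDS_IN_XMIT, &conn->c_flags);
        smp_mb__after_clear_bit();
        /* rds_conn_shutdown() may be waiting for us to leave */
        if (waitqueue_active(&conn->c_waitq))
            wake_up_all(&conn->c_waitq);
    }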

Signed-off-by: Zach Brown <zach.brown@oracle.com>
2010-09-08 18:15:27 -07:00
Zach Brown 501dcccdb7 rds: block ints when acquiring c_lock in rds_conn_message_info()
conn->c_lock is acquired in interrupt context.  rds_conn_message_info() is
called from user context and was acquiring c_lock without blocking interrupts,
leading to possible deadlocks.
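
The fix is the standard irq-safe lock pattern (surrounding code elided):

    unsigned long flags;

    /* c_lock is also taken from interrupt context, so mask irqs here */
    spin_lock_irqsave(&conn->c_lock, flags);
    /* ... walk the send and retransmit queues ... */
    spin_unlock_irqrestore(&conn->c_lock, flags);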

Signed-off-by: Zach Brown <zach.brown@oracle.com>
2010-09-08 18:15:26 -07:00
Zach Brown 671202f349 rds: remove unused rds_send_acked_before()
rds_send_acked_before() wasn't blocking interrupts when acquiring c_lock from
user context but nothing calls it.  Rather than fix its use of c_lock we just
remove the function.

Signed-off-by: Zach Brown <zach.brown@oracle.com>
2010-09-08 18:15:25 -07:00
Chris Mason 037f18a307 RDS: use friendly gfp masks for prefill
When prefilling the rds frags, we end up doing a lot of allocations.
We're not in atomic context here, and so there's no reason to dip into
atomic reserves.  This changes the prefills to use masks that allow
waiting.
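
A sketch of the distinction (the can_wait plumbing is assumed):

    /* prefill runs in process context and may sleep; the tasklet refill
     * path may not, so callers pass a mask that matches their context */
    struct page *page = alloc_page(can_wait ? GFP_KERNEL : GFP_NOWAIT);
    if (!page)
        return -ENOMEM;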

Signed-off-by: Chris Mason <chris.mason@oracle.com>
2010-09-08 18:15:24 -07:00
Chris Mason 3324412587 RDS/IB: Add caching of frags and incs
This patch is based heavily on an initial patch by Chris Mason.
Instead of freeing slab memory and pages, it keeps them, and
funnels them back to be reused.

The lock minimization strategy uses xchg and cmpxchg atomic ops
for manipulation of pointers to list heads. We anchor the lists with a
pointer to a list_head struct instead of a static list_head struct.
We just have to use the existing primitives carefully, minding the difference
between a pointer and a static head struct.

For example, 'list_empty()' means that our anchor pointer points to a list with
a single item instead of meaning that our static head element doesn't point to
any list items.

Original patch by Chris, with significant mods and fixes by Andy and Zach.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
Signed-off-by: Andy Grover <andy.grover@oracle.com>
Signed-off-by: Zach Brown <zach.brown@oracle.com>
2010-09-08 18:15:23 -07:00
Andy Grover fc24f78085 RDS/IB: Remove ib_recv_unmap_page()
All it does is call unmap_sg(), so just call that directly.

The comment above unmap_page may also be incorrect, so we
shouldn't hold on to it, either.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:15:22 -07:00
Andy Grover 3427e854e1 RDS: Assume recv->r_frag is always NULL in refill_one()
refill_one() should never be called on a recv struct that
doesn't need a new r_frag allocated. Add a WARN and remove the
conditional around the r_frag alloc code.

Also, add a comment to explain why r_ibinc may or may not
need refilling.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:15:21 -07:00
Andy Grover 0b088e003c RDS: Use page_remainder_alloc() for recv bufs
Instead of splitting up a page into RDS_FRAG_SIZE chunks
ourselves, ask rds_page_remainder_alloc() to do it. While it
is possible PAGE_SIZE > FRAG_SIZE, on x86en it isn't, so having
duplicate "carve up a page into buffers" code seems excessive.

The other modification this spawns is the use of a single
struct scatterlist in rds_page_frag instead of a bare page ptr.
This causes verbosity to increase in some places, and decrease
in others.

Finally, I decided to unify the lifetimes and alloc/free of
rds_page_frag and its page. This is a nice simplification in itself,
but will be extra-nice once we come to adding cmason's recycling
patch.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:15:20 -07:00
Zach Brown fc19de38be RDS/IB: disconnect when IB devices are removed
Currently IB device removal destroys connections which are associated with the
device.  This prevents connections from being re-established when replacement
devices are added.

Instead we'll queue shutdown work on the connections as their devices are
removed.  When we see that devices are added we trigger connection attempts on
all connections that don't currently have a device.

The result is that RDS sockets can resume device-independent work (bcopy, not
RDMA) across IB device removal and restoration.

Signed-off-by: Zach Brown <zach.brown@oracle.com>
2010-09-08 18:15:19 -07:00
Zach Brown f3c6808d3d RDS: introduce rds_conn_connect_if_down()
A few paths had the same block of code to queue a connection's connect work if
it was in the right state.  Let's move this into a helper function.
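
A plausible shape for the helper (flag and work-struct names assumed):

    void rds_conn_connect_if_down(struct rds_connection *conn)
    {
        /* queue connect work once, and only from the down state */
        if (rds_conn_state(conn) == RDS_CONN_DOWN &&
            !test_and_set_bit(RDS_RECONNECT_PENDING, &conn->c_flags))
            queue_delayed_work(rds_wq, &conn->c_conn_w, 0);
    }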

Signed-off-by: Zach Brown <zach.brown@oracle.com>
2010-09-08 18:15:18 -07:00
Zach Brown 3e0249f9c0 RDS/IB: add refcount tracking to struct rds_ib_device
The RDS IB client .remove callback used to free the rds_ibdev for the given
device unconditionally.  This could race other users of the struct.  This patch
adds refcounting so that we only free the rds_ibdev once all of its users are
done.

Many rds_ibdev users are tied to connections.  We give the connection a
reference and change these users to reference the device in the connection
instead of looking it up in the IB client data.  The only user of the IB client
data remaining is the first lookup of the device as connections are built up.

Incrementing the reference count of a device found in the IB client data could
race with final freeing so we use an RCU grace period to make sure that freeing
won't happen until those lookups are done.

MRs need the rds_ibdev to get at the pool that they're freed into.  They exist
outside a connection and many MRs can reference different devices from one
socket, so it was natural to have each MR hold a reference.  MR refs can be
dropped from interrupt handlers and final device teardown can block so we push
it off to a work struct.  Pool teardown had to be fixed to cancel its pending
work instead of deadlocking waiting for all queued work, including itself, to
finish.

MRs get their reference from the global device list, which gets a reference.
It is left unprotected by locks and remains racy.  A simple global lock would
be a significant bottleneck.  More scalable (complicated) locking should be
done carefully in a later patch.
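
A sketch of the ref drop (field names assumed; the final free is pushed to a
work struct because it can block):

    void rds_ib_dev_put(struct rds_ib_device *rds_ibdev)
    {
        /* last ref may be dropped from an interrupt handler, so defer */
        if (atomic_dec_and_test(&rds_ibdev->refcount))
            queue_work(rds_wq, &rds_ibdev->free_work);
    }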

Signed-off-by: Zach Brown <zach.brown@oracle.com>
2010-09-08 18:15:17 -07:00
Zach Brown 89bf9d4158 RDS/IB: get the xmit max_sge from the RDS IB device on the connection
rds_ib_xmit_rdma() was calling ib_get_client_data() to get at the rds_ibdev
just to get the max_sge for the transmit.  This patch instead has it get it
directly off the rds_ibdev which is stored on the connection.

The current code won't free the rds_ibdev until all the IB connections that use
it are freed.  So it's safe to reference the rds_ibdev this way.  In the future
it also makes it easier to support proper reference counting of the rds_ibdev
struct.

As an additional bonus, this gets rid of the performance hit of calling in to
the IB stack to look up the rds_ibdev.  The current implementation in the IB
stack acquires an interrupt blocking spinlock to protect the registration of
client callback data.

Signed-off-by: Zach Brown <zach.brown@oracle.com>
2010-09-08 18:15:16 -07:00
Zach Brown a46ca94e7f RDS/IB: rds_ib_cm_handle_connect() forgot to unlock c_cm_lock
rds_ib_cm_handle_connect() could return without unlocking the c_cm_lock if
rds_setup_qp() failed.  Rather than adding another imbalanced mutex_unlock() to
this error path we only unlock the mutex once as we exit the function, reducing
the likelihood of making this same mistake in the future.  We remove the
previous multiple return sites, leaving one unambiguous return path.

Signed-off-by: Zach Brown <zach.brown@oracle.com>
2010-09-08 18:15:15 -07:00
Chris Mason 1cc2228c59 rds: Fix reference counting for xmit_atomic and xmit_rdma
This makes sure we have the proper number of references in
rds_ib_xmit_atomic and rds_ib_xmit_rdma.  We also consistently
drop references the same way for all message types as the IOs end.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
2010-09-08 18:15:13 -07:00
Chris Mason bcf50ef2ce rds: use RCU to protect the connection hash
The connection hash was almost entirely RCU ready, this
just makes the final couple of changes to use RCU instead
of spinlocks for everything.
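
A sketch of the RCU read side on the hash, using the four-argument hlist
iterator of this era (field names assumed):

    struct rds_connection *conn;
    struct hlist_node *pos;

    rcu_read_lock();
    hlist_for_each_entry_rcu(conn, pos, head, c_hash_node) {
        if (conn->c_faddr == faddr && conn->c_laddr == laddr)
            break;    /* take a ref before rcu_read_unlock() */
    }
    rcu_read_unlock();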

Signed-off-by: Chris Mason <chris.mason@oracle.com>
2010-09-08 18:15:12 -07:00
Chris Mason abf454398c RDS: use locking on the connection hash list
rds_conn_destroy really needs locking while it changes the
connection hash.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
2010-09-08 18:15:11 -07:00
Chris Mason c9e65383a2 rds: Fix RDMA message reference counting
The RDS send_xmit code was trying to get fancy with message
counting and was dropping the final reference on the RDMA messages
too early.  This resulted in memory corruption and oopsen.

The fix here is to always add a ref as the parts of the message passes
through rds_send_xmit, and always drop a ref as the parts of the message
go through completion handling.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
2010-09-08 18:15:10 -07:00
Chris Mason 7e3f2952ee rds: don't let RDS shutdown a connection while senders are present
This is the first in a long line of patches that tries to fix races
between RDS connection shutdown and RDS traffic.

Here we are maintaining a count of active senders to make sure
the connection doesn't go away while they are using it.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
2010-09-08 18:15:09 -07:00
Chris Mason 38a4e5e613 rds: Use RCU for the bind lookup searches
The RDS bind lookups are somewhat expensive in terms of CPU
time and locking overhead.  This commit changes them into a
faster RCU based hash tree instead of the rbtrees they were using
before.

On large NUMA systems it is a significant improvement.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
2010-09-08 18:15:08 -07:00
Andy Grover e4c52c98e0 RDS/IB: add _to_node() macros for numa and use {k,v}malloc_node()
Allocate send/recv rings in memory that is node-local to the HCA.
This significantly helps performance.
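
A sketch of the idea (macro shape assumed; the HCA's NUMA node comes from its
DMA device):

    #define rdsibdev_to_node(rds_ibdev) \
        dev_to_node((rds_ibdev)->dev->dma_device)

    /* keep the send ring on the same node as the HCA that DMAs into it */
    ic->i_sends = vmalloc_node(ic->i_send_ring.w_nr *
                               sizeof(struct rds_ib_send_work),
                               rdsibdev_to_node(rds_ibdev));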

Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:14:06 -07:00
Andy Grover 4a81802b5e RDS/IB: Remove unused variable in ib_remove_addr()
Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:12:29 -07:00
Chris Mason 764f2dd92f rds: rcu-ize rds_ib_get_device()
rds_ib_get_device is called very often as we turn an
IP address into a corresponding device structure.  It currently
takes a global spinlock as it walks different lists to find active
devices.

This commit changes the lists over to RCU, which isn't very complex
because they are not updated very often at all.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
2010-09-08 18:12:28 -07:00
Chris Mason c83188dcd7 rds: per-rm flush_wait waitq
This removes a global waitqueue used to wait for rds messages
and replaces it with a waitqueue inside the rds_message struct.

The global waitqueue turns into a global lock and significantly
bottlenecks operations on large machines.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
2010-09-08 18:12:27 -07:00
Chris Mason 976673ee1b rds: switch to rwlock on bind_lock
The bind_lock is almost entirely readonly, but it gets
hammered during normal operations and is a major bottleneck.

This commit changes it to an rwlock, which takes it from 80%
of the system time on a big NUMA machine down to much lower
numbers.

A better fix would involve RCU, which is done in a later commit.
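
A sketch of the read side (lock and lookup helper names assumed):

    unsigned long flags;
    struct rds_sock *rs;

    /* lookups vastly outnumber bind/unbind, so readers run in parallel */
    read_lock_irqsave(&rds_bind_lock, flags);
    rs = rds_bind_lookup(addr, port);
    read_unlock_irqrestore(&rds_bind_lock, flags);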

Signed-off-by: Chris Mason <chris.mason@oracle.com>
2010-09-08 18:12:26 -07:00
Andy Grover ce47f52f42 RDS: Update comments in rds_send_xmit()
Update comments to reflect changes in previous commit.

Keeping as separate commits due to different authorship.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:12:25 -07:00
Chris Mason 9e29db0e36 RDS: Use a generation counter to avoid rds_send_xmit loop
rds_send_xmit is required to loop around after it releases the lock
because someone else could have done a trylock, found someone working on the
list, and backed off.

But, once we drop our lock, it is possible that someone else does come
in and make progress on the list.  We should detect this and not loop
around if another process is actually working on the list.

This patch adds a generation counter that is bumped every time we
get the lock and do some send work.  If the retry notices someone else
has bumped the generation counter, it does not need to loop around and
continue working.
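
A sketch of the check (the more_queued condition stands in for the real
queue-empty test):

    unsigned int gen;

    /* taken with the send lock held: note that this pass is doing work */
    gen = atomic_inc_return(&conn->c_send_generation);

    /* ... transmit, then drop the lock ... */

    /* only loop around if nobody else has bumped the counter; a changed
     * generation means another sender is already pushing the list */
    if (more_queued && atomic_read(&conn->c_send_generation) == gen)
        goto restart;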

Signed-off-by: Chris Mason <chris.mason@oracle.com>
Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:12:24 -07:00
Andy Grover acfcd4d4ec RDS: Get pong working again
Call send_xmit() directly from pong()

Set pongs as op_active

Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:12:23 -07:00
Andy Grover a40aa9233a RDS: Do wait_event_interruptible instead of wait_event
Can't see a reason not to allow signals to interrupt the wait.
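
A sketch of the change (waitqueue and flag names assumed):

    int ret;

    /* let a signal break the wait instead of blocking uninterruptibly */
    ret = wait_event_interruptible(rm->m_flush_wait,
                                   !test_bit(RDS_MSG_MAPPED, &rm->m_flags));
    if (ret)
        return ret;    /* -ERESTARTSYS if interrupted by a signal */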

Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:12:22 -07:00
Andy Grover fcc5450c63 RDS: Remove send_quota from send_xmit()
The purpose of the send quota was really to give fairness
when different connections were all using the same
workq thread to send backlogged msgs -- they could only send
so many before another connection could make progress.

Now that each connection is pushing the backlog from its
completion handler, they are all guaranteed to make progress
and the quota isn't needed any longer.

A thread *will* have to send all previously queued data, as well
as any further msgs placed on the queue while c_send_lock
was held. In a pathological case a single process can get
roped into doing this for long periods while other threads
get off free. But, since it can only do this until the transport
reports full, this is a bounded scenario.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:12:21 -07:00
Andy Grover 51e2cba8b5 RDS: Move atomic stats from general to ib-specific area
Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:12:20 -07:00
Andy Grover ab1a6926f5 RDS: rds_message_unmapped() doesn't need to check if queue active
If the queue has nobody on it, then wake_up does nothing.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:12:19 -07:00
Andy Grover cf4b7389ee RDS: Fix locking in send on m_rs_lock
Do not nest m_rs_lock under c_lock

Disable interrupts in {rdma,atomic}_send_complete

Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:12:18 -07:00
Andy Grover f2ec76f288 RDS: Use NOWAIT in message_map_pages()
Can no longer block, so use NOWAIT.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:12:17 -07:00
Andy Grover 2fa57129df RDS: Bypass workqueue when queueing cong updates
Now that rds_send_xmit() does not block, we can call it directly
instead of going through the helper thread.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:12:16 -07:00
Andy Grover a7d3a28148 RDS: Call rds_send_xmit() directly from sendmsg()
rds_sendmsg() is calling the send worker function to
send the just-queued datagrams, presumably because it wants
the behavior where anything not sent will re-call the send
worker. We now ensure all queued datagrams are sent by retrying
from the send completion handler, so this isn't needed any more.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:12:15 -07:00
Andy Grover 2ad8099b58 RDS: rds_send_xmit() locking/irq fixes
rds_message_put() cannot be called with irqs off, so move it after
irqs are re-enabled.

Spinlocks throughout the function do not need to use _irqsave because
the locking of c_send_lock at the top has already disabled irqs.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:12:13 -07:00
Andy Grover 049ee3f500 RDS: Change send lock from a mutex to a spinlock
This change allows us to call rds_send_xmit() from a tasklet,
which is crucial to our new operating model.

* Change c_send_lock to a spinlock
* Update stats fields "sem_" to "_lock"
* Remove unneeded rds_conn_is_sending()

About locking between shutdown and send -- send checks if the
connection is up. Shutdown puts the connection into
DISCONNECTING. After this, all threads entering send will exit
immediately. However, a thread could be *in* send_xmit(), so
shutdown acquires the c_send_lock to ensure everyone is out
before proceeding with connection shutdown.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:12:12 -07:00
Andy Grover f17a1a55fb RDS: Refill recv ring directly from tasklet
Performance is better if we use allocations that don't block
to refill the receive ring. Since the whole reason we were
kicking out to the worker thread was so we could do blocking
allocs, we no longer need to do this.

Remove gfp params from rds_ib_recv_refill(); we always use
GFP_NOWAIT.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
2010-09-08 18:12:11 -07:00