Commit Graph

22 Commits (95a22caee396cef0bb2ca8fafdd82966a49367bb)

Author SHA1 Message Date
David Howells 2629c7fa7c rxrpc: When activating client conn channels, do state check inside lock
In rxrpc_activate_channels(), the connection cache state is checked outside
of the lock, which means it can change whilst we're waking calls up,
thereby changing whether or not we're allowed to wake calls up.

Fix this by moving the check inside the locked region.  The check to see if
all the channels are currently busy can stay outside of the locked region.

Whilst we're at it:

 (1) Split the locked section out into its own function so that we can call
     it from other places in a later patch.

 (2) Determine the mask of channels dependent on the state as we're going
     to add another state in a later patch that will restrict the number of
     simultaneous calls to 1 on a connection.
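
A minimal userspace sketch of the resulting shape, using a pthread mutex in
place of the kernel's locking; the type and function names here are
illustrative, not the kernel's actual identifiers:

        #include <pthread.h>

        enum cache_state { CACHE_WAITING, CACHE_ACTIVE, CACHE_CULLED };

        struct conn {
                pthread_mutex_t lock;
                enum cache_state state;
                unsigned long avail_chans;      /* bitmask of free channels */
        };

        /* The locked section, split out so other code paths can reuse it. */
        static void activate_channels_locked(struct conn *conn)
        {
                unsigned long mask;

                /* Re-check the cache state now that the lock is held; it may
                 * have changed since the caller's unlocked test. */
                if (conn->state != CACHE_ACTIVE)
                        return;

                mask = conn->avail_chans;       /* mask chosen according to the state */
                while (mask) {
                        /* ... wake one waiting call per free channel ... */
                        mask &= mask - 1;
                }
        }

        static void activate_channels(struct conn *conn)
        {
                if (!conn->avail_chans)         /* cheap test can stay unlocked */
                        return;
                pthread_mutex_lock(&conn->lock);
                activate_channels_locked(conn);
                pthread_mutex_unlock(&conn->lock);
        }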

Signed-off-by: David Howells <dhowells@redhat.com>
2016-09-29 22:57:47 +01:00
David Howells 8732db67c6 rxrpc: Fix exclusive client connections
Exclusive connections are currently reusable (which they shouldn't be)
because rxrpc_alloc_client_connection() checks the exclusive flag in the
rxrpc_connection struct before it's initialised from the function
parameters.  This means that the DONT_REUSE flag doesn't get set.

Fix this by checking the function parameters for the exclusive flag.
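
A sketch of the bug's shape, with purely illustrative structure and flag names
(not the kernel's):

        #include <stdbool.h>

        struct conn_parameters { bool exclusive; };
        struct connection      { bool exclusive; unsigned long flags; };

        #define CONN_DONT_REUSE 0x1UL

        static void init_client_conn(struct connection *conn,
                                     const struct conn_parameters *cp)
        {
                /* Buggy form: conn->exclusive is still zero at this point
                 * because the struct hasn't yet been initialised from cp,
                 * so CONN_DONT_REUSE never gets set:
                 *
                 *      if (conn->exclusive)
                 *              conn->flags |= CONN_DONT_REUSE;
                 *
                 * Fixed form: test the caller-supplied parameters instead.
                 */
                if (cp->exclusive)
                        conn->flags |= CONN_DONT_REUSE;

                conn->exclusive = cp->exclusive;  /* initialised afterwards */
        }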

Signed-off-by: David Howells <dhowells@redhat.com>
2016-09-29 22:37:15 +01:00
David Howells 363deeab6d rxrpc: Add connection tracepoint and client conn state tracepoint
Add a pair of tracepoints, one to track rxrpc_connection struct ref
counting and the other to track the client connection cache state.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-09-17 11:24:03 +01:00
David Howells 78883793f8 rxrpc: Fix unexposed client conn release
If the last call on a client connection is released after the connection has
had a bunch of calls allocated but before any DATA packets are sent (so
that it's not yet marked RXRPC_CONN_EXPOSED), an assertion failure will
occur in rxrpc_disconnect_client_call().

	af_rxrpc: Assertion failed - 1(0x1) >= 2(0x2) is false
	------------[ cut here ]------------
	kernel BUG at ../net/rxrpc/conn_client.c:753!

This is because it's expecting the conn to have been exposed and to have 2
or more refs - but this isn't necessarily the case.

Simply remove the assertion.  This allows the conn to be moved into the
inactive state and deleted if it isn't resurrected before the final put is
called.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-09-17 10:53:21 +01:00
David Howells 66d58af7f4 rxrpc: Fix the putting of client connections
In rxrpc_put_one_client_conn(), if a connection has RXRPC_CONN_COUNTED set
on it, then it's accounted for in rxrpc_nr_client_conns and may be on
various lists - and this is cleaned up correctly.

However, if the connection doesn't have RXRPC_CONN_COUNTED set on it, then
the put routine returns rather than just skipping the extra bit of cleanup.

Fix this by making the extra bit of clean up conditional instead and always
killing off the connection.

This manifests itself as connections with a zero usage count hanging around
in /proc/net/rxrpc_conns: the connection was allocated but then discarded
due to a race with another process that set up a parallel connection, which
was then shared instead.
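
A sketch of the control-flow change, again with illustrative names:

        #include <stdbool.h>

        struct connection { bool counted; /* ... */ };

        static void kill_connection(struct connection *conn)
        {
                /* ... final teardown, removal from the /proc listing, free ... */
        }

        static void put_one_client_conn(struct connection *conn)
        {
                /* The old code returned early here, leaving an uncounted
                 * connection with a zero usage count lying around:
                 *
                 *      if (!conn->counted)
                 *              return;
                 */
                if (conn->counted) {
                        /* drop it from rxrpc_nr_client_conns and the cache lists */
                }

                kill_connection(conn);          /* now always reached */
        }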

Signed-off-by: David Howells <dhowells@redhat.com>
2016-09-17 10:53:20 +01:00
David Howells 278ac0cdd5 rxrpc: Cache the security index in the rxrpc_call struct
Cache the security index in the rxrpc_call struct so that we can get at it
even when the call has been disconnected and the connection pointer
cleared.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-09-07 15:30:22 +01:00
David Howells 090f85deb6 rxrpc: Don't change the epoch
It seems the local epoch should only be changed on boot, so remove the code
that changes it for client connections.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-09-04 21:41:39 +01:00
David Howells af338a9ea6 rxrpc: The client call state must be changed before attachment to conn
We must set the client call state to RXRPC_CALL_CLIENT_SEND_REQUEST before
attaching the call to the connection struct, not after, as it's liable to
receive errors and conn aborts as soon as the assignment is made - and
these will cause its state to be changed outside of the initiating thread's
control.
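
A sketch of the required ordering; the structures here are illustrative:

        enum call_state { CALL_CLIENT_SEND_REQUEST /* , ... */ };

        struct call       { enum call_state state; /* ... */ };
        struct connection { struct call *channel[4]; };

        static void attach_call(struct connection *conn, struct call *call,
                                int chan)
        {
                /* Set the state first: once the assignment below makes the
                 * call reachable through the connection, error and abort
                 * handling may change the state from another thread. */
                call->state = CALL_CLIENT_SEND_REQUEST;
                conn->channel[chan] = call;
        }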

Signed-off-by: David Howells <dhowells@redhat.com>
2016-09-04 13:10:10 +01:00
David Howells e34d4234b0 rxrpc: Trace rxrpc_call usage
Add a trace event for debugging rxrpc_call struct usage.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-08-30 16:02:36 +01:00
David Howells f5c17aaeb2 rxrpc: Calls should only have one terminal state
Condense the terminal states of a call state machine to a single state,
plus a separate completion type value.  The value is then set, along with
error and abort code values, only when the call is transitioned to the
completion state.

Helpers are provided to simplify this.
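
A rough sketch of the shape this gives the call struct and one such helper
(the names approximate the description above rather than quoting the kernel
source):

        enum call_state      { CALL_CLIENT_SEND_REQUEST, /* ... */ CALL_COMPLETE };
        enum call_completion { CALL_SUCCEEDED, CALL_REMOTELY_ABORTED,
                               CALL_LOCALLY_ABORTED, CALL_LOCAL_ERROR,
                               CALL_NETWORK_ERROR };

        struct call {
                enum call_state      state;
                enum call_completion completion;
                unsigned int         abort_code;
                int                  error;
        };

        /* Helper: every terminal transition goes through here, so the
         * completion type, abort code and error are always set together
         * with the single terminal state. */
        static void complete_call(struct call *call, enum call_completion type,
                                  unsigned int abort_code, int error)
        {
                call->completion = type;
                call->abort_code = abort_code;
                call->error      = error;
                call->state      = CALL_COMPLETE;
        }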

Signed-off-by: David Howells <dhowells@redhat.com>
2016-08-30 15:58:31 +01:00
David Howells 45025bceef rxrpc: Improve management and caching of client connection objects
Improve the management and caching of client rxrpc connection objects.
From this point, client connections will be managed separately from service
connections because AF_RXRPC controls the creation and re-use of client
connections but doesn't have that luxury with service connections.

Further, there will be limits on the numbers of client connections that may
be live on a machine.  No direct restriction will be placed on the number
of client calls, excepting that each client connection can support a
maximum of four concurrent calls.

Note that, for a number of reasons, we don't want to simply discard a
client connection as soon as the last call is apparently finished:

 (1) Security is negotiated per-connection and the context is then shared
     between all calls on that connection.  The context can be negotiated
     again if the connection lapses, but that involves holding up calls
     whilst at least two packets are exchanged and various crypto bits are
     performed - so we'd ideally like to cache it for a little while at
     least.

 (2) If a packet goes astray, we will need to retransmit a final ACK or
     ABORT packet.  To make this work, we need to keep around the
     connection details for a little while.

 (3) The locally held structures represent some amount of setup time, to be
     weighed against their occupation of memory when idle.


To this end, the client connection cache is managed by a state machine on
each connection.  There are five states:

 (1) INACTIVE - The connection is not held in any list and may not have
     been exposed to the world.  If it has been previously exposed, it was
     discarded from the idle list after expiring.

 (2) WAITING - The connection is waiting for the number of client conns to
     drop below the maximum capacity.  Calls may be in progress upon it
     from when it was active and got culled.

     The connection is on the rxrpc_waiting_client_conns list which is kept
     in to-be-granted order.  Culled conns with waiters go to the back of
     the queue just like new conns.

 (3) ACTIVE - The connection has at least one call in progress upon it, it
     may freely grant available channels to new calls and calls may be
     waiting on it for channels to become available.

     The connection is on the rxrpc_active_client_conns list which is kept
     in activation order for culling purposes.

 (4) CULLED - The connection got summarily culled to try and free up
     capacity.  Calls currently in progress on the connection are allowed
     to continue, but new calls will have to wait.  There can be no waiters
     in this state - the conn would have to go to the WAITING state
     instead.

 (5) IDLE - The connection has no calls in progress upon it and must have
     been exposed to the world (ie. the EXPOSED flag must be set).  When it
     expires, the EXPOSED flag is cleared and the connection transitions to
     the INACTIVE state.

     The connection is on the rxrpc_idle_client_conns list which is kept in
     order of how soon they'll expire.

A connection in the ACTIVE or CULLED state must have at least one active
call upon it; if in the WAITING state it may have active calls upon it;
other states may not have active calls.

As long as a connection remains active and doesn't get culled, it may
continue to process calls - even if there are connections on the wait
queue.  This simplifies things a bit and reduces the amount of checking we
need to do.


There are a couple of flags of relevance to the cache:

 (1) EXPOSED - The connection ID got exposed to the world.  If this flag is
     set, an extra ref is added to the connection preventing it from being
     reaped when it has no calls outstanding.  This flag is cleared and the
     ref dropped when a conn is discarded from the idle list.

 (2) DONT_REUSE - The connection should be discarded as soon as possible and
     should not be reused.
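
Restating the state machine and flags above as a sketch (the identifiers here
approximate the commit's terminology, not the kernel's exact names):

        enum client_conn_cache_state {
                CONN_CLIENT_INACTIVE,   /* on no list; possibly never exposed */
                CONN_CLIENT_WAITING,    /* on rxrpc_waiting_client_conns, FIFO order */
                CONN_CLIENT_ACTIVE,     /* on rxrpc_active_client_conns, >= 1 call */
                CONN_CLIENT_CULLED,     /* off the lists; existing calls may finish */
                CONN_CLIENT_IDLE,       /* on rxrpc_idle_client_conns, awaiting expiry */
        };

        enum client_conn_flag {
                CONN_EXPOSED,           /* conn ID seen on the wire; holds an extra ref */
                CONN_DONT_REUSE,        /* discard as soon as possible, never reuse */
        };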


This commit also provides a number of new settings:

 (*) /proc/net/rxrpc/max_client_conns

     The maximum number of live client connections.  Above this number, new
     connections get added to the wait list and must wait for an active
     conn to be culled.  Culled connections can be reused, but they will go
     to the back of the wait list and have to wait.

 (*) /proc/net/rxrpc/reap_client_conns

     If the number of desired connections exceeds the maximum above, the
     active connection list will be culled until there are only this many
     left in it.

 (*) /proc/net/rxrpc/idle_conn_expiry

     The normal expiry time for a client connection, provided there are
     fewer than reap_client_conns of them around.

 (*) /proc/net/rxrpc/idle_conn_fast_expiry

     The expedited expiry time, used when there are more than
     reap_client_conns of them around.


Note that I combined the Tx wait queue with the channel grant wait queue to
save space as only one of these should be in use at once.

Note also that, for the moment, the service connection cache still uses the
old connection management code.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-08-24 15:17:14 +01:00
David Howells 4d028b2c82 rxrpc: Dup the main conn list for the proc interface
The main connection list is used for two independent purposes: primarily it
is used to find connections to reap and secondarily it is used to list
connections in procfs.

Split the procfs list out from the reap list.  This allows us to stop using
the reap list for client connections when they acquire a separate
management strategy from service connections.

The client connections will not be on a single management list, and sometimes
won't be on a management list at all.  This doesn't leave them floating,
however, as they will also be on an rb-tree rooted on the socket so that the
socket can find them to dispatch calls.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-08-24 15:17:14 +01:00
David Howells df5d8bf70f rxrpc: Make /proc/net/rxrpc_calls safer
Make /proc/net/rxrpc_calls safer by stashing a copy of the peer pointer in
the rxrpc_call struct and checking in the show routine that the peer
pointer, the socket pointer and the local pointer obtained from the socket
pointer aren't NULL before we use them.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-08-24 15:15:59 +01:00
David Howells 01a90a4598 rxrpc: Drop channel number field from rxrpc_call struct
Drop the channel number (channel) field from the rxrpc_call struct to
reduce the size of the call struct.  The field is redundant: if the call is
attached to a connection, the channel can be obtained from there by AND'ing
with RXRPC_CHANNELMASK.
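
A sketch of the derivation, assuming four channels per connection (so
RXRPC_CHANNELMASK is 3) and an illustrative cid field holding the on-the-wire
connection-plus-channel ID:

        #define RXRPC_CHANNELMASK 3             /* four channels per connection */

        struct call { unsigned int cid; /* ... */ };

        /* Derive the channel on demand instead of storing it separately. */
        static unsigned int call_channel(const struct call *call)
        {
                return call->cid & RXRPC_CHANNELMASK;
        }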

Signed-off-by: David Howells <dhowells@redhat.com>
2016-08-23 15:27:24 +01:00
David Howells dabe5a7906 rxrpc: Tidy up the rxrpc_call struct a bit
Do a little tidying of the rxrpc_call struct:

 (1) in_clientflag is no longer compared against the value that's in the
     packet, so keeping it in this form isn't necessary.  Use a flag in
     flags instead and provide a pair of wrapper functions.

 (2) We don't read the epoch value, so that can go.

 (3) Move what remains of the data that were used for hashing up in the
     struct to be with the channel number.

 (4) Get rid of the local pointer.  We can get at this via the socket
     struct and we only use this in the procfs viewer.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-08-23 15:27:24 +01:00
David Howells 8496af50eb rxrpc: Use RCU to access a peer's service connection tree
Move to using RCU access to a peer's service connection tree when routing
an incoming packet.  This is done using a seqlock to trigger retrying of
the tree walk if a change happened.

Further, we no longer get a ref on the connection looked up in the
data_ready handler unless we queue the connection's work item - and then
only if the refcount > 0.
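
A kernel-style fragment of the reader side (the struct and helper names are
illustrative and the caller is assumed to hold the RCU read lock; only
read_seqbegin()/read_seqretry() are the kernel's real seqlock reader API):

        /* assumes <linux/seqlock.h>; struct and field names illustrative */
        static struct rxrpc_connection *find_service_conn_rcu(struct rxrpc_peer *peer,
                                                               unsigned int cid)
        {
                struct rxrpc_connection *conn;
                unsigned int seq;

                /* Walk the tree without taking the write lock; if a writer
                 * changed the tree meanwhile, the sequence count differs and
                 * the walk is simply retried. */
                do {
                        seq = read_seqbegin(&peer->service_conn_lock);
                        conn = walk_service_conn_tree(peer, cid);   /* hypothetical helper */
                } while (read_seqretry(&peer->service_conn_lock, seq));

                return conn;
        }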


Note that I'm avoiding the use of a hash table for service connections
because each service connection is addressed by a 62-bit number
(constructed from epoch and connection ID >> 2) that would allow the client
to engage in bucket stuffing, given knowledge of the hash algorithm.
Peers, however, are hashed as the network address is less controllable by
the client.  The total number of peers will also be limited in a future
commit.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-07-06 10:51:14 +01:00
David Howells e8d70ce177 rxrpc: Prune the contents of the rxrpc_conn_proto struct
Prune the contents of the rxrpc_conn_proto struct.  Most of the fields aren't
used anymore.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-07-06 10:51:14 +01:00
David Howells 001c112249 rxrpc: Maintain an extra ref on a conn for the cache list
Overhaul the usage count accounting for the rxrpc_connection struct to make
it easier to implement RCU access from the data_ready handler.

The problem is that currently we're using a lock to prevent the garbage
collector from trying to clean up a connection that we're contemplating
unidling.  We could just stick incoming packets on the connection we find,
but we've then got a problem that we may race when dispatching a work item
to process it as we need to give that a ref to prevent the rxrpc_connection
struct from disappearing in the meantime.

Further, incoming packets may get discarded if attached to an
rxrpc_connection struct that is going away.  Whilst this is not a total
disaster - the client will presumably resend - it would delay processing of
the call.  This would affect the AFS client filesystem's service manager
operation.

To this end:

 (1) We now maintain an extra count on the connection usage count whilst it
     is on the connection list.  This means it is not in use when its
     refcount is 1.

 (2) When trying to reuse an old connection, we only increment the refcount
     if it is greater than 0.  If it is 0, we replace it in the tree with a
     new candidate connection.

 (3) Two connection flags are added to indicate whether or not a connection
     is in the local's client connection tree (used by sendmsg) or the
     peer's service connection tree (used by data_ready).  This makes sure
     that we don't try and remove a connection if it got replaced.

     The flags are tested under lock with the removal operation to prevent
     the reaper from killing the rxrpc_connection struct whilst someone
     else is trying to effect a replacement.

     This could probably be alleviated by using memory barriers between the
     flag set/test and the rb_tree ops.  The rb_tree op would still need to
     be under the lock, however.

 (4) When trying to reap an old connection, we try to flip the usage count
     from 1 to 0.  If it's not 1 at that point, then it must've come back
     to life temporarily and we ignore it.
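
A rough userspace rendering of the two refcount patterns in (2) and (4),
using C11 atomics in place of the kernel's atomic ops:

        #include <stdatomic.h>
        #include <stdbool.h>

        /* Reuse path: only take a reference if the count hasn't hit zero. */
        static bool get_conn_if_live(atomic_uint *usage)
        {
                unsigned int old = atomic_load(usage);

                do {
                        if (old == 0)
                                return false;   /* dying; install a new conn instead */
                } while (!atomic_compare_exchange_weak(usage, &old, old + 1));
                return true;
        }

        /* Reaper path: an unused conn on the list holds exactly one ref.
         * Claim it by flipping 1 -> 0; if the count isn't 1, the conn has
         * come back to life and is left alone. */
        static bool reap_conn(atomic_uint *usage)
        {
                unsigned int expected = 1;

                return atomic_compare_exchange_strong(usage, &expected, 0);
        }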

Signed-off-by: David Howells <dhowells@redhat.com>
2016-07-06 10:50:04 +01:00
David Howells c6d2b8d764 rxrpc: Split client connection code out into its own file
Split the client-specific connection code out into its own file.  It will
behave somewhat differently from the service-specific connection code, so
it makes sense to separate them.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-07-06 10:43:52 +01:00
David Howells eb9b9d2275 rxrpc: Check that the client conns cache is empty before module removal
Check that the client conns cache is empty before module removal and bug if
not, listing any offending connections that are still present.  Unfortunately,
if there are connections still around, then the transport socket is still
unexpectedly open and active, so we can't just unallocate the connections.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-07-06 10:43:51 +01:00
David Howells 999b69f892 rxrpc: Kill the client connection bundle concept
Kill off the concept of maintaining a bundle of connections to a particular
target service to increase the number of call slots available for any
beyond four for that service (there are four call slots per connection).

This will make cleaning up the connection handling code easier and
facilitate removal of the rxrpc_transport struct.  Bundling can be
reintroduced later if necessary.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-06-22 09:20:55 +01:00
David Howells 4a3388c803 rxrpc: Use IDR to allocate client conn IDs on a machine-wide basis
Use the IDR facility to allocate client connection IDs on a machine-wide
basis so that each client connection has a unique identifier.  When the
connection ID space wraps, we advance the epoch by 1, thereby effectively
having a 62-bit ID space.  The IDR facility is then used to look up client
connections during incoming packet routing instead of using an rbtree
rooted on the transport.

This change allows for the removal of the transport in the future and also
means that client connections can be looked up directly in the data-ready
handler by connection ID.
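
A self-contained sketch of the wrap behaviour only (the kernel allocates the
IDs via the IDR; the fields, shift amounts and the way epoch and ID are
combined below are illustrative):

        #include <stdint.h>

        struct cid_allocator {
                uint32_t epoch;
                uint32_t next_cid;      /* allocated in steps of 4: 4 calls/conn */
        };

        static uint64_t alloc_conn_id(struct cid_allocator *a)
        {
                uint32_t cid = a->next_cid;

                a->next_cid += 4;       /* low two bits select the channel */
                if (a->next_cid < cid)  /* ID space wrapped: start a new epoch */
                        a->epoch++;

                /* Effectively a 62-bit identifier: epoch plus (cid >> 2). */
                return ((uint64_t)a->epoch << 30) | (cid >> 2);
        }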

The ID management code is placed in a new file, conn_client.c, to which all
the client connection-specific code will eventually move.

Note that the IDR tree gets very expensive on memory if the connection IDs
are widely scattered throughout the number space, so we shall need to
retire connections that have, say, an ID more than four times the maximum
number of client conns away from the current allocation point to try and
keep the IDs concentrated.  We will also need to retire connections from an
old epoch.

Also note that, for the moment, a pointer to the transport has to be passed
through into the ID allocation function so that we can take a BH lock to
prevent a locking issue against in-BH lookup of client connections.  This
will go away later when RCU is used for server connections also.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-06-22 09:10:02 +01:00