scsi: core: avoid preallocating big SGL for protection information

scsi_mq_setup_tags() currently preallocates a big buffer for the protection
SGL entries. scsi_mq_sgl_size() is used to determine the size of both the
data and the protection information scatterlists, but the protection buffer
is usually much smaller. For example, one 512-byte sector needs 8 bytes of
protection information. Given that the maximum number of sectors in one
request is 2560 (BLK_DEF_MAX_SECTORS), the maximum protection information
buffer size is just 20K.
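
For reference, the worst case works out to:

    2560 sectors * 8 bytes/sector = 20480 bytes = 20K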

The protection information segment count generally matches the number of
bios in the request, so the actual number of segments is typically small.
Should the need arise, allocating a bigger SGL from slab is fast enough.

Pre-allocate only one SGL entry for protection information and switch to
runtime allocation when the number of protection information segments
exceeds one. This reduces the memory tied up by static command
allocations. For example, 500+ MB is saved on a single lpfc HBA.
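
This is the classic inline-first-chunk pattern. Below is a minimal
userspace sketch of the idea (illustrative only; the struct and helper
names are invented for the example and are not the kernel API):

#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-in for a scatterlist entry. */
struct sg_entry {
        void *page;
        unsigned int offset;
        unsigned int length;
};

#define INLINE_SG_CNT 1 /* mirrors SCSI_INLINE_PROT_SG_CNT */

struct cmd {
        struct sg_entry inline_sg[INLINE_SG_CNT]; /* preallocated with the command */
        struct sg_entry *sgl;                     /* SGL actually in use */
        unsigned int nents;
};

/* Use the inline entry when it suffices; otherwise hit the allocator. */
static int cmd_alloc_sgl(struct cmd *c, unsigned int nents)
{
        if (nents <= INLINE_SG_CNT) {
                c->sgl = c->inline_sg;
        } else {
                c->sgl = calloc(nents, sizeof(*c->sgl));
                if (!c->sgl)
                        return -1; /* kernel path returns BLK_STS_RESOURCE */
        }
        c->nents = nents;
        return 0;
}

/* Only free what was dynamically allocated. */
static void cmd_free_sgl(struct cmd *c)
{
        if (c->sgl != c->inline_sg)
                free(c->sgl);
        c->sgl = NULL;
        c->nents = 0;
}

int main(void)
{
        struct cmd c = { { { 0 } }, NULL, 0 };

        /* Typical case: one protection segment, no allocation needed. */
        if (cmd_alloc_sgl(&c, 1) == 0) {
                printf("one segment inline: %s\n",
                       c.sgl == c.inline_sg ? "yes" : "no");
                cmd_free_sgl(&c);
        }

        /* Rare case: more segments, allocated at runtime. */
        if (cmd_alloc_sgl(&c, 4) == 0) {
                printf("four segments inline: %s\n",
                       c.sgl == c.inline_sg ? "yes" : "no");
                cmd_free_sgl(&c);
        }
        return 0;
}

In the kernel this decision is made by sg_alloc_table_chained() and
sg_free_table_chained(), which take the inline chunk and its entry count
as arguments, as the hunks below show.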

[mkp: attempted to clarify commit desc]

Cc: Christoph Hellwig <hch@lst.de>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: Ewan D. Milne <emilne@redhat.com>
Cc: Hannes Reinecke <hare@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

@@ -39,6 +39,12 @@
 #include "scsi_priv.h"
 #include "scsi_logging.h"
 
+/*
+ * Size of integrity metadata is usually small, 1 inline sg should
+ * cover normal cases.
+ */
+#define SCSI_INLINE_PROT_SG_CNT 1
+
 static struct kmem_cache *scsi_sdb_cache;
 static struct kmem_cache *scsi_sense_cache;
 static struct kmem_cache *scsi_sense_isadma_cache;
@@ -543,7 +549,8 @@ static void scsi_mq_free_sgtables(struct scsi_cmnd *cmd)
 	if (cmd->sdb.table.nents)
 		sg_free_table_chained(&cmd->sdb.table, SG_CHUNK_SIZE);
 	if (scsi_prot_sg_count(cmd))
-		sg_free_table_chained(&cmd->prot_sdb->table, SG_CHUNK_SIZE);
+		sg_free_table_chained(&cmd->prot_sdb->table,
+				SCSI_INLINE_PROT_SG_CNT);
 }
 
 static void scsi_mq_uninit_cmd(struct scsi_cmnd *cmd)
@@ -1032,7 +1039,7 @@ blk_status_t scsi_init_io(struct scsi_cmnd *cmd)
 		if (sg_alloc_table_chained(&prot_sdb->table, ivecs,
 				prot_sdb->table.sgl,
-				SG_CHUNK_SIZE)) {
+				SCSI_INLINE_PROT_SG_CNT)) {
 			ret = BLK_STS_RESOURCE;
 			goto out_free_sgtables;
 		}
@@ -1824,7 +1831,8 @@ int scsi_mq_setup_tags(struct Scsi_Host *shost)
 	sgl_size = scsi_mq_sgl_size(shost);
 	cmd_size = sizeof(struct scsi_cmnd) + shost->hostt->cmd_size + sgl_size;
 	if (scsi_host_get_prot(shost))
-		cmd_size += sizeof(struct scsi_data_buffer) + sgl_size;
+		cmd_size += sizeof(struct scsi_data_buffer) +
+			sizeof(struct scatterlist) * SCSI_INLINE_PROT_SG_CNT;
 
 	memset(&shost->tag_set, 0, sizeof(shost->tag_set));
 	shost->tag_set.ops = &scsi_mq_ops;
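
Back-of-the-envelope arithmetic for the saving in scsi_mq_setup_tags(),
assuming sizeof(struct scatterlist) is 32 bytes and the host's
sg_tablesize is at least SG_CHUNK_SIZE (128); both values vary with
kernel configuration and architecture:

    before: 128 entries * 32 bytes = 4096 bytes of protection SGL per command
    after:    1 entry   * 32 bytes =   32 bytes per command

At roughly 4K saved per preallocated command, a host whose tag sets hold
on the order of 128K commands across all hardware queues would save about
512 MB, which is consistent with the 500+ MB reported above for a single
lpfc HBA (the 128K figure is illustrative, not lpfc's actual queue depth).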