
for-4.11/linus-merge-signed

-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABCAAGBQJYqeb8AAoJEPfTWPspceCmB3UP/3UtcPrzEm8w2cxB9MaWhZN3
 J+jiwlO4vaqhm2HVzQtoJqfaqRlud/iDx5cIXE2S7FnIM54ZKs3CANbKu8X+b1zm
 eJije3zMI8A8qyftigbz6a/Y2kWE4ZqFEc9WU5CWawfTl3ImCVUi8+F5X0wOLU/h
 r50zAQOEyURH4G5usNl9q0olF6FonJ82AcYm1iJ0QP2wYWZRJauC0rRn8IT93tyK
 bZPHnGKdkd7km8yi3zr2GNWOfuZZuA0HWAaF4qfrHPZQ883gITFAUIlFb1f+2TNl
 DkQzRrBB2wPWPnlbfb9KejMkvL94hflzsLb5rHt835DyVXFRyjxsgyAI8A+LPGSz
 vqZ3rsbWj6H4F9z2CkZ+T+AP/ZSWDNjwc0RXPm9HYdR5CDeTxIUVvnFQ44YNsmTv
 Xd5BKrUJ2oKegAxQG6zcuFx23p8JzhT70l+mNrMdtyeKnDD9FRdDvhKG9AHeTipn
 o/DnGivhS3UMQoQ7D68KOO+kuhLDeo7my5XGsnjzMO/iHqg++7IP2HyYYs/Ba4qZ
 cYaCtSDQW71Zt0vsqa6dvPuXBveu4h8Qh8R7uAGjSGS9IAFFb4Cab2tiUdISE6PE
 YnMWzY+G6pT8imlLVOL5/QFuo2Q4pUsaL0AHpXMCN9TZnQtbqXa8eqwnKnQ0m2KN
 7ut0IYYEPaYUX5xFn1K6
 =z7AL
 -----END PGP SIGNATURE-----

Merge tag 'for-4.11/linus-merge-signed' of git://git.kernel.dk/linux-block

Pull block layer updates from Jens Axboe:

 - blk-mq scheduling framework from me and Omar, with a port of the
   deadline scheduler for this framework. A port of BFQ from Paolo is in
   the works, and should be ready for 4.12.

 - Various fixups and improvements to the above scheduling framework
   from Omar, Paolo, Bart, me, and others.

 - Cleanup of the exported sysfs blk-mq data into debugfs, from Omar.
   This allows us to export more information that helps debug hangs or
   performance issues, without cluttering or abusing the sysfs API.
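
   As a rough illustration of the pattern the new block/blk-mq-debugfs.c (included
   further down) is built on -- seq_file show routines registered through
   debugfs_create_file() -- here is a minimal, self-contained sketch. The directory
   name and the "queued" counter are made up for illustration; the real code
   registers per-hctx and per-ctx attributes under a "block" debugfs directory.

#include <linux/module.h>
#include <linux/debugfs.h>
#include <linux/seq_file.h>

static unsigned long demo_queued;	/* hypothetical counter to expose */
static struct dentry *demo_dir;

/* seq_file "show" callback: print the counter and nothing else */
static int demo_queued_show(struct seq_file *m, void *v)
{
	seq_printf(m, "%lu\n", demo_queued);
	return 0;
}

static int demo_queued_open(struct inode *inode, struct file *file)
{
	return single_open(file, demo_queued_show, inode->i_private);
}

static const struct file_operations demo_queued_fops = {
	.open		= demo_queued_open,
	.read		= seq_read,
	.llseek		= seq_lseek,
	.release	= single_release,
};

static int __init demo_init(void)
{
	/* shows up as /sys/kernel/debug/blk-demo/queued */
	demo_dir = debugfs_create_dir("blk-demo", NULL);
	if (!demo_dir)
		return -ENOMEM;
	debugfs_create_file("queued", 0400, demo_dir, NULL, &demo_queued_fops);
	return 0;
}
module_init(demo_init);

static void __exit demo_exit(void)
{
	debugfs_remove_recursive(demo_dir);
}
module_exit(demo_exit);

MODULE_LICENSE("GPL");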

 - Fixes for the sbitmap code, the scalable bitmap code that was
   migrated from blk-mq, from Omar.

 - Removal of the BLOCK_PC support in struct request, and refactoring of
   carrying SCSI payloads in the block layer. This cleans up the code
   nicely, and enables us to kill the SCSI specific parts of struct
   request, shrinking it down nicely. From Christoph mainly, with help
   from Hannes.
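
   The direction of the struct request shrinking is that per-command driver data
   (such as a SCSI CDB and status) lives in a driver-sized area behind the request
   rather than in struct request itself. Below is a minimal, hypothetical sketch of
   the cmd_size/blk_mq_rq_to_pdu() pattern for a blk-mq driver of this era; it is
   not the actual SCSI conversion, and all names are made up. The legacy request
   path gets the equivalent via q->cmd_size and init_rq_fn/exit_rq_fn, as the
   blk-core.c hunks below show.

#include <linux/blk-mq.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/numa.h>

/* hypothetical per-request payload a driver might carry */
struct my_cmd {
	u8 cdb[16];
	int status;
};

static struct blk_mq_tag_set my_tag_set;

static int my_queue_rq(struct blk_mq_hw_ctx *hctx,
		       const struct blk_mq_queue_data *bd)
{
	struct request *rq = bd->rq;
	/* the driver payload sits directly behind struct request */
	struct my_cmd *cmd = blk_mq_rq_to_pdu(rq);

	memset(cmd->cdb, 0, sizeof(cmd->cdb));
	blk_mq_start_request(rq);
	/* ... hand the command to hardware here ... */
	return BLK_MQ_RQ_QUEUE_OK;
}

static struct blk_mq_ops my_mq_ops = {
	.queue_rq	= my_queue_rq,
};

static int my_setup_tag_set(void)
{
	my_tag_set.ops		= &my_mq_ops;
	my_tag_set.nr_hw_queues	= 1;
	my_tag_set.queue_depth	= 64;
	my_tag_set.numa_node	= NUMA_NO_NODE;
	/* extra per-request space the block core allocates for the driver */
	my_tag_set.cmd_size	= sizeof(struct my_cmd);
	return blk_mq_alloc_tag_set(&my_tag_set);
}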

 - Support for ranged discard requests and discard merging, also from
   Christoph.

 - Support for OPAL in the block layer, and for NVMe as well. Mainly
   from Scott Bauer, with fixes/updates from various other folks.

 - Error code fixup for gdrom from Christophe.

 - cciss pci irq allocation cleanup from Christoph.

 - Making the cdrom device operations read only, from Kees Cook.
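
   In practice this means a low-level driver's cdrom_device_ops table can now be
   declared const and live in read-only memory. A small sketch under that
   assumption (the driver callbacks are hypothetical):

#include <linux/cdrom.h>

static int my_cd_open(struct cdrom_device_info *cdi, int purpose)
{
	return 0;
}

static void my_cd_release(struct cdrom_device_info *cdi)
{
}

static int my_cd_drive_status(struct cdrom_device_info *cdi, int slot)
{
	return CDS_DISC_OK;
}

/*
 * With n_minors gone and ->capability const, the whole operations table
 * can be shared, read-only data. A real driver would point its
 * struct cdrom_device_info at this table before calling register_cdrom().
 */
static const struct cdrom_device_ops my_cd_dops = {
	.open		= my_cd_open,
	.release	= my_cd_release,
	.drive_status	= my_cd_drive_status,
	.capability	= CDC_OPEN_TRAY | CDC_DRIVE_STATUS,
};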

 - Fixes for duplicate bdi registrations and bdi/queue lifetime
   problems from Jan and Dan.

 - Set of fixes and updates for lightnvm, from Matias and Javier.

 - A few fixes for nbd from Josef: using an idr to name devices, and a
   workqueue deadlock fix on receive. This also marks Josef as the
   current maintainer of nbd.
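
   The idr part is the familiar pattern of letting idr_alloc() hand out the lowest
   free index, which then doubles as the device's number/name. A minimal sketch of
   that pattern (names are hypothetical, not the actual nbd code):

#include <linux/idr.h>
#include <linux/mutex.h>
#include <linux/slab.h>

static DEFINE_IDR(my_index_idr);
static DEFINE_MUTEX(my_index_mutex);

struct my_device {
	int index;	/* e.g. used to build an "nbdN"-style name */
};

static struct my_device *my_device_add(void)
{
	struct my_device *dev = kzalloc(sizeof(*dev), GFP_KERNEL);
	int index;

	if (!dev)
		return NULL;

	mutex_lock(&my_index_mutex);
	/* allocate the lowest free index, starting at 0, no upper bound */
	index = idr_alloc(&my_index_idr, dev, 0, 0, GFP_KERNEL);
	mutex_unlock(&my_index_mutex);

	if (index < 0) {
		kfree(dev);
		return NULL;
	}
	dev->index = index;
	return dev;
}

static void my_device_remove(struct my_device *dev)
{
	mutex_lock(&my_index_mutex);
	idr_remove(&my_index_idr, dev->index);
	mutex_unlock(&my_index_mutex);
	kfree(dev);
}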

 - Fix from Josef for queue settings being overwritten when the number
   of hardware queues is updated for a blk-mq device.

 - NVMe fix from Keith, ensuring that we don't repeatedly mark an IO
   aborted if we didn't end up aborting it.

 - SG gap merging fix from Ming Lei for block.

 - Loop fix also from Ming, fixing a race and crash between setting loop
   status and IO.

 - Two block race fixes from Tahsin, fixing request list iteration and
   fixing a race between device registration and udev device add
   notifications.

 - Double free fix for cgroup writeback, from Tejun.

 - Another double free fix in blkcg, from Hou Tao.

 - Partition overflow fix for EFI from Alden Tondettar.

* tag 'for-4.11/linus-merge-signed' of git://git.kernel.dk/linux-block: (156 commits)
  nvme: Check for Security send/recv support before issuing commands.
  block/sed-opal: allocate struct opal_dev dynamically
  block/sed-opal: tone down not supported warnings
  block: don't defer flushes on blk-mq + scheduling
  blk-mq-sched: ask scheduler for work, if we failed dispatching leftovers
  blk-mq: don't special case flush inserts for blk-mq-sched
  blk-mq-sched: don't add flushes to the head of requeue queue
  blk-mq: have blk_mq_dispatch_rq_list() return if we queued IO or not
  block: do not allow updates through sysfs until registration completes
  lightnvm: set default lun range when no luns are specified
  lightnvm: fix off-by-one error on target initialization
  Maintainers: Modify SED list from nvme to block
  Move stack parameters for sed_ioctl to prevent oversized stack with CONFIG_KASAN
  uapi: sed-opal fix IOW for activate lsp to use correct struct
  cdrom: Make device operations read-only
  elevator: fix loading wrong elevator type for blk-mq devices
  cciss: switch to pci_irq_alloc_vectors
  block/loop: fix race between I/O and set_status
  blk-mq-sched: don't hold queue_lock when calling exit_icq
  block: set make_request_fn manually in blk_mq_update_nr_hw_queues
  ...
Linus Torvalds 2017-02-21 10:57:33 -08:00
commit 772c8f6f3b
203 changed files with 9732 additions and 5580 deletions

View File

@ -249,7 +249,6 @@ struct& cdrom_device_ops\ \{ \hidewidth\cr
unsigned\ long);\cr
\noalign{\medskip}
&const\ int& capability;& capability flags \cr
&int& n_minors;& number of active minor devices \cr
\};\cr
}
$$
@ -258,13 +257,7 @@ it should add a function pointer to this $struct$. When a particular
function is not implemented, however, this $struct$ should contain a
NULL instead. The $capability$ flags specify the capabilities of the
\cdrom\ hardware and/or low-level \cdrom\ driver when a \cdrom\ drive
is registered with the \UCD. The value $n_minors$ should be a positive
value indicating the number of minor devices that are supported by
the low-level device driver, normally~1. Although these two variables
are `informative' rather than `operational,' they are included in
$cdrom_device_ops$ because they describe the capability of the {\em
driver\/} rather than the {\em drive}. Nomenclature has always been
difficult in computer programming.
is registered with the \UCD.
Note that most functions have fewer parameters than their
$blkdev_fops$ counterparts. This is because very little of the

View File

@ -8620,10 +8620,10 @@ S: Maintained
F: drivers/net/ethernet/netronome/
NETWORK BLOCK DEVICE (NBD)
M: Markus Pargmann <mpa@pengutronix.de>
M: Josef Bacik <jbacik@fb.com>
S: Maintained
L: linux-block@vger.kernel.org
L: nbd-general@lists.sourceforge.net
T: git git://git.pengutronix.de/git/mpa/linux-nbd.git
F: Documentation/blockdev/nbd.txt
F: drivers/block/nbd.c
F: include/uapi/linux/nbd.h
@ -11097,6 +11097,17 @@ L: linux-mmc@vger.kernel.org
S: Maintained
F: drivers/mmc/host/sdhci-spear.c
SECURE ENCRYPTING DEVICE (SED) OPAL DRIVER
M: Scott Bauer <scott.bauer@intel.com>
M: Jonathan Derrick <jonathan.derrick@intel.com>
M: Rafael Antognolli <rafael.antognolli@intel.com>
L: linux-block@vger.kernel.org
S: Supported
F: block/sed*
F: block/opal_proto.h
F: include/linux/sed*
F: include/uapi/linux/sed*
SECURITY SUBSYSTEM
M: James Morris <james.l.morris@oracle.com>
M: "Serge E. Hallyn" <serge@hallyn.com>

View File

@ -49,9 +49,13 @@ config LBDAF
If unsure, say Y.
config BLK_SCSI_REQUEST
bool
config BLK_DEV_BSG
bool "Block layer SG support v4"
default y
select BLK_SCSI_REQUEST
help
Saying Y here will enable generic SG (SCSI generic) v4 support
for any block device.
@ -71,6 +75,7 @@ config BLK_DEV_BSGLIB
bool "Block layer SG support v4 helper lib"
default n
select BLK_DEV_BSG
select BLK_SCSI_REQUEST
help
Subsystems will normally enable this if needed. Users will not
normally need to manually enable this.
@ -147,6 +152,25 @@ config BLK_WBT_MQ
Multiqueue currently doesn't have support for IO scheduling,
enabling this option is recommended.
config BLK_DEBUG_FS
bool "Block layer debugging information in debugfs"
default y
depends on DEBUG_FS
---help---
Include block layer debugging information in debugfs. This information
is mostly useful for kernel developers, but it doesn't incur any cost
at runtime.
Unless you are building a kernel for a tiny system, you should
say Y here.
config BLK_SED_OPAL
bool "Logic for interfacing with Opal enabled SEDs"
---help---
Builds Logic for interfacing with Opal enabled controllers.
Enabling this option enables users to setup/unlock/lock
Locking ranges for SED devices using the Opal protocol.
menu "Partition Types"
source "block/partitions/Kconfig"

View File

@ -63,6 +63,56 @@ config DEFAULT_IOSCHED
default "cfq" if DEFAULT_CFQ
default "noop" if DEFAULT_NOOP
config MQ_IOSCHED_DEADLINE
tristate "MQ deadline I/O scheduler"
default y
---help---
MQ version of the deadline IO scheduler.
config MQ_IOSCHED_NONE
bool
default y
choice
prompt "Default single-queue blk-mq I/O scheduler"
default DEFAULT_SQ_NONE
help
Select the I/O scheduler which will be used by default for blk-mq
managed block devices with a single queue.
config DEFAULT_SQ_DEADLINE
bool "MQ Deadline" if MQ_IOSCHED_DEADLINE=y
config DEFAULT_SQ_NONE
bool "None"
endchoice
config DEFAULT_SQ_IOSCHED
string
default "mq-deadline" if DEFAULT_SQ_DEADLINE
default "none" if DEFAULT_SQ_NONE
choice
prompt "Default multi-queue blk-mq I/O scheduler"
default DEFAULT_MQ_NONE
help
Select the I/O scheduler which will be used by default for blk-mq
managed block devices with multiple queues.
config DEFAULT_MQ_DEADLINE
bool "MQ Deadline" if MQ_IOSCHED_DEADLINE=y
config DEFAULT_MQ_NONE
bool "None"
endchoice
config DEFAULT_MQ_IOSCHED
string
default "mq-deadline" if DEFAULT_MQ_DEADLINE
default "none" if DEFAULT_MQ_NONE
endmenu
endif

View File

@ -6,11 +6,12 @@ obj-$(CONFIG_BLOCK) := bio.o elevator.o blk-core.o blk-tag.o blk-sysfs.o \
blk-flush.o blk-settings.o blk-ioc.o blk-map.o \
blk-exec.o blk-merge.o blk-softirq.o blk-timeout.o \
blk-lib.o blk-mq.o blk-mq-tag.o blk-stat.o \
blk-mq-sysfs.o blk-mq-cpumap.o ioctl.o \
genhd.o scsi_ioctl.o partition-generic.o ioprio.o \
blk-mq-sysfs.o blk-mq-cpumap.o blk-mq-sched.o ioctl.o \
genhd.o partition-generic.o ioprio.o \
badblocks.o partitions/
obj-$(CONFIG_BOUNCE) += bounce.o
obj-$(CONFIG_BOUNCE) += bounce.o
obj-$(CONFIG_BLK_SCSI_REQUEST) += scsi_ioctl.o
obj-$(CONFIG_BLK_DEV_BSG) += bsg.o
obj-$(CONFIG_BLK_DEV_BSGLIB) += bsg-lib.o
obj-$(CONFIG_BLK_CGROUP) += blk-cgroup.o
@ -18,6 +19,7 @@ obj-$(CONFIG_BLK_DEV_THROTTLING) += blk-throttle.o
obj-$(CONFIG_IOSCHED_NOOP) += noop-iosched.o
obj-$(CONFIG_IOSCHED_DEADLINE) += deadline-iosched.o
obj-$(CONFIG_IOSCHED_CFQ) += cfq-iosched.o
obj-$(CONFIG_MQ_IOSCHED_DEADLINE) += mq-deadline.o
obj-$(CONFIG_BLOCK_COMPAT) += compat_ioctl.o
obj-$(CONFIG_BLK_CMDLINE_PARSER) += cmdline-parser.o
@ -25,3 +27,5 @@ obj-$(CONFIG_BLK_DEV_INTEGRITY) += bio-integrity.o blk-integrity.o t10-pi.o
obj-$(CONFIG_BLK_MQ_PCI) += blk-mq-pci.o
obj-$(CONFIG_BLK_DEV_ZONED) += blk-zoned.o
obj-$(CONFIG_BLK_WBT) += blk-wbt.o
obj-$(CONFIG_BLK_DEBUG_FS) += blk-mq-debugfs.o
obj-$(CONFIG_BLK_SED_OPAL) += sed-opal.o

View File

@ -1227,9 +1227,6 @@ struct bio *bio_copy_user_iov(struct request_queue *q,
if (!bio)
goto out_bmd;
if (iter->type & WRITE)
bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
ret = 0;
if (map_data) {
@ -1394,16 +1391,10 @@ struct bio *bio_map_user_iov(struct request_queue *q,
kfree(pages);
/*
* set data direction, and check if mapped pages need bouncing
*/
if (iter->type & WRITE)
bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
bio_set_flag(bio, BIO_USER_MAPPED);
/*
* subtle -- if __bio_map_user() ended up bouncing a bio,
* subtle -- if bio_map_user_iov() ended up bouncing a bio,
* it would normally disappear when its bi_end_io is run.
* however, we need it for the unmap, so grab an extra
* reference to it
@ -1445,8 +1436,8 @@ static void __bio_unmap_user(struct bio *bio)
* bio_unmap_user - unmap a bio
* @bio: the bio being unmapped
*
* Unmap a bio previously mapped by bio_map_user(). Must be called with
* a process context.
* Unmap a bio previously mapped by bio_map_user_iov(). Must be called from
* process context.
*
* bio_unmap_user() may sleep.
*/
@ -1590,7 +1581,6 @@ struct bio *bio_copy_kern(struct request_queue *q, void *data, unsigned int len,
bio->bi_private = data;
} else {
bio->bi_end_io = bio_copy_kern_endio;
bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
}
return bio;

View File

@ -184,7 +184,7 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
goto err_free_blkg;
}
wb_congested = wb_congested_get_create(&q->backing_dev_info,
wb_congested = wb_congested_get_create(q->backing_dev_info,
blkcg->css.id,
GFP_NOWAIT | __GFP_NOWARN);
if (!wb_congested) {
@ -469,8 +469,8 @@ static int blkcg_reset_stats(struct cgroup_subsys_state *css,
const char *blkg_dev_name(struct blkcg_gq *blkg)
{
/* some drivers (floppy) instantiate a queue w/o disk registered */
if (blkg->q->backing_dev_info.dev)
return dev_name(blkg->q->backing_dev_info.dev);
if (blkg->q->backing_dev_info->dev)
return dev_name(blkg->q->backing_dev_info->dev);
return NULL;
}
EXPORT_SYMBOL_GPL(blkg_dev_name);
@ -1079,10 +1079,8 @@ int blkcg_init_queue(struct request_queue *q)
if (preloaded)
radix_tree_preload_end();
if (IS_ERR(blkg)) {
blkg_free(new_blkg);
if (IS_ERR(blkg))
return PTR_ERR(blkg);
}
q->root_blkg = blkg;
q->root_rl.blkg = blkg;
@ -1223,7 +1221,10 @@ int blkcg_activate_policy(struct request_queue *q,
if (blkcg_policy_enabled(q, pol))
return 0;
blk_queue_bypass_start(q);
if (q->mq_ops)
blk_mq_freeze_queue(q);
else
blk_queue_bypass_start(q);
pd_prealloc:
if (!pd_prealloc) {
pd_prealloc = pol->pd_alloc_fn(GFP_KERNEL, q->node);
@ -1261,7 +1262,10 @@ pd_prealloc:
spin_unlock_irq(q->queue_lock);
out_bypass_end:
blk_queue_bypass_end(q);
if (q->mq_ops)
blk_mq_unfreeze_queue(q);
else
blk_queue_bypass_end(q);
if (pd_prealloc)
pol->pd_free_fn(pd_prealloc);
return ret;
@ -1284,7 +1288,11 @@ void blkcg_deactivate_policy(struct request_queue *q,
if (!blkcg_policy_enabled(q, pol))
return;
blk_queue_bypass_start(q);
if (q->mq_ops)
blk_mq_freeze_queue(q);
else
blk_queue_bypass_start(q);
spin_lock_irq(q->queue_lock);
__clear_bit(pol->plid, q->blkcg_pols);
@ -1304,7 +1312,11 @@ void blkcg_deactivate_policy(struct request_queue *q,
}
spin_unlock_irq(q->queue_lock);
blk_queue_bypass_end(q);
if (q->mq_ops)
blk_mq_unfreeze_queue(q);
else
blk_queue_bypass_end(q);
}
EXPORT_SYMBOL_GPL(blkcg_deactivate_policy);

View File

@ -33,14 +33,20 @@
#include <linux/ratelimit.h>
#include <linux/pm_runtime.h>
#include <linux/blk-cgroup.h>
#include <linux/debugfs.h>
#define CREATE_TRACE_POINTS
#include <trace/events/block.h>
#include "blk.h"
#include "blk-mq.h"
#include "blk-mq-sched.h"
#include "blk-wbt.h"
#ifdef CONFIG_DEBUG_FS
struct dentry *blk_debugfs_root;
#endif
EXPORT_TRACEPOINT_SYMBOL_GPL(block_bio_remap);
EXPORT_TRACEPOINT_SYMBOL_GPL(block_rq_remap);
EXPORT_TRACEPOINT_SYMBOL_GPL(block_bio_complete);
@ -74,7 +80,7 @@ static void blk_clear_congested(struct request_list *rl, int sync)
* flip its congestion state for events on other blkcgs.
*/
if (rl == &rl->q->root_rl)
clear_wb_congested(rl->q->backing_dev_info.wb.congested, sync);
clear_wb_congested(rl->q->backing_dev_info->wb.congested, sync);
#endif
}
@ -85,7 +91,7 @@ static void blk_set_congested(struct request_list *rl, int sync)
#else
/* see blk_clear_congested() */
if (rl == &rl->q->root_rl)
set_wb_congested(rl->q->backing_dev_info.wb.congested, sync);
set_wb_congested(rl->q->backing_dev_info->wb.congested, sync);
#endif
}
@ -104,22 +110,6 @@ void blk_queue_congestion_threshold(struct request_queue *q)
q->nr_congestion_off = nr;
}
/**
* blk_get_backing_dev_info - get the address of a queue's backing_dev_info
* @bdev: device
*
* Locates the passed device's request queue and returns the address of its
* backing_dev_info. This function can only be called if @bdev is opened
* and the return value is never NULL.
*/
struct backing_dev_info *blk_get_backing_dev_info(struct block_device *bdev)
{
struct request_queue *q = bdev_get_queue(bdev);
return &q->backing_dev_info;
}
EXPORT_SYMBOL(blk_get_backing_dev_info);
void blk_rq_init(struct request_queue *q, struct request *rq)
{
memset(rq, 0, sizeof(*rq));
@ -131,9 +121,8 @@ void blk_rq_init(struct request_queue *q, struct request *rq)
rq->__sector = (sector_t) -1;
INIT_HLIST_NODE(&rq->hash);
RB_CLEAR_NODE(&rq->rb_node);
rq->cmd = rq->__cmd;
rq->cmd_len = BLK_MAX_CDB;
rq->tag = -1;
rq->internal_tag = -1;
rq->start_time = jiffies;
set_start_time_ns(rq);
rq->part = NULL;
@ -158,10 +147,8 @@ static void req_bio_endio(struct request *rq, struct bio *bio,
void blk_dump_rq_flags(struct request *rq, char *msg)
{
int bit;
printk(KERN_INFO "%s: dev %s: type=%x, flags=%llx\n", msg,
rq->rq_disk ? rq->rq_disk->disk_name : "?", rq->cmd_type,
printk(KERN_INFO "%s: dev %s: flags=%llx\n", msg,
rq->rq_disk ? rq->rq_disk->disk_name : "?",
(unsigned long long) rq->cmd_flags);
printk(KERN_INFO " sector %llu, nr/cnr %u/%u\n",
@ -169,13 +156,6 @@ void blk_dump_rq_flags(struct request *rq, char *msg)
blk_rq_sectors(rq), blk_rq_cur_sectors(rq));
printk(KERN_INFO " bio %p, biotail %p, len %u\n",
rq->bio, rq->biotail, blk_rq_bytes(rq));
if (rq->cmd_type == REQ_TYPE_BLOCK_PC) {
printk(KERN_INFO " cdb: ");
for (bit = 0; bit < BLK_MAX_CDB; bit++)
printk("%02x ", rq->cmd[bit]);
printk("\n");
}
}
EXPORT_SYMBOL(blk_dump_rq_flags);
@ -525,12 +505,14 @@ void blk_set_queue_dying(struct request_queue *q)
else {
struct request_list *rl;
spin_lock_irq(q->queue_lock);
blk_queue_for_each_rl(rl, q) {
if (rl->rq_pool) {
wake_up(&rl->wait[BLK_RW_SYNC]);
wake_up(&rl->wait[BLK_RW_ASYNC]);
}
}
spin_unlock_irq(q->queue_lock);
}
}
EXPORT_SYMBOL_GPL(blk_set_queue_dying);
@ -584,7 +566,7 @@ void blk_cleanup_queue(struct request_queue *q)
blk_flush_integrity();
/* @q won't process any more request, flush async actions */
del_timer_sync(&q->backing_dev_info.laptop_mode_wb_timer);
del_timer_sync(&q->backing_dev_info->laptop_mode_wb_timer);
blk_sync_queue(q);
if (q->mq_ops)
@ -596,7 +578,8 @@ void blk_cleanup_queue(struct request_queue *q)
q->queue_lock = &q->__queue_lock;
spin_unlock_irq(lock);
bdi_unregister(&q->backing_dev_info);
bdi_unregister(q->backing_dev_info);
put_disk_devt(q->disk_devt);
/* @q is and will stay empty, shutdown and put */
blk_put_queue(q);
@ -604,17 +587,41 @@ void blk_cleanup_queue(struct request_queue *q)
EXPORT_SYMBOL(blk_cleanup_queue);
/* Allocate memory local to the request queue */
static void *alloc_request_struct(gfp_t gfp_mask, void *data)
static void *alloc_request_simple(gfp_t gfp_mask, void *data)
{
int nid = (int)(long)data;
return kmem_cache_alloc_node(request_cachep, gfp_mask, nid);
struct request_queue *q = data;
return kmem_cache_alloc_node(request_cachep, gfp_mask, q->node);
}
static void free_request_struct(void *element, void *unused)
static void free_request_simple(void *element, void *data)
{
kmem_cache_free(request_cachep, element);
}
static void *alloc_request_size(gfp_t gfp_mask, void *data)
{
struct request_queue *q = data;
struct request *rq;
rq = kmalloc_node(sizeof(struct request) + q->cmd_size, gfp_mask,
q->node);
if (rq && q->init_rq_fn && q->init_rq_fn(q, rq, gfp_mask) < 0) {
kfree(rq);
rq = NULL;
}
return rq;
}
static void free_request_size(void *element, void *data)
{
struct request_queue *q = data;
if (q->exit_rq_fn)
q->exit_rq_fn(q, element);
kfree(element);
}
int blk_init_rl(struct request_list *rl, struct request_queue *q,
gfp_t gfp_mask)
{
@ -627,10 +634,15 @@ int blk_init_rl(struct request_list *rl, struct request_queue *q,
init_waitqueue_head(&rl->wait[BLK_RW_SYNC]);
init_waitqueue_head(&rl->wait[BLK_RW_ASYNC]);
rl->rq_pool = mempool_create_node(BLKDEV_MIN_RQ, alloc_request_struct,
free_request_struct,
(void *)(long)q->node, gfp_mask,
q->node);
if (q->cmd_size) {
rl->rq_pool = mempool_create_node(BLKDEV_MIN_RQ,
alloc_request_size, free_request_size,
q, gfp_mask, q->node);
} else {
rl->rq_pool = mempool_create_node(BLKDEV_MIN_RQ,
alloc_request_simple, free_request_simple,
q, gfp_mask, q->node);
}
if (!rl->rq_pool)
return -ENOMEM;
@ -693,7 +705,6 @@ static void blk_rq_timed_out_timer(unsigned long data)
struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
{
struct request_queue *q;
int err;
q = kmem_cache_alloc_node(blk_requestq_cachep,
gfp_mask | __GFP_ZERO, node_id);
@ -708,17 +719,17 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
if (!q->bio_split)
goto fail_id;
q->backing_dev_info.ra_pages =
(VM_MAX_READAHEAD * 1024) / PAGE_SIZE;
q->backing_dev_info.capabilities = BDI_CAP_CGROUP_WRITEBACK;
q->backing_dev_info.name = "block";
q->node = node_id;
err = bdi_init(&q->backing_dev_info);
if (err)
q->backing_dev_info = bdi_alloc_node(gfp_mask, node_id);
if (!q->backing_dev_info)
goto fail_split;
setup_timer(&q->backing_dev_info.laptop_mode_wb_timer,
q->backing_dev_info->ra_pages =
(VM_MAX_READAHEAD * 1024) / PAGE_SIZE;
q->backing_dev_info->capabilities = BDI_CAP_CGROUP_WRITEBACK;
q->backing_dev_info->name = "block";
q->node = node_id;
setup_timer(&q->backing_dev_info->laptop_mode_wb_timer,
laptop_mode_timer_fn, (unsigned long) q);
setup_timer(&q->timeout, blk_rq_timed_out_timer, (unsigned long) q);
INIT_LIST_HEAD(&q->queue_head);
@ -768,7 +779,7 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
fail_ref:
percpu_ref_exit(&q->q_usage_counter);
fail_bdi:
bdi_destroy(&q->backing_dev_info);
bdi_put(q->backing_dev_info);
fail_split:
bioset_free(q->bio_split);
fail_id:
@ -821,15 +832,19 @@ EXPORT_SYMBOL(blk_init_queue);
struct request_queue *
blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
{
struct request_queue *uninit_q, *q;
struct request_queue *q;
uninit_q = blk_alloc_queue_node(GFP_KERNEL, node_id);
if (!uninit_q)
q = blk_alloc_queue_node(GFP_KERNEL, node_id);
if (!q)
return NULL;
q = blk_init_allocated_queue(uninit_q, rfn, lock);
if (!q)
blk_cleanup_queue(uninit_q);
q->request_fn = rfn;
if (lock)
q->queue_lock = lock;
if (blk_init_allocated_queue(q) < 0) {
blk_cleanup_queue(q);
return NULL;
}
return q;
}
@ -837,30 +852,22 @@ EXPORT_SYMBOL(blk_init_queue_node);
static blk_qc_t blk_queue_bio(struct request_queue *q, struct bio *bio);
struct request_queue *
blk_init_allocated_queue(struct request_queue *q, request_fn_proc *rfn,
spinlock_t *lock)
{
if (!q)
return NULL;
q->fq = blk_alloc_flush_queue(q, NUMA_NO_NODE, 0);
int blk_init_allocated_queue(struct request_queue *q)
{
q->fq = blk_alloc_flush_queue(q, NUMA_NO_NODE, q->cmd_size);
if (!q->fq)
return NULL;
return -ENOMEM;
if (q->init_rq_fn && q->init_rq_fn(q, q->fq->flush_rq, GFP_KERNEL))
goto out_free_flush_queue;
if (blk_init_rl(&q->root_rl, q, GFP_KERNEL))
goto fail;
goto out_exit_flush_rq;
INIT_WORK(&q->timeout_work, blk_timeout_work);
q->request_fn = rfn;
q->prep_rq_fn = NULL;
q->unprep_rq_fn = NULL;
q->queue_flags |= QUEUE_FLAG_DEFAULT;
/* Override internal queue lock with supplied lock pointer */
if (lock)
q->queue_lock = lock;
/*
* This also sets hw/phys segments, boundary and size
*/
@ -874,17 +881,19 @@ blk_init_allocated_queue(struct request_queue *q, request_fn_proc *rfn,
/* init elevator */
if (elevator_init(q, NULL)) {
mutex_unlock(&q->sysfs_lock);
goto fail;
goto out_exit_flush_rq;
}
mutex_unlock(&q->sysfs_lock);
return 0;
return q;
fail:
out_exit_flush_rq:
if (q->exit_rq_fn)
q->exit_rq_fn(q, q->fq->flush_rq);
out_free_flush_queue:
blk_free_flush_queue(q->fq);
wbt_exit(q);
return NULL;
return -ENOMEM;
}
EXPORT_SYMBOL(blk_init_allocated_queue);
@ -1020,41 +1029,6 @@ int blk_update_nr_requests(struct request_queue *q, unsigned int nr)
return 0;
}
/*
* Determine if elevator data should be initialized when allocating the
* request associated with @bio.
*/
static bool blk_rq_should_init_elevator(struct bio *bio)
{
if (!bio)
return true;
/*
* Flush requests do not use the elevator so skip initialization.
* This allows a request to share the flush and elevator data.
*/
if (bio->bi_opf & (REQ_PREFLUSH | REQ_FUA))
return false;
return true;
}
/**
* rq_ioc - determine io_context for request allocation
* @bio: request being allocated is for this bio (can be %NULL)
*
* Determine io_context to use for request allocation for @bio. May return
* %NULL if %current->io_context doesn't exist.
*/
static struct io_context *rq_ioc(struct bio *bio)
{
#ifdef CONFIG_BLK_CGROUP
if (bio && bio->bi_ioc)
return bio->bi_ioc;
#endif
return current->io_context;
}
/**
* __get_request - get a free request
* @rl: request list to allocate from
@ -1133,10 +1107,13 @@ static struct request *__get_request(struct request_list *rl, unsigned int op,
* request is freed. This guarantees icq's won't be destroyed and
* makes creating new ones safe.
*
* Flush requests do not use the elevator so skip initialization.
* This allows a request to share the flush and elevator data.
*
* Also, lookup icq while holding queue_lock. If it doesn't exist,
* it will be created after releasing queue_lock.
*/
if (blk_rq_should_init_elevator(bio) && !blk_queue_bypass(q)) {
if (!op_is_flush(op) && !blk_queue_bypass(q)) {
rq_flags |= RQF_ELVPRIV;
q->nr_rqs_elvpriv++;
if (et->icq_cache && ioc)
@ -1196,7 +1173,7 @@ fail_elvpriv:
* disturb iosched and blkcg but weird is bettern than dead.
*/
printk_ratelimited(KERN_WARNING "%s: dev %s: request aux data allocation failed, iosched may be disturbed\n",
__func__, dev_name(q->backing_dev_info.dev));
__func__, dev_name(q->backing_dev_info->dev));
rq->rq_flags &= ~RQF_ELVPRIV;
rq->elv.icq = NULL;
@ -1290,8 +1267,6 @@ static struct request *blk_old_get_request(struct request_queue *q, int rw,
{
struct request *rq;
BUG_ON(rw != READ && rw != WRITE);
/* create ioc upfront */
create_io_context(gfp_mask, q->node);
@ -1320,18 +1295,6 @@ struct request *blk_get_request(struct request_queue *q, int rw, gfp_t gfp_mask)
}
EXPORT_SYMBOL(blk_get_request);
/**
* blk_rq_set_block_pc - initialize a request to type BLOCK_PC
* @rq: request to be initialized
*
*/
void blk_rq_set_block_pc(struct request *rq)
{
rq->cmd_type = REQ_TYPE_BLOCK_PC;
memset(rq->__cmd, 0, sizeof(rq->__cmd));
}
EXPORT_SYMBOL(blk_rq_set_block_pc);
/**
* blk_requeue_request - put a request back on queue
* @q: request queue where request should be inserted
@ -1522,6 +1485,30 @@ bool bio_attempt_front_merge(struct request_queue *q, struct request *req,
return true;
}
bool bio_attempt_discard_merge(struct request_queue *q, struct request *req,
struct bio *bio)
{
unsigned short segments = blk_rq_nr_discard_segments(req);
if (segments >= queue_max_discard_segments(q))
goto no_merge;
if (blk_rq_sectors(req) + bio_sectors(bio) >
blk_rq_get_max_sectors(req, blk_rq_pos(req)))
goto no_merge;
req->biotail->bi_next = bio;
req->biotail = bio;
req->__data_len += bio->bi_iter.bi_size;
req->ioprio = ioprio_best(req->ioprio, bio_prio(bio));
req->nr_phys_segments = segments + 1;
blk_account_io_start(req, false);
return true;
no_merge:
req_set_nomerge(q, req);
return false;
}
/**
* blk_attempt_plug_merge - try to merge with %current's plugged list
* @q: request_queue new bio is being queued at
@ -1550,12 +1537,11 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
{
struct blk_plug *plug;
struct request *rq;
bool ret = false;
struct list_head *plug_list;
plug = current->plug;
if (!plug)
goto out;
return false;
*request_count = 0;
if (q->mq_ops)
@ -1564,7 +1550,7 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
plug_list = &plug->list;
list_for_each_entry_reverse(rq, plug_list, queuelist) {
int el_ret;
bool merged = false;
if (rq->q == q) {
(*request_count)++;
@ -1580,19 +1566,25 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
if (rq->q != q || !blk_rq_merge_ok(rq, bio))
continue;
el_ret = blk_try_merge(rq, bio);
if (el_ret == ELEVATOR_BACK_MERGE) {
ret = bio_attempt_back_merge(q, rq, bio);
if (ret)
break;
} else if (el_ret == ELEVATOR_FRONT_MERGE) {
ret = bio_attempt_front_merge(q, rq, bio);
if (ret)
break;
switch (blk_try_merge(rq, bio)) {
case ELEVATOR_BACK_MERGE:
merged = bio_attempt_back_merge(q, rq, bio);
break;
case ELEVATOR_FRONT_MERGE:
merged = bio_attempt_front_merge(q, rq, bio);
break;
case ELEVATOR_DISCARD_MERGE:
merged = bio_attempt_discard_merge(q, rq, bio);
break;
default:
break;
}
if (merged)
return true;
}
out:
return ret;
return false;
}
unsigned int blk_plug_queued_count(struct request_queue *q)
@ -1621,7 +1613,6 @@ out:
void init_request_from_bio(struct request *req, struct bio *bio)
{
req->cmd_type = REQ_TYPE_FS;
if (bio->bi_opf & REQ_RAHEAD)
req->cmd_flags |= REQ_FAILFAST_MASK;
@ -1635,8 +1626,8 @@ void init_request_from_bio(struct request *req, struct bio *bio)
static blk_qc_t blk_queue_bio(struct request_queue *q, struct bio *bio)
{
struct blk_plug *plug;
int el_ret, where = ELEVATOR_INSERT_SORT;
struct request *req;
int where = ELEVATOR_INSERT_SORT;
struct request *req, *free;
unsigned int request_count = 0;
unsigned int wb_acct;
@ -1655,7 +1646,7 @@ static blk_qc_t blk_queue_bio(struct request_queue *q, struct bio *bio)
return BLK_QC_T_NONE;
}
if (bio->bi_opf & (REQ_PREFLUSH | REQ_FUA)) {
if (op_is_flush(bio->bi_opf)) {
spin_lock_irq(q->queue_lock);
where = ELEVATOR_INSERT_FLUSH;
goto get_rq;
@ -1673,21 +1664,29 @@ static blk_qc_t blk_queue_bio(struct request_queue *q, struct bio *bio)
spin_lock_irq(q->queue_lock);
el_ret = elv_merge(q, &req, bio);
if (el_ret == ELEVATOR_BACK_MERGE) {
if (bio_attempt_back_merge(q, req, bio)) {
elv_bio_merged(q, req, bio);
if (!attempt_back_merge(q, req))
elv_merged_request(q, req, el_ret);
goto out_unlock;
}
} else if (el_ret == ELEVATOR_FRONT_MERGE) {
if (bio_attempt_front_merge(q, req, bio)) {
elv_bio_merged(q, req, bio);
if (!attempt_front_merge(q, req))
elv_merged_request(q, req, el_ret);
goto out_unlock;
}
switch (elv_merge(q, &req, bio)) {
case ELEVATOR_BACK_MERGE:
if (!bio_attempt_back_merge(q, req, bio))
break;
elv_bio_merged(q, req, bio);
free = attempt_back_merge(q, req);
if (free)
__blk_put_request(q, free);
else
elv_merged_request(q, req, ELEVATOR_BACK_MERGE);
goto out_unlock;
case ELEVATOR_FRONT_MERGE:
if (!bio_attempt_front_merge(q, req, bio))
break;
elv_bio_merged(q, req, bio);
free = attempt_front_merge(q, req);
if (free)
__blk_put_request(q, free);
else
elv_merged_request(q, req, ELEVATOR_FRONT_MERGE);
goto out_unlock;
default:
break;
}
get_rq:
@ -1894,7 +1893,7 @@ generic_make_request_checks(struct bio *bio)
* drivers without flush support don't have to worry
* about them.
*/
if ((bio->bi_opf & (REQ_PREFLUSH | REQ_FUA)) &&
if (op_is_flush(bio->bi_opf) &&
!test_bit(QUEUE_FLAG_WC, &q->queue_flags)) {
bio->bi_opf &= ~(REQ_PREFLUSH | REQ_FUA);
if (!nr_sectors) {
@ -2143,7 +2142,7 @@ int blk_insert_cloned_request(struct request_queue *q, struct request *rq)
if (q->mq_ops) {
if (blk_queue_io_stat(q))
blk_account_io_start(rq, true);
blk_mq_insert_request(rq, false, true, false);
blk_mq_sched_insert_request(rq, false, true, false, false);
return 0;
}
@ -2159,7 +2158,7 @@ int blk_insert_cloned_request(struct request_queue *q, struct request *rq)
*/
BUG_ON(blk_queued_rq(rq));
if (rq->cmd_flags & (REQ_PREFLUSH | REQ_FUA))
if (op_is_flush(rq->cmd_flags))
where = ELEVATOR_INSERT_FLUSH;
add_acct_request(q, rq, where);
@ -2464,14 +2463,6 @@ void blk_start_request(struct request *req)
wbt_issue(req->q->rq_wb, &req->issue_stat);
}
/*
* We are now handing the request to the hardware, initialize
* resid_len to full count and add the timeout handler.
*/
req->resid_len = blk_rq_bytes(req);
if (unlikely(blk_bidi_rq(req)))
req->next_rq->resid_len = blk_rq_bytes(req->next_rq);
BUG_ON(test_bit(REQ_ATOM_COMPLETE, &req->atomic_flags));
blk_add_timer(req);
}
@ -2542,10 +2533,10 @@ bool blk_update_request(struct request *req, int error, unsigned int nr_bytes)
* TODO: tj: This is too subtle. It would be better to let
* low level drivers do what they see fit.
*/
if (req->cmd_type == REQ_TYPE_FS)
if (!blk_rq_is_passthrough(req))
req->errors = 0;
if (error && req->cmd_type == REQ_TYPE_FS &&
if (error && !blk_rq_is_passthrough(req) &&
!(req->rq_flags & RQF_QUIET)) {
char *error_type;
@ -2617,7 +2608,7 @@ bool blk_update_request(struct request *req, int error, unsigned int nr_bytes)
req->__data_len -= total_bytes;
/* update sector only for requests with clear definition of sector */
if (req->cmd_type == REQ_TYPE_FS)
if (!blk_rq_is_passthrough(req))
req->__sector += total_bytes >> 9;
/* mixed attributes always follow the first bio */
@ -2695,8 +2686,8 @@ void blk_finish_request(struct request *req, int error)
BUG_ON(blk_queued_rq(req));
if (unlikely(laptop_mode) && req->cmd_type == REQ_TYPE_FS)
laptop_io_completion(&req->q->backing_dev_info);
if (unlikely(laptop_mode) && !blk_rq_is_passthrough(req))
laptop_io_completion(req->q->backing_dev_info);
blk_delete_timer(req);
@ -3019,8 +3010,6 @@ EXPORT_SYMBOL_GPL(blk_rq_unprep_clone);
static void __blk_rq_prep_clone(struct request *dst, struct request *src)
{
dst->cpu = src->cpu;
dst->cmd_flags = src->cmd_flags | REQ_NOMERGE;
dst->cmd_type = src->cmd_type;
dst->__sector = blk_rq_pos(src);
dst->__data_len = blk_rq_bytes(src);
dst->nr_phys_segments = src->nr_phys_segments;
@ -3270,7 +3259,7 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
/*
* rq is already accounted, so use raw insert
*/
if (rq->cmd_flags & (REQ_PREFLUSH | REQ_FUA))
if (op_is_flush(rq->cmd_flags))
__elv_add_request(q, rq, ELEVATOR_INSERT_FLUSH);
else
__elv_add_request(q, rq, ELEVATOR_INSERT_SORT_MERGE);
@ -3496,5 +3485,9 @@ int __init blk_dev_init(void)
blk_requestq_cachep = kmem_cache_create("request_queue",
sizeof(struct request_queue), 0, SLAB_PANIC, NULL);
#ifdef CONFIG_DEBUG_FS
blk_debugfs_root = debugfs_create_dir("block", NULL);
#endif
return 0;
}

View File

@ -9,11 +9,7 @@
#include <linux/sched/sysctl.h>
#include "blk.h"
/*
* for max sense size
*/
#include <scsi/scsi_cmnd.h>
#include "blk-mq-sched.h"
/**
* blk_end_sync_rq - executes a completion event on a request
@ -55,7 +51,7 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;
WARN_ON(irqs_disabled());
WARN_ON(rq->cmd_type == REQ_TYPE_FS);
WARN_ON(!blk_rq_is_passthrough(rq));
rq->rq_disk = bd_disk;
rq->end_io = done;
@ -65,7 +61,7 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
* be reused after dying flag is set
*/
if (q->mq_ops) {
blk_mq_insert_request(rq, at_head, true, false);
blk_mq_sched_insert_request(rq, at_head, true, false, false);
return;
}
@ -100,16 +96,9 @@ int blk_execute_rq(struct request_queue *q, struct gendisk *bd_disk,
struct request *rq, int at_head)
{
DECLARE_COMPLETION_ONSTACK(wait);
char sense[SCSI_SENSE_BUFFERSIZE];
int err = 0;
unsigned long hang_check;
if (!rq->sense) {
memset(sense, 0, sizeof(sense));
rq->sense = sense;
rq->sense_len = 0;
}
rq->end_io_data = &wait;
blk_execute_rq_nowait(q, bd_disk, rq, at_head, blk_end_sync_rq);
@ -123,11 +112,6 @@ int blk_execute_rq(struct request_queue *q, struct gendisk *bd_disk,
if (rq->errors)
err = -EIO;
if (rq->sense == sense) {
rq->sense = NULL;
rq->sense_len = 0;
}
return err;
}
EXPORT_SYMBOL(blk_execute_rq);

View File

@ -74,6 +74,7 @@
#include "blk.h"
#include "blk-mq.h"
#include "blk-mq-tag.h"
#include "blk-mq-sched.h"
/* FLUSH/FUA sequences */
enum {
@ -296,8 +297,14 @@ static bool blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq)
if (fq->flush_pending_idx != fq->flush_running_idx || list_empty(pending))
return false;
/* C2 and C3 */
/* C2 and C3
*
* For blk-mq + scheduling, we can risk having all driver tags
* assigned to empty flushes, and we deadlock if we are expecting
* other requests to make progress. Don't defer for that case.
*/
if (!list_empty(&fq->flush_data_in_flight) &&
!(q->mq_ops && q->elevator) &&
time_before(jiffies,
fq->flush_pending_since + FLUSH_PENDING_TIMEOUT))
return false;
@ -326,7 +333,6 @@ static bool blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq)
blk_mq_tag_set_rq(hctx, first_rq->tag, flush_rq);
}
flush_rq->cmd_type = REQ_TYPE_FS;
flush_rq->cmd_flags = REQ_OP_FLUSH | REQ_PREFLUSH;
flush_rq->rq_flags |= RQF_FLUSH_SEQ;
flush_rq->rq_disk = first_rq->rq_disk;
@ -391,9 +397,10 @@ static void mq_flush_data_end_io(struct request *rq, int error)
* the comment in flush_end_io().
*/
spin_lock_irqsave(&fq->mq_flush_lock, flags);
if (blk_flush_complete_seq(rq, fq, REQ_FSEQ_DATA, error))
blk_mq_run_hw_queue(hctx, true);
blk_flush_complete_seq(rq, fq, REQ_FSEQ_DATA, error);
spin_unlock_irqrestore(&fq->mq_flush_lock, flags);
blk_mq_run_hw_queue(hctx, true);
}
/**
@ -453,9 +460,9 @@ void blk_insert_flush(struct request *rq)
*/
if ((policy & REQ_FSEQ_DATA) &&
!(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
if (q->mq_ops) {
blk_mq_insert_request(rq, false, true, false);
} else
if (q->mq_ops)
blk_mq_sched_insert_request(rq, false, true, false, false);
else
list_add_tail(&rq->queuelist, &q->queue_head);
return;
}
@ -545,11 +552,10 @@ struct blk_flush_queue *blk_alloc_flush_queue(struct request_queue *q,
if (!fq)
goto fail;
if (q->mq_ops) {
if (q->mq_ops)
spin_lock_init(&fq->mq_flush_lock);
rq_sz = round_up(rq_sz + cmd_size, cache_line_size());
}
rq_sz = round_up(rq_sz + cmd_size, cache_line_size());
fq->flush_rq = kzalloc_node(rq_sz, GFP_KERNEL, node);
if (!fq->flush_rq)
goto fail_rq;

View File

@ -443,10 +443,10 @@ void blk_integrity_revalidate(struct gendisk *disk)
return;
if (bi->profile)
disk->queue->backing_dev_info.capabilities |=
disk->queue->backing_dev_info->capabilities |=
BDI_CAP_STABLE_WRITES;
else
disk->queue->backing_dev_info.capabilities &=
disk->queue->backing_dev_info->capabilities &=
~BDI_CAP_STABLE_WRITES;
}

View File

@ -35,7 +35,10 @@ static void icq_free_icq_rcu(struct rcu_head *head)
kmem_cache_free(icq->__rcu_icq_cache, icq);
}
/* Exit an icq. Called with both ioc and q locked. */
/*
* Exit an icq. Called with both ioc and q locked for sq, only ioc locked for
* mq.
*/
static void ioc_exit_icq(struct io_cq *icq)
{
struct elevator_type *et = icq->q->elevator->type;
@ -43,8 +46,10 @@ static void ioc_exit_icq(struct io_cq *icq)
if (icq->flags & ICQ_EXITED)
return;
if (et->ops.elevator_exit_icq_fn)
et->ops.elevator_exit_icq_fn(icq);
if (et->uses_mq && et->ops.mq.exit_icq)
et->ops.mq.exit_icq(icq);
else if (!et->uses_mq && et->ops.sq.elevator_exit_icq_fn)
et->ops.sq.elevator_exit_icq_fn(icq);
icq->flags |= ICQ_EXITED;
}
@ -164,6 +169,7 @@ EXPORT_SYMBOL(put_io_context);
*/
void put_io_context_active(struct io_context *ioc)
{
struct elevator_type *et;
unsigned long flags;
struct io_cq *icq;
@ -182,13 +188,19 @@ retry:
hlist_for_each_entry(icq, &ioc->icq_list, ioc_node) {
if (icq->flags & ICQ_EXITED)
continue;
if (spin_trylock(icq->q->queue_lock)) {
et = icq->q->elevator->type;
if (et->uses_mq) {
ioc_exit_icq(icq);
spin_unlock(icq->q->queue_lock);
} else {
spin_unlock_irqrestore(&ioc->lock, flags);
cpu_relax();
goto retry;
if (spin_trylock(icq->q->queue_lock)) {
ioc_exit_icq(icq);
spin_unlock(icq->q->queue_lock);
} else {
spin_unlock_irqrestore(&ioc->lock, flags);
cpu_relax();
goto retry;
}
}
}
spin_unlock_irqrestore(&ioc->lock, flags);
@ -383,8 +395,10 @@ struct io_cq *ioc_create_icq(struct io_context *ioc, struct request_queue *q,
if (likely(!radix_tree_insert(&ioc->icq_tree, q->id, icq))) {
hlist_add_head(&icq->ioc_node, &ioc->icq_list);
list_add(&icq->q_node, &q->icq_list);
if (et->ops.elevator_init_icq_fn)
et->ops.elevator_init_icq_fn(icq);
if (et->uses_mq && et->ops.mq.init_icq)
et->ops.mq.init_icq(icq);
else if (!et->uses_mq && et->ops.sq.elevator_init_icq_fn)
et->ops.sq.elevator_init_icq_fn(icq);
} else {
kmem_cache_free(et->icq_cache, icq);
icq = ioc_lookup_icq(ioc, q);

View File

@ -16,8 +16,6 @@
int blk_rq_append_bio(struct request *rq, struct bio *bio)
{
if (!rq->bio) {
rq->cmd_flags &= REQ_OP_MASK;
rq->cmd_flags |= (bio->bi_opf & REQ_OP_MASK);
blk_rq_bio_prep(rq->q, rq, bio);
} else {
if (!ll_back_merge_fn(rq->q, rq, bio))
@ -62,6 +60,9 @@ static int __blk_rq_map_user_iov(struct request *rq,
if (IS_ERR(bio))
return PTR_ERR(bio);
bio->bi_opf &= ~REQ_OP_MASK;
bio->bi_opf |= req_op(rq);
if (map_data && map_data->null_mapped)
bio_set_flag(bio, BIO_NULL_MAPPED);
@ -90,7 +91,7 @@ static int __blk_rq_map_user_iov(struct request *rq,
}
/**
* blk_rq_map_user_iov - map user data to a request, for REQ_TYPE_BLOCK_PC usage
* blk_rq_map_user_iov - map user data to a request, for passthrough requests
* @q: request queue where request should be inserted
* @rq: request to map data to
* @map_data: pointer to the rq_map_data holding pages (if necessary)
@ -199,7 +200,7 @@ int blk_rq_unmap_user(struct bio *bio)
EXPORT_SYMBOL(blk_rq_unmap_user);
/**
* blk_rq_map_kern - map kernel data to a request, for REQ_TYPE_BLOCK_PC usage
* blk_rq_map_kern - map kernel data to a request, for passthrough requests
* @q: request queue where request should be inserted
* @rq: request to fill
* @kbuf: the kernel buffer
@ -234,8 +235,8 @@ int blk_rq_map_kern(struct request_queue *q, struct request *rq, void *kbuf,
if (IS_ERR(bio))
return PTR_ERR(bio);
if (!reading)
bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
bio->bi_opf &= ~REQ_OP_MASK;
bio->bi_opf |= req_op(rq);
if (do_copy)
rq->rq_flags |= RQF_COPY_USER;

View File

@ -482,13 +482,6 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq,
}
EXPORT_SYMBOL(blk_rq_map_sg);
static void req_set_nomerge(struct request_queue *q, struct request *req)
{
req->cmd_flags |= REQ_NOMERGE;
if (req == q->last_merge)
q->last_merge = NULL;
}
static inline int ll_new_hw_segment(struct request_queue *q,
struct request *req,
struct bio *bio)
@ -659,31 +652,32 @@ static void blk_account_io_merge(struct request *req)
}
/*
* Has to be called with the request spinlock acquired
* For non-mq, this has to be called with the request spinlock acquired.
* For mq with scheduling, the appropriate queue wide lock should be held.
*/
static int attempt_merge(struct request_queue *q, struct request *req,
struct request *next)
static struct request *attempt_merge(struct request_queue *q,
struct request *req, struct request *next)
{
if (!rq_mergeable(req) || !rq_mergeable(next))
return 0;
return NULL;
if (req_op(req) != req_op(next))
return 0;
return NULL;
/*
* not contiguous
*/
if (blk_rq_pos(req) + blk_rq_sectors(req) != blk_rq_pos(next))
return 0;
return NULL;
if (rq_data_dir(req) != rq_data_dir(next)
|| req->rq_disk != next->rq_disk
|| req_no_special_merge(next))
return 0;
return NULL;
if (req_op(req) == REQ_OP_WRITE_SAME &&
!blk_write_same_mergeable(req->bio, next->bio))
return 0;
return NULL;
/*
* If we are allowed to merge, then append bio list
@ -692,7 +686,7 @@ static int attempt_merge(struct request_queue *q, struct request *req,
* counts here.
*/
if (!ll_merge_requests_fn(q, req, next))
return 0;
return NULL;
/*
* If failfast settings disagree or any of the two is already
@ -732,42 +726,51 @@ static int attempt_merge(struct request_queue *q, struct request *req,
if (blk_rq_cpu_valid(next))
req->cpu = next->cpu;
/* owner-ship of bio passed from next to req */
/*
* ownership of bio passed from next to req, return 'next' for
* the caller to free
*/
next->bio = NULL;
__blk_put_request(q, next);
return 1;
return next;
}
int attempt_back_merge(struct request_queue *q, struct request *rq)
struct request *attempt_back_merge(struct request_queue *q, struct request *rq)
{
struct request *next = elv_latter_request(q, rq);
if (next)
return attempt_merge(q, rq, next);
return 0;
return NULL;
}
int attempt_front_merge(struct request_queue *q, struct request *rq)
struct request *attempt_front_merge(struct request_queue *q, struct request *rq)
{
struct request *prev = elv_former_request(q, rq);
if (prev)
return attempt_merge(q, prev, rq);
return 0;
return NULL;
}
int blk_attempt_req_merge(struct request_queue *q, struct request *rq,
struct request *next)
{
struct elevator_queue *e = q->elevator;
struct request *free;
if (e->type->ops.elevator_allow_rq_merge_fn)
if (!e->type->ops.elevator_allow_rq_merge_fn(q, rq, next))
if (!e->uses_mq && e->type->ops.sq.elevator_allow_rq_merge_fn)
if (!e->type->ops.sq.elevator_allow_rq_merge_fn(q, rq, next))
return 0;
return attempt_merge(q, rq, next);
free = attempt_merge(q, rq, next);
if (free) {
__blk_put_request(q, free);
return 1;
}
return 0;
}
bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
@ -798,9 +801,12 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
return true;
}
int blk_try_merge(struct request *rq, struct bio *bio)
enum elv_merge blk_try_merge(struct request *rq, struct bio *bio)
{
if (blk_rq_pos(rq) + blk_rq_sectors(rq) == bio->bi_iter.bi_sector)
if (req_op(rq) == REQ_OP_DISCARD &&
queue_max_discard_segments(rq->q) > 1)
return ELEVATOR_DISCARD_MERGE;
else if (blk_rq_pos(rq) + blk_rq_sectors(rq) == bio->bi_iter.bi_sector)
return ELEVATOR_BACK_MERGE;
else if (blk_rq_pos(rq) - bio_sectors(bio) == bio->bi_iter.bi_sector)
return ELEVATOR_FRONT_MERGE;

View File

@ -0,0 +1,772 @@
/*
* Copyright (C) 2017 Facebook
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <https://www.gnu.org/licenses/>.
*/
#include <linux/kernel.h>
#include <linux/blkdev.h>
#include <linux/debugfs.h>
#include <linux/blk-mq.h>
#include "blk.h"
#include "blk-mq.h"
#include "blk-mq-tag.h"
struct blk_mq_debugfs_attr {
const char *name;
umode_t mode;
const struct file_operations *fops;
};
static int blk_mq_debugfs_seq_open(struct inode *inode, struct file *file,
const struct seq_operations *ops)
{
struct seq_file *m;
int ret;
ret = seq_open(file, ops);
if (!ret) {
m = file->private_data;
m->private = inode->i_private;
}
return ret;
}
static int hctx_state_show(struct seq_file *m, void *v)
{
struct blk_mq_hw_ctx *hctx = m->private;
seq_printf(m, "0x%lx\n", hctx->state);
return 0;
}
static int hctx_state_open(struct inode *inode, struct file *file)
{
return single_open(file, hctx_state_show, inode->i_private);
}
static const struct file_operations hctx_state_fops = {
.open = hctx_state_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int hctx_flags_show(struct seq_file *m, void *v)
{
struct blk_mq_hw_ctx *hctx = m->private;
seq_printf(m, "0x%lx\n", hctx->flags);
return 0;
}
static int hctx_flags_open(struct inode *inode, struct file *file)
{
return single_open(file, hctx_flags_show, inode->i_private);
}
static const struct file_operations hctx_flags_fops = {
.open = hctx_flags_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int blk_mq_debugfs_rq_show(struct seq_file *m, void *v)
{
struct request *rq = list_entry_rq(v);
seq_printf(m, "%p {.cmd_flags=0x%x, .rq_flags=0x%x, .tag=%d, .internal_tag=%d}\n",
rq, rq->cmd_flags, (__force unsigned int)rq->rq_flags,
rq->tag, rq->internal_tag);
return 0;
}
static void *hctx_dispatch_start(struct seq_file *m, loff_t *pos)
__acquires(&hctx->lock)
{
struct blk_mq_hw_ctx *hctx = m->private;
spin_lock(&hctx->lock);
return seq_list_start(&hctx->dispatch, *pos);
}
static void *hctx_dispatch_next(struct seq_file *m, void *v, loff_t *pos)
{
struct blk_mq_hw_ctx *hctx = m->private;
return seq_list_next(v, &hctx->dispatch, pos);
}
static void hctx_dispatch_stop(struct seq_file *m, void *v)
__releases(&hctx->lock)
{
struct blk_mq_hw_ctx *hctx = m->private;
spin_unlock(&hctx->lock);
}
static const struct seq_operations hctx_dispatch_seq_ops = {
.start = hctx_dispatch_start,
.next = hctx_dispatch_next,
.stop = hctx_dispatch_stop,
.show = blk_mq_debugfs_rq_show,
};
static int hctx_dispatch_open(struct inode *inode, struct file *file)
{
return blk_mq_debugfs_seq_open(inode, file, &hctx_dispatch_seq_ops);
}
static const struct file_operations hctx_dispatch_fops = {
.open = hctx_dispatch_open,
.read = seq_read,
.llseek = seq_lseek,
.release = seq_release,
};
static int hctx_ctx_map_show(struct seq_file *m, void *v)
{
struct blk_mq_hw_ctx *hctx = m->private;
sbitmap_bitmap_show(&hctx->ctx_map, m);
return 0;
}
static int hctx_ctx_map_open(struct inode *inode, struct file *file)
{
return single_open(file, hctx_ctx_map_show, inode->i_private);
}
static const struct file_operations hctx_ctx_map_fops = {
.open = hctx_ctx_map_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static void blk_mq_debugfs_tags_show(struct seq_file *m,
struct blk_mq_tags *tags)
{
seq_printf(m, "nr_tags=%u\n", tags->nr_tags);
seq_printf(m, "nr_reserved_tags=%u\n", tags->nr_reserved_tags);
seq_printf(m, "active_queues=%d\n",
atomic_read(&tags->active_queues));
seq_puts(m, "\nbitmap_tags:\n");
sbitmap_queue_show(&tags->bitmap_tags, m);
if (tags->nr_reserved_tags) {
seq_puts(m, "\nbreserved_tags:\n");
sbitmap_queue_show(&tags->breserved_tags, m);
}
}
static int hctx_tags_show(struct seq_file *m, void *v)
{
struct blk_mq_hw_ctx *hctx = m->private;
struct request_queue *q = hctx->queue;
int res;
res = mutex_lock_interruptible(&q->sysfs_lock);
if (res)
goto out;
if (hctx->tags)
blk_mq_debugfs_tags_show(m, hctx->tags);
mutex_unlock(&q->sysfs_lock);
out:
return res;
}
static int hctx_tags_open(struct inode *inode, struct file *file)
{
return single_open(file, hctx_tags_show, inode->i_private);
}
static const struct file_operations hctx_tags_fops = {
.open = hctx_tags_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int hctx_tags_bitmap_show(struct seq_file *m, void *v)
{
struct blk_mq_hw_ctx *hctx = m->private;
struct request_queue *q = hctx->queue;
int res;
res = mutex_lock_interruptible(&q->sysfs_lock);
if (res)
goto out;
if (hctx->tags)
sbitmap_bitmap_show(&hctx->tags->bitmap_tags.sb, m);
mutex_unlock(&q->sysfs_lock);
out:
return res;
}
static int hctx_tags_bitmap_open(struct inode *inode, struct file *file)
{
return single_open(file, hctx_tags_bitmap_show, inode->i_private);
}
static const struct file_operations hctx_tags_bitmap_fops = {
.open = hctx_tags_bitmap_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int hctx_sched_tags_show(struct seq_file *m, void *v)
{
struct blk_mq_hw_ctx *hctx = m->private;
struct request_queue *q = hctx->queue;
int res;
res = mutex_lock_interruptible(&q->sysfs_lock);
if (res)
goto out;
if (hctx->sched_tags)
blk_mq_debugfs_tags_show(m, hctx->sched_tags);
mutex_unlock(&q->sysfs_lock);
out:
return res;
}
static int hctx_sched_tags_open(struct inode *inode, struct file *file)
{
return single_open(file, hctx_sched_tags_show, inode->i_private);
}
static const struct file_operations hctx_sched_tags_fops = {
.open = hctx_sched_tags_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int hctx_sched_tags_bitmap_show(struct seq_file *m, void *v)
{
struct blk_mq_hw_ctx *hctx = m->private;
struct request_queue *q = hctx->queue;
int res;
res = mutex_lock_interruptible(&q->sysfs_lock);
if (res)
goto out;
if (hctx->sched_tags)
sbitmap_bitmap_show(&hctx->sched_tags->bitmap_tags.sb, m);
mutex_unlock(&q->sysfs_lock);
out:
return res;
}
static int hctx_sched_tags_bitmap_open(struct inode *inode, struct file *file)
{
return single_open(file, hctx_sched_tags_bitmap_show, inode->i_private);
}
static const struct file_operations hctx_sched_tags_bitmap_fops = {
.open = hctx_sched_tags_bitmap_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int hctx_io_poll_show(struct seq_file *m, void *v)
{
struct blk_mq_hw_ctx *hctx = m->private;
seq_printf(m, "considered=%lu\n", hctx->poll_considered);
seq_printf(m, "invoked=%lu\n", hctx->poll_invoked);
seq_printf(m, "success=%lu\n", hctx->poll_success);
return 0;
}
static int hctx_io_poll_open(struct inode *inode, struct file *file)
{
return single_open(file, hctx_io_poll_show, inode->i_private);
}
static ssize_t hctx_io_poll_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
struct seq_file *m = file->private_data;
struct blk_mq_hw_ctx *hctx = m->private;
hctx->poll_considered = hctx->poll_invoked = hctx->poll_success = 0;
return count;
}
static const struct file_operations hctx_io_poll_fops = {
.open = hctx_io_poll_open,
.read = seq_read,
.write = hctx_io_poll_write,
.llseek = seq_lseek,
.release = single_release,
};
static void print_stat(struct seq_file *m, struct blk_rq_stat *stat)
{
seq_printf(m, "samples=%d, mean=%lld, min=%llu, max=%llu",
stat->nr_samples, stat->mean, stat->min, stat->max);
}
static int hctx_stats_show(struct seq_file *m, void *v)
{
struct blk_mq_hw_ctx *hctx = m->private;
struct blk_rq_stat stat[2];
blk_stat_init(&stat[BLK_STAT_READ]);
blk_stat_init(&stat[BLK_STAT_WRITE]);
blk_hctx_stat_get(hctx, stat);
seq_puts(m, "read: ");
print_stat(m, &stat[BLK_STAT_READ]);
seq_puts(m, "\n");
seq_puts(m, "write: ");
print_stat(m, &stat[BLK_STAT_WRITE]);
seq_puts(m, "\n");
return 0;
}
static int hctx_stats_open(struct inode *inode, struct file *file)
{
return single_open(file, hctx_stats_show, inode->i_private);
}
static ssize_t hctx_stats_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
struct seq_file *m = file->private_data;
struct blk_mq_hw_ctx *hctx = m->private;
struct blk_mq_ctx *ctx;
int i;
hctx_for_each_ctx(hctx, ctx, i) {
blk_stat_init(&ctx->stat[BLK_STAT_READ]);
blk_stat_init(&ctx->stat[BLK_STAT_WRITE]);
}
return count;
}
static const struct file_operations hctx_stats_fops = {
.open = hctx_stats_open,
.read = seq_read,
.write = hctx_stats_write,
.llseek = seq_lseek,
.release = single_release,
};
static int hctx_dispatched_show(struct seq_file *m, void *v)
{
struct blk_mq_hw_ctx *hctx = m->private;
int i;
seq_printf(m, "%8u\t%lu\n", 0U, hctx->dispatched[0]);
for (i = 1; i < BLK_MQ_MAX_DISPATCH_ORDER - 1; i++) {
unsigned int d = 1U << (i - 1);
seq_printf(m, "%8u\t%lu\n", d, hctx->dispatched[i]);
}
seq_printf(m, "%8u+\t%lu\n", 1U << (i - 1), hctx->dispatched[i]);
return 0;
}
static int hctx_dispatched_open(struct inode *inode, struct file *file)
{
return single_open(file, hctx_dispatched_show, inode->i_private);
}
static ssize_t hctx_dispatched_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
struct seq_file *m = file->private_data;
struct blk_mq_hw_ctx *hctx = m->private;
int i;
for (i = 0; i < BLK_MQ_MAX_DISPATCH_ORDER; i++)
hctx->dispatched[i] = 0;
return count;
}
static const struct file_operations hctx_dispatched_fops = {
.open = hctx_dispatched_open,
.read = seq_read,
.write = hctx_dispatched_write,
.llseek = seq_lseek,
.release = single_release,
};
static int hctx_queued_show(struct seq_file *m, void *v)
{
struct blk_mq_hw_ctx *hctx = m->private;
seq_printf(m, "%lu\n", hctx->queued);
return 0;
}
static int hctx_queued_open(struct inode *inode, struct file *file)
{
return single_open(file, hctx_queued_show, inode->i_private);
}
static ssize_t hctx_queued_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
struct seq_file *m = file->private_data;
struct blk_mq_hw_ctx *hctx = m->private;
hctx->queued = 0;
return count;
}
static const struct file_operations hctx_queued_fops = {
.open = hctx_queued_open,
.read = seq_read,
.write = hctx_queued_write,
.llseek = seq_lseek,
.release = single_release,
};
static int hctx_run_show(struct seq_file *m, void *v)
{
struct blk_mq_hw_ctx *hctx = m->private;
seq_printf(m, "%lu\n", hctx->run);
return 0;
}
static int hctx_run_open(struct inode *inode, struct file *file)
{
return single_open(file, hctx_run_show, inode->i_private);
}
static ssize_t hctx_run_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
struct seq_file *m = file->private_data;
struct blk_mq_hw_ctx *hctx = m->private;
hctx->run = 0;
return count;
}
static const struct file_operations hctx_run_fops = {
.open = hctx_run_open,
.read = seq_read,
.write = hctx_run_write,
.llseek = seq_lseek,
.release = single_release,
};
static int hctx_active_show(struct seq_file *m, void *v)
{
struct blk_mq_hw_ctx *hctx = m->private;
seq_printf(m, "%d\n", atomic_read(&hctx->nr_active));
return 0;
}
static int hctx_active_open(struct inode *inode, struct file *file)
{
return single_open(file, hctx_active_show, inode->i_private);
}
static const struct file_operations hctx_active_fops = {
.open = hctx_active_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static void *ctx_rq_list_start(struct seq_file *m, loff_t *pos)
__acquires(&ctx->lock)
{
struct blk_mq_ctx *ctx = m->private;
spin_lock(&ctx->lock);
return seq_list_start(&ctx->rq_list, *pos);
}
static void *ctx_rq_list_next(struct seq_file *m, void *v, loff_t *pos)
{
struct blk_mq_ctx *ctx = m->private;
return seq_list_next(v, &ctx->rq_list, pos);
}
static void ctx_rq_list_stop(struct seq_file *m, void *v)
__releases(&ctx->lock)
{
struct blk_mq_ctx *ctx = m->private;
spin_unlock(&ctx->lock);
}
static const struct seq_operations ctx_rq_list_seq_ops = {
.start = ctx_rq_list_start,
.next = ctx_rq_list_next,
.stop = ctx_rq_list_stop,
.show = blk_mq_debugfs_rq_show,
};
static int ctx_rq_list_open(struct inode *inode, struct file *file)
{
return blk_mq_debugfs_seq_open(inode, file, &ctx_rq_list_seq_ops);
}
static const struct file_operations ctx_rq_list_fops = {
.open = ctx_rq_list_open,
.read = seq_read,
.llseek = seq_lseek,
.release = seq_release,
};
static int ctx_dispatched_show(struct seq_file *m, void *v)
{
struct blk_mq_ctx *ctx = m->private;
seq_printf(m, "%lu %lu\n", ctx->rq_dispatched[1], ctx->rq_dispatched[0]);
return 0;
}
static int ctx_dispatched_open(struct inode *inode, struct file *file)
{
return single_open(file, ctx_dispatched_show, inode->i_private);
}
static ssize_t ctx_dispatched_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
struct seq_file *m = file->private_data;
struct blk_mq_ctx *ctx = m->private;
ctx->rq_dispatched[0] = ctx->rq_dispatched[1] = 0;
return count;
}
static const struct file_operations ctx_dispatched_fops = {
.open = ctx_dispatched_open,
.read = seq_read,
.write = ctx_dispatched_write,
.llseek = seq_lseek,
.release = single_release,
};
static int ctx_merged_show(struct seq_file *m, void *v)
{
struct blk_mq_ctx *ctx = m->private;
seq_printf(m, "%lu\n", ctx->rq_merged);
return 0;
}
static int ctx_merged_open(struct inode *inode, struct file *file)
{
return single_open(file, ctx_merged_show, inode->i_private);
}
static ssize_t ctx_merged_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
struct seq_file *m = file->private_data;
struct blk_mq_ctx *ctx = m->private;
ctx->rq_merged = 0;
return count;
}
static const struct file_operations ctx_merged_fops = {
.open = ctx_merged_open,
.read = seq_read,
.write = ctx_merged_write,
.llseek = seq_lseek,
.release = single_release,
};
static int ctx_completed_show(struct seq_file *m, void *v)
{
struct blk_mq_ctx *ctx = m->private;
seq_printf(m, "%lu %lu\n", ctx->rq_completed[1], ctx->rq_completed[0]);
return 0;
}
static int ctx_completed_open(struct inode *inode, struct file *file)
{
return single_open(file, ctx_completed_show, inode->i_private);
}
static ssize_t ctx_completed_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
struct seq_file *m = file->private_data;
struct blk_mq_ctx *ctx = m->private;
ctx->rq_completed[0] = ctx->rq_completed[1] = 0;
return count;
}
static const struct file_operations ctx_completed_fops = {
.open = ctx_completed_open,
.read = seq_read,
.write = ctx_completed_write,
.llseek = seq_lseek,
.release = single_release,
};
static const struct blk_mq_debugfs_attr blk_mq_debugfs_hctx_attrs[] = {
{"state", 0400, &hctx_state_fops},
{"flags", 0400, &hctx_flags_fops},
{"dispatch", 0400, &hctx_dispatch_fops},
{"ctx_map", 0400, &hctx_ctx_map_fops},
{"tags", 0400, &hctx_tags_fops},
{"tags_bitmap", 0400, &hctx_tags_bitmap_fops},
{"sched_tags", 0400, &hctx_sched_tags_fops},
{"sched_tags_bitmap", 0400, &hctx_sched_tags_bitmap_fops},
{"io_poll", 0600, &hctx_io_poll_fops},
{"stats", 0600, &hctx_stats_fops},
{"dispatched", 0600, &hctx_dispatched_fops},
{"queued", 0600, &hctx_queued_fops},
{"run", 0600, &hctx_run_fops},
{"active", 0400, &hctx_active_fops},
{},
};
static const struct blk_mq_debugfs_attr blk_mq_debugfs_ctx_attrs[] = {
{"rq_list", 0400, &ctx_rq_list_fops},
{"dispatched", 0600, &ctx_dispatched_fops},
{"merged", 0600, &ctx_merged_fops},
{"completed", 0600, &ctx_completed_fops},
{},
};
int blk_mq_debugfs_register(struct request_queue *q, const char *name)
{
if (!blk_debugfs_root)
return -ENOENT;
q->debugfs_dir = debugfs_create_dir(name, blk_debugfs_root);
if (!q->debugfs_dir)
goto err;
if (blk_mq_debugfs_register_hctxs(q))
goto err;
return 0;
err:
blk_mq_debugfs_unregister(q);
return -ENOMEM;
}
void blk_mq_debugfs_unregister(struct request_queue *q)
{
debugfs_remove_recursive(q->debugfs_dir);
q->mq_debugfs_dir = NULL;
q->debugfs_dir = NULL;
}
static bool debugfs_create_files(struct dentry *parent, void *data,
const struct blk_mq_debugfs_attr *attr)
{
for (; attr->name; attr++) {
if (!debugfs_create_file(attr->name, attr->mode, parent,
data, attr->fops))
return false;
}
return true;
}
static int blk_mq_debugfs_register_ctx(struct request_queue *q,
struct blk_mq_ctx *ctx,
struct dentry *hctx_dir)
{
struct dentry *ctx_dir;
char name[20];
snprintf(name, sizeof(name), "cpu%u", ctx->cpu);
ctx_dir = debugfs_create_dir(name, hctx_dir);
if (!ctx_dir)
return -ENOMEM;
if (!debugfs_create_files(ctx_dir, ctx, blk_mq_debugfs_ctx_attrs))
return -ENOMEM;
return 0;
}
static int blk_mq_debugfs_register_hctx(struct request_queue *q,
struct blk_mq_hw_ctx *hctx)
{
struct blk_mq_ctx *ctx;
struct dentry *hctx_dir;
char name[20];
int i;
snprintf(name, sizeof(name), "%u", hctx->queue_num);
hctx_dir = debugfs_create_dir(name, q->mq_debugfs_dir);
if (!hctx_dir)
return -ENOMEM;
if (!debugfs_create_files(hctx_dir, hctx, blk_mq_debugfs_hctx_attrs))
return -ENOMEM;
hctx_for_each_ctx(hctx, ctx, i) {
if (blk_mq_debugfs_register_ctx(q, ctx, hctx_dir))
return -ENOMEM;
}
return 0;
}
int blk_mq_debugfs_register_hctxs(struct request_queue *q)
{
struct blk_mq_hw_ctx *hctx;
int i;
if (!q->debugfs_dir)
return -ENOENT;
q->mq_debugfs_dir = debugfs_create_dir("mq", q->debugfs_dir);
if (!q->mq_debugfs_dir)
goto err;
queue_for_each_hw_ctx(q, hctx, i) {
if (blk_mq_debugfs_register_hctx(q, hctx))
goto err;
}
return 0;
err:
blk_mq_debugfs_unregister_hctxs(q);
return -ENOMEM;
}
void blk_mq_debugfs_unregister_hctxs(struct request_queue *q)
{
debugfs_remove_recursive(q->mq_debugfs_dir);
q->mq_debugfs_dir = NULL;
}
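
An aside on the pattern above (not part of the diff): debugfs_create_files() walks blk_mq_debugfs_hctx_attrs[] / blk_mq_debugfs_ctx_attrs[], so exporting one more counter is a show function, a file_operations and a table row. A minimal sketch, assuming a hypothetical per-hctx attribute named "foo" that reuses hctx->queued as a stand-in counter and follows the write-to-reset convention of "queued" and "run":

static int hctx_foo_show(struct seq_file *m, void *v)
{
	struct blk_mq_hw_ctx *hctx = m->private;

	/* "foo" is hypothetical; print whatever counter the file exposes */
	seq_printf(m, "%lu\n", hctx->queued);
	return 0;
}

static int hctx_foo_open(struct inode *inode, struct file *file)
{
	return single_open(file, hctx_foo_show, inode->i_private);
}

static ssize_t hctx_foo_write(struct file *file, const char __user *buf,
			      size_t count, loff_t *ppos)
{
	struct seq_file *m = file->private_data;
	struct blk_mq_hw_ctx *hctx = m->private;

	/* any write resets the counter, matching "queued" and "run" */
	hctx->queued = 0;
	return count;
}

static const struct file_operations hctx_foo_fops = {
	.open		= hctx_foo_open,
	.read		= seq_read,
	.write		= hctx_foo_write,
	.llseek		= seq_lseek,
	.release	= single_release,
};

/* plus one extra row in blk_mq_debugfs_hctx_attrs[]: {"foo", 0600, &hctx_foo_fops}, */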

@ -0,0 +1,515 @@
/*
* blk-mq scheduling framework
*
* Copyright (C) 2016 Jens Axboe
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/blk-mq.h>
#include <trace/events/block.h>
#include "blk.h"
#include "blk-mq.h"
#include "blk-mq-sched.h"
#include "blk-mq-tag.h"
#include "blk-wbt.h"
void blk_mq_sched_free_hctx_data(struct request_queue *q,
void (*exit)(struct blk_mq_hw_ctx *))
{
struct blk_mq_hw_ctx *hctx;
int i;
queue_for_each_hw_ctx(q, hctx, i) {
if (exit && hctx->sched_data)
exit(hctx);
kfree(hctx->sched_data);
hctx->sched_data = NULL;
}
}
EXPORT_SYMBOL_GPL(blk_mq_sched_free_hctx_data);
int blk_mq_sched_init_hctx_data(struct request_queue *q, size_t size,
int (*init)(struct blk_mq_hw_ctx *),
void (*exit)(struct blk_mq_hw_ctx *))
{
struct blk_mq_hw_ctx *hctx;
int ret;
int i;
queue_for_each_hw_ctx(q, hctx, i) {
hctx->sched_data = kmalloc_node(size, GFP_KERNEL, hctx->numa_node);
if (!hctx->sched_data) {
ret = -ENOMEM;
goto error;
}
if (init) {
ret = init(hctx);
if (ret) {
/*
* We don't want to give exit() a partially
* initialized sched_data. init() must clean up
* if it fails.
*/
kfree(hctx->sched_data);
hctx->sched_data = NULL;
goto error;
}
}
}
return 0;
error:
blk_mq_sched_free_hctx_data(q, exit);
return ret;
}
EXPORT_SYMBOL_GPL(blk_mq_sched_init_hctx_data);
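
An aside (not part of the diff): blk_mq_sched_init_hctx_data() / blk_mq_sched_free_hctx_data() let a scheduler hang private state off every hardware queue without open-coding the queue walk; the allocation is plain kmalloc_node(), so the init callback must initialize every field itself. A minimal sketch under that assumption; the foo_* names and struct are hypothetical and are reused by the later sketches in this section:

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/blk-mq.h>
#include <linux/elevator.h>

#include "blk-mq.h"
#include "blk-mq-sched.h"

struct foo_hctx_data {
	spinlock_t lock;
	struct list_head rqs;
};

static int foo_init_hctx(struct blk_mq_hw_ctx *hctx)
{
	struct foo_hctx_data *d = hctx->sched_data;	/* allocated by the helper */

	spin_lock_init(&d->lock);
	INIT_LIST_HEAD(&d->rqs);
	return 0;
}

static int foo_alloc_hctx_data(struct request_queue *q)
{
	return blk_mq_sched_init_hctx_data(q, sizeof(struct foo_hctx_data),
					   foo_init_hctx, NULL);
}

static void foo_free_hctx_data(struct request_queue *q)
{
	blk_mq_sched_free_hctx_data(q, NULL);
}
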
static void __blk_mq_sched_assign_ioc(struct request_queue *q,
struct request *rq,
struct bio *bio,
struct io_context *ioc)
{
struct io_cq *icq;
spin_lock_irq(q->queue_lock);
icq = ioc_lookup_icq(ioc, q);
spin_unlock_irq(q->queue_lock);
if (!icq) {
icq = ioc_create_icq(ioc, q, GFP_ATOMIC);
if (!icq)
return;
}
rq->elv.icq = icq;
if (!blk_mq_sched_get_rq_priv(q, rq, bio)) {
rq->rq_flags |= RQF_ELVPRIV;
get_io_context(icq->ioc);
return;
}
rq->elv.icq = NULL;
}
static void blk_mq_sched_assign_ioc(struct request_queue *q,
struct request *rq, struct bio *bio)
{
struct io_context *ioc;
ioc = rq_ioc(bio);
if (ioc)
__blk_mq_sched_assign_ioc(q, rq, bio, ioc);
}
struct request *blk_mq_sched_get_request(struct request_queue *q,
struct bio *bio,
unsigned int op,
struct blk_mq_alloc_data *data)
{
struct elevator_queue *e = q->elevator;
struct blk_mq_hw_ctx *hctx;
struct blk_mq_ctx *ctx;
struct request *rq;
blk_queue_enter_live(q);
ctx = blk_mq_get_ctx(q);
hctx = blk_mq_map_queue(q, ctx->cpu);
blk_mq_set_alloc_data(data, q, data->flags, ctx, hctx);
if (e) {
data->flags |= BLK_MQ_REQ_INTERNAL;
/*
* Flush requests are special and go directly to the
* dispatch list.
*/
if (!op_is_flush(op) && e->type->ops.mq.get_request) {
rq = e->type->ops.mq.get_request(q, op, data);
if (rq)
rq->rq_flags |= RQF_QUEUED;
} else
rq = __blk_mq_alloc_request(data, op);
} else {
rq = __blk_mq_alloc_request(data, op);
if (rq)
data->hctx->tags->rqs[rq->tag] = rq;
}
if (rq) {
if (!op_is_flush(op)) {
rq->elv.icq = NULL;
if (e && e->type->icq_cache)
blk_mq_sched_assign_ioc(q, rq, bio);
}
data->hctx->queued++;
return rq;
}
blk_queue_exit(q);
return NULL;
}
void blk_mq_sched_put_request(struct request *rq)
{
struct request_queue *q = rq->q;
struct elevator_queue *e = q->elevator;
if (rq->rq_flags & RQF_ELVPRIV) {
blk_mq_sched_put_rq_priv(rq->q, rq);
if (rq->elv.icq) {
put_io_context(rq->elv.icq->ioc);
rq->elv.icq = NULL;
}
}
if ((rq->rq_flags & RQF_QUEUED) && e && e->type->ops.mq.put_request)
e->type->ops.mq.put_request(rq);
else
blk_mq_finish_request(rq);
}
void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
{
struct elevator_queue *e = hctx->queue->elevator;
const bool has_sched_dispatch = e && e->type->ops.mq.dispatch_request;
bool did_work = false;
LIST_HEAD(rq_list);
if (unlikely(blk_mq_hctx_stopped(hctx)))
return;
hctx->run++;
/*
* If we have previous entries on our dispatch list, grab them first for
* more fair dispatch.
*/
if (!list_empty_careful(&hctx->dispatch)) {
spin_lock(&hctx->lock);
if (!list_empty(&hctx->dispatch))
list_splice_init(&hctx->dispatch, &rq_list);
spin_unlock(&hctx->lock);
}
/*
* Only ask the scheduler for requests if we didn't have residual
* requests from the dispatch list. This is to avoid the case where
* we only ever dispatch a fraction of the requests available because
* of low device queue depth. Once we pull requests out of the IO
* scheduler, we can no longer merge or sort them. So it's best to
* leave them there for as long as we can. Mark the hw queue as
* needing a restart in that case.
*/
if (!list_empty(&rq_list)) {
blk_mq_sched_mark_restart(hctx);
did_work = blk_mq_dispatch_rq_list(hctx, &rq_list);
} else if (!has_sched_dispatch) {
blk_mq_flush_busy_ctxs(hctx, &rq_list);
blk_mq_dispatch_rq_list(hctx, &rq_list);
}
/*
* We want to dispatch from the scheduler if we had no work left
* on the dispatch list, OR if we did have work but weren't able
* to make progress.
*/
if (!did_work && has_sched_dispatch) {
do {
struct request *rq;
rq = e->type->ops.mq.dispatch_request(hctx);
if (!rq)
break;
list_add(&rq->queuelist, &rq_list);
} while (blk_mq_dispatch_rq_list(hctx, &rq_list));
}
}
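
An aside (not part of the diff): the do/while above drains the scheduler one request at a time and stops as soon as it runs dry or blk_mq_dispatch_rq_list() reports that the driver can take no more. A hedged sketch of the ->dispatch_request() side this loop drives, reusing the hypothetical foo_hctx_data from the earlier sketch:

static struct request *foo_dispatch_request(struct blk_mq_hw_ctx *hctx)
{
	struct foo_hctx_data *d = hctx->sched_data;
	struct request *rq = NULL;

	spin_lock(&d->lock);
	if (!list_empty(&d->rqs)) {
		/* hand back one request per call, NULL when empty */
		rq = list_first_entry(&d->rqs, struct request, queuelist);
		list_del_init(&rq->queuelist);
	}
	spin_unlock(&d->lock);
	return rq;
}
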
void blk_mq_sched_move_to_dispatch(struct blk_mq_hw_ctx *hctx,
struct list_head *rq_list,
struct request *(*get_rq)(struct blk_mq_hw_ctx *))
{
do {
struct request *rq;
rq = get_rq(hctx);
if (!rq)
break;
list_add_tail(&rq->queuelist, rq_list);
} while (1);
}
EXPORT_SYMBOL_GPL(blk_mq_sched_move_to_dispatch);
bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio,
struct request **merged_request)
{
struct request *rq;
switch (elv_merge(q, &rq, bio)) {
case ELEVATOR_BACK_MERGE:
if (!blk_mq_sched_allow_merge(q, rq, bio))
return false;
if (!bio_attempt_back_merge(q, rq, bio))
return false;
*merged_request = attempt_back_merge(q, rq);
if (!*merged_request)
elv_merged_request(q, rq, ELEVATOR_BACK_MERGE);
return true;
case ELEVATOR_FRONT_MERGE:
if (!blk_mq_sched_allow_merge(q, rq, bio))
return false;
if (!bio_attempt_front_merge(q, rq, bio))
return false;
*merged_request = attempt_front_merge(q, rq);
if (!*merged_request)
elv_merged_request(q, rq, ELEVATOR_FRONT_MERGE);
return true;
default:
return false;
}
}
EXPORT_SYMBOL_GPL(blk_mq_sched_try_merge);
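
An aside (not part of the diff): blk_mq_sched_try_merge() does the elevator merge bookkeeping and, when a back or front merge collapses two requests into one, hands the now-redundant request back through *merged_request for the caller to free. A hedged sketch of a scheduler's ->bio_merge() hook built on it, again using the hypothetical foo_hctx_data (the mq-deadline port in this series follows the same shape):

static bool foo_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio)
{
	struct request_queue *q = hctx->queue;
	struct foo_hctx_data *d = hctx->sched_data;
	struct request *merged = NULL;
	bool ret;

	spin_lock(&d->lock);
	ret = blk_mq_sched_try_merge(q, bio, &merged);
	spin_unlock(&d->lock);

	/* a request that got merged away must be freed by the scheduler */
	if (merged)
		blk_mq_free_request(merged);
	return ret;
}
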
bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
{
struct elevator_queue *e = q->elevator;
if (e->type->ops.mq.bio_merge) {
struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu);
blk_mq_put_ctx(ctx);
return e->type->ops.mq.bio_merge(hctx, bio);
}
return false;
}
bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq)
{
return rq_mergeable(rq) && elv_attempt_insert_merge(q, rq);
}
EXPORT_SYMBOL_GPL(blk_mq_sched_try_insert_merge);
void blk_mq_sched_request_inserted(struct request *rq)
{
trace_block_rq_insert(rq->q, rq);
}
EXPORT_SYMBOL_GPL(blk_mq_sched_request_inserted);
static bool blk_mq_sched_bypass_insert(struct blk_mq_hw_ctx *hctx,
struct request *rq)
{
if (rq->tag == -1) {
rq->rq_flags |= RQF_SORTED;
return false;
}
/*
* If we already have a real request tag, send directly to
* the dispatch list.
*/
spin_lock(&hctx->lock);
list_add(&rq->queuelist, &hctx->dispatch);
spin_unlock(&hctx->lock);
return true;
}
static void blk_mq_sched_restart_hctx(struct blk_mq_hw_ctx *hctx)
{
if (test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state)) {
clear_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
if (blk_mq_hctx_has_pending(hctx))
blk_mq_run_hw_queue(hctx, true);
}
}
void blk_mq_sched_restart_queues(struct blk_mq_hw_ctx *hctx)
{
unsigned int i;
if (!(hctx->flags & BLK_MQ_F_TAG_SHARED))
blk_mq_sched_restart_hctx(hctx);
else {
struct request_queue *q = hctx->queue;
if (!test_bit(QUEUE_FLAG_RESTART, &q->queue_flags))
return;
clear_bit(QUEUE_FLAG_RESTART, &q->queue_flags);
queue_for_each_hw_ctx(q, hctx, i)
blk_mq_sched_restart_hctx(hctx);
}
}
/*
* Add flush/fua to the queue. If we fail to get a driver tag, then
* punt to the requeue list. Requeue will re-invoke us from a context
* that's safe to block from.
*/
static void blk_mq_sched_insert_flush(struct blk_mq_hw_ctx *hctx,
struct request *rq, bool can_block)
{
if (blk_mq_get_driver_tag(rq, &hctx, can_block)) {
blk_insert_flush(rq);
blk_mq_run_hw_queue(hctx, true);
} else
blk_mq_add_to_requeue_list(rq, false, true);
}
void blk_mq_sched_insert_request(struct request *rq, bool at_head,
bool run_queue, bool async, bool can_block)
{
struct request_queue *q = rq->q;
struct elevator_queue *e = q->elevator;
struct blk_mq_ctx *ctx = rq->mq_ctx;
struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu);
if (rq->tag == -1 && op_is_flush(rq->cmd_flags)) {
blk_mq_sched_insert_flush(hctx, rq, can_block);
return;
}
if (e && blk_mq_sched_bypass_insert(hctx, rq))
goto run;
if (e && e->type->ops.mq.insert_requests) {
LIST_HEAD(list);
list_add(&rq->queuelist, &list);
e->type->ops.mq.insert_requests(hctx, &list, at_head);
} else {
spin_lock(&ctx->lock);
__blk_mq_insert_request(hctx, rq, at_head);
spin_unlock(&ctx->lock);
}
run:
if (run_queue)
blk_mq_run_hw_queue(hctx, async);
}
void blk_mq_sched_insert_requests(struct request_queue *q,
struct blk_mq_ctx *ctx,
struct list_head *list, bool run_queue_async)
{
struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu);
struct elevator_queue *e = hctx->queue->elevator;
if (e) {
struct request *rq, *next;
/*
* We bypass requests that already have a driver tag assigned,
* which should only be flushes. Flushes are only ever inserted
* as single requests, so we shouldn't ever hit the
* WARN_ON_ONCE() below (but let's handle it just in case).
*/
list_for_each_entry_safe(rq, next, list, queuelist) {
if (WARN_ON_ONCE(rq->tag != -1)) {
list_del_init(&rq->queuelist);
blk_mq_sched_bypass_insert(hctx, rq);
}
}
}
if (e && e->type->ops.mq.insert_requests)
e->type->ops.mq.insert_requests(hctx, list, false);
else
blk_mq_insert_requests(hctx, ctx, list);
blk_mq_run_hw_queue(hctx, run_queue_async);
}
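
An aside (not part of the diff): the insert path above only ever hands the scheduler a list, so a ->insert_requests() hook is essentially a locked splice plus per-request bookkeeping. A hedged sketch using the hypothetical foo_hctx_data; a fuller implementation would also try blk_mq_sched_try_insert_merge() before queueing each request:

static void foo_insert_requests(struct blk_mq_hw_ctx *hctx,
				struct list_head *list, bool at_head)
{
	struct foo_hctx_data *d = hctx->sched_data;
	struct request *rq;

	spin_lock(&d->lock);
	list_for_each_entry(rq, list, queuelist)
		blk_mq_sched_request_inserted(rq);	/* blktrace hook */
	if (at_head)
		list_splice_init(list, &d->rqs);
	else
		list_splice_tail_init(list, &d->rqs);
	spin_unlock(&d->lock);
}
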
static void blk_mq_sched_free_tags(struct blk_mq_tag_set *set,
struct blk_mq_hw_ctx *hctx,
unsigned int hctx_idx)
{
if (hctx->sched_tags) {
blk_mq_free_rqs(set, hctx->sched_tags, hctx_idx);
blk_mq_free_rq_map(hctx->sched_tags);
hctx->sched_tags = NULL;
}
}
int blk_mq_sched_setup(struct request_queue *q)
{
struct blk_mq_tag_set *set = q->tag_set;
struct blk_mq_hw_ctx *hctx;
int ret, i;
/*
* Default to 256, since we don't split into sync/async like the
* old code did. Additionally, this is a per-hw queue depth.
*/
q->nr_requests = 2 * BLKDEV_MAX_RQ;
/*
* We're switching to using an IO scheduler, so setup the hctx
* scheduler tags and switch the request map from the regular
* tags to scheduler tags. First allocate what we need, so we
* can safely fail and fallback, if needed.
*/
ret = 0;
queue_for_each_hw_ctx(q, hctx, i) {
hctx->sched_tags = blk_mq_alloc_rq_map(set, i, q->nr_requests, 0);
if (!hctx->sched_tags) {
ret = -ENOMEM;
break;
}
ret = blk_mq_alloc_rqs(set, hctx->sched_tags, i, q->nr_requests);
if (ret)
break;
}
/*
* If we failed, free what we did allocate
*/
if (ret) {
queue_for_each_hw_ctx(q, hctx, i) {
if (!hctx->sched_tags)
continue;
blk_mq_sched_free_tags(set, hctx, i);
}
return ret;
}
return 0;
}
void blk_mq_sched_teardown(struct request_queue *q)
{
struct blk_mq_tag_set *set = q->tag_set;
struct blk_mq_hw_ctx *hctx;
int i;
queue_for_each_hw_ctx(q, hctx, i)
blk_mq_sched_free_tags(set, hctx, i);
}
int blk_mq_sched_init(struct request_queue *q)
{
int ret;
#if defined(CONFIG_DEFAULT_SQ_NONE)
if (q->nr_hw_queues == 1)
return 0;
#endif
#if defined(CONFIG_DEFAULT_MQ_NONE)
if (q->nr_hw_queues > 1)
return 0;
#endif
mutex_lock(&q->sysfs_lock);
ret = elevator_init(q, NULL);
mutex_unlock(&q->sysfs_lock);
return ret;
}

@ -0,0 +1,143 @@
#ifndef BLK_MQ_SCHED_H
#define BLK_MQ_SCHED_H
#include "blk-mq.h"
#include "blk-mq-tag.h"
int blk_mq_sched_init_hctx_data(struct request_queue *q, size_t size,
int (*init)(struct blk_mq_hw_ctx *),
void (*exit)(struct blk_mq_hw_ctx *));
void blk_mq_sched_free_hctx_data(struct request_queue *q,
void (*exit)(struct blk_mq_hw_ctx *));
struct request *blk_mq_sched_get_request(struct request_queue *q, struct bio *bio, unsigned int op, struct blk_mq_alloc_data *data);
void blk_mq_sched_put_request(struct request *rq);
void blk_mq_sched_request_inserted(struct request *rq);
bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio,
struct request **merged_request);
bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio);
bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq);
void blk_mq_sched_restart_queues(struct blk_mq_hw_ctx *hctx);
void blk_mq_sched_insert_request(struct request *rq, bool at_head,
bool run_queue, bool async, bool can_block);
void blk_mq_sched_insert_requests(struct request_queue *q,
struct blk_mq_ctx *ctx,
struct list_head *list, bool run_queue_async);
void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx);
void blk_mq_sched_move_to_dispatch(struct blk_mq_hw_ctx *hctx,
struct list_head *rq_list,
struct request *(*get_rq)(struct blk_mq_hw_ctx *));
int blk_mq_sched_setup(struct request_queue *q);
void blk_mq_sched_teardown(struct request_queue *q);
int blk_mq_sched_init(struct request_queue *q);
static inline bool
blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
{
struct elevator_queue *e = q->elevator;
if (!e || blk_queue_nomerges(q) || !bio_mergeable(bio))
return false;
return __blk_mq_sched_bio_merge(q, bio);
}
static inline int blk_mq_sched_get_rq_priv(struct request_queue *q,
struct request *rq,
struct bio *bio)
{
struct elevator_queue *e = q->elevator;
if (e && e->type->ops.mq.get_rq_priv)
return e->type->ops.mq.get_rq_priv(q, rq, bio);
return 0;
}
static inline void blk_mq_sched_put_rq_priv(struct request_queue *q,
struct request *rq)
{
struct elevator_queue *e = q->elevator;
if (e && e->type->ops.mq.put_rq_priv)
e->type->ops.mq.put_rq_priv(q, rq);
}
static inline bool
blk_mq_sched_allow_merge(struct request_queue *q, struct request *rq,
struct bio *bio)
{
struct elevator_queue *e = q->elevator;
if (e && e->type->ops.mq.allow_merge)
return e->type->ops.mq.allow_merge(q, rq, bio);
return true;
}
static inline void
blk_mq_sched_completed_request(struct blk_mq_hw_ctx *hctx, struct request *rq)
{
struct elevator_queue *e = hctx->queue->elevator;
if (e && e->type->ops.mq.completed_request)
e->type->ops.mq.completed_request(hctx, rq);
BUG_ON(rq->internal_tag == -1);
blk_mq_put_tag(hctx, hctx->sched_tags, rq->mq_ctx, rq->internal_tag);
}
static inline void blk_mq_sched_started_request(struct request *rq)
{
struct request_queue *q = rq->q;
struct elevator_queue *e = q->elevator;
if (e && e->type->ops.mq.started_request)
e->type->ops.mq.started_request(rq);
}
static inline void blk_mq_sched_requeue_request(struct request *rq)
{
struct request_queue *q = rq->q;
struct elevator_queue *e = q->elevator;
if (e && e->type->ops.mq.requeue_request)
e->type->ops.mq.requeue_request(rq);
}
static inline bool blk_mq_sched_has_work(struct blk_mq_hw_ctx *hctx)
{
struct elevator_queue *e = hctx->queue->elevator;
if (e && e->type->ops.mq.has_work)
return e->type->ops.mq.has_work(hctx);
return false;
}
static inline void blk_mq_sched_mark_restart(struct blk_mq_hw_ctx *hctx)
{
if (!test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state)) {
set_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
if (hctx->flags & BLK_MQ_F_TAG_SHARED) {
struct request_queue *q = hctx->queue;
if (!test_bit(QUEUE_FLAG_RESTART, &q->queue_flags))
set_bit(QUEUE_FLAG_RESTART, &q->queue_flags);
}
}
}
static inline bool blk_mq_sched_needs_restart(struct blk_mq_hw_ctx *hctx)
{
return test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
}
#endif
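
An aside (not part of the diff): every hook this header consults lives under elevator_type->ops.mq, so the hypothetical scheduler sketched earlier would be registered roughly as below, assuming the 4.11-era elevator_type layout with the uses_mq flag; its setup and teardown hooks, which would call the *_hctx_data helpers, are elided:

static bool foo_has_work(struct blk_mq_hw_ctx *hctx)
{
	struct foo_hctx_data *d = hctx->sched_data;

	return !list_empty_careful(&d->rqs);
}

static struct elevator_type foo_sched = {
	.ops.mq = {
		.insert_requests	= foo_insert_requests,
		.dispatch_request	= foo_dispatch_request,
		.bio_merge		= foo_bio_merge,
		.has_work		= foo_has_work,
		/* init_sched/exit_sched and the rq_priv hooks elided */
	},
	.uses_mq	= true,
	.elevator_name	= "foo",
	.elevator_owner	= THIS_MODULE,
};

static int __init foo_sched_init(void)
{
	return elv_register(&foo_sched);
}

static void __exit foo_sched_exit(void)
{
	elv_unregister(&foo_sched);
}

module_init(foo_sched_init);
module_exit(foo_sched_exit);

MODULE_LICENSE("GPL");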

@ -122,123 +122,16 @@ static ssize_t blk_mq_hw_sysfs_store(struct kobject *kobj,
return res;
}
static ssize_t blk_mq_sysfs_dispatched_show(struct blk_mq_ctx *ctx, char *page)
{
return sprintf(page, "%lu %lu\n", ctx->rq_dispatched[1],
ctx->rq_dispatched[0]);
}
static ssize_t blk_mq_sysfs_merged_show(struct blk_mq_ctx *ctx, char *page)
{
return sprintf(page, "%lu\n", ctx->rq_merged);
}
static ssize_t blk_mq_sysfs_completed_show(struct blk_mq_ctx *ctx, char *page)
{
return sprintf(page, "%lu %lu\n", ctx->rq_completed[1],
ctx->rq_completed[0]);
}
static ssize_t sysfs_list_show(char *page, struct list_head *list, char *msg)
{
struct request *rq;
int len = snprintf(page, PAGE_SIZE - 1, "%s:\n", msg);
list_for_each_entry(rq, list, queuelist) {
const int rq_len = 2 * sizeof(rq) + 2;
/* if the output will be truncated */
if (PAGE_SIZE - 1 < len + rq_len) {
/* backspacing if it can't hold '\t...\n' */
if (PAGE_SIZE - 1 < len + 5)
len -= rq_len;
len += snprintf(page + len, PAGE_SIZE - 1 - len,
"\t...\n");
break;
}
len += snprintf(page + len, PAGE_SIZE - 1 - len,
"\t%p\n", rq);
}
return len;
}
static ssize_t blk_mq_sysfs_rq_list_show(struct blk_mq_ctx *ctx, char *page)
{
ssize_t ret;
spin_lock(&ctx->lock);
ret = sysfs_list_show(page, &ctx->rq_list, "CTX pending");
spin_unlock(&ctx->lock);
return ret;
}
static ssize_t blk_mq_hw_sysfs_poll_show(struct blk_mq_hw_ctx *hctx, char *page)
{
return sprintf(page, "considered=%lu, invoked=%lu, success=%lu\n",
hctx->poll_considered, hctx->poll_invoked,
hctx->poll_success);
}
static ssize_t blk_mq_hw_sysfs_poll_store(struct blk_mq_hw_ctx *hctx,
const char *page, size_t size)
{
hctx->poll_considered = hctx->poll_invoked = hctx->poll_success = 0;
return size;
}
static ssize_t blk_mq_hw_sysfs_queued_show(struct blk_mq_hw_ctx *hctx,
char *page)
{
return sprintf(page, "%lu\n", hctx->queued);
}
static ssize_t blk_mq_hw_sysfs_run_show(struct blk_mq_hw_ctx *hctx, char *page)
{
return sprintf(page, "%lu\n", hctx->run);
}
static ssize_t blk_mq_hw_sysfs_dispatched_show(struct blk_mq_hw_ctx *hctx,
char *page)
{
char *start_page = page;
int i;
page += sprintf(page, "%8u\t%lu\n", 0U, hctx->dispatched[0]);
for (i = 1; i < BLK_MQ_MAX_DISPATCH_ORDER - 1; i++) {
unsigned int d = 1U << (i - 1);
page += sprintf(page, "%8u\t%lu\n", d, hctx->dispatched[i]);
}
page += sprintf(page, "%8u+\t%lu\n", 1U << (i - 1),
hctx->dispatched[i]);
return page - start_page;
}
static ssize_t blk_mq_hw_sysfs_rq_list_show(struct blk_mq_hw_ctx *hctx,
static ssize_t blk_mq_hw_sysfs_nr_tags_show(struct blk_mq_hw_ctx *hctx,
char *page)
{
ssize_t ret;
spin_lock(&hctx->lock);
ret = sysfs_list_show(page, &hctx->dispatch, "HCTX pending");
spin_unlock(&hctx->lock);
return ret;
return sprintf(page, "%u\n", hctx->tags->nr_tags);
}
static ssize_t blk_mq_hw_sysfs_tags_show(struct blk_mq_hw_ctx *hctx, char *page)
static ssize_t blk_mq_hw_sysfs_nr_reserved_tags_show(struct blk_mq_hw_ctx *hctx,
char *page)
{
return blk_mq_tag_sysfs_show(hctx->tags, page);
}
static ssize_t blk_mq_hw_sysfs_active_show(struct blk_mq_hw_ctx *hctx, char *page)
{
return sprintf(page, "%u\n", atomic_read(&hctx->nr_active));
return sprintf(page, "%u\n", hctx->tags->nr_reserved_tags);
}
static ssize_t blk_mq_hw_sysfs_cpus_show(struct blk_mq_hw_ctx *hctx, char *page)
@ -259,121 +152,27 @@ static ssize_t blk_mq_hw_sysfs_cpus_show(struct blk_mq_hw_ctx *hctx, char *page)
return ret;
}
static void blk_mq_stat_clear(struct blk_mq_hw_ctx *hctx)
{
struct blk_mq_ctx *ctx;
unsigned int i;
hctx_for_each_ctx(hctx, ctx, i) {
blk_stat_init(&ctx->stat[BLK_STAT_READ]);
blk_stat_init(&ctx->stat[BLK_STAT_WRITE]);
}
}
static ssize_t blk_mq_hw_sysfs_stat_store(struct blk_mq_hw_ctx *hctx,
const char *page, size_t count)
{
blk_mq_stat_clear(hctx);
return count;
}
static ssize_t print_stat(char *page, struct blk_rq_stat *stat, const char *pre)
{
return sprintf(page, "%s samples=%llu, mean=%lld, min=%lld, max=%lld\n",
pre, (long long) stat->nr_samples,
(long long) stat->mean, (long long) stat->min,
(long long) stat->max);
}
static ssize_t blk_mq_hw_sysfs_stat_show(struct blk_mq_hw_ctx *hctx, char *page)
{
struct blk_rq_stat stat[2];
ssize_t ret;
blk_stat_init(&stat[BLK_STAT_READ]);
blk_stat_init(&stat[BLK_STAT_WRITE]);
blk_hctx_stat_get(hctx, stat);
ret = print_stat(page, &stat[BLK_STAT_READ], "read :");
ret += print_stat(page + ret, &stat[BLK_STAT_WRITE], "write:");
return ret;
}
static struct blk_mq_ctx_sysfs_entry blk_mq_sysfs_dispatched = {
.attr = {.name = "dispatched", .mode = S_IRUGO },
.show = blk_mq_sysfs_dispatched_show,
};
static struct blk_mq_ctx_sysfs_entry blk_mq_sysfs_merged = {
.attr = {.name = "merged", .mode = S_IRUGO },
.show = blk_mq_sysfs_merged_show,
};
static struct blk_mq_ctx_sysfs_entry blk_mq_sysfs_completed = {
.attr = {.name = "completed", .mode = S_IRUGO },
.show = blk_mq_sysfs_completed_show,
};
static struct blk_mq_ctx_sysfs_entry blk_mq_sysfs_rq_list = {
.attr = {.name = "rq_list", .mode = S_IRUGO },
.show = blk_mq_sysfs_rq_list_show,
};
static struct attribute *default_ctx_attrs[] = {
&blk_mq_sysfs_dispatched.attr,
&blk_mq_sysfs_merged.attr,
&blk_mq_sysfs_completed.attr,
&blk_mq_sysfs_rq_list.attr,
NULL,
};
static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_queued = {
.attr = {.name = "queued", .mode = S_IRUGO },
.show = blk_mq_hw_sysfs_queued_show,
static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_nr_tags = {
.attr = {.name = "nr_tags", .mode = S_IRUGO },
.show = blk_mq_hw_sysfs_nr_tags_show,
};
static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_run = {
.attr = {.name = "run", .mode = S_IRUGO },
.show = blk_mq_hw_sysfs_run_show,
};
static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_dispatched = {
.attr = {.name = "dispatched", .mode = S_IRUGO },
.show = blk_mq_hw_sysfs_dispatched_show,
};
static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_active = {
.attr = {.name = "active", .mode = S_IRUGO },
.show = blk_mq_hw_sysfs_active_show,
};
static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_pending = {
.attr = {.name = "pending", .mode = S_IRUGO },
.show = blk_mq_hw_sysfs_rq_list_show,
};
static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_tags = {
.attr = {.name = "tags", .mode = S_IRUGO },
.show = blk_mq_hw_sysfs_tags_show,
static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_nr_reserved_tags = {
.attr = {.name = "nr_reserved_tags", .mode = S_IRUGO },
.show = blk_mq_hw_sysfs_nr_reserved_tags_show,
};
static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_cpus = {
.attr = {.name = "cpu_list", .mode = S_IRUGO },
.show = blk_mq_hw_sysfs_cpus_show,
};
static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_poll = {
.attr = {.name = "io_poll", .mode = S_IWUSR | S_IRUGO },
.show = blk_mq_hw_sysfs_poll_show,
.store = blk_mq_hw_sysfs_poll_store,
};
static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_stat = {
.attr = {.name = "stats", .mode = S_IRUGO | S_IWUSR },
.show = blk_mq_hw_sysfs_stat_show,
.store = blk_mq_hw_sysfs_stat_store,
};
static struct attribute *default_hw_ctx_attrs[] = {
&blk_mq_hw_sysfs_queued.attr,
&blk_mq_hw_sysfs_run.attr,
&blk_mq_hw_sysfs_dispatched.attr,
&blk_mq_hw_sysfs_pending.attr,
&blk_mq_hw_sysfs_tags.attr,
&blk_mq_hw_sysfs_nr_tags.attr,
&blk_mq_hw_sysfs_nr_reserved_tags.attr,
&blk_mq_hw_sysfs_cpus.attr,
&blk_mq_hw_sysfs_active.attr,
&blk_mq_hw_sysfs_poll.attr,
&blk_mq_hw_sysfs_stat.attr,
NULL,
};
@ -455,6 +254,8 @@ static void __blk_mq_unregister_dev(struct device *dev, struct request_queue *q)
kobject_put(&hctx->kobj);
}
blk_mq_debugfs_unregister_hctxs(q);
kobject_uevent(&q->mq_kobj, KOBJ_REMOVE);
kobject_del(&q->mq_kobj);
kobject_put(&q->mq_kobj);
@ -504,6 +305,8 @@ int blk_mq_register_dev(struct device *dev, struct request_queue *q)
kobject_uevent(&q->mq_kobj, KOBJ_ADD);
blk_mq_debugfs_register(q, kobject_name(&dev->kobj));
queue_for_each_hw_ctx(q, hctx, i) {
ret = blk_mq_register_hctx(hctx);
if (ret)
@ -529,6 +332,8 @@ void blk_mq_sysfs_unregister(struct request_queue *q)
if (!q->mq_sysfs_init_done)
return;
blk_mq_debugfs_unregister_hctxs(q);
queue_for_each_hw_ctx(q, hctx, i)
blk_mq_unregister_hctx(hctx);
}
@ -541,6 +346,8 @@ int blk_mq_sysfs_register(struct request_queue *q)
if (!q->mq_sysfs_init_done)
return ret;
blk_mq_debugfs_register_hctxs(q);
queue_for_each_hw_ctx(q, hctx, i) {
ret = blk_mq_register_hctx(hctx);
if (ret)

@ -90,113 +90,97 @@ static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx,
return atomic_read(&hctx->nr_active) < depth;
}
static int __bt_get(struct blk_mq_hw_ctx *hctx, struct sbitmap_queue *bt)
static int __blk_mq_get_tag(struct blk_mq_alloc_data *data,
struct sbitmap_queue *bt)
{
if (!hctx_may_queue(hctx, bt))
if (!(data->flags & BLK_MQ_REQ_INTERNAL) &&
!hctx_may_queue(data->hctx, bt))
return -1;
return __sbitmap_queue_get(bt);
}
static int bt_get(struct blk_mq_alloc_data *data, struct sbitmap_queue *bt,
struct blk_mq_hw_ctx *hctx, struct blk_mq_tags *tags)
unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
{
struct blk_mq_tags *tags = blk_mq_tags_from_data(data);
struct sbitmap_queue *bt;
struct sbq_wait_state *ws;
DEFINE_WAIT(wait);
unsigned int tag_offset;
bool drop_ctx;
int tag;
tag = __bt_get(hctx, bt);
if (data->flags & BLK_MQ_REQ_RESERVED) {
if (unlikely(!tags->nr_reserved_tags)) {
WARN_ON_ONCE(1);
return BLK_MQ_TAG_FAIL;
}
bt = &tags->breserved_tags;
tag_offset = 0;
} else {
bt = &tags->bitmap_tags;
tag_offset = tags->nr_reserved_tags;
}
tag = __blk_mq_get_tag(data, bt);
if (tag != -1)
return tag;
goto found_tag;
if (data->flags & BLK_MQ_REQ_NOWAIT)
return -1;
return BLK_MQ_TAG_FAIL;
ws = bt_wait_ptr(bt, hctx);
ws = bt_wait_ptr(bt, data->hctx);
drop_ctx = data->ctx == NULL;
do {
prepare_to_wait(&ws->wait, &wait, TASK_UNINTERRUPTIBLE);
tag = __bt_get(hctx, bt);
tag = __blk_mq_get_tag(data, bt);
if (tag != -1)
break;
/*
* We're out of tags on this hardware queue, kick any
* pending IO submits before going to sleep waiting for
* some to complete. Note that hctx can be NULL here for
* reserved tag allocation.
* some to complete.
*/
if (hctx)
blk_mq_run_hw_queue(hctx, false);
blk_mq_run_hw_queue(data->hctx, false);
/*
* Retry tag allocation after running the hardware queue,
* as running the queue may also have found completions.
*/
tag = __bt_get(hctx, bt);
tag = __blk_mq_get_tag(data, bt);
if (tag != -1)
break;
blk_mq_put_ctx(data->ctx);
if (data->ctx)
blk_mq_put_ctx(data->ctx);
io_schedule();
data->ctx = blk_mq_get_ctx(data->q);
data->hctx = blk_mq_map_queue(data->q, data->ctx->cpu);
if (data->flags & BLK_MQ_REQ_RESERVED) {
bt = &data->hctx->tags->breserved_tags;
} else {
hctx = data->hctx;
bt = &hctx->tags->bitmap_tags;
}
tags = blk_mq_tags_from_data(data);
if (data->flags & BLK_MQ_REQ_RESERVED)
bt = &tags->breserved_tags;
else
bt = &tags->bitmap_tags;
finish_wait(&ws->wait, &wait);
ws = bt_wait_ptr(bt, hctx);
ws = bt_wait_ptr(bt, data->hctx);
} while (1);
if (drop_ctx && data->ctx)
blk_mq_put_ctx(data->ctx);
finish_wait(&ws->wait, &wait);
return tag;
found_tag:
return tag + tag_offset;
}
static unsigned int __blk_mq_get_tag(struct blk_mq_alloc_data *data)
void blk_mq_put_tag(struct blk_mq_hw_ctx *hctx, struct blk_mq_tags *tags,
struct blk_mq_ctx *ctx, unsigned int tag)
{
int tag;
tag = bt_get(data, &data->hctx->tags->bitmap_tags, data->hctx,
data->hctx->tags);
if (tag >= 0)
return tag + data->hctx->tags->nr_reserved_tags;
return BLK_MQ_TAG_FAIL;
}
static unsigned int __blk_mq_get_reserved_tag(struct blk_mq_alloc_data *data)
{
int tag;
if (unlikely(!data->hctx->tags->nr_reserved_tags)) {
WARN_ON_ONCE(1);
return BLK_MQ_TAG_FAIL;
}
tag = bt_get(data, &data->hctx->tags->breserved_tags, NULL,
data->hctx->tags);
if (tag < 0)
return BLK_MQ_TAG_FAIL;
return tag;
}
unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
{
if (data->flags & BLK_MQ_REQ_RESERVED)
return __blk_mq_get_reserved_tag(data);
return __blk_mq_get_tag(data);
}
void blk_mq_put_tag(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
unsigned int tag)
{
struct blk_mq_tags *tags = hctx->tags;
if (tag >= tags->nr_reserved_tags) {
const int real_tag = tag - tags->nr_reserved_tags;
@ -312,11 +296,11 @@ int blk_mq_reinit_tagset(struct blk_mq_tag_set *set)
struct blk_mq_tags *tags = set->tags[i];
for (j = 0; j < tags->nr_tags; j++) {
if (!tags->rqs[j])
if (!tags->static_rqs[j])
continue;
ret = set->ops->reinit_request(set->driver_data,
tags->rqs[j]);
tags->static_rqs[j]);
if (ret)
goto out;
}
@ -351,11 +335,6 @@ void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
}
static unsigned int bt_unused_tags(const struct sbitmap_queue *bt)
{
return bt->sb.depth - sbitmap_weight(&bt->sb);
}
static int bt_alloc(struct sbitmap_queue *bt, unsigned int depth,
bool round_robin, int node)
{
@ -411,19 +390,56 @@ void blk_mq_free_tags(struct blk_mq_tags *tags)
kfree(tags);
}
int blk_mq_tag_update_depth(struct blk_mq_tags *tags, unsigned int tdepth)
int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
struct blk_mq_tags **tagsptr, unsigned int tdepth,
bool can_grow)
{
tdepth -= tags->nr_reserved_tags;
if (tdepth > tags->nr_tags)
struct blk_mq_tags *tags = *tagsptr;
if (tdepth <= tags->nr_reserved_tags)
return -EINVAL;
/*
* Don't need (or can't) update reserved tags here, they remain
* static and should never need resizing.
*/
sbitmap_queue_resize(&tags->bitmap_tags, tdepth);
tdepth -= tags->nr_reserved_tags;
/*
* If we are allowed to grow beyond the original size, allocate
* a new set of tags before freeing the old one.
*/
if (tdepth > tags->nr_tags) {
struct blk_mq_tag_set *set = hctx->queue->tag_set;
struct blk_mq_tags *new;
bool ret;
if (!can_grow)
return -EINVAL;
/*
* We need some sort of upper limit; set it high enough that
* no valid use cases should require more.
*/
if (tdepth > 16 * BLKDEV_MAX_RQ)
return -EINVAL;
new = blk_mq_alloc_rq_map(set, hctx->queue_num, tdepth, 0);
if (!new)
return -ENOMEM;
ret = blk_mq_alloc_rqs(set, new, hctx->queue_num, tdepth);
if (ret) {
blk_mq_free_rq_map(new);
return -ENOMEM;
}
blk_mq_free_rqs(set, *tagsptr, hctx->queue_num);
blk_mq_free_rq_map(*tagsptr);
*tagsptr = new;
} else {
/*
* Don't need (or can't) update reserved tags here, they
* remain static and should never need resizing.
*/
sbitmap_queue_resize(&tags->bitmap_tags, tdepth);
}
blk_mq_tag_wakeup_all(tags, false);
return 0;
}
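
An aside (not part of the diff): blk_mq_tag_update_depth() now takes the hctx and a pointer to the tags pointer because growing past the original depth means allocating a whole new rq map. A hedged sketch of the shape a caller takes; the real caller is blk_mq_update_nr_requests() in blk-mq.c, which is not shown in this excerpt. Scheduler tags may grow, driver tags may only be resized within their original allocation:

/* hypothetical caller shape, not the actual blk-mq.c change */
static int foo_update_nr_requests(struct request_queue *q, unsigned int nr)
{
	struct blk_mq_hw_ctx *hctx;
	int i, ret = 0;

	queue_for_each_hw_ctx(q, hctx, i) {
		if (hctx->sched_tags)
			ret = blk_mq_tag_update_depth(hctx, &hctx->sched_tags,
						      nr, true);
		else
			ret = blk_mq_tag_update_depth(hctx, &hctx->tags,
						      nr, false);
		if (ret)
			break;
	}
	return ret;
}
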
@ -454,25 +470,3 @@ u32 blk_mq_unique_tag(struct request *rq)
(rq->tag & BLK_MQ_UNIQUE_TAG_MASK);
}
EXPORT_SYMBOL(blk_mq_unique_tag);
ssize_t blk_mq_tag_sysfs_show(struct blk_mq_tags *tags, char *page)
{
char *orig_page = page;
unsigned int free, res;
if (!tags)
return 0;
page += sprintf(page, "nr_tags=%u, reserved_tags=%u, "
"bits_per_word=%u\n",
tags->nr_tags, tags->nr_reserved_tags,
1U << tags->bitmap_tags.sb.shift);
free = bt_unused_tags(&tags->bitmap_tags);
res = bt_unused_tags(&tags->breserved_tags);
page += sprintf(page, "nr_free=%u, nr_reserved=%u\n", free, res);
page += sprintf(page, "active_queues=%u\n", atomic_read(&tags->active_queues));
return page - orig_page;
}

@ -16,6 +16,7 @@ struct blk_mq_tags {
struct sbitmap_queue breserved_tags;
struct request **rqs;
struct request **static_rqs;
struct list_head page_list;
};
@ -24,11 +25,12 @@ extern struct blk_mq_tags *blk_mq_init_tags(unsigned int nr_tags, unsigned int r
extern void blk_mq_free_tags(struct blk_mq_tags *tags);
extern unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data);
extern void blk_mq_put_tag(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
unsigned int tag);
extern void blk_mq_put_tag(struct blk_mq_hw_ctx *hctx, struct blk_mq_tags *tags,
struct blk_mq_ctx *ctx, unsigned int tag);
extern bool blk_mq_has_free_tags(struct blk_mq_tags *tags);
extern ssize_t blk_mq_tag_sysfs_show(struct blk_mq_tags *tags, char *page);
extern int blk_mq_tag_update_depth(struct blk_mq_tags *tags, unsigned int depth);
extern int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
struct blk_mq_tags **tags,
unsigned int depth, bool can_grow);
extern void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool);
void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
void *priv);

File diff suppressed because it is too large.

@ -32,7 +32,31 @@ void blk_mq_free_queue(struct request_queue *q);
int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr);
void blk_mq_wake_waiters(struct request_queue *q);
bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *, struct list_head *);
void blk_mq_flush_busy_ctxs(struct blk_mq_hw_ctx *hctx, struct list_head *list);
bool blk_mq_hctx_has_pending(struct blk_mq_hw_ctx *hctx);
bool blk_mq_get_driver_tag(struct request *rq, struct blk_mq_hw_ctx **hctx,
bool wait);
/*
* Internal helpers for allocating/freeing the request map
*/
void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
unsigned int hctx_idx);
void blk_mq_free_rq_map(struct blk_mq_tags *tags);
struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set,
unsigned int hctx_idx,
unsigned int nr_tags,
unsigned int reserved_tags);
int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
unsigned int hctx_idx, unsigned int depth);
/*
* Internal helpers for request insertion into sw queues
*/
void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
bool at_head);
void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
struct list_head *list);
/*
* CPU hotplug helpers
*/
@ -57,6 +81,35 @@ extern int blk_mq_sysfs_register(struct request_queue *q);
extern void blk_mq_sysfs_unregister(struct request_queue *q);
extern void blk_mq_hctx_kobj_init(struct blk_mq_hw_ctx *hctx);
/*
* debugfs helpers
*/
#ifdef CONFIG_BLK_DEBUG_FS
int blk_mq_debugfs_register(struct request_queue *q, const char *name);
void blk_mq_debugfs_unregister(struct request_queue *q);
int blk_mq_debugfs_register_hctxs(struct request_queue *q);
void blk_mq_debugfs_unregister_hctxs(struct request_queue *q);
#else
static inline int blk_mq_debugfs_register(struct request_queue *q,
const char *name)
{
return 0;
}
static inline void blk_mq_debugfs_unregister(struct request_queue *q)
{
}
static inline int blk_mq_debugfs_register_hctxs(struct request_queue *q)
{
return 0;
}
static inline void blk_mq_debugfs_unregister_hctxs(struct request_queue *q)
{
}
#endif
extern void blk_mq_rq_timed_out(struct request *req, bool reserved);
void blk_mq_release(struct request_queue *q);
@ -103,6 +156,25 @@ static inline void blk_mq_set_alloc_data(struct blk_mq_alloc_data *data,
data->hctx = hctx;
}
static inline struct blk_mq_tags *blk_mq_tags_from_data(struct blk_mq_alloc_data *data)
{
if (data->flags & BLK_MQ_REQ_INTERNAL)
return data->hctx->sched_tags;
return data->hctx->tags;
}
/*
* Internal helpers for request allocation/init/free
*/
void blk_mq_rq_ctx_init(struct request_queue *q, struct blk_mq_ctx *ctx,
struct request *rq, unsigned int op);
void __blk_mq_finish_request(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
struct request *rq);
void blk_mq_finish_request(struct request *rq);
struct request *__blk_mq_alloc_request(struct blk_mq_alloc_data *data,
unsigned int op);
static inline bool blk_mq_hctx_stopped(struct blk_mq_hw_ctx *hctx)
{
return test_bit(BLK_MQ_S_STOPPED, &hctx->state);

@ -88,6 +88,7 @@ EXPORT_SYMBOL_GPL(blk_queue_lld_busy);
void blk_set_default_limits(struct queue_limits *lim)
{
lim->max_segments = BLK_MAX_SEGMENTS;
lim->max_discard_segments = 1;
lim->max_integrity_segments = 0;
lim->seg_boundary_mask = BLK_SEG_BOUNDARY_MASK;
lim->virt_boundary_mask = 0;
@ -128,6 +129,7 @@ void blk_set_stacking_limits(struct queue_limits *lim)
/* Inherit limits from component devices */
lim->discard_zeroes_data = 1;
lim->max_segments = USHRT_MAX;
lim->max_discard_segments = 1;
lim->max_hw_sectors = UINT_MAX;
lim->max_segment_size = UINT_MAX;
lim->max_sectors = UINT_MAX;
@ -253,7 +255,7 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto
max_sectors = min_not_zero(max_hw_sectors, limits->max_dev_sectors);
max_sectors = min_t(unsigned int, max_sectors, BLK_DEF_MAX_SECTORS);
limits->max_sectors = max_sectors;
q->backing_dev_info.io_pages = max_sectors >> (PAGE_SHIFT - 9);
q->backing_dev_info->io_pages = max_sectors >> (PAGE_SHIFT - 9);
}
EXPORT_SYMBOL(blk_queue_max_hw_sectors);
@ -336,6 +338,22 @@ void blk_queue_max_segments(struct request_queue *q, unsigned short max_segments
}
EXPORT_SYMBOL(blk_queue_max_segments);
/**
* blk_queue_max_discard_segments - set max segments for discard requests
* @q: the request queue for the device
* @max_segments: max number of segments
*
* Description:
* Enables a low level driver to set an upper limit on the number of
* segments in a discard request.
**/
void blk_queue_max_discard_segments(struct request_queue *q,
unsigned short max_segments)
{
q->limits.max_discard_segments = max_segments;
}
EXPORT_SYMBOL_GPL(blk_queue_max_discard_segments);
/**
* blk_queue_max_segment_size - set max segment size for blk_rq_map_sg
* @q: the request queue for the device
@ -553,6 +571,8 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
b->virt_boundary_mask);
t->max_segments = min_not_zero(t->max_segments, b->max_segments);
t->max_discard_segments = min_not_zero(t->max_discard_segments,
b->max_discard_segments);
t->max_integrity_segments = min_not_zero(t->max_integrity_segments,
b->max_integrity_segments);
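
An aside (not part of the diff): for a low-level driver the new limit sits next to the existing discard settings, and the blk_stack_limits() hunk above makes stacked devices inherit the most restrictive value. A hedged sketch of the setup call; foo_* is hypothetical and the "8" is a made-up hardware limit:

static void foo_config_discard(struct request_queue *q)
{
	queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, q);
	blk_queue_max_discard_sectors(q, UINT_MAX);
	/* hardware accepts up to 8 ranges per discard command */
	blk_queue_max_discard_segments(q, 8);
}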

@ -89,7 +89,7 @@ queue_requests_store(struct request_queue *q, const char *page, size_t count)
static ssize_t queue_ra_show(struct request_queue *q, char *page)
{
unsigned long ra_kb = q->backing_dev_info.ra_pages <<
unsigned long ra_kb = q->backing_dev_info->ra_pages <<
(PAGE_SHIFT - 10);
return queue_var_show(ra_kb, (page));
@ -104,7 +104,7 @@ queue_ra_store(struct request_queue *q, const char *page, size_t count)
if (ret < 0)
return ret;
q->backing_dev_info.ra_pages = ra_kb >> (PAGE_SHIFT - 10);
q->backing_dev_info->ra_pages = ra_kb >> (PAGE_SHIFT - 10);
return ret;
}
@ -121,6 +121,12 @@ static ssize_t queue_max_segments_show(struct request_queue *q, char *page)
return queue_var_show(queue_max_segments(q), (page));
}
static ssize_t queue_max_discard_segments_show(struct request_queue *q,
char *page)
{
return queue_var_show(queue_max_discard_segments(q), (page));
}
static ssize_t queue_max_integrity_segments_show(struct request_queue *q, char *page)
{
return queue_var_show(q->limits.max_integrity_segments, (page));
@ -236,7 +242,7 @@ queue_max_sectors_store(struct request_queue *q, const char *page, size_t count)
spin_lock_irq(q->queue_lock);
q->limits.max_sectors = max_sectors_kb << 1;
q->backing_dev_info.io_pages = max_sectors_kb >> (PAGE_SHIFT - 10);
q->backing_dev_info->io_pages = max_sectors_kb >> (PAGE_SHIFT - 10);
spin_unlock_irq(q->queue_lock);
return ret;
@ -545,6 +551,11 @@ static struct queue_sysfs_entry queue_max_segments_entry = {
.show = queue_max_segments_show,
};
static struct queue_sysfs_entry queue_max_discard_segments_entry = {
.attr = {.name = "max_discard_segments", .mode = S_IRUGO },
.show = queue_max_discard_segments_show,
};
static struct queue_sysfs_entry queue_max_integrity_segments_entry = {
.attr = {.name = "max_integrity_segments", .mode = S_IRUGO },
.show = queue_max_integrity_segments_show,
@ -697,6 +708,7 @@ static struct attribute *default_attrs[] = {
&queue_max_hw_sectors_entry.attr,
&queue_max_sectors_entry.attr,
&queue_max_segments_entry.attr,
&queue_max_discard_segments_entry.attr,
&queue_max_integrity_segments_entry.attr,
&queue_max_segment_size_entry.attr,
&queue_iosched_entry.attr,
@ -799,7 +811,7 @@ static void blk_release_queue(struct kobject *kobj)
container_of(kobj, struct request_queue, kobj);
wbt_exit(q);
bdi_exit(&q->backing_dev_info);
bdi_put(q->backing_dev_info);
blkcg_exit_queue(q);
if (q->elevator) {
@ -814,13 +826,19 @@ static void blk_release_queue(struct kobject *kobj)
if (q->queue_tags)
__blk_queue_free_tags(q);
if (!q->mq_ops)
if (!q->mq_ops) {
if (q->exit_rq_fn)
q->exit_rq_fn(q, q->fq->flush_rq);
blk_free_flush_queue(q->fq);
else
} else {
blk_mq_release(q);
}
blk_trace_shutdown(q);
if (q->mq_ops)
blk_mq_debugfs_unregister(q);
if (q->bio_split)
bioset_free(q->bio_split);
@ -884,32 +902,36 @@ int blk_register_queue(struct gendisk *disk)
if (ret)
return ret;
if (q->mq_ops)
blk_mq_register_dev(dev, q);
/* Prevent changes through sysfs until registration is completed. */
mutex_lock(&q->sysfs_lock);
ret = kobject_add(&q->kobj, kobject_get(&dev->kobj), "%s", "queue");
if (ret < 0) {
blk_trace_remove_sysfs(dev);
return ret;
goto unlock;
}
kobject_uevent(&q->kobj, KOBJ_ADD);
if (q->mq_ops)
blk_mq_register_dev(dev, q);
blk_wb_init(q);
if (!q->request_fn)
return 0;
ret = elv_register_queue(q);
if (ret) {
kobject_uevent(&q->kobj, KOBJ_REMOVE);
kobject_del(&q->kobj);
blk_trace_remove_sysfs(dev);
kobject_put(&dev->kobj);
return ret;
if (q->request_fn || (q->mq_ops && q->elevator)) {
ret = elv_register_queue(q);
if (ret) {
kobject_uevent(&q->kobj, KOBJ_REMOVE);
kobject_del(&q->kobj);
blk_trace_remove_sysfs(dev);
kobject_put(&dev->kobj);
goto unlock;
}
}
return 0;
ret = 0;
unlock:
mutex_unlock(&q->sysfs_lock);
return ret;
}
void blk_unregister_queue(struct gendisk *disk)
@ -922,7 +944,7 @@ void blk_unregister_queue(struct gendisk *disk)
if (q->mq_ops)
blk_mq_unregister_dev(disk_to_dev(disk), q);
if (q->request_fn)
if (q->request_fn || (q->mq_ops && q->elevator))
elv_unregister_queue(q);
kobject_uevent(&q->kobj, KOBJ_REMOVE);

@ -272,6 +272,7 @@ void blk_queue_end_tag(struct request_queue *q, struct request *rq)
list_del_init(&rq->queuelist);
rq->rq_flags &= ~RQF_QUEUED;
rq->tag = -1;
rq->internal_tag = -1;
if (unlikely(bqt->tag_index[tag] == NULL))
printk(KERN_ERR "%s: tag %d is missing\n",

@ -866,10 +866,12 @@ static void tg_update_disptime(struct throtl_grp *tg)
unsigned long read_wait = -1, write_wait = -1, min_wait = -1, disptime;
struct bio *bio;
if ((bio = throtl_peek_queued(&sq->queued[READ])))
bio = throtl_peek_queued(&sq->queued[READ]);
if (bio)
tg_may_dispatch(tg, bio, &read_wait);
if ((bio = throtl_peek_queued(&sq->queued[WRITE])))
bio = throtl_peek_queued(&sq->queued[WRITE]);
if (bio)
tg_may_dispatch(tg, bio, &write_wait);
min_wait = min(read_wait, write_wait);

@ -96,7 +96,7 @@ static void wb_timestamp(struct rq_wb *rwb, unsigned long *var)
*/
static bool wb_recent_wait(struct rq_wb *rwb)
{
struct bdi_writeback *wb = &rwb->queue->backing_dev_info.wb;
struct bdi_writeback *wb = &rwb->queue->backing_dev_info->wb;
return time_before(jiffies, wb->dirty_sleep + HZ);
}
@ -279,7 +279,7 @@ enum {
static int __latency_exceeded(struct rq_wb *rwb, struct blk_rq_stat *stat)
{
struct backing_dev_info *bdi = &rwb->queue->backing_dev_info;
struct backing_dev_info *bdi = rwb->queue->backing_dev_info;
u64 thislat;
/*
@ -339,7 +339,7 @@ static int latency_exceeded(struct rq_wb *rwb)
static void rwb_trace_step(struct rq_wb *rwb, const char *msg)
{
struct backing_dev_info *bdi = &rwb->queue->backing_dev_info;
struct backing_dev_info *bdi = rwb->queue->backing_dev_info;
trace_wbt_step(bdi, msg, rwb->scale_step, rwb->cur_win_nsec,
rwb->wb_background, rwb->wb_normal, rwb->wb_max);
@ -423,7 +423,7 @@ static void wb_timer_fn(unsigned long data)
status = latency_exceeded(rwb);
trace_wbt_timer(&rwb->queue->backing_dev_info, status, rwb->scale_step,
trace_wbt_timer(rwb->queue->backing_dev_info, status, rwb->scale_step,
inflight);
/*

@ -14,6 +14,10 @@
/* Max future timer expiry for timeouts */
#define BLK_MAX_TIMEOUT (5 * HZ)
#ifdef CONFIG_DEBUG_FS
extern struct dentry *blk_debugfs_root;
#endif
struct blk_flush_queue {
unsigned int flush_queue_delayed:1;
unsigned int flush_pending_idx:1;
@ -96,6 +100,8 @@ bool bio_attempt_front_merge(struct request_queue *q, struct request *req,
struct bio *bio);
bool bio_attempt_back_merge(struct request_queue *q, struct request *req,
struct bio *bio);
bool bio_attempt_discard_merge(struct request_queue *q, struct request *req,
struct bio *bio);
bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
unsigned int *request_count,
struct request **same_queue_rq);
@ -167,7 +173,7 @@ static inline struct request *__elv_next_request(struct request_queue *q)
return NULL;
}
if (unlikely(blk_queue_bypass(q)) ||
!q->elevator->type->ops.elevator_dispatch_fn(q, 0))
!q->elevator->type->ops.sq.elevator_dispatch_fn(q, 0))
return NULL;
}
}
@ -176,16 +182,16 @@ static inline void elv_activate_rq(struct request_queue *q, struct request *rq)
{
struct elevator_queue *e = q->elevator;
if (e->type->ops.elevator_activate_req_fn)
e->type->ops.elevator_activate_req_fn(q, rq);
if (e->type->ops.sq.elevator_activate_req_fn)
e->type->ops.sq.elevator_activate_req_fn(q, rq);
}
static inline void elv_deactivate_rq(struct request_queue *q, struct request *rq)
{
struct elevator_queue *e = q->elevator;
if (e->type->ops.elevator_deactivate_req_fn)
e->type->ops.elevator_deactivate_req_fn(q, rq);
if (e->type->ops.sq.elevator_deactivate_req_fn)
e->type->ops.sq.elevator_deactivate_req_fn(q, rq);
}
#ifdef CONFIG_FAIL_IO_TIMEOUT
@ -204,14 +210,14 @@ int ll_back_merge_fn(struct request_queue *q, struct request *req,
struct bio *bio);
int ll_front_merge_fn(struct request_queue *q, struct request *req,
struct bio *bio);
int attempt_back_merge(struct request_queue *q, struct request *rq);
int attempt_front_merge(struct request_queue *q, struct request *rq);
struct request *attempt_back_merge(struct request_queue *q, struct request *rq);
struct request *attempt_front_merge(struct request_queue *q, struct request *rq);
int blk_attempt_req_merge(struct request_queue *q, struct request *rq,
struct request *next);
void blk_recalc_rq_segments(struct request *rq);
void blk_rq_set_mixed_merge(struct request *rq);
bool blk_rq_merge_ok(struct request *rq, struct bio *bio);
int blk_try_merge(struct request *rq, struct bio *bio);
enum elv_merge blk_try_merge(struct request *rq, struct bio *bio);
void blk_queue_congestion_threshold(struct request_queue *q);
@ -249,7 +255,14 @@ static inline int blk_do_io_stat(struct request *rq)
{
return rq->rq_disk &&
(rq->rq_flags & RQF_IO_STAT) &&
(rq->cmd_type == REQ_TYPE_FS);
!blk_rq_is_passthrough(rq);
}
static inline void req_set_nomerge(struct request_queue *q, struct request *req)
{
req->cmd_flags |= REQ_NOMERGE;
if (req == q->last_merge)
q->last_merge = NULL;
}
/*
@ -263,6 +276,22 @@ void ioc_clear_queue(struct request_queue *q);
int create_task_io_context(struct task_struct *task, gfp_t gfp_mask, int node);
/**
* rq_ioc - determine io_context for request allocation
* @bio: request being allocated is for this bio (can be %NULL)
*
* Determine io_context to use for request allocation for @bio. May return
* %NULL if %current->io_context doesn't exist.
*/
static inline struct io_context *rq_ioc(struct bio *bio)
{
#ifdef CONFIG_BLK_CGROUP
if (bio && bio->bi_ioc)
return bio->bi_ioc;
#endif
return current->io_context;
}
/**
* create_io_context - try to create task->io_context
* @gfp_mask: allocation mask

@ -71,22 +71,24 @@ void bsg_job_done(struct bsg_job *job, int result,
{
struct request *req = job->req;
struct request *rsp = req->next_rq;
struct scsi_request *rq = scsi_req(req);
int err;
err = job->req->errors = result;
if (err < 0)
/* we're only returning the result field in the reply */
job->req->sense_len = sizeof(u32);
rq->sense_len = sizeof(u32);
else
job->req->sense_len = job->reply_len;
rq->sense_len = job->reply_len;
/* we assume all request payload was transferred, residual == 0 */
req->resid_len = 0;
rq->resid_len = 0;
if (rsp) {
WARN_ON(reply_payload_rcv_len > rsp->resid_len);
WARN_ON(reply_payload_rcv_len > scsi_req(rsp)->resid_len);
/* set reply (bidi) residual */
rsp->resid_len -= min(reply_payload_rcv_len, rsp->resid_len);
scsi_req(rsp)->resid_len -=
min(reply_payload_rcv_len, scsi_req(rsp)->resid_len);
}
blk_complete_request(req);
}
@ -113,6 +115,7 @@ static int bsg_map_buffer(struct bsg_buffer *buf, struct request *req)
if (!buf->sg_list)
return -ENOMEM;
sg_init_table(buf->sg_list, req->nr_phys_segments);
scsi_req(req)->resid_len = blk_rq_bytes(req);
buf->sg_cnt = blk_rq_map_sg(req->q, req, buf->sg_list);
buf->payload_len = blk_rq_bytes(req);
return 0;
@ -127,6 +130,7 @@ static int bsg_create_job(struct device *dev, struct request *req)
{
struct request *rsp = req->next_rq;
struct request_queue *q = req->q;
struct scsi_request *rq = scsi_req(req);
struct bsg_job *job;
int ret;
@ -140,9 +144,9 @@ static int bsg_create_job(struct device *dev, struct request *req)
job->req = req;
if (q->bsg_job_size)
job->dd_data = (void *)&job[1];
job->request = req->cmd;
job->request_len = req->cmd_len;
job->reply = req->sense;
job->request = rq->cmd;
job->request_len = rq->cmd_len;
job->reply = rq->sense;
job->reply_len = SCSI_SENSE_BUFFERSIZE; /* Size of sense buffer
* allocated */
if (req->bio) {
@ -177,7 +181,7 @@ failjob_rls_job:
*
* Drivers/subsys should pass this to the queue init function.
*/
void bsg_request_fn(struct request_queue *q)
static void bsg_request_fn(struct request_queue *q)
__releases(q->queue_lock)
__acquires(q->queue_lock)
{
@ -214,24 +218,30 @@ void bsg_request_fn(struct request_queue *q)
put_device(dev);
spin_lock_irq(q->queue_lock);
}
EXPORT_SYMBOL_GPL(bsg_request_fn);
/**
* bsg_setup_queue - Create and add the bsg hooks so we can receive requests
* @dev: device to attach bsg device to
* @q: request queue setup by caller
* @name: device to give bsg device
* @job_fn: bsg job handler
* @dd_job_size: size of LLD data needed for each job
*
* The caller should have set up the request queue with bsg_request_fn
* as the request_fn.
*/
int bsg_setup_queue(struct device *dev, struct request_queue *q,
char *name, bsg_job_fn *job_fn, int dd_job_size)
struct request_queue *bsg_setup_queue(struct device *dev, char *name,
bsg_job_fn *job_fn, int dd_job_size)
{
struct request_queue *q;
int ret;
q = blk_alloc_queue(GFP_KERNEL);
if (!q)
return ERR_PTR(-ENOMEM);
q->cmd_size = sizeof(struct scsi_request);
q->request_fn = bsg_request_fn;
ret = blk_init_allocated_queue(q);
if (ret)
goto out_cleanup_queue;
q->queuedata = dev;
q->bsg_job_size = dd_job_size;
q->bsg_job_fn = job_fn;
@ -243,9 +253,12 @@ int bsg_setup_queue(struct device *dev, struct request_queue *q,
if (ret) {
printk(KERN_ERR "%s: bsg interface failed to "
"initialize - register queue\n", dev->kobj.name);
return ret;
goto out_cleanup_queue;
}
return 0;
return q;
out_cleanup_queue:
blk_cleanup_queue(q);
return ERR_PTR(ret);
}
EXPORT_SYMBOL_GPL(bsg_setup_queue);
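
An aside (not part of the diff): with the new signature the LLD no longer allocates and initializes a request_queue itself; bsg-lib does that, including setting cmd_size for the embedded scsi_request, and hands the queue back. A hedged sketch of a caller, with hypothetical foo_* names (the SCSI transport classes are the in-tree users converted by this series):

static int foo_bsg_dispatch(struct bsg_job *job);	/* hypothetical LLD handler */

static int foo_attach_bsg(struct device *dev, const char *prefix)
{
	struct request_queue *q;
	char name[32];

	snprintf(name, sizeof(name), "%s_bsg", prefix);
	q = bsg_setup_queue(dev, name, foo_bsg_dispatch, 0);
	if (IS_ERR(q))
		return PTR_ERR(q);

	blk_queue_rq_timeout(q, 30 * HZ);
	return 0;
}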

@ -85,7 +85,6 @@ struct bsg_command {
struct bio *bidi_bio;
int err;
struct sg_io_v4 hdr;
char sense[SCSI_SENSE_BUFFERSIZE];
};
static void bsg_free_command(struct bsg_command *bc)
@ -140,18 +139,20 @@ static int blk_fill_sgv4_hdr_rq(struct request_queue *q, struct request *rq,
struct sg_io_v4 *hdr, struct bsg_device *bd,
fmode_t has_write_perm)
{
struct scsi_request *req = scsi_req(rq);
if (hdr->request_len > BLK_MAX_CDB) {
rq->cmd = kzalloc(hdr->request_len, GFP_KERNEL);
if (!rq->cmd)
req->cmd = kzalloc(hdr->request_len, GFP_KERNEL);
if (!req->cmd)
return -ENOMEM;
}
if (copy_from_user(rq->cmd, (void __user *)(unsigned long)hdr->request,
if (copy_from_user(req->cmd, (void __user *)(unsigned long)hdr->request,
hdr->request_len))
return -EFAULT;
if (hdr->subprotocol == BSG_SUB_PROTOCOL_SCSI_CMD) {
if (blk_verify_command(rq->cmd, has_write_perm))
if (blk_verify_command(req->cmd, has_write_perm))
return -EPERM;
} else if (!capable(CAP_SYS_RAWIO))
return -EPERM;
@ -159,7 +160,7 @@ static int blk_fill_sgv4_hdr_rq(struct request_queue *q, struct request *rq,
/*
* fill in request structure
*/
rq->cmd_len = hdr->request_len;
req->cmd_len = hdr->request_len;
rq->timeout = msecs_to_jiffies(hdr->timeout);
if (!rq->timeout)
@ -176,7 +177,7 @@ static int blk_fill_sgv4_hdr_rq(struct request_queue *q, struct request *rq,
* Check if sg_io_v4 from user is allowed and valid
*/
static int
bsg_validate_sgv4_hdr(struct sg_io_v4 *hdr, int *rw)
bsg_validate_sgv4_hdr(struct sg_io_v4 *hdr, int *op)
{
int ret = 0;
@ -197,7 +198,7 @@ bsg_validate_sgv4_hdr(struct sg_io_v4 *hdr, int *rw)
ret = -EINVAL;
}
*rw = hdr->dout_xfer_len ? WRITE : READ;
*op = hdr->dout_xfer_len ? REQ_OP_SCSI_OUT : REQ_OP_SCSI_IN;
return ret;
}
@ -205,13 +206,12 @@ bsg_validate_sgv4_hdr(struct sg_io_v4 *hdr, int *rw)
* map sg_io_v4 to a request.
*/
static struct request *
bsg_map_hdr(struct bsg_device *bd, struct sg_io_v4 *hdr, fmode_t has_write_perm,
u8 *sense)
bsg_map_hdr(struct bsg_device *bd, struct sg_io_v4 *hdr, fmode_t has_write_perm)
{
struct request_queue *q = bd->queue;
struct request *rq, *next_rq = NULL;
int ret, rw;
unsigned int dxfer_len;
int ret;
unsigned int op, dxfer_len;
void __user *dxferp = NULL;
struct bsg_class_device *bcd = &q->bsg_dev;
@ -226,36 +226,35 @@ bsg_map_hdr(struct bsg_device *bd, struct sg_io_v4 *hdr, fmode_t has_write_perm,
hdr->dout_xfer_len, (unsigned long long) hdr->din_xferp,
hdr->din_xfer_len);
ret = bsg_validate_sgv4_hdr(hdr, &rw);
ret = bsg_validate_sgv4_hdr(hdr, &op);
if (ret)
return ERR_PTR(ret);
/*
* map scatter-gather elements separately and string them to request
*/
rq = blk_get_request(q, rw, GFP_KERNEL);
rq = blk_get_request(q, op, GFP_KERNEL);
if (IS_ERR(rq))
return rq;
blk_rq_set_block_pc(rq);
scsi_req_init(rq);
ret = blk_fill_sgv4_hdr_rq(q, rq, hdr, bd, has_write_perm);
if (ret)
goto out;
if (rw == WRITE && hdr->din_xfer_len) {
if (op == REQ_OP_SCSI_OUT && hdr->din_xfer_len) {
if (!test_bit(QUEUE_FLAG_BIDI, &q->queue_flags)) {
ret = -EOPNOTSUPP;
goto out;
}
next_rq = blk_get_request(q, READ, GFP_KERNEL);
next_rq = blk_get_request(q, REQ_OP_SCSI_IN, GFP_KERNEL);
if (IS_ERR(next_rq)) {
ret = PTR_ERR(next_rq);
next_rq = NULL;
goto out;
}
rq->next_rq = next_rq;
next_rq->cmd_type = rq->cmd_type;
dxferp = (void __user *)(unsigned long)hdr->din_xferp;
ret = blk_rq_map_user(q, next_rq, NULL, dxferp,
@ -280,13 +279,9 @@ bsg_map_hdr(struct bsg_device *bd, struct sg_io_v4 *hdr, fmode_t has_write_perm,
goto out;
}
rq->sense = sense;
rq->sense_len = 0;
return rq;
out:
if (rq->cmd != rq->__cmd)
kfree(rq->cmd);
scsi_req_free_cmd(scsi_req(rq));
blk_put_request(rq);
if (next_rq) {
blk_rq_unmap_user(next_rq->bio);
@ -393,6 +388,7 @@ static struct bsg_command *bsg_get_done_cmd(struct bsg_device *bd)
static int blk_complete_sgv4_hdr_rq(struct request *rq, struct sg_io_v4 *hdr,
struct bio *bio, struct bio *bidi_bio)
{
struct scsi_request *req = scsi_req(rq);
int ret = 0;
dprintk("rq %p bio %p 0x%x\n", rq, bio, rq->errors);
@ -407,12 +403,12 @@ static int blk_complete_sgv4_hdr_rq(struct request *rq, struct sg_io_v4 *hdr,
hdr->info |= SG_INFO_CHECK;
hdr->response_len = 0;
if (rq->sense_len && hdr->response) {
if (req->sense_len && hdr->response) {
int len = min_t(unsigned int, hdr->max_response_len,
rq->sense_len);
req->sense_len);
ret = copy_to_user((void __user *)(unsigned long)hdr->response,
rq->sense, len);
req->sense, len);
if (!ret)
hdr->response_len = len;
else
@ -420,14 +416,14 @@ static int blk_complete_sgv4_hdr_rq(struct request *rq, struct sg_io_v4 *hdr,
}
if (rq->next_rq) {
hdr->dout_resid = rq->resid_len;
hdr->din_resid = rq->next_rq->resid_len;
hdr->dout_resid = req->resid_len;
hdr->din_resid = scsi_req(rq->next_rq)->resid_len;
blk_rq_unmap_user(bidi_bio);
blk_put_request(rq->next_rq);
} else if (rq_data_dir(rq) == READ)
hdr->din_resid = rq->resid_len;
hdr->din_resid = req->resid_len;
else
hdr->dout_resid = rq->resid_len;
hdr->dout_resid = req->resid_len;
/*
* If the request generated a negative error number, return it
@ -439,8 +435,7 @@ static int blk_complete_sgv4_hdr_rq(struct request *rq, struct sg_io_v4 *hdr,
ret = rq->errors;
blk_rq_unmap_user(bio);
if (rq->cmd != rq->__cmd)
kfree(rq->cmd);
scsi_req_free_cmd(req);
blk_put_request(rq);
return ret;
@ -625,7 +620,7 @@ static int __bsg_write(struct bsg_device *bd, const char __user *buf,
/*
* get a request, fill in the blanks, and add to request queue
*/
rq = bsg_map_hdr(bd, &bc->hdr, has_write_perm, bc->sense);
rq = bsg_map_hdr(bd, &bc->hdr, has_write_perm);
if (IS_ERR(rq)) {
ret = PTR_ERR(rq);
rq = NULL;
@ -911,12 +906,11 @@ static long bsg_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
struct bio *bio, *bidi_bio = NULL;
struct sg_io_v4 hdr;
int at_head;
u8 sense[SCSI_SENSE_BUFFERSIZE];
if (copy_from_user(&hdr, uarg, sizeof(hdr)))
return -EFAULT;
rq = bsg_map_hdr(bd, &hdr, file->f_mode & FMODE_WRITE, sense);
rq = bsg_map_hdr(bd, &hdr, file->f_mode & FMODE_WRITE);
if (IS_ERR(rq))
return PTR_ERR(rq);


@ -2528,7 +2528,7 @@ static void cfq_remove_request(struct request *rq)
}
}
static int cfq_merge(struct request_queue *q, struct request **req,
static enum elv_merge cfq_merge(struct request_queue *q, struct request **req,
struct bio *bio)
{
struct cfq_data *cfqd = q->elevator->elevator_data;
@ -2544,7 +2544,7 @@ static int cfq_merge(struct request_queue *q, struct request **req,
}
static void cfq_merged_request(struct request_queue *q, struct request *req,
int type)
enum elv_merge type)
{
if (type == ELEVATOR_FRONT_MERGE) {
struct cfq_queue *cfqq = RQ_CFQQ(req);
@ -2749,9 +2749,11 @@ static struct cfq_queue *cfq_get_next_queue_forced(struct cfq_data *cfqd)
if (!cfqg)
return NULL;
for_each_cfqg_st(cfqg, i, j, st)
if ((cfqq = cfq_rb_first(st)) != NULL)
for_each_cfqg_st(cfqg, i, j, st) {
cfqq = cfq_rb_first(st);
if (cfqq)
return cfqq;
}
return NULL;
}
@ -3860,6 +3862,8 @@ cfq_get_queue(struct cfq_data *cfqd, bool is_sync, struct cfq_io_cq *cic,
goto out;
}
/* cfq_init_cfqq() assumes cfqq->ioprio_class is initialized. */
cfqq->ioprio_class = IOPRIO_CLASS_NONE;
cfq_init_cfqq(cfqd, cfqq, current->pid, is_sync);
cfq_init_prio_data(cfqq, cic);
cfq_link_cfqq_cfqg(cfqq, cfqg);
@ -4838,7 +4842,7 @@ static struct elv_fs_entry cfq_attrs[] = {
};
static struct elevator_type iosched_cfq = {
.ops = {
.ops.sq = {
.elevator_merge_fn = cfq_merge,
.elevator_merged_fn = cfq_merged_request,
.elevator_merge_req_fn = cfq_merged_requests,


@ -661,7 +661,6 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg)
struct block_device *bdev = inode->i_bdev;
struct gendisk *disk = bdev->bd_disk;
fmode_t mode = file->f_mode;
struct backing_dev_info *bdi;
loff_t size;
unsigned int max_sectors;
@ -708,9 +707,8 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg)
case BLKFRAGET:
if (!arg)
return -EINVAL;
bdi = blk_get_backing_dev_info(bdev);
return compat_put_long(arg,
(bdi->ra_pages * PAGE_SIZE) / 512);
(bdev->bd_bdi->ra_pages * PAGE_SIZE) / 512);
case BLKROGET: /* compatible */
return compat_put_int(arg, bdev_read_only(bdev) != 0);
case BLKBSZGET_32: /* get the logical block size (cf. BLKSSZGET) */
@ -728,8 +726,7 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg)
case BLKFRASET:
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
bdi = blk_get_backing_dev_info(bdev);
bdi->ra_pages = (arg * 512) / PAGE_SIZE;
bdev->bd_bdi->ra_pages = (arg * 512) / PAGE_SIZE;
return 0;
case BLKGETSIZE:
size = i_size_read(bdev->bd_inode);


@ -120,12 +120,11 @@ static void deadline_remove_request(struct request_queue *q, struct request *rq)
deadline_del_rq_rb(dd, rq);
}
static int
static enum elv_merge
deadline_merge(struct request_queue *q, struct request **req, struct bio *bio)
{
struct deadline_data *dd = q->elevator->elevator_data;
struct request *__rq;
int ret;
/*
* check for front merge
@ -138,20 +137,17 @@ deadline_merge(struct request_queue *q, struct request **req, struct bio *bio)
BUG_ON(sector != blk_rq_pos(__rq));
if (elv_bio_merge_ok(__rq, bio)) {
ret = ELEVATOR_FRONT_MERGE;
goto out;
*req = __rq;
return ELEVATOR_FRONT_MERGE;
}
}
}
return ELEVATOR_NO_MERGE;
out:
*req = __rq;
return ret;
}
static void deadline_merged_request(struct request_queue *q,
struct request *req, int type)
struct request *req, enum elv_merge type)
{
struct deadline_data *dd = q->elevator->elevator_data;
@ -439,7 +435,7 @@ static struct elv_fs_entry deadline_attrs[] = {
};
static struct elevator_type iosched_deadline = {
.ops = {
.ops.sq = {
.elevator_merge_fn = deadline_merge,
.elevator_merged_fn = deadline_merged_request,
.elevator_merge_req_fn = deadline_merged_requests,


@ -40,6 +40,7 @@
#include <trace/events/block.h>
#include "blk.h"
#include "blk-mq-sched.h"
static DEFINE_SPINLOCK(elv_list_lock);
static LIST_HEAD(elv_list);
@ -58,8 +59,10 @@ static int elv_iosched_allow_bio_merge(struct request *rq, struct bio *bio)
struct request_queue *q = rq->q;
struct elevator_queue *e = q->elevator;
if (e->type->ops.elevator_allow_bio_merge_fn)
return e->type->ops.elevator_allow_bio_merge_fn(q, rq, bio);
if (e->uses_mq && e->type->ops.mq.allow_merge)
return e->type->ops.mq.allow_merge(q, rq, bio);
else if (!e->uses_mq && e->type->ops.sq.elevator_allow_bio_merge_fn)
return e->type->ops.sq.elevator_allow_bio_merge_fn(q, rq, bio);
return 1;
}
@ -163,6 +166,7 @@ struct elevator_queue *elevator_alloc(struct request_queue *q,
kobject_init(&eq->kobj, &elv_ktype);
mutex_init(&eq->sysfs_lock);
hash_init(eq->hash);
eq->uses_mq = e->uses_mq;
return eq;
}
@ -203,11 +207,12 @@ int elevator_init(struct request_queue *q, char *name)
}
/*
* Use the default elevator specified by config boot param or
* config option. Don't try to load modules as we could be running
* off async and request_module() isn't allowed from async.
* Use the default elevator specified by config boot param for
* non-mq devices, or by config option. Don't try to load modules
* as we could be running off async and request_module() isn't
* allowed from async.
*/
if (!e && *chosen_elevator) {
if (!e && !q->mq_ops && *chosen_elevator) {
e = elevator_get(chosen_elevator, false);
if (!e)
printk(KERN_ERR "I/O scheduler %s not found\n",
@ -215,18 +220,32 @@ int elevator_init(struct request_queue *q, char *name)
}
if (!e) {
e = elevator_get(CONFIG_DEFAULT_IOSCHED, false);
if (q->mq_ops && q->nr_hw_queues == 1)
e = elevator_get(CONFIG_DEFAULT_SQ_IOSCHED, false);
else if (q->mq_ops)
e = elevator_get(CONFIG_DEFAULT_MQ_IOSCHED, false);
else
e = elevator_get(CONFIG_DEFAULT_IOSCHED, false);
if (!e) {
printk(KERN_ERR
"Default I/O scheduler not found. " \
"Using noop.\n");
"Using noop/none.\n");
e = elevator_get("noop", false);
}
}
err = e->ops.elevator_init_fn(q, e);
if (err)
if (e->uses_mq) {
err = blk_mq_sched_setup(q);
if (!err)
err = e->ops.mq.init_sched(q, e);
} else
err = e->ops.sq.elevator_init_fn(q, e);
if (err) {
if (e->uses_mq)
blk_mq_sched_teardown(q);
elevator_put(e);
}
return err;
}
EXPORT_SYMBOL(elevator_init);
@ -234,8 +253,10 @@ EXPORT_SYMBOL(elevator_init);
void elevator_exit(struct elevator_queue *e)
{
mutex_lock(&e->sysfs_lock);
if (e->type->ops.elevator_exit_fn)
e->type->ops.elevator_exit_fn(e);
if (e->uses_mq && e->type->ops.mq.exit_sched)
e->type->ops.mq.exit_sched(e);
else if (!e->uses_mq && e->type->ops.sq.elevator_exit_fn)
e->type->ops.sq.elevator_exit_fn(e);
mutex_unlock(&e->sysfs_lock);
kobject_put(&e->kobj);
@ -253,6 +274,7 @@ void elv_rqhash_del(struct request_queue *q, struct request *rq)
if (ELV_ON_HASH(rq))
__elv_rqhash_del(rq);
}
EXPORT_SYMBOL_GPL(elv_rqhash_del);
void elv_rqhash_add(struct request_queue *q, struct request *rq)
{
@ -262,6 +284,7 @@ void elv_rqhash_add(struct request_queue *q, struct request *rq)
hash_add(e->hash, &rq->hash, rq_hash_key(rq));
rq->rq_flags |= RQF_HASHED;
}
EXPORT_SYMBOL_GPL(elv_rqhash_add);
void elv_rqhash_reposition(struct request_queue *q, struct request *rq)
{
@ -405,11 +428,11 @@ void elv_dispatch_add_tail(struct request_queue *q, struct request *rq)
}
EXPORT_SYMBOL(elv_dispatch_add_tail);
int elv_merge(struct request_queue *q, struct request **req, struct bio *bio)
enum elv_merge elv_merge(struct request_queue *q, struct request **req,
struct bio *bio)
{
struct elevator_queue *e = q->elevator;
struct request *__rq;
int ret;
/*
* Levels of merges:
@ -424,7 +447,8 @@ int elv_merge(struct request_queue *q, struct request **req, struct bio *bio)
* First try one-hit cache.
*/
if (q->last_merge && elv_bio_merge_ok(q->last_merge, bio)) {
ret = blk_try_merge(q->last_merge, bio);
enum elv_merge ret = blk_try_merge(q->last_merge, bio);
if (ret != ELEVATOR_NO_MERGE) {
*req = q->last_merge;
return ret;
@ -443,8 +467,10 @@ int elv_merge(struct request_queue *q, struct request **req, struct bio *bio)
return ELEVATOR_BACK_MERGE;
}
if (e->type->ops.elevator_merge_fn)
return e->type->ops.elevator_merge_fn(q, req, bio);
if (e->uses_mq && e->type->ops.mq.request_merge)
return e->type->ops.mq.request_merge(q, req, bio);
else if (!e->uses_mq && e->type->ops.sq.elevator_merge_fn)
return e->type->ops.sq.elevator_merge_fn(q, req, bio);
return ELEVATOR_NO_MERGE;
}
@ -456,8 +482,7 @@ int elv_merge(struct request_queue *q, struct request **req, struct bio *bio)
*
* Returns true if we merged, false otherwise
*/
static bool elv_attempt_insert_merge(struct request_queue *q,
struct request *rq)
bool elv_attempt_insert_merge(struct request_queue *q, struct request *rq)
{
struct request *__rq;
bool ret;
@ -491,12 +516,15 @@ static bool elv_attempt_insert_merge(struct request_queue *q,
return ret;
}
void elv_merged_request(struct request_queue *q, struct request *rq, int type)
void elv_merged_request(struct request_queue *q, struct request *rq,
enum elv_merge type)
{
struct elevator_queue *e = q->elevator;
if (e->type->ops.elevator_merged_fn)
e->type->ops.elevator_merged_fn(q, rq, type);
if (e->uses_mq && e->type->ops.mq.request_merged)
e->type->ops.mq.request_merged(q, rq, type);
else if (!e->uses_mq && e->type->ops.sq.elevator_merged_fn)
e->type->ops.sq.elevator_merged_fn(q, rq, type);
if (type == ELEVATOR_BACK_MERGE)
elv_rqhash_reposition(q, rq);
@ -508,10 +536,15 @@ void elv_merge_requests(struct request_queue *q, struct request *rq,
struct request *next)
{
struct elevator_queue *e = q->elevator;
const int next_sorted = next->rq_flags & RQF_SORTED;
bool next_sorted = false;
if (next_sorted && e->type->ops.elevator_merge_req_fn)
e->type->ops.elevator_merge_req_fn(q, rq, next);
if (e->uses_mq && e->type->ops.mq.requests_merged)
e->type->ops.mq.requests_merged(q, rq, next);
else if (e->type->ops.sq.elevator_merge_req_fn) {
next_sorted = (__force bool)(next->rq_flags & RQF_SORTED);
if (next_sorted)
e->type->ops.sq.elevator_merge_req_fn(q, rq, next);
}
elv_rqhash_reposition(q, rq);
@ -528,8 +561,11 @@ void elv_bio_merged(struct request_queue *q, struct request *rq,
{
struct elevator_queue *e = q->elevator;
if (e->type->ops.elevator_bio_merged_fn)
e->type->ops.elevator_bio_merged_fn(q, rq, bio);
if (WARN_ON_ONCE(e->uses_mq))
return;
if (e->type->ops.sq.elevator_bio_merged_fn)
e->type->ops.sq.elevator_bio_merged_fn(q, rq, bio);
}
#ifdef CONFIG_PM
@ -574,11 +610,15 @@ void elv_requeue_request(struct request_queue *q, struct request *rq)
void elv_drain_elevator(struct request_queue *q)
{
struct elevator_queue *e = q->elevator;
static int printed;
if (WARN_ON_ONCE(e->uses_mq))
return;
lockdep_assert_held(q->queue_lock);
while (q->elevator->type->ops.elevator_dispatch_fn(q, 1))
while (e->type->ops.sq.elevator_dispatch_fn(q, 1))
;
if (q->nr_sorted && printed++ < 10) {
printk(KERN_ERR "%s: forced dispatching is broken "
@ -597,7 +637,7 @@ void __elv_add_request(struct request_queue *q, struct request *rq, int where)
if (rq->rq_flags & RQF_SOFTBARRIER) {
/* barriers are scheduling boundary, update end_sector */
if (rq->cmd_type == REQ_TYPE_FS) {
if (!blk_rq_is_passthrough(rq)) {
q->end_sector = rq_end_sector(rq);
q->boundary_rq = rq;
}
@ -639,7 +679,7 @@ void __elv_add_request(struct request_queue *q, struct request *rq, int where)
if (elv_attempt_insert_merge(q, rq))
break;
case ELEVATOR_INSERT_SORT:
BUG_ON(rq->cmd_type != REQ_TYPE_FS);
BUG_ON(blk_rq_is_passthrough(rq));
rq->rq_flags |= RQF_SORTED;
q->nr_sorted++;
if (rq_mergeable(rq)) {
@ -653,7 +693,7 @@ void __elv_add_request(struct request_queue *q, struct request *rq, int where)
* rq cannot be accessed after calling
* elevator_add_req_fn.
*/
q->elevator->type->ops.elevator_add_req_fn(q, rq);
q->elevator->type->ops.sq.elevator_add_req_fn(q, rq);
break;
case ELEVATOR_INSERT_FLUSH:
@ -682,8 +722,11 @@ struct request *elv_latter_request(struct request_queue *q, struct request *rq)
{
struct elevator_queue *e = q->elevator;
if (e->type->ops.elevator_latter_req_fn)
return e->type->ops.elevator_latter_req_fn(q, rq);
if (e->uses_mq && e->type->ops.mq.next_request)
return e->type->ops.mq.next_request(q, rq);
else if (!e->uses_mq && e->type->ops.sq.elevator_latter_req_fn)
return e->type->ops.sq.elevator_latter_req_fn(q, rq);
return NULL;
}
@ -691,8 +734,10 @@ struct request *elv_former_request(struct request_queue *q, struct request *rq)
{
struct elevator_queue *e = q->elevator;
if (e->type->ops.elevator_former_req_fn)
return e->type->ops.elevator_former_req_fn(q, rq);
if (e->uses_mq && e->type->ops.mq.former_request)
return e->type->ops.mq.former_request(q, rq);
if (!e->uses_mq && e->type->ops.sq.elevator_former_req_fn)
return e->type->ops.sq.elevator_former_req_fn(q, rq);
return NULL;
}
@ -701,8 +746,11 @@ int elv_set_request(struct request_queue *q, struct request *rq,
{
struct elevator_queue *e = q->elevator;
if (e->type->ops.elevator_set_req_fn)
return e->type->ops.elevator_set_req_fn(q, rq, bio, gfp_mask);
if (WARN_ON_ONCE(e->uses_mq))
return 0;
if (e->type->ops.sq.elevator_set_req_fn)
return e->type->ops.sq.elevator_set_req_fn(q, rq, bio, gfp_mask);
return 0;
}
@ -710,16 +758,22 @@ void elv_put_request(struct request_queue *q, struct request *rq)
{
struct elevator_queue *e = q->elevator;
if (e->type->ops.elevator_put_req_fn)
e->type->ops.elevator_put_req_fn(rq);
if (WARN_ON_ONCE(e->uses_mq))
return;
if (e->type->ops.sq.elevator_put_req_fn)
e->type->ops.sq.elevator_put_req_fn(rq);
}
int elv_may_queue(struct request_queue *q, unsigned int op)
{
struct elevator_queue *e = q->elevator;
if (e->type->ops.elevator_may_queue_fn)
return e->type->ops.elevator_may_queue_fn(q, op);
if (WARN_ON_ONCE(e->uses_mq))
return 0;
if (e->type->ops.sq.elevator_may_queue_fn)
return e->type->ops.sq.elevator_may_queue_fn(q, op);
return ELV_MQUEUE_MAY;
}
@ -728,14 +782,17 @@ void elv_completed_request(struct request_queue *q, struct request *rq)
{
struct elevator_queue *e = q->elevator;
if (WARN_ON_ONCE(e->uses_mq))
return;
/*
* request is released from the driver, io must be done
*/
if (blk_account_rq(rq)) {
q->in_flight[rq_is_sync(rq)]--;
if ((rq->rq_flags & RQF_SORTED) &&
e->type->ops.elevator_completed_req_fn)
e->type->ops.elevator_completed_req_fn(q, rq);
e->type->ops.sq.elevator_completed_req_fn)
e->type->ops.sq.elevator_completed_req_fn(q, rq);
}
}
@ -803,8 +860,8 @@ int elv_register_queue(struct request_queue *q)
}
kobject_uevent(&e->kobj, KOBJ_ADD);
e->registered = 1;
if (e->type->ops.elevator_registered_fn)
e->type->ops.elevator_registered_fn(q);
if (!e->uses_mq && e->type->ops.sq.elevator_registered_fn)
e->type->ops.sq.elevator_registered_fn(q);
}
return error;
}
@ -891,9 +948,14 @@ EXPORT_SYMBOL_GPL(elv_unregister);
static int elevator_switch(struct request_queue *q, struct elevator_type *new_e)
{
struct elevator_queue *old = q->elevator;
bool registered = old->registered;
bool old_registered = false;
int err;
if (q->mq_ops) {
blk_mq_freeze_queue(q);
blk_mq_quiesce_queue(q);
}
/*
* Turn on BYPASS and drain all requests w/ elevator private data.
* Block layer doesn't call into a quiesced elevator - all requests
@ -901,42 +963,76 @@ static int elevator_switch(struct request_queue *q, struct elevator_type *new_e)
* using INSERT_BACK. All requests have SOFTBARRIER set and no
* merge happens either.
*/
blk_queue_bypass_start(q);
if (old) {
old_registered = old->registered;
/* unregister and clear all auxiliary data of the old elevator */
if (registered)
elv_unregister_queue(q);
if (old->uses_mq)
blk_mq_sched_teardown(q);
spin_lock_irq(q->queue_lock);
ioc_clear_queue(q);
spin_unlock_irq(q->queue_lock);
if (!q->mq_ops)
blk_queue_bypass_start(q);
/* unregister and clear all auxiliary data of the old elevator */
if (old_registered)
elv_unregister_queue(q);
spin_lock_irq(q->queue_lock);
ioc_clear_queue(q);
spin_unlock_irq(q->queue_lock);
}
/* allocate, init and register new elevator */
err = new_e->ops.elevator_init_fn(q, new_e);
if (err)
goto fail_init;
if (new_e) {
if (new_e->uses_mq) {
err = blk_mq_sched_setup(q);
if (!err)
err = new_e->ops.mq.init_sched(q, new_e);
} else
err = new_e->ops.sq.elevator_init_fn(q, new_e);
if (err)
goto fail_init;
if (registered) {
err = elv_register_queue(q);
if (err)
goto fail_register;
}
} else
q->elevator = NULL;
/* done, kill the old one and finish */
elevator_exit(old);
blk_queue_bypass_end(q);
if (old) {
elevator_exit(old);
if (!q->mq_ops)
blk_queue_bypass_end(q);
}
blk_add_trace_msg(q, "elv switch: %s", new_e->elevator_name);
if (q->mq_ops) {
blk_mq_unfreeze_queue(q);
blk_mq_start_stopped_hw_queues(q, true);
}
if (new_e)
blk_add_trace_msg(q, "elv switch: %s", new_e->elevator_name);
else
blk_add_trace_msg(q, "elv switch: none");
return 0;
fail_register:
if (q->mq_ops)
blk_mq_sched_teardown(q);
elevator_exit(q->elevator);
fail_init:
/* switch failed, restore and re-register old elevator */
q->elevator = old;
elv_register_queue(q);
blk_queue_bypass_end(q);
if (old) {
q->elevator = old;
elv_register_queue(q);
if (!q->mq_ops)
blk_queue_bypass_end(q);
}
if (q->mq_ops) {
blk_mq_unfreeze_queue(q);
blk_mq_start_stopped_hw_queues(q, true);
}
return err;
}
@ -949,8 +1045,11 @@ static int __elevator_change(struct request_queue *q, const char *name)
char elevator_name[ELV_NAME_MAX];
struct elevator_type *e;
if (!q->elevator)
return -ENXIO;
/*
* Special case for mq, turn off scheduling
*/
if (q->mq_ops && !strncmp(name, "none", 4))
return elevator_switch(q, NULL);
strlcpy(elevator_name, name, sizeof(elevator_name));
e = elevator_get(strstrip(elevator_name), true);
@ -959,11 +1058,21 @@ static int __elevator_change(struct request_queue *q, const char *name)
return -EINVAL;
}
if (!strcmp(elevator_name, q->elevator->type->elevator_name)) {
if (q->elevator &&
!strcmp(elevator_name, q->elevator->type->elevator_name)) {
elevator_put(e);
return 0;
}
if (!e->uses_mq && q->mq_ops) {
elevator_put(e);
return -EINVAL;
}
if (e->uses_mq && !q->mq_ops) {
elevator_put(e);
return -EINVAL;
}
return elevator_switch(q, e);
}
@ -985,7 +1094,7 @@ ssize_t elv_iosched_store(struct request_queue *q, const char *name,
{
int ret;
if (!q->elevator)
if (!(q->mq_ops || q->request_fn))
return count;
ret = __elevator_change(q, name);
@ -999,24 +1108,34 @@ ssize_t elv_iosched_store(struct request_queue *q, const char *name,
ssize_t elv_iosched_show(struct request_queue *q, char *name)
{
struct elevator_queue *e = q->elevator;
struct elevator_type *elv;
struct elevator_type *elv = NULL;
struct elevator_type *__e;
int len = 0;
if (!q->elevator || !blk_queue_stackable(q))
if (!blk_queue_stackable(q))
return sprintf(name, "none\n");
elv = e->type;
if (!q->elevator)
len += sprintf(name+len, "[none] ");
else
elv = e->type;
spin_lock(&elv_list_lock);
list_for_each_entry(__e, &elv_list, list) {
if (!strcmp(elv->elevator_name, __e->elevator_name))
if (elv && !strcmp(elv->elevator_name, __e->elevator_name)) {
len += sprintf(name+len, "[%s] ", elv->elevator_name);
else
continue;
}
if (__e->uses_mq && q->mq_ops)
len += sprintf(name+len, "%s ", __e->elevator_name);
else if (!__e->uses_mq && !q->mq_ops)
len += sprintf(name+len, "%s ", __e->elevator_name);
}
spin_unlock(&elv_list_lock);
if (q->mq_ops && q->elevator)
len += sprintf(name+len, "none");
len += sprintf(len+name, "\n");
return len;
}


@ -572,6 +572,20 @@ exit:
disk_part_iter_exit(&piter);
}
void put_disk_devt(struct disk_devt *disk_devt)
{
if (disk_devt && atomic_dec_and_test(&disk_devt->count))
disk_devt->release(disk_devt);
}
EXPORT_SYMBOL(put_disk_devt);
void get_disk_devt(struct disk_devt *disk_devt)
{
if (disk_devt)
atomic_inc(&disk_devt->count);
}
EXPORT_SYMBOL(get_disk_devt);
/**
* device_add_disk - add partitioning information to kernel list
* @parent: parent device for the disk
@ -612,8 +626,15 @@ void device_add_disk(struct device *parent, struct gendisk *disk)
disk_alloc_events(disk);
/*
* Take a reference on the devt and assign it to queue since it
* must not be reallocated while the bdi is registered
*/
disk->queue->disk_devt = disk->disk_devt;
get_disk_devt(disk->disk_devt);
/* Register BDI before referencing it from bdev */
bdi = &disk->queue->backing_dev_info;
bdi = disk->queue->backing_dev_info;
bdi_register_owner(bdi, disk_to_dev(disk));
blk_register_region(disk_devt(disk), disk->minors, NULL,
@ -648,6 +669,8 @@ void del_gendisk(struct gendisk *disk)
disk_part_iter_init(&piter, disk,
DISK_PITER_INCL_EMPTY | DISK_PITER_REVERSE);
while ((part = disk_part_iter_next(&piter))) {
bdev_unhash_inode(MKDEV(disk->major,
disk->first_minor + part->partno));
invalidate_partition(disk, part->partno);
delete_partition(disk, part->partno);
}


@ -505,7 +505,6 @@ static int blkdev_bszset(struct block_device *bdev, fmode_t mode,
int blkdev_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
unsigned long arg)
{
struct backing_dev_info *bdi;
void __user *argp = (void __user *)arg;
loff_t size;
unsigned int max_sectors;
@ -532,8 +531,7 @@ int blkdev_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
case BLKFRAGET:
if (!arg)
return -EINVAL;
bdi = blk_get_backing_dev_info(bdev);
return put_long(arg, (bdi->ra_pages * PAGE_SIZE) / 512);
return put_long(arg, (bdev->bd_bdi->ra_pages*PAGE_SIZE) / 512);
case BLKROGET:
return put_int(arg, bdev_read_only(bdev) != 0);
case BLKBSZGET: /* get block device soft block size (cf. BLKSSZGET) */
@ -560,8 +558,7 @@ int blkdev_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
case BLKFRASET:
if(!capable(CAP_SYS_ADMIN))
return -EACCES;
bdi = blk_get_backing_dev_info(bdev);
bdi->ra_pages = (arg * 512) / PAGE_SIZE;
bdev->bd_bdi->ra_pages = (arg * 512) / PAGE_SIZE;
return 0;
case BLKBSZSET:
return blkdev_bszset(bdev, mode, argp);
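The BLKRAGET/BLKRASET cases above just convert between the bdi's ra_pages and the 512-byte sector units the ioctls speak. A small, hypothetical helper spelling out that arithmetic, assuming a 4 KiB PAGE_SIZE (example_* is not part of the patch):

/* Readahead unit conversion used by BLKRAGET/BLKRASET. The commonly seen
 * default of 32 readahead pages maps to 256 sectors (128 KiB).
 */
static unsigned long example_ra_pages_to_sectors(unsigned long ra_pages)
{
        return (ra_pages * 4096) / 512;
}

static unsigned long example_ra_sectors_to_pages(unsigned long sectors)
{
        return (sectors * 512) / 4096;
}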

block/mq-deadline.c (new file, mode 100644, 556 lines)

@ -0,0 +1,556 @@
/*
* MQ Deadline i/o scheduler - adaptation of the legacy deadline scheduler,
* for the blk-mq scheduling framework
*
* Copyright (C) 2016 Jens Axboe <axboe@kernel.dk>
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/blkdev.h>
#include <linux/blk-mq.h>
#include <linux/elevator.h>
#include <linux/bio.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/compiler.h>
#include <linux/rbtree.h>
#include <linux/sbitmap.h>
#include "blk.h"
#include "blk-mq.h"
#include "blk-mq-tag.h"
#include "blk-mq-sched.h"
/*
* See Documentation/block/deadline-iosched.txt
*/
static const int read_expire = HZ / 2; /* max time before a read is submitted. */
static const int write_expire = 5 * HZ; /* ditto for writes, these limits are SOFT! */
static const int writes_starved = 2; /* max times reads can starve a write */
static const int fifo_batch = 16; /* # of sequential requests treated as one
by the above parameters. For throughput. */
struct deadline_data {
/*
* run time data
*/
/*
* requests (deadline_rq s) are present on both sort_list and fifo_list
*/
struct rb_root sort_list[2];
struct list_head fifo_list[2];
/*
* next in sort order. read, write or both are NULL
*/
struct request *next_rq[2];
unsigned int batching; /* number of sequential requests made */
unsigned int starved; /* times reads have starved writes */
/*
* settings that change how the i/o scheduler behaves
*/
int fifo_expire[2];
int fifo_batch;
int writes_starved;
int front_merges;
spinlock_t lock;
struct list_head dispatch;
};
static inline struct rb_root *
deadline_rb_root(struct deadline_data *dd, struct request *rq)
{
return &dd->sort_list[rq_data_dir(rq)];
}
/*
* get the request after `rq' in sector-sorted order
*/
static inline struct request *
deadline_latter_request(struct request *rq)
{
struct rb_node *node = rb_next(&rq->rb_node);
if (node)
return rb_entry_rq(node);
return NULL;
}
static void
deadline_add_rq_rb(struct deadline_data *dd, struct request *rq)
{
struct rb_root *root = deadline_rb_root(dd, rq);
elv_rb_add(root, rq);
}
static inline void
deadline_del_rq_rb(struct deadline_data *dd, struct request *rq)
{
const int data_dir = rq_data_dir(rq);
if (dd->next_rq[data_dir] == rq)
dd->next_rq[data_dir] = deadline_latter_request(rq);
elv_rb_del(deadline_rb_root(dd, rq), rq);
}
/*
* remove rq from rbtree and fifo.
*/
static void deadline_remove_request(struct request_queue *q, struct request *rq)
{
struct deadline_data *dd = q->elevator->elevator_data;
list_del_init(&rq->queuelist);
/*
* We might not be on the rbtree, if we are doing an insert merge
*/
if (!RB_EMPTY_NODE(&rq->rb_node))
deadline_del_rq_rb(dd, rq);
elv_rqhash_del(q, rq);
if (q->last_merge == rq)
q->last_merge = NULL;
}
static void dd_request_merged(struct request_queue *q, struct request *req,
enum elv_merge type)
{
struct deadline_data *dd = q->elevator->elevator_data;
/*
* if the merge was a front merge, we need to reposition request
*/
if (type == ELEVATOR_FRONT_MERGE) {
elv_rb_del(deadline_rb_root(dd, req), req);
deadline_add_rq_rb(dd, req);
}
}
static void dd_merged_requests(struct request_queue *q, struct request *req,
struct request *next)
{
/*
* if next expires before rq, assign its expire time to rq
* and move into next position (next will be deleted) in fifo
*/
if (!list_empty(&req->queuelist) && !list_empty(&next->queuelist)) {
if (time_before((unsigned long)next->fifo_time,
(unsigned long)req->fifo_time)) {
list_move(&req->queuelist, &next->queuelist);
req->fifo_time = next->fifo_time;
}
}
/*
* kill knowledge of next, this one is a goner
*/
deadline_remove_request(q, next);
}
/*
* move an entry to dispatch queue
*/
static void
deadline_move_request(struct deadline_data *dd, struct request *rq)
{
const int data_dir = rq_data_dir(rq);
dd->next_rq[READ] = NULL;
dd->next_rq[WRITE] = NULL;
dd->next_rq[data_dir] = deadline_latter_request(rq);
/*
* take it off the sort and fifo list
*/
deadline_remove_request(rq->q, rq);
}
/*
* deadline_check_fifo returns 0 if there are no expired requests on the fifo,
* 1 otherwise. Requires !list_empty(&dd->fifo_list[data_dir])
*/
static inline int deadline_check_fifo(struct deadline_data *dd, int ddir)
{
struct request *rq = rq_entry_fifo(dd->fifo_list[ddir].next);
/*
* rq is expired!
*/
if (time_after_eq(jiffies, (unsigned long)rq->fifo_time))
return 1;
return 0;
}
/*
* deadline_dispatch_requests selects the best request according to
* read/write expire, fifo_batch, etc
*/
static struct request *__dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
{
struct deadline_data *dd = hctx->queue->elevator->elevator_data;
struct request *rq;
bool reads, writes;
int data_dir;
if (!list_empty(&dd->dispatch)) {
rq = list_first_entry(&dd->dispatch, struct request, queuelist);
list_del_init(&rq->queuelist);
goto done;
}
reads = !list_empty(&dd->fifo_list[READ]);
writes = !list_empty(&dd->fifo_list[WRITE]);
/*
* batches are currently reads XOR writes
*/
if (dd->next_rq[WRITE])
rq = dd->next_rq[WRITE];
else
rq = dd->next_rq[READ];
if (rq && dd->batching < dd->fifo_batch)
/* we have a next request and are still entitled to batch */
goto dispatch_request;
/*
* at this point we are not running a batch. select the appropriate
* data direction (read / write)
*/
if (reads) {
BUG_ON(RB_EMPTY_ROOT(&dd->sort_list[READ]));
if (writes && (dd->starved++ >= dd->writes_starved))
goto dispatch_writes;
data_dir = READ;
goto dispatch_find_request;
}
/*
* there are either no reads or writes have been starved
*/
if (writes) {
dispatch_writes:
BUG_ON(RB_EMPTY_ROOT(&dd->sort_list[WRITE]));
dd->starved = 0;
data_dir = WRITE;
goto dispatch_find_request;
}
return NULL;
dispatch_find_request:
/*
* we are not running a batch, find best request for selected data_dir
*/
if (deadline_check_fifo(dd, data_dir) || !dd->next_rq[data_dir]) {
/*
* A deadline has expired, the last request was in the other
* direction, or we have run out of higher-sectored requests.
* Start again from the request with the earliest expiry time.
*/
rq = rq_entry_fifo(dd->fifo_list[data_dir].next);
} else {
/*
* The last req was the same dir and we have a next request in
* sort order. No expired requests so continue on from here.
*/
rq = dd->next_rq[data_dir];
}
dd->batching = 0;
dispatch_request:
/*
* rq is the selected appropriate request.
*/
dd->batching++;
deadline_move_request(dd, rq);
done:
rq->rq_flags |= RQF_STARTED;
return rq;
}
static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
{
struct deadline_data *dd = hctx->queue->elevator->elevator_data;
struct request *rq;
spin_lock(&dd->lock);
rq = __dd_dispatch_request(hctx);
spin_unlock(&dd->lock);
return rq;
}
static void dd_exit_queue(struct elevator_queue *e)
{
struct deadline_data *dd = e->elevator_data;
BUG_ON(!list_empty(&dd->fifo_list[READ]));
BUG_ON(!list_empty(&dd->fifo_list[WRITE]));
kfree(dd);
}
/*
* initialize elevator private data (deadline_data).
*/
static int dd_init_queue(struct request_queue *q, struct elevator_type *e)
{
struct deadline_data *dd;
struct elevator_queue *eq;
eq = elevator_alloc(q, e);
if (!eq)
return -ENOMEM;
dd = kzalloc_node(sizeof(*dd), GFP_KERNEL, q->node);
if (!dd) {
kobject_put(&eq->kobj);
return -ENOMEM;
}
eq->elevator_data = dd;
INIT_LIST_HEAD(&dd->fifo_list[READ]);
INIT_LIST_HEAD(&dd->fifo_list[WRITE]);
dd->sort_list[READ] = RB_ROOT;
dd->sort_list[WRITE] = RB_ROOT;
dd->fifo_expire[READ] = read_expire;
dd->fifo_expire[WRITE] = write_expire;
dd->writes_starved = writes_starved;
dd->front_merges = 1;
dd->fifo_batch = fifo_batch;
spin_lock_init(&dd->lock);
INIT_LIST_HEAD(&dd->dispatch);
q->elevator = eq;
return 0;
}
static int dd_request_merge(struct request_queue *q, struct request **rq,
struct bio *bio)
{
struct deadline_data *dd = q->elevator->elevator_data;
sector_t sector = bio_end_sector(bio);
struct request *__rq;
if (!dd->front_merges)
return ELEVATOR_NO_MERGE;
__rq = elv_rb_find(&dd->sort_list[bio_data_dir(bio)], sector);
if (__rq) {
BUG_ON(sector != blk_rq_pos(__rq));
if (elv_bio_merge_ok(__rq, bio)) {
*rq = __rq;
return ELEVATOR_FRONT_MERGE;
}
}
return ELEVATOR_NO_MERGE;
}
static bool dd_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio)
{
struct request_queue *q = hctx->queue;
struct deadline_data *dd = q->elevator->elevator_data;
struct request *free = NULL;
bool ret;
spin_lock(&dd->lock);
ret = blk_mq_sched_try_merge(q, bio, &free);
spin_unlock(&dd->lock);
if (free)
blk_mq_free_request(free);
return ret;
}
/*
* add rq to rbtree and fifo
*/
static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
bool at_head)
{
struct request_queue *q = hctx->queue;
struct deadline_data *dd = q->elevator->elevator_data;
const int data_dir = rq_data_dir(rq);
if (blk_mq_sched_try_insert_merge(q, rq))
return;
blk_mq_sched_request_inserted(rq);
if (at_head || blk_rq_is_passthrough(rq)) {
if (at_head)
list_add(&rq->queuelist, &dd->dispatch);
else
list_add_tail(&rq->queuelist, &dd->dispatch);
} else {
deadline_add_rq_rb(dd, rq);
if (rq_mergeable(rq)) {
elv_rqhash_add(q, rq);
if (!q->last_merge)
q->last_merge = rq;
}
/*
* set expire time and add to fifo list
*/
rq->fifo_time = jiffies + dd->fifo_expire[data_dir];
list_add_tail(&rq->queuelist, &dd->fifo_list[data_dir]);
}
}
static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
struct list_head *list, bool at_head)
{
struct request_queue *q = hctx->queue;
struct deadline_data *dd = q->elevator->elevator_data;
spin_lock(&dd->lock);
while (!list_empty(list)) {
struct request *rq;
rq = list_first_entry(list, struct request, queuelist);
list_del_init(&rq->queuelist);
dd_insert_request(hctx, rq, at_head);
}
spin_unlock(&dd->lock);
}
static bool dd_has_work(struct blk_mq_hw_ctx *hctx)
{
struct deadline_data *dd = hctx->queue->elevator->elevator_data;
return !list_empty_careful(&dd->dispatch) ||
!list_empty_careful(&dd->fifo_list[0]) ||
!list_empty_careful(&dd->fifo_list[1]);
}
/*
* sysfs parts below
*/
static ssize_t
deadline_var_show(int var, char *page)
{
return sprintf(page, "%d\n", var);
}
static ssize_t
deadline_var_store(int *var, const char *page, size_t count)
{
char *p = (char *) page;
*var = simple_strtol(p, &p, 10);
return count;
}
#define SHOW_FUNCTION(__FUNC, __VAR, __CONV) \
static ssize_t __FUNC(struct elevator_queue *e, char *page) \
{ \
struct deadline_data *dd = e->elevator_data; \
int __data = __VAR; \
if (__CONV) \
__data = jiffies_to_msecs(__data); \
return deadline_var_show(__data, (page)); \
}
SHOW_FUNCTION(deadline_read_expire_show, dd->fifo_expire[READ], 1);
SHOW_FUNCTION(deadline_write_expire_show, dd->fifo_expire[WRITE], 1);
SHOW_FUNCTION(deadline_writes_starved_show, dd->writes_starved, 0);
SHOW_FUNCTION(deadline_front_merges_show, dd->front_merges, 0);
SHOW_FUNCTION(deadline_fifo_batch_show, dd->fifo_batch, 0);
#undef SHOW_FUNCTION
#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV) \
static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count) \
{ \
struct deadline_data *dd = e->elevator_data; \
int __data; \
int ret = deadline_var_store(&__data, (page), count); \
if (__data < (MIN)) \
__data = (MIN); \
else if (__data > (MAX)) \
__data = (MAX); \
if (__CONV) \
*(__PTR) = msecs_to_jiffies(__data); \
else \
*(__PTR) = __data; \
return ret; \
}
STORE_FUNCTION(deadline_read_expire_store, &dd->fifo_expire[READ], 0, INT_MAX, 1);
STORE_FUNCTION(deadline_write_expire_store, &dd->fifo_expire[WRITE], 0, INT_MAX, 1);
STORE_FUNCTION(deadline_writes_starved_store, &dd->writes_starved, INT_MIN, INT_MAX, 0);
STORE_FUNCTION(deadline_front_merges_store, &dd->front_merges, 0, 1, 0);
STORE_FUNCTION(deadline_fifo_batch_store, &dd->fifo_batch, 0, INT_MAX, 0);
#undef STORE_FUNCTION
#define DD_ATTR(name) \
__ATTR(name, S_IRUGO|S_IWUSR, deadline_##name##_show, \
deadline_##name##_store)
static struct elv_fs_entry deadline_attrs[] = {
DD_ATTR(read_expire),
DD_ATTR(write_expire),
DD_ATTR(writes_starved),
DD_ATTR(front_merges),
DD_ATTR(fifo_batch),
__ATTR_NULL
};
static struct elevator_type mq_deadline = {
.ops.mq = {
.insert_requests = dd_insert_requests,
.dispatch_request = dd_dispatch_request,
.next_request = elv_rb_latter_request,
.former_request = elv_rb_former_request,
.bio_merge = dd_bio_merge,
.request_merge = dd_request_merge,
.requests_merged = dd_merged_requests,
.request_merged = dd_request_merged,
.has_work = dd_has_work,
.init_sched = dd_init_queue,
.exit_sched = dd_exit_queue,
},
.uses_mq = true,
.elevator_attrs = deadline_attrs,
.elevator_name = "mq-deadline",
.elevator_owner = THIS_MODULE,
};
static int __init deadline_init(void)
{
return elv_register(&mq_deadline);
}
static void __exit deadline_exit(void)
{
elv_unregister(&mq_deadline);
}
module_init(deadline_init);
module_exit(deadline_exit);
MODULE_AUTHOR("Jens Axboe");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("MQ deadline IO scheduler");
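The file above doubles as the reference implementation of the new ops.mq elevator interface. As a reading aid, here is a stripped-down, hypothetical "mq-fifo" scheduler that keeps only the hooks blk-mq requires, modeled on the mq-deadline code: a single FIFO under a spinlock, no merging, no sorting, no sysfs attributes. All fifo_*/mq_fifo names are invented; registration would mirror deadline_init()/deadline_exit() above.

#include <linux/blkdev.h>
#include <linux/blk-mq.h>
#include <linux/elevator.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Hypothetical private data: one FIFO of requests under a lock. */
struct fifo_data {
        spinlock_t lock;
        struct list_head queue;
};

static int fifo_init_sched(struct request_queue *q, struct elevator_type *e)
{
        struct elevator_queue *eq;
        struct fifo_data *fd;

        eq = elevator_alloc(q, e);
        if (!eq)
                return -ENOMEM;

        fd = kzalloc_node(sizeof(*fd), GFP_KERNEL, q->node);
        if (!fd) {
                kobject_put(&eq->kobj);
                return -ENOMEM;
        }
        spin_lock_init(&fd->lock);
        INIT_LIST_HEAD(&fd->queue);

        eq->elevator_data = fd;
        q->elevator = eq;
        return 0;
}

static void fifo_exit_sched(struct elevator_queue *e)
{
        kfree(e->elevator_data);
}

static void fifo_insert_requests(struct blk_mq_hw_ctx *hctx,
                                 struct list_head *list, bool at_head)
{
        struct fifo_data *fd = hctx->queue->elevator->elevator_data;

        spin_lock(&fd->lock);
        if (at_head)
                list_splice_init(list, &fd->queue);
        else
                list_splice_tail_init(list, &fd->queue);
        spin_unlock(&fd->lock);
}

static struct request *fifo_dispatch_request(struct blk_mq_hw_ctx *hctx)
{
        struct fifo_data *fd = hctx->queue->elevator->elevator_data;
        struct request *rq;

        spin_lock(&fd->lock);
        rq = list_first_entry_or_null(&fd->queue, struct request, queuelist);
        if (rq)
                list_del_init(&rq->queuelist);
        spin_unlock(&fd->lock);
        return rq;
}

static bool fifo_has_work(struct blk_mq_hw_ctx *hctx)
{
        struct fifo_data *fd = hctx->queue->elevator->elevator_data;

        return !list_empty_careful(&fd->queue);
}

static struct elevator_type mq_fifo = {
        .ops.mq = {
                .init_sched       = fifo_init_sched,
                .exit_sched       = fifo_exit_sched,
                .insert_requests  = fifo_insert_requests,
                .dispatch_request = fifo_dispatch_request,
                .has_work         = fifo_has_work,
        },
        .uses_mq        = true,
        .elevator_name  = "mq-fifo",
        .elevator_owner = THIS_MODULE,
};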


@ -92,7 +92,7 @@ static void noop_exit_queue(struct elevator_queue *e)
}
static struct elevator_type elevator_noop = {
.ops = {
.ops.sq = {
.elevator_merge_req_fn = noop_merged_requests,
.elevator_dispatch_fn = noop_dispatch,
.elevator_add_req_fn = noop_add_request,

block/opal_proto.h (new file, mode 100644, 452 lines)

@ -0,0 +1,452 @@
/*
* Copyright © 2016 Intel Corporation
*
* Authors:
* Rafael Antognolli <rafael.antognolli@intel.com>
* Scott Bauer <scott.bauer@intel.com>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*/
#include <linux/types.h>
#ifndef _OPAL_PROTO_H
#define _OPAL_PROTO_H
/*
* These constant values come from:
* SPC-4 section
* 6.30 SECURITY PROTOCOL IN command / table 265.
*/
enum {
TCG_SECP_00 = 0,
TCG_SECP_01,
};
/*
* Token defs derived from:
* TCG_Storage_Architecture_Core_Spec_v2.01_r1.00
* 3.2.2 Data Stream Encoding
*/
enum opal_response_token {
OPAL_DTA_TOKENID_BYTESTRING = 0xe0,
OPAL_DTA_TOKENID_SINT = 0xe1,
OPAL_DTA_TOKENID_UINT = 0xe2,
OPAL_DTA_TOKENID_TOKEN = 0xe3, /* actual token is returned */
OPAL_DTA_TOKENID_INVALID = 0X0
};
#define DTAERROR_NO_METHOD_STATUS 0x89
#define GENERIC_HOST_SESSION_NUM 0x41
#define TPER_SYNC_SUPPORTED 0x01
#define TINY_ATOM_DATA_MASK 0x3F
#define TINY_ATOM_SIGNED 0x40
#define SHORT_ATOM_ID 0x80
#define SHORT_ATOM_BYTESTRING 0x20
#define SHORT_ATOM_SIGNED 0x10
#define SHORT_ATOM_LEN_MASK 0xF
#define MEDIUM_ATOM_ID 0xC0
#define MEDIUM_ATOM_BYTESTRING 0x10
#define MEDIUM_ATOM_SIGNED 0x8
#define MEDIUM_ATOM_LEN_MASK 0x7
#define LONG_ATOM_ID 0xe0
#define LONG_ATOM_BYTESTRING 0x2
#define LONG_ATOM_SIGNED 0x1
/* Derived from TCG Core spec 2.01 Section:
* 3.2.2.1
* Data Type
*/
#define TINY_ATOM_BYTE 0x7F
#define SHORT_ATOM_BYTE 0xBF
#define MEDIUM_ATOM_BYTE 0xDF
#define LONG_ATOM_BYTE 0xE3
#define OPAL_INVAL_PARAM 12
#define OPAL_MANUFACTURED_INACTIVE 0x08
#define OPAL_DISCOVERY_COMID 0x0001
#define LOCKING_RANGE_NON_GLOBAL 0x03
/*
* User IDs used in the TCG storage SSCs
* Derived from: TCG_Storage_Architecture_Core_Spec_v2.01_r1.00
* Section: 6.3 Assigned UIDs
*/
#define OPAL_UID_LENGTH 8
#define OPAL_METHOD_LENGTH 8
#define OPAL_MSID_KEYLEN 15
#define OPAL_UID_LENGTH_HALF 4
/* Enum to index OPALUID array */
enum opal_uid {
/* users */
OPAL_SMUID_UID,
OPAL_THISSP_UID,
OPAL_ADMINSP_UID,
OPAL_LOCKINGSP_UID,
OPAL_ENTERPRISE_LOCKINGSP_UID,
OPAL_ANYBODY_UID,
OPAL_SID_UID,
OPAL_ADMIN1_UID,
OPAL_USER1_UID,
OPAL_USER2_UID,
OPAL_PSID_UID,
OPAL_ENTERPRISE_BANDMASTER0_UID,
OPAL_ENTERPRISE_ERASEMASTER_UID,
/* tables */
OPAL_LOCKINGRANGE_GLOBAL,
OPAL_LOCKINGRANGE_ACE_RDLOCKED,
OPAL_LOCKINGRANGE_ACE_WRLOCKED,
OPAL_MBRCONTROL,
OPAL_MBR,
OPAL_AUTHORITY_TABLE,
OPAL_C_PIN_TABLE,
OPAL_LOCKING_INFO_TABLE,
OPAL_ENTERPRISE_LOCKING_INFO_TABLE,
/* C_PIN_TABLE object IDs */
OPAL_C_PIN_MSID,
OPAL_C_PIN_SID,
OPAL_C_PIN_ADMIN1,
/* half UIDs (only first 4 bytes used) */
OPAL_HALF_UID_AUTHORITY_OBJ_REF,
OPAL_HALF_UID_BOOLEAN_ACE,
/* omitted optional parameter */
OPAL_UID_HEXFF,
};
#define OPAL_METHOD_LENGTH 8
/* Enum for indexing the OPALMETHOD array */
enum opal_method {
OPAL_PROPERTIES,
OPAL_STARTSESSION,
OPAL_REVERT,
OPAL_ACTIVATE,
OPAL_EGET,
OPAL_ESET,
OPAL_NEXT,
OPAL_EAUTHENTICATE,
OPAL_GETACL,
OPAL_GENKEY,
OPAL_REVERTSP,
OPAL_GET,
OPAL_SET,
OPAL_AUTHENTICATE,
OPAL_RANDOM,
OPAL_ERASE,
};
enum opal_token {
/* Boolean */
OPAL_TRUE = 0x01,
OPAL_FALSE = 0x00,
OPAL_BOOLEAN_EXPR = 0x03,
/* cellblocks */
OPAL_TABLE = 0x00,
OPAL_STARTROW = 0x01,
OPAL_ENDROW = 0x02,
OPAL_STARTCOLUMN = 0x03,
OPAL_ENDCOLUMN = 0x04,
OPAL_VALUES = 0x01,
/* authority table */
OPAL_PIN = 0x03,
/* locking tokens */
OPAL_RANGESTART = 0x03,
OPAL_RANGELENGTH = 0x04,
OPAL_READLOCKENABLED = 0x05,
OPAL_WRITELOCKENABLED = 0x06,
OPAL_READLOCKED = 0x07,
OPAL_WRITELOCKED = 0x08,
OPAL_ACTIVEKEY = 0x0A,
/* locking info table */
OPAL_MAXRANGES = 0x04,
/* mbr control */
OPAL_MBRENABLE = 0x01,
OPAL_MBRDONE = 0x02,
/* properties */
OPAL_HOSTPROPERTIES = 0x00,
/* atoms */
OPAL_STARTLIST = 0xf0,
OPAL_ENDLIST = 0xf1,
OPAL_STARTNAME = 0xf2,
OPAL_ENDNAME = 0xf3,
OPAL_CALL = 0xf8,
OPAL_ENDOFDATA = 0xf9,
OPAL_ENDOFSESSION = 0xfa,
OPAL_STARTTRANSACTON = 0xfb,
OPAL_ENDTRANSACTON = 0xfC,
OPAL_EMPTYATOM = 0xff,
OPAL_WHERE = 0x00,
};
/* Locking state for a locking range */
enum opal_lockingstate {
OPAL_LOCKING_READWRITE = 0x01,
OPAL_LOCKING_READONLY = 0x02,
OPAL_LOCKING_LOCKED = 0x03,
};
/* Packets derived from:
* TCG_Storage_Architecture_Core_Spec_v2.01_r1.00
* Section: 3.2.3 ComPackets, Packets & Subpackets
*/
/* Comm Packet (header) for transmissions. */
struct opal_compacket {
__be32 reserved0;
u8 extendedComID[4];
__be32 outstandingData;
__be32 minTransfer;
__be32 length;
};
/* Packet structure. */
struct opal_packet {
__be32 tsn;
__be32 hsn;
__be32 seq_number;
__be16 reserved0;
__be16 ack_type;
__be32 acknowledgment;
__be32 length;
};
/* Data sub packet header */
struct opal_data_subpacket {
u8 reserved0[6];
__be16 kind;
__be32 length;
};
/* header of a response */
struct opal_header {
struct opal_compacket cp;
struct opal_packet pkt;
struct opal_data_subpacket subpkt;
};
#define FC_TPER 0x0001
#define FC_LOCKING 0x0002
#define FC_GEOMETRY 0x0003
#define FC_ENTERPRISE 0x0100
#define FC_DATASTORE 0x0202
#define FC_SINGLEUSER 0x0201
#define FC_OPALV100 0x0200
#define FC_OPALV200 0x0203
/*
* The Discovery 0 Header. As defined in
* Opal SSC Documentation
* Section: 3.3.5 Capability Discovery
*/
struct d0_header {
__be32 length; /* the length of the header 48 in 2.00.100 */
__be32 revision; /**< revision of the header 1 in 2.00.100 */
__be32 reserved01;
__be32 reserved02;
/*
* the remainder of the structure is vendor specific and will not be
* addressed now
*/
u8 ignored[32];
};
/*
* TPer Feature Descriptor. Contains flags indicating support for the
* TPer features described in the OPAL specification. The names match the
* OPAL terminology
*
* code == 0x001 in 2.00.100
*/
struct d0_tper_features {
/*
* supported_features bits:
* bit 7: reserved
* bit 6: com ID management
* bit 5: reserved
* bit 4: streaming support
* bit 3: buffer management
* bit 2: ACK/NACK
* bit 1: async
* bit 0: sync
*/
u8 supported_features;
/*
* bytes 5 through 15 are reserved, but we represent the first 3 as
* u8 to keep the other two 32-bit integers aligned.
*/
u8 reserved01[3];
__be32 reserved02;
__be32 reserved03;
};
/*
* Locking Feature Descriptor. Contains flags indicating support for the
* locking features described in the OPAL specification. The names match the
* OPAL terminology
*
* code == 0x0002 in 2.00.100
*/
struct d0_locking_features {
/*
* supported_features bits:
* bits 6-7: reserved
* bit 5: MBR done
* bit 4: MBR enabled
* bit 3: media encryption
* bit 2: locked
* bit 1: locking enabled
* bit 0: locking supported
*/
u8 supported_features;
/*
* bytes 5 through 15 are reserved, but we represent the first 3 as
* u8 to keep the other two 32-bit integers aligned.
*/
u8 reserved01[3];
__be32 reserved02;
__be32 reserved03;
};
/*
* Geometry Feature Descriptor. Contains flags indicating support for the
* geometry features described in the OPAL specification. The names match the
* OPAL terminology
*
* code == 0x0003 in 2.00.100
*/
struct d0_geometry_features {
/*
* skip 32 bits from header, needed to align the struct to 64 bits.
*/
u8 header[4];
/*
* reserved01:
* bits 1-6: reserved
* bit 0: align
*/
u8 reserved01;
u8 reserved02[7];
__be32 logical_block_size;
__be64 alignment_granularity;
__be64 lowest_aligned_lba;
};
/*
* Enterprise SSC Feature
*
* code == 0x0100
*/
struct d0_enterprise_ssc {
__be16 baseComID;
__be16 numComIDs;
/* range_crossing:
* bits 1-6: reserved
* bit 0: range crossing
*/
u8 range_crossing;
u8 reserved01;
__be16 reserved02;
__be32 reserved03;
__be32 reserved04;
};
/*
* Opal V1 feature
*
* code == 0x0200
*/
struct d0_opal_v100 {
__be16 baseComID;
__be16 numComIDs;
};
/*
* Single User Mode feature
*
* code == 0x0201
*/
struct d0_single_user_mode {
__be32 num_locking_objects;
/* reserved01:
* bit 0: any
* bit 1: all
* bit 2: policy
* bits 3-7: reserved
*/
u8 reserved01;
u8 reserved02;
__be16 reserved03;
__be32 reserved04;
};
/*
* Additional Datastores feature
*
* code == 0x0202
*/
struct d0_datastore_table {
__be16 reserved01;
__be16 max_tables;
__be32 max_size_tables;
__be32 table_size_alignment;
};
/*
* OPAL 2.0 feature
*
* code == 0x0203
*/
struct d0_opal_v200 {
__be16 baseComID;
__be16 numComIDs;
/* range_crossing:
* bits 1-6: reserved
* bit 0: range crossing
*/
u8 range_crossing;
/* num_locking_admin_auth:
* not aligned to 16 bits, so use two u8.
* stored in big endian:
* 0: MSB
* 1: LSB
*/
u8 num_locking_admin_auth[2];
/* num_locking_user_auth:
* not aligned to 16 bits, so use two u8.
* stored in big endian:
* 0: MSB
* 1: LSB
*/
u8 num_locking_user_auth[2];
u8 initialPIN;
u8 revertedPIN;
u8 reserved01;
__be32 reserved02;
};
/* Union of features used to parse the discovery 0 response */
struct d0_features {
__be16 code;
/*
* r_version bits:
* bits 4-7: version
* bits 0-3: reserved
*/
u8 r_version;
u8 length;
u8 features[];
};
#endif /* _OPAL_PROTO_H */
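The structures above are meant to be overlaid on the Level 0 Discovery response: a 48-byte d0_header whose length field covers the whole payload, followed by variable-length feature descriptors, each a 4-byte d0_features header plus length bytes of body. A hedged sketch of that walk; example_walk_discovery0() is hypothetical, while the layout and FC_* codes are the ones defined above.

#include <asm/byteorder.h>
#include "opal_proto.h"

/* Hypothetical walker over a Level 0 Discovery response buffer. */
static void example_walk_discovery0(const u8 *buf)
{
        const struct d0_header *hdr = (const struct d0_header *)buf;
        const u8 *pos = buf + sizeof(*hdr);
        const u8 *end = buf + be32_to_cpu(hdr->length);

        while (pos + sizeof(struct d0_features) <= end) {
                const struct d0_features *feat =
                        (const struct d0_features *)pos;

                switch (be16_to_cpu(feat->code)) {
                case FC_OPALV200:
                        /* feat->features holds a struct d0_opal_v200 */
                        break;
                default:
                        /* other FC_* descriptors are decoded the same way */
                        break;
                }
                /* 4-byte descriptor header plus 'length' bytes of body */
                pos += sizeof(*feat) + feat->length;
        }
}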


@ -293,7 +293,7 @@ static gpt_entry *alloc_read_gpt_entries(struct parsed_partitions *state,
if (!gpt)
return NULL;
count = le32_to_cpu(gpt->num_partition_entries) *
count = (size_t)le32_to_cpu(gpt->num_partition_entries) *
le32_to_cpu(gpt->sizeof_partition_entry);
if (!count)
return NULL;
@ -352,7 +352,7 @@ static int is_gpt_valid(struct parsed_partitions *state, u64 lba,
gpt_header **gpt, gpt_entry **ptes)
{
u32 crc, origcrc;
u64 lastlba;
u64 lastlba, pt_size;
if (!ptes)
return 0;
@ -434,13 +434,20 @@ static int is_gpt_valid(struct parsed_partitions *state, u64 lba,
goto fail;
}
/* Sanity check partition table size */
pt_size = (u64)le32_to_cpu((*gpt)->num_partition_entries) *
le32_to_cpu((*gpt)->sizeof_partition_entry);
if (pt_size > KMALLOC_MAX_SIZE) {
pr_debug("GUID Partition Table is too large: %llu > %lu bytes\n",
(unsigned long long)pt_size, KMALLOC_MAX_SIZE);
goto fail;
}
if (!(*ptes = alloc_read_gpt_entries(state, *gpt)))
goto fail;
/* Check the GUID Partition Entry Array CRC */
crc = efi_crc32((const unsigned char *) (*ptes),
le32_to_cpu((*gpt)->num_partition_entries) *
le32_to_cpu((*gpt)->sizeof_partition_entry));
crc = efi_crc32((const unsigned char *) (*ptes), pt_size);
if (crc != le32_to_cpu((*gpt)->partition_entry_array_crc32)) {
pr_debug("GUID Partition Entry Array CRC check failed.\n");


@ -230,15 +230,17 @@ EXPORT_SYMBOL(blk_verify_command);
static int blk_fill_sghdr_rq(struct request_queue *q, struct request *rq,
struct sg_io_hdr *hdr, fmode_t mode)
{
if (copy_from_user(rq->cmd, hdr->cmdp, hdr->cmd_len))
struct scsi_request *req = scsi_req(rq);
if (copy_from_user(req->cmd, hdr->cmdp, hdr->cmd_len))
return -EFAULT;
if (blk_verify_command(rq->cmd, mode & FMODE_WRITE))
if (blk_verify_command(req->cmd, mode & FMODE_WRITE))
return -EPERM;
/*
* fill in request structure
*/
rq->cmd_len = hdr->cmd_len;
req->cmd_len = hdr->cmd_len;
rq->timeout = msecs_to_jiffies(hdr->timeout);
if (!rq->timeout)
@ -254,6 +256,7 @@ static int blk_fill_sghdr_rq(struct request_queue *q, struct request *rq,
static int blk_complete_sghdr_rq(struct request *rq, struct sg_io_hdr *hdr,
struct bio *bio)
{
struct scsi_request *req = scsi_req(rq);
int r, ret = 0;
/*
@ -267,13 +270,13 @@ static int blk_complete_sghdr_rq(struct request *rq, struct sg_io_hdr *hdr,
hdr->info = 0;
if (hdr->masked_status || hdr->host_status || hdr->driver_status)
hdr->info |= SG_INFO_CHECK;
hdr->resid = rq->resid_len;
hdr->resid = req->resid_len;
hdr->sb_len_wr = 0;
if (rq->sense_len && hdr->sbp) {
int len = min((unsigned int) hdr->mx_sb_len, rq->sense_len);
if (req->sense_len && hdr->sbp) {
int len = min((unsigned int) hdr->mx_sb_len, req->sense_len);
if (!copy_to_user(hdr->sbp, rq->sense, len))
if (!copy_to_user(hdr->sbp, req->sense, len))
hdr->sb_len_wr = len;
else
ret = -EFAULT;
@ -294,7 +297,7 @@ static int sg_io(struct request_queue *q, struct gendisk *bd_disk,
int writing = 0;
int at_head = 0;
struct request *rq;
char sense[SCSI_SENSE_BUFFERSIZE];
struct scsi_request *req;
struct bio *bio;
if (hdr->interface_id != 'S')
@ -318,14 +321,16 @@ static int sg_io(struct request_queue *q, struct gendisk *bd_disk,
at_head = 1;
ret = -ENOMEM;
rq = blk_get_request(q, writing ? WRITE : READ, GFP_KERNEL);
rq = blk_get_request(q, writing ? REQ_OP_SCSI_OUT : REQ_OP_SCSI_IN,
GFP_KERNEL);
if (IS_ERR(rq))
return PTR_ERR(rq);
blk_rq_set_block_pc(rq);
req = scsi_req(rq);
scsi_req_init(rq);
if (hdr->cmd_len > BLK_MAX_CDB) {
rq->cmd = kzalloc(hdr->cmd_len, GFP_KERNEL);
if (!rq->cmd)
req->cmd = kzalloc(hdr->cmd_len, GFP_KERNEL);
if (!req->cmd)
goto out_put_request;
}
@ -357,9 +362,6 @@ static int sg_io(struct request_queue *q, struct gendisk *bd_disk,
goto out_free_cdb;
bio = rq->bio;
memset(sense, 0, sizeof(sense));
rq->sense = sense;
rq->sense_len = 0;
rq->retries = 0;
start_time = jiffies;
@ -375,8 +377,7 @@ static int sg_io(struct request_queue *q, struct gendisk *bd_disk,
ret = blk_complete_sghdr_rq(rq, hdr, bio);
out_free_cdb:
if (rq->cmd != rq->__cmd)
kfree(rq->cmd);
scsi_req_free_cmd(req);
out_put_request:
blk_put_request(rq);
return ret;
@ -420,9 +421,10 @@ int sg_scsi_ioctl(struct request_queue *q, struct gendisk *disk, fmode_t mode,
struct scsi_ioctl_command __user *sic)
{
struct request *rq;
struct scsi_request *req;
int err;
unsigned int in_len, out_len, bytes, opcode, cmdlen;
char *buffer = NULL, sense[SCSI_SENSE_BUFFERSIZE];
char *buffer = NULL;
if (!sic)
return -EINVAL;
@ -447,12 +449,14 @@ int sg_scsi_ioctl(struct request_queue *q, struct gendisk *disk, fmode_t mode,
}
rq = blk_get_request(q, in_len ? WRITE : READ, __GFP_RECLAIM);
rq = blk_get_request(q, in_len ? REQ_OP_SCSI_OUT : REQ_OP_SCSI_IN,
__GFP_RECLAIM);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
goto error_free_buffer;
}
blk_rq_set_block_pc(rq);
req = scsi_req(rq);
scsi_req_init(rq);
cmdlen = COMMAND_SIZE(opcode);
@ -460,14 +464,14 @@ int sg_scsi_ioctl(struct request_queue *q, struct gendisk *disk, fmode_t mode,
* get command and data to send to device, if any
*/
err = -EFAULT;
rq->cmd_len = cmdlen;
if (copy_from_user(rq->cmd, sic->data, cmdlen))
req->cmd_len = cmdlen;
if (copy_from_user(req->cmd, sic->data, cmdlen))
goto error;
if (in_len && copy_from_user(buffer, sic->data + cmdlen, in_len))
goto error;
err = blk_verify_command(rq->cmd, mode & FMODE_WRITE);
err = blk_verify_command(req->cmd, mode & FMODE_WRITE);
if (err)
goto error;
@ -503,18 +507,14 @@ int sg_scsi_ioctl(struct request_queue *q, struct gendisk *disk, fmode_t mode,
goto error;
}
memset(sense, 0, sizeof(sense));
rq->sense = sense;
rq->sense_len = 0;
blk_execute_rq(q, disk, rq, 0);
err = rq->errors & 0xff; /* only 8 bit SCSI status */
if (err) {
if (rq->sense_len && rq->sense) {
bytes = (OMAX_SB_LEN > rq->sense_len) ?
rq->sense_len : OMAX_SB_LEN;
if (copy_to_user(sic->data, rq->sense, bytes))
if (req->sense_len && req->sense) {
bytes = (OMAX_SB_LEN > req->sense_len) ?
req->sense_len : OMAX_SB_LEN;
if (copy_to_user(sic->data, req->sense, bytes))
err = -EFAULT;
}
} else {
@ -539,14 +539,14 @@ static int __blk_send_generic(struct request_queue *q, struct gendisk *bd_disk,
struct request *rq;
int err;
rq = blk_get_request(q, WRITE, __GFP_RECLAIM);
rq = blk_get_request(q, REQ_OP_SCSI_OUT, __GFP_RECLAIM);
if (IS_ERR(rq))
return PTR_ERR(rq);
blk_rq_set_block_pc(rq);
scsi_req_init(rq);
rq->timeout = BLK_DEFAULT_SG_TIMEOUT;
rq->cmd[0] = cmd;
rq->cmd[4] = data;
rq->cmd_len = 6;
scsi_req(rq)->cmd[0] = cmd;
scsi_req(rq)->cmd[4] = data;
scsi_req(rq)->cmd_len = 6;
err = blk_execute_rq(q, bd_disk, rq, 0);
blk_put_request(rq);
@ -743,6 +743,17 @@ int scsi_cmd_blk_ioctl(struct block_device *bd, fmode_t mode,
}
EXPORT_SYMBOL(scsi_cmd_blk_ioctl);
void scsi_req_init(struct request *rq)
{
struct scsi_request *req = scsi_req(rq);
memset(req->__cmd, 0, sizeof(req->__cmd));
req->cmd = req->__cmd;
req->cmd_len = BLK_MAX_CDB;
req->sense_len = 0;
}
EXPORT_SYMBOL(scsi_req_init);
static int __init blk_scsi_ioctl_init(void)
{
blk_set_cmd_filter_defaults(&blk_default_cmd_filter);
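With BLOCK_PC gone, the CDB, sense data and residual count live in the struct scsi_request that precedes the driver payload, and scsi_req_init() above wires them up. A minimal, hypothetical sketch of issuing a TEST UNIT READY through the new interface, following the same steps as __blk_send_generic() and sg_scsi_ioctl() above (example_test_unit_ready() is invented for illustration):

#include <linux/blkdev.h>
#include <linux/err.h>
#include <scsi/scsi.h>
#include <scsi/scsi_request.h>

static int example_test_unit_ready(struct request_queue *q,
                                   struct gendisk *disk)
{
        struct request *rq;
        struct scsi_request *req;
        int err;

        rq = blk_get_request(q, REQ_OP_SCSI_IN, GFP_KERNEL);
        if (IS_ERR(rq))
                return PTR_ERR(rq);

        scsi_req_init(rq);
        req = scsi_req(rq);
        req->cmd[0] = TEST_UNIT_READY;      /* remaining CDB bytes stay zero */
        req->cmd_len = 6;
        rq->timeout = BLK_DEFAULT_SG_TIMEOUT;

        blk_execute_rq(q, disk, rq, 0);
        err = rq->errors & 0xff;            /* 8-bit SCSI status, as above */
        /* on CHECK CONDITION, sense_len bytes would be at req->sense */

        blk_put_request(rq);
        return err;
}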

block/sed-opal.c (new file, mode 100644, 2488 lines): diff suppressed because it is too large


@ -1265,13 +1265,13 @@ static void ata_scsi_sdev_config(struct scsi_device *sdev)
*/
static int atapi_drain_needed(struct request *rq)
{
if (likely(rq->cmd_type != REQ_TYPE_BLOCK_PC))
if (likely(!blk_rq_is_passthrough(rq)))
return 0;
if (!blk_rq_bytes(rq) || op_is_write(req_op(rq)))
return 0;
return atapi_cmd_type(rq->cmd[0]) == ATAPI_MISC;
return atapi_cmd_type(scsi_req(rq)->cmd[0]) == ATAPI_MISC;
}
static int ata_scsi_dev_config(struct scsi_device *sdev,


@ -69,6 +69,7 @@ config AMIGA_Z2RAM
config GDROM
tristate "SEGA Dreamcast GD-ROM drive"
depends on SH_DREAMCAST
select BLK_SCSI_REQUEST # only for the generic cdrom code
help
A standard SEGA Dreamcast comes with a modified CD ROM drive called a
"GD-ROM" by SEGA to signify it is capable of reading special disks
@ -114,6 +115,7 @@ config BLK_CPQ_CISS_DA
tristate "Compaq Smart Array 5xxx support"
depends on PCI
select CHECK_SIGNATURE
select BLK_SCSI_REQUEST
help
This is the driver for Compaq Smart Array 5xxx controllers.
Everyone using these boards should say Y here.
@ -386,6 +388,7 @@ config BLK_DEV_RAM_DAX
config CDROM_PKTCDVD
tristate "Packet writing on CD/DVD media (DEPRECATED)"
depends on !UML
select BLK_SCSI_REQUEST
help
Note: This driver is deprecated and will be removed from the
kernel in the near future!
@ -501,6 +504,16 @@ config VIRTIO_BLK
This is the virtual block driver for virtio. It can be used with
lguest or QEMU based VMMs (like KVM or Xen). Say Y or M.
config VIRTIO_BLK_SCSI
bool "SCSI passthrough request for the Virtio block driver"
depends on VIRTIO_BLK
select BLK_SCSI_REQUEST
---help---
Enable support for SCSI passthrough (e.g. the SG_IO ioctl) on
virtio-blk devices. This is only supported for the legacy
virtio protocol and not enabled by default by any hypervisor.
You probably want to use virtio-scsi instead.
config BLK_DEV_HD
bool "Very old hard disk (MFM/RLL/IDE) driver"
depends on HAVE_IDE


@ -396,8 +396,8 @@ aoeblk_gdalloc(void *vp)
WARN_ON(d->gd);
WARN_ON(d->flags & DEVFL_UP);
blk_queue_max_hw_sectors(q, BLK_DEF_MAX_SECTORS);
q->backing_dev_info.name = "aoe";
q->backing_dev_info.ra_pages = READ_AHEAD / PAGE_SIZE;
q->backing_dev_info->name = "aoe";
q->backing_dev_info->ra_pages = READ_AHEAD / PAGE_SIZE;
d->bufpool = mp;
d->blkq = gd->queue = q;
q->queuedata = d;


@ -52,6 +52,7 @@
#include <scsi/scsi.h>
#include <scsi/sg.h>
#include <scsi/scsi_ioctl.h>
#include <scsi/scsi_request.h>
#include <linux/cdrom.h>
#include <linux/scatterlist.h>
#include <linux/kthread.h>
@ -1853,8 +1854,8 @@ static void cciss_softirq_done(struct request *rq)
dev_dbg(&h->pdev->dev, "Done with %p\n", rq);
/* set the residual count for pc requests */
if (rq->cmd_type == REQ_TYPE_BLOCK_PC)
rq->resid_len = c->err_info->ResidualCnt;
if (blk_rq_is_passthrough(rq))
scsi_req(rq)->resid_len = c->err_info->ResidualCnt;
blk_end_request_all(rq, (rq->errors == 0) ? 0 : -EIO);
@ -1941,9 +1942,16 @@ static void cciss_get_serial_no(ctlr_info_t *h, int logvol,
static int cciss_add_disk(ctlr_info_t *h, struct gendisk *disk,
int drv_index)
{
disk->queue = blk_init_queue(do_cciss_request, &h->lock);
disk->queue = blk_alloc_queue(GFP_KERNEL);
if (!disk->queue)
goto init_queue_failure;
disk->queue->cmd_size = sizeof(struct scsi_request);
disk->queue->request_fn = do_cciss_request;
disk->queue->queue_lock = &h->lock;
if (blk_init_allocated_queue(disk->queue) < 0)
goto cleanup_queue;
sprintf(disk->disk_name, "cciss/c%dd%d", h->ctlr, drv_index);
disk->major = h->major;
disk->first_minor = drv_index << NWD_SHIFT;
@ -3075,7 +3083,7 @@ static inline int evaluate_target_status(ctlr_info_t *h,
driver_byte = DRIVER_OK;
msg_byte = cmd->err_info->CommandStatus; /* correct? seems too device specific */
if (cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC)
if (blk_rq_is_passthrough(cmd->rq))
host_byte = DID_PASSTHROUGH;
else
host_byte = DID_OK;
@ -3084,7 +3092,7 @@ static inline int evaluate_target_status(ctlr_info_t *h,
host_byte, driver_byte);
if (cmd->err_info->ScsiStatus != SAM_STAT_CHECK_CONDITION) {
if (cmd->rq->cmd_type != REQ_TYPE_BLOCK_PC)
if (!blk_rq_is_passthrough(cmd->rq))
dev_warn(&h->pdev->dev, "cmd %p "
"has SCSI Status 0x%x\n",
cmd, cmd->err_info->ScsiStatus);
@ -3095,31 +3103,23 @@ static inline int evaluate_target_status(ctlr_info_t *h,
sense_key = 0xf & cmd->err_info->SenseInfo[2];
/* no status or recovered error */
if (((sense_key == 0x0) || (sense_key == 0x1)) &&
(cmd->rq->cmd_type != REQ_TYPE_BLOCK_PC))
!blk_rq_is_passthrough(cmd->rq))
error_value = 0;
if (check_for_unit_attention(h, cmd)) {
*retry_cmd = !(cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC);
*retry_cmd = !blk_rq_is_passthrough(cmd->rq);
return 0;
}
/* Not SG_IO or similar? */
if (cmd->rq->cmd_type != REQ_TYPE_BLOCK_PC) {
if (!blk_rq_is_passthrough(cmd->rq)) {
if (error_value != 0)
dev_warn(&h->pdev->dev, "cmd %p has CHECK CONDITION"
" sense key = 0x%x\n", cmd, sense_key);
return error_value;
}
/* SG_IO or similar, copy sense data back */
if (cmd->rq->sense) {
if (cmd->rq->sense_len > cmd->err_info->SenseLen)
cmd->rq->sense_len = cmd->err_info->SenseLen;
memcpy(cmd->rq->sense, cmd->err_info->SenseInfo,
cmd->rq->sense_len);
} else
cmd->rq->sense_len = 0;
scsi_req(cmd->rq)->sense_len = cmd->err_info->SenseLen;
return error_value;
}
@ -3146,15 +3146,14 @@ static inline void complete_command(ctlr_info_t *h, CommandList_struct *cmd,
rq->errors = evaluate_target_status(h, cmd, &retry_cmd);
break;
case CMD_DATA_UNDERRUN:
if (cmd->rq->cmd_type == REQ_TYPE_FS) {
if (!blk_rq_is_passthrough(cmd->rq)) {
dev_warn(&h->pdev->dev, "cmd %p has"
" completed with data underrun "
"reported\n", cmd);
cmd->rq->resid_len = cmd->err_info->ResidualCnt;
}
break;
case CMD_DATA_OVERRUN:
if (cmd->rq->cmd_type == REQ_TYPE_FS)
if (!blk_rq_is_passthrough(cmd->rq))
dev_warn(&h->pdev->dev, "cciss: cmd %p has"
" completed with data overrun "
"reported\n", cmd);
@ -3164,7 +3163,7 @@ static inline void complete_command(ctlr_info_t *h, CommandList_struct *cmd,
"reported invalid\n", cmd);
rq->errors = make_status_bytes(SAM_STAT_GOOD,
cmd->err_info->CommandStatus, DRIVER_OK,
(cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC) ?
blk_rq_is_passthrough(cmd->rq) ?
DID_PASSTHROUGH : DID_ERROR);
break;
case CMD_PROTOCOL_ERR:
@ -3172,7 +3171,7 @@ static inline void complete_command(ctlr_info_t *h, CommandList_struct *cmd,
"protocol error\n", cmd);
rq->errors = make_status_bytes(SAM_STAT_GOOD,
cmd->err_info->CommandStatus, DRIVER_OK,
(cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC) ?
blk_rq_is_passthrough(cmd->rq) ?
DID_PASSTHROUGH : DID_ERROR);
break;
case CMD_HARDWARE_ERR:
@ -3180,7 +3179,7 @@ static inline void complete_command(ctlr_info_t *h, CommandList_struct *cmd,
" hardware error\n", cmd);
rq->errors = make_status_bytes(SAM_STAT_GOOD,
cmd->err_info->CommandStatus, DRIVER_OK,
(cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC) ?
blk_rq_is_passthrough(cmd->rq) ?
DID_PASSTHROUGH : DID_ERROR);
break;
case CMD_CONNECTION_LOST:
@ -3188,7 +3187,7 @@ static inline void complete_command(ctlr_info_t *h, CommandList_struct *cmd,
"connection lost\n", cmd);
rq->errors = make_status_bytes(SAM_STAT_GOOD,
cmd->err_info->CommandStatus, DRIVER_OK,
(cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC) ?
blk_rq_is_passthrough(cmd->rq) ?
DID_PASSTHROUGH : DID_ERROR);
break;
case CMD_ABORTED:
@ -3196,7 +3195,7 @@ static inline void complete_command(ctlr_info_t *h, CommandList_struct *cmd,
"aborted\n", cmd);
rq->errors = make_status_bytes(SAM_STAT_GOOD,
cmd->err_info->CommandStatus, DRIVER_OK,
(cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC) ?
blk_rq_is_passthrough(cmd->rq) ?
DID_PASSTHROUGH : DID_ABORT);
break;
case CMD_ABORT_FAILED:
@ -3204,7 +3203,7 @@ static inline void complete_command(ctlr_info_t *h, CommandList_struct *cmd,
"abort failed\n", cmd);
rq->errors = make_status_bytes(SAM_STAT_GOOD,
cmd->err_info->CommandStatus, DRIVER_OK,
(cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC) ?
blk_rq_is_passthrough(cmd->rq) ?
DID_PASSTHROUGH : DID_ERROR);
break;
case CMD_UNSOLICITED_ABORT:
@ -3219,21 +3218,21 @@ static inline void complete_command(ctlr_info_t *h, CommandList_struct *cmd,
"%p retried too many times\n", cmd);
rq->errors = make_status_bytes(SAM_STAT_GOOD,
cmd->err_info->CommandStatus, DRIVER_OK,
(cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC) ?
blk_rq_is_passthrough(cmd->rq) ?
DID_PASSTHROUGH : DID_ABORT);
break;
case CMD_TIMEOUT:
dev_warn(&h->pdev->dev, "cmd %p timedout\n", cmd);
rq->errors = make_status_bytes(SAM_STAT_GOOD,
cmd->err_info->CommandStatus, DRIVER_OK,
(cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC) ?
blk_rq_is_passthrough(cmd->rq) ?
DID_PASSTHROUGH : DID_ERROR);
break;
case CMD_UNABORTABLE:
dev_warn(&h->pdev->dev, "cmd %p unabortable\n", cmd);
rq->errors = make_status_bytes(SAM_STAT_GOOD,
cmd->err_info->CommandStatus, DRIVER_OK,
cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC ?
blk_rq_is_passthrough(cmd->rq) ?
DID_PASSTHROUGH : DID_ERROR);
break;
default:
@ -3242,7 +3241,7 @@ static inline void complete_command(ctlr_info_t *h, CommandList_struct *cmd,
cmd->err_info->CommandStatus);
rq->errors = make_status_bytes(SAM_STAT_GOOD,
cmd->err_info->CommandStatus, DRIVER_OK,
(cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC) ?
blk_rq_is_passthrough(cmd->rq) ?
DID_PASSTHROUGH : DID_ERROR);
}
@ -3395,7 +3394,9 @@ static void do_cciss_request(struct request_queue *q)
c->Header.SGList = h->max_cmd_sgentries;
set_performant_mode(h, c);
if (likely(creq->cmd_type == REQ_TYPE_FS)) {
switch (req_op(creq)) {
case REQ_OP_READ:
case REQ_OP_WRITE:
if(h->cciss_read == CCISS_READ_10) {
c->Request.CDB[1] = 0;
c->Request.CDB[2] = (start_blk >> 24) & 0xff; /* MSB */
@ -3425,12 +3426,16 @@ static void do_cciss_request(struct request_queue *q)
c->Request.CDB[13]= blk_rq_sectors(creq) & 0xff;
c->Request.CDB[14] = c->Request.CDB[15] = 0;
}
} else if (creq->cmd_type == REQ_TYPE_BLOCK_PC) {
c->Request.CDBLen = creq->cmd_len;
memcpy(c->Request.CDB, creq->cmd, BLK_MAX_CDB);
} else {
break;
case REQ_OP_SCSI_IN:
case REQ_OP_SCSI_OUT:
c->Request.CDBLen = scsi_req(creq)->cmd_len;
memcpy(c->Request.CDB, scsi_req(creq)->cmd, BLK_MAX_CDB);
scsi_req(creq)->sense = c->err_info->SenseInfo;
break;
default:
dev_warn(&h->pdev->dev, "bad request type %d\n",
creq->cmd_type);
creq->cmd_flags);
BUG();
}
@ -4074,41 +4079,27 @@ clean_up:
static void cciss_interrupt_mode(ctlr_info_t *h)
{
#ifdef CONFIG_PCI_MSI
int err;
struct msix_entry cciss_msix_entries[4] = { {0, 0}, {0, 1},
{0, 2}, {0, 3}
};
int ret;
/* Some boards advertise MSI but don't really support it */
if ((h->board_id == 0x40700E11) || (h->board_id == 0x40800E11) ||
(h->board_id == 0x40820E11) || (h->board_id == 0x40830E11))
goto default_int_mode;
if (pci_find_capability(h->pdev, PCI_CAP_ID_MSIX)) {
err = pci_enable_msix_exact(h->pdev, cciss_msix_entries, 4);
if (!err) {
h->intr[0] = cciss_msix_entries[0].vector;
h->intr[1] = cciss_msix_entries[1].vector;
h->intr[2] = cciss_msix_entries[2].vector;
h->intr[3] = cciss_msix_entries[3].vector;
h->msix_vector = 1;
return;
} else {
dev_warn(&h->pdev->dev,
"MSI-X init failed %d\n", err);
}
}
if (pci_find_capability(h->pdev, PCI_CAP_ID_MSI)) {
if (!pci_enable_msi(h->pdev))
h->msi_vector = 1;
else
dev_warn(&h->pdev->dev, "MSI init failed\n");
ret = pci_alloc_irq_vectors(h->pdev, 4, 4, PCI_IRQ_MSIX);
if (ret >= 0) {
h->intr[0] = pci_irq_vector(h->pdev, 0);
h->intr[1] = pci_irq_vector(h->pdev, 1);
h->intr[2] = pci_irq_vector(h->pdev, 2);
h->intr[3] = pci_irq_vector(h->pdev, 3);
return;
}
ret = pci_alloc_irq_vectors(h->pdev, 1, 1, PCI_IRQ_MSI);
default_int_mode:
#endif /* CONFIG_PCI_MSI */
/* if we get here we're going to use the default interrupt mode */
h->intr[h->intr_mode] = h->pdev->irq;
h->intr[h->intr_mode] = pci_irq_vector(h->pdev, 0);
return;
}
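For readers following the cciss conversion above: pci_alloc_irq_vectors() folds the old pci_enable_msix_exact()/pci_enable_msi() juggling into one call, and pci_irq_vector() maps a vector index back to a Linux IRQ number. A generic sketch of the pattern (names are placeholders, not cciss code):

/*
 * Illustrative sketch: request up to four MSI-X/MSI vectors, falling back
 * to legacy INTx, then hook up vector 0.  Teardown is free_irq() followed
 * by pci_free_irq_vectors().
 */
static irqreturn_t example_handler(int irq, void *data)
{
	return IRQ_HANDLED;
}

static int example_setup_irq(struct pci_dev *pdev)
{
	int nvec = pci_alloc_irq_vectors(pdev, 1, 4,
			PCI_IRQ_MSIX | PCI_IRQ_MSI | PCI_IRQ_LEGACY);

	if (nvec < 0)
		return nvec;
	return request_irq(pci_irq_vector(pdev, 0), example_handler, 0,
			   "example", pdev);
}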
@ -4888,7 +4879,7 @@ static int cciss_request_irq(ctlr_info_t *h,
irqreturn_t (*msixhandler)(int, void *),
irqreturn_t (*intxhandler)(int, void *))
{
if (h->msix_vector || h->msi_vector) {
if (h->pdev->msi_enabled || h->pdev->msix_enabled) {
if (!request_irq(h->intr[h->intr_mode], msixhandler,
0, h->devname, h))
return 0;
@ -4934,12 +4925,7 @@ static void cciss_undo_allocations_after_kdump_soft_reset(ctlr_info_t *h)
int ctlr = h->ctlr;
free_irq(h->intr[h->intr_mode], h);
#ifdef CONFIG_PCI_MSI
if (h->msix_vector)
pci_disable_msix(h->pdev);
else if (h->msi_vector)
pci_disable_msi(h->pdev);
#endif /* CONFIG_PCI_MSI */
pci_free_irq_vectors(h->pdev);
cciss_free_sg_chain_blocks(h->cmd_sg_list, h->nr_cmds);
cciss_free_scatterlists(h);
cciss_free_cmd_pool(h);
@ -5295,12 +5281,7 @@ static void cciss_remove_one(struct pci_dev *pdev)
cciss_shutdown(pdev);
#ifdef CONFIG_PCI_MSI
if (h->msix_vector)
pci_disable_msix(h->pdev);
else if (h->msi_vector)
pci_disable_msi(h->pdev);
#endif /* CONFIG_PCI_MSI */
pci_free_irq_vectors(h->pdev);
iounmap(h->transtable);
iounmap(h->cfgtable);


@ -90,8 +90,6 @@ struct ctlr_info
# define SIMPLE_MODE_INT 2
# define MEMQ_MODE_INT 3
unsigned int intr[4];
unsigned int msix_vector;
unsigned int msi_vector;
int intr_mode;
int cciss_max_sectors;
BYTE cciss_read;
@ -333,7 +331,7 @@ static unsigned long SA5_performant_completed(ctlr_info_t *h)
*/
register_value = readl(h->vaddr + SA5_OUTDB_STATUS);
/* msi auto clears the interrupt pending bit. */
if (!(h->msi_vector || h->msix_vector)) {
if (!(h->pdev->msi_enabled || h->pdev->msix_enabled)) {
writel(SA5_OUTDB_CLEAR_PERF_BIT, h->vaddr + SA5_OUTDB_CLEAR);
/* Do a read in order to flush the write to the controller
* (as per spec.)
@ -393,7 +391,7 @@ static bool SA5_performant_intr_pending(ctlr_info_t *h)
if (!register_value)
return false;
if (h->msi_vector || h->msix_vector)
if (h->pdev->msi_enabled || h->pdev->msix_enabled)
return true;
/* Read outbound doorbell to flush */


@ -2462,7 +2462,7 @@ static int drbd_congested(void *congested_data, int bdi_bits)
if (get_ldev(device)) {
q = bdev_get_queue(device->ldev->backing_bdev);
r = bdi_congested(&q->backing_dev_info, bdi_bits);
r = bdi_congested(q->backing_dev_info, bdi_bits);
put_ldev(device);
if (r)
reason = 'b';
@ -2834,8 +2834,8 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
/* we have no partitions. we contain only ourselves. */
device->this_bdev->bd_contains = device->this_bdev;
q->backing_dev_info.congested_fn = drbd_congested;
q->backing_dev_info.congested_data = device;
q->backing_dev_info->congested_fn = drbd_congested;
q->backing_dev_info->congested_data = device;
blk_queue_make_request(q, drbd_make_request);
blk_queue_write_cache(q, true, true);


@ -1328,11 +1328,13 @@ static void drbd_setup_queue_param(struct drbd_device *device, struct drbd_backi
if (b) {
blk_queue_stack_limits(q, b);
if (q->backing_dev_info.ra_pages != b->backing_dev_info.ra_pages) {
if (q->backing_dev_info->ra_pages !=
b->backing_dev_info->ra_pages) {
drbd_info(device, "Adjusting my ra_pages to backing device's (%lu -> %lu)\n",
q->backing_dev_info.ra_pages,
b->backing_dev_info.ra_pages);
q->backing_dev_info.ra_pages = b->backing_dev_info.ra_pages;
q->backing_dev_info->ra_pages,
b->backing_dev_info->ra_pages);
q->backing_dev_info->ra_pages =
b->backing_dev_info->ra_pages;
}
}
fixup_discard_if_not_supported(q);
@ -3345,7 +3347,7 @@ static void device_to_statistics(struct device_statistics *s,
s->dev_disk_flags = md->flags;
q = bdev_get_queue(device->ldev->backing_bdev);
s->dev_lower_blocked =
bdi_congested(&q->backing_dev_info,
bdi_congested(q->backing_dev_info,
(1 << WB_async_congested) |
(1 << WB_sync_congested));
put_ldev(device);


@ -288,7 +288,7 @@ static int drbd_seq_show(struct seq_file *seq, void *v)
seq_printf(seq, "%2d: cs:Unconfigured\n", i);
} else {
/* reset device->congestion_reason */
bdi_rw_congested(&device->rq_queue->backing_dev_info);
bdi_rw_congested(device->rq_queue->backing_dev_info);
nc = rcu_dereference(first_peer_device(device)->connection->net_conf);
wp = nc ? nc->wire_protocol - DRBD_PROT_A + 'A' : ' ';


@ -927,7 +927,7 @@ static bool remote_due_to_read_balancing(struct drbd_device *device, sector_t se
switch (rbm) {
case RB_CONGESTED_REMOTE:
bdi = &device->ldev->backing_bdev->bd_disk->queue->backing_dev_info;
bdi = device->ldev->backing_bdev->bd_disk->queue->backing_dev_info;
return bdi_read_congested(bdi);
case RB_LEAST_PENDING:
return atomic_read(&device->local_cnt) >


@ -2900,8 +2900,8 @@ static void do_fd_request(struct request_queue *q)
return;
if (WARN(atomic_read(&usage_count) == 0,
"warning: usage count=0, current_req=%p sect=%ld type=%x flags=%llx\n",
current_req, (long)blk_rq_pos(current_req), current_req->cmd_type,
"warning: usage count=0, current_req=%p sect=%ld flags=%llx\n",
current_req, (long)blk_rq_pos(current_req),
(unsigned long long) current_req->cmd_flags))
return;
@ -3119,7 +3119,7 @@ static int raw_cmd_copyin(int cmd, void __user *param,
*rcmd = NULL;
loop:
ptr = kmalloc(sizeof(struct floppy_raw_cmd), GFP_USER);
ptr = kmalloc(sizeof(struct floppy_raw_cmd), GFP_KERNEL);
if (!ptr)
return -ENOMEM;
*rcmd = ptr;


@ -626,30 +626,29 @@ repeat:
req_data_dir(req) == READ ? "read" : "writ",
cyl, head, sec, nsect, bio_data(req->bio));
#endif
if (req->cmd_type == REQ_TYPE_FS) {
switch (rq_data_dir(req)) {
case READ:
hd_out(disk, nsect, sec, head, cyl, ATA_CMD_PIO_READ,
&read_intr);
if (reset)
goto repeat;
break;
case WRITE:
hd_out(disk, nsect, sec, head, cyl, ATA_CMD_PIO_WRITE,
&write_intr);
if (reset)
goto repeat;
if (wait_DRQ()) {
bad_rw_intr();
goto repeat;
}
outsw(HD_DATA, bio_data(req->bio), 256);
break;
default:
printk("unknown hd-command\n");
hd_end_request_cur(-EIO);
break;
switch (req_op(req)) {
case REQ_OP_READ:
hd_out(disk, nsect, sec, head, cyl, ATA_CMD_PIO_READ,
&read_intr);
if (reset)
goto repeat;
break;
case REQ_OP_WRITE:
hd_out(disk, nsect, sec, head, cyl, ATA_CMD_PIO_WRITE,
&write_intr);
if (reset)
goto repeat;
if (wait_DRQ()) {
bad_rw_intr();
goto repeat;
}
outsw(HD_DATA, bio_data(req->bio), 256);
break;
default:
printk("unknown hd-command\n");
hd_end_request_cur(-EIO);
break;
}
}


@ -1097,9 +1097,12 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
if ((unsigned int) info->lo_encrypt_key_size > LO_KEY_SIZE)
return -EINVAL;
/* I/O need to be drained during transfer transition */
blk_mq_freeze_queue(lo->lo_queue);
err = loop_release_xfer(lo);
if (err)
return err;
goto exit;
if (info->lo_encrypt_type) {
unsigned int type = info->lo_encrypt_type;
@ -1114,12 +1117,14 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
err = loop_init_xfer(lo, xfer, info);
if (err)
return err;
goto exit;
if (lo->lo_offset != info->lo_offset ||
lo->lo_sizelimit != info->lo_sizelimit)
if (figure_loop_size(lo, info->lo_offset, info->lo_sizelimit))
return -EFBIG;
if (figure_loop_size(lo, info->lo_offset, info->lo_sizelimit)) {
err = -EFBIG;
goto exit;
}
loop_config_discard(lo);
@ -1156,7 +1161,9 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
/* update dio if lo_offset or transfer is changed */
__loop_update_dio(lo, lo->use_dio);
return 0;
exit:
blk_mq_unfreeze_queue(lo->lo_queue);
return err;
}
static int
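The loop change above wraps the whole status update in blk_mq_freeze_queue()/blk_mq_unfreeze_queue(), so the transfer function and geometry never change under in-flight I/O, and every error path now unfreezes through the new exit label. The pattern in generic form (the update callback is a placeholder, not loop code):

/*
 * Illustrative sketch: drain and block I/O on a blk-mq queue, apply a
 * configuration change, then resume.  "apply" is a hypothetical helper.
 */
static int example_reconfigure(struct request_queue *q,
			       int (*apply)(void *data), void *data)
{
	int err;

	blk_mq_freeze_queue(q);		/* waits for outstanding requests */
	err = apply(data);
	blk_mq_unfreeze_queue(q);
	return err;
}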


@ -670,15 +670,17 @@ static void mg_request_poll(struct request_queue *q)
break;
}
if (unlikely(host->req->cmd_type != REQ_TYPE_FS)) {
mg_end_request_cur(host, -EIO);
continue;
}
if (rq_data_dir(host->req) == READ)
switch (req_op(host->req)) {
case REQ_OP_READ:
mg_read(host->req);
else
break;
case REQ_OP_WRITE:
mg_write(host->req);
break;
default:
mg_end_request_cur(host, -EIO);
break;
}
}
}
@ -687,13 +689,15 @@ static unsigned int mg_issue_req(struct request *req,
unsigned int sect_num,
unsigned int sect_cnt)
{
if (rq_data_dir(req) == READ) {
switch (req_op(host->req)) {
case REQ_OP_READ:
if (mg_out(host, sect_num, sect_cnt, MG_CMD_RD, &mg_read_intr)
!= MG_ERR_NONE) {
mg_bad_rw_intr(host);
return host->error;
}
} else {
break;
case REQ_OP_WRITE:
/* TODO : handler */
outb(ATA_NIEN, (unsigned long)host->dev_base + MG_REG_DRV_CTRL);
if (mg_out(host, sect_num, sect_cnt, MG_CMD_WR, &mg_write_intr)
@ -712,6 +716,10 @@ static unsigned int mg_issue_req(struct request *req,
mod_timer(&host->timer, jiffies + 3 * HZ);
outb(MG_CMD_WR_CONF, (unsigned long)host->dev_base +
MG_REG_COMMAND);
break;
default:
mg_end_request_cur(host, -EIO);
break;
}
return MG_ERR_NONE;
}
@ -753,11 +761,6 @@ static void mg_request(struct request_queue *q)
continue;
}
if (unlikely(req->cmd_type != REQ_TYPE_FS)) {
mg_end_request_cur(host, -EIO);
continue;
}
if (!mg_issue_req(req, host, sect_num, sect_cnt))
return;
}


@ -41,6 +41,9 @@
#include <linux/nbd.h>
static DEFINE_IDR(nbd_index_idr);
static DEFINE_MUTEX(nbd_index_mutex);
struct nbd_sock {
struct socket *sock;
struct mutex tx_lock;
@ -89,8 +92,9 @@ static struct dentry *nbd_dbg_dir;
#define NBD_MAGIC 0x68797548
static unsigned int nbds_max = 16;
static struct nbd_device *nbd_dev;
static int max_part;
static struct workqueue_struct *recv_workqueue;
static int part_shift;
static inline struct device *nbd_to_dev(struct nbd_device *nbd)
{
@ -193,13 +197,6 @@ static enum blk_eh_timer_return nbd_xmit_timeout(struct request *req,
set_bit(NBD_TIMEDOUT, &nbd->runtime_flags);
req->errors++;
/*
* If our disconnect packet times out then we're already holding the
* config_lock and could deadlock here, so just set an error and return,
* we'll handle shutting everything down later.
*/
if (req->cmd_type == REQ_TYPE_DRV_PRIV)
return BLK_EH_HANDLED;
mutex_lock(&nbd->config_lock);
sock_shutdown(nbd);
mutex_unlock(&nbd->config_lock);
@ -278,14 +275,29 @@ static int nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd, int index)
u32 type;
u32 tag = blk_mq_unique_tag(req);
if (req_op(req) == REQ_OP_DISCARD)
switch (req_op(req)) {
case REQ_OP_DISCARD:
type = NBD_CMD_TRIM;
else if (req_op(req) == REQ_OP_FLUSH)
break;
case REQ_OP_FLUSH:
type = NBD_CMD_FLUSH;
else if (rq_data_dir(req) == WRITE)
break;
case REQ_OP_WRITE:
type = NBD_CMD_WRITE;
else
break;
case REQ_OP_READ:
type = NBD_CMD_READ;
break;
default:
return -EIO;
}
if (rq_data_dir(req) == WRITE &&
(nbd->flags & NBD_FLAG_READ_ONLY)) {
dev_err_ratelimited(disk_to_dev(nbd->disk),
"Write on read-only\n");
return -EIO;
}
memset(&request, 0, sizeof(request));
request.magic = htonl(NBD_REQUEST_MAGIC);
@ -510,18 +522,6 @@ static void nbd_handle_cmd(struct nbd_cmd *cmd, int index)
goto error_out;
}
if (req->cmd_type != REQ_TYPE_FS &&
req->cmd_type != REQ_TYPE_DRV_PRIV)
goto error_out;
if (req->cmd_type == REQ_TYPE_FS &&
rq_data_dir(req) == WRITE &&
(nbd->flags & NBD_FLAG_READ_ONLY)) {
dev_err_ratelimited(disk_to_dev(nbd->disk),
"Write on read-only\n");
goto error_out;
}
req->errors = 0;
nsock = nbd->socks[index];
@ -785,7 +785,7 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
INIT_WORK(&args[i].work, recv_work);
args[i].nbd = nbd;
args[i].index = i;
queue_work(system_long_wq, &args[i].work);
queue_work(recv_workqueue, &args[i].work);
}
wait_event_interruptible(nbd->recv_wq,
atomic_read(&nbd->recv_threads) == 0);
@ -996,6 +996,103 @@ static struct blk_mq_ops nbd_mq_ops = {
.timeout = nbd_xmit_timeout,
};
static void nbd_dev_remove(struct nbd_device *nbd)
{
struct gendisk *disk = nbd->disk;
nbd->magic = 0;
if (disk) {
del_gendisk(disk);
blk_cleanup_queue(disk->queue);
blk_mq_free_tag_set(&nbd->tag_set);
put_disk(disk);
}
kfree(nbd);
}
static int nbd_dev_add(int index)
{
struct nbd_device *nbd;
struct gendisk *disk;
struct request_queue *q;
int err = -ENOMEM;
nbd = kzalloc(sizeof(struct nbd_device), GFP_KERNEL);
if (!nbd)
goto out;
disk = alloc_disk(1 << part_shift);
if (!disk)
goto out_free_nbd;
if (index >= 0) {
err = idr_alloc(&nbd_index_idr, nbd, index, index + 1,
GFP_KERNEL);
if (err == -ENOSPC)
err = -EEXIST;
} else {
err = idr_alloc(&nbd_index_idr, nbd, 0, 0, GFP_KERNEL);
if (err >= 0)
index = err;
}
if (err < 0)
goto out_free_disk;
nbd->disk = disk;
nbd->tag_set.ops = &nbd_mq_ops;
nbd->tag_set.nr_hw_queues = 1;
nbd->tag_set.queue_depth = 128;
nbd->tag_set.numa_node = NUMA_NO_NODE;
nbd->tag_set.cmd_size = sizeof(struct nbd_cmd);
nbd->tag_set.flags = BLK_MQ_F_SHOULD_MERGE |
BLK_MQ_F_SG_MERGE | BLK_MQ_F_BLOCKING;
nbd->tag_set.driver_data = nbd;
err = blk_mq_alloc_tag_set(&nbd->tag_set);
if (err)
goto out_free_idr;
q = blk_mq_init_queue(&nbd->tag_set);
if (IS_ERR(q)) {
err = PTR_ERR(q);
goto out_free_tags;
}
disk->queue = q;
/*
* Tell the block layer that we are not a rotational device
*/
queue_flag_set_unlocked(QUEUE_FLAG_NONROT, disk->queue);
queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, disk->queue);
disk->queue->limits.discard_granularity = 512;
blk_queue_max_discard_sectors(disk->queue, UINT_MAX);
disk->queue->limits.discard_zeroes_data = 0;
blk_queue_max_hw_sectors(disk->queue, 65536);
disk->queue->limits.max_sectors = 256;
nbd->magic = NBD_MAGIC;
mutex_init(&nbd->config_lock);
disk->major = NBD_MAJOR;
disk->first_minor = index << part_shift;
disk->fops = &nbd_fops;
disk->private_data = nbd;
sprintf(disk->disk_name, "nbd%d", index);
init_waitqueue_head(&nbd->recv_wq);
nbd_reset(nbd);
add_disk(disk);
return index;
out_free_tags:
blk_mq_free_tag_set(&nbd->tag_set);
out_free_idr:
idr_remove(&nbd_index_idr, index);
out_free_disk:
put_disk(disk);
out_free_nbd:
kfree(nbd);
out:
return err;
}
/*
* And here should be modules and kernel interface
* (Just smiley confuses emacs :-)
@ -1003,9 +1100,7 @@ static struct blk_mq_ops nbd_mq_ops = {
static int __init nbd_init(void)
{
int err = -ENOMEM;
int i;
int part_shift;
BUILD_BUG_ON(sizeof(struct nbd_request) != 28);
@ -1034,111 +1129,38 @@ static int __init nbd_init(void)
if (nbds_max > 1UL << (MINORBITS - part_shift))
return -EINVAL;
nbd_dev = kcalloc(nbds_max, sizeof(*nbd_dev), GFP_KERNEL);
if (!nbd_dev)
recv_workqueue = alloc_workqueue("knbd-recv",
WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
if (!recv_workqueue)
return -ENOMEM;
for (i = 0; i < nbds_max; i++) {
struct request_queue *q;
struct gendisk *disk = alloc_disk(1 << part_shift);
if (!disk)
goto out;
nbd_dev[i].disk = disk;
nbd_dev[i].tag_set.ops = &nbd_mq_ops;
nbd_dev[i].tag_set.nr_hw_queues = 1;
nbd_dev[i].tag_set.queue_depth = 128;
nbd_dev[i].tag_set.numa_node = NUMA_NO_NODE;
nbd_dev[i].tag_set.cmd_size = sizeof(struct nbd_cmd);
nbd_dev[i].tag_set.flags = BLK_MQ_F_SHOULD_MERGE |
BLK_MQ_F_SG_MERGE | BLK_MQ_F_BLOCKING;
nbd_dev[i].tag_set.driver_data = &nbd_dev[i];
err = blk_mq_alloc_tag_set(&nbd_dev[i].tag_set);
if (err) {
put_disk(disk);
goto out;
}
/*
* The new linux 2.5 block layer implementation requires
* every gendisk to have its very own request_queue struct.
* These structs are big so we dynamically allocate them.
*/
q = blk_mq_init_queue(&nbd_dev[i].tag_set);
if (IS_ERR(q)) {
blk_mq_free_tag_set(&nbd_dev[i].tag_set);
put_disk(disk);
goto out;
}
disk->queue = q;
/*
* Tell the block layer that we are not a rotational device
*/
queue_flag_set_unlocked(QUEUE_FLAG_NONROT, disk->queue);
queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, disk->queue);
disk->queue->limits.discard_granularity = 512;
blk_queue_max_discard_sectors(disk->queue, UINT_MAX);
disk->queue->limits.discard_zeroes_data = 0;
blk_queue_max_hw_sectors(disk->queue, 65536);
disk->queue->limits.max_sectors = 256;
}
if (register_blkdev(NBD_MAJOR, "nbd")) {
err = -EIO;
goto out;
}
printk(KERN_INFO "nbd: registered device at major %d\n", NBD_MAJOR);
if (register_blkdev(NBD_MAJOR, "nbd"))
return -EIO;
nbd_dbg_init();
for (i = 0; i < nbds_max; i++) {
struct gendisk *disk = nbd_dev[i].disk;
nbd_dev[i].magic = NBD_MAGIC;
mutex_init(&nbd_dev[i].config_lock);
disk->major = NBD_MAJOR;
disk->first_minor = i << part_shift;
disk->fops = &nbd_fops;
disk->private_data = &nbd_dev[i];
sprintf(disk->disk_name, "nbd%d", i);
init_waitqueue_head(&nbd_dev[i].recv_wq);
nbd_reset(&nbd_dev[i]);
add_disk(disk);
}
mutex_lock(&nbd_index_mutex);
for (i = 0; i < nbds_max; i++)
nbd_dev_add(i);
mutex_unlock(&nbd_index_mutex);
return 0;
}
static int nbd_exit_cb(int id, void *ptr, void *data)
{
struct nbd_device *nbd = ptr;
nbd_dev_remove(nbd);
return 0;
out:
while (i--) {
blk_mq_free_tag_set(&nbd_dev[i].tag_set);
blk_cleanup_queue(nbd_dev[i].disk->queue);
put_disk(nbd_dev[i].disk);
}
kfree(nbd_dev);
return err;
}
static void __exit nbd_cleanup(void)
{
int i;
nbd_dbg_close();
for (i = 0; i < nbds_max; i++) {
struct gendisk *disk = nbd_dev[i].disk;
nbd_dev[i].magic = 0;
if (disk) {
del_gendisk(disk);
blk_cleanup_queue(disk->queue);
blk_mq_free_tag_set(&nbd_dev[i].tag_set);
put_disk(disk);
}
}
idr_for_each(&nbd_index_idr, &nbd_exit_cb, NULL);
idr_destroy(&nbd_index_idr);
destroy_workqueue(recv_workqueue);
unregister_blkdev(NBD_MAJOR, "nbd");
kfree(nbd_dev);
printk(KERN_INFO "nbd: unregistered device at major %d\n", NBD_MAJOR);
}
module_init(nbd_init);


@ -420,7 +420,8 @@ static void null_lnvm_end_io(struct request *rq, int error)
{
struct nvm_rq *rqd = rq->end_io_data;
nvm_end_io(rqd, error);
rqd->error = error;
nvm_end_io(rqd);
blk_put_request(rq);
}
@ -431,11 +432,11 @@ static int null_lnvm_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd)
struct request *rq;
struct bio *bio = rqd->bio;
rq = blk_mq_alloc_request(q, bio_data_dir(bio), 0);
rq = blk_mq_alloc_request(q,
op_is_write(bio_op(bio)) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN, 0);
if (IS_ERR(rq))
return -ENOMEM;
rq->cmd_type = REQ_TYPE_DRV_PRIV;
rq->__sector = bio->bi_iter.bi_sector;
rq->ioprio = bio_prio(bio);
@ -460,7 +461,6 @@ static int null_lnvm_id(struct nvm_dev *dev, struct nvm_id *id)
id->ver_id = 0x1;
id->vmnt = 0;
id->cgrps = 1;
id->cap = 0x2;
id->dom = 0x1;
@ -479,7 +479,7 @@ static int null_lnvm_id(struct nvm_dev *dev, struct nvm_id *id)
sector_div(size, bs); /* convert size to pages */
size >>= 8; /* concert size to pgs pr blk */
grp = &id->groups[0];
grp = &id->grp;
grp->mtype = 0;
grp->fmtype = 0;
grp->num_ch = 1;


@ -308,12 +308,6 @@ static void osdblk_rq_fn(struct request_queue *q)
if (!rq)
break;
/* filter out block requests we don't understand */
if (rq->cmd_type != REQ_TYPE_FS) {
blk_end_request_all(rq, 0);
continue;
}
/* deduce our operation (read, write, flush) */
/* I wish the block layer simplified cmd_type/cmd_flags/cmd[]
* into a clearly defined set of RPC commands:


@ -25,6 +25,7 @@ config PARIDE_PD
config PARIDE_PCD
tristate "Parallel port ATAPI CD-ROMs"
depends on PARIDE
select BLK_SCSI_REQUEST # only for the generic cdrom code
---help---
This option enables the high-level driver for ATAPI CD-ROM devices
connected through a parallel port. If you chose to build PARIDE


@ -273,7 +273,7 @@ static const struct block_device_operations pcd_bdops = {
.check_events = pcd_block_check_events,
};
static struct cdrom_device_ops pcd_dops = {
static const struct cdrom_device_ops pcd_dops = {
.open = pcd_open,
.release = pcd_release,
.drive_status = pcd_drive_status,


@ -439,18 +439,16 @@ static int pd_retries = 0; /* i/o error retry count */
static int pd_block; /* address of next requested block */
static int pd_count; /* number of blocks still to do */
static int pd_run; /* sectors in current cluster */
static int pd_cmd; /* current command READ/WRITE */
static char *pd_buf; /* buffer for request in progress */
static enum action do_pd_io_start(void)
{
if (pd_req->cmd_type == REQ_TYPE_DRV_PRIV) {
switch (req_op(pd_req)) {
case REQ_OP_DRV_IN:
phase = pd_special;
return pd_special();
}
pd_cmd = rq_data_dir(pd_req);
if (pd_cmd == READ || pd_cmd == WRITE) {
case REQ_OP_READ:
case REQ_OP_WRITE:
pd_block = blk_rq_pos(pd_req);
pd_count = blk_rq_cur_sectors(pd_req);
if (pd_block + pd_count > get_capacity(pd_req->rq_disk))
@ -458,7 +456,7 @@ static enum action do_pd_io_start(void)
pd_run = blk_rq_sectors(pd_req);
pd_buf = bio_data(pd_req->bio);
pd_retries = 0;
if (pd_cmd == READ)
if (req_op(pd_req) == REQ_OP_READ)
return do_pd_read_start();
else
return do_pd_write_start();
@ -723,11 +721,10 @@ static int pd_special_command(struct pd_unit *disk,
struct request *rq;
int err = 0;
rq = blk_get_request(disk->gd->queue, READ, __GFP_RECLAIM);
rq = blk_get_request(disk->gd->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
if (IS_ERR(rq))
return PTR_ERR(rq);
rq->cmd_type = REQ_TYPE_DRV_PRIV;
rq->special = func;
err = blk_execute_rq(disk->gd->queue, disk->gd, rq, 0);


@ -704,10 +704,10 @@ static int pkt_generic_packet(struct pktcdvd_device *pd, struct packet_command *
int ret = 0;
rq = blk_get_request(q, (cgc->data_direction == CGC_DATA_WRITE) ?
WRITE : READ, __GFP_RECLAIM);
REQ_OP_SCSI_OUT : REQ_OP_SCSI_IN, __GFP_RECLAIM);
if (IS_ERR(rq))
return PTR_ERR(rq);
blk_rq_set_block_pc(rq);
scsi_req_init(rq);
if (cgc->buflen) {
ret = blk_rq_map_kern(q, rq, cgc->buffer, cgc->buflen,
@ -716,8 +716,8 @@ static int pkt_generic_packet(struct pktcdvd_device *pd, struct packet_command *
goto out;
}
rq->cmd_len = COMMAND_SIZE(cgc->cmd[0]);
memcpy(rq->cmd, cgc->cmd, CDROM_PACKET_SIZE);
scsi_req(rq)->cmd_len = COMMAND_SIZE(cgc->cmd[0]);
memcpy(scsi_req(rq)->cmd, cgc->cmd, CDROM_PACKET_SIZE);
rq->timeout = 60*HZ;
if (cgc->quiet)
@ -1243,7 +1243,7 @@ try_next_bio:
&& pd->bio_queue_size <= pd->write_congestion_off);
spin_unlock(&pd->lock);
if (wakeup) {
clear_bdi_congested(&pd->disk->queue->backing_dev_info,
clear_bdi_congested(pd->disk->queue->backing_dev_info,
BLK_RW_ASYNC);
}
@ -2370,7 +2370,7 @@ static void pkt_make_request_write(struct request_queue *q, struct bio *bio)
spin_lock(&pd->lock);
if (pd->write_congestion_on > 0
&& pd->bio_queue_size >= pd->write_congestion_on) {
set_bdi_congested(&q->backing_dev_info, BLK_RW_ASYNC);
set_bdi_congested(q->backing_dev_info, BLK_RW_ASYNC);
do {
spin_unlock(&pd->lock);
congestion_wait(BLK_RW_ASYNC, HZ);


@ -196,16 +196,19 @@ static void ps3disk_do_request(struct ps3_storage_device *dev,
dev_dbg(&dev->sbd.core, "%s:%u\n", __func__, __LINE__);
while ((req = blk_fetch_request(q))) {
if (req_op(req) == REQ_OP_FLUSH) {
switch (req_op(req)) {
case REQ_OP_FLUSH:
if (ps3disk_submit_flush_request(dev, req))
break;
} else if (req->cmd_type == REQ_TYPE_FS) {
return;
break;
case REQ_OP_READ:
case REQ_OP_WRITE:
if (ps3disk_submit_request_sg(dev, req))
break;
} else {
return;
break;
default:
blk_dump_rq_flags(req, DEVICE_NAME " bad request");
__blk_end_request_all(req, -EIO);
continue;
}
}
}


@ -4099,20 +4099,22 @@ static void rbd_queue_workfn(struct work_struct *work)
bool must_be_locked;
int result;
if (rq->cmd_type != REQ_TYPE_FS) {
dout("%s: non-fs request type %d\n", __func__,
(int) rq->cmd_type);
switch (req_op(rq)) {
case REQ_OP_DISCARD:
op_type = OBJ_OP_DISCARD;
break;
case REQ_OP_WRITE:
op_type = OBJ_OP_WRITE;
break;
case REQ_OP_READ:
op_type = OBJ_OP_READ;
break;
default:
dout("%s: non-fs request type %d\n", __func__, req_op(rq));
result = -EIO;
goto err;
}
if (req_op(rq) == REQ_OP_DISCARD)
op_type = OBJ_OP_DISCARD;
else if (req_op(rq) == REQ_OP_WRITE)
op_type = OBJ_OP_WRITE;
else
op_type = OBJ_OP_READ;
/* Ignore/skip any zero-length requests */
if (!length) {
@ -4524,7 +4526,7 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
q->limits.discard_zeroes_data = 1;
if (!ceph_test_opt(rbd_dev->rbd_client->client, NOCRC))
q->backing_dev_info.capabilities |= BDI_CAP_STABLE_WRITES;
q->backing_dev_info->capabilities |= BDI_CAP_STABLE_WRITES;
disk->queue = q;


@ -1204,10 +1204,11 @@ static void skd_complete_special(struct skd_device *skdev,
static int skd_bdev_ioctl(struct block_device *bdev, fmode_t mode,
uint cmd_in, ulong arg)
{
int rc = 0;
static const int sg_version_num = 30527;
int rc = 0, timeout;
struct gendisk *disk = bdev->bd_disk;
struct skd_device *skdev = disk->private_data;
void __user *p = (void *)arg;
int __user *p = (int __user *)arg;
pr_debug("%s:%s:%d %s: CMD[%s] ioctl mode 0x%x, cmd 0x%x arg %0lx\n",
skdev->name, __func__, __LINE__,
@ -1218,12 +1219,18 @@ static int skd_bdev_ioctl(struct block_device *bdev, fmode_t mode,
switch (cmd_in) {
case SG_SET_TIMEOUT:
rc = get_user(timeout, p);
if (!rc)
disk->queue->sg_timeout = clock_t_to_jiffies(timeout);
break;
case SG_GET_TIMEOUT:
rc = jiffies_to_clock_t(disk->queue->sg_timeout);
break;
case SG_GET_VERSION_NUM:
rc = scsi_cmd_ioctl(disk->queue, disk, mode, cmd_in, p);
rc = put_user(sg_version_num, p);
break;
case SG_IO:
rc = skd_ioctl_sg_io(skdev, mode, p);
rc = skd_ioctl_sg_io(skdev, mode, (void __user *)arg);
break;
default:


@ -567,7 +567,7 @@ static struct carm_request *carm_get_special(struct carm_host *host)
if (!crq)
return NULL;
rq = blk_get_request(host->oob_q, WRITE /* bogus */, GFP_KERNEL);
rq = blk_get_request(host->oob_q, REQ_OP_DRV_OUT, GFP_KERNEL);
if (IS_ERR(rq)) {
spin_lock_irqsave(&host->lock, flags);
carm_put_request(host, crq);
@ -620,7 +620,6 @@ static int carm_array_info (struct carm_host *host, unsigned int array_idx)
spin_unlock_irq(&host->lock);
DPRINTK("blk_execute_rq_nowait, tag == %u\n", idx);
crq->rq->cmd_type = REQ_TYPE_DRV_PRIV;
crq->rq->special = crq;
blk_execute_rq_nowait(host->oob_q, NULL, crq->rq, true, NULL);
@ -661,7 +660,6 @@ static int carm_send_special (struct carm_host *host, carm_sspc_t func)
crq->msg_bucket = (u32) rc;
DPRINTK("blk_execute_rq_nowait, tag == %u\n", idx);
crq->rq->cmd_type = REQ_TYPE_DRV_PRIV;
crq->rq->special = crq;
blk_execute_rq_nowait(host->oob_q, NULL, crq->rq, true, NULL);


@ -52,11 +52,13 @@ struct virtio_blk {
};
struct virtblk_req {
struct request *req;
struct virtio_blk_outhdr out_hdr;
struct virtio_scsi_inhdr in_hdr;
u8 status;
#ifdef CONFIG_VIRTIO_BLK_SCSI
struct scsi_request sreq; /* for SCSI passthrough, must be first */
u8 sense[SCSI_SENSE_BUFFERSIZE];
struct virtio_scsi_inhdr in_hdr;
#endif
struct virtio_blk_outhdr out_hdr;
u8 status;
struct scatterlist sg[];
};
@ -72,28 +74,23 @@ static inline int virtblk_result(struct virtblk_req *vbr)
}
}
static int __virtblk_add_req(struct virtqueue *vq,
struct virtblk_req *vbr,
struct scatterlist *data_sg,
bool have_data)
/*
* If this is a packet command we need a couple of additional headers. Behind
* the normal outhdr we put a segment with the scsi command block, and before
* the normal inhdr we put the sense data and the inhdr with additional status
* information.
*/
#ifdef CONFIG_VIRTIO_BLK_SCSI
static int virtblk_add_req_scsi(struct virtqueue *vq, struct virtblk_req *vbr,
struct scatterlist *data_sg, bool have_data)
{
struct scatterlist hdr, status, cmd, sense, inhdr, *sgs[6];
unsigned int num_out = 0, num_in = 0;
__virtio32 type = vbr->out_hdr.type & ~cpu_to_virtio32(vq->vdev, VIRTIO_BLK_T_OUT);
sg_init_one(&hdr, &vbr->out_hdr, sizeof(vbr->out_hdr));
sgs[num_out++] = &hdr;
/*
* If this is a packet command we need a couple of additional headers.
* Behind the normal outhdr we put a segment with the scsi command
* block, and before the normal inhdr we put the sense data and the
* inhdr with additional status information.
*/
if (type == cpu_to_virtio32(vq->vdev, VIRTIO_BLK_T_SCSI_CMD)) {
sg_init_one(&cmd, vbr->req->cmd, vbr->req->cmd_len);
sgs[num_out++] = &cmd;
}
sg_init_one(&cmd, vbr->sreq.cmd, vbr->sreq.cmd_len);
sgs[num_out++] = &cmd;
if (have_data) {
if (vbr->out_hdr.type & cpu_to_virtio32(vq->vdev, VIRTIO_BLK_T_OUT))
@ -102,12 +99,69 @@ static int __virtblk_add_req(struct virtqueue *vq,
sgs[num_out + num_in++] = data_sg;
}
if (type == cpu_to_virtio32(vq->vdev, VIRTIO_BLK_T_SCSI_CMD)) {
memcpy(vbr->sense, vbr->req->sense, SCSI_SENSE_BUFFERSIZE);
sg_init_one(&sense, vbr->sense, SCSI_SENSE_BUFFERSIZE);
sgs[num_out + num_in++] = &sense;
sg_init_one(&inhdr, &vbr->in_hdr, sizeof(vbr->in_hdr));
sgs[num_out + num_in++] = &inhdr;
sg_init_one(&sense, vbr->sense, SCSI_SENSE_BUFFERSIZE);
sgs[num_out + num_in++] = &sense;
sg_init_one(&inhdr, &vbr->in_hdr, sizeof(vbr->in_hdr));
sgs[num_out + num_in++] = &inhdr;
sg_init_one(&status, &vbr->status, sizeof(vbr->status));
sgs[num_out + num_in++] = &status;
return virtqueue_add_sgs(vq, sgs, num_out, num_in, vbr, GFP_ATOMIC);
}
static inline void virtblk_scsi_reques_done(struct request *req)
{
struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
struct virtio_blk *vblk = req->q->queuedata;
struct scsi_request *sreq = &vbr->sreq;
sreq->resid_len = virtio32_to_cpu(vblk->vdev, vbr->in_hdr.residual);
sreq->sense_len = virtio32_to_cpu(vblk->vdev, vbr->in_hdr.sense_len);
req->errors = virtio32_to_cpu(vblk->vdev, vbr->in_hdr.errors);
}
static int virtblk_ioctl(struct block_device *bdev, fmode_t mode,
unsigned int cmd, unsigned long data)
{
struct gendisk *disk = bdev->bd_disk;
struct virtio_blk *vblk = disk->private_data;
/*
* Only allow the generic SCSI ioctls if the host can support it.
*/
if (!virtio_has_feature(vblk->vdev, VIRTIO_BLK_F_SCSI))
return -ENOTTY;
return scsi_cmd_blk_ioctl(bdev, mode, cmd,
(void __user *)data);
}
#else
static inline int virtblk_add_req_scsi(struct virtqueue *vq,
struct virtblk_req *vbr, struct scatterlist *data_sg,
bool have_data)
{
return -EIO;
}
static inline void virtblk_scsi_reques_done(struct request *req)
{
}
#define virtblk_ioctl NULL
#endif /* CONFIG_VIRTIO_BLK_SCSI */
static int virtblk_add_req(struct virtqueue *vq, struct virtblk_req *vbr,
struct scatterlist *data_sg, bool have_data)
{
struct scatterlist hdr, status, *sgs[3];
unsigned int num_out = 0, num_in = 0;
sg_init_one(&hdr, &vbr->out_hdr, sizeof(vbr->out_hdr));
sgs[num_out++] = &hdr;
if (have_data) {
if (vbr->out_hdr.type & cpu_to_virtio32(vq->vdev, VIRTIO_BLK_T_OUT))
sgs[num_out++] = data_sg;
else
sgs[num_out + num_in++] = data_sg;
}
sg_init_one(&status, &vbr->status, sizeof(vbr->status));
@ -119,15 +173,16 @@ static int __virtblk_add_req(struct virtqueue *vq,
static inline void virtblk_request_done(struct request *req)
{
struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
struct virtio_blk *vblk = req->q->queuedata;
int error = virtblk_result(vbr);
if (req->cmd_type == REQ_TYPE_BLOCK_PC) {
req->resid_len = virtio32_to_cpu(vblk->vdev, vbr->in_hdr.residual);
req->sense_len = virtio32_to_cpu(vblk->vdev, vbr->in_hdr.sense_len);
req->errors = virtio32_to_cpu(vblk->vdev, vbr->in_hdr.errors);
} else if (req->cmd_type == REQ_TYPE_DRV_PRIV) {
switch (req_op(req)) {
case REQ_OP_SCSI_IN:
case REQ_OP_SCSI_OUT:
virtblk_scsi_reques_done(req);
break;
case REQ_OP_DRV_IN:
req->errors = (error != 0);
break;
}
blk_mq_end_request(req, error);
@ -146,7 +201,9 @@ static void virtblk_done(struct virtqueue *vq)
do {
virtqueue_disable_cb(vq);
while ((vbr = virtqueue_get_buf(vblk->vqs[qid].vq, &len)) != NULL) {
blk_mq_complete_request(vbr->req, vbr->req->errors);
struct request *req = blk_mq_rq_from_pdu(vbr);
blk_mq_complete_request(req, req->errors);
req_done = true;
}
if (unlikely(virtqueue_is_broken(vq)))
@ -170,49 +227,50 @@ static int virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
int qid = hctx->queue_num;
int err;
bool notify = false;
u32 type;
BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
vbr->req = req;
if (req_op(req) == REQ_OP_FLUSH) {
vbr->out_hdr.type = cpu_to_virtio32(vblk->vdev, VIRTIO_BLK_T_FLUSH);
vbr->out_hdr.sector = 0;
vbr->out_hdr.ioprio = cpu_to_virtio32(vblk->vdev, req_get_ioprio(vbr->req));
} else {
switch (req->cmd_type) {
case REQ_TYPE_FS:
vbr->out_hdr.type = 0;
vbr->out_hdr.sector = cpu_to_virtio64(vblk->vdev, blk_rq_pos(vbr->req));
vbr->out_hdr.ioprio = cpu_to_virtio32(vblk->vdev, req_get_ioprio(vbr->req));
break;
case REQ_TYPE_BLOCK_PC:
vbr->out_hdr.type = cpu_to_virtio32(vblk->vdev, VIRTIO_BLK_T_SCSI_CMD);
vbr->out_hdr.sector = 0;
vbr->out_hdr.ioprio = cpu_to_virtio32(vblk->vdev, req_get_ioprio(vbr->req));
break;
case REQ_TYPE_DRV_PRIV:
vbr->out_hdr.type = cpu_to_virtio32(vblk->vdev, VIRTIO_BLK_T_GET_ID);
vbr->out_hdr.sector = 0;
vbr->out_hdr.ioprio = cpu_to_virtio32(vblk->vdev, req_get_ioprio(vbr->req));
break;
default:
/* We don't put anything else in the queue. */
BUG();
}
switch (req_op(req)) {
case REQ_OP_READ:
case REQ_OP_WRITE:
type = 0;
break;
case REQ_OP_FLUSH:
type = VIRTIO_BLK_T_FLUSH;
break;
case REQ_OP_SCSI_IN:
case REQ_OP_SCSI_OUT:
type = VIRTIO_BLK_T_SCSI_CMD;
break;
case REQ_OP_DRV_IN:
type = VIRTIO_BLK_T_GET_ID;
break;
default:
WARN_ON_ONCE(1);
return BLK_MQ_RQ_QUEUE_ERROR;
}
vbr->out_hdr.type = cpu_to_virtio32(vblk->vdev, type);
vbr->out_hdr.sector = type ?
0 : cpu_to_virtio64(vblk->vdev, blk_rq_pos(req));
vbr->out_hdr.ioprio = cpu_to_virtio32(vblk->vdev, req_get_ioprio(req));
blk_mq_start_request(req);
num = blk_rq_map_sg(hctx->queue, vbr->req, vbr->sg);
num = blk_rq_map_sg(hctx->queue, req, vbr->sg);
if (num) {
if (rq_data_dir(vbr->req) == WRITE)
if (rq_data_dir(req) == WRITE)
vbr->out_hdr.type |= cpu_to_virtio32(vblk->vdev, VIRTIO_BLK_T_OUT);
else
vbr->out_hdr.type |= cpu_to_virtio32(vblk->vdev, VIRTIO_BLK_T_IN);
}
spin_lock_irqsave(&vblk->vqs[qid].lock, flags);
err = __virtblk_add_req(vblk->vqs[qid].vq, vbr, vbr->sg, num);
if (req_op(req) == REQ_OP_SCSI_IN || req_op(req) == REQ_OP_SCSI_OUT)
err = virtblk_add_req_scsi(vblk->vqs[qid].vq, vbr, vbr->sg, num);
else
err = virtblk_add_req(vblk->vqs[qid].vq, vbr, vbr->sg, num);
if (err) {
virtqueue_kick(vblk->vqs[qid].vq);
blk_mq_stop_hw_queue(hctx);
@ -242,10 +300,9 @@ static int virtblk_get_id(struct gendisk *disk, char *id_str)
struct request *req;
int err;
req = blk_get_request(q, READ, GFP_KERNEL);
req = blk_get_request(q, REQ_OP_DRV_IN, GFP_KERNEL);
if (IS_ERR(req))
return PTR_ERR(req);
req->cmd_type = REQ_TYPE_DRV_PRIV;
err = blk_rq_map_kern(q, req, id_str, VIRTIO_BLK_ID_BYTES, GFP_KERNEL);
if (err)
@ -257,22 +314,6 @@ out:
return err;
}
static int virtblk_ioctl(struct block_device *bdev, fmode_t mode,
unsigned int cmd, unsigned long data)
{
struct gendisk *disk = bdev->bd_disk;
struct virtio_blk *vblk = disk->private_data;
/*
* Only allow the generic SCSI ioctls if the host can support it.
*/
if (!virtio_has_feature(vblk->vdev, VIRTIO_BLK_F_SCSI))
return -ENOTTY;
return scsi_cmd_blk_ioctl(bdev, mode, cmd,
(void __user *)data);
}
/* We provide getgeo only to please some old bootloader/partitioning tools */
static int virtblk_getgeo(struct block_device *bd, struct hd_geometry *geo)
{
@ -538,6 +579,9 @@ static int virtblk_init_request(void *data, struct request *rq,
struct virtio_blk *vblk = data;
struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
#ifdef CONFIG_VIRTIO_BLK_SCSI
vbr->sreq.sense = vbr->sense;
#endif
sg_init_table(vbr->sg, vblk->sg_elems);
return 0;
}
@ -821,7 +865,10 @@ static const struct virtio_device_id id_table[] = {
static unsigned int features_legacy[] = {
VIRTIO_BLK_F_SEG_MAX, VIRTIO_BLK_F_SIZE_MAX, VIRTIO_BLK_F_GEOMETRY,
VIRTIO_BLK_F_RO, VIRTIO_BLK_F_BLK_SIZE, VIRTIO_BLK_F_SCSI,
VIRTIO_BLK_F_RO, VIRTIO_BLK_F_BLK_SIZE,
#ifdef CONFIG_VIRTIO_BLK_SCSI
VIRTIO_BLK_F_SCSI,
#endif
VIRTIO_BLK_F_FLUSH, VIRTIO_BLK_F_TOPOLOGY, VIRTIO_BLK_F_CONFIG_WCE,
VIRTIO_BLK_F_MQ,
}


@ -865,7 +865,7 @@ static inline void flush_requests(struct blkfront_ring_info *rinfo)
static inline bool blkif_request_flush_invalid(struct request *req,
struct blkfront_info *info)
{
return ((req->cmd_type != REQ_TYPE_FS) ||
return (blk_rq_is_passthrough(req) ||
((req_op(req) == REQ_OP_FLUSH) &&
!info->feature_flush) ||
((req->cmd_flags & REQ_FUA) &&


@ -468,7 +468,7 @@ static struct request *ace_get_next_request(struct request_queue *q)
struct request *req;
while ((req = blk_peek_request(q)) != NULL) {
if (req->cmd_type == REQ_TYPE_FS)
if (!blk_rq_is_passthrough(req))
break;
blk_start_request(req);
__blk_end_request_all(req, -EIO);


@ -117,7 +117,7 @@ static void zram_revalidate_disk(struct zram *zram)
{
revalidate_disk(zram->disk);
/* revalidate_disk reset the BDI_CAP_STABLE_WRITES so set again */
zram->disk->queue->backing_dev_info.capabilities |=
zram->disk->queue->backing_dev_info->capabilities |=
BDI_CAP_STABLE_WRITES;
}


@ -281,8 +281,8 @@
#include <linux/fcntl.h>
#include <linux/blkdev.h>
#include <linux/times.h>
#include <linux/uaccess.h>
#include <scsi/scsi_request.h>
/* used to tell the module to turn on full debugging messages */
static bool debug;
@ -342,8 +342,8 @@ static void cdrom_sysctl_register(void);
static LIST_HEAD(cdrom_list);
static int cdrom_dummy_generic_packet(struct cdrom_device_info *cdi,
struct packet_command *cgc)
int cdrom_dummy_generic_packet(struct cdrom_device_info *cdi,
struct packet_command *cgc)
{
if (cgc->sense) {
cgc->sense->sense_key = 0x05;
@ -354,6 +354,7 @@ static int cdrom_dummy_generic_packet(struct cdrom_device_info *cdi,
cgc->stat = -EIO;
return -EIO;
}
EXPORT_SYMBOL(cdrom_dummy_generic_packet);
static int cdrom_flush_cache(struct cdrom_device_info *cdi)
{
@ -371,7 +372,7 @@ static int cdrom_flush_cache(struct cdrom_device_info *cdi)
static int cdrom_get_disc_info(struct cdrom_device_info *cdi,
disc_information *di)
{
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
struct packet_command cgc;
int ret, buflen;
@ -586,7 +587,7 @@ static int cdrom_mrw_set_lba_space(struct cdrom_device_info *cdi, int space)
int register_cdrom(struct cdrom_device_info *cdi)
{
static char banner_printed;
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
int *change_capability = (int *)&cdo->capability; /* hack */
cd_dbg(CD_OPEN, "entering register_cdrom\n");
@ -610,7 +611,6 @@ int register_cdrom(struct cdrom_device_info *cdi)
ENSURE(reset, CDC_RESET);
ENSURE(generic_packet, CDC_GENERIC_PACKET);
cdi->mc_flags = 0;
cdo->n_minors = 0;
cdi->options = CDO_USE_FFLAGS;
if (autoclose == 1 && CDROM_CAN(CDC_CLOSE_TRAY))
@ -630,8 +630,7 @@ int register_cdrom(struct cdrom_device_info *cdi)
else
cdi->cdda_method = CDDA_OLD;
if (!cdo->generic_packet)
cdo->generic_packet = cdrom_dummy_generic_packet;
WARN_ON(!cdo->generic_packet);
cd_dbg(CD_REG_UNREG, "drive \"/dev/%s\" registered\n", cdi->name);
mutex_lock(&cdrom_mutex);
@ -652,7 +651,6 @@ void unregister_cdrom(struct cdrom_device_info *cdi)
if (cdi->exit)
cdi->exit(cdi);
cdi->ops->n_minors--;
cd_dbg(CD_REG_UNREG, "drive \"/dev/%s\" unregistered\n", cdi->name);
}
@ -1036,7 +1034,7 @@ static
int open_for_data(struct cdrom_device_info *cdi)
{
int ret;
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
tracktype tracks;
cd_dbg(CD_OPEN, "entering open_for_data\n");
/* Check if the driver can report drive status. If it can, we
@ -1198,8 +1196,8 @@ err:
/* This code is similar to that in open_for_data. The routine is called
whenever an audio play operation is requested.
*/
static int check_for_audio_disc(struct cdrom_device_info * cdi,
struct cdrom_device_ops * cdo)
static int check_for_audio_disc(struct cdrom_device_info *cdi,
const struct cdrom_device_ops *cdo)
{
int ret;
tracktype tracks;
@ -1254,7 +1252,7 @@ static int check_for_audio_disc(struct cdrom_device_info * cdi,
void cdrom_release(struct cdrom_device_info *cdi, fmode_t mode)
{
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
int opened_for_data;
cd_dbg(CD_CLOSE, "entering cdrom_release\n");
@ -1294,7 +1292,7 @@ static int cdrom_read_mech_status(struct cdrom_device_info *cdi,
struct cdrom_changer_info *buf)
{
struct packet_command cgc;
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
int length;
/*
@ -1643,7 +1641,7 @@ static int dvd_do_auth(struct cdrom_device_info *cdi, dvd_authinfo *ai)
int ret;
u_char buf[20];
struct packet_command cgc;
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
rpc_state_t rpc_state;
memset(buf, 0, sizeof(buf));
@ -1791,7 +1789,7 @@ static int dvd_read_physical(struct cdrom_device_info *cdi, dvd_struct *s,
{
unsigned char buf[21], *base;
struct dvd_layer *layer;
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
int ret, layer_num = s->physical.layer_num;
if (layer_num >= DVD_LAYERS)
@ -1842,7 +1840,7 @@ static int dvd_read_copyright(struct cdrom_device_info *cdi, dvd_struct *s,
{
int ret;
u_char buf[8];
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
init_cdrom_command(cgc, buf, sizeof(buf), CGC_DATA_READ);
cgc->cmd[0] = GPCMD_READ_DVD_STRUCTURE;
@ -1866,7 +1864,7 @@ static int dvd_read_disckey(struct cdrom_device_info *cdi, dvd_struct *s,
{
int ret, size;
u_char *buf;
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
size = sizeof(s->disckey.value) + 4;
@ -1894,7 +1892,7 @@ static int dvd_read_bca(struct cdrom_device_info *cdi, dvd_struct *s,
{
int ret, size = 4 + 188;
u_char *buf;
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
buf = kmalloc(size, GFP_KERNEL);
if (!buf)
@ -1928,7 +1926,7 @@ static int dvd_read_manufact(struct cdrom_device_info *cdi, dvd_struct *s,
{
int ret = 0, size;
u_char *buf;
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
size = sizeof(s->manufact.value) + 4;
@ -1995,7 +1993,7 @@ int cdrom_mode_sense(struct cdrom_device_info *cdi,
struct packet_command *cgc,
int page_code, int page_control)
{
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
memset(cgc->cmd, 0, sizeof(cgc->cmd));
@ -2010,7 +2008,7 @@ int cdrom_mode_sense(struct cdrom_device_info *cdi,
int cdrom_mode_select(struct cdrom_device_info *cdi,
struct packet_command *cgc)
{
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
memset(cgc->cmd, 0, sizeof(cgc->cmd));
memset(cgc->buffer, 0, 2);
@ -2025,7 +2023,7 @@ int cdrom_mode_select(struct cdrom_device_info *cdi,
static int cdrom_read_subchannel(struct cdrom_device_info *cdi,
struct cdrom_subchnl *subchnl, int mcn)
{
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
struct packet_command cgc;
char buffer[32];
int ret;
@ -2073,7 +2071,7 @@ static int cdrom_read_cd(struct cdrom_device_info *cdi,
struct packet_command *cgc, int lba,
int blocksize, int nblocks)
{
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
memset(&cgc->cmd, 0, sizeof(cgc->cmd));
cgc->cmd[0] = GPCMD_READ_10;
@ -2093,7 +2091,7 @@ static int cdrom_read_block(struct cdrom_device_info *cdi,
struct packet_command *cgc,
int lba, int nblocks, int format, int blksize)
{
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
memset(&cgc->cmd, 0, sizeof(cgc->cmd));
cgc->cmd[0] = GPCMD_READ_CD;
@ -2172,6 +2170,7 @@ static int cdrom_read_cdda_bpc(struct cdrom_device_info *cdi, __u8 __user *ubuf,
{
struct request_queue *q = cdi->disk->queue;
struct request *rq;
struct scsi_request *req;
struct bio *bio;
unsigned int len;
int nr, ret = 0;
@ -2190,12 +2189,13 @@ static int cdrom_read_cdda_bpc(struct cdrom_device_info *cdi, __u8 __user *ubuf,
len = nr * CD_FRAMESIZE_RAW;
rq = blk_get_request(q, READ, GFP_KERNEL);
rq = blk_get_request(q, REQ_OP_SCSI_IN, GFP_KERNEL);
if (IS_ERR(rq)) {
ret = PTR_ERR(rq);
break;
}
blk_rq_set_block_pc(rq);
req = scsi_req(rq);
scsi_req_init(rq);
ret = blk_rq_map_user(q, rq, NULL, ubuf, len, GFP_KERNEL);
if (ret) {
@ -2203,23 +2203,23 @@ static int cdrom_read_cdda_bpc(struct cdrom_device_info *cdi, __u8 __user *ubuf,
break;
}
rq->cmd[0] = GPCMD_READ_CD;
rq->cmd[1] = 1 << 2;
rq->cmd[2] = (lba >> 24) & 0xff;
rq->cmd[3] = (lba >> 16) & 0xff;
rq->cmd[4] = (lba >> 8) & 0xff;
rq->cmd[5] = lba & 0xff;
rq->cmd[6] = (nr >> 16) & 0xff;
rq->cmd[7] = (nr >> 8) & 0xff;
rq->cmd[8] = nr & 0xff;
rq->cmd[9] = 0xf8;
req->cmd[0] = GPCMD_READ_CD;
req->cmd[1] = 1 << 2;
req->cmd[2] = (lba >> 24) & 0xff;
req->cmd[3] = (lba >> 16) & 0xff;
req->cmd[4] = (lba >> 8) & 0xff;
req->cmd[5] = lba & 0xff;
req->cmd[6] = (nr >> 16) & 0xff;
req->cmd[7] = (nr >> 8) & 0xff;
req->cmd[8] = nr & 0xff;
req->cmd[9] = 0xf8;
rq->cmd_len = 12;
req->cmd_len = 12;
rq->timeout = 60 * HZ;
bio = rq->bio;
if (blk_execute_rq(q, cdi->disk, rq, 0)) {
struct request_sense *s = rq->sense;
struct request_sense *s = req->sense;
ret = -EIO;
cdi->last_sense = s->sense_key;
}
@ -2764,7 +2764,7 @@ static int cdrom_ioctl_audioctl(struct cdrom_device_info *cdi,
*/
static int cdrom_switch_blocksize(struct cdrom_device_info *cdi, int size)
{
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
struct packet_command cgc;
struct modesel_head mh;
@ -2790,7 +2790,7 @@ static int cdrom_switch_blocksize(struct cdrom_device_info *cdi, int size)
static int cdrom_get_track_info(struct cdrom_device_info *cdi,
__u16 track, __u8 type, track_information *ti)
{
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
struct packet_command cgc;
int ret, buflen;
@ -3049,7 +3049,7 @@ static noinline int mmc_ioctl_cdrom_play_msf(struct cdrom_device_info *cdi,
void __user *arg,
struct packet_command *cgc)
{
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
struct cdrom_msf msf;
cd_dbg(CD_DO_IOCTL, "entering CDROMPLAYMSF\n");
if (copy_from_user(&msf, (struct cdrom_msf __user *)arg, sizeof(msf)))
@ -3069,7 +3069,7 @@ static noinline int mmc_ioctl_cdrom_play_blk(struct cdrom_device_info *cdi,
void __user *arg,
struct packet_command *cgc)
{
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
struct cdrom_blk blk;
cd_dbg(CD_DO_IOCTL, "entering CDROMPLAYBLK\n");
if (copy_from_user(&blk, (struct cdrom_blk __user *)arg, sizeof(blk)))
@ -3164,7 +3164,7 @@ static noinline int mmc_ioctl_cdrom_start_stop(struct cdrom_device_info *cdi,
struct packet_command *cgc,
int cmd)
{
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
cd_dbg(CD_DO_IOCTL, "entering CDROMSTART/CDROMSTOP\n");
cgc->cmd[0] = GPCMD_START_STOP_UNIT;
cgc->cmd[1] = 1;
@ -3177,7 +3177,7 @@ static noinline int mmc_ioctl_cdrom_pause_resume(struct cdrom_device_info *cdi,
struct packet_command *cgc,
int cmd)
{
struct cdrom_device_ops *cdo = cdi->ops;
const struct cdrom_device_ops *cdo = cdi->ops;
cd_dbg(CD_DO_IOCTL, "entering CDROMPAUSE/CDROMRESUME\n");
cgc->cmd[0] = GPCMD_PAUSE_RESUME;
cgc->cmd[8] = (cmd == CDROMRESUME) ? 1 : 0;


@ -481,7 +481,7 @@ static int gdrom_audio_ioctl(struct cdrom_device_info *cdi, unsigned int cmd,
return -EINVAL;
}
static struct cdrom_device_ops gdrom_ops = {
static const struct cdrom_device_ops gdrom_ops = {
.open = gdrom_open,
.release = gdrom_release,
.drive_status = gdrom_drivestatus,
@ -489,9 +489,9 @@ static struct cdrom_device_ops gdrom_ops = {
.get_last_session = gdrom_get_last_session,
.reset = gdrom_hardreset,
.audio_ioctl = gdrom_audio_ioctl,
.generic_packet = cdrom_dummy_generic_packet,
.capability = CDC_MULTI_SESSION | CDC_MEDIA_CHANGED |
CDC_RESET | CDC_DRIVE_STATUS | CDC_CD_R,
.n_minors = 1,
};
static int gdrom_bdops_open(struct block_device *bdev, fmode_t mode)
@ -659,23 +659,24 @@ static void gdrom_request(struct request_queue *rq)
struct request *req;
while ((req = blk_fetch_request(rq)) != NULL) {
if (req->cmd_type != REQ_TYPE_FS) {
printk(KERN_DEBUG "gdrom: Non-fs request ignored\n");
__blk_end_request_all(req, -EIO);
continue;
}
if (rq_data_dir(req) != READ) {
switch (req_op(req)) {
case REQ_OP_READ:
/*
* Add to list of deferred work and then schedule
* workqueue.
*/
list_add_tail(&req->queuelist, &gdrom_deferred);
schedule_work(&work);
break;
case REQ_OP_WRITE:
pr_notice("Read only device - write request ignored\n");
__blk_end_request_all(req, -EIO);
continue;
break;
default:
printk(KERN_DEBUG "gdrom: Non-fs request ignored\n");
__blk_end_request_all(req, -EIO);
break;
}
/*
* Add to list of deferred work and then schedule
* workqueue.
*/
list_add_tail(&req->queuelist, &gdrom_deferred);
schedule_work(&work);
}
}
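Taken together, the old cmd_type / rq_data_dir() tests collapse into a single switch on req_op(). A minimal sketch of that shape for a read-only legacy request_fn driver follows (the mycd_* names are illustrative; gdrom itself defers reads to a workqueue as shown above):

#include <linux/blkdev.h>
#include <linux/printk.h>

/* Illustrative request_fn for a read-only device, dispatching on req_op(). */
static void mycd_request(struct request_queue *q)
{
    struct request *req;

    while ((req = blk_fetch_request(q)) != NULL) {
        switch (req_op(req)) {
        case REQ_OP_READ:
            /* a real driver transfers data here (gdrom defers to a workqueue) */
            __blk_end_request_all(req, 0);
            break;
        case REQ_OP_WRITE:
            pr_notice("mycd: read-only device, write request ignored\n");
            __blk_end_request_all(req, -EIO);
            break;
        default:
            /* passthrough and driver-private requests are not supported */
            __blk_end_request_all(req, -EIO);
            break;
        }
    }
}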
@ -807,16 +808,20 @@ static int probe_gdrom(struct platform_device *devptr)
if (err)
goto probe_fail_cmdirq_register;
gd.gdrom_rq = blk_init_queue(gdrom_request, &gdrom_lock);
if (!gd.gdrom_rq)
if (!gd.gdrom_rq) {
err = -ENOMEM;
goto probe_fail_requestq;
}
err = probe_gdrom_setupqueue();
if (err)
goto probe_fail_toc;
gd.toc = kzalloc(sizeof(struct gdromtoc), GFP_KERNEL);
if (!gd.toc)
if (!gd.toc) {
err = -ENOMEM;
goto probe_fail_toc;
}
add_disk(gd.disk);
return 0;


@ -10,6 +10,7 @@ menuconfig IDE
tristate "ATA/ATAPI/MFM/RLL support (DEPRECATED)"
depends on HAVE_IDE
depends on BLOCK
select BLK_SCSI_REQUEST
---help---
If you say Y here, your kernel will be able to manage ATA/(E)IDE and
ATAPI units. The most common cases are IDE hard drives and ATAPI


@ -92,8 +92,9 @@ int ide_queue_pc_tail(ide_drive_t *drive, struct gendisk *disk,
struct request *rq;
int error;
rq = blk_get_request(drive->queue, READ, __GFP_RECLAIM);
rq->cmd_type = REQ_TYPE_DRV_PRIV;
rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
scsi_req_init(rq);
ide_req(rq)->type = ATA_PRIV_MISC;
rq->special = (char *)pc;
if (buf && bufflen) {
@ -103,9 +104,9 @@ int ide_queue_pc_tail(ide_drive_t *drive, struct gendisk *disk,
goto put_req;
}
memcpy(rq->cmd, pc->c, 12);
memcpy(scsi_req(rq)->cmd, pc->c, 12);
if (drive->media == ide_tape)
rq->cmd[13] = REQ_IDETAPE_PC1;
scsi_req(rq)->cmd[13] = REQ_IDETAPE_PC1;
error = blk_execute_rq(drive->queue, disk, rq, 0);
put_req:
blk_put_request(rq);
@ -171,7 +172,8 @@ EXPORT_SYMBOL_GPL(ide_create_request_sense_cmd);
void ide_prep_sense(ide_drive_t *drive, struct request *rq)
{
struct request_sense *sense = &drive->sense_data;
struct request *sense_rq = &drive->sense_rq;
struct request *sense_rq = drive->sense_rq;
struct scsi_request *req = scsi_req(sense_rq);
unsigned int cmd_len, sense_len;
int err;
@ -191,12 +193,13 @@ void ide_prep_sense(ide_drive_t *drive, struct request *rq)
BUG_ON(sense_len > sizeof(*sense));
if (rq->cmd_type == REQ_TYPE_ATA_SENSE || drive->sense_rq_armed)
if (ata_sense_request(rq) || drive->sense_rq_armed)
return;
memset(sense, 0, sizeof(*sense));
blk_rq_init(rq->q, sense_rq);
scsi_req_init(sense_rq);
err = blk_rq_map_kern(drive->queue, sense_rq, sense, sense_len,
GFP_NOIO);
@ -208,13 +211,14 @@ void ide_prep_sense(ide_drive_t *drive, struct request *rq)
}
sense_rq->rq_disk = rq->rq_disk;
sense_rq->cmd[0] = GPCMD_REQUEST_SENSE;
sense_rq->cmd[4] = cmd_len;
sense_rq->cmd_type = REQ_TYPE_ATA_SENSE;
sense_rq->cmd_flags = REQ_OP_DRV_IN;
ide_req(sense_rq)->type = ATA_PRIV_SENSE;
sense_rq->rq_flags |= RQF_PREEMPT;
req->cmd[0] = GPCMD_REQUEST_SENSE;
req->cmd[4] = cmd_len;
if (drive->media == ide_tape)
sense_rq->cmd[13] = REQ_IDETAPE_PC1;
req->cmd[13] = REQ_IDETAPE_PC1;
drive->sense_rq_armed = true;
}
@ -229,12 +233,12 @@ int ide_queue_sense_rq(ide_drive_t *drive, void *special)
return -ENOMEM;
}
drive->sense_rq.special = special;
drive->sense_rq->special = special;
drive->sense_rq_armed = false;
drive->hwif->rq = NULL;
elv_add_request(drive->queue, &drive->sense_rq, ELEVATOR_INSERT_FRONT);
elv_add_request(drive->queue, drive->sense_rq, ELEVATOR_INSERT_FRONT);
return 0;
}
EXPORT_SYMBOL_GPL(ide_queue_sense_rq);
@ -247,14 +251,14 @@ EXPORT_SYMBOL_GPL(ide_queue_sense_rq);
void ide_retry_pc(ide_drive_t *drive)
{
struct request *failed_rq = drive->hwif->rq;
struct request *sense_rq = &drive->sense_rq;
struct request *sense_rq = drive->sense_rq;
struct ide_atapi_pc *pc = &drive->request_sense_pc;
(void)ide_read_error(drive);
/* init pc from sense_rq */
ide_init_pc(pc);
memcpy(pc->c, sense_rq->cmd, 12);
memcpy(pc->c, scsi_req(sense_rq)->cmd, 12);
if (drive->media == ide_tape)
drive->atapi_flags |= IDE_AFLAG_IGNORE_DSC;
@ -286,7 +290,7 @@ int ide_cd_expiry(ide_drive_t *drive)
* commands/drives support that. Let ide_timer_expiry keep polling us
* for these.
*/
switch (rq->cmd[0]) {
switch (scsi_req(rq)->cmd[0]) {
case GPCMD_BLANK:
case GPCMD_FORMAT_UNIT:
case GPCMD_RESERVE_RZONE_TRACK:
@ -297,7 +301,7 @@ int ide_cd_expiry(ide_drive_t *drive)
default:
if (!(rq->rq_flags & RQF_QUIET))
printk(KERN_INFO PFX "cmd 0x%x timed out\n",
rq->cmd[0]);
scsi_req(rq)->cmd[0]);
wait = 0;
break;
}
@ -307,15 +311,21 @@ EXPORT_SYMBOL_GPL(ide_cd_expiry);
int ide_cd_get_xferlen(struct request *rq)
{
switch (rq->cmd_type) {
case REQ_TYPE_FS:
return 32768;
case REQ_TYPE_ATA_SENSE:
case REQ_TYPE_BLOCK_PC:
case REQ_TYPE_ATA_PC:
return blk_rq_bytes(rq);
switch (req_op(rq)) {
default:
return 0;
return 32768;
case REQ_OP_SCSI_IN:
case REQ_OP_SCSI_OUT:
return blk_rq_bytes(rq);
case REQ_OP_DRV_IN:
case REQ_OP_DRV_OUT:
switch (ide_req(rq)->type) {
case ATA_PRIV_PC:
case ATA_PRIV_SENSE:
return blk_rq_bytes(rq);
default:
return 0;
}
}
}
EXPORT_SYMBOL_GPL(ide_cd_get_xferlen);
@ -374,7 +384,7 @@ int ide_check_ireason(ide_drive_t *drive, struct request *rq, int len,
drive->name, __func__, ireason);
}
if (dev_is_idecd(drive) && rq->cmd_type == REQ_TYPE_ATA_PC)
if (dev_is_idecd(drive) && ata_pc_request(rq))
rq->rq_flags |= RQF_FAILED;
return 1;
@ -420,7 +430,7 @@ static ide_startstop_t ide_pc_intr(ide_drive_t *drive)
? "write" : "read");
pc->flags |= PC_FLAG_DMA_ERROR;
} else
rq->resid_len = 0;
scsi_req(rq)->resid_len = 0;
debug_log("%s: DMA finished\n", drive->name);
}
@ -436,7 +446,7 @@ static ide_startstop_t ide_pc_intr(ide_drive_t *drive)
local_irq_enable_in_hardirq();
if (drive->media == ide_tape &&
(stat & ATA_ERR) && rq->cmd[0] == REQUEST_SENSE)
(stat & ATA_ERR) && scsi_req(rq)->cmd[0] == REQUEST_SENSE)
stat &= ~ATA_ERR;
if ((stat & ATA_ERR) || (pc->flags & PC_FLAG_DMA_ERROR)) {
@ -446,7 +456,7 @@ static ide_startstop_t ide_pc_intr(ide_drive_t *drive)
if (drive->media != ide_tape)
pc->rq->errors++;
if (rq->cmd[0] == REQUEST_SENSE) {
if (scsi_req(rq)->cmd[0] == REQUEST_SENSE) {
printk(KERN_ERR PFX "%s: I/O error in request "
"sense command\n", drive->name);
return ide_do_reset(drive);
@ -477,12 +487,12 @@ static ide_startstop_t ide_pc_intr(ide_drive_t *drive)
if (uptodate == 0)
drive->failed_pc = NULL;
if (rq->cmd_type == REQ_TYPE_DRV_PRIV) {
if (ata_misc_request(rq)) {
rq->errors = 0;
error = 0;
} else {
if (rq->cmd_type != REQ_TYPE_FS && uptodate <= 0) {
if (blk_rq_is_passthrough(rq) && uptodate <= 0) {
if (rq->errors == 0)
rq->errors = -EIO;
}
@ -512,7 +522,7 @@ static ide_startstop_t ide_pc_intr(ide_drive_t *drive)
ide_pio_bytes(drive, cmd, write, done);
/* Update transferred byte count */
rq->resid_len -= done;
scsi_req(rq)->resid_len -= done;
bcount -= done;
@ -520,7 +530,7 @@ static ide_startstop_t ide_pc_intr(ide_drive_t *drive)
ide_pad_transfer(drive, write, bcount);
debug_log("[cmd %x] transferred %d bytes, padded %d bytes, resid: %u\n",
rq->cmd[0], done, bcount, rq->resid_len);
rq->cmd[0], done, bcount, scsi_req(rq)->resid_len);
/* And set the interrupt handler again */
ide_set_handler(drive, ide_pc_intr, timeout);
@ -603,7 +613,7 @@ static ide_startstop_t ide_transfer_pc(ide_drive_t *drive)
if (dev_is_idecd(drive)) {
/* ATAPI commands get padded out to 12 bytes minimum */
cmd_len = COMMAND_SIZE(rq->cmd[0]);
cmd_len = COMMAND_SIZE(scsi_req(rq)->cmd[0]);
if (cmd_len < ATAPI_MIN_CDB_BYTES)
cmd_len = ATAPI_MIN_CDB_BYTES;
@ -650,7 +660,7 @@ static ide_startstop_t ide_transfer_pc(ide_drive_t *drive)
/* Send the actual packet */
if ((drive->atapi_flags & IDE_AFLAG_ZIP_DRIVE) == 0)
hwif->tp_ops->output_data(drive, NULL, rq->cmd, cmd_len);
hwif->tp_ops->output_data(drive, NULL, scsi_req(rq)->cmd, cmd_len);
/* Begin DMA, if necessary */
if (dev_is_idecd(drive)) {
@ -695,7 +705,7 @@ ide_startstop_t ide_issue_pc(ide_drive_t *drive, struct ide_cmd *cmd)
bytes, 63 * 1024));
/* We haven't transferred any data yet */
rq->resid_len = bcount;
scsi_req(rq)->resid_len = bcount;
if (pc->flags & PC_FLAG_DMA_ERROR) {
pc->flags &= ~PC_FLAG_DMA_ERROR;
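The pattern repeated through ide-atapi is: allocate the request with an explicit REQ_OP_DRV_IN/OUT, initialise the embedded scsi_request, tag the IDE-private type, and reach cmd/sense/resid through scsi_req(). Condensed into one hedged sketch (issue_private_pc is an invented name and error handling is trimmed):

#include <linux/blkdev.h>
#include <linux/ide.h>
#include <linux/string.h>

/* Illustrative only: issue a 12-byte driver-private packet command and wait. */
static int issue_private_pc(ide_drive_t *drive, struct gendisk *disk,
                            const u8 *pc_buf)
{
    struct request *rq;
    int error;

    rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
    scsi_req_init(rq);                     /* set up the embedded scsi_request */
    ide_req(rq)->type = ATA_PRIV_MISC;     /* classify the private request */

    memcpy(scsi_req(rq)->cmd, pc_buf, 12); /* the CDB now lives in scsi_req(rq) */

    error = blk_execute_rq(drive->queue, disk, rq, 0);
    blk_put_request(rq);
    return error;
}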


@ -121,7 +121,7 @@ static int cdrom_log_sense(ide_drive_t *drive, struct request *rq)
* don't log START_STOP unit with LoEj set, since we cannot
* reliably check if drive can auto-close
*/
if (rq->cmd[0] == GPCMD_START_STOP_UNIT && sense->asc == 0x24)
if (scsi_req(rq)->cmd[0] == GPCMD_START_STOP_UNIT && sense->asc == 0x24)
break;
log = 1;
break;
@ -163,7 +163,7 @@ static void cdrom_analyze_sense_data(ide_drive_t *drive,
* toc has not been recorded yet, it will fail with 05/24/00 (which is a
* confusing error)
*/
if (failed_command && failed_command->cmd[0] == GPCMD_READ_TOC_PMA_ATIP)
if (failed_command && scsi_req(failed_command)->cmd[0] == GPCMD_READ_TOC_PMA_ATIP)
if (sense->sense_key == 0x05 && sense->asc == 0x24)
return;
@ -176,7 +176,7 @@ static void cdrom_analyze_sense_data(ide_drive_t *drive,
if (!sense->valid)
break;
if (failed_command == NULL ||
failed_command->cmd_type != REQ_TYPE_FS)
blk_rq_is_passthrough(failed_command))
break;
sector = (sense->information[0] << 24) |
(sense->information[1] << 16) |
@ -210,7 +210,7 @@ static void cdrom_analyze_sense_data(ide_drive_t *drive,
static void ide_cd_complete_failed_rq(ide_drive_t *drive, struct request *rq)
{
/*
* For REQ_TYPE_ATA_SENSE, "rq->special" points to the original
* For ATA_PRIV_SENSE, "rq->special" points to the original
* failed request. Also, the sense data should be read
* directly from rq which might be different from the original
* sense buffer if it got copied during mapping.
@ -219,15 +219,12 @@ static void ide_cd_complete_failed_rq(ide_drive_t *drive, struct request *rq)
void *sense = bio_data(rq->bio);
if (failed) {
if (failed->sense) {
/*
* Sense is always read into drive->sense_data.
* Copy back if the failed request has its
* sense pointer set.
*/
memcpy(failed->sense, sense, 18);
failed->sense_len = rq->sense_len;
}
/*
* Sense is always read into drive->sense_data, copy back to the
* original request.
*/
memcpy(scsi_req(failed)->sense, sense, 18);
scsi_req(failed)->sense_len = scsi_req(rq)->sense_len;
cdrom_analyze_sense_data(drive, failed);
if (ide_end_rq(drive, failed, -EIO, blk_rq_bytes(failed)))
@ -285,7 +282,7 @@ static int cdrom_decode_status(ide_drive_t *drive, u8 stat)
"stat 0x%x",
rq->cmd[0], rq->cmd_type, err, stat);
if (rq->cmd_type == REQ_TYPE_ATA_SENSE) {
if (ata_sense_request(rq)) {
/*
* We got an error trying to get sense info from the drive
* (probably while trying to recover from a former error).
@ -296,7 +293,7 @@ static int cdrom_decode_status(ide_drive_t *drive, u8 stat)
}
/* if we have an error, pass CHECK_CONDITION as the SCSI status byte */
if (rq->cmd_type == REQ_TYPE_BLOCK_PC && !rq->errors)
if (blk_rq_is_scsi(rq) && !rq->errors)
rq->errors = SAM_STAT_CHECK_CONDITION;
if (blk_noretry_request(rq))
@ -304,13 +301,13 @@ static int cdrom_decode_status(ide_drive_t *drive, u8 stat)
switch (sense_key) {
case NOT_READY:
if (rq->cmd_type == REQ_TYPE_FS && rq_data_dir(rq) == WRITE) {
if (req_op(rq) == REQ_OP_WRITE) {
if (ide_cd_breathe(drive, rq))
return 1;
} else {
cdrom_saw_media_change(drive);
if (rq->cmd_type == REQ_TYPE_FS &&
if (!blk_rq_is_passthrough(rq) &&
!(rq->rq_flags & RQF_QUIET))
printk(KERN_ERR PFX "%s: tray open\n",
drive->name);
@ -320,7 +317,7 @@ static int cdrom_decode_status(ide_drive_t *drive, u8 stat)
case UNIT_ATTENTION:
cdrom_saw_media_change(drive);
if (rq->cmd_type != REQ_TYPE_FS)
if (blk_rq_is_passthrough(rq))
return 0;
/*
@ -338,7 +335,7 @@ static int cdrom_decode_status(ide_drive_t *drive, u8 stat)
*
* cdrom_log_sense() knows this!
*/
if (rq->cmd[0] == GPCMD_START_STOP_UNIT)
if (scsi_req(rq)->cmd[0] == GPCMD_START_STOP_UNIT)
break;
/* fall-through */
case DATA_PROTECT:
@ -368,7 +365,7 @@ static int cdrom_decode_status(ide_drive_t *drive, u8 stat)
do_end_request = 1;
break;
default:
if (rq->cmd_type != REQ_TYPE_FS)
if (blk_rq_is_passthrough(rq))
break;
if (err & ~ATA_ABORTED) {
/* go to the default handler for other errors */
@ -379,7 +376,7 @@ static int cdrom_decode_status(ide_drive_t *drive, u8 stat)
do_end_request = 1;
}
if (rq->cmd_type != REQ_TYPE_FS) {
if (blk_rq_is_passthrough(rq)) {
rq->rq_flags |= RQF_FAILED;
do_end_request = 1;
}
@ -414,7 +411,7 @@ static void ide_cd_request_sense_fixup(ide_drive_t *drive, struct ide_cmd *cmd)
* Some of the trailing request sense fields are optional,
* and some drives don't send them. Sigh.
*/
if (rq->cmd[0] == GPCMD_REQUEST_SENSE &&
if (scsi_req(rq)->cmd[0] == GPCMD_REQUEST_SENSE &&
cmd->nleft > 0 && cmd->nleft <= 5)
cmd->nleft = 0;
}
@ -425,12 +422,8 @@ int ide_cd_queue_pc(ide_drive_t *drive, const unsigned char *cmd,
req_flags_t rq_flags)
{
struct cdrom_info *info = drive->driver_data;
struct request_sense local_sense;
int retries = 10;
req_flags_t flags = 0;
if (!sense)
sense = &local_sense;
bool failed;
ide_debug_log(IDE_DBG_PC, "cmd[0]: 0x%x, write: 0x%x, timeout: %d, "
"rq_flags: 0x%x",
@ -440,12 +433,13 @@ int ide_cd_queue_pc(ide_drive_t *drive, const unsigned char *cmd,
do {
struct request *rq;
int error;
bool delay = false;
rq = blk_get_request(drive->queue, write, __GFP_RECLAIM);
memcpy(rq->cmd, cmd, BLK_MAX_CDB);
rq->cmd_type = REQ_TYPE_ATA_PC;
rq->sense = sense;
rq = blk_get_request(drive->queue,
write ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN, __GFP_RECLAIM);
scsi_req_init(rq);
memcpy(scsi_req(rq)->cmd, cmd, BLK_MAX_CDB);
ide_req(rq)->type = ATA_PRIV_PC;
rq->rq_flags |= rq_flags;
rq->timeout = timeout;
if (buffer) {
@ -460,21 +454,21 @@ int ide_cd_queue_pc(ide_drive_t *drive, const unsigned char *cmd,
error = blk_execute_rq(drive->queue, info->disk, rq, 0);
if (buffer)
*bufflen = rq->resid_len;
flags = rq->rq_flags;
blk_put_request(rq);
*bufflen = scsi_req(rq)->resid_len;
if (sense)
memcpy(sense, scsi_req(rq)->sense, sizeof(*sense));
/*
* FIXME: we should probably abort/retry or something in case of
* failure.
*/
if (flags & RQF_FAILED) {
failed = (rq->rq_flags & RQF_FAILED) != 0;
if (failed) {
/*
* The request failed. Retry if it was due to a unit
* attention status (usually means media was changed).
*/
struct request_sense *reqbuf = sense;
struct request_sense *reqbuf = scsi_req(rq)->sense;
if (reqbuf->sense_key == UNIT_ATTENTION)
cdrom_saw_media_change(drive);
@ -485,19 +479,20 @@ int ide_cd_queue_pc(ide_drive_t *drive, const unsigned char *cmd,
* a disk. Retry, but wait a little to give
* the drive time to complete the load.
*/
ssleep(2);
delay = true;
} else {
/* otherwise, don't retry */
retries = 0;
}
--retries;
}
/* end of retry loop */
} while ((flags & RQF_FAILED) && retries >= 0);
blk_put_request(rq);
if (delay)
ssleep(2);
} while (failed && retries >= 0);
/* return an error if the command failed */
return (flags & RQF_FAILED) ? -EIO : 0;
return failed ? -EIO : 0;
}
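Note the reordered lifetimes in the retry loop above: the failure flag and sense bytes are copied out while the request is still allocated, blk_put_request() follows, and any retry delay happens only after the request is gone. A rough, illustrative reconstruction of that loop (queue_pc_with_retries and its parameters are invented; the real ide-cd function carries more state):

#include <linux/blkdev.h>
#include <linux/cdrom.h>
#include <linux/delay.h>
#include <linux/ide.h>
#include <linux/string.h>
#include <scsi/scsi.h>

/* Illustrative reconstruction of the retry loop; not the real ide-cd code. */
static int queue_pc_with_retries(ide_drive_t *drive, struct gendisk *disk,
                                 const u8 *cmd, struct request_sense *sense)
{
    int retries = 10;
    bool failed;

    do {
        struct request *rq;
        bool delay = false;

        rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
        scsi_req_init(rq);
        ide_req(rq)->type = ATA_PRIV_PC;
        memcpy(scsi_req(rq)->cmd, cmd, BLK_MAX_CDB);
        rq->timeout = 60 * HZ;

        blk_execute_rq(drive->queue, disk, rq, 0);

        /* read the outcome while the request is still ours */
        failed = (rq->rq_flags & RQF_FAILED) != 0;
        if (sense)
            memcpy(sense, scsi_req(rq)->sense, sizeof(*sense));

        if (failed) {
            if (sense && sense->sense_key == UNIT_ATTENTION) {
                /* media changed: simply retry */
            } else if (sense && sense->sense_key == NOT_READY) {
                delay = true;   /* drive still loading: back off first */
            } else {
                retries = 0;    /* anything else: give up */
            }
        }

        blk_put_request(rq);    /* only now hand the request back */
        if (delay)
            ssleep(2);          /* sleep after the request is freed */
    } while (failed && retries-- > 0);

    return failed ? -EIO : 0;
}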
/*
@ -526,7 +521,7 @@ static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive)
ide_expiry_t *expiry = NULL;
int dma_error = 0, dma, thislen, uptodate = 0;
int write = (rq_data_dir(rq) == WRITE) ? 1 : 0, rc = 0;
int sense = (rq->cmd_type == REQ_TYPE_ATA_SENSE);
int sense = ata_sense_request(rq);
unsigned int timeout;
u16 len;
u8 ireason, stat;
@ -569,7 +564,7 @@ static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive)
ide_read_bcount_and_ireason(drive, &len, &ireason);
thislen = (rq->cmd_type == REQ_TYPE_FS) ? len : cmd->nleft;
thislen = !blk_rq_is_passthrough(rq) ? len : cmd->nleft;
if (thislen > len)
thislen = len;
@ -578,7 +573,8 @@ static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive)
/* If DRQ is clear, the command has completed. */
if ((stat & ATA_DRQ) == 0) {
if (rq->cmd_type == REQ_TYPE_FS) {
switch (req_op(rq)) {
default:
/*
* If we're not done reading/writing, complain.
* Otherwise, complete the command normally.
@ -592,7 +588,9 @@ static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive)
rq->rq_flags |= RQF_FAILED;
uptodate = 0;
}
} else if (rq->cmd_type != REQ_TYPE_BLOCK_PC) {
goto out_end;
case REQ_OP_DRV_IN:
case REQ_OP_DRV_OUT:
ide_cd_request_sense_fixup(drive, cmd);
uptodate = cmd->nleft ? 0 : 1;
@ -608,8 +606,11 @@ static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive)
if (!uptodate)
rq->rq_flags |= RQF_FAILED;
goto out_end;
case REQ_OP_SCSI_IN:
case REQ_OP_SCSI_OUT:
goto out_end;
}
goto out_end;
}
rc = ide_check_ireason(drive, rq, len, ireason, write);
@ -636,12 +637,12 @@ static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive)
len -= blen;
if (sense && write == 0)
rq->sense_len += blen;
scsi_req(rq)->sense_len += blen;
}
/* pad, if necessary */
if (len > 0) {
if (rq->cmd_type != REQ_TYPE_FS || write == 0)
if (blk_rq_is_passthrough(rq) || write == 0)
ide_pad_transfer(drive, write, len);
else {
printk(KERN_ERR PFX "%s: confused, missing data\n",
@ -650,12 +651,18 @@ static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive)
}
}
if (rq->cmd_type == REQ_TYPE_BLOCK_PC) {
switch (req_op(rq)) {
case REQ_OP_SCSI_IN:
case REQ_OP_SCSI_OUT:
timeout = rq->timeout;
} else {
break;
case REQ_OP_DRV_IN:
case REQ_OP_DRV_OUT:
expiry = ide_cd_expiry;
/*FALLTHRU*/
default:
timeout = ATAPI_WAIT_PC;
if (rq->cmd_type != REQ_TYPE_FS)
expiry = ide_cd_expiry;
break;
}
hwif->expiry = expiry;
@ -663,15 +670,15 @@ static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive)
return ide_started;
out_end:
if (rq->cmd_type == REQ_TYPE_BLOCK_PC && rc == 0) {
rq->resid_len = 0;
if (blk_rq_is_scsi(rq) && rc == 0) {
scsi_req(rq)->resid_len = 0;
blk_end_request_all(rq, 0);
hwif->rq = NULL;
} else {
if (sense && uptodate)
ide_cd_complete_failed_rq(drive, rq);
if (rq->cmd_type == REQ_TYPE_FS) {
if (!blk_rq_is_passthrough(rq)) {
if (cmd->nleft == 0)
uptodate = 1;
} else {
@ -684,10 +691,10 @@ out_end:
return ide_stopped;
/* make sure it's fully ended */
if (rq->cmd_type != REQ_TYPE_FS) {
rq->resid_len -= cmd->nbytes - cmd->nleft;
if (blk_rq_is_passthrough(rq)) {
scsi_req(rq)->resid_len -= cmd->nbytes - cmd->nleft;
if (uptodate == 0 && (cmd->tf_flags & IDE_TFLAG_WRITE))
rq->resid_len += cmd->last_xfer_len;
scsi_req(rq)->resid_len += cmd->last_xfer_len;
}
ide_complete_rq(drive, uptodate ? 0 : -EIO, blk_rq_bytes(rq));
@ -744,7 +751,7 @@ static void cdrom_do_block_pc(ide_drive_t *drive, struct request *rq)
ide_debug_log(IDE_DBG_PC, "rq->cmd[0]: 0x%x, rq->cmd_type: 0x%x",
rq->cmd[0], rq->cmd_type);
if (rq->cmd_type == REQ_TYPE_BLOCK_PC)
if (blk_rq_is_scsi(rq))
rq->rq_flags |= RQF_QUIET;
else
rq->rq_flags &= ~RQF_FAILED;
@ -786,25 +793,31 @@ static ide_startstop_t ide_cd_do_request(ide_drive_t *drive, struct request *rq,
if (drive->debug_mask & IDE_DBG_RQ)
blk_dump_rq_flags(rq, "ide_cd_do_request");
switch (rq->cmd_type) {
case REQ_TYPE_FS:
switch (req_op(rq)) {
default:
if (cdrom_start_rw(drive, rq) == ide_stopped)
goto out_end;
break;
case REQ_TYPE_ATA_SENSE:
case REQ_TYPE_BLOCK_PC:
case REQ_TYPE_ATA_PC:
case REQ_OP_SCSI_IN:
case REQ_OP_SCSI_OUT:
handle_pc:
if (!rq->timeout)
rq->timeout = ATAPI_WAIT_PC;
cdrom_do_block_pc(drive, rq);
break;
case REQ_TYPE_DRV_PRIV:
/* right now this can only be a reset... */
uptodate = 1;
goto out_end;
default:
BUG();
case REQ_OP_DRV_IN:
case REQ_OP_DRV_OUT:
switch (ide_req(rq)->type) {
case ATA_PRIV_MISC:
/* right now this can only be a reset... */
uptodate = 1;
goto out_end;
case ATA_PRIV_SENSE:
case ATA_PRIV_PC:
goto handle_pc;
default:
BUG();
}
}
/* prepare sense request for this command */
@ -817,7 +830,7 @@ static ide_startstop_t ide_cd_do_request(ide_drive_t *drive, struct request *rq,
cmd.rq = rq;
if (rq->cmd_type == REQ_TYPE_FS || blk_rq_bytes(rq)) {
if (!blk_rq_is_passthrough(rq) || blk_rq_bytes(rq)) {
ide_init_sg_cmd(&cmd, blk_rq_bytes(rq));
ide_map_sg(drive, &cmd);
}
@ -1166,7 +1179,7 @@ void ide_cdrom_update_speed(ide_drive_t *drive, u8 *buf)
CDC_CD_RW | CDC_DVD | CDC_DVD_R | CDC_DVD_RAM | CDC_GENERIC_PACKET | \
CDC_MO_DRIVE | CDC_MRW | CDC_MRW_W | CDC_RAM)
static struct cdrom_device_ops ide_cdrom_dops = {
static const struct cdrom_device_ops ide_cdrom_dops = {
.open = ide_cdrom_open_real,
.release = ide_cdrom_release_real,
.drive_status = ide_cdrom_drive_status,
@ -1312,28 +1325,29 @@ static int ide_cdrom_prep_fs(struct request_queue *q, struct request *rq)
int hard_sect = queue_logical_block_size(q);
long block = (long)blk_rq_pos(rq) / (hard_sect >> 9);
unsigned long blocks = blk_rq_sectors(rq) / (hard_sect >> 9);
struct scsi_request *req = scsi_req(rq);
memset(rq->cmd, 0, BLK_MAX_CDB);
memset(req->cmd, 0, BLK_MAX_CDB);
if (rq_data_dir(rq) == READ)
rq->cmd[0] = GPCMD_READ_10;
req->cmd[0] = GPCMD_READ_10;
else
rq->cmd[0] = GPCMD_WRITE_10;
req->cmd[0] = GPCMD_WRITE_10;
/*
* fill in lba
*/
rq->cmd[2] = (block >> 24) & 0xff;
rq->cmd[3] = (block >> 16) & 0xff;
rq->cmd[4] = (block >> 8) & 0xff;
rq->cmd[5] = block & 0xff;
req->cmd[2] = (block >> 24) & 0xff;
req->cmd[3] = (block >> 16) & 0xff;
req->cmd[4] = (block >> 8) & 0xff;
req->cmd[5] = block & 0xff;
/*
* and transfer length
*/
rq->cmd[7] = (blocks >> 8) & 0xff;
rq->cmd[8] = blocks & 0xff;
rq->cmd_len = 10;
req->cmd[7] = (blocks >> 8) & 0xff;
req->cmd[8] = blocks & 0xff;
req->cmd_len = 10;
return BLKPREP_OK;
}
@ -1343,7 +1357,7 @@ static int ide_cdrom_prep_fs(struct request_queue *q, struct request *rq)
*/
static int ide_cdrom_prep_pc(struct request *rq)
{
u8 *c = rq->cmd;
u8 *c = scsi_req(rq)->cmd;
/* transform 6-byte read/write commands to the 10-byte version */
if (c[0] == READ_6 || c[0] == WRITE_6) {
@ -1354,7 +1368,7 @@ static int ide_cdrom_prep_pc(struct request *rq)
c[2] = 0;
c[1] &= 0xe0;
c[0] += (READ_10 - READ_6);
rq->cmd_len = 10;
scsi_req(rq)->cmd_len = 10;
return BLKPREP_OK;
}
@ -1373,9 +1387,9 @@ static int ide_cdrom_prep_pc(struct request *rq)
static int ide_cdrom_prep_fn(struct request_queue *q, struct request *rq)
{
if (rq->cmd_type == REQ_TYPE_FS)
if (!blk_rq_is_passthrough(rq))
return ide_cdrom_prep_fs(q, rq);
else if (rq->cmd_type == REQ_TYPE_BLOCK_PC)
else if (blk_rq_is_scsi(rq))
return ide_cdrom_prep_pc(rq);
return 0;
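The prep hook and many of the branches above now key off the request-classification helpers rather than rq->cmd_type. Roughly, requests fall into the buckets sketched below (classify_rq is only an illustration of the predicates, not a real kernel helper):

#include <linux/blkdev.h>

/* Illustration of the new request classification used in the hunks above. */
static const char *classify_rq(struct request *rq)
{
    if (!blk_rq_is_passthrough(rq))
        return "filesystem I/O (REQ_OP_READ/WRITE/...)";
    if (blk_rq_is_scsi(rq))
        return "SCSI passthrough (REQ_OP_SCSI_IN/OUT)";
    if (blk_rq_is_private(rq))
        return "driver private (REQ_OP_DRV_IN/OUT)";
    return "unexpected";    /* passthrough requests are SCSI or private */
}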


@ -303,8 +303,9 @@ int ide_cdrom_reset(struct cdrom_device_info *cdi)
struct request *rq;
int ret;
rq = blk_get_request(drive->queue, READ, __GFP_RECLAIM);
rq->cmd_type = REQ_TYPE_DRV_PRIV;
rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
scsi_req_init(rq);
ide_req(rq)->type = ATA_PRIV_MISC;
rq->rq_flags = RQF_QUIET;
ret = blk_execute_rq(drive->queue, cd->disk, rq, 0);
blk_put_request(rq);


@ -315,12 +315,12 @@ void ide_cd_log_error(const char *name, struct request *failed_command,
while (hi > lo) {
mid = (lo + hi) / 2;
if (packet_command_texts[mid].packet_command ==
failed_command->cmd[0]) {
scsi_req(failed_command)->cmd[0]) {
s = packet_command_texts[mid].text;
break;
}
if (packet_command_texts[mid].packet_command >
failed_command->cmd[0])
scsi_req(failed_command)->cmd[0])
hi = mid;
else
lo = mid + 1;
@ -329,7 +329,7 @@ void ide_cd_log_error(const char *name, struct request *failed_command,
printk(KERN_ERR " The failed \"%s\" packet command "
"was: \n \"", s);
for (i = 0; i < BLK_MAX_CDB; i++)
printk(KERN_CONT "%02x ", failed_command->cmd[i]);
printk(KERN_CONT "%02x ", scsi_req(failed_command)->cmd[i]);
printk(KERN_CONT "\"\n");
}


@ -165,11 +165,12 @@ int ide_devset_execute(ide_drive_t *drive, const struct ide_devset *setting,
if (!(setting->flags & DS_SYNC))
return setting->set(drive, arg);
rq = blk_get_request(q, READ, __GFP_RECLAIM);
rq->cmd_type = REQ_TYPE_DRV_PRIV;
rq->cmd_len = 5;
rq->cmd[0] = REQ_DEVSET_EXEC;
*(int *)&rq->cmd[1] = arg;
rq = blk_get_request(q, REQ_OP_DRV_IN, __GFP_RECLAIM);
scsi_req_init(rq);
ide_req(rq)->type = ATA_PRIV_MISC;
scsi_req(rq)->cmd_len = 5;
scsi_req(rq)->cmd[0] = REQ_DEVSET_EXEC;
*(int *)&scsi_req(rq)->cmd[1] = arg;
rq->special = setting->set;
if (blk_execute_rq(q, NULL, rq, 0))
@ -183,7 +184,7 @@ ide_startstop_t ide_do_devset(ide_drive_t *drive, struct request *rq)
{
int err, (*setfunc)(ide_drive_t *, int) = rq->special;
err = setfunc(drive, *(int *)&rq->cmd[1]);
err = setfunc(drive, *(int *)&scsi_req(rq)->cmd[1]);
if (err)
rq->errors = err;
ide_complete_rq(drive, err, blk_rq_bytes(rq));


@ -184,7 +184,7 @@ static ide_startstop_t ide_do_rw_disk(ide_drive_t *drive, struct request *rq,
ide_hwif_t *hwif = drive->hwif;
BUG_ON(drive->dev_flags & IDE_DFLAG_BLOCKED);
BUG_ON(rq->cmd_type != REQ_TYPE_FS);
BUG_ON(blk_rq_is_passthrough(rq));
ledtrig_disk_activity();
@ -452,8 +452,9 @@ static int idedisk_prep_fn(struct request_queue *q, struct request *rq)
cmd->valid.out.tf = IDE_VALID_OUT_TF | IDE_VALID_DEVICE;
cmd->tf_flags = IDE_TFLAG_DYN;
cmd->protocol = ATA_PROT_NODATA;
rq->cmd_type = REQ_TYPE_ATA_TASKFILE;
rq->cmd_flags &= ~REQ_OP_MASK;
rq->cmd_flags |= REQ_OP_DRV_OUT;
ide_req(rq)->type = ATA_PRIV_TASKFILE;
rq->special = cmd;
cmd->rq = rq;
@ -477,8 +478,9 @@ static int set_multcount(ide_drive_t *drive, int arg)
if (drive->special_flags & IDE_SFLAG_SET_MULTMODE)
return -EBUSY;
rq = blk_get_request(drive->queue, READ, __GFP_RECLAIM);
rq->cmd_type = REQ_TYPE_ATA_TASKFILE;
rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
scsi_req_init(rq);
ide_req(rq)->type = ATA_PRIV_TASKFILE;
drive->mult_req = arg;
drive->special_flags |= IDE_SFLAG_SET_MULTMODE;


@ -123,8 +123,8 @@ ide_startstop_t ide_error(ide_drive_t *drive, const char *msg, u8 stat)
return ide_stopped;
/* retry only "normal" I/O: */
if (rq->cmd_type != REQ_TYPE_FS) {
if (rq->cmd_type == REQ_TYPE_ATA_TASKFILE) {
if (blk_rq_is_passthrough(rq)) {
if (ata_taskfile_request(rq)) {
struct ide_cmd *cmd = rq->special;
if (cmd)
@ -147,8 +147,8 @@ static inline void ide_complete_drive_reset(ide_drive_t *drive, int err)
{
struct request *rq = drive->hwif->rq;
if (rq && rq->cmd_type == REQ_TYPE_DRV_PRIV &&
rq->cmd[0] == REQ_DRIVE_RESET) {
if (rq && ata_misc_request(rq) &&
scsi_req(rq)->cmd[0] == REQ_DRIVE_RESET) {
if (err <= 0 && rq->errors == 0)
rq->errors = -EIO;
ide_complete_rq(drive, err ? err : 0, blk_rq_bytes(rq));


@ -72,7 +72,7 @@ static int ide_floppy_callback(ide_drive_t *drive, int dsc)
drive->failed_pc = NULL;
if (pc->c[0] == GPCMD_READ_10 || pc->c[0] == GPCMD_WRITE_10 ||
rq->cmd_type == REQ_TYPE_BLOCK_PC)
(req_op(rq) == REQ_OP_SCSI_IN || req_op(rq) == REQ_OP_SCSI_OUT))
uptodate = 1; /* FIXME */
else if (pc->c[0] == GPCMD_REQUEST_SENSE) {
@ -97,7 +97,7 @@ static int ide_floppy_callback(ide_drive_t *drive, int dsc)
"Aborting request!\n");
}
if (rq->cmd_type == REQ_TYPE_DRV_PRIV)
if (ata_misc_request(rq))
rq->errors = uptodate ? 0 : IDE_DRV_ERROR_GENERAL;
return uptodate;
@ -203,7 +203,7 @@ static void idefloppy_create_rw_cmd(ide_drive_t *drive,
put_unaligned(cpu_to_be16(blocks), (unsigned short *)&pc->c[7]);
put_unaligned(cpu_to_be32(block), (unsigned int *) &pc->c[2]);
memcpy(rq->cmd, pc->c, 12);
memcpy(scsi_req(rq)->cmd, pc->c, 12);
pc->rq = rq;
if (cmd == WRITE)
@ -216,7 +216,7 @@ static void idefloppy_blockpc_cmd(struct ide_disk_obj *floppy,
struct ide_atapi_pc *pc, struct request *rq)
{
ide_init_pc(pc);
memcpy(pc->c, rq->cmd, sizeof(pc->c));
memcpy(pc->c, scsi_req(rq)->cmd, sizeof(pc->c));
pc->rq = rq;
if (blk_rq_bytes(rq)) {
pc->flags |= PC_FLAG_DMA_OK;
@ -246,7 +246,7 @@ static ide_startstop_t ide_floppy_do_request(ide_drive_t *drive,
} else
printk(KERN_ERR PFX "%s: I/O error\n", drive->name);
if (rq->cmd_type == REQ_TYPE_DRV_PRIV) {
if (ata_misc_request(rq)) {
rq->errors = 0;
ide_complete_rq(drive, 0, blk_rq_bytes(rq));
return ide_stopped;
@ -254,8 +254,8 @@ static ide_startstop_t ide_floppy_do_request(ide_drive_t *drive,
goto out_end;
}
switch (rq->cmd_type) {
case REQ_TYPE_FS:
switch (req_op(rq)) {
default:
if (((long)blk_rq_pos(rq) % floppy->bs_factor) ||
(blk_rq_sectors(rq) % floppy->bs_factor)) {
printk(KERN_ERR PFX "%s: unsupported r/w rq size\n",
@ -265,16 +265,21 @@ static ide_startstop_t ide_floppy_do_request(ide_drive_t *drive,
pc = &floppy->queued_pc;
idefloppy_create_rw_cmd(drive, pc, rq, (unsigned long)block);
break;
case REQ_TYPE_DRV_PRIV:
case REQ_TYPE_ATA_SENSE:
pc = (struct ide_atapi_pc *)rq->special;
break;
case REQ_TYPE_BLOCK_PC:
case REQ_OP_SCSI_IN:
case REQ_OP_SCSI_OUT:
pc = &floppy->queued_pc;
idefloppy_blockpc_cmd(floppy, pc, rq);
break;
default:
BUG();
case REQ_OP_DRV_IN:
case REQ_OP_DRV_OUT:
switch (ide_req(rq)->type) {
case ATA_PRIV_MISC:
case ATA_PRIV_SENSE:
pc = (struct ide_atapi_pc *)rq->special;
break;
default:
BUG();
}
}
ide_prep_sense(drive, rq);
@ -286,7 +291,7 @@ static ide_startstop_t ide_floppy_do_request(ide_drive_t *drive,
cmd.rq = rq;
if (rq->cmd_type == REQ_TYPE_FS || blk_rq_bytes(rq)) {
if (!blk_rq_is_passthrough(rq) || blk_rq_bytes(rq)) {
ide_init_sg_cmd(&cmd, blk_rq_bytes(rq));
ide_map_sg(drive, &cmd);
}
@ -296,7 +301,7 @@ static ide_startstop_t ide_floppy_do_request(ide_drive_t *drive,
return ide_floppy_issue_pc(drive, &cmd, pc);
out_end:
drive->failed_pc = NULL;
if (rq->cmd_type != REQ_TYPE_FS && rq->errors == 0)
if (blk_rq_is_passthrough(rq) && rq->errors == 0)
rq->errors = -EIO;
ide_complete_rq(drive, -EIO, blk_rq_bytes(rq));
return ide_stopped;


@ -102,7 +102,7 @@ void ide_complete_cmd(ide_drive_t *drive, struct ide_cmd *cmd, u8 stat, u8 err)
drive->dev_flags |= IDE_DFLAG_PARKED;
}
if (rq && rq->cmd_type == REQ_TYPE_ATA_TASKFILE) {
if (rq && ata_taskfile_request(rq)) {
struct ide_cmd *orig_cmd = rq->special;
if (cmd->tf_flags & IDE_TFLAG_DYN)
@ -135,7 +135,7 @@ EXPORT_SYMBOL(ide_complete_rq);
void ide_kill_rq(ide_drive_t *drive, struct request *rq)
{
u8 drv_req = (rq->cmd_type == REQ_TYPE_DRV_PRIV) && rq->rq_disk;
u8 drv_req = ata_misc_request(rq) && rq->rq_disk;
u8 media = drive->media;
drive->failed_pc = NULL;
@ -145,7 +145,7 @@ void ide_kill_rq(ide_drive_t *drive, struct request *rq)
} else {
if (media == ide_tape)
rq->errors = IDE_DRV_ERROR_GENERAL;
else if (rq->cmd_type != REQ_TYPE_FS && rq->errors == 0)
else if (blk_rq_is_passthrough(rq) && rq->errors == 0)
rq->errors = -EIO;
}
@ -279,7 +279,7 @@ static ide_startstop_t execute_drive_cmd (ide_drive_t *drive,
static ide_startstop_t ide_special_rq(ide_drive_t *drive, struct request *rq)
{
u8 cmd = rq->cmd[0];
u8 cmd = scsi_req(rq)->cmd[0];
switch (cmd) {
case REQ_PARK_HEADS:
@ -340,7 +340,7 @@ static ide_startstop_t start_request (ide_drive_t *drive, struct request *rq)
if (drive->current_speed == 0xff)
ide_config_drive_speed(drive, drive->desired_speed);
if (rq->cmd_type == REQ_TYPE_ATA_TASKFILE)
if (ata_taskfile_request(rq))
return execute_drive_cmd(drive, rq);
else if (ata_pm_request(rq)) {
struct ide_pm_state *pm = rq->special;
@ -353,7 +353,7 @@ static ide_startstop_t start_request (ide_drive_t *drive, struct request *rq)
pm->pm_step == IDE_PM_COMPLETED)
ide_complete_pm_rq(drive, rq);
return startstop;
} else if (!rq->rq_disk && rq->cmd_type == REQ_TYPE_DRV_PRIV)
} else if (!rq->rq_disk && ata_misc_request(rq))
/*
* TODO: Once all ULDs have been modified to
* check for specific op codes rather than
@ -545,6 +545,7 @@ repeat:
goto plug_device;
}
scsi_req(rq)->resid_len = blk_rq_bytes(rq);
hwif->rq = rq;
spin_unlock_irq(&hwif->lock);
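The ata_misc_request()/ata_taskfile_request()/ata_sense_request() tests used throughout these hunks replace the old cmd_type comparisons; they are presumably thin wrappers over the private-request classification, along these lines (the my_* prefix avoids clashing with the real helpers, which live in include/linux/ide.h):

#include <linux/blkdev.h>
#include <linux/ide.h>

/* Sketch of the request-classification helpers used above. */
static inline bool my_ata_misc_request(struct request *rq)
{
    return blk_rq_is_private(rq) && ide_req(rq)->type == ATA_PRIV_MISC;
}

static inline bool my_ata_taskfile_request(struct request *rq)
{
    return blk_rq_is_private(rq) && ide_req(rq)->type == ATA_PRIV_TASKFILE;
}

static inline bool my_ata_sense_request(struct request *rq)
{
    return blk_rq_is_private(rq) && ide_req(rq)->type == ATA_PRIV_SENSE;
}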


@ -125,8 +125,9 @@ static int ide_cmd_ioctl(ide_drive_t *drive, unsigned long arg)
if (NULL == (void *) arg) {
struct request *rq;
rq = blk_get_request(drive->queue, READ, __GFP_RECLAIM);
rq->cmd_type = REQ_TYPE_ATA_TASKFILE;
rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
scsi_req_init(rq);
ide_req(rq)->type = ATA_PRIV_TASKFILE;
err = blk_execute_rq(drive->queue, NULL, rq, 0);
blk_put_request(rq);
@ -221,10 +222,11 @@ static int generic_drive_reset(ide_drive_t *drive)
struct request *rq;
int ret = 0;
rq = blk_get_request(drive->queue, READ, __GFP_RECLAIM);
rq->cmd_type = REQ_TYPE_DRV_PRIV;
rq->cmd_len = 1;
rq->cmd[0] = REQ_DRIVE_RESET;
rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
scsi_req_init(rq);
ide_req(rq)->type = ATA_PRIV_MISC;
scsi_req(rq)->cmd_len = 1;
scsi_req(rq)->cmd[0] = REQ_DRIVE_RESET;
if (blk_execute_rq(drive->queue, NULL, rq, 1))
ret = rq->errors;
blk_put_request(rq);


@ -31,10 +31,11 @@ static void issue_park_cmd(ide_drive_t *drive, unsigned long timeout)
}
spin_unlock_irq(&hwif->lock);
rq = blk_get_request(q, READ, __GFP_RECLAIM);
rq->cmd[0] = REQ_PARK_HEADS;
rq->cmd_len = 1;
rq->cmd_type = REQ_TYPE_DRV_PRIV;
rq = blk_get_request(q, REQ_OP_DRV_IN, __GFP_RECLAIM);
scsi_req_init(rq);
scsi_req(rq)->cmd[0] = REQ_PARK_HEADS;
scsi_req(rq)->cmd_len = 1;
ide_req(rq)->type = ATA_PRIV_MISC;
rq->special = &timeout;
rc = blk_execute_rq(q, NULL, rq, 1);
blk_put_request(rq);
@ -45,13 +46,14 @@ static void issue_park_cmd(ide_drive_t *drive, unsigned long timeout)
* Make sure that *some* command is sent to the drive after the
* timeout has expired, so power management will be reenabled.
*/
rq = blk_get_request(q, READ, GFP_NOWAIT);
rq = blk_get_request(q, REQ_OP_DRV_IN, GFP_NOWAIT);
scsi_req_init(rq);
if (IS_ERR(rq))
goto out;
rq->cmd[0] = REQ_UNPARK_HEADS;
rq->cmd_len = 1;
rq->cmd_type = REQ_TYPE_DRV_PRIV;
scsi_req(rq)->cmd[0] = REQ_UNPARK_HEADS;
scsi_req(rq)->cmd_len = 1;
ide_req(rq)->type = ATA_PRIV_MISC;
elv_add_request(q, rq, ELEVATOR_INSERT_FRONT);
out:
@ -64,7 +66,7 @@ ide_startstop_t ide_do_park_unpark(ide_drive_t *drive, struct request *rq)
struct ide_taskfile *tf = &cmd.tf;
memset(&cmd, 0, sizeof(cmd));
if (rq->cmd[0] == REQ_PARK_HEADS) {
if (scsi_req(rq)->cmd[0] == REQ_PARK_HEADS) {
drive->sleep = *(unsigned long *)rq->special;
drive->dev_flags |= IDE_DFLAG_SLEEPING;
tf->command = ATA_CMD_IDLEIMMEDIATE;


@ -18,8 +18,9 @@ int generic_ide_suspend(struct device *dev, pm_message_t mesg)
}
memset(&rqpm, 0, sizeof(rqpm));
rq = blk_get_request(drive->queue, READ, __GFP_RECLAIM);
rq->cmd_type = REQ_TYPE_ATA_PM_SUSPEND;
rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
scsi_req_init(rq);
ide_req(rq)->type = ATA_PRIV_PM_SUSPEND;
rq->special = &rqpm;
rqpm.pm_step = IDE_PM_START_SUSPEND;
if (mesg.event == PM_EVENT_PRETHAW)
@ -88,8 +89,9 @@ int generic_ide_resume(struct device *dev)
}
memset(&rqpm, 0, sizeof(rqpm));
rq = blk_get_request(drive->queue, READ, __GFP_RECLAIM);
rq->cmd_type = REQ_TYPE_ATA_PM_RESUME;
rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
scsi_req_init(rq);
ide_req(rq)->type = ATA_PRIV_PM_RESUME;
rq->rq_flags |= RQF_PREEMPT;
rq->special = &rqpm;
rqpm.pm_step = IDE_PM_START_RESUME;
@ -221,10 +223,10 @@ void ide_complete_pm_rq(ide_drive_t *drive, struct request *rq)
#ifdef DEBUG_PM
printk("%s: completing PM request, %s\n", drive->name,
(rq->cmd_type == REQ_TYPE_ATA_PM_SUSPEND) ? "suspend" : "resume");
(ide_req(rq)->type == ATA_PRIV_PM_SUSPEND) ? "suspend" : "resume");
#endif
spin_lock_irqsave(q->queue_lock, flags);
if (rq->cmd_type == REQ_TYPE_ATA_PM_SUSPEND)
if (ide_req(rq)->type == ATA_PRIV_PM_SUSPEND)
blk_stop_queue(q);
else
drive->dev_flags &= ~IDE_DFLAG_BLOCKED;
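PM requests become ordinary REQ_OP_DRV_IN requests tagged with a private type, so telling suspend from resume now means looking at ide_req(rq)->type. A small sketch mirroring the checks in these hunks (is_pm_suspend_rq is an invented helper name):

#include <linux/blkdev.h>
#include <linux/ide.h>

/* Sketch: a PM suspend request is a private request with the PM-suspend type. */
static bool is_pm_suspend_rq(struct request *rq)
{
    return blk_rq_is_private(rq) &&
           ide_req(rq)->type == ATA_PRIV_PM_SUSPEND;
}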
@ -240,11 +242,13 @@ void ide_check_pm_state(ide_drive_t *drive, struct request *rq)
{
struct ide_pm_state *pm = rq->special;
if (rq->cmd_type == REQ_TYPE_ATA_PM_SUSPEND &&
if (blk_rq_is_private(rq) &&
ide_req(rq)->type == ATA_PRIV_PM_SUSPEND &&
pm->pm_step == IDE_PM_START_SUSPEND)
/* Mark drive blocked when starting the suspend sequence. */
drive->dev_flags |= IDE_DFLAG_BLOCKED;
else if (rq->cmd_type == REQ_TYPE_ATA_PM_RESUME &&
else if (blk_rq_is_private(rq) &&
ide_req(rq)->type == ATA_PRIV_PM_RESUME &&
pm->pm_step == IDE_PM_START_RESUME) {
/*
* The first thing we do on wakeup is to wait for BSY bit to


@ -741,6 +741,14 @@ static void ide_port_tune_devices(ide_hwif_t *hwif)
}
}
static int ide_init_rq(struct request_queue *q, struct request *rq, gfp_t gfp)
{
struct ide_request *req = blk_mq_rq_to_pdu(rq);
req->sreq.sense = req->sense;
return 0;
}
/*
* init request queue
*/
@ -758,11 +766,18 @@ static int ide_init_queue(ide_drive_t *drive)
* limits and LBA48 we could raise it but as yet
* do not.
*/
q = blk_init_queue_node(do_ide_request, NULL, hwif_to_node(hwif));
q = blk_alloc_queue_node(GFP_KERNEL, hwif_to_node(hwif));
if (!q)
return 1;
q->request_fn = do_ide_request;
q->init_rq_fn = ide_init_rq;
q->cmd_size = sizeof(struct ide_request);
if (blk_init_allocated_queue(q) < 0) {
blk_cleanup_queue(q);
return 1;
}
q->queuedata = drive;
blk_queue_segment_boundary(q, 0xffff);
@ -1131,10 +1146,12 @@ static void ide_port_init_devices_data(ide_hwif_t *hwif)
ide_port_for_each_dev(i, drive, hwif) {
u8 j = (hwif->index * MAX_DRIVES) + i;
u16 *saved_id = drive->id;
struct request *saved_sense_rq = drive->sense_rq;
memset(drive, 0, sizeof(*drive));
memset(saved_id, 0, SECTOR_SIZE);
drive->id = saved_id;
drive->sense_rq = saved_sense_rq;
drive->media = ide_disk;
drive->select = (i << 4) | ATA_DEVICE_OBS;
@ -1241,6 +1258,7 @@ static void ide_port_free_devices(ide_hwif_t *hwif)
int i;
ide_port_for_each_dev(i, drive, hwif) {
kfree(drive->sense_rq);
kfree(drive->id);
kfree(drive);
}
@ -1248,11 +1266,10 @@ static void ide_port_free_devices(ide_hwif_t *hwif)
static int ide_port_alloc_devices(ide_hwif_t *hwif, int node)
{
ide_drive_t *drive;
int i;
for (i = 0; i < MAX_DRIVES; i++) {
ide_drive_t *drive;
drive = kzalloc_node(sizeof(*drive), GFP_KERNEL, node);
if (drive == NULL)
goto out_nomem;
@ -1267,12 +1284,21 @@ static int ide_port_alloc_devices(ide_hwif_t *hwif, int node)
*/
drive->id = kzalloc_node(SECTOR_SIZE, GFP_KERNEL, node);
if (drive->id == NULL)
goto out_nomem;
goto out_free_drive;
drive->sense_rq = kmalloc(sizeof(struct request) +
sizeof(struct ide_request), GFP_KERNEL);
if (!drive->sense_rq)
goto out_free_id;
hwif->devices[i] = drive;
}
return 0;
out_free_id:
kfree(drive->id);
out_free_drive:
kfree(drive);
out_nomem:
ide_port_free_devices(hwif);
return -ENOMEM;
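ide-probe now allocates the queue itself, declares a per-request payload (struct ide_request, embedding the scsi_request plus a sense buffer), and uses init_rq_fn to point the scsi_request at that buffer. Condensed, the sequence from the hunks above looks roughly like this (the my_* names are illustrative):

#include <linux/blkdev.h>
#include <linux/blk-mq.h>
#include <linux/ide.h>

static int my_init_rq(struct request_queue *q, struct request *rq, gfp_t gfp)
{
    struct ide_request *req = blk_mq_rq_to_pdu(rq);

    /* point the embedded scsi_request at the per-request sense buffer */
    req->sreq.sense = req->sense;
    return 0;
}

static struct request_queue *my_alloc_ide_queue(int node,
                                                request_fn_proc *request_fn)
{
    struct request_queue *q;

    q = blk_alloc_queue_node(GFP_KERNEL, node);
    if (!q)
        return NULL;

    q->request_fn = request_fn;
    q->init_rq_fn = my_init_rq;
    q->cmd_size   = sizeof(struct ide_request);  /* per-request payload */

    if (blk_init_allocated_queue(q) < 0) {
        blk_cleanup_queue(q);
        return NULL;
    }
    return q;
}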


@ -282,7 +282,7 @@ static void idetape_analyze_error(ide_drive_t *drive)
/* correct remaining bytes to transfer */
if (pc->flags & PC_FLAG_DMA_ERROR)
rq->resid_len = tape->blk_size * get_unaligned_be32(&sense[3]);
scsi_req(rq)->resid_len = tape->blk_size * get_unaligned_be32(&sense[3]);
/*
* If error was the result of a zero-length read or write command,
@ -316,7 +316,7 @@ static void idetape_analyze_error(ide_drive_t *drive)
pc->flags |= PC_FLAG_ABORT;
}
if (!(pc->flags & PC_FLAG_ABORT) &&
(blk_rq_bytes(rq) - rq->resid_len))
(blk_rq_bytes(rq) - scsi_req(rq)->resid_len))
pc->retries = IDETAPE_MAX_PC_RETRIES + 1;
}
}
@ -348,7 +348,7 @@ static int ide_tape_callback(ide_drive_t *drive, int dsc)
"itself - Aborting request!\n");
} else if (pc->c[0] == READ_6 || pc->c[0] == WRITE_6) {
unsigned int blocks =
(blk_rq_bytes(rq) - rq->resid_len) / tape->blk_size;
(blk_rq_bytes(rq) - scsi_req(rq)->resid_len) / tape->blk_size;
tape->avg_size += blocks * tape->blk_size;
@ -560,7 +560,7 @@ static void ide_tape_create_rw_cmd(idetape_tape_t *tape,
pc->flags |= PC_FLAG_WRITING;
}
memcpy(rq->cmd, pc->c, 12);
memcpy(scsi_req(rq)->cmd, pc->c, 12);
}
static ide_startstop_t idetape_do_request(ide_drive_t *drive,
@ -570,14 +570,16 @@ static ide_startstop_t idetape_do_request(ide_drive_t *drive,
idetape_tape_t *tape = drive->driver_data;
struct ide_atapi_pc *pc = NULL;
struct ide_cmd cmd;
struct scsi_request *req = scsi_req(rq);
u8 stat;
ide_debug_log(IDE_DBG_RQ, "cmd: 0x%x, sector: %llu, nr_sectors: %u",
rq->cmd[0], (unsigned long long)blk_rq_pos(rq),
req->cmd[0], (unsigned long long)blk_rq_pos(rq),
blk_rq_sectors(rq));
BUG_ON(!(rq->cmd_type == REQ_TYPE_DRV_PRIV ||
rq->cmd_type == REQ_TYPE_ATA_SENSE));
BUG_ON(!blk_rq_is_private(rq));
BUG_ON(ide_req(rq)->type != ATA_PRIV_MISC &&
ide_req(rq)->type != ATA_PRIV_SENSE);
/* Retry a failed packet command */
if (drive->failed_pc && drive->pc->c[0] == REQUEST_SENSE) {
@ -592,7 +594,7 @@ static ide_startstop_t idetape_do_request(ide_drive_t *drive,
stat = hwif->tp_ops->read_status(hwif);
if ((drive->dev_flags & IDE_DFLAG_DSC_OVERLAP) == 0 &&
(rq->cmd[13] & REQ_IDETAPE_PC2) == 0)
(req->cmd[13] & REQ_IDETAPE_PC2) == 0)
drive->atapi_flags |= IDE_AFLAG_IGNORE_DSC;
if (drive->dev_flags & IDE_DFLAG_POST_RESET) {
@ -609,7 +611,7 @@ static ide_startstop_t idetape_do_request(ide_drive_t *drive,
} else if (time_after(jiffies, tape->dsc_timeout)) {
printk(KERN_ERR "ide-tape: %s: DSC timeout\n",
tape->name);
if (rq->cmd[13] & REQ_IDETAPE_PC2) {
if (req->cmd[13] & REQ_IDETAPE_PC2) {
idetape_media_access_finished(drive);
return ide_stopped;
} else {
@ -626,23 +628,23 @@ static ide_startstop_t idetape_do_request(ide_drive_t *drive,
tape->postponed_rq = false;
}
if (rq->cmd[13] & REQ_IDETAPE_READ) {
if (req->cmd[13] & REQ_IDETAPE_READ) {
pc = &tape->queued_pc;
ide_tape_create_rw_cmd(tape, pc, rq, READ_6);
goto out;
}
if (rq->cmd[13] & REQ_IDETAPE_WRITE) {
if (req->cmd[13] & REQ_IDETAPE_WRITE) {
pc = &tape->queued_pc;
ide_tape_create_rw_cmd(tape, pc, rq, WRITE_6);
goto out;
}
if (rq->cmd[13] & REQ_IDETAPE_PC1) {
if (req->cmd[13] & REQ_IDETAPE_PC1) {
pc = (struct ide_atapi_pc *)rq->special;
rq->cmd[13] &= ~(REQ_IDETAPE_PC1);
rq->cmd[13] |= REQ_IDETAPE_PC2;
req->cmd[13] &= ~(REQ_IDETAPE_PC1);
req->cmd[13] |= REQ_IDETAPE_PC2;
goto out;
}
if (rq->cmd[13] & REQ_IDETAPE_PC2) {
if (req->cmd[13] & REQ_IDETAPE_PC2) {
idetape_media_access_finished(drive);
return ide_stopped;
}
@ -852,9 +854,10 @@ static int idetape_queue_rw_tail(ide_drive_t *drive, int cmd, int size)
BUG_ON(cmd != REQ_IDETAPE_READ && cmd != REQ_IDETAPE_WRITE);
BUG_ON(size < 0 || size % tape->blk_size);
rq = blk_get_request(drive->queue, READ, __GFP_RECLAIM);
rq->cmd_type = REQ_TYPE_DRV_PRIV;
rq->cmd[13] = cmd;
rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
scsi_req_init(rq);
ide_req(rq)->type = ATA_PRIV_MISC;
scsi_req(rq)->cmd[13] = cmd;
rq->rq_disk = tape->disk;
rq->__sector = tape->first_frame;
@ -868,7 +871,7 @@ static int idetape_queue_rw_tail(ide_drive_t *drive, int cmd, int size)
blk_execute_rq(drive->queue, tape->disk, rq, 0);
/* calculate the number of transferred bytes and update buffer state */
size -= rq->resid_len;
size -= scsi_req(rq)->resid_len;
tape->cur = tape->buf;
if (cmd == REQ_IDETAPE_READ)
tape->valid = size;


@ -428,10 +428,12 @@ int ide_raw_taskfile(ide_drive_t *drive, struct ide_cmd *cmd, u8 *buf,
{
struct request *rq;
int error;
int rw = !(cmd->tf_flags & IDE_TFLAG_WRITE) ? READ : WRITE;
rq = blk_get_request(drive->queue, rw, __GFP_RECLAIM);
rq->cmd_type = REQ_TYPE_ATA_TASKFILE;
rq = blk_get_request(drive->queue,
(cmd->tf_flags & IDE_TFLAG_WRITE) ?
REQ_OP_DRV_OUT : REQ_OP_DRV_IN, __GFP_RECLAIM);
scsi_req_init(rq);
ide_req(rq)->type = ATA_PRIV_TASKFILE;
/*
* (ks) We transfer currently only whole sectors.


@ -54,7 +54,7 @@
#define DRV_NAME "sis5513"
/* registers layout and init values are chipset family dependent */
#undef ATA_16
#define ATA_16 0x01
#define ATA_33 0x02
#define ATA_66 0x03


@ -26,15 +26,6 @@ config NVM_DEBUG
It is required to create/remove targets without IOCTLs.
config NVM_GENNVM
tristate "General Non-Volatile Memory Manager for Open-Channel SSDs"
---help---
Non-volatile memory media manager for Open-Channel SSDs that implements
physical media metadata management and block provisioning API.
This is the standard media manager for using Open-Channel SSDs, and
required for targets to be instantiated.
config NVM_RRPC
tristate "Round-robin Hybrid Open-Channel SSD target"
---help---


@ -2,6 +2,5 @@
# Makefile for Open-Channel SSDs.
#
obj-$(CONFIG_NVM) := core.o sysblk.o
obj-$(CONFIG_NVM_GENNVM) += gennvm.o
obj-$(CONFIG_NVM) := core.o
obj-$(CONFIG_NVM_RRPC) += rrpc.o

File diff suppressed because it is too large.


@ -1,657 +0,0 @@
/*
* Copyright (C) 2015 Matias Bjorling <m@bjorling.me>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License version
* 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; see the file COPYING. If not, write to
* the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139,
* USA.
*
* Implementation of a general nvm manager for Open-Channel SSDs.
*/
#include "gennvm.h"
static struct nvm_target *gen_find_target(struct gen_dev *gn, const char *name)
{
struct nvm_target *tgt;
list_for_each_entry(tgt, &gn->targets, list)
if (!strcmp(name, tgt->disk->disk_name))
return tgt;
return NULL;
}
static const struct block_device_operations gen_fops = {
.owner = THIS_MODULE,
};
static int gen_reserve_luns(struct nvm_dev *dev, struct nvm_target *t,
int lun_begin, int lun_end)
{
int i;
for (i = lun_begin; i <= lun_end; i++) {
if (test_and_set_bit(i, dev->lun_map)) {
pr_err("nvm: lun %d already allocated\n", i);
goto err;
}
}
return 0;
err:
while (--i > lun_begin)
clear_bit(i, dev->lun_map);
return -EBUSY;
}
static void gen_release_luns_err(struct nvm_dev *dev, int lun_begin,
int lun_end)
{
int i;
for (i = lun_begin; i <= lun_end; i++)
WARN_ON(!test_and_clear_bit(i, dev->lun_map));
}
static void gen_remove_tgt_dev(struct nvm_tgt_dev *tgt_dev)
{
struct nvm_dev *dev = tgt_dev->parent;
struct gen_dev_map *dev_map = tgt_dev->map;
int i, j;
for (i = 0; i < dev_map->nr_chnls; i++) {
struct gen_ch_map *ch_map = &dev_map->chnls[i];
int *lun_offs = ch_map->lun_offs;
int ch = i + ch_map->ch_off;
for (j = 0; j < ch_map->nr_luns; j++) {
int lun = j + lun_offs[j];
int lunid = (ch * dev->geo.luns_per_chnl) + lun;
WARN_ON(!test_and_clear_bit(lunid, dev->lun_map));
}
kfree(ch_map->lun_offs);
}
kfree(dev_map->chnls);
kfree(dev_map);
kfree(tgt_dev->luns);
kfree(tgt_dev);
}
static struct nvm_tgt_dev *gen_create_tgt_dev(struct nvm_dev *dev,
int lun_begin, int lun_end)
{
struct nvm_tgt_dev *tgt_dev = NULL;
struct gen_dev_map *dev_rmap = dev->rmap;
struct gen_dev_map *dev_map;
struct ppa_addr *luns;
int nr_luns = lun_end - lun_begin + 1;
int luns_left = nr_luns;
int nr_chnls = nr_luns / dev->geo.luns_per_chnl;
int nr_chnls_mod = nr_luns % dev->geo.luns_per_chnl;
int bch = lun_begin / dev->geo.luns_per_chnl;
int blun = lun_begin % dev->geo.luns_per_chnl;
int lunid = 0;
int lun_balanced = 1;
int prev_nr_luns;
int i, j;
nr_chnls = nr_luns / dev->geo.luns_per_chnl;
nr_chnls = (nr_chnls_mod == 0) ? nr_chnls : nr_chnls + 1;
dev_map = kmalloc(sizeof(struct gen_dev_map), GFP_KERNEL);
if (!dev_map)
goto err_dev;
dev_map->chnls = kcalloc(nr_chnls, sizeof(struct gen_ch_map),
GFP_KERNEL);
if (!dev_map->chnls)
goto err_chnls;
luns = kcalloc(nr_luns, sizeof(struct ppa_addr), GFP_KERNEL);
if (!luns)
goto err_luns;
prev_nr_luns = (luns_left > dev->geo.luns_per_chnl) ?
dev->geo.luns_per_chnl : luns_left;
for (i = 0; i < nr_chnls; i++) {
struct gen_ch_map *ch_rmap = &dev_rmap->chnls[i + bch];
int *lun_roffs = ch_rmap->lun_offs;
struct gen_ch_map *ch_map = &dev_map->chnls[i];
int *lun_offs;
int luns_in_chnl = (luns_left > dev->geo.luns_per_chnl) ?
dev->geo.luns_per_chnl : luns_left;
if (lun_balanced && prev_nr_luns != luns_in_chnl)
lun_balanced = 0;
ch_map->ch_off = ch_rmap->ch_off = bch;
ch_map->nr_luns = luns_in_chnl;
lun_offs = kcalloc(luns_in_chnl, sizeof(int), GFP_KERNEL);
if (!lun_offs)
goto err_ch;
for (j = 0; j < luns_in_chnl; j++) {
luns[lunid].ppa = 0;
luns[lunid].g.ch = i;
luns[lunid++].g.lun = j;
lun_offs[j] = blun;
lun_roffs[j + blun] = blun;
}
ch_map->lun_offs = lun_offs;
/* when starting a new channel, lun offset is reset */
blun = 0;
luns_left -= luns_in_chnl;
}
dev_map->nr_chnls = nr_chnls;
tgt_dev = kmalloc(sizeof(struct nvm_tgt_dev), GFP_KERNEL);
if (!tgt_dev)
goto err_ch;
memcpy(&tgt_dev->geo, &dev->geo, sizeof(struct nvm_geo));
/* Target device only owns a portion of the physical device */
tgt_dev->geo.nr_chnls = nr_chnls;
tgt_dev->geo.nr_luns = nr_luns;
tgt_dev->geo.luns_per_chnl = (lun_balanced) ? prev_nr_luns : -1;
tgt_dev->total_secs = nr_luns * tgt_dev->geo.sec_per_lun;
tgt_dev->q = dev->q;
tgt_dev->map = dev_map;
tgt_dev->luns = luns;
memcpy(&tgt_dev->identity, &dev->identity, sizeof(struct nvm_id));
tgt_dev->parent = dev;
return tgt_dev;
err_ch:
while (--i > 0)
kfree(dev_map->chnls[i].lun_offs);
kfree(luns);
err_luns:
kfree(dev_map->chnls);
err_chnls:
kfree(dev_map);
err_dev:
return tgt_dev;
}
static int gen_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create)
{
struct gen_dev *gn = dev->mp;
struct nvm_ioctl_create_simple *s = &create->conf.s;
struct request_queue *tqueue;
struct gendisk *tdisk;
struct nvm_tgt_type *tt;
struct nvm_target *t;
struct nvm_tgt_dev *tgt_dev;
void *targetdata;
tt = nvm_find_target_type(create->tgttype, 1);
if (!tt) {
pr_err("nvm: target type %s not found\n", create->tgttype);
return -EINVAL;
}
mutex_lock(&gn->lock);
t = gen_find_target(gn, create->tgtname);
if (t) {
pr_err("nvm: target name already exists.\n");
mutex_unlock(&gn->lock);
return -EINVAL;
}
mutex_unlock(&gn->lock);
t = kmalloc(sizeof(struct nvm_target), GFP_KERNEL);
if (!t)
return -ENOMEM;
if (gen_reserve_luns(dev, t, s->lun_begin, s->lun_end))
goto err_t;
tgt_dev = gen_create_tgt_dev(dev, s->lun_begin, s->lun_end);
if (!tgt_dev) {
pr_err("nvm: could not create target device\n");
goto err_reserve;
}
tqueue = blk_alloc_queue_node(GFP_KERNEL, dev->q->node);
if (!tqueue)
goto err_dev;
blk_queue_make_request(tqueue, tt->make_rq);
tdisk = alloc_disk(0);
if (!tdisk)
goto err_queue;
sprintf(tdisk->disk_name, "%s", create->tgtname);
tdisk->flags = GENHD_FL_EXT_DEVT;
tdisk->major = 0;
tdisk->first_minor = 0;
tdisk->fops = &gen_fops;
tdisk->queue = tqueue;
targetdata = tt->init(tgt_dev, tdisk);
if (IS_ERR(targetdata))
goto err_init;
tdisk->private_data = targetdata;
tqueue->queuedata = targetdata;
blk_queue_max_hw_sectors(tqueue, 8 * dev->ops->max_phys_sect);
set_capacity(tdisk, tt->capacity(targetdata));
add_disk(tdisk);
t->type = tt;
t->disk = tdisk;
t->dev = tgt_dev;
mutex_lock(&gn->lock);
list_add_tail(&t->list, &gn->targets);
mutex_unlock(&gn->lock);
return 0;
err_init:
put_disk(tdisk);
err_queue:
blk_cleanup_queue(tqueue);
err_dev:
kfree(tgt_dev);
err_reserve:
gen_release_luns_err(dev, s->lun_begin, s->lun_end);
err_t:
kfree(t);
return -ENOMEM;
}
static void __gen_remove_target(struct nvm_target *t)
{
struct nvm_tgt_type *tt = t->type;
struct gendisk *tdisk = t->disk;
struct request_queue *q = tdisk->queue;
del_gendisk(tdisk);
blk_cleanup_queue(q);
if (tt->exit)
tt->exit(tdisk->private_data);
gen_remove_tgt_dev(t->dev);
put_disk(tdisk);
list_del(&t->list);
kfree(t);
}
/**
* gen_remove_tgt - Removes a target from the media manager
* @dev: device
* @remove: ioctl structure with target name to remove.
*
* Returns:
* 0: on success
* 1: on not found
* <0: on error
*/
static int gen_remove_tgt(struct nvm_dev *dev, struct nvm_ioctl_remove *remove)
{
struct gen_dev *gn = dev->mp;
struct nvm_target *t;
if (!gn)
return 1;
mutex_lock(&gn->lock);
t = gen_find_target(gn, remove->tgtname);
if (!t) {
mutex_unlock(&gn->lock);
return 1;
}
__gen_remove_target(t);
mutex_unlock(&gn->lock);
return 0;
}
static int gen_get_area(struct nvm_dev *dev, sector_t *lba, sector_t len)
{
struct nvm_geo *geo = &dev->geo;
struct gen_dev *gn = dev->mp;
struct gen_area *area, *prev, *next;
sector_t begin = 0;
sector_t max_sectors = (geo->sec_size * dev->total_secs) >> 9;
if (len > max_sectors)
return -EINVAL;
area = kmalloc(sizeof(struct gen_area), GFP_KERNEL);
if (!area)
return -ENOMEM;
prev = NULL;
spin_lock(&dev->lock);
list_for_each_entry(next, &gn->area_list, list) {
if (begin + len > next->begin) {
begin = next->end;
prev = next;
continue;
}
break;
}
if ((begin + len) > max_sectors) {
spin_unlock(&dev->lock);
kfree(area);
return -EINVAL;
}
area->begin = *lba = begin;
area->end = begin + len;
if (prev) /* insert into sorted order */
list_add(&area->list, &prev->list);
else
list_add(&area->list, &gn->area_list);
spin_unlock(&dev->lock);
return 0;
}
static void gen_put_area(struct nvm_dev *dev, sector_t begin)
{
struct gen_dev *gn = dev->mp;
struct gen_area *area;
spin_lock(&dev->lock);
list_for_each_entry(area, &gn->area_list, list) {
if (area->begin != begin)
continue;
list_del(&area->list);
spin_unlock(&dev->lock);
kfree(area);
return;
}
spin_unlock(&dev->lock);
}
static void gen_free(struct nvm_dev *dev)
{
kfree(dev->mp);
kfree(dev->rmap);
dev->mp = NULL;
}
static int gen_register(struct nvm_dev *dev)
{
struct gen_dev *gn;
struct gen_dev_map *dev_rmap;
int i, j;
if (!try_module_get(THIS_MODULE))
return -ENODEV;
gn = kzalloc(sizeof(struct gen_dev), GFP_KERNEL);
if (!gn)
goto err_gn;
dev_rmap = kmalloc(sizeof(struct gen_dev_map), GFP_KERNEL);
if (!dev_rmap)
goto err_rmap;
dev_rmap->chnls = kcalloc(dev->geo.nr_chnls, sizeof(struct gen_ch_map),
GFP_KERNEL);
if (!dev_rmap->chnls)
goto err_chnls;
for (i = 0; i < dev->geo.nr_chnls; i++) {
struct gen_ch_map *ch_rmap;
int *lun_roffs;
int luns_in_chnl = dev->geo.luns_per_chnl;
ch_rmap = &dev_rmap->chnls[i];
ch_rmap->ch_off = -1;
ch_rmap->nr_luns = luns_in_chnl;
lun_roffs = kcalloc(luns_in_chnl, sizeof(int), GFP_KERNEL);
if (!lun_roffs)
goto err_ch;
for (j = 0; j < luns_in_chnl; j++)
lun_roffs[j] = -1;
ch_rmap->lun_offs = lun_roffs;
}
gn->dev = dev;
gn->nr_luns = dev->geo.nr_luns;
INIT_LIST_HEAD(&gn->area_list);
mutex_init(&gn->lock);
INIT_LIST_HEAD(&gn->targets);
dev->mp = gn;
dev->rmap = dev_rmap;
return 1;
err_ch:
while (--i >= 0)
kfree(dev_rmap->chnls[i].lun_offs);
err_chnls:
kfree(dev_rmap);
err_rmap:
gen_free(dev);
err_gn:
module_put(THIS_MODULE);
return -ENOMEM;
}
static void gen_unregister(struct nvm_dev *dev)
{
struct gen_dev *gn = dev->mp;
struct nvm_target *t, *tmp;
mutex_lock(&gn->lock);
list_for_each_entry_safe(t, tmp, &gn->targets, list) {
if (t->dev->parent != dev)
continue;
__gen_remove_target(t);
}
mutex_unlock(&gn->lock);
gen_free(dev);
module_put(THIS_MODULE);
}
static int gen_map_to_dev(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *p)
{
struct gen_dev_map *dev_map = tgt_dev->map;
struct gen_ch_map *ch_map = &dev_map->chnls[p->g.ch];
int lun_off = ch_map->lun_offs[p->g.lun];
struct nvm_dev *dev = tgt_dev->parent;
struct gen_dev_map *dev_rmap = dev->rmap;
struct gen_ch_map *ch_rmap;
int lun_roff;
p->g.ch += ch_map->ch_off;
p->g.lun += lun_off;
ch_rmap = &dev_rmap->chnls[p->g.ch];
lun_roff = ch_rmap->lun_offs[p->g.lun];
if (unlikely(ch_rmap->ch_off < 0 || lun_roff < 0)) {
pr_err("nvm: corrupted device partition table\n");
return -EINVAL;
}
return 0;
}
static int gen_map_to_tgt(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *p)
{
struct nvm_dev *dev = tgt_dev->parent;
struct gen_dev_map *dev_rmap = dev->rmap;
struct gen_ch_map *ch_rmap = &dev_rmap->chnls[p->g.ch];
int lun_roff = ch_rmap->lun_offs[p->g.lun];
p->g.ch -= ch_rmap->ch_off;
p->g.lun -= lun_roff;
return 0;
}
static int gen_trans_rq(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd,
int flag)
{
gen_trans_fn *f;
int i;
int ret = 0;
f = (flag == TRANS_TGT_TO_DEV) ? gen_map_to_dev : gen_map_to_tgt;
if (rqd->nr_ppas == 1)
return f(tgt_dev, &rqd->ppa_addr);
for (i = 0; i < rqd->nr_ppas; i++) {
ret = f(tgt_dev, &rqd->ppa_list[i]);
if (ret)
goto out;
}
out:
return ret;
}
static void gen_end_io(struct nvm_rq *rqd)
{
struct nvm_tgt_dev *tgt_dev = rqd->dev;
struct nvm_tgt_instance *ins = rqd->ins;
/* Convert address space */
if (tgt_dev)
gen_trans_rq(tgt_dev, rqd, TRANS_DEV_TO_TGT);
ins->tt->end_io(rqd);
}
static int gen_submit_io(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd)
{
struct nvm_dev *dev = tgt_dev->parent;
if (!dev->ops->submit_io)
return -ENODEV;
/* Convert address space */
gen_trans_rq(tgt_dev, rqd, TRANS_TGT_TO_DEV);
nvm_generic_to_addr_mode(dev, rqd);
rqd->dev = tgt_dev;
rqd->end_io = gen_end_io;
return dev->ops->submit_io(dev, rqd);
}
static int gen_erase_blk(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *p,
int flags)
{
/* Convert address space */
gen_map_to_dev(tgt_dev, p);
return nvm_erase_ppa(tgt_dev->parent, p, 1, flags);
}
static struct ppa_addr gen_trans_ppa(struct nvm_tgt_dev *tgt_dev,
struct ppa_addr p, int direction)
{
gen_trans_fn *f;
struct ppa_addr ppa = p;
f = (direction == TRANS_TGT_TO_DEV) ? gen_map_to_dev : gen_map_to_tgt;
f(tgt_dev, &ppa);
return ppa;
}
static void gen_part_to_tgt(struct nvm_dev *dev, sector_t *entries,
int len)
{
struct nvm_geo *geo = &dev->geo;
struct gen_dev_map *dev_rmap = dev->rmap;
u64 i;
for (i = 0; i < len; i++) {
struct gen_ch_map *ch_rmap;
int *lun_roffs;
struct ppa_addr gaddr;
u64 pba = le64_to_cpu(entries[i]);
int off;
u64 diff;
if (!pba)
continue;
gaddr = linear_to_generic_addr(geo, pba);
ch_rmap = &dev_rmap->chnls[gaddr.g.ch];
lun_roffs = ch_rmap->lun_offs;
off = gaddr.g.ch * geo->luns_per_chnl + gaddr.g.lun;
diff = ((ch_rmap->ch_off * geo->luns_per_chnl) +
(lun_roffs[gaddr.g.lun])) * geo->sec_per_lun;
entries[i] -= cpu_to_le64(diff);
}
}
static struct nvmm_type gen = {
.name = "gennvm",
.version = {0, 1, 0},
.register_mgr = gen_register,
.unregister_mgr = gen_unregister,
.create_tgt = gen_create_tgt,
.remove_tgt = gen_remove_tgt,
.submit_io = gen_submit_io,
.erase_blk = gen_erase_blk,
.get_area = gen_get_area,
.put_area = gen_put_area,
.trans_ppa = gen_trans_ppa,
.part_to_tgt = gen_part_to_tgt,
};
static int __init gen_module_init(void)
{
return nvm_register_mgr(&gen);
}
static void gen_module_exit(void)
{
nvm_unregister_mgr(&gen);
}
module_init(gen_module_init);
module_exit(gen_module_exit);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("General media manager for Open-Channel SSDs");

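gen_map_to_dev() and gen_map_to_tgt() above translate a target-relative (channel, LUN) address into a device-absolute one by adding the per-channel and per-LUN offsets recorded for the target, and back again via the reverse map kept on the device. A minimal stand-alone sketch of that offset arithmetic, using simplified stand-in structures rather than the kernel's, is:

#include <stdio.h>

/* Stand-in for the per-channel map: where the target's channels start on
 * the device, plus one LUN offset per target-visible LUN. */
struct ch_map {
    int ch_off;      /* device channel offset for this target channel */
    int lun_offs[4]; /* device LUN offset for each target LUN */
};

struct addr {
    int ch;
    int lun;
};

/* target-relative -> device-absolute: add the recorded offsets */
static void map_to_dev(const struct ch_map *map, struct addr *p)
{
    int lun_off = map[p->ch].lun_offs[p->lun];

    p->ch += map[p->ch].ch_off;
    p->lun += lun_off;
}

/* device-absolute -> target-relative: subtract, using a reverse map that
 * is indexed by device channel/LUN instead of target channel/LUN */
static void map_to_tgt(const struct ch_map *rmap, struct addr *p)
{
    int lun_roff = rmap[p->ch].lun_offs[p->lun];

    p->ch -= rmap[p->ch].ch_off;
    p->lun -= lun_roff;
}

int main(void)
{
    /* example: the target owns device channels 2..3, LUNs 1..4 of each */
    struct ch_map map[2] = {
        { .ch_off = 2, .lun_offs = { 1, 1, 1, 1 } },
        { .ch_off = 2, .lun_offs = { 1, 1, 1, 1 } },
    };
    /* reverse map, indexed by device coordinates */
    struct ch_map rmap[4] = {
        [2] = { .ch_off = 2, .lun_offs = { [1] = 1, [2] = 1, [3] = 1 } },
        [3] = { .ch_off = 2, .lun_offs = { [1] = 1, [2] = 1, [3] = 1 } },
    };
    struct addr p = { .ch = 1, .lun = 0 };

    map_to_dev(map, &p);
    printf("device addr: ch %d lun %d\n", p.ch, p.lun); /* ch 3 lun 1 */
    map_to_tgt(rmap, &p);
    printf("target addr: ch %d lun %d\n", p.ch, p.lun); /* ch 1 lun 0 */
    return 0;
}

The reverse map is also what lets gen_map_to_dev() flag a corrupted partition table when a looked-up offset comes back negative.
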
@@ -1,62 +0,0 @@
/*
* Copyright: Matias Bjorling <mb@bjorling.me>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License version
* 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
*/
#ifndef GENNVM_H_
#define GENNVM_H_
#include <linux/module.h>
#include <linux/vmalloc.h>
#include <linux/lightnvm.h>
struct gen_dev {
struct nvm_dev *dev;
int nr_luns;
struct list_head area_list;
struct mutex lock;
struct list_head targets;
};
/* Map between virtual and physical channel and lun */
struct gen_ch_map {
int ch_off;
int nr_luns;
int *lun_offs;
};
struct gen_dev_map {
struct gen_ch_map *chnls;
int nr_chnls;
};
struct gen_area {
struct list_head list;
sector_t begin;
sector_t end; /* end is excluded */
};
static inline void *ch_map_to_lun_offs(struct gen_ch_map *ch_map)
{
return ch_map + 1;
}
typedef int (gen_trans_fn)(struct nvm_tgt_dev *, struct ppa_addr *);
#define gen_for_each_lun(bm, lun, i) \
for ((i) = 0, lun = &(bm)->luns[0]; \
(i) < (bm)->nr_luns; (i)++, lun = &(bm)->luns[(i)])
#endif /* GENNVM_H_ */

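ch_map_to_lun_offs() above relies on the lun_offs array being carved out of the same allocation, immediately behind each struct gen_ch_map, so that ch_map + 1 points at the first offset. A small stand-alone sketch of that layout, using plain malloc rather than the kernel allocators, is:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct ch_map {
    int ch_off;
    int nr_luns;
    /* int lun_offs[nr_luns] lives directly after this struct */
};

/* same trick as ch_map_to_lun_offs(): the offsets start right behind
 * the struct itself */
static int *lun_offs(struct ch_map *map)
{
    return (int *)(map + 1);
}

static struct ch_map *ch_map_alloc(int ch_off, int nr_luns)
{
    /* one allocation: header followed by nr_luns ints */
    struct ch_map *map = malloc(sizeof(*map) + nr_luns * sizeof(int));

    if (!map)
        return NULL;
    map->ch_off = ch_off;
    map->nr_luns = nr_luns;
    memset(lun_offs(map), 0, nr_luns * sizeof(int));
    return map;
}

int main(void)
{
    struct ch_map *map = ch_map_alloc(2, 4);

    if (!map)
        return 1;
    lun_offs(map)[3] = 7;
    printf("lun 3 offset: %d\n", lun_offs(map)[3]);
    free(map);
    return 0;
}

The point of the layout is that each channel map and its offsets live and die in a single allocation.
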
@@ -779,7 +779,7 @@ static void rrpc_end_io_write(struct rrpc *rrpc, struct rrpc_rq *rrqd,
static void rrpc_end_io(struct nvm_rq *rqd)
{
-struct rrpc *rrpc = container_of(rqd->ins, struct rrpc, instance);
+struct rrpc *rrpc = rqd->private;
struct nvm_tgt_dev *dev = rrpc->dev;
struct rrpc_rq *rrqd = nvm_rq_to_pdu(rqd);
uint8_t npages = rqd->nr_ppas;
@@ -972,8 +972,9 @@ static int rrpc_submit_io(struct rrpc *rrpc, struct bio *bio,
bio_get(bio);
rqd->bio = bio;
-rqd->ins = &rrpc->instance;
+rqd->private = rrpc;
rqd->nr_ppas = nr_pages;
+rqd->end_io = rrpc_end_io;
rrq->flags = flags;
err = nvm_submit_io(dev, rqd);
@@ -1532,7 +1533,6 @@ static void *rrpc_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk)
if (!rrpc)
return ERR_PTR(-ENOMEM);
-rrpc->instance.tt = &tt_rrpc;
rrpc->dev = dev;
rrpc->disk = tdisk;
@@ -1611,7 +1611,6 @@ static struct nvm_tgt_type tt_rrpc = {
.make_rq = rrpc_make_rq,
.capacity = rrpc_capacity,
-.end_io = rrpc_end_io,
.init = rrpc_init,
.exit = rrpc_exit,

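The rrpc hunks above drop the container_of() lookup on a shared rqd->ins instance in favour of stashing the target pointer in the request itself and giving every request its own completion callback. A stripped-down, stand-alone illustration of that pattern follows; the names are invented for the sketch, not the lightnvm API:

#include <stdio.h>

/* the request carries its own completion hook and an opaque owner
 * pointer, so the completion path needs no container_of() on a shared
 * instance embedded in the owner */
struct request {
    void (*end_io)(struct request *rq);
    void *private;
    int error;
};

struct target {
    const char *name;
};

static void target_end_io(struct request *rq)
{
    struct target *t = rq->private; /* direct lookup, no container_of */

    printf("%s: request completed, error=%d\n", t->name, rq->error);
}

static void submit(struct target *t, struct request *rq)
{
    rq->private = t;            /* instead of rq->ins = &t->instance */
    rq->end_io = target_end_io; /* per-request callback */
    /* ... hand rq to the lower layer, which eventually calls: */
    rq->error = 0;
    rq->end_io(rq);
}

int main(void)
{
    struct target t = { .name = "rrpc0" };
    struct request rq = { 0 };

    submit(&t, &rq);
    return 0;
}

With the callback carried by the request, the target type no longer needs a global end_io hook, which is why the .end_io entry drops out of tt_rrpc in the hunk above.
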
@@ -102,9 +102,6 @@ struct rrpc_lun {
};
struct rrpc {
-/* instance must be kept in top to resolve rrpc in unprep */
-struct nvm_tgt_instance instance;
struct nvm_tgt_dev *dev;
struct gendisk *disk;

@@ -1,733 +0,0 @@
/*
* Copyright (C) 2015 Matias Bjorling. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License version
* 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; see the file COPYING. If not, write to
* the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139,
* USA.
*
*/
#include <linux/lightnvm.h>
#define MAX_SYSBLKS 3 /* remember to update mapping scheme on change */
#define MAX_BLKS_PR_SYSBLK 2 /* 2 blks with 256 pages and 3000 erases
* enables ~1.5M updates per sysblk unit
*/
struct sysblk_scan {
/* A row is a collection of flash blocks for a system block. */
int nr_rows;
int row;
int act_blk[MAX_SYSBLKS];
int nr_ppas;
struct ppa_addr ppas[MAX_SYSBLKS * MAX_BLKS_PR_SYSBLK];/* all sysblks */
};
static inline int scan_ppa_idx(int row, int blkid)
{
return (row * MAX_BLKS_PR_SYSBLK) + blkid;
}
static void nvm_sysblk_to_cpu(struct nvm_sb_info *info,
struct nvm_system_block *sb)
{
info->seqnr = be32_to_cpu(sb->seqnr);
info->erase_cnt = be32_to_cpu(sb->erase_cnt);
info->version = be16_to_cpu(sb->version);
strncpy(info->mmtype, sb->mmtype, NVM_MMTYPE_LEN);
info->fs_ppa.ppa = be64_to_cpu(sb->fs_ppa);
}
static void nvm_cpu_to_sysblk(struct nvm_system_block *sb,
struct nvm_sb_info *info)
{
sb->magic = cpu_to_be32(NVM_SYSBLK_MAGIC);
sb->seqnr = cpu_to_be32(info->seqnr);
sb->erase_cnt = cpu_to_be32(info->erase_cnt);
sb->version = cpu_to_be16(info->version);
strncpy(sb->mmtype, info->mmtype, NVM_MMTYPE_LEN);
sb->fs_ppa = cpu_to_be64(info->fs_ppa.ppa);
}
static int nvm_setup_sysblks(struct nvm_dev *dev, struct ppa_addr *sysblk_ppas)
{
struct nvm_geo *geo = &dev->geo;
int nr_rows = min_t(int, MAX_SYSBLKS, geo->nr_chnls);
int i;
for (i = 0; i < nr_rows; i++)
sysblk_ppas[i].ppa = 0;
/* if possible, place sysblk at first channel, middle channel and last
* channel of the device. If not, create only one or two sys blocks
*/
switch (geo->nr_chnls) {
case 2:
sysblk_ppas[1].g.ch = 1;
/* fall-through */
case 1:
sysblk_ppas[0].g.ch = 0;
break;
default:
sysblk_ppas[0].g.ch = 0;
sysblk_ppas[1].g.ch = geo->nr_chnls / 2;
sysblk_ppas[2].g.ch = geo->nr_chnls - 1;
break;
}
return nr_rows;
}
static void nvm_setup_sysblk_scan(struct nvm_dev *dev, struct sysblk_scan *s,
struct ppa_addr *sysblk_ppas)
{
memset(s, 0, sizeof(struct sysblk_scan));
s->nr_rows = nvm_setup_sysblks(dev, sysblk_ppas);
}
static int sysblk_get_free_blks(struct nvm_dev *dev, struct ppa_addr ppa,
u8 *blks, int nr_blks,
struct sysblk_scan *s)
{
struct ppa_addr *sppa;
int i, blkid = 0;
nr_blks = nvm_bb_tbl_fold(dev, blks, nr_blks);
if (nr_blks < 0)
return nr_blks;
for (i = 0; i < nr_blks; i++) {
if (blks[i] == NVM_BLK_T_HOST)
return -EEXIST;
if (blks[i] != NVM_BLK_T_FREE)
continue;
sppa = &s->ppas[scan_ppa_idx(s->row, blkid)];
sppa->g.ch = ppa.g.ch;
sppa->g.lun = ppa.g.lun;
sppa->g.blk = i;
s->nr_ppas++;
blkid++;
pr_debug("nvm: use (%u %u %u) as sysblk\n",
sppa->g.ch, sppa->g.lun, sppa->g.blk);
if (blkid > MAX_BLKS_PR_SYSBLK - 1)
return 0;
}
pr_err("nvm: sysblk failed get sysblk\n");
return -EINVAL;
}
static int sysblk_get_host_blks(struct nvm_dev *dev, struct ppa_addr ppa,
u8 *blks, int nr_blks,
struct sysblk_scan *s)
{
int i, nr_sysblk = 0;
nr_blks = nvm_bb_tbl_fold(dev, blks, nr_blks);
if (nr_blks < 0)
return nr_blks;
for (i = 0; i < nr_blks; i++) {
if (blks[i] != NVM_BLK_T_HOST)
continue;
if (s->nr_ppas == MAX_BLKS_PR_SYSBLK * MAX_SYSBLKS) {
pr_err("nvm: too many host blks\n");
return -EINVAL;
}
ppa.g.blk = i;
s->ppas[scan_ppa_idx(s->row, nr_sysblk)] = ppa;
s->nr_ppas++;
nr_sysblk++;
}
return 0;
}
static int nvm_get_all_sysblks(struct nvm_dev *dev, struct sysblk_scan *s,
struct ppa_addr *ppas, int get_free)
{
struct nvm_geo *geo = &dev->geo;
int i, nr_blks, ret = 0;
u8 *blks;
s->nr_ppas = 0;
nr_blks = geo->blks_per_lun * geo->plane_mode;
blks = kmalloc(nr_blks, GFP_KERNEL);
if (!blks)
return -ENOMEM;
for (i = 0; i < s->nr_rows; i++) {
s->row = i;
ret = nvm_get_bb_tbl(dev, ppas[i], blks);
if (ret) {
pr_err("nvm: failed bb tbl for ppa (%u %u)\n",
ppas[i].g.ch,
ppas[i].g.blk);
goto err_get;
}
if (get_free)
ret = sysblk_get_free_blks(dev, ppas[i], blks, nr_blks,
s);
else
ret = sysblk_get_host_blks(dev, ppas[i], blks, nr_blks,
s);
if (ret)
goto err_get;
}
err_get:
kfree(blks);
return ret;
}
/*
* scans a block for latest sysblk.
* Returns:
* 0 - newer sysblk not found. PPA is updated to latest page.
* 1 - newer sysblk found and stored in *cur. PPA is updated to
* next valid page.
* <0- error.
*/
static int nvm_scan_block(struct nvm_dev *dev, struct ppa_addr *ppa,
struct nvm_system_block *sblk)
{
struct nvm_geo *geo = &dev->geo;
struct nvm_system_block *cur;
int pg, ret, found = 0;
/* the full buffer for a flash page is allocated. Only the first of it
* contains the system block information
*/
cur = kmalloc(geo->pfpg_size, GFP_KERNEL);
if (!cur)
return -ENOMEM;
/* perform linear scan through the block */
for (pg = 0; pg < dev->lps_per_blk; pg++) {
ppa->g.pg = ppa_to_slc(dev, pg);
ret = nvm_submit_ppa(dev, ppa, 1, NVM_OP_PREAD, NVM_IO_SLC_MODE,
cur, geo->pfpg_size);
if (ret) {
if (ret == NVM_RSP_ERR_EMPTYPAGE) {
pr_debug("nvm: sysblk scan empty ppa (%u %u %u %u)\n",
ppa->g.ch,
ppa->g.lun,
ppa->g.blk,
ppa->g.pg);
break;
}
pr_err("nvm: read failed (%x) for ppa (%u %u %u %u)",
ret,
ppa->g.ch,
ppa->g.lun,
ppa->g.blk,
ppa->g.pg);
break; /* if we can't read a page, continue to the
* next blk
*/
}
if (be32_to_cpu(cur->magic) != NVM_SYSBLK_MAGIC) {
pr_debug("nvm: scan break for ppa (%u %u %u %u)\n",
ppa->g.ch,
ppa->g.lun,
ppa->g.blk,
ppa->g.pg);
break; /* last valid page already found */
}
if (be32_to_cpu(cur->seqnr) < be32_to_cpu(sblk->seqnr))
continue;
memcpy(sblk, cur, sizeof(struct nvm_system_block));
found = 1;
}
kfree(cur);
return found;
}
static int nvm_sysblk_set_bb_tbl(struct nvm_dev *dev, struct sysblk_scan *s,
int type)
{
return nvm_set_bb_tbl(dev, s->ppas, s->nr_ppas, type);
}
static int nvm_write_and_verify(struct nvm_dev *dev, struct nvm_sb_info *info,
struct sysblk_scan *s)
{
struct nvm_geo *geo = &dev->geo;
struct nvm_system_block nvmsb;
void *buf;
int i, sect, ret = 0;
struct ppa_addr *ppas;
nvm_cpu_to_sysblk(&nvmsb, info);
buf = kzalloc(geo->pfpg_size, GFP_KERNEL);
if (!buf)
return -ENOMEM;
memcpy(buf, &nvmsb, sizeof(struct nvm_system_block));
ppas = kcalloc(geo->sec_per_pg, sizeof(struct ppa_addr), GFP_KERNEL);
if (!ppas) {
ret = -ENOMEM;
goto err;
}
/* Write and verify */
for (i = 0; i < s->nr_rows; i++) {
ppas[0] = s->ppas[scan_ppa_idx(i, s->act_blk[i])];
pr_debug("nvm: writing sysblk to ppa (%u %u %u %u)\n",
ppas[0].g.ch,
ppas[0].g.lun,
ppas[0].g.blk,
ppas[0].g.pg);
/* Expand to all sectors within a flash page */
if (geo->sec_per_pg > 1) {
for (sect = 1; sect < geo->sec_per_pg; sect++) {
ppas[sect].ppa = ppas[0].ppa;
ppas[sect].g.sec = sect;
}
}
ret = nvm_submit_ppa(dev, ppas, geo->sec_per_pg, NVM_OP_PWRITE,
NVM_IO_SLC_MODE, buf, geo->pfpg_size);
if (ret) {
pr_err("nvm: sysblk failed program (%u %u %u)\n",
ppas[0].g.ch,
ppas[0].g.lun,
ppas[0].g.blk);
break;
}
ret = nvm_submit_ppa(dev, ppas, geo->sec_per_pg, NVM_OP_PREAD,
NVM_IO_SLC_MODE, buf, geo->pfpg_size);
if (ret) {
pr_err("nvm: sysblk failed read (%u %u %u)\n",
ppas[0].g.ch,
ppas[0].g.lun,
ppas[0].g.blk);
break;
}
if (memcmp(buf, &nvmsb, sizeof(struct nvm_system_block))) {
pr_err("nvm: sysblk failed verify (%u %u %u)\n",
ppas[0].g.ch,
ppas[0].g.lun,
ppas[0].g.blk);
ret = -EINVAL;
break;
}
}
kfree(ppas);
err:
kfree(buf);
return ret;
}
static int nvm_prepare_new_sysblks(struct nvm_dev *dev, struct sysblk_scan *s)
{
int i, ret;
unsigned long nxt_blk;
struct ppa_addr *ppa;
for (i = 0; i < s->nr_rows; i++) {
nxt_blk = (s->act_blk[i] + 1) % MAX_BLKS_PR_SYSBLK;
ppa = &s->ppas[scan_ppa_idx(i, nxt_blk)];
ppa->g.pg = ppa_to_slc(dev, 0);
ret = nvm_erase_ppa(dev, ppa, 1, 0);
if (ret)
return ret;
s->act_blk[i] = nxt_blk;
}
return 0;
}
int nvm_get_sysblock(struct nvm_dev *dev, struct nvm_sb_info *info)
{
struct ppa_addr sysblk_ppas[MAX_SYSBLKS];
struct sysblk_scan s;
struct nvm_system_block *cur;
int i, j, found = 0;
int ret = -ENOMEM;
/*
* 1. setup sysblk locations
* 2. get bad block list
* 3. filter on host-specific (type 3)
* 4. iterate through all and find the highest seq nr.
* 5. return superblock information
*/
if (!dev->ops->get_bb_tbl)
return -EINVAL;
nvm_setup_sysblk_scan(dev, &s, sysblk_ppas);
mutex_lock(&dev->mlock);
ret = nvm_get_all_sysblks(dev, &s, sysblk_ppas, 0);
if (ret)
goto err_sysblk;
/* no sysblocks initialized */
if (!s.nr_ppas)
goto err_sysblk;
cur = kzalloc(sizeof(struct nvm_system_block), GFP_KERNEL);
if (!cur)
goto err_sysblk;
/* find the latest block across all sysblocks */
for (i = 0; i < s.nr_rows; i++) {
for (j = 0; j < MAX_BLKS_PR_SYSBLK; j++) {
struct ppa_addr ppa = s.ppas[scan_ppa_idx(i, j)];
ret = nvm_scan_block(dev, &ppa, cur);
if (ret > 0)
found = 1;
else if (ret < 0)
break;
}
}
nvm_sysblk_to_cpu(info, cur);
kfree(cur);
err_sysblk:
mutex_unlock(&dev->mlock);
if (found)
return 1;
return ret;
}
int nvm_update_sysblock(struct nvm_dev *dev, struct nvm_sb_info *new)
{
/* 1. for each latest superblock
* 2. if room
* a. write new flash page entry with the updated information
* 3. if no room
* a. find next available block on lun (linear search)
* if none, continue to next lun
* if none at all, report error. also report that it wasn't
* possible to write to all superblocks.
* c. write data to block.
*/
struct ppa_addr sysblk_ppas[MAX_SYSBLKS];
struct sysblk_scan s;
struct nvm_system_block *cur;
int i, j, ppaidx, found = 0;
int ret = -ENOMEM;
if (!dev->ops->get_bb_tbl)
return -EINVAL;
nvm_setup_sysblk_scan(dev, &s, sysblk_ppas);
mutex_lock(&dev->mlock);
ret = nvm_get_all_sysblks(dev, &s, sysblk_ppas, 0);
if (ret)
goto err_sysblk;
cur = kzalloc(sizeof(struct nvm_system_block), GFP_KERNEL);
if (!cur)
goto err_sysblk;
/* Get the latest sysblk for each sysblk row */
for (i = 0; i < s.nr_rows; i++) {
found = 0;
for (j = 0; j < MAX_BLKS_PR_SYSBLK; j++) {
ppaidx = scan_ppa_idx(i, j);
ret = nvm_scan_block(dev, &s.ppas[ppaidx], cur);
if (ret > 0) {
s.act_blk[i] = j;
found = 1;
} else if (ret < 0)
break;
}
}
if (!found) {
pr_err("nvm: no valid sysblks found to update\n");
ret = -EINVAL;
goto err_cur;
}
/*
* All sysblocks found. Check that they have same page id in their flash
* blocks
*/
for (i = 1; i < s.nr_rows; i++) {
struct ppa_addr l = s.ppas[scan_ppa_idx(0, s.act_blk[0])];
struct ppa_addr r = s.ppas[scan_ppa_idx(i, s.act_blk[i])];
if (l.g.pg != r.g.pg) {
pr_err("nvm: sysblks not on same page. Previous update failed.\n");
ret = -EINVAL;
goto err_cur;
}
}
/*
* Check that there haven't been another update to the seqnr since we
* began
*/
if ((new->seqnr - 1) != be32_to_cpu(cur->seqnr)) {
pr_err("nvm: seq is not sequential\n");
ret = -EINVAL;
goto err_cur;
}
/*
* When all pages in a block has been written, a new block is selected
* and writing is performed on the new block.
*/
if (s.ppas[scan_ppa_idx(0, s.act_blk[0])].g.pg ==
dev->lps_per_blk - 1) {
ret = nvm_prepare_new_sysblks(dev, &s);
if (ret)
goto err_cur;
}
ret = nvm_write_and_verify(dev, new, &s);
err_cur:
kfree(cur);
err_sysblk:
mutex_unlock(&dev->mlock);
return ret;
}
int nvm_init_sysblock(struct nvm_dev *dev, struct nvm_sb_info *info)
{
struct nvm_geo *geo = &dev->geo;
struct ppa_addr sysblk_ppas[MAX_SYSBLKS];
struct sysblk_scan s;
int ret;
/*
* 1. select master blocks and select first available blks
* 2. get bad block list
* 3. mark MAX_SYSBLKS block as host-based device allocated.
* 4. write and verify data to block
*/
if (!dev->ops->get_bb_tbl || !dev->ops->set_bb_tbl)
return -EINVAL;
if (!(geo->mccap & NVM_ID_CAP_SLC) || !dev->lps_per_blk) {
pr_err("nvm: memory does not support SLC access\n");
return -EINVAL;
}
/* Index all sysblocks and mark them as host-driven */
nvm_setup_sysblk_scan(dev, &s, sysblk_ppas);
mutex_lock(&dev->mlock);
ret = nvm_get_all_sysblks(dev, &s, sysblk_ppas, 1);
if (ret)
goto err_mark;
ret = nvm_sysblk_set_bb_tbl(dev, &s, NVM_BLK_T_HOST);
if (ret)
goto err_mark;
/* Write to the first block of each row */
ret = nvm_write_and_verify(dev, info, &s);
err_mark:
mutex_unlock(&dev->mlock);
return ret;
}
static int factory_nblks(int nblks)
{
/* Round up to nearest BITS_PER_LONG */
return (nblks + (BITS_PER_LONG - 1)) & ~(BITS_PER_LONG - 1);
}
static unsigned int factory_blk_offset(struct nvm_geo *geo, struct ppa_addr ppa)
{
int nblks = factory_nblks(geo->blks_per_lun);
return ((ppa.g.ch * geo->luns_per_chnl * nblks) + (ppa.g.lun * nblks)) /
BITS_PER_LONG;
}
static int nvm_factory_blks(struct nvm_dev *dev, struct ppa_addr ppa,
u8 *blks, int nr_blks,
unsigned long *blk_bitmap, int flags)
{
int i, lunoff;
nr_blks = nvm_bb_tbl_fold(dev, blks, nr_blks);
if (nr_blks < 0)
return nr_blks;
lunoff = factory_blk_offset(&dev->geo, ppa);
/* non-set bits correspond to the block must be erased */
for (i = 0; i < nr_blks; i++) {
switch (blks[i]) {
case NVM_BLK_T_FREE:
if (flags & NVM_FACTORY_ERASE_ONLY_USER)
set_bit(i, &blk_bitmap[lunoff]);
break;
case NVM_BLK_T_HOST:
if (!(flags & NVM_FACTORY_RESET_HOST_BLKS))
set_bit(i, &blk_bitmap[lunoff]);
break;
case NVM_BLK_T_GRWN_BAD:
if (!(flags & NVM_FACTORY_RESET_GRWN_BBLKS))
set_bit(i, &blk_bitmap[lunoff]);
break;
default:
set_bit(i, &blk_bitmap[lunoff]);
break;
}
}
return 0;
}
static int nvm_fact_get_blks(struct nvm_dev *dev, struct ppa_addr *erase_list,
int max_ppas, unsigned long *blk_bitmap)
{
struct nvm_geo *geo = &dev->geo;
struct ppa_addr ppa;
int ch, lun, blkid, idx, done = 0, ppa_cnt = 0;
unsigned long *offset;
while (!done) {
done = 1;
nvm_for_each_lun_ppa(geo, ppa, ch, lun) {
idx = factory_blk_offset(geo, ppa);
offset = &blk_bitmap[idx];
blkid = find_first_zero_bit(offset, geo->blks_per_lun);
if (blkid >= geo->blks_per_lun)
continue;
set_bit(blkid, offset);
ppa.g.blk = blkid;
pr_debug("nvm: erase ppa (%u %u %u)\n",
ppa.g.ch,
ppa.g.lun,
ppa.g.blk);
erase_list[ppa_cnt] = ppa;
ppa_cnt++;
done = 0;
if (ppa_cnt == max_ppas)
return ppa_cnt;
}
}
return ppa_cnt;
}
static int nvm_fact_select_blks(struct nvm_dev *dev, unsigned long *blk_bitmap,
int flags)
{
struct nvm_geo *geo = &dev->geo;
struct ppa_addr ppa;
int ch, lun, nr_blks, ret = 0;
u8 *blks;
nr_blks = geo->blks_per_lun * geo->plane_mode;
blks = kmalloc(nr_blks, GFP_KERNEL);
if (!blks)
return -ENOMEM;
nvm_for_each_lun_ppa(geo, ppa, ch, lun) {
ret = nvm_get_bb_tbl(dev, ppa, blks);
if (ret)
pr_err("nvm: failed bb tbl for ch%u lun%u\n",
ppa.g.ch, ppa.g.blk);
ret = nvm_factory_blks(dev, ppa, blks, nr_blks, blk_bitmap,
flags);
if (ret)
break;
}
kfree(blks);
return ret;
}
int nvm_dev_factory(struct nvm_dev *dev, int flags)
{
struct nvm_geo *geo = &dev->geo;
struct ppa_addr *ppas;
int ppa_cnt, ret = -ENOMEM;
int max_ppas = dev->ops->max_phys_sect / geo->nr_planes;
struct ppa_addr sysblk_ppas[MAX_SYSBLKS];
struct sysblk_scan s;
unsigned long *blk_bitmap;
blk_bitmap = kzalloc(factory_nblks(geo->blks_per_lun) * geo->nr_luns,
GFP_KERNEL);
if (!blk_bitmap)
return ret;
ppas = kcalloc(max_ppas, sizeof(struct ppa_addr), GFP_KERNEL);
if (!ppas)
goto err_blks;
/* create list of blks to be erased */
ret = nvm_fact_select_blks(dev, blk_bitmap, flags);
if (ret)
goto err_ppas;
/* continue to erase until list of blks until empty */
while ((ppa_cnt =
nvm_fact_get_blks(dev, ppas, max_ppas, blk_bitmap)) > 0)
nvm_erase_ppa(dev, ppas, ppa_cnt, 0);
/* mark host reserved blocks free */
if (flags & NVM_FACTORY_RESET_HOST_BLKS) {
nvm_setup_sysblk_scan(dev, &s, sysblk_ppas);
mutex_lock(&dev->mlock);
ret = nvm_get_all_sysblks(dev, &s, sysblk_ppas, 0);
if (!ret)
ret = nvm_sysblk_set_bb_tbl(dev, &s, NVM_BLK_T_FREE);
mutex_unlock(&dev->mlock);
}
err_ppas:
kfree(ppas);
err_blks:
kfree(blk_bitmap);
return ret;
}
EXPORT_SYMBOL(nvm_dev_factory);

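factory_blk_offset() above keeps one bit per block in an array of longs, with each LUN padded out to a whole number of words: factory_nblks() rounds the per-LUN block count up to a multiple of BITS_PER_LONG, and the channel/LUN pair is then turned into a word index. A stand-alone version of that arithmetic, with an illustrative geometry rather than real device numbers, is:

#include <stdio.h>
#include <limits.h>

#define BITS_PER_LONG (sizeof(long) * CHAR_BIT)

/* round the per-LUN block count up to a whole number of longs,
 * as factory_nblks() does */
static unsigned long factory_nblks(unsigned long nblks)
{
    return (nblks + (BITS_PER_LONG - 1)) & ~(BITS_PER_LONG - 1);
}

/* word index of the first bit belonging to (ch, lun) */
static unsigned long blk_offset(unsigned long luns_per_chnl,
                                unsigned long blks_per_lun,
                                unsigned long ch, unsigned long lun)
{
    unsigned long nblks = factory_nblks(blks_per_lun);

    return ((ch * luns_per_chnl * nblks) + (lun * nblks)) / BITS_PER_LONG;
}

int main(void)
{
    /* example geometry: 4 LUNs per channel, 1020 blocks per LUN */
    unsigned long luns_per_chnl = 4, blks_per_lun = 1020;

    /* 1020 rounds up to 1024 bits, i.e. 16 longs per LUN on 64-bit */
    printf("padded blocks per lun: %lu\n", factory_nblks(blks_per_lun));
    printf("word offset of ch 1 / lun 2: %lu\n",
           blk_offset(luns_per_chnl, blks_per_lun, 1, 2));
    return 0;
}

Padding every LUN to a word boundary is what lets the factory path run find_first_zero_bit() and set_bit() against a per-LUN word range without spilling into a neighbouring LUN's bits.
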
@@ -666,7 +666,7 @@ static inline struct search *search_alloc(struct bio *bio,
s->iop.write_prio = 0;
s->iop.error = 0;
s->iop.flags = 0;
-s->iop.flush_journal = (bio->bi_opf & (REQ_PREFLUSH|REQ_FUA)) != 0;
+s->iop.flush_journal = op_is_flush(bio->bi_opf);
s->iop.wq = bcache_wq;
return s;
@@ -1009,7 +1009,7 @@ static int cached_dev_congested(void *data, int bits)
struct request_queue *q = bdev_get_queue(dc->bdev);
int ret = 0;
-if (bdi_congested(&q->backing_dev_info, bits))
+if (bdi_congested(q->backing_dev_info, bits))
return 1;
if (cached_dev_get(dc)) {
@@ -1018,7 +1018,7 @@ static int cached_dev_congested(void *data, int bits)
for_each_cache(ca, d->c, i) {
q = bdev_get_queue(ca->bdev);
-ret |= bdi_congested(&q->backing_dev_info, bits);
+ret |= bdi_congested(q->backing_dev_info, bits);
}
cached_dev_put(dc);
@@ -1032,7 +1032,7 @@ void bch_cached_dev_request_init(struct cached_dev *dc)
struct gendisk *g = dc->disk.disk;
g->queue->make_request_fn = cached_dev_make_request;
-g->queue->backing_dev_info.congested_fn = cached_dev_congested;
+g->queue->backing_dev_info->congested_fn = cached_dev_congested;
dc->disk.cache_miss = cached_dev_cache_miss;
dc->disk.ioctl = cached_dev_ioctl;
}
@@ -1125,7 +1125,7 @@ static int flash_dev_congested(void *data, int bits)
for_each_cache(ca, d->c, i) {
q = bdev_get_queue(ca->bdev);
-ret |= bdi_congested(&q->backing_dev_info, bits);
+ret |= bdi_congested(q->backing_dev_info, bits);
}
return ret;
@@ -1136,7 +1136,7 @@ void bch_flash_dev_request_init(struct bcache_device *d)
struct gendisk *g = d->disk;
g->queue->make_request_fn = flash_dev_make_request;
-g->queue->backing_dev_info.congested_fn = flash_dev_congested;
+g->queue->backing_dev_info->congested_fn = flash_dev_congested;
d->cache_miss = flash_dev_cache_miss;
d->ioctl = flash_dev_ioctl;
}

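Several hunks in this series replace the open-coded bio->bi_opf & (REQ_PREFLUSH | REQ_FUA) test with the op_is_flush() helper, which answers the same question: does this operation carry a preflush or FUA requirement? A stand-alone illustration of the idea, with made-up flag values rather than the kernel's, is:

#include <stdbool.h>
#include <stdio.h>

/* stand-in flag bits; the kernel defines its own REQ_* values */
#define REQ_PREFLUSH (1u << 0)
#define REQ_FUA      (1u << 1)
#define REQ_SYNC     (1u << 2)

/* true if the operation asks for a cache flush before it runs (preflush)
 * or forces its own data to stable media (FUA) */
static bool op_is_flush(unsigned int op_flags)
{
    return op_flags & (REQ_PREFLUSH | REQ_FUA);
}

int main(void)
{
    unsigned int write_fua = REQ_FUA | REQ_SYNC;
    unsigned int plain_write = REQ_SYNC;

    printf("write_fua:   %d\n", op_is_flush(write_fua));   /* 1 */
    printf("plain_write: %d\n", op_is_flush(plain_write)); /* 0 */
    return 0;
}
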
@@ -807,7 +807,7 @@ static int bcache_device_init(struct bcache_device *d, unsigned block_size,
blk_queue_make_request(q, NULL);
d->disk->queue = q;
q->queuedata = d;
-q->backing_dev_info.congested_data = d;
+q->backing_dev_info->congested_data = d;
q->limits.max_hw_sectors = UINT_MAX;
q->limits.max_sectors = UINT_MAX;
q->limits.max_segment_size = UINT_MAX;
@@ -1132,9 +1132,9 @@ static int cached_dev_init(struct cached_dev *dc, unsigned block_size)
set_capacity(dc->disk.disk,
dc->bdev->bd_part->nr_sects - dc->sb.data_offset);
-dc->disk.disk->queue->backing_dev_info.ra_pages =
-max(dc->disk.disk->queue->backing_dev_info.ra_pages,
-q->backing_dev_info.ra_pages);
+dc->disk.disk->queue->backing_dev_info->ra_pages =
+max(dc->disk.disk->queue->backing_dev_info->ra_pages,
+q->backing_dev_info->ra_pages);
bch_cached_dev_request_init(dc);
bch_cached_dev_writeback_init(dc);

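In the hunks above, q->backing_dev_info changes from an embedded structure to a pointer, so the bdi can be allocated and reference-counted on its own instead of sharing the request queue's lifetime. A toy sketch of that kind of split ownership, with invented names and a plain reference count, is:

#include <stdio.h>
#include <stdlib.h>

/* toy bdi with its own reference count, so it can outlive its queue */
struct bdi {
    int refcount;
    int ra_pages;
};

struct queue {
    struct bdi *bdi; /* previously an embedded struct, now a pointer */
};

static struct bdi *bdi_get(struct bdi *b)
{
    b->refcount++;
    return b;
}

static void bdi_put(struct bdi *b)
{
    if (--b->refcount == 0)
        free(b);
}

static struct queue *queue_alloc(void)
{
    struct queue *q = malloc(sizeof(*q));

    if (!q)
        return NULL;
    q->bdi = malloc(sizeof(*q->bdi));
    if (!q->bdi) {
        free(q);
        return NULL;
    }
    q->bdi->refcount = 1;
    q->bdi->ra_pages = 32;
    return q;
}

static void queue_free(struct queue *q)
{
    bdi_put(q->bdi); /* the bdi survives if someone else still holds it */
    free(q);
}

int main(void)
{
    struct queue *q = queue_alloc();
    struct bdi *b;

    if (!q)
        return 1;
    b = bdi_get(q->bdi); /* an independent reference, e.g. for writeback */
    queue_free(q);
    printf("ra_pages still readable: %d\n", b->ra_pages);
    bdi_put(b);
    return 0;
}
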
@@ -787,8 +787,7 @@ static void check_if_tick_bio_needed(struct cache *cache, struct bio *bio)
struct per_bio_data *pb = get_per_bio_data(bio, pb_data_size);
spin_lock_irqsave(&cache->lock, flags);
-if (cache->need_tick_bio &&
-!(bio->bi_opf & (REQ_FUA | REQ_PREFLUSH)) &&
+if (cache->need_tick_bio && !op_is_flush(bio->bi_opf) &&
bio_op(bio) != REQ_OP_DISCARD) {
pb->tick = true;
cache->need_tick_bio = false;
@@ -828,11 +827,6 @@ static dm_oblock_t get_bio_block(struct cache *cache, struct bio *bio)
return to_oblock(block_nr);
}
-static int bio_triggers_commit(struct cache *cache, struct bio *bio)
-{
-return bio->bi_opf & (REQ_PREFLUSH | REQ_FUA);
-}
/*
* You must increment the deferred set whilst the prison cell is held. To
* encourage this, we ask for 'cell' to be passed in.
@@ -884,7 +878,7 @@ static void issue(struct cache *cache, struct bio *bio)
{
unsigned long flags;
-if (!bio_triggers_commit(cache, bio)) {
+if (!op_is_flush(bio->bi_opf)) {
accounted_request(cache, bio);
return;
}
@@ -1069,8 +1063,7 @@ static void dec_io_migrations(struct cache *cache)
static bool discard_or_flush(struct bio *bio)
{
-return bio_op(bio) == REQ_OP_DISCARD ||
-bio->bi_opf & (REQ_PREFLUSH | REQ_FUA);
+return bio_op(bio) == REQ_OP_DISCARD || op_is_flush(bio->bi_opf);
}
static void __cell_defer(struct cache *cache, struct dm_bio_prison_cell *cell)
@@ -2291,7 +2284,7 @@ static void do_waker(struct work_struct *ws)
static int is_congested(struct dm_dev *dev, int bdi_bits)
{
struct request_queue *q = bdev_get_queue(dev->bdev);
-return bdi_congested(&q->backing_dev_info, bdi_bits);
+return bdi_congested(q->backing_dev_info, bdi_bits);
}
static int cache_is_congested(struct dm_target_callbacks *cb, int bdi_bits)

Some files were not shown because too many files have changed in this diff.