Merge tag 'mtd/for-5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux

Pull MTD updates from Richard Weinberger:
 "MTD core changes:
   - partition parser: Support MTD names containing one or more colons.
   - mtdblock: clear cache_state to avoid writing to bad blocks
     repeatedly.

  Raw NAND core changes:
   - Stop using nand_release() and patch all drivers accordingly.
   - Give more information about the ECC weakness when not matching the
     chip's requirement.
   - MAINTAINERS updates.
   - Support emulated SLC mode on MLC NANDs.
   - Support "constrained" controllers, adapt the core and ONFI/JEDEC
     table parsing and Micron's code.
   - Take check_only into account.
   - Add an invalid ECC mode to discriminate with valid ones.
   - Return an enum from of_get_nand_ecc_algo().
   - Drop OOB_FIRST placement scheme.
   - Introduce nand_extract_bits().
   - Ensure a consistent bitflips numbering.
   - BCH lib:
      - Allow easy bit swapping.
      - Slightly rework the exported function names.
   - Fix nand_gpio_waitrdy().
   - Propagate CS selection to sub-operations.
   - Add a NAND_NO_BBM_QUIRK flag.
   - Give the possibility to verify a read operation is supported.
   - Add a helper to check supported operations.
   - Avoid indirect access to ->data_buf().
   - Rename the use_bufpoi variables.
   - Fix comments about the use of bufpoi.
   - Rename a NAND chip option.
   - Reorder the nand_chip->options flags.
   - Translate obscure bitfields into readable macros.
   - Timings:
      - Fix default values.
      - Add mode information to the timings structure.

  Raw NAND controller driver changes:
   - Fix many error paths.
   - Arasan
      - New driver
   - Au1550nd:
      - Various cleanups
      - Migration to ->exec_op()
   - brcmnand:
      - Misc cleanup.
      - Support v2.1-v2.2 controllers.
      - Remove an unused include of <linux/version.h>.
      - Correctly verify erased pages.
      - Fix Hamming OOB layout.
   - Cadence
      - Make cadence_nand_attach_chip static.
   - Cafe:
      - Set the NAND_NO_BBM_QUIRK flag
   - cmx270:
      - Remove this controller driver.
   - cs553x:
      - Misc cleanup
      - Migration to ->exec_op()
   - Davinci:
      - Misc cleanup.
      - Migration to ->exec_op()
   - Denali:
      - Add more delays before latching incoming data
   - Diskonchip:
      - Misc cleanup
      - Migration to ->exec_op()
   - Fsmc:
      - Change to non-atomic bit operations.
   - GPMI:
      - Use nand_extract_bits()
      - Fix runtime PM imbalance.
   - Ingenic:
      - Migration to ->exec_op()
      - Fix the RB GPIO active-high property on qi,lb60
      - Make qi_lb60_ooblayout_ops static.
   - Marvell:
      - Misc cleanup and small fixes
   - Nandsim:
      - Fix the error paths, driver wide.
   - Omap_elm:
      - Fix runtime PM imbalance.
   - STM32_FMC2:
      - Misc cleanups (error cases, comments, timeout values, cosmetic
        changes).

  SPI NOR core changes:
   - Add support for, update and fix a few flashes.
   - Prepare BFPT parsing for JESD216 rev D.
   - Kernel doc fixes.

  CFI changes:
   - Support the absence of protection registers for Intel CFI flashes.
   - Replace zero-length arrays with flexible-array members"

* tag 'mtd/for-5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux: (208 commits)
  mtd: clear cache_state to avoid writing to bad blocks repeatedly
  mtd: parser: cmdline: Support MTD names containing one or more colons
  mtd: physmap_of_gemini: remove defined but not used symbol 'syscon_match'
  mtd: rawnand: Add an invalid ECC mode to discriminate with valid ones
  mtd: rawnand: Return an enum from of_get_nand_ecc_algo()
  mtd: rawnand: Drop OOB_FIRST placement scheme
  mtd: rawnand: Avoid a typedef
  mtd: Fix typo in mtd_ooblayout_set_databytes() description
  mtd: rawnand: Stop using nand_release()
  mtd: rawnand: nandsim: Reorganize ns_cleanup_module()
  mtd: rawnand: nandsim: Rename a label in ns_init_module()
  mtd: rawnand: nandsim: Manage lists on error in ns_init_module()
  mtd: rawnand: nandsim: Fix the label pointing on nand_cleanup()
  mtd: rawnand: nandsim: Free erase_block_wear on error
  mtd: rawnand: nandsim: Use an additional label when freeing the nandsim object
  mtd: rawnand: nandsim: Stop using nand_release()
  mtd: rawnand: nandsim: Free the partition names in ns_free()
  mtd: rawnand: nandsim: Free the allocated device on error in ns_init()
  mtd: rawnand: nandsim: Free partition names on error in ns_init()
  mtd: rawnand: nandsim: Fix the two ns_alloc_device() error paths
  ...
Linus Torvalds 2020-06-10 13:15:17 -07:00
commit 6f51ab9440
100 changed files with 4462 additions and 2644 deletions
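
A large share of those 100 files change in the same way: nand_release() is removed from the core, so every controller driver now unregisters its MTD device and tears down the NAND chip in two explicit steps (see the au1550nd, bcm47xxnflash, brcmnand, cadence and cafe hunks below). The following is a minimal sketch of the resulting remove() path; the foo_* names and the private structure are invented for illustration, only the mtd_device_unregister()/nand_cleanup() sequence is taken from the series.

#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/platform_device.h>

/* Hypothetical driver-private data, for illustration only. */
struct foo_nand_ctrl {
	struct nand_chip chip;
};

static int foo_nand_remove(struct platform_device *pdev)
{
	struct foo_nand_ctrl *ctrl = platform_get_drvdata(pdev);
	struct nand_chip *chip = &ctrl->chip;
	int ret;

	/*
	 * Unregister the MTD device first. This may fail (for instance when
	 * the device is still in use), which is why the converted drivers
	 * WARN_ON() the return value instead of dropping it silently.
	 */
	ret = mtd_device_unregister(nand_to_mtd(chip));
	WARN_ON(ret);

	/* Then release the resources held by the NAND chip. */
	nand_cleanup(chip);

	return 0;
}

Compared to nand_release(), which bundled the two calls and discarded the unregistration error, the open-coded sequence makes that failure visible to the driver.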


@ -0,0 +1,63 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/mtd/arasan,nand-controller.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Arasan NAND Flash Controller with ONFI 3.1 support device tree bindings
allOf:
- $ref: "nand-controller.yaml"
maintainers:
- Naga Sureshkumar Relli <naga.sureshkumar.relli@xilinx.com>
properties:
compatible:
oneOf:
- items:
- enum:
- xlnx,zynqmp-nand-controller
- enum:
- arasan,nfc-v3p10
reg:
maxItems: 1
clocks:
items:
- description: Controller clock
- description: NAND bus clock
clock-names:
items:
- const: controller
- const: bus
interrupts:
maxItems: 1
"#address-cells": true
"#size-cells": true
required:
- compatible
- reg
- clocks
- clock-names
- interrupts
additionalProperties: true
examples:
- |
nfc: nand-controller@ff100000 {
compatible = "xlnx,zynqmp-nand-controller", "arasan,nfc-v3p10";
reg = <0x0 0xff100000 0x0 0x1000>;
clock-names = "controller", "bus";
clocks = <&clk200>, <&clk100>;
interrupt-parent = <&gic>;
interrupts = <0 14 4>;
#address-cells = <1>;
#size-cells = <0>;
};


@ -20,6 +20,8 @@ Required properties:
"brcm,brcmnand" and an appropriate version compatibility
string, like "brcm,brcmnand-v7.0"
Possible values:
brcm,brcmnand-v2.1
brcm,brcmnand-v2.2
brcm,brcmnand-v4.0
brcm,brcmnand-v5.0
brcm,brcmnand-v6.0


@ -61,6 +61,9 @@ Optional properties:
clobbered.
- lock : Do not unlock the partition at initialization time (not supported on
all devices)
- slc-mode: This parameter, if present, allows one to emulate SLC mode on a
partition attached to an MLC NAND thus making this partition immune to
paired-pages corruptions
Examples:


@ -276,8 +276,10 @@ unregisters the partitions in the MTD layer.
#ifdef MODULE
static void __exit board_cleanup (void)
{
/* Release resources, unregister device */
nand_release (mtd_to_nand(board_mtd));
/* Unregister device */
WARN_ON(mtd_device_unregister(board_mtd));
/* Release resources */
nand_cleanup(mtd_to_nand(board_mtd));
/* unmap physical address */
iounmap(baseaddr);


@ -1305,6 +1305,13 @@ S: Supported
W: http://www.aquantia.com
F: drivers/net/ethernet/aquantia/atlantic/aq_ptp*
ARASAN NAND CONTROLLER DRIVER
M: Naga Sureshkumar Relli <nagasure@xilinx.com>
L: linux-mtd@lists.infradead.org
S: Maintained
F: Documentation/devicetree/bindings/mtd/arasan,nand-controller.yaml
F: drivers/mtd/nand/raw/arasan-nand-controller.c
ARC FRAMEBUFFER DRIVER
M: Jaya Kumar <jayalk@intworks.biz>
S: Maintained
@ -3778,9 +3785,8 @@ F: Documentation/devicetree/bindings/media/cdns,*.txt
F: drivers/media/platform/cadence/cdns-csi2*
CADENCE NAND DRIVER
M: Piotr Sroka <piotrs@cadence.com>
L: linux-mtd@lists.infradead.org
S: Maintained
S: Orphan
F: Documentation/devicetree/bindings/mtd/cadence-nand-controller.txt
F: drivers/mtd/nand/raw/cadence-nand-controller.c
@ -10853,9 +10859,8 @@ F: Documentation/devicetree/bindings/i2c/i2c-mt7621.txt
F: drivers/i2c/busses/i2c-mt7621.c
MEDIATEK NAND CONTROLLER DRIVER
M: Xiaolei Li <xiaolei.li@mediatek.com>
L: linux-mtd@lists.infradead.org
S: Maintained
S: Orphan
F: Documentation/devicetree/bindings/mtd/mtk-nand.txt
F: drivers/mtd/nand/raw/mtk_*


@ -420,8 +420,9 @@ read_pri_intelext(struct map_info *map, __u16 adr)
extra_size = 0;
/* Protection Register info */
extra_size += (extp->NumProtectionFields - 1) *
sizeof(struct cfi_intelext_otpinfo);
if (extp->NumProtectionFields)
extra_size += (extp->NumProtectionFields - 1) *
sizeof(struct cfi_intelext_otpinfo);
}
if (extp->MinorVersion >= '1') {
@ -695,14 +696,16 @@ static int cfi_intelext_partition_fixup(struct mtd_info *mtd,
*/
if (extp && extp->MajorVersion == '1' && extp->MinorVersion >= '3'
&& extp->FeatureSupport & (1 << 9)) {
int offs = 0;
struct cfi_private *newcfi;
struct flchip *chip;
struct flchip_shared *shared;
int offs, numregions, numparts, partshift, numvirtchips, i, j;
int numregions, numparts, partshift, numvirtchips, i, j;
/* Protection Register info */
offs = (extp->NumProtectionFields - 1) *
sizeof(struct cfi_intelext_otpinfo);
if (extp->NumProtectionFields)
offs = (extp->NumProtectionFields - 1) *
sizeof(struct cfi_intelext_otpinfo);
/* Burst Read info */
offs += extp->extra[offs+1]+2;


@ -647,7 +647,7 @@ static int doc_ecc_bch_fix_data(struct docg3 *docg3, void *buf, u8 *hwecc)
for (i = 0; i < DOC_ECC_BCH_SIZE; i++)
ecc[i] = bitrev8(hwecc[i]);
numerrs = decode_bch(docg3->cascade->bch, NULL,
numerrs = bch_decode(docg3->cascade->bch, NULL,
DOC_ECC_BCH_COVERED_BYTES,
NULL, ecc, NULL, errorpos);
BUG_ON(numerrs == -EINVAL);
@ -1984,8 +1984,8 @@ static int __init docg3_probe(struct platform_device *pdev)
return ret;
cascade->base = base;
mutex_init(&cascade->lock);
cascade->bch = init_bch(DOC_ECC_BCH_M, DOC_ECC_BCH_T,
DOC_ECC_BCH_PRIMPOLY);
cascade->bch = bch_init(DOC_ECC_BCH_M, DOC_ECC_BCH_T,
DOC_ECC_BCH_PRIMPOLY, false);
if (!cascade->bch)
return ret;
@ -2021,7 +2021,7 @@ notfound:
ret = -ENODEV;
dev_info(dev, "No supported DiskOnChip found\n");
err_probe:
free_bch(cascade->bch);
bch_free(cascade->bch);
for (floor = 0; floor < DOC_MAX_NBFLOORS; floor++)
if (cascade->floors[floor])
doc_release_device(cascade->floors[floor]);
@ -2045,7 +2045,7 @@ static int docg3_release(struct platform_device *pdev)
if (cascade->floors[floor])
doc_release_device(cascade->floors[floor]);
free_bch(docg3->cascade->bch);
bch_free(docg3->cascade->bch);
return 0;
}
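
The two BCH items from the summary ("Allow easy bit swapping" and the exported-function rename) are visible in the hunks above: init_bch()/decode_bch()/free_bch() become bch_init()/bch_decode()/bch_free(), and bch_init() gains a trailing swap_bits argument (false here, keeping docg3's existing bit order). Below is a condensed sketch of the renamed API; the ex_* wrappers and parameter values are illustrative, and the encode side assumes the same rename pattern (encode_bch() becoming bch_encode()).

#include <linux/bch.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>

static struct bch_control *ex_bch;

static int ex_bch_setup(void)
{
	/* Last argument: swap the bit order of in/out bytes (false = legacy). */
	ex_bch = bch_init(14, 4, 0x4443, false);
	return ex_bch ? 0 : -EINVAL;
}

static void ex_bch_calculate(const u8 *data, unsigned int len, u8 *ecc)
{
	/* bch_encode() accumulates parity into @ecc, so clear it first. */
	memset(ecc, 0, ex_bch->ecc_bytes);
	bch_encode(ex_bch, data, len, ecc);
}

static int ex_bch_correct(void *data, unsigned int len, u8 *read_ecc,
			  unsigned int *errloc)
{
	/* Feed the raw data plus the ECC read from flash; returns the number
	 * of corrected bits or a negative error code. */
	return bch_decode(ex_bch, data, len, read_ecc, NULL, NULL, errloc);
}

static void ex_bch_teardown(void)
{
	bch_free(ex_bch);
}

The swap_bits parameter is what the "Allow easy bit swapping" item refers to: callers that previously reversed the bit order around every encode/decode call can now ask the library to do it for them.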


@ -46,11 +46,6 @@
#define FLASH_PARALLEL_HIGH_PIN_CNT (1 << 20) /* else low pin cnt */
static const struct of_device_id syscon_match[] = {
{ .compatible = "cortina,gemini-syscon" },
{ },
};
struct gemini_flash {
struct device *dev;
struct pinctrl *p;


@ -89,8 +89,6 @@ static int write_cached_data (struct mtdblk_dev *mtdblk)
ret = erase_write (mtd, mtdblk->cache_offset,
mtdblk->cache_size, mtdblk->cache_data);
if (ret)
return ret;
/*
* Here we could arguably set the cache state to STATE_CLEAN.
@ -98,9 +96,14 @@ static int write_cached_data (struct mtdblk_dev *mtdblk)
* be notified if this content is altered on the flash by other
* means. Let's declare it empty and leave buffering tasks to
* the buffer cache instead.
*
* If this cache_offset points to a bad block, data cannot be
* written to the device. Clear cache_state to avoid writing to
* bad blocks repeatedly.
*/
mtdblk->cache_state = STATE_EMPTY;
return 0;
if (ret == 0 || ret == -EIO)
mtdblk->cache_state = STATE_EMPTY;
return ret;
}


@ -617,6 +617,19 @@ int add_mtd_device(struct mtd_info *mtd)
!(mtd->flags & MTD_NO_ERASE)))
return -EINVAL;
/*
* MTD_SLC_ON_MLC_EMULATION can only be set on partitions, when the
* master is an MLC NAND and has a proper pairing scheme defined.
* We also reject masters that implement ->_writev() for now, because
* NAND controller drivers don't implement this hook, and adding the
* SLC -> MLC address/length conversion to this path is useless if we
* don't have a user.
*/
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION &&
(!mtd_is_partition(mtd) || master->type != MTD_MLCNANDFLASH ||
!master->pairing || master->_writev))
return -EINVAL;
mutex_lock(&mtd_table_mutex);
i = idr_alloc(&mtd_idr, mtd, 0, 0, GFP_KERNEL);
@ -632,6 +645,14 @@ int add_mtd_device(struct mtd_info *mtd)
if (mtd->bitflip_threshold == 0)
mtd->bitflip_threshold = mtd->ecc_strength;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
int ngroups = mtd_pairing_groups(master);
mtd->erasesize /= ngroups;
mtd->size = (u64)mtd_div_by_eb(mtd->size, master) *
mtd->erasesize;
}
if (is_power_of_2(mtd->erasesize))
mtd->erasesize_shift = ffs(mtd->erasesize) - 1;
else
@ -1074,9 +1095,11 @@ int mtd_erase(struct mtd_info *mtd, struct erase_info *instr)
{
struct mtd_info *master = mtd_get_master(mtd);
u64 mst_ofs = mtd_get_master_ofs(mtd, 0);
struct erase_info adjinstr;
int ret;
instr->fail_addr = MTD_FAIL_ADDR_UNKNOWN;
adjinstr = *instr;
if (!mtd->erasesize || !master->_erase)
return -ENOTSUPP;
@ -1091,12 +1114,27 @@ int mtd_erase(struct mtd_info *mtd, struct erase_info *instr)
ledtrig_mtd_activity();
instr->addr += mst_ofs;
ret = master->_erase(master, instr);
if (instr->fail_addr != MTD_FAIL_ADDR_UNKNOWN)
instr->fail_addr -= mst_ofs;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
adjinstr.addr = (loff_t)mtd_div_by_eb(instr->addr, mtd) *
master->erasesize;
adjinstr.len = ((u64)mtd_div_by_eb(instr->addr + instr->len, mtd) *
master->erasesize) -
adjinstr.addr;
}
adjinstr.addr += mst_ofs;
ret = master->_erase(master, &adjinstr);
if (adjinstr.fail_addr != MTD_FAIL_ADDR_UNKNOWN) {
instr->fail_addr = adjinstr.fail_addr - mst_ofs;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
instr->fail_addr = mtd_div_by_eb(instr->fail_addr,
master);
instr->fail_addr *= mtd->erasesize;
}
}
instr->addr -= mst_ofs;
return ret;
}
EXPORT_SYMBOL_GPL(mtd_erase);
@ -1276,6 +1314,101 @@ static int mtd_check_oob_ops(struct mtd_info *mtd, loff_t offs,
return 0;
}
static int mtd_read_oob_std(struct mtd_info *mtd, loff_t from,
struct mtd_oob_ops *ops)
{
struct mtd_info *master = mtd_get_master(mtd);
int ret;
from = mtd_get_master_ofs(mtd, from);
if (master->_read_oob)
ret = master->_read_oob(master, from, ops);
else
ret = master->_read(master, from, ops->len, &ops->retlen,
ops->datbuf);
return ret;
}
static int mtd_write_oob_std(struct mtd_info *mtd, loff_t to,
struct mtd_oob_ops *ops)
{
struct mtd_info *master = mtd_get_master(mtd);
int ret;
to = mtd_get_master_ofs(mtd, to);
if (master->_write_oob)
ret = master->_write_oob(master, to, ops);
else
ret = master->_write(master, to, ops->len, &ops->retlen,
ops->datbuf);
return ret;
}
static int mtd_io_emulated_slc(struct mtd_info *mtd, loff_t start, bool read,
struct mtd_oob_ops *ops)
{
struct mtd_info *master = mtd_get_master(mtd);
int ngroups = mtd_pairing_groups(master);
int npairs = mtd_wunit_per_eb(master) / ngroups;
struct mtd_oob_ops adjops = *ops;
unsigned int wunit, oobavail;
struct mtd_pairing_info info;
int max_bitflips = 0;
u32 ebofs, pageofs;
loff_t base, pos;
ebofs = mtd_mod_by_eb(start, mtd);
base = (loff_t)mtd_div_by_eb(start, mtd) * master->erasesize;
info.group = 0;
info.pair = mtd_div_by_ws(ebofs, mtd);
pageofs = mtd_mod_by_ws(ebofs, mtd);
oobavail = mtd_oobavail(mtd, ops);
while (ops->retlen < ops->len || ops->oobretlen < ops->ooblen) {
int ret;
if (info.pair >= npairs) {
info.pair = 0;
base += master->erasesize;
}
wunit = mtd_pairing_info_to_wunit(master, &info);
pos = mtd_wunit_to_offset(mtd, base, wunit);
adjops.len = ops->len - ops->retlen;
if (adjops.len > mtd->writesize - pageofs)
adjops.len = mtd->writesize - pageofs;
adjops.ooblen = ops->ooblen - ops->oobretlen;
if (adjops.ooblen > oobavail - adjops.ooboffs)
adjops.ooblen = oobavail - adjops.ooboffs;
if (read) {
ret = mtd_read_oob_std(mtd, pos + pageofs, &adjops);
if (ret > 0)
max_bitflips = max(max_bitflips, ret);
} else {
ret = mtd_write_oob_std(mtd, pos + pageofs, &adjops);
}
if (ret < 0)
return ret;
max_bitflips = max(max_bitflips, ret);
ops->retlen += adjops.retlen;
ops->oobretlen += adjops.oobretlen;
adjops.datbuf += adjops.retlen;
adjops.oobbuf += adjops.oobretlen;
adjops.ooboffs = 0;
pageofs = 0;
info.pair++;
}
return max_bitflips;
}
int mtd_read_oob(struct mtd_info *mtd, loff_t from, struct mtd_oob_ops *ops)
{
struct mtd_info *master = mtd_get_master(mtd);
@ -1294,12 +1427,10 @@ int mtd_read_oob(struct mtd_info *mtd, loff_t from, struct mtd_oob_ops *ops)
if (!master->_read_oob && (!master->_read || ops->oobbuf))
return -EOPNOTSUPP;
from = mtd_get_master_ofs(mtd, from);
if (master->_read_oob)
ret_code = master->_read_oob(master, from, ops);
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
ret_code = mtd_io_emulated_slc(mtd, from, true, ops);
else
ret_code = master->_read(master, from, ops->len, &ops->retlen,
ops->datbuf);
ret_code = mtd_read_oob_std(mtd, from, ops);
mtd_update_ecc_stats(mtd, master, &old_stats);
@ -1338,13 +1469,10 @@ int mtd_write_oob(struct mtd_info *mtd, loff_t to,
if (!master->_write_oob && (!master->_write || ops->oobbuf))
return -EOPNOTSUPP;
to = mtd_get_master_ofs(mtd, to);
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
return mtd_io_emulated_slc(mtd, to, false, ops);
if (master->_write_oob)
return master->_write_oob(master, to, ops);
else
return master->_write(master, to, ops->len, &ops->retlen,
ops->datbuf);
return mtd_write_oob_std(mtd, to, ops);
}
EXPORT_SYMBOL_GPL(mtd_write_oob);
@ -1672,7 +1800,7 @@ EXPORT_SYMBOL_GPL(mtd_ooblayout_get_databytes);
* @start: first ECC byte to set
* @nbytes: number of ECC bytes to set
*
* Works like mtd_ooblayout_get_bytes(), except it acts on free bytes.
* Works like mtd_ooblayout_set_bytes(), except it acts on free bytes.
*
* Returns zero on success, a negative error code otherwise.
*/
@ -1817,6 +1945,12 @@ int mtd_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
return -EINVAL;
if (!len)
return 0;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
len = (u64)mtd_div_by_eb(len, mtd) * master->erasesize;
}
return master->_lock(master, mtd_get_master_ofs(mtd, ofs), len);
}
EXPORT_SYMBOL_GPL(mtd_lock);
@ -1831,6 +1965,12 @@ int mtd_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
return -EINVAL;
if (!len)
return 0;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
len = (u64)mtd_div_by_eb(len, mtd) * master->erasesize;
}
return master->_unlock(master, mtd_get_master_ofs(mtd, ofs), len);
}
EXPORT_SYMBOL_GPL(mtd_unlock);
@ -1845,6 +1985,12 @@ int mtd_is_locked(struct mtd_info *mtd, loff_t ofs, uint64_t len)
return -EINVAL;
if (!len)
return 0;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
len = (u64)mtd_div_by_eb(len, mtd) * master->erasesize;
}
return master->_is_locked(master, mtd_get_master_ofs(mtd, ofs), len);
}
EXPORT_SYMBOL_GPL(mtd_is_locked);
@ -1857,6 +2003,10 @@ int mtd_block_isreserved(struct mtd_info *mtd, loff_t ofs)
return -EINVAL;
if (!master->_block_isreserved)
return 0;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
return master->_block_isreserved(master, mtd_get_master_ofs(mtd, ofs));
}
EXPORT_SYMBOL_GPL(mtd_block_isreserved);
@ -1869,6 +2019,10 @@ int mtd_block_isbad(struct mtd_info *mtd, loff_t ofs)
return -EINVAL;
if (!master->_block_isbad)
return 0;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
return master->_block_isbad(master, mtd_get_master_ofs(mtd, ofs));
}
EXPORT_SYMBOL_GPL(mtd_block_isbad);
@ -1885,6 +2039,9 @@ int mtd_block_markbad(struct mtd_info *mtd, loff_t ofs)
if (!(mtd->flags & MTD_WRITEABLE))
return -EROFS;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
ret = master->_block_markbad(master, mtd_get_master_ofs(mtd, ofs));
if (ret)
return ret;


@ -35,9 +35,12 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
const struct mtd_partition *part,
int partno, uint64_t cur_offset)
{
int wr_alignment = (parent->flags & MTD_NO_ERASE) ? parent->writesize :
parent->erasesize;
struct mtd_info *child, *master = mtd_get_master(parent);
struct mtd_info *master = mtd_get_master(parent);
int wr_alignment = (parent->flags & MTD_NO_ERASE) ?
master->writesize : master->erasesize;
u64 parent_size = mtd_is_partition(parent) ?
parent->part.size : parent->size;
struct mtd_info *child;
u32 remainder;
char *name;
u64 tmp;
@ -56,8 +59,9 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
/* set up the MTD object for this partition */
child->type = parent->type;
child->part.flags = parent->flags & ~part->mask_flags;
child->part.flags |= part->add_flags;
child->flags = child->part.flags;
child->size = part->size;
child->part.size = part->size;
child->writesize = parent->writesize;
child->writebufsize = parent->writebufsize;
child->oobsize = parent->oobsize;
@ -98,29 +102,29 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
}
if (child->part.offset == MTDPART_OFS_RETAIN) {
child->part.offset = cur_offset;
if (parent->size - child->part.offset >= child->size) {
child->size = parent->size - child->part.offset -
child->size;
if (parent_size - child->part.offset >= child->part.size) {
child->part.size = parent_size - child->part.offset -
child->part.size;
} else {
printk(KERN_ERR "mtd partition \"%s\" doesn't have enough space: %#llx < %#llx, disabled\n",
part->name, parent->size - child->part.offset,
child->size);
part->name, parent_size - child->part.offset,
child->part.size);
/* register to preserve ordering */
goto out_register;
}
}
if (child->size == MTDPART_SIZ_FULL)
child->size = parent->size - child->part.offset;
if (child->part.size == MTDPART_SIZ_FULL)
child->part.size = parent_size - child->part.offset;
printk(KERN_NOTICE "0x%012llx-0x%012llx : \"%s\"\n",
child->part.offset, child->part.offset + child->size,
child->part.offset, child->part.offset + child->part.size,
child->name);
/* let's do some sanity checks */
if (child->part.offset >= parent->size) {
if (child->part.offset >= parent_size) {
/* let's register it anyway to preserve ordering */
child->part.offset = 0;
child->size = 0;
child->part.size = 0;
/* Initialize ->erasesize to make add_mtd_device() happy. */
child->erasesize = parent->erasesize;
@ -128,15 +132,16 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
part->name);
goto out_register;
}
if (child->part.offset + child->size > parent->size) {
child->size = parent->size - child->part.offset;
if (child->part.offset + child->part.size > parent->size) {
child->part.size = parent_size - child->part.offset;
printk(KERN_WARNING"mtd: partition \"%s\" extends beyond the end of device \"%s\" -- size truncated to %#llx\n",
part->name, parent->name, child->size);
part->name, parent->name, child->part.size);
}
if (parent->numeraseregions > 1) {
/* Deal with variable erase size stuff */
int i, max = parent->numeraseregions;
u64 end = child->part.offset + child->size;
u64 end = child->part.offset + child->part.size;
struct mtd_erase_region_info *regions = parent->eraseregions;
/* Find the first erase regions which is part of this
@ -156,7 +161,7 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
BUG_ON(child->erasesize == 0);
} else {
/* Single erase size */
child->erasesize = parent->erasesize;
child->erasesize = master->erasesize;
}
/*
@ -178,7 +183,7 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
part->name);
}
tmp = mtd_get_master_ofs(child, 0) + child->size;
tmp = mtd_get_master_ofs(child, 0) + child->part.size;
remainder = do_div(tmp, wr_alignment);
if ((child->flags & MTD_WRITEABLE) && remainder) {
child->flags &= ~MTD_WRITEABLE;
@ -186,6 +191,7 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
part->name);
}
child->size = child->part.size;
child->ecc_step_size = parent->ecc_step_size;
child->ecc_strength = parent->ecc_strength;
child->bitflip_threshold = parent->bitflip_threshold;
@ -193,7 +199,7 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
if (master->_block_isbad) {
uint64_t offs = 0;
while (offs < child->size) {
while (offs < child->part.size) {
if (mtd_block_isreserved(child, offs))
child->ecc_stats.bbtblocks++;
else if (mtd_block_isbad(child, offs))
@ -234,6 +240,8 @@ int mtd_add_partition(struct mtd_info *parent, const char *name,
long long offset, long long length)
{
struct mtd_info *master = mtd_get_master(parent);
u64 parent_size = mtd_is_partition(parent) ?
parent->part.size : parent->size;
struct mtd_partition part;
struct mtd_info *child;
int ret = 0;
@ -244,7 +252,7 @@ int mtd_add_partition(struct mtd_info *parent, const char *name,
return -EINVAL;
if (length == MTDPART_SIZ_FULL)
length = parent->size - offset;
length = parent_size - offset;
if (length <= 0)
return -EINVAL;
@ -419,7 +427,7 @@ int add_mtd_partitions(struct mtd_info *parent,
/* Look for subpartitions */
parse_mtd_partitions(child, parts[i].types, NULL);
cur_offset = child->part.offset + child->size;
cur_offset = child->part.offset + child->part.size;
}
return 0;
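
The child->part.flags |= part->add_flags line in the partition-allocation hunks above is the hook that lets a partition description opt a single partition into the SLC-emulation mode guarded in add_mtd_device() earlier in this diff. A hypothetical partition table requesting it could look as follows; every name, offset and size is invented, only the .add_flags mechanism and the MTD_SLC_ON_MLC_EMULATION flag come from the series.

#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
#include <linux/sizes.h>

static const struct mtd_partition example_parts[] = {
	{
		.name      = "boot-slc",
		.offset    = 0,
		.size      = SZ_8M,
		/*
		 * Emulate SLC on this partition only: a single page per
		 * paired set is used, so the exposed size and eraseblock
		 * size shrink accordingly (see the add_mtd_device() hunk
		 * above).
		 */
		.add_flags = MTD_SLC_ON_MLC_EMULATION,
	},
	{
		.name   = "data",
		.offset = MTDPART_OFS_APPEND,
		.size   = MTDPART_SIZ_FULL,
	},
};

Such a table would then be handed to mtd_device_register() as usual; only the partition carrying the flag is restricted to SLC-style usage.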


@ -213,10 +213,6 @@ config MTD_NAND_MLC_LPC32XX
Please check the actual NAND chip connected and its support
by the MLC NAND controller.
config MTD_NAND_CM_X270
tristate "CM-X270 modules NAND controller"
depends on MACH_ARMCORE
config MTD_NAND_PASEMI
tristate "PA Semi PWRficient NAND controller"
depends on PPC_PASEMI
@ -457,6 +453,14 @@ config MTD_NAND_CADENCE
Enable the driver for NAND flash on platforms using a Cadence NAND
controller.
config MTD_NAND_ARASAN
tristate "Support for Arasan NAND flash controller"
depends on HAS_IOMEM && HAS_DMA
select BCH
help
Enables the driver for the Arasan NAND flash controller on
Zynq Ultrascale+ MPSoC.
comment "Misc"
config MTD_SM_COMMON


@ -25,7 +25,6 @@ obj-$(CONFIG_MTD_NAND_GPIO) += gpio.o
omap2_nand-objs := omap2.o
obj-$(CONFIG_MTD_NAND_OMAP2) += omap2_nand.o
obj-$(CONFIG_MTD_NAND_OMAP_BCH_BUILD) += omap_elm.o
obj-$(CONFIG_MTD_NAND_CM_X270) += cmx270_nand.o
obj-$(CONFIG_MTD_NAND_MARVELL) += marvell_nand.o
obj-$(CONFIG_MTD_NAND_TMIO) += tmio_nand.o
obj-$(CONFIG_MTD_NAND_PLATFORM) += plat_nand.o
@ -58,6 +57,7 @@ obj-$(CONFIG_MTD_NAND_TEGRA) += tegra_nand.o
obj-$(CONFIG_MTD_NAND_STM32_FMC2) += stm32_fmc2_nand.o
obj-$(CONFIG_MTD_NAND_MESON) += meson_nand.o
obj-$(CONFIG_MTD_NAND_CADENCE) += cadence-nand-controller.o
obj-$(CONFIG_MTD_NAND_ARASAN) += arasan-nand-controller.o
nand-objs := nand_base.o nand_legacy.o nand_bbt.o nand_timings.o nand_ids.o
nand-objs += nand_onfi.o


@ -387,12 +387,15 @@ static int gpio_nand_remove(struct platform_device *pdev)
{
struct gpio_nand *priv = platform_get_drvdata(pdev);
struct mtd_info *mtd = nand_to_mtd(&priv->nand_chip);
int ret;
/* Apply write protection */
gpiod_set_value(priv->gpiod_nwp, 1);
/* Unregister device */
nand_release(mtd_to_nand(mtd));
ret = mtd_device_unregister(mtd);
WARN_ON(ret);
nand_cleanup(mtd_to_nand(mtd));
return 0;
}

File diff suppressed because it is too large.


@ -1494,7 +1494,7 @@ static void atmel_nand_init(struct atmel_nand_controller *nc,
* suitable for DMA.
*/
if (nc->dmac)
chip->options |= NAND_USE_BOUNCE_BUFFER;
chip->options |= NAND_USES_DMA;
/* Default to HW ECC if pmecc is available. */
if (nc->pmecc)


@ -16,63 +16,16 @@
struct au1550nd_ctx {
struct nand_controller controller;
struct nand_chip chip;
int cs;
void __iomem *base;
void (*write_byte)(struct nand_chip *, u_char);
};
/**
* au_read_byte - read one byte from the chip
* @this: NAND chip object
*
* read function for 8bit buswidth
*/
static u_char au_read_byte(struct nand_chip *this)
static struct au1550nd_ctx *chip_to_au_ctx(struct nand_chip *this)
{
u_char ret = readb(this->legacy.IO_ADDR_R);
wmb(); /* drain writebuffer */
return ret;
}
/**
* au_write_byte - write one byte to the chip
* @this: NAND chip object
* @byte: pointer to data byte to write
*
* write function for 8it buswidth
*/
static void au_write_byte(struct nand_chip *this, u_char byte)
{
writeb(byte, this->legacy.IO_ADDR_W);
wmb(); /* drain writebuffer */
}
/**
* au_read_byte16 - read one byte endianness aware from the chip
* @this: NAND chip object
*
* read function for 16bit buswidth with endianness conversion
*/
static u_char au_read_byte16(struct nand_chip *this)
{
u_char ret = (u_char) cpu_to_le16(readw(this->legacy.IO_ADDR_R));
wmb(); /* drain writebuffer */
return ret;
}
/**
* au_write_byte16 - write one byte endianness aware to the chip
* @this: NAND chip object
* @byte: pointer to data byte to write
*
* write function for 16bit buswidth with endianness conversion
*/
static void au_write_byte16(struct nand_chip *this, u_char byte)
{
writew(le16_to_cpu((u16) byte), this->legacy.IO_ADDR_W);
wmb(); /* drain writebuffer */
return container_of(this, struct au1550nd_ctx, chip);
}
/**
@ -83,12 +36,15 @@ static void au_write_byte16(struct nand_chip *this, u_char byte)
*
* write function for 8bit buswidth
*/
static void au_write_buf(struct nand_chip *this, const u_char *buf, int len)
static void au_write_buf(struct nand_chip *this, const void *buf,
unsigned int len)
{
struct au1550nd_ctx *ctx = chip_to_au_ctx(this);
const u8 *p = buf;
int i;
for (i = 0; i < len; i++) {
writeb(buf[i], this->legacy.IO_ADDR_W);
writeb(p[i], ctx->base + MEM_STNAND_DATA);
wmb(); /* drain writebuffer */
}
}
@ -101,12 +57,15 @@ static void au_write_buf(struct nand_chip *this, const u_char *buf, int len)
*
* read function for 8bit buswidth
*/
static void au_read_buf(struct nand_chip *this, u_char *buf, int len)
static void au_read_buf(struct nand_chip *this, void *buf,
unsigned int len)
{
struct au1550nd_ctx *ctx = chip_to_au_ctx(this);
u8 *p = buf;
int i;
for (i = 0; i < len; i++) {
buf[i] = readb(this->legacy.IO_ADDR_R);
p[i] = readb(ctx->base + MEM_STNAND_DATA);
wmb(); /* drain writebuffer */
}
}
@ -119,17 +78,18 @@ static void au_read_buf(struct nand_chip *this, u_char *buf, int len)
*
* write function for 16bit buswidth
*/
static void au_write_buf16(struct nand_chip *this, const u_char *buf, int len)
static void au_write_buf16(struct nand_chip *this, const void *buf,
unsigned int len)
{
int i;
u16 *p = (u16 *) buf;
len >>= 1;
struct au1550nd_ctx *ctx = chip_to_au_ctx(this);
const u16 *p = buf;
unsigned int i;
len >>= 1;
for (i = 0; i < len; i++) {
writew(p[i], this->legacy.IO_ADDR_W);
writew(p[i], ctx->base + MEM_STNAND_DATA);
wmb(); /* drain writebuffer */
}
}
/**
@ -140,218 +100,19 @@ static void au_write_buf16(struct nand_chip *this, const u_char *buf, int len)
*
* read function for 16bit buswidth
*/
static void au_read_buf16(struct nand_chip *this, u_char *buf, int len)
static void au_read_buf16(struct nand_chip *this, void *buf, unsigned int len)
{
int i;
u16 *p = (u16 *) buf;
len >>= 1;
struct au1550nd_ctx *ctx = chip_to_au_ctx(this);
unsigned int i;
u16 *p = buf;
len >>= 1;
for (i = 0; i < len; i++) {
p[i] = readw(this->legacy.IO_ADDR_R);
p[i] = readw(ctx->base + MEM_STNAND_DATA);
wmb(); /* drain writebuffer */
}
}
/* Select the chip by setting nCE to low */
#define NAND_CTL_SETNCE 1
/* Deselect the chip by setting nCE to high */
#define NAND_CTL_CLRNCE 2
/* Select the command latch by setting CLE to high */
#define NAND_CTL_SETCLE 3
/* Deselect the command latch by setting CLE to low */
#define NAND_CTL_CLRCLE 4
/* Select the address latch by setting ALE to high */
#define NAND_CTL_SETALE 5
/* Deselect the address latch by setting ALE to low */
#define NAND_CTL_CLRALE 6
static void au1550_hwcontrol(struct mtd_info *mtd, int cmd)
{
struct nand_chip *this = mtd_to_nand(mtd);
struct au1550nd_ctx *ctx = container_of(this, struct au1550nd_ctx,
chip);
switch (cmd) {
case NAND_CTL_SETCLE:
this->legacy.IO_ADDR_W = ctx->base + MEM_STNAND_CMD;
break;
case NAND_CTL_CLRCLE:
this->legacy.IO_ADDR_W = ctx->base + MEM_STNAND_DATA;
break;
case NAND_CTL_SETALE:
this->legacy.IO_ADDR_W = ctx->base + MEM_STNAND_ADDR;
break;
case NAND_CTL_CLRALE:
this->legacy.IO_ADDR_W = ctx->base + MEM_STNAND_DATA;
/* FIXME: Nobody knows why this is necessary,
* but it works only that way */
udelay(1);
break;
case NAND_CTL_SETNCE:
/* assert (force assert) chip enable */
alchemy_wrsmem((1 << (4 + ctx->cs)), AU1000_MEM_STNDCTL);
break;
case NAND_CTL_CLRNCE:
/* deassert chip enable */
alchemy_wrsmem(0, AU1000_MEM_STNDCTL);
break;
}
this->legacy.IO_ADDR_R = this->legacy.IO_ADDR_W;
wmb(); /* Drain the writebuffer */
}
int au1550_device_ready(struct nand_chip *this)
{
return (alchemy_rdsmem(AU1000_MEM_STSTAT) & 0x1) ? 1 : 0;
}
/**
* au1550_select_chip - control -CE line
* Forbid driving -CE manually permitting the NAND controller to do this.
* Keeping -CE asserted during the whole sector reads interferes with the
* NOR flash and PCMCIA drivers as it causes contention on the static bus.
* We only have to hold -CE low for the NAND read commands since the flash
* chip needs it to be asserted during chip not ready time but the NAND
* controller keeps it released.
*
* @this: NAND chip object
* @chip: chipnumber to select, -1 for deselect
*/
static void au1550_select_chip(struct nand_chip *this, int chip)
{
}
/**
* au1550_command - Send command to NAND device
* @this: NAND chip object
* @command: the command to be sent
* @column: the column address for this command, -1 if none
* @page_addr: the page address for this command, -1 if none
*/
static void au1550_command(struct nand_chip *this, unsigned command,
int column, int page_addr)
{
struct mtd_info *mtd = nand_to_mtd(this);
struct au1550nd_ctx *ctx = container_of(this, struct au1550nd_ctx,
chip);
int ce_override = 0, i;
unsigned long flags = 0;
/* Begin command latch cycle */
au1550_hwcontrol(mtd, NAND_CTL_SETCLE);
/*
* Write out the command to the device.
*/
if (command == NAND_CMD_SEQIN) {
int readcmd;
if (column >= mtd->writesize) {
/* OOB area */
column -= mtd->writesize;
readcmd = NAND_CMD_READOOB;
} else if (column < 256) {
/* First 256 bytes --> READ0 */
readcmd = NAND_CMD_READ0;
} else {
column -= 256;
readcmd = NAND_CMD_READ1;
}
ctx->write_byte(this, readcmd);
}
ctx->write_byte(this, command);
/* Set ALE and clear CLE to start address cycle */
au1550_hwcontrol(mtd, NAND_CTL_CLRCLE);
if (column != -1 || page_addr != -1) {
au1550_hwcontrol(mtd, NAND_CTL_SETALE);
/* Serially input address */
if (column != -1) {
/* Adjust columns for 16 bit buswidth */
if (this->options & NAND_BUSWIDTH_16 &&
!nand_opcode_8bits(command))
column >>= 1;
ctx->write_byte(this, column);
}
if (page_addr != -1) {
ctx->write_byte(this, (u8)(page_addr & 0xff));
if (command == NAND_CMD_READ0 ||
command == NAND_CMD_READ1 ||
command == NAND_CMD_READOOB) {
/*
* NAND controller will release -CE after
* the last address byte is written, so we'll
* have to forcibly assert it. No interrupts
* are allowed while we do this as we don't
* want the NOR flash or PCMCIA drivers to
* steal our precious bytes of data...
*/
ce_override = 1;
local_irq_save(flags);
au1550_hwcontrol(mtd, NAND_CTL_SETNCE);
}
ctx->write_byte(this, (u8)(page_addr >> 8));
if (this->options & NAND_ROW_ADDR_3)
ctx->write_byte(this,
((page_addr >> 16) & 0x0f));
}
/* Latch in address */
au1550_hwcontrol(mtd, NAND_CTL_CLRALE);
}
/*
* Program and erase have their own busy handlers.
* Status and sequential in need no delay.
*/
switch (command) {
case NAND_CMD_PAGEPROG:
case NAND_CMD_ERASE1:
case NAND_CMD_ERASE2:
case NAND_CMD_SEQIN:
case NAND_CMD_STATUS:
return;
case NAND_CMD_RESET:
break;
case NAND_CMD_READ0:
case NAND_CMD_READ1:
case NAND_CMD_READOOB:
/* Check if we're really driving -CE low (just in case) */
if (unlikely(!ce_override))
break;
/* Apply a short delay always to ensure that we do wait tWB. */
ndelay(100);
/* Wait for a chip to become ready... */
for (i = this->legacy.chip_delay;
!this->legacy.dev_ready(this) && i > 0; --i)
udelay(1);
/* Release -CE and re-enable interrupts. */
au1550_hwcontrol(mtd, NAND_CTL_CLRNCE);
local_irq_restore(flags);
return;
}
/* Apply this short delay always to ensure that we do wait tWB. */
ndelay(100);
while(!this->legacy.dev_ready(this));
}
static int find_nand_cs(unsigned long nand_base)
{
void __iomem *base =
@ -373,6 +134,112 @@ static int find_nand_cs(unsigned long nand_base)
return -ENODEV;
}
static int au1550nd_waitrdy(struct nand_chip *this, unsigned int timeout_ms)
{
unsigned long timeout_jiffies = jiffies;
timeout_jiffies += msecs_to_jiffies(timeout_ms) + 1;
do {
if (alchemy_rdsmem(AU1000_MEM_STSTAT) & 0x1)
return 0;
usleep_range(10, 100);
} while (time_before(jiffies, timeout_jiffies));
return -ETIMEDOUT;
}
static int au1550nd_exec_instr(struct nand_chip *this,
const struct nand_op_instr *instr)
{
struct au1550nd_ctx *ctx = chip_to_au_ctx(this);
unsigned int i;
int ret = 0;
switch (instr->type) {
case NAND_OP_CMD_INSTR:
writeb(instr->ctx.cmd.opcode,
ctx->base + MEM_STNAND_CMD);
/* Drain the writebuffer */
wmb();
break;
case NAND_OP_ADDR_INSTR:
for (i = 0; i < instr->ctx.addr.naddrs; i++) {
writeb(instr->ctx.addr.addrs[i],
ctx->base + MEM_STNAND_ADDR);
/* Drain the writebuffer */
wmb();
}
break;
case NAND_OP_DATA_IN_INSTR:
if ((this->options & NAND_BUSWIDTH_16) &&
!instr->ctx.data.force_8bit)
au_read_buf16(this, instr->ctx.data.buf.in,
instr->ctx.data.len);
else
au_read_buf(this, instr->ctx.data.buf.in,
instr->ctx.data.len);
break;
case NAND_OP_DATA_OUT_INSTR:
if ((this->options & NAND_BUSWIDTH_16) &&
!instr->ctx.data.force_8bit)
au_write_buf16(this, instr->ctx.data.buf.out,
instr->ctx.data.len);
else
au_write_buf(this, instr->ctx.data.buf.out,
instr->ctx.data.len);
break;
case NAND_OP_WAITRDY_INSTR:
ret = au1550nd_waitrdy(this, instr->ctx.waitrdy.timeout_ms);
break;
default:
return -EINVAL;
}
if (instr->delay_ns)
ndelay(instr->delay_ns);
return ret;
}
static int au1550nd_exec_op(struct nand_chip *this,
const struct nand_operation *op,
bool check_only)
{
struct au1550nd_ctx *ctx = chip_to_au_ctx(this);
unsigned int i;
int ret;
if (check_only)
return 0;
/* assert (force assert) chip enable */
alchemy_wrsmem((1 << (4 + ctx->cs)), AU1000_MEM_STNDCTL);
/* Drain the writebuffer */
wmb();
for (i = 0; i < op->ninstrs; i++) {
ret = au1550nd_exec_instr(this, &op->instrs[i]);
if (ret)
break;
}
/* deassert chip enable */
alchemy_wrsmem(0, AU1000_MEM_STNDCTL);
/* Drain the writebuffer */
wmb();
return ret;
}
static const struct nand_controller_ops au1550nd_ops = {
.exec_op = au1550nd_exec_op,
};
static int au1550nd_probe(struct platform_device *pdev)
{
struct au1550nd_platdata *pd;
@ -424,23 +291,15 @@ static int au1550nd_probe(struct platform_device *pdev)
}
ctx->cs = cs;
this->legacy.dev_ready = au1550_device_ready;
this->legacy.select_chip = au1550_select_chip;
this->legacy.cmdfunc = au1550_command;
/* 30 us command delay time */
this->legacy.chip_delay = 30;
nand_controller_init(&ctx->controller);
ctx->controller.ops = &au1550nd_ops;
this->controller = &ctx->controller;
this->ecc.mode = NAND_ECC_SOFT;
this->ecc.algo = NAND_ECC_HAMMING;
if (pd->devwidth)
this->options |= NAND_BUSWIDTH_16;
this->legacy.read_byte = (pd->devwidth) ? au_read_byte16 : au_read_byte;
ctx->write_byte = (pd->devwidth) ? au_write_byte16 : au_write_byte;
this->legacy.write_buf = (pd->devwidth) ? au_write_buf16 : au_write_buf;
this->legacy.read_buf = (pd->devwidth) ? au_read_buf16 : au_read_buf;
ret = nand_scan(this, 1);
if (ret) {
dev_err(&pdev->dev, "NAND scan failed with %d\n", ret);
@ -466,8 +325,12 @@ static int au1550nd_remove(struct platform_device *pdev)
{
struct au1550nd_ctx *ctx = platform_get_drvdata(pdev);
struct resource *r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
struct nand_chip *chip = &ctx->chip;
int ret;
nand_release(&ctx->chip);
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
iounmap(ctx->base);
release_mem_region(r->start, 0x1000);
kfree(ctx);


@ -60,8 +60,12 @@ static int bcm47xxnflash_probe(struct platform_device *pdev)
static int bcm47xxnflash_remove(struct platform_device *pdev)
{
struct bcm47xxnflash *nflash = platform_get_drvdata(pdev);
struct nand_chip *chip = &nflash->nand_chip;
int ret;
nand_release(&nflash->nand_chip);
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
return 0;
}


@ -4,7 +4,6 @@
*/
#include <linux/clk.h>
#include <linux/version.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/delay.h>
@ -264,6 +263,7 @@ struct brcmnand_controller {
const unsigned int *block_sizes;
unsigned int max_page_size;
const unsigned int *page_sizes;
unsigned int page_size_shift;
unsigned int max_oob;
u32 features;
@ -338,8 +338,38 @@ enum brcmnand_reg {
BRCMNAND_FC_BASE,
};
/* BRCMNAND v4.0 */
static const u16 brcmnand_regs_v40[] = {
/* BRCMNAND v2.1-v2.2 */
static const u16 brcmnand_regs_v21[] = {
[BRCMNAND_CMD_START] = 0x04,
[BRCMNAND_CMD_EXT_ADDRESS] = 0x08,
[BRCMNAND_CMD_ADDRESS] = 0x0c,
[BRCMNAND_INTFC_STATUS] = 0x5c,
[BRCMNAND_CS_SELECT] = 0x14,
[BRCMNAND_CS_XOR] = 0x18,
[BRCMNAND_LL_OP] = 0,
[BRCMNAND_CS0_BASE] = 0x40,
[BRCMNAND_CS1_BASE] = 0,
[BRCMNAND_CORR_THRESHOLD] = 0,
[BRCMNAND_CORR_THRESHOLD_EXT] = 0,
[BRCMNAND_UNCORR_COUNT] = 0,
[BRCMNAND_CORR_COUNT] = 0,
[BRCMNAND_CORR_EXT_ADDR] = 0x60,
[BRCMNAND_CORR_ADDR] = 0x64,
[BRCMNAND_UNCORR_EXT_ADDR] = 0x68,
[BRCMNAND_UNCORR_ADDR] = 0x6c,
[BRCMNAND_SEMAPHORE] = 0x50,
[BRCMNAND_ID] = 0x54,
[BRCMNAND_ID_EXT] = 0,
[BRCMNAND_LL_RDATA] = 0,
[BRCMNAND_OOB_READ_BASE] = 0x20,
[BRCMNAND_OOB_READ_10_BASE] = 0,
[BRCMNAND_OOB_WRITE_BASE] = 0x30,
[BRCMNAND_OOB_WRITE_10_BASE] = 0,
[BRCMNAND_FC_BASE] = 0x200,
};
/* BRCMNAND v3.3-v4.0 */
static const u16 brcmnand_regs_v33[] = {
[BRCMNAND_CMD_START] = 0x04,
[BRCMNAND_CMD_EXT_ADDRESS] = 0x08,
[BRCMNAND_CMD_ADDRESS] = 0x0c,
@ -536,6 +566,9 @@ enum {
CFG_BUS_WIDTH = BIT(CFG_BUS_WIDTH_SHIFT),
CFG_DEVICE_SIZE_SHIFT = 24,
/* Only for v2.1 */
CFG_PAGE_SIZE_SHIFT_v2_1 = 30,
/* Only for pre-v7.1 (with no CFG_EXT register) */
CFG_PAGE_SIZE_SHIFT = 20,
CFG_BLK_SIZE_SHIFT = 28,
@ -571,12 +604,16 @@ static int brcmnand_revision_init(struct brcmnand_controller *ctrl)
{
static const unsigned int block_sizes_v6[] = { 8, 16, 128, 256, 512, 1024, 2048, 0 };
static const unsigned int block_sizes_v4[] = { 16, 128, 8, 512, 256, 1024, 2048, 0 };
static const unsigned int page_sizes[] = { 512, 2048, 4096, 8192, 0 };
static const unsigned int block_sizes_v2_2[] = { 16, 128, 8, 512, 256, 0 };
static const unsigned int block_sizes_v2_1[] = { 16, 128, 8, 512, 0 };
static const unsigned int page_sizes_v3_4[] = { 512, 2048, 4096, 8192, 0 };
static const unsigned int page_sizes_v2_2[] = { 512, 2048, 4096, 0 };
static const unsigned int page_sizes_v2_1[] = { 512, 2048, 0 };
ctrl->nand_version = nand_readreg(ctrl, 0) & 0xffff;
/* Only support v4.0+? */
if (ctrl->nand_version < 0x0400) {
/* Only support v2.1+ */
if (ctrl->nand_version < 0x0201) {
dev_err(ctrl->dev, "version %#x not supported\n",
ctrl->nand_version);
return -ENODEV;
@ -591,8 +628,10 @@ static int brcmnand_revision_init(struct brcmnand_controller *ctrl)
ctrl->reg_offsets = brcmnand_regs_v60;
else if (ctrl->nand_version >= 0x0500)
ctrl->reg_offsets = brcmnand_regs_v50;
else if (ctrl->nand_version >= 0x0400)
ctrl->reg_offsets = brcmnand_regs_v40;
else if (ctrl->nand_version >= 0x0303)
ctrl->reg_offsets = brcmnand_regs_v33;
else if (ctrl->nand_version >= 0x0201)
ctrl->reg_offsets = brcmnand_regs_v21;
/* Chip-select stride */
if (ctrl->nand_version >= 0x0701)
@ -606,8 +645,9 @@ static int brcmnand_revision_init(struct brcmnand_controller *ctrl)
} else {
ctrl->cs_offsets = brcmnand_cs_offsets;
/* v5.0 and earlier has a different CS0 offset layout */
if (ctrl->nand_version <= 0x0500)
/* v3.3-5.0 have a different CS0 offset layout */
if (ctrl->nand_version >= 0x0303 &&
ctrl->nand_version <= 0x0500)
ctrl->cs0_offsets = brcmnand_cs_offsets_cs0;
}
@ -617,14 +657,32 @@ static int brcmnand_revision_init(struct brcmnand_controller *ctrl)
ctrl->max_page_size = 16 * 1024;
ctrl->max_block_size = 2 * 1024 * 1024;
} else {
ctrl->page_sizes = page_sizes;
if (ctrl->nand_version >= 0x0304)
ctrl->page_sizes = page_sizes_v3_4;
else if (ctrl->nand_version >= 0x0202)
ctrl->page_sizes = page_sizes_v2_2;
else
ctrl->page_sizes = page_sizes_v2_1;
if (ctrl->nand_version >= 0x0202)
ctrl->page_size_shift = CFG_PAGE_SIZE_SHIFT;
else
ctrl->page_size_shift = CFG_PAGE_SIZE_SHIFT_v2_1;
if (ctrl->nand_version >= 0x0600)
ctrl->block_sizes = block_sizes_v6;
else
else if (ctrl->nand_version >= 0x0400)
ctrl->block_sizes = block_sizes_v4;
else if (ctrl->nand_version >= 0x0202)
ctrl->block_sizes = block_sizes_v2_2;
else
ctrl->block_sizes = block_sizes_v2_1;
if (ctrl->nand_version < 0x0400) {
ctrl->max_page_size = 4096;
if (ctrl->nand_version < 0x0202)
ctrl->max_page_size = 2048;
else
ctrl->max_page_size = 4096;
ctrl->max_block_size = 512 * 1024;
}
}
@ -810,6 +868,9 @@ static void brcmnand_wr_corr_thresh(struct brcmnand_host *host, u8 val)
enum brcmnand_reg reg = BRCMNAND_CORR_THRESHOLD;
int cs = host->cs;
if (!ctrl->reg_offsets[reg])
return;
if (ctrl->nand_version == 0x0702)
bits = 7;
else if (ctrl->nand_version >= 0x0600)
@ -868,8 +929,10 @@ static inline u32 brcmnand_spare_area_mask(struct brcmnand_controller *ctrl)
return GENMASK(7, 0);
else if (ctrl->nand_version >= 0x0600)
return GENMASK(6, 0);
else
else if (ctrl->nand_version >= 0x0303)
return GENMASK(5, 0);
else
return GENMASK(4, 0);
}
#define NAND_ACC_CONTROL_ECC_SHIFT 16
@ -1100,30 +1163,30 @@ static int brcmnand_hamming_ooblayout_free(struct mtd_info *mtd, int section,
struct brcmnand_cfg *cfg = &host->hwcfg;
int sas = cfg->spare_area_size << cfg->sector_size_1k;
int sectors = cfg->page_size / (512 << cfg->sector_size_1k);
u32 next;
if (section >= sectors * 2)
if (section > sectors)
return -ERANGE;
oobregion->offset = (section / 2) * sas;
next = (section * sas);
if (section < sectors)
next += 6;
if (section & 1) {
oobregion->offset += 9;
oobregion->length = 7;
if (section) {
oobregion->offset = ((section - 1) * sas) + 9;
} else {
oobregion->length = 6;
/* First sector of each page may have BBI */
if (!section) {
/*
* Small-page NAND use byte 6 for BBI while large-page
* NAND use byte 0.
*/
if (cfg->page_size > 512)
oobregion->offset++;
oobregion->length--;
if (cfg->page_size > 512) {
/* Large page NAND uses first 2 bytes for BBI */
oobregion->offset = 2;
} else {
/* Small page NAND uses last byte before ECC for BBI */
oobregion->offset = 0;
next--;
}
}
oobregion->length = next - oobregion->offset;
return 0;
}
@ -2018,28 +2081,31 @@ static int brcmnand_read_by_pio(struct mtd_info *mtd, struct nand_chip *chip,
static int brcmstb_nand_verify_erased_page(struct mtd_info *mtd,
struct nand_chip *chip, void *buf, u64 addr)
{
int i, sas;
void *oob = chip->oob_poi;
struct mtd_oob_region ecc;
int i;
int bitflips = 0;
int page = addr >> chip->page_shift;
int ret;
void *ecc_bytes;
void *ecc_chunk;
if (!buf)
buf = nand_get_data_buf(chip);
sas = mtd->oobsize / chip->ecc.steps;
/* read without ecc for verification */
ret = chip->ecc.read_page_raw(chip, buf, true, page);
if (ret)
return ret;
for (i = 0; i < chip->ecc.steps; i++, oob += sas) {
for (i = 0; i < chip->ecc.steps; i++) {
ecc_chunk = buf + chip->ecc.size * i;
ret = nand_check_erased_ecc_chunk(ecc_chunk,
chip->ecc.size,
oob, sas, NULL, 0,
mtd_ooblayout_ecc(mtd, i, &ecc);
ecc_bytes = chip->oob_poi + ecc.offset;
ret = nand_check_erased_ecc_chunk(ecc_chunk, chip->ecc.size,
ecc_bytes, ecc.length,
NULL, 0,
chip->ecc.strength);
if (ret < 0)
return ret;
@ -2377,7 +2443,7 @@ static int brcmnand_set_cfg(struct brcmnand_host *host,
(!!(cfg->device_width == 16) << CFG_BUS_WIDTH_SHIFT) |
(device_size << CFG_DEVICE_SIZE_SHIFT);
if (cfg_offs == cfg_ext_offs) {
tmp |= (page_size << CFG_PAGE_SIZE_SHIFT) |
tmp |= (page_size << ctrl->page_size_shift) |
(block_size << CFG_BLK_SIZE_SHIFT);
nand_writereg(ctrl, cfg_offs, tmp);
} else {
@ -2389,9 +2455,11 @@ static int brcmnand_set_cfg(struct brcmnand_host *host,
tmp = nand_readreg(ctrl, acc_control_offs);
tmp &= ~brcmnand_ecc_level_mask(ctrl);
tmp |= cfg->ecc_level << NAND_ACC_CONTROL_ECC_SHIFT;
tmp &= ~brcmnand_spare_area_mask(ctrl);
tmp |= cfg->spare_area_size;
if (ctrl->nand_version >= 0x0302) {
tmp |= cfg->ecc_level << NAND_ACC_CONTROL_ECC_SHIFT;
tmp |= cfg->spare_area_size;
}
nand_writereg(ctrl, acc_control_offs, tmp);
brcmnand_set_sector_size_1k(host, cfg->sector_size_1k);
@ -2577,7 +2645,7 @@ static int brcmnand_attach_chip(struct nand_chip *chip)
* to/from, and have nand_base pass us a bounce buffer instead, as
* needed.
*/
chip->options |= NAND_USE_BOUNCE_BUFFER;
chip->options |= NAND_USES_DMA;
if (chip->bbt_options & NAND_BBT_USE_FLASH)
chip->bbt_options |= NAND_BBT_NO_OOB;
@ -2764,6 +2832,8 @@ const struct dev_pm_ops brcmnand_pm_ops = {
EXPORT_SYMBOL_GPL(brcmnand_pm_ops);
static const struct of_device_id brcmnand_of_match[] = {
{ .compatible = "brcm,brcmnand-v2.1" },
{ .compatible = "brcm,brcmnand-v2.2" },
{ .compatible = "brcm,brcmnand-v4.0" },
{ .compatible = "brcm,brcmnand-v5.0" },
{ .compatible = "brcm,brcmnand-v6.0" },
@ -3045,9 +3115,15 @@ int brcmnand_remove(struct platform_device *pdev)
{
struct brcmnand_controller *ctrl = dev_get_drvdata(&pdev->dev);
struct brcmnand_host *host;
struct nand_chip *chip;
int ret;
list_for_each_entry(host, &ctrl->host_list, node)
nand_release(&host->chip);
list_for_each_entry(host, &ctrl->host_list, node) {
chip = &host->chip;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
}
clk_disable_unprepare(ctrl->clk);


@ -2223,10 +2223,12 @@ static int cadence_nand_exec_op(struct nand_chip *chip,
const struct nand_operation *op,
bool check_only)
{
int status = cadence_nand_select_target(chip);
if (!check_only) {
int status = cadence_nand_select_target(chip);
if (status)
return status;
if (status)
return status;
}
return nand_op_parser_exec_op(chip, &cadence_nand_op_parser, op,
check_only);
@ -2592,7 +2594,7 @@ cadence_nand_setup_data_interface(struct nand_chip *chip, int chipnr,
return 0;
}
int cadence_nand_attach_chip(struct nand_chip *chip)
static int cadence_nand_attach_chip(struct nand_chip *chip)
{
struct cdns_nand_ctrl *cdns_ctrl = to_cdns_nand_ctrl(chip->controller);
struct cdns_nand_chip *cdns_chip = to_cdns_nand_chip(chip);
@ -2778,9 +2780,14 @@ static int cadence_nand_chip_init(struct cdns_nand_ctrl *cdns_ctrl,
static void cadence_nand_chips_cleanup(struct cdns_nand_ctrl *cdns_ctrl)
{
struct cdns_nand_chip *entry, *temp;
struct nand_chip *chip;
int ret;
list_for_each_entry_safe(entry, temp, &cdns_ctrl->chips, node) {
nand_release(&entry->chip);
chip = &entry->chip;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
list_del(&entry->node);
}
}


@ -546,11 +546,6 @@ static int cafe_nand_write_page_lowlevel(struct nand_chip *chip,
return nand_prog_page_end_op(chip);
}
static int cafe_nand_block_bad(struct nand_chip *chip, loff_t ofs)
{
return 0;
}
/* F_2[X]/(X**6+X+1) */
static unsigned short gf64_mul(u8 a, u8 b)
{
@ -718,10 +713,8 @@ static int cafe_nand_probe(struct pci_dev *pdev,
/* Enable the following for a flash based bad block table */
cafe->nand.bbt_options = NAND_BBT_USE_FLASH;
if (skipbbt) {
cafe->nand.options |= NAND_SKIP_BBTSCAN;
cafe->nand.legacy.block_bad = cafe_nand_block_bad;
}
if (skipbbt)
cafe->nand.options |= NAND_SKIP_BBTSCAN | NAND_NO_BBM_QUIRK;
if (numtimings && numtimings != 3) {
dev_warn(&cafe->pdev->dev, "%d timing register values ignored; precisely three are required\n", numtimings);
@ -814,11 +807,14 @@ static void cafe_nand_remove(struct pci_dev *pdev)
struct mtd_info *mtd = pci_get_drvdata(pdev);
struct nand_chip *chip = mtd_to_nand(mtd);
struct cafe_priv *cafe = nand_get_controller_data(chip);
int ret;
/* Disable NAND IRQ in global IRQ mask register */
cafe_writel(cafe, ~1 & cafe_readl(cafe, GLOBAL_IRQ_MASK), GLOBAL_IRQ_MASK);
free_irq(pdev->irq, mtd);
nand_release(chip);
ret = mtd_device_unregister(mtd);
WARN_ON(ret);
nand_cleanup(chip);
free_rs(cafe->rs);
pci_iounmap(pdev, cafe->mmio);
dma_free_coherent(&cafe->pdev->dev, 2112, cafe->dmabuf, cafe->dmaaddr);


@ -1,236 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2006 Compulab, Ltd.
* Mike Rapoport <mike@compulab.co.il>
*
* Derived from drivers/mtd/nand/h1910.c (removed in v3.10)
* Copyright (C) 2002 Marius Gröger (mag@sysgo.de)
* Copyright (c) 2001 Thomas Gleixner (gleixner@autronix.de)
*
* Overview:
* This is a device driver for the NAND flash device found on the
* CM-X270 board.
*/
#include <linux/mtd/rawnand.h>
#include <linux/mtd/partitions.h>
#include <linux/slab.h>
#include <linux/gpio.h>
#include <linux/module.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/mach-types.h>
#include <mach/pxa2xx-regs.h>
#define GPIO_NAND_CS (11)
#define GPIO_NAND_RB (89)
/* MTD structure for CM-X270 board */
static struct mtd_info *cmx270_nand_mtd;
/* remaped IO address of the device */
static void __iomem *cmx270_nand_io;
/*
* Define static partitions for flash device
*/
static const struct mtd_partition partition_info[] = {
[0] = {
.name = "cmx270-0",
.offset = 0,
.size = MTDPART_SIZ_FULL
}
};
#define NUM_PARTITIONS (ARRAY_SIZE(partition_info))
static u_char cmx270_read_byte(struct nand_chip *this)
{
return (readl(this->legacy.IO_ADDR_R) >> 16);
}
static void cmx270_write_buf(struct nand_chip *this, const u_char *buf,
int len)
{
int i;
for (i=0; i<len; i++)
writel((*buf++ << 16), this->legacy.IO_ADDR_W);
}
static void cmx270_read_buf(struct nand_chip *this, u_char *buf, int len)
{
int i;
for (i=0; i<len; i++)
*buf++ = readl(this->legacy.IO_ADDR_R) >> 16;
}
static inline void nand_cs_on(void)
{
gpio_set_value(GPIO_NAND_CS, 0);
}
static void nand_cs_off(void)
{
dsb();
gpio_set_value(GPIO_NAND_CS, 1);
}
/*
* hardware specific access to control-lines
*/
static void cmx270_hwcontrol(struct nand_chip *this, int dat,
unsigned int ctrl)
{
unsigned int nandaddr = (unsigned int)this->legacy.IO_ADDR_W;
dsb();
if (ctrl & NAND_CTRL_CHANGE) {
if ( ctrl & NAND_ALE )
nandaddr |= (1 << 3);
else
nandaddr &= ~(1 << 3);
if ( ctrl & NAND_CLE )
nandaddr |= (1 << 2);
else
nandaddr &= ~(1 << 2);
if ( ctrl & NAND_NCE )
nand_cs_on();
else
nand_cs_off();
}
dsb();
this->legacy.IO_ADDR_W = (void __iomem*)nandaddr;
if (dat != NAND_CMD_NONE)
writel((dat << 16), this->legacy.IO_ADDR_W);
dsb();
}
/*
* read device ready pin
*/
static int cmx270_device_ready(struct nand_chip *this)
{
dsb();
return (gpio_get_value(GPIO_NAND_RB));
}
/*
* Main initialization routine
*/
static int __init cmx270_init(void)
{
struct nand_chip *this;
int ret;
if (!(machine_is_armcore() && cpu_is_pxa27x()))
return -ENODEV;
ret = gpio_request(GPIO_NAND_CS, "NAND CS");
if (ret) {
pr_warn("CM-X270: failed to request NAND CS gpio\n");
return ret;
}
gpio_direction_output(GPIO_NAND_CS, 1);
ret = gpio_request(GPIO_NAND_RB, "NAND R/B");
if (ret) {
pr_warn("CM-X270: failed to request NAND R/B gpio\n");
goto err_gpio_request;
}
gpio_direction_input(GPIO_NAND_RB);
/* Allocate memory for MTD device structure and private data */
this = kzalloc(sizeof(struct nand_chip), GFP_KERNEL);
if (!this) {
ret = -ENOMEM;
goto err_kzalloc;
}
cmx270_nand_io = ioremap(PXA_CS1_PHYS, 12);
if (!cmx270_nand_io) {
pr_debug("Unable to ioremap NAND device\n");
ret = -EINVAL;
goto err_ioremap;
}
cmx270_nand_mtd = nand_to_mtd(this);
/* Link the private data with the MTD structure */
cmx270_nand_mtd->owner = THIS_MODULE;
/* insert callbacks */
this->legacy.IO_ADDR_R = cmx270_nand_io;
this->legacy.IO_ADDR_W = cmx270_nand_io;
this->legacy.cmd_ctrl = cmx270_hwcontrol;
this->legacy.dev_ready = cmx270_device_ready;
/* 15 us command delay time */
this->legacy.chip_delay = 20;
this->ecc.mode = NAND_ECC_SOFT;
this->ecc.algo = NAND_ECC_HAMMING;
/* read/write functions */
this->legacy.read_byte = cmx270_read_byte;
this->legacy.read_buf = cmx270_read_buf;
this->legacy.write_buf = cmx270_write_buf;
/* Scan to find existence of the device */
ret = nand_scan(this, 1);
if (ret) {
pr_notice("No NAND device\n");
goto err_scan;
}
/* Register the partitions */
ret = mtd_device_register(cmx270_nand_mtd, partition_info,
NUM_PARTITIONS);
if (ret)
goto err_scan;
/* Return happy */
return 0;
err_scan:
iounmap(cmx270_nand_io);
err_ioremap:
kfree(this);
err_kzalloc:
gpio_free(GPIO_NAND_RB);
err_gpio_request:
gpio_free(GPIO_NAND_CS);
return ret;
}
module_init(cmx270_init);
/*
* Clean up routine
*/
static void __exit cmx270_cleanup(void)
{
/* Release resources, unregister device */
nand_release(mtd_to_nand(cmx270_nand_mtd));
gpio_free(GPIO_NAND_RB);
gpio_free(GPIO_NAND_CS);
iounmap(cmx270_nand_io);
kfree(mtd_to_nand(cmx270_nand_mtd));
}
module_exit(cmx270_cleanup);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Mike Rapoport <mike@compulab.co.il>");
MODULE_DESCRIPTION("NAND flash driver for Compulab CM-X270 Module");


@ -21,9 +21,9 @@
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/partitions.h>
#include <linux/iopoll.h>
#include <asm/msr.h>
#include <asm/io.h>
#define NR_CS553X_CONTROLLERS 4
@ -89,76 +89,151 @@
#define CS_NAND_ECC_CLRECC (1<<1)
#define CS_NAND_ECC_ENECC (1<<0)
static void cs553x_read_buf(struct nand_chip *this, u_char *buf, int len)
struct cs553x_nand_controller {
struct nand_controller base;
struct nand_chip chip;
void __iomem *mmio;
};
static struct cs553x_nand_controller *
to_cs553x(struct nand_controller *controller)
{
return container_of(controller, struct cs553x_nand_controller, base);
}
static int cs553x_write_ctrl_byte(struct cs553x_nand_controller *cs553x,
u32 ctl, u8 data)
{
u8 status;
int ret;
writeb(ctl, cs553x->mmio + MM_NAND_CTL);
writeb(data, cs553x->mmio + MM_NAND_IO);
ret = readb_poll_timeout_atomic(cs553x->mmio + MM_NAND_STS, status,
!(status & CS_NAND_CTLR_BUSY), 1,
100000);
if (ret)
return ret;
return 0;
}
static void cs553x_data_in(struct cs553x_nand_controller *cs553x, void *buf,
unsigned int len)
{
writeb(0, cs553x->mmio + MM_NAND_CTL);
while (unlikely(len > 0x800)) {
memcpy_fromio(buf, this->legacy.IO_ADDR_R, 0x800);
memcpy_fromio(buf, cs553x->mmio, 0x800);
buf += 0x800;
len -= 0x800;
}
memcpy_fromio(buf, this->legacy.IO_ADDR_R, len);
memcpy_fromio(buf, cs553x->mmio, len);
}
static void cs553x_write_buf(struct nand_chip *this, const u_char *buf, int len)
static void cs553x_data_out(struct cs553x_nand_controller *cs553x,
const void *buf, unsigned int len)
{
writeb(0, cs553x->mmio + MM_NAND_CTL);
while (unlikely(len > 0x800)) {
memcpy_toio(this->legacy.IO_ADDR_R, buf, 0x800);
memcpy_toio(cs553x->mmio, buf, 0x800);
buf += 0x800;
len -= 0x800;
}
memcpy_toio(this->legacy.IO_ADDR_R, buf, len);
memcpy_toio(cs553x->mmio, buf, len);
}
static unsigned char cs553x_read_byte(struct nand_chip *this)
static int cs553x_wait_ready(struct cs553x_nand_controller *cs553x,
unsigned int timeout_ms)
{
return readb(this->legacy.IO_ADDR_R);
u8 mask = CS_NAND_CTLR_BUSY | CS_NAND_STS_FLASH_RDY;
u8 status;
return readb_poll_timeout(cs553x->mmio + MM_NAND_STS, status,
(status & mask) == CS_NAND_STS_FLASH_RDY, 100,
timeout_ms * 1000);
}
static void cs553x_write_byte(struct nand_chip *this, u_char byte)
static int cs553x_exec_instr(struct cs553x_nand_controller *cs553x,
const struct nand_op_instr *instr)
{
int i = 100000;
unsigned int i;
int ret = 0;
while (i && readb(this->legacy.IO_ADDR_R + MM_NAND_STS) & CS_NAND_CTLR_BUSY) {
udelay(1);
i--;
switch (instr->type) {
case NAND_OP_CMD_INSTR:
ret = cs553x_write_ctrl_byte(cs553x, CS_NAND_CTL_CLE,
instr->ctx.cmd.opcode);
break;
case NAND_OP_ADDR_INSTR:
for (i = 0; i < instr->ctx.addr.naddrs; i++) {
ret = cs553x_write_ctrl_byte(cs553x, CS_NAND_CTL_ALE,
instr->ctx.addr.addrs[i]);
if (ret)
break;
}
break;
case NAND_OP_DATA_IN_INSTR:
cs553x_data_in(cs553x, instr->ctx.data.buf.in,
instr->ctx.data.len);
break;
case NAND_OP_DATA_OUT_INSTR:
cs553x_data_out(cs553x, instr->ctx.data.buf.out,
instr->ctx.data.len);
break;
case NAND_OP_WAITRDY_INSTR:
ret = cs553x_wait_ready(cs553x, instr->ctx.waitrdy.timeout_ms);
break;
}
writeb(byte, this->legacy.IO_ADDR_W + 0x801);
if (instr->delay_ns)
ndelay(instr->delay_ns);
return ret;
}
static void cs553x_hwcontrol(struct nand_chip *this, int cmd,
unsigned int ctrl)
static int cs553x_exec_op(struct nand_chip *this,
const struct nand_operation *op,
bool check_only)
{
void __iomem *mmio_base = this->legacy.IO_ADDR_R;
if (ctrl & NAND_CTRL_CHANGE) {
unsigned char ctl = (ctrl & ~NAND_CTRL_CHANGE ) ^ 0x01;
writeb(ctl, mmio_base + MM_NAND_CTL);
struct cs553x_nand_controller *cs553x = to_cs553x(this->controller);
unsigned int i;
int ret;
if (check_only)
return true;
/* De-assert the CE pin */
writeb(0, cs553x->mmio + MM_NAND_CTL);
for (i = 0; i < op->ninstrs; i++) {
ret = cs553x_exec_instr(cs553x, &op->instrs[i]);
if (ret)
break;
}
if (cmd != NAND_CMD_NONE)
cs553x_write_byte(this, cmd);
}
static int cs553x_device_ready(struct nand_chip *this)
{
void __iomem *mmio_base = this->legacy.IO_ADDR_R;
unsigned char foo = readb(mmio_base + MM_NAND_STS);
/* Re-assert the CE pin. */
writeb(CS_NAND_CTL_CE, cs553x->mmio + MM_NAND_CTL);
return (foo & CS_NAND_STS_FLASH_RDY) && !(foo & CS_NAND_CTLR_BUSY);
return ret;
}
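
For reference, a minimal sketch of the kind of operation the core hands to ->exec_op(), built with the instruction macros from <linux/mtd/rawnand.h> (cs, chip and ret are assumed to be in scope; delays are simplified to 0):

	u8 status;
	const struct nand_op_instr instrs[] = {
		NAND_OP_CMD(NAND_CMD_STATUS, 0),
		NAND_OP_8BIT_DATA_IN(1, &status, 0),
	};
	struct nand_operation op = NAND_OPERATION(cs, instrs);

	/* Read the chip's status register through the controller's exec_op(). */
	ret = chip->controller->ops->exec_op(chip, &op, false);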
static void cs_enable_hwecc(struct nand_chip *this, int mode)
{
void __iomem *mmio_base = this->legacy.IO_ADDR_R;
struct cs553x_nand_controller *cs553x = to_cs553x(this->controller);
writeb(0x07, mmio_base + MM_NAND_ECC_CTL);
writeb(0x07, cs553x->mmio + MM_NAND_ECC_CTL);
}
static int cs_calculate_ecc(struct nand_chip *this, const u_char *dat,
u_char *ecc_code)
{
struct cs553x_nand_controller *cs553x = to_cs553x(this->controller);
uint32_t ecc;
void __iomem *mmio_base = this->legacy.IO_ADDR_R;
ecc = readl(mmio_base + MM_NAND_STS);
ecc = readl(cs553x->mmio + MM_NAND_STS);
ecc_code[1] = ecc >> 8;
ecc_code[0] = ecc >> 16;
@ -166,10 +241,15 @@ static int cs_calculate_ecc(struct nand_chip *this, const u_char *dat,
return 0;
}
static struct mtd_info *cs553x_mtd[4];
static struct cs553x_nand_controller *controllers[4];
static const struct nand_controller_ops cs553x_nand_controller_ops = {
.exec_op = cs553x_exec_op,
};
static int __init cs553x_init_one(int cs, int mmio, unsigned long adr)
{
struct cs553x_nand_controller *controller;
int err = 0;
struct nand_chip *this;
struct mtd_info *new_mtd;
@ -183,33 +263,29 @@ static int __init cs553x_init_one(int cs, int mmio, unsigned long adr)
}
/* Allocate memory for MTD device structure and private data */
this = kzalloc(sizeof(struct nand_chip), GFP_KERNEL);
if (!this) {
controller = kzalloc(sizeof(*controller), GFP_KERNEL);
if (!controller) {
err = -ENOMEM;
goto out;
}
this = &controller->chip;
nand_controller_init(&controller->base);
controller->base.ops = &cs553x_nand_controller_ops;
this->controller = &controller->base;
new_mtd = nand_to_mtd(this);
/* Link the private data with the MTD structure */
new_mtd->owner = THIS_MODULE;
/* map physical address */
this->legacy.IO_ADDR_R = this->legacy.IO_ADDR_W = ioremap(adr, 4096);
if (!this->legacy.IO_ADDR_R) {
controller->mmio = ioremap(adr, 4096);
if (!controller->mmio) {
pr_warn("ioremap cs553x NAND @0x%08lx failed\n", adr);
err = -EIO;
goto out_mtd;
}
this->legacy.cmd_ctrl = cs553x_hwcontrol;
this->legacy.dev_ready = cs553x_device_ready;
this->legacy.read_byte = cs553x_read_byte;
this->legacy.read_buf = cs553x_read_buf;
this->legacy.write_buf = cs553x_write_buf;
this->legacy.chip_delay = 0;
this->ecc.mode = NAND_ECC_HW;
this->ecc.size = 256;
this->ecc.bytes = 3;
@ -232,15 +308,15 @@ static int __init cs553x_init_one(int cs, int mmio, unsigned long adr)
if (err)
goto out_free;
cs553x_mtd[cs] = new_mtd;
controllers[cs] = controller;
goto out;
out_free:
kfree(new_mtd->name);
out_ior:
iounmap(this->legacy.IO_ADDR_R);
iounmap(controller->mmio);
out_mtd:
kfree(this);
kfree(controller);
out:
return err;
}
@ -295,9 +371,10 @@ static int __init cs553x_init(void)
/* Register all devices together here. This means we can easily hack it to
do mtdconcat etc. if we want to. */
for (i = 0; i < NR_CS553X_CONTROLLERS; i++) {
if (cs553x_mtd[i]) {
if (controllers[i]) {
/* If any devices registered, return success. Else the last error. */
mtd_device_register(cs553x_mtd[i], NULL, 0);
mtd_device_register(nand_to_mtd(&controllers[i]->chip),
NULL, 0);
err = 0;
}
}
@ -312,26 +389,26 @@ static void __exit cs553x_cleanup(void)
int i;
for (i = 0; i < NR_CS553X_CONTROLLERS; i++) {
struct mtd_info *mtd = cs553x_mtd[i];
struct nand_chip *this;
void __iomem *mmio_base;
struct cs553x_nand_controller *controller = controllers[i];
struct nand_chip *this = &controller->chip;
struct mtd_info *mtd = nand_to_mtd(this);
int ret;
if (!mtd)
continue;
this = mtd_to_nand(mtd);
mmio_base = this->legacy.IO_ADDR_R;
/* Release resources, unregister device */
nand_release(this);
ret = mtd_device_unregister(mtd);
WARN_ON(ret);
nand_cleanup(this);
kfree(mtd->name);
cs553x_mtd[i] = NULL;
controllers[i] = NULL;
/* unmap physical address */
iounmap(mmio_base);
iounmap(controller->mmio);
/* Free the MTD device structure */
kfree(this);
kfree(controller);
}
}


@ -14,7 +14,7 @@
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/partitions.h>
#include <linux/slab.h>
@ -38,6 +38,7 @@
* outputs in a "wire-AND" configuration, with no per-chip signals.
*/
struct davinci_nand_info {
struct nand_controller controller;
struct nand_chip chip;
struct platform_device *pdev;
@ -80,46 +81,6 @@ static inline void davinci_nand_writel(struct davinci_nand_info *info,
/*----------------------------------------------------------------------*/
/*
* Access to hardware control lines: ALE, CLE, secondary chipselect.
*/
static void nand_davinci_hwcontrol(struct nand_chip *nand, int cmd,
unsigned int ctrl)
{
struct davinci_nand_info *info = to_davinci_nand(nand_to_mtd(nand));
void __iomem *addr = info->current_cs;
/* Did the control lines change? */
if (ctrl & NAND_CTRL_CHANGE) {
if ((ctrl & NAND_CTRL_CLE) == NAND_CTRL_CLE)
addr += info->mask_cle;
else if ((ctrl & NAND_CTRL_ALE) == NAND_CTRL_ALE)
addr += info->mask_ale;
nand->legacy.IO_ADDR_W = addr;
}
if (cmd != NAND_CMD_NONE)
iowrite8(cmd, nand->legacy.IO_ADDR_W);
}
static void nand_davinci_select_chip(struct nand_chip *nand, int chip)
{
struct davinci_nand_info *info = to_davinci_nand(nand_to_mtd(nand));
info->current_cs = info->vaddr;
/* maybe kick in a second chipselect */
if (chip > 0)
info->current_cs += info->mask_chipsel;
info->chip.legacy.IO_ADDR_W = info->current_cs;
info->chip.legacy.IO_ADDR_R = info->chip.legacy.IO_ADDR_W;
}
/*----------------------------------------------------------------------*/
/*
* 1-bit hardware ECC ... context maintained for each core chipselect
*/
@ -410,48 +371,75 @@ correct:
return corrected;
}
/*----------------------------------------------------------------------*/
/*
* NOTE: NAND boot requires ALE == EM_A[1], CLE == EM_A[2], so that's
* how these chips are normally wired. This translates to both 8 and 16
* bit busses using ALE == BIT(3) in byte addresses, and CLE == BIT(4).
/**
* nand_read_page_hwecc_oob_first - hw ecc, read oob first
* @chip: nand chip info structure
* @buf: buffer to store read data
* @oob_required: caller requires OOB data read to chip->oob_poi
* @page: page number to read
*
* For now we assume that configuration, or any other one which ignores
* the two LSBs for NAND access ... so we can issue 32-bit reads/writes
* and have that transparently morphed into multiple NAND operations.
* Hardware ECC for large page chips, which requires the OOB to be read first.
* For this ECC mode, the write_page method is re-used from ECC_HW. These
* methods read/write ECC from the OOB area, unlike the ECC_HW_SYNDROME
* support with multiple ECC steps, which follows the "infix ECC" scheme and
* reads/writes ECC from the data area, overwriting the NAND manufacturer's
* bad block markings.
*/
static void nand_davinci_read_buf(struct nand_chip *chip, uint8_t *buf,
int len)
static int nand_davinci_read_page_hwecc_oob_first(struct nand_chip *chip,
uint8_t *buf,
int oob_required, int page)
{
if ((0x03 & ((uintptr_t)buf)) == 0 && (0x03 & len) == 0)
ioread32_rep(chip->legacy.IO_ADDR_R, buf, len >> 2);
else if ((0x01 & ((uintptr_t)buf)) == 0 && (0x01 & len) == 0)
ioread16_rep(chip->legacy.IO_ADDR_R, buf, len >> 1);
else
ioread8_rep(chip->legacy.IO_ADDR_R, buf, len);
}
struct mtd_info *mtd = nand_to_mtd(chip);
int i, eccsize = chip->ecc.size, ret;
int eccbytes = chip->ecc.bytes;
int eccsteps = chip->ecc.steps;
uint8_t *p = buf;
uint8_t *ecc_code = chip->ecc.code_buf;
uint8_t *ecc_calc = chip->ecc.calc_buf;
unsigned int max_bitflips = 0;
static void nand_davinci_write_buf(struct nand_chip *chip, const uint8_t *buf,
int len)
{
if ((0x03 & ((uintptr_t)buf)) == 0 && (0x03 & len) == 0)
iowrite32_rep(chip->legacy.IO_ADDR_R, buf, len >> 2);
else if ((0x01 & ((uintptr_t)buf)) == 0 && (0x01 & len) == 0)
iowrite16_rep(chip->legacy.IO_ADDR_R, buf, len >> 1);
else
iowrite8_rep(chip->legacy.IO_ADDR_R, buf, len);
}
/* Read the OOB area first */
ret = nand_read_oob_op(chip, page, 0, chip->oob_poi, mtd->oobsize);
if (ret)
return ret;
/*
* Check hardware register for wait status. Returns 1 if device is ready,
* 0 if it is still busy.
*/
static int nand_davinci_dev_ready(struct nand_chip *chip)
{
struct davinci_nand_info *info = to_davinci_nand(nand_to_mtd(chip));
ret = nand_read_page_op(chip, page, 0, NULL, 0);
if (ret)
return ret;
return davinci_nand_readl(info, NANDFSR_OFFSET) & BIT(0);
ret = mtd_ooblayout_get_eccbytes(mtd, ecc_code, chip->oob_poi, 0,
chip->ecc.total);
if (ret)
return ret;
for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) {
int stat;
chip->ecc.hwctl(chip, NAND_ECC_READ);
ret = nand_read_data_op(chip, p, eccsize, false, false);
if (ret)
return ret;
chip->ecc.calculate(chip, p, &ecc_calc[i]);
stat = chip->ecc.correct(chip, p, &ecc_code[i], NULL);
if (stat == -EBADMSG &&
(chip->ecc.options & NAND_ECC_GENERIC_ERASED_CHECK)) {
/* check for empty pages with bitflips */
stat = nand_check_erased_ecc_chunk(p, eccsize,
&ecc_code[i],
eccbytes, NULL, 0,
chip->ecc.strength);
}
if (stat < 0) {
mtd->ecc_stats.failed++;
} else {
mtd->ecc_stats.corrected += stat;
max_bitflips = max_t(unsigned int, max_bitflips, stat);
}
}
return max_bitflips;
}
/*----------------------------------------------------------------------*/
@ -613,6 +601,13 @@ static int davinci_nand_attach_chip(struct nand_chip *chip)
break;
case NAND_ECC_HW:
if (pdata->ecc_bits == 4) {
int chunks = mtd->writesize / 512;
if (!chunks || mtd->oobsize < 16) {
dev_dbg(&info->pdev->dev, "too small\n");
return -EINVAL;
}
/*
* No sanity checks: CPUs must support this,
* and the chips may not use NAND_BUSWIDTH_16.
@ -635,6 +630,26 @@ static int davinci_nand_attach_chip(struct nand_chip *chip)
info->chip.ecc.bytes = 10;
info->chip.ecc.options = NAND_ECC_GENERIC_ERASED_CHECK;
info->chip.ecc.algo = NAND_ECC_BCH;
/*
* Update ECC layout if needed ... for 1-bit HW ECC, the
* default is OK, but it allocates 6 bytes when only 3
* are needed (for each 512 bytes). For 4-bit HW ECC,
* the default is not usable: 10 bytes needed, not 6.
*
* For small page chips, preserve the manufacturer's
* badblock marking data ... and make sure a flash BBT
* table marker fits in the free bytes.
*/
if (chunks == 1) {
mtd_set_ooblayout(mtd,
&hwecc4_small_ooblayout_ops);
} else if (chunks == 4 || chunks == 8) {
mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops);
info->chip.ecc.read_page = nand_davinci_read_page_hwecc_oob_first;
} else {
return -EIO;
}
} else {
/* 1bit ecc hamming */
info->chip.ecc.calculate = nand_davinci_calculate_1bit;
@ -650,39 +665,111 @@ static int davinci_nand_attach_chip(struct nand_chip *chip)
return -EINVAL;
}
/*
* Update ECC layout if needed ... for 1-bit HW ECC, the default
* is OK, but it allocates 6 bytes when only 3 are needed (for
* each 512 bytes). For the 4-bit HW ECC, that default is not
* usable: 10 bytes are needed, not 6.
*/
if (pdata->ecc_bits == 4) {
int chunks = mtd->writesize / 512;
return ret;
}
if (!chunks || mtd->oobsize < 16) {
dev_dbg(&info->pdev->dev, "too small\n");
return -EINVAL;
}
static void nand_davinci_data_in(struct davinci_nand_info *info, void *buf,
unsigned int len, bool force_8bit)
{
u32 alignment = ((uintptr_t)buf | len) & 3;
/* For small page chips, preserve the manufacturer's
* badblock marking data ... and make sure a flash BBT
* table marker fits in the free bytes.
*/
if (chunks == 1) {
mtd_set_ooblayout(mtd, &hwecc4_small_ooblayout_ops);
} else if (chunks == 4 || chunks == 8) {
mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops);
info->chip.ecc.mode = NAND_ECC_HW_OOB_FIRST;
} else {
return -EIO;
if (force_8bit || (alignment & 1))
ioread8_rep(info->current_cs, buf, len);
else if (alignment & 3)
ioread16_rep(info->current_cs, buf, len >> 1);
else
ioread32_rep(info->current_cs, buf, len >> 2);
}
static void nand_davinci_data_out(struct davinci_nand_info *info,
const void *buf, unsigned int len,
bool force_8bit)
{
u32 alignment = ((uintptr_t)buf | len) & 3;
if (force_8bit || (alignment & 1))
iowrite8_rep(info->current_cs, buf, len);
else if (alignment & 3)
iowrite16_rep(info->current_cs, buf, len >> 1);
else
iowrite32_rep(info->current_cs, buf, len >> 2);
}
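
The alignment test above ORs the buffer address and the length so a single value captures both constraints; a quick worked example (addresses are illustrative):

	/*
	 * buf = 0x1002, len = 8:  (0x1002 | 8) & 3 = 2 -> not odd, so the
	 *   8-bit branch is skipped, but bit 1 is set -> 16-bit accesses.
	 * buf = 0x1000, len = 64: (0x1000 | 64) & 3 = 0 -> 32-bit accesses.
	 * force_8bit always takes the byte path regardless of alignment.
	 */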
static int davinci_nand_exec_instr(struct davinci_nand_info *info,
const struct nand_op_instr *instr)
{
unsigned int i, timeout_us;
u32 status;
int ret;
switch (instr->type) {
case NAND_OP_CMD_INSTR:
iowrite8(instr->ctx.cmd.opcode,
info->current_cs + info->mask_cle);
break;
case NAND_OP_ADDR_INSTR:
for (i = 0; i < instr->ctx.addr.naddrs; i++) {
iowrite8(instr->ctx.addr.addrs[i],
info->current_cs + info->mask_ale);
}
break;
case NAND_OP_DATA_IN_INSTR:
nand_davinci_data_in(info, instr->ctx.data.buf.in,
instr->ctx.data.len,
instr->ctx.data.force_8bit);
break;
case NAND_OP_DATA_OUT_INSTR:
nand_davinci_data_out(info, instr->ctx.data.buf.out,
instr->ctx.data.len,
instr->ctx.data.force_8bit);
break;
case NAND_OP_WAITRDY_INSTR:
timeout_us = instr->ctx.waitrdy.timeout_ms * 1000;
ret = readl_relaxed_poll_timeout(info->base + NANDFSR_OFFSET,
status, status & BIT(0), 100,
timeout_us);
if (ret)
return ret;
break;
}
return ret;
if (instr->delay_ns)
ndelay(instr->delay_ns);
return 0;
}
static int davinci_nand_exec_op(struct nand_chip *chip,
const struct nand_operation *op,
bool check_only)
{
struct davinci_nand_info *info = to_davinci_nand(nand_to_mtd(chip));
unsigned int i;
if (check_only)
return 0;
info->current_cs = info->vaddr + (op->cs * info->mask_chipsel);
for (i = 0; i < op->ninstrs; i++) {
int ret;
ret = davinci_nand_exec_instr(info, &op->instrs[i]);
if (ret)
return ret;
}
return 0;
}
static const struct nand_controller_ops davinci_nand_controller_ops = {
.attach_chip = davinci_nand_attach_chip,
.exec_op = davinci_nand_exec_op,
};
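
With ->exec_op() the target chip select arrives in the operation itself (op->cs), so the old select_chip() hook is no longer needed; assuming, for illustration, a platform where mask_chipsel is BIT(14):

	/* op->cs == 0: current_cs = vaddr             (first chip-select window)  */
	/* op->cs == 1: current_cs = vaddr + BIT(14)   (second chip-select window) */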
static int nand_davinci_probe(struct platform_device *pdev)
@ -746,11 +833,6 @@ static int nand_davinci_probe(struct platform_device *pdev)
mtd->dev.parent = &pdev->dev;
nand_set_flash_node(&info->chip, pdev->dev.of_node);
info->chip.legacy.IO_ADDR_R = vaddr;
info->chip.legacy.IO_ADDR_W = vaddr;
info->chip.legacy.chip_delay = 0;
info->chip.legacy.select_chip = nand_davinci_select_chip;
/* options such as NAND_BBT_USE_FLASH */
info->chip.bbt_options = pdata->bbt_options;
/* options such as 16-bit widths */
@ -767,14 +849,6 @@ static int nand_davinci_probe(struct platform_device *pdev)
info->mask_ale = pdata->mask_ale ? : MASK_ALE;
info->mask_cle = pdata->mask_cle ? : MASK_CLE;
/* Set address of hardware control function */
info->chip.legacy.cmd_ctrl = nand_davinci_hwcontrol;
info->chip.legacy.dev_ready = nand_davinci_dev_ready;
/* Speed up buffer I/O */
info->chip.legacy.read_buf = nand_davinci_read_buf;
info->chip.legacy.write_buf = nand_davinci_write_buf;
/* Use board-specific ECC config */
info->chip.ecc.mode = pdata->ecc_mode;
@ -788,7 +862,9 @@ static int nand_davinci_probe(struct platform_device *pdev)
spin_unlock_irq(&davinci_nand_lock);
/* Scan to find existence of the device(s) */
info->chip.legacy.dummy_controller.ops = &davinci_nand_controller_ops;
nand_controller_init(&info->controller);
info->controller.ops = &davinci_nand_controller_ops;
info->chip.controller = &info->controller;
ret = nand_scan(&info->chip, pdata->mask_chipsel ? 2 : 1);
if (ret < 0) {
dev_dbg(&pdev->dev, "no NAND chip(s) found\n");
@ -817,13 +893,17 @@ err_cleanup_nand:
static int nand_davinci_remove(struct platform_device *pdev)
{
struct davinci_nand_info *info = platform_get_drvdata(pdev);
struct nand_chip *chip = &info->chip;
int ret;
spin_lock_irq(&davinci_nand_lock);
if (info->chip.ecc.mode == NAND_ECC_HW_SYNDROME)
ecc4_busy = false;
spin_unlock_irq(&davinci_nand_lock);
nand_release(&info->chip);
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
return 0;
}


@ -764,6 +764,7 @@ static int denali_write_page(struct nand_chip *chip, const u8 *buf,
static int denali_setup_data_interface(struct nand_chip *chip, int chipnr,
const struct nand_data_interface *conf)
{
static const unsigned int data_setup_on_host = 10000;
struct denali_controller *denali = to_denali_controller(chip);
struct denali_chip_sel *sel;
const struct nand_sdr_timings *timings;
@ -796,15 +797,6 @@ static int denali_setup_data_interface(struct nand_chip *chip, int chipnr,
sel = &to_denali_chip(chip)->sels[chipnr];
/* tREA -> ACC_CLKS */
acc_clks = DIV_ROUND_UP(timings->tREA_max, t_x);
acc_clks = min_t(int, acc_clks, ACC_CLKS__VALUE);
tmp = ioread32(denali->reg + ACC_CLKS);
tmp &= ~ACC_CLKS__VALUE;
tmp |= FIELD_PREP(ACC_CLKS__VALUE, acc_clks);
sel->acc_clks = tmp;
/* tRWH -> RE_2_WE */
re_2_we = DIV_ROUND_UP(timings->tRHW_min, t_x);
re_2_we = min_t(int, re_2_we, RE_2_WE__VALUE);
@ -862,14 +854,45 @@ static int denali_setup_data_interface(struct nand_chip *chip, int chipnr,
tmp |= FIELD_PREP(RDWR_EN_HI_CNT__VALUE, rdwr_en_hi);
sel->rdwr_en_hi_cnt = tmp;
/* tRP, tWP -> RDWR_EN_LO_CNT */
/*
* tREA -> ACC_CLKS
* tRP, tWP, tRHOH, tRC, tWC -> RDWR_EN_LO_CNT
*/
/*
* Determine the minimum acc_clks needed to meet the setup timing when
* capturing the incoming data.
*
* The delay on the chip side is well-defined as tREA, but we need to
* take additional delay into account. This covers a certain degree of
* uncertainty, such as signal propagation delays on the PCB and in the
* SoC, load capacitance of the I/O pins, etc.
*/
acc_clks = DIV_ROUND_UP(timings->tREA_max + data_setup_on_host, t_x);
/* Determine the minimum of rdwr_en_lo_cnt from RE#/WE# pulse width */
rdwr_en_lo = DIV_ROUND_UP(max(timings->tRP_min, timings->tWP_min), t_x);
/* Extend rdwr_en_lo to meet the data hold timing */
rdwr_en_lo = max_t(int, rdwr_en_lo,
acc_clks - timings->tRHOH_min / t_x);
/* Extend rdwr_en_lo to meet the requirement for RE#/WE# cycle time */
rdwr_en_lo_hi = DIV_ROUND_UP(max(timings->tRC_min, timings->tWC_min),
t_x);
rdwr_en_lo_hi = max_t(int, rdwr_en_lo_hi, mult_x);
rdwr_en_lo = max(rdwr_en_lo, rdwr_en_lo_hi - rdwr_en_hi);
rdwr_en_lo = min_t(int, rdwr_en_lo, RDWR_EN_LO_CNT__VALUE);
/* Center the data latch timing for extra safety */
acc_clks = (acc_clks + rdwr_en_lo +
DIV_ROUND_UP(timings->tRHOH_min, t_x)) / 2;
acc_clks = min_t(int, acc_clks, ACC_CLKS__VALUE);
tmp = ioread32(denali->reg + ACC_CLKS);
tmp &= ~ACC_CLKS__VALUE;
tmp |= FIELD_PREP(ACC_CLKS__VALUE, acc_clks);
sel->acc_clks = tmp;
tmp = ioread32(denali->reg + RDWR_EN_LO_CNT);
tmp &= ~RDWR_EN_LO_CNT__VALUE;
tmp |= FIELD_PREP(RDWR_EN_LO_CNT__VALUE, rdwr_en_lo);
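	/*
	 * Worked example with illustrative numbers, assuming the driver keeps
	 * timings in picoseconds as struct nand_sdr_timings does: with a
	 * 100 MHz interface clock, t_x = 10000 ps, and tREA_max = 20000 ps,
	 *
	 *   acc_clks = DIV_ROUND_UP(20000 + 10000, 10000) = 3 cycles
	 *
	 * instead of the 2 cycles the old tREA-only formula gave; the
	 * centering and clamping above then adjust the final register value.
	 */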
@ -1203,7 +1226,7 @@ int denali_chip_init(struct denali_controller *denali,
mtd->name = "denali-nand";
if (denali->dma_avail) {
chip->options |= NAND_USE_BOUNCE_BUFFER;
chip->options |= NAND_USES_DMA;
chip->buf_align = 16;
}
@ -1336,10 +1359,17 @@ EXPORT_SYMBOL(denali_init);
void denali_remove(struct denali_controller *denali)
{
struct denali_chip *dchip;
struct denali_chip *dchip, *tmp;
struct nand_chip *chip;
int ret;
list_for_each_entry(dchip, &denali->chips, node)
nand_release(&dchip->chip);
list_for_each_entry_safe(dchip, tmp, &denali->chips, node) {
chip = &dchip->chip;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
list_del(&dchip->node);
}
denali_disable_irq(denali);
}


@ -58,6 +58,7 @@ static unsigned long doc_locations[] __initdata = {
static struct mtd_info *doclist = NULL;
struct doc_priv {
struct nand_controller base;
void __iomem *virtadr;
unsigned long physadr;
u_char ChipID;
@ -69,6 +70,7 @@ struct doc_priv {
int mh1_page;
struct rs_control *rs_decoder;
struct mtd_info *nextdoc;
bool supports_32b_reads;
/* Handle the last stage of initialization (BBT scan, partitioning) */
int (*late_init)(struct mtd_info *mtd);
@ -84,10 +86,6 @@ static u_char empty_write_ecc[6] = { 0x4b, 0x00, 0xe2, 0x0e, 0x93, 0xf7 };
#define DoC_is_Millennium(doc) ((doc)->ChipID == DOC_ChipID_DocMil)
#define DoC_is_2000(doc) ((doc)->ChipID == DOC_ChipID_Doc2k)
static void doc200x_hwcontrol(struct nand_chip *this, int cmd,
unsigned int bitmask);
static void doc200x_select_chip(struct nand_chip *this, int chip);
static int debug = 0;
module_param(debug, int, 0);
@ -302,20 +300,6 @@ static void doc2000_write_byte(struct nand_chip *this, u_char datum)
WriteDOC(datum, docptr, 2k_CDSN_IO);
}
static u_char doc2000_read_byte(struct nand_chip *this)
{
struct doc_priv *doc = nand_get_controller_data(this);
void __iomem *docptr = doc->virtadr;
u_char ret;
ReadDOC(docptr, CDSNSlowIO);
DoC_Delay(doc, 2);
ret = ReadDOC(docptr, 2k_CDSN_IO);
if (debug)
printk("read_byte returns %02x\n", ret);
return ret;
}
static void doc2000_writebuf(struct nand_chip *this, const u_char *buf,
int len)
{
@ -337,33 +321,42 @@ static void doc2000_readbuf(struct nand_chip *this, u_char *buf, int len)
{
struct doc_priv *doc = nand_get_controller_data(this);
void __iomem *docptr = doc->virtadr;
u32 *buf32 = (u32 *)buf;
int i;
if (debug)
printk("readbuf of %d bytes: ", len);
for (i = 0; i < len; i++)
buf[i] = ReadDOC(docptr, 2k_CDSN_IO + i);
if (!doc->supports_32b_reads ||
((((unsigned long)buf) | len) & 3)) {
for (i = 0; i < len; i++)
buf[i] = ReadDOC(docptr, 2k_CDSN_IO + i);
} else {
for (i = 0; i < len / 4; i++)
buf32[i] = readl(docptr + DoC_2k_CDSN_IO + i);
}
}
static void doc2000_readbuf_dword(struct nand_chip *this, u_char *buf, int len)
/*
* We need our own readid() here because it's called before the NAND chip
* has been initialized, and calling nand_op_readid() would lead to a NULL
* pointer dereference when accessing the NAND timings.
*/
static void doc200x_readid(struct nand_chip *this, unsigned int cs, u8 *id)
{
struct doc_priv *doc = nand_get_controller_data(this);
void __iomem *docptr = doc->virtadr;
int i;
u8 addr = 0;
struct nand_op_instr instrs[] = {
NAND_OP_CMD(NAND_CMD_READID, 0),
NAND_OP_ADDR(1, &addr, 50),
NAND_OP_8BIT_DATA_IN(2, id, 0),
};
if (debug)
printk("readbuf_dword of %d bytes: ", len);
struct nand_operation op = NAND_OPERATION(cs, instrs);
if (unlikely((((unsigned long)buf) | len) & 3)) {
for (i = 0; i < len; i++) {
*(uint8_t *) (&buf[i]) = ReadDOC(docptr, 2k_CDSN_IO + i);
}
} else {
for (i = 0; i < len; i += 4) {
*(uint32_t *) (&buf[i]) = readl(docptr + DoC_2k_CDSN_IO + i);
}
}
if (!id)
op.ninstrs--;
this->controller->ops->exec_op(this, &op, false);
}
static uint16_t __init doc200x_ident_chip(struct mtd_info *mtd, int nr)
@ -371,20 +364,11 @@ static uint16_t __init doc200x_ident_chip(struct mtd_info *mtd, int nr)
struct nand_chip *this = mtd_to_nand(mtd);
struct doc_priv *doc = nand_get_controller_data(this);
uint16_t ret;
u8 id[2];
doc200x_select_chip(this, nr);
doc200x_hwcontrol(this, NAND_CMD_READID,
NAND_CTRL_CLE | NAND_CTRL_CHANGE);
doc200x_hwcontrol(this, 0, NAND_CTRL_ALE | NAND_CTRL_CHANGE);
doc200x_hwcontrol(this, NAND_CMD_NONE, NAND_NCE | NAND_CTRL_CHANGE);
doc200x_readid(this, nr, id);
/* We can't use dev_ready here, but at least we wait for the
* command to complete
*/
udelay(50);
ret = this->legacy.read_byte(this) << 8;
ret |= this->legacy.read_byte(this);
ret = ((u16)id[0] << 8) | id[1];
if (doc->ChipID == DOC_ChipID_Doc2k && try_dword && !nr) {
/* First chip probe. See if we get same results by 32-bit access */
@ -394,18 +378,12 @@ static uint16_t __init doc200x_ident_chip(struct mtd_info *mtd, int nr)
} ident;
void __iomem *docptr = doc->virtadr;
doc200x_hwcontrol(this, NAND_CMD_READID,
NAND_CTRL_CLE | NAND_CTRL_CHANGE);
doc200x_hwcontrol(this, 0, NAND_CTRL_ALE | NAND_CTRL_CHANGE);
doc200x_hwcontrol(this, NAND_CMD_NONE,
NAND_NCE | NAND_CTRL_CHANGE);
udelay(50);
doc200x_readid(this, nr, NULL);
ident.dword = readl(docptr + DoC_2k_CDSN_IO);
if (((ident.byte[0] << 8) | ident.byte[1]) == ret) {
pr_info("DiskOnChip 2000 responds to DWORD access\n");
this->legacy.read_buf = &doc2000_readbuf_dword;
doc->supports_32b_reads = true;
}
}
@ -434,20 +412,6 @@ static void __init doc2000_count_chips(struct mtd_info *mtd)
pr_debug("Detected %d chips per floor.\n", i);
}
static int doc200x_wait(struct nand_chip *this)
{
struct doc_priv *doc = nand_get_controller_data(this);
int status;
DoC_WaitReady(doc);
nand_status_op(this, NULL);
DoC_WaitReady(doc);
status = (int)this->legacy.read_byte(this);
return status;
}
static void doc2001_write_byte(struct nand_chip *this, u_char datum)
{
struct doc_priv *doc = nand_get_controller_data(this);
@ -458,19 +422,6 @@ static void doc2001_write_byte(struct nand_chip *this, u_char datum)
WriteDOC(datum, docptr, WritePipeTerm);
}
static u_char doc2001_read_byte(struct nand_chip *this)
{
struct doc_priv *doc = nand_get_controller_data(this);
void __iomem *docptr = doc->virtadr;
//ReadDOC(docptr, CDSNSlowIO);
/* 11.4.5 -- delay twice to allow extended length cycle */
DoC_Delay(doc, 2);
ReadDOC(docptr, ReadPipeInit);
//return ReadDOC(docptr, Mil_CDSN_IO);
return ReadDOC(docptr, LastDataRead);
}
static void doc2001_writebuf(struct nand_chip *this, const u_char *buf, int len)
{
struct doc_priv *doc = nand_get_controller_data(this);
@ -499,20 +450,6 @@ static void doc2001_readbuf(struct nand_chip *this, u_char *buf, int len)
buf[i] = ReadDOC(docptr, LastDataRead);
}
static u_char doc2001plus_read_byte(struct nand_chip *this)
{
struct doc_priv *doc = nand_get_controller_data(this);
void __iomem *docptr = doc->virtadr;
u_char ret;
ReadDOC(docptr, Mplus_ReadPipeInit);
ReadDOC(docptr, Mplus_ReadPipeInit);
ret = ReadDOC(docptr, Mplus_LastDataRead);
if (debug)
printk("read_byte returns %02x\n", ret);
return ret;
}
static void doc2001plus_writebuf(struct nand_chip *this, const u_char *buf, int len)
{
struct doc_priv *doc = nand_get_controller_data(this);
@ -550,9 +487,12 @@ static void doc2001plus_readbuf(struct nand_chip *this, u_char *buf, int len)
}
/* Terminate read pipeline */
buf[len - 2] = ReadDOC(docptr, Mplus_LastDataRead);
if (debug && i < 16)
printk("%02x ", buf[len - 2]);
if (len >= 2) {
buf[len - 2] = ReadDOC(docptr, Mplus_LastDataRead);
if (debug && i < 16)
printk("%02x ", buf[len - 2]);
}
buf[len - 1] = ReadDOC(docptr, Mplus_LastDataRead);
if (debug && i < 16)
printk("%02x ", buf[len - 1]);
@ -560,226 +500,163 @@ static void doc2001plus_readbuf(struct nand_chip *this, u_char *buf, int len)
printk("\n");
}
static void doc2001plus_select_chip(struct nand_chip *this, int chip)
static void doc200x_write_control(struct doc_priv *doc, u8 value)
{
WriteDOC(value, doc->virtadr, CDSNControl);
/* 11.4.3 -- 4 NOPs after CSDNControl write */
DoC_Delay(doc, 4);
}
static void doc200x_exec_instr(struct nand_chip *this,
const struct nand_op_instr *instr)
{
struct doc_priv *doc = nand_get_controller_data(this);
void __iomem *docptr = doc->virtadr;
int floor = 0;
unsigned int i;
if (debug)
printk("select chip (%d)\n", chip);
switch (instr->type) {
case NAND_OP_CMD_INSTR:
doc200x_write_control(doc, CDSN_CTRL_CE | CDSN_CTRL_CLE);
doc2000_write_byte(this, instr->ctx.cmd.opcode);
break;
if (chip == -1) {
/* Disable flash internally */
WriteDOC(0, docptr, Mplus_FlashSelect);
return;
case NAND_OP_ADDR_INSTR:
doc200x_write_control(doc, CDSN_CTRL_CE | CDSN_CTRL_ALE);
for (i = 0; i < instr->ctx.addr.naddrs; i++) {
u8 addr = instr->ctx.addr.addrs[i];
if (DoC_is_2000(doc))
doc2000_write_byte(this, addr);
else
doc2001_write_byte(this, addr);
}
break;
case NAND_OP_DATA_IN_INSTR:
doc200x_write_control(doc, CDSN_CTRL_CE);
if (DoC_is_2000(doc))
doc2000_readbuf(this, instr->ctx.data.buf.in,
instr->ctx.data.len);
else
doc2001_readbuf(this, instr->ctx.data.buf.in,
instr->ctx.data.len);
break;
case NAND_OP_DATA_OUT_INSTR:
doc200x_write_control(doc, CDSN_CTRL_CE);
if (DoC_is_2000(doc))
doc2000_writebuf(this, instr->ctx.data.buf.out,
instr->ctx.data.len);
else
doc2001_writebuf(this, instr->ctx.data.buf.out,
instr->ctx.data.len);
break;
case NAND_OP_WAITRDY_INSTR:
DoC_WaitReady(doc);
break;
}
floor = chip / doc->chips_per_floor;
chip -= (floor * doc->chips_per_floor);
if (instr->delay_ns)
ndelay(instr->delay_ns);
}
static int doc200x_exec_op(struct nand_chip *this,
const struct nand_operation *op,
bool check_only)
{
struct doc_priv *doc = nand_get_controller_data(this);
unsigned int i;
if (check_only)
return true;
doc->curchip = op->cs % doc->chips_per_floor;
doc->curfloor = op->cs / doc->chips_per_floor;
WriteDOC(doc->curfloor, doc->virtadr, FloorSelect);
WriteDOC(doc->curchip, doc->virtadr, CDSNDeviceSelect);
/* Assert CE pin */
doc200x_write_control(doc, CDSN_CTRL_CE);
for (i = 0; i < op->ninstrs; i++)
doc200x_exec_instr(this, &op->instrs[i]);
/* De-assert CE pin */
doc200x_write_control(doc, 0);
return 0;
}
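
The chip select index carried by the operation is decomposed into floor and chip exactly as the old select_chip() path did; for example, with chips_per_floor = 4:

	/* op->cs == 6 -> curchip = 6 % 4 = 2, curfloor = 6 / 4 = 1 */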
static void doc2001plus_write_pipe_term(struct doc_priv *doc)
{
WriteDOC(0x00, doc->virtadr, Mplus_WritePipeTerm);
WriteDOC(0x00, doc->virtadr, Mplus_WritePipeTerm);
}
static void doc2001plus_exec_instr(struct nand_chip *this,
const struct nand_op_instr *instr)
{
struct doc_priv *doc = nand_get_controller_data(this);
unsigned int i;
switch (instr->type) {
case NAND_OP_CMD_INSTR:
WriteDOC(instr->ctx.cmd.opcode, doc->virtadr, Mplus_FlashCmd);
doc2001plus_write_pipe_term(doc);
break;
case NAND_OP_ADDR_INSTR:
for (i = 0; i < instr->ctx.addr.naddrs; i++) {
u8 addr = instr->ctx.addr.addrs[i];
WriteDOC(addr, doc->virtadr, Mplus_FlashAddress);
}
doc2001plus_write_pipe_term(doc);
/* deassert ALE */
WriteDOC(0, doc->virtadr, Mplus_FlashControl);
break;
case NAND_OP_DATA_IN_INSTR:
doc2001plus_readbuf(this, instr->ctx.data.buf.in,
instr->ctx.data.len);
break;
case NAND_OP_DATA_OUT_INSTR:
doc2001plus_writebuf(this, instr->ctx.data.buf.out,
instr->ctx.data.len);
doc2001plus_write_pipe_term(doc);
break;
case NAND_OP_WAITRDY_INSTR:
DoC_WaitReady(doc);
break;
}
if (instr->delay_ns)
ndelay(instr->delay_ns);
}
static int doc2001plus_exec_op(struct nand_chip *this,
const struct nand_operation *op,
bool check_only)
{
struct doc_priv *doc = nand_get_controller_data(this);
unsigned int i;
if (check_only)
return true;
doc->curchip = op->cs % doc->chips_per_floor;
doc->curfloor = op->cs / doc->chips_per_floor;
/* Assert ChipEnable and deassert WriteProtect */
WriteDOC((DOC_FLASH_CE), docptr, Mplus_FlashSelect);
nand_reset_op(this);
WriteDOC(DOC_FLASH_CE, doc->virtadr, Mplus_FlashSelect);
doc->curchip = chip;
doc->curfloor = floor;
}
for (i = 0; i < op->ninstrs; i++)
doc2001plus_exec_instr(this, &op->instrs[i]);
static void doc200x_select_chip(struct nand_chip *this, int chip)
{
struct doc_priv *doc = nand_get_controller_data(this);
void __iomem *docptr = doc->virtadr;
int floor = 0;
/* De-assert ChipEnable */
WriteDOC(0, doc->virtadr, Mplus_FlashSelect);
if (debug)
printk("select chip (%d)\n", chip);
if (chip == -1)
return;
floor = chip / doc->chips_per_floor;
chip -= (floor * doc->chips_per_floor);
/* 11.4.4 -- deassert CE before changing chip */
doc200x_hwcontrol(this, NAND_CMD_NONE, 0 | NAND_CTRL_CHANGE);
WriteDOC(floor, docptr, FloorSelect);
WriteDOC(chip, docptr, CDSNDeviceSelect);
doc200x_hwcontrol(this, NAND_CMD_NONE, NAND_NCE | NAND_CTRL_CHANGE);
doc->curchip = chip;
doc->curfloor = floor;
}
#define CDSN_CTRL_MSK (CDSN_CTRL_CE | CDSN_CTRL_CLE | CDSN_CTRL_ALE)
static void doc200x_hwcontrol(struct nand_chip *this, int cmd,
unsigned int ctrl)
{
struct doc_priv *doc = nand_get_controller_data(this);
void __iomem *docptr = doc->virtadr;
if (ctrl & NAND_CTRL_CHANGE) {
doc->CDSNControl &= ~CDSN_CTRL_MSK;
doc->CDSNControl |= ctrl & CDSN_CTRL_MSK;
if (debug)
printk("hwcontrol(%d): %02x\n", cmd, doc->CDSNControl);
WriteDOC(doc->CDSNControl, docptr, CDSNControl);
/* 11.4.3 -- 4 NOPs after CSDNControl write */
DoC_Delay(doc, 4);
}
if (cmd != NAND_CMD_NONE) {
if (DoC_is_2000(doc))
doc2000_write_byte(this, cmd);
else
doc2001_write_byte(this, cmd);
}
}
static void doc2001plus_command(struct nand_chip *this, unsigned command,
int column, int page_addr)
{
struct mtd_info *mtd = nand_to_mtd(this);
struct doc_priv *doc = nand_get_controller_data(this);
void __iomem *docptr = doc->virtadr;
/*
* Must terminate write pipeline before sending any commands
* to the device.
*/
if (command == NAND_CMD_PAGEPROG) {
WriteDOC(0x00, docptr, Mplus_WritePipeTerm);
WriteDOC(0x00, docptr, Mplus_WritePipeTerm);
}
/*
* Write out the command to the device.
*/
if (command == NAND_CMD_SEQIN) {
int readcmd;
if (column >= mtd->writesize) {
/* OOB area */
column -= mtd->writesize;
readcmd = NAND_CMD_READOOB;
} else if (column < 256) {
/* First 256 bytes --> READ0 */
readcmd = NAND_CMD_READ0;
} else {
column -= 256;
readcmd = NAND_CMD_READ1;
}
WriteDOC(readcmd, docptr, Mplus_FlashCmd);
}
WriteDOC(command, docptr, Mplus_FlashCmd);
WriteDOC(0, docptr, Mplus_WritePipeTerm);
WriteDOC(0, docptr, Mplus_WritePipeTerm);
if (column != -1 || page_addr != -1) {
/* Serially input address */
if (column != -1) {
/* Adjust columns for 16 bit buswidth */
if (this->options & NAND_BUSWIDTH_16 &&
!nand_opcode_8bits(command))
column >>= 1;
WriteDOC(column, docptr, Mplus_FlashAddress);
}
if (page_addr != -1) {
WriteDOC((unsigned char)(page_addr & 0xff), docptr, Mplus_FlashAddress);
WriteDOC((unsigned char)((page_addr >> 8) & 0xff), docptr, Mplus_FlashAddress);
if (this->options & NAND_ROW_ADDR_3) {
WriteDOC((unsigned char)((page_addr >> 16) & 0x0f), docptr, Mplus_FlashAddress);
printk("high density\n");
}
}
WriteDOC(0, docptr, Mplus_WritePipeTerm);
WriteDOC(0, docptr, Mplus_WritePipeTerm);
/* deassert ALE */
if (command == NAND_CMD_READ0 || command == NAND_CMD_READ1 ||
command == NAND_CMD_READOOB || command == NAND_CMD_READID)
WriteDOC(0, docptr, Mplus_FlashControl);
}
/*
* program and erase have their own busy handlers
* status and sequential in needs no delay
*/
switch (command) {
case NAND_CMD_PAGEPROG:
case NAND_CMD_ERASE1:
case NAND_CMD_ERASE2:
case NAND_CMD_SEQIN:
case NAND_CMD_STATUS:
return;
case NAND_CMD_RESET:
if (this->legacy.dev_ready)
break;
udelay(this->legacy.chip_delay);
WriteDOC(NAND_CMD_STATUS, docptr, Mplus_FlashCmd);
WriteDOC(0, docptr, Mplus_WritePipeTerm);
WriteDOC(0, docptr, Mplus_WritePipeTerm);
while (!(this->legacy.read_byte(this) & 0x40)) ;
return;
/* This applies to read commands */
default:
/*
* If we don't have access to the busy pin, we apply the given
* command delay
*/
if (!this->legacy.dev_ready) {
udelay(this->legacy.chip_delay);
return;
}
}
/* Apply this short delay always to ensure that we do wait tWB in
* any case on any machine. */
ndelay(100);
/* wait until command is processed */
while (!this->legacy.dev_ready(this)) ;
}
static int doc200x_dev_ready(struct nand_chip *this)
{
struct doc_priv *doc = nand_get_controller_data(this);
void __iomem *docptr = doc->virtadr;
if (DoC_is_MillenniumPlus(doc)) {
/* 11.4.2 -- must NOP four times before checking FR/B# */
DoC_Delay(doc, 4);
if ((ReadDOC(docptr, Mplus_FlashControl) & CDSN_CTRL_FR_B_MASK) != CDSN_CTRL_FR_B_MASK) {
if (debug)
printk("not ready\n");
return 0;
}
if (debug)
printk("was ready\n");
return 1;
} else {
/* 11.4.2 -- must NOP four times before checking FR/B# */
DoC_Delay(doc, 4);
if (!(ReadDOC(docptr, CDSNControl) & CDSN_CTRL_FR_B)) {
if (debug)
printk("not ready\n");
return 0;
}
/* 11.4.2 -- Must NOP twice if it's ready */
DoC_Delay(doc, 2);
if (debug)
printk("was ready\n");
return 1;
}
}
static int doc200x_block_bad(struct nand_chip *this, loff_t ofs)
{
/* This is our last resort if we couldn't find or create a BBT. Just
pretend all blocks are good. */
return 0;
}
@ -1344,9 +1221,6 @@ static inline int __init doc2000_init(struct mtd_info *mtd)
struct nand_chip *this = mtd_to_nand(mtd);
struct doc_priv *doc = nand_get_controller_data(this);
this->legacy.read_byte = doc2000_read_byte;
this->legacy.write_buf = doc2000_writebuf;
this->legacy.read_buf = doc2000_readbuf;
doc->late_init = nftl_scan_bbt;
doc->CDSNControl = CDSN_CTRL_FLASH_IO | CDSN_CTRL_ECC_IO;
@ -1360,10 +1234,6 @@ static inline int __init doc2001_init(struct mtd_info *mtd)
struct nand_chip *this = mtd_to_nand(mtd);
struct doc_priv *doc = nand_get_controller_data(this);
this->legacy.read_byte = doc2001_read_byte;
this->legacy.write_buf = doc2001_writebuf;
this->legacy.read_buf = doc2001_readbuf;
ReadDOC(doc->virtadr, ChipID);
ReadDOC(doc->virtadr, ChipID);
ReadDOC(doc->virtadr, ChipID);
@ -1390,13 +1260,7 @@ static inline int __init doc2001plus_init(struct mtd_info *mtd)
struct nand_chip *this = mtd_to_nand(mtd);
struct doc_priv *doc = nand_get_controller_data(this);
this->legacy.read_byte = doc2001plus_read_byte;
this->legacy.write_buf = doc2001plus_writebuf;
this->legacy.read_buf = doc2001plus_readbuf;
doc->late_init = inftl_scan_bbt;
this->legacy.cmd_ctrl = NULL;
this->legacy.select_chip = doc2001plus_select_chip;
this->legacy.cmdfunc = doc2001plus_command;
this->ecc.hwctl = doc2001plus_enable_hwecc;
doc->chips_per_floor = 1;
@ -1405,6 +1269,14 @@ static inline int __init doc2001plus_init(struct mtd_info *mtd)
return 1;
}
static const struct nand_controller_ops doc200x_ops = {
.exec_op = doc200x_exec_op,
};
static const struct nand_controller_ops doc2001plus_ops = {
.exec_op = doc2001plus_exec_op,
};
static int __init doc_probe(unsigned long physadr)
{
struct nand_chip *nand = NULL;
@ -1548,7 +1420,6 @@ static int __init doc_probe(unsigned long physadr)
goto fail;
}
/*
* Allocate a RS codec instance
*
@ -1566,6 +1437,12 @@ static int __init doc_probe(unsigned long physadr)
goto fail;
}
nand_controller_init(&doc->base);
if (ChipID == DOC_ChipID_DocMilPlus16)
doc->base.ops = &doc2001plus_ops;
else
doc->base.ops = &doc200x_ops;
mtd = nand_to_mtd(nand);
nand->bbt_td = (struct nand_bbt_descr *) (doc + 1);
nand->bbt_md = nand->bbt_td + 1;
@ -1573,12 +1450,8 @@ static int __init doc_probe(unsigned long physadr)
mtd->owner = THIS_MODULE;
mtd_set_ooblayout(mtd, &doc200x_ooblayout_ops);
nand->controller = &doc->base;
nand_set_controller_data(nand, doc);
nand->legacy.select_chip = doc200x_select_chip;
nand->legacy.cmd_ctrl = doc200x_hwcontrol;
nand->legacy.dev_ready = doc200x_dev_ready;
nand->legacy.waitfunc = doc200x_wait;
nand->legacy.block_bad = doc200x_block_bad;
nand->ecc.hwctl = doc200x_enable_hwecc;
nand->ecc.calculate = doc200x_calculate_ecc;
nand->ecc.correct = doc200x_correct_data;
@ -1590,7 +1463,7 @@ static int __init doc_probe(unsigned long physadr)
nand->ecc.options = NAND_ECC_GENERIC_ERASED_CHECK;
nand->bbt_options = NAND_BBT_USE_FLASH;
/* Skip the automatic BBT scan so we can run it manually */
nand->options |= NAND_SKIP_BBTSCAN;
nand->options |= NAND_SKIP_BBTSCAN | NAND_NO_BBM_QUIRK;
doc->physadr = physadr;
doc->virtadr = virtadr;
@ -1609,13 +1482,10 @@ static int __init doc_probe(unsigned long physadr)
numchips = doc2001_init(mtd);
if ((ret = nand_scan(nand, numchips)) || (ret = doc->late_init(mtd))) {
/* DBB note: i believe nand_release is necessary here, as
/* DBB note: i believe nand_cleanup is necessary here, as
buffers may have been allocated in nand_base. Check with
Thomas. FIX ME! */
/* nand_release will call mtd_device_unregister, but we
haven't yet added it. This is handled without incident by
mtd_device_unregister, as far as I can tell. */
nand_release(nand);
nand_cleanup(nand);
goto fail;
}
@ -1644,13 +1514,16 @@ static void release_nanddoc(void)
struct mtd_info *mtd, *nextmtd;
struct nand_chip *nand;
struct doc_priv *doc;
int ret;
for (mtd = doclist; mtd; mtd = nextmtd) {
nand = mtd_to_nand(mtd);
doc = nand_get_controller_data(nand);
nextmtd = doc->nextdoc;
nand_release(nand);
ret = mtd_device_unregister(mtd);
WARN_ON(ret);
nand_cleanup(nand);
iounmap(doc->virtadr);
release_mem_region(doc->physadr, DOC_IOREMAP_LEN);
free_rs(doc->rs_decoder);


@ -956,8 +956,13 @@ static int fsl_elbc_nand_remove(struct platform_device *pdev)
{
struct fsl_elbc_fcm_ctrl *elbc_fcm_ctrl = fsl_lbc_ctrl_dev->nand;
struct fsl_elbc_mtd *priv = dev_get_drvdata(&pdev->dev);
struct nand_chip *chip = &priv->chip;
int ret;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
nand_release(&priv->chip);
fsl_elbc_chip_remove(priv);
mutex_lock(&fsl_elbc_nand_mutex);


@ -1093,8 +1093,13 @@ err:
static int fsl_ifc_nand_remove(struct platform_device *dev)
{
struct fsl_ifc_mtd *priv = dev_get_drvdata(&dev->dev);
struct nand_chip *chip = &priv->chip;
int ret;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
nand_release(&priv->chip);
fsl_ifc_chip_remove(priv);
mutex_lock(&fsl_ifc_nand_mutex);


@ -317,10 +317,13 @@ err1:
static int fun_remove(struct platform_device *ofdev)
{
struct fsl_upm_nand *fun = dev_get_drvdata(&ofdev->dev);
struct mtd_info *mtd = nand_to_mtd(&fun->chip);
int i;
struct nand_chip *chip = &fun->chip;
struct mtd_info *mtd = nand_to_mtd(chip);
int ret, i;
nand_release(&fun->chip);
ret = mtd_device_unregister(mtd);
WARN_ON(ret);
nand_cleanup(chip);
kfree(mtd->name);
for (i = 0; i < fun->mchip_count; i++) {


@ -608,6 +608,9 @@ static int fsmc_exec_op(struct nand_chip *chip, const struct nand_operation *op,
unsigned int op_id;
int i;
if (check_only)
return 0;
pr_debug("Executing operation [%d instructions]:\n", op->ninstrs);
for (op_id = 0; op_id < op->ninstrs; op_id++) {
@ -691,7 +694,7 @@ static int fsmc_read_page_hwecc(struct nand_chip *chip, u8 *buf,
for (i = 0, s = 0; s < eccsteps; s++, i += eccbytes, p += eccsize) {
nand_read_page_op(chip, page, s * eccsize, NULL, 0);
chip->ecc.hwctl(chip, NAND_ECC_READ);
ret = nand_read_data_op(chip, p, eccsize, false);
ret = nand_read_data_op(chip, p, eccsize, false, false);
if (ret)
return ret;
@ -809,11 +812,12 @@ static int fsmc_bch8_correct_data(struct nand_chip *chip, u8 *dat,
i = 0;
while (num_err--) {
change_bit(0, (unsigned long *)&err_idx[i]);
change_bit(1, (unsigned long *)&err_idx[i]);
err_idx[i] ^= 3;
if (err_idx[i] < chip->ecc.size * 8) {
change_bit(err_idx[i], (unsigned long *)dat);
int err = err_idx[i];
dat[err >> 3] ^= BIT(err & 7);
i++;
}
}
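	/*
	 * err_idx[i] ^= 3 toggles bits 0 and 1 of the index, exactly what the
	 * two change_bit() calls did, and the open-coded flip avoids the
	 * atomic helper. Example: err = 13 -> dat[13 >> 3] ^= BIT(13 & 7),
	 * i.e. dat[1] ^= BIT(5).
	 */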
@ -1132,7 +1136,12 @@ static int fsmc_nand_remove(struct platform_device *pdev)
struct fsmc_nand_data *host = platform_get_drvdata(pdev);
if (host) {
nand_release(&host->nand);
struct nand_chip *chip = &host->nand;
int ret;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
fsmc_nand_disable(host);
if (host->mode == USE_DMA_ACCESS) {


@ -190,8 +190,12 @@ gpio_nand_get_io_sync(struct platform_device *pdev)
static int gpio_nand_remove(struct platform_device *pdev)
{
struct gpiomtd *gpiomtd = platform_get_drvdata(pdev);
struct nand_chip *chip = &gpiomtd->nand_chip;
int ret;
nand_release(&gpiomtd->nand_chip);
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
/* Enable write protection and disable the chip */
if (gpiomtd->nwp && !IS_ERR(gpiomtd->nwp))


@ -540,8 +540,10 @@ static int bch_set_geometry(struct gpmi_nand_data *this)
return ret;
ret = pm_runtime_get_sync(this->dev);
if (ret < 0)
if (ret < 0) {
pm_runtime_put_autosuspend(this->dev);
return ret;
}
/*
* Due to erratum #2847 of the MX23, the BCH cannot be soft reset on this
@ -834,158 +836,6 @@ map_fail:
return false;
}
/**
* gpmi_copy_bits - copy bits from one memory region to another
* @dst: destination buffer
* @dst_bit_off: bit offset we're starting to write at
* @src: source buffer
* @src_bit_off: bit offset we're starting to read from
* @nbits: number of bits to copy
*
* This function copies bits from one memory region to another, and is used by
* the GPMI driver to copy ECC sections which are not guaranteed to be byte
* aligned.
*
* src and dst should not overlap.
*
*/
static void gpmi_copy_bits(u8 *dst, size_t dst_bit_off, const u8 *src,
size_t src_bit_off, size_t nbits)
{
size_t i;
size_t nbytes;
u32 src_buffer = 0;
size_t bits_in_src_buffer = 0;
if (!nbits)
return;
/*
* Move src and dst pointers to the closest byte pointer and store bit
* offsets within a byte.
*/
src += src_bit_off / 8;
src_bit_off %= 8;
dst += dst_bit_off / 8;
dst_bit_off %= 8;
/*
* Initialize the src_buffer value with bits available in the first
* byte of data so that we end up with a byte aligned src pointer.
*/
if (src_bit_off) {
src_buffer = src[0] >> src_bit_off;
if (nbits >= (8 - src_bit_off)) {
bits_in_src_buffer += 8 - src_bit_off;
} else {
src_buffer &= GENMASK(nbits - 1, 0);
bits_in_src_buffer += nbits;
}
nbits -= bits_in_src_buffer;
src++;
}
/* Calculate the number of bytes that can be copied from src to dst. */
nbytes = nbits / 8;
/* Try to align dst to a byte boundary. */
if (dst_bit_off) {
if (bits_in_src_buffer < (8 - dst_bit_off) && nbytes) {
src_buffer |= src[0] << bits_in_src_buffer;
bits_in_src_buffer += 8;
src++;
nbytes--;
}
if (bits_in_src_buffer >= (8 - dst_bit_off)) {
dst[0] &= GENMASK(dst_bit_off - 1, 0);
dst[0] |= src_buffer << dst_bit_off;
src_buffer >>= (8 - dst_bit_off);
bits_in_src_buffer -= (8 - dst_bit_off);
dst_bit_off = 0;
dst++;
if (bits_in_src_buffer > 7) {
bits_in_src_buffer -= 8;
dst[0] = src_buffer;
dst++;
src_buffer >>= 8;
}
}
}
if (!bits_in_src_buffer && !dst_bit_off) {
/*
* Both src and dst pointers are byte aligned, thus we can
* just use the optimized memcpy function.
*/
if (nbytes)
memcpy(dst, src, nbytes);
} else {
/*
* src buffer is not byte aligned, hence we have to copy each
* src byte to the src_buffer variable before extracting a byte
* to store in dst.
*/
for (i = 0; i < nbytes; i++) {
src_buffer |= src[i] << bits_in_src_buffer;
dst[i] = src_buffer;
src_buffer >>= 8;
}
}
/* Update dst and src pointers */
dst += nbytes;
src += nbytes;
/*
* nbits is the number of remaining bits. It should not exceed 8 as
* we've already copied as many bytes as possible.
*/
nbits %= 8;
/*
* If there's no more bits to copy to the destination and src buffer
* was already byte aligned, then we're done.
*/
if (!nbits && !bits_in_src_buffer)
return;
/* Copy the remaining bits to src_buffer */
if (nbits)
src_buffer |= (*src & GENMASK(nbits - 1, 0)) <<
bits_in_src_buffer;
bits_in_src_buffer += nbits;
/*
* In case there were not enough bits to get a byte aligned dst buffer
* prepare the src_buffer variable to match the dst organization (shift
* src_buffer by dst_bit_off and retrieve the least significant bits
* from dst).
*/
if (dst_bit_off)
src_buffer = (src_buffer << dst_bit_off) |
(*dst & GENMASK(dst_bit_off - 1, 0));
bits_in_src_buffer += dst_bit_off;
/*
* Keep most significant bits from dst if we end up with an unaligned
* number of bits.
*/
nbytes = bits_in_src_buffer / 8;
if (bits_in_src_buffer % 8) {
src_buffer |= (dst[nbytes] &
GENMASK(7, bits_in_src_buffer % 8)) <<
(nbytes * 8);
nbytes++;
}
/* Copy the remaining bytes to dst */
for (i = 0; i < nbytes; i++) {
dst[i] = src_buffer;
src_buffer >>= 8;
}
}
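
gpmi_copy_bits() goes away in favour of nand_extract_bits(), the core helper introduced earlier in this series; based on the call sites below, its prototype is expected to look roughly like (offsets expressed in bits):

void nand_extract_bits(u8 *dst, unsigned int dst_off, const u8 *src,
		       unsigned int src_off, unsigned int nbits);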
/* add our own bbt descriptor */
static uint8_t scan_ff_pattern[] = { 0xff };
static struct nand_bbt_descr gpmi_bbt_descr = {
@ -1713,7 +1563,7 @@ static int gpmi_ecc_write_oob(struct nand_chip *chip, int page)
* inline (interleaved with payload DATA), and do not align data chunk on
* byte boundaries.
* We thus need to take care moving the payload data and ECC bits stored in the
* page into the provided buffers, which is why we're using gpmi_copy_bits.
* page into the provided buffers, which is why we're using nand_extract_bits().
*
* See set_geometry_by_ecc_info inline comments to have a full description
* of the layout used by the GPMI controller.
@ -1762,9 +1612,8 @@ static int gpmi_ecc_read_page_raw(struct nand_chip *chip, uint8_t *buf,
/* Extract interleaved payload data and ECC bits */
for (step = 0; step < nfc_geo->ecc_chunk_count; step++) {
if (buf)
gpmi_copy_bits(buf, step * eccsize * 8,
tmp_buf, src_bit_off,
eccsize * 8);
nand_extract_bits(buf, step * eccsize, tmp_buf,
src_bit_off, eccsize * 8);
src_bit_off += eccsize * 8;
/* Align last ECC block to align a byte boundary */
@ -1773,9 +1622,8 @@ static int gpmi_ecc_read_page_raw(struct nand_chip *chip, uint8_t *buf,
eccbits += 8 - ((oob_bit_off + eccbits) % 8);
if (oob_required)
gpmi_copy_bits(oob, oob_bit_off,
tmp_buf, src_bit_off,
eccbits);
nand_extract_bits(oob, oob_bit_off, tmp_buf,
src_bit_off, eccbits);
src_bit_off += eccbits;
oob_bit_off += eccbits;
@ -1800,7 +1648,7 @@ static int gpmi_ecc_read_page_raw(struct nand_chip *chip, uint8_t *buf,
* inline (interleaved with payload DATA), and do not align data chunk on
* byte boundaries.
* We thus need to take care moving the OOB area at the right place in the
* final page, which is why we're using gpmi_copy_bits.
* final page, which is why we're using nand_extract_bits().
*
* See set_geometry_by_ecc_info inline comments to have a full description
* of the layout used by the GPMI controller.
@ -1839,8 +1687,8 @@ static int gpmi_ecc_write_page_raw(struct nand_chip *chip, const uint8_t *buf,
/* Interleave payload data and ECC bits */
for (step = 0; step < nfc_geo->ecc_chunk_count; step++) {
if (buf)
gpmi_copy_bits(tmp_buf, dst_bit_off,
buf, step * eccsize * 8, eccsize * 8);
nand_extract_bits(tmp_buf, dst_bit_off, buf,
step * eccsize * 8, eccsize * 8);
dst_bit_off += eccsize * 8;
/* Align last ECC block to align a byte boundary */
@ -1849,8 +1697,8 @@ static int gpmi_ecc_write_page_raw(struct nand_chip *chip, const uint8_t *buf,
eccbits += 8 - ((oob_bit_off + eccbits) % 8);
if (oob_required)
gpmi_copy_bits(tmp_buf, dst_bit_off,
oob, oob_bit_off, eccbits);
nand_extract_bits(tmp_buf, dst_bit_off, oob,
oob_bit_off, eccbits);
dst_bit_off += eccbits;
oob_bit_off += eccbits;
@ -2408,6 +2256,9 @@ static int gpmi_nfc_exec_op(struct nand_chip *chip,
struct completion *completion;
unsigned long to;
if (check_only)
return 0;
this->ntransfers = 0;
for (i = 0; i < GPMI_MAX_TRANSFERS; i++)
this->transfers[i].direction = DMA_NONE;
@ -2658,7 +2509,7 @@ static int gpmi_nand_probe(struct platform_device *pdev)
ret = __gpmi_enable_clk(this, true);
if (ret)
goto exit_nfc_init;
goto exit_acquire_resources;
pm_runtime_set_autosuspend_delay(&pdev->dev, 500);
pm_runtime_use_autosuspend(&pdev->dev);
@ -2693,11 +2544,15 @@ exit_acquire_resources:
static int gpmi_nand_remove(struct platform_device *pdev)
{
struct gpmi_nand_data *this = platform_get_drvdata(pdev);
struct nand_chip *chip = &this->nand;
int ret;
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
nand_release(&this->nand);
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
gpmi_free_dma_buffer(this);
release_resources(this);
return 0;


@ -806,8 +806,12 @@ static int hisi_nfc_probe(struct platform_device *pdev)
static int hisi_nfc_remove(struct platform_device *pdev)
{
struct hinfc_host *host = platform_get_drvdata(pdev);
struct nand_chip *chip = &host->chip;
int ret;
nand_release(&host->chip);
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
return 0;
}


@ -27,9 +27,6 @@
#define DRV_NAME "ingenic-nand"
/* Command delay when there is no R/B pin. */
#define RB_DELAY_US 100
struct jz_soc_info {
unsigned long data_offset;
unsigned long addr_offset;
@ -49,7 +46,6 @@ struct ingenic_nfc {
struct nand_controller controller;
unsigned int num_banks;
struct list_head chips;
int selected;
struct ingenic_nand_cs cs[];
};
@ -102,7 +98,7 @@ static int qi_lb60_ooblayout_free(struct mtd_info *mtd, int section,
return 0;
}
const struct mtd_ooblayout_ops qi_lb60_ooblayout_ops = {
static const struct mtd_ooblayout_ops qi_lb60_ooblayout_ops = {
.ecc = qi_lb60_ooblayout_ecc,
.free = qi_lb60_ooblayout_free,
};
@ -142,51 +138,6 @@ static const struct mtd_ooblayout_ops jz4725b_ooblayout_ops = {
.free = jz4725b_ooblayout_free,
};
static void ingenic_nand_select_chip(struct nand_chip *chip, int chipnr)
{
struct ingenic_nand *nand = to_ingenic_nand(nand_to_mtd(chip));
struct ingenic_nfc *nfc = to_ingenic_nfc(nand->chip.controller);
struct ingenic_nand_cs *cs;
/* Ensure the currently selected chip is deasserted. */
if (chipnr == -1 && nfc->selected >= 0) {
cs = &nfc->cs[nfc->selected];
jz4780_nemc_assert(nfc->dev, cs->bank, false);
}
nfc->selected = chipnr;
}
static void ingenic_nand_cmd_ctrl(struct nand_chip *chip, int cmd,
unsigned int ctrl)
{
struct ingenic_nand *nand = to_ingenic_nand(nand_to_mtd(chip));
struct ingenic_nfc *nfc = to_ingenic_nfc(nand->chip.controller);
struct ingenic_nand_cs *cs;
if (WARN_ON(nfc->selected < 0))
return;
cs = &nfc->cs[nfc->selected];
jz4780_nemc_assert(nfc->dev, cs->bank, ctrl & NAND_NCE);
if (cmd == NAND_CMD_NONE)
return;
if (ctrl & NAND_ALE)
writeb(cmd, cs->base + nfc->soc_info->addr_offset);
else if (ctrl & NAND_CLE)
writeb(cmd, cs->base + nfc->soc_info->cmd_offset);
}
static int ingenic_nand_dev_ready(struct nand_chip *chip)
{
struct ingenic_nand *nand = to_ingenic_nand(nand_to_mtd(chip));
return !gpiod_get_value_cansleep(nand->busy_gpio);
}
static void ingenic_nand_ecc_hwctl(struct nand_chip *chip, int mode)
{
struct ingenic_nand *nand = to_ingenic_nand(nand_to_mtd(chip));
@ -298,8 +249,91 @@ static int ingenic_nand_attach_chip(struct nand_chip *chip)
return 0;
}
static int ingenic_nand_exec_instr(struct nand_chip *chip,
struct ingenic_nand_cs *cs,
const struct nand_op_instr *instr)
{
struct ingenic_nand *nand = to_ingenic_nand(nand_to_mtd(chip));
struct ingenic_nfc *nfc = to_ingenic_nfc(chip->controller);
unsigned int i;
switch (instr->type) {
case NAND_OP_CMD_INSTR:
writeb(instr->ctx.cmd.opcode,
cs->base + nfc->soc_info->cmd_offset);
return 0;
case NAND_OP_ADDR_INSTR:
for (i = 0; i < instr->ctx.addr.naddrs; i++)
writeb(instr->ctx.addr.addrs[i],
cs->base + nfc->soc_info->addr_offset);
return 0;
case NAND_OP_DATA_IN_INSTR:
if (instr->ctx.data.force_8bit ||
!(chip->options & NAND_BUSWIDTH_16))
ioread8_rep(cs->base + nfc->soc_info->data_offset,
instr->ctx.data.buf.in,
instr->ctx.data.len);
else
ioread16_rep(cs->base + nfc->soc_info->data_offset,
instr->ctx.data.buf.in,
instr->ctx.data.len);
return 0;
case NAND_OP_DATA_OUT_INSTR:
if (instr->ctx.data.force_8bit ||
!(chip->options & NAND_BUSWIDTH_16))
iowrite8_rep(cs->base + nfc->soc_info->data_offset,
instr->ctx.data.buf.out,
instr->ctx.data.len);
else
iowrite16_rep(cs->base + nfc->soc_info->data_offset,
instr->ctx.data.buf.out,
instr->ctx.data.len);
return 0;
case NAND_OP_WAITRDY_INSTR:
if (!nand->busy_gpio)
return nand_soft_waitrdy(chip,
instr->ctx.waitrdy.timeout_ms);
return nand_gpio_waitrdy(chip, nand->busy_gpio,
instr->ctx.waitrdy.timeout_ms);
default:
break;
}
return -EINVAL;
}
static int ingenic_nand_exec_op(struct nand_chip *chip,
const struct nand_operation *op,
bool check_only)
{
struct ingenic_nand *nand = to_ingenic_nand(nand_to_mtd(chip));
struct ingenic_nfc *nfc = to_ingenic_nfc(nand->chip.controller);
struct ingenic_nand_cs *cs;
unsigned int i;
int ret = 0;
if (check_only)
return 0;
cs = &nfc->cs[op->cs];
jz4780_nemc_assert(nfc->dev, cs->bank, true);
for (i = 0; i < op->ninstrs; i++) {
ret = ingenic_nand_exec_instr(chip, cs, &op->instrs[i]);
if (ret)
break;
if (op->instrs[i].delay_ns)
ndelay(op->instrs[i].delay_ns);
}
jz4780_nemc_assert(nfc->dev, cs->bank, false);
return ret;
}
static const struct nand_controller_ops ingenic_nand_controller_ops = {
.attach_chip = ingenic_nand_attach_chip,
.exec_op = ingenic_nand_exec_op,
};
static int ingenic_nand_init_chip(struct platform_device *pdev,
@ -339,10 +373,20 @@ static int ingenic_nand_init_chip(struct platform_device *pdev,
ret = PTR_ERR(nand->busy_gpio);
dev_err(dev, "failed to request busy GPIO: %d\n", ret);
return ret;
} else if (nand->busy_gpio) {
nand->chip.legacy.dev_ready = ingenic_nand_dev_ready;
}
/*
* The rb-gpios semantics was undocumented and qi,lb60 (along with
* the ingenic driver) got it wrong. The active state encodes the
* NAND ready state, which is high level. Since there's no signal
* inverter on this board, it should be active-high. Let's fix that
* here for older DTs so we can re-use the generic nand_gpio_waitrdy()
* helper, and be consistent with what other drivers do.
*/
if (of_machine_is_compatible("qi,lb60") &&
gpiod_is_active_low(nand->busy_gpio))
gpiod_toggle_active_low(nand->busy_gpio);
nand->wp_gpio = devm_gpiod_get_optional(dev, "wp", GPIOD_OUT_LOW);
if (IS_ERR(nand->wp_gpio)) {
@ -359,12 +403,7 @@ static int ingenic_nand_init_chip(struct platform_device *pdev,
return -ENOMEM;
mtd->dev.parent = dev;
chip->legacy.IO_ADDR_R = cs->base + nfc->soc_info->data_offset;
chip->legacy.IO_ADDR_W = cs->base + nfc->soc_info->data_offset;
chip->legacy.chip_delay = RB_DELAY_US;
chip->options = NAND_NO_SUBPAGE_WRITE;
chip->legacy.select_chip = ingenic_nand_select_chip;
chip->legacy.cmd_ctrl = ingenic_nand_cmd_ctrl;
chip->ecc.mode = NAND_ECC_HW;
chip->controller = &nfc->controller;
nand_set_flash_node(chip, np);
@ -376,7 +415,7 @@ static int ingenic_nand_init_chip(struct platform_device *pdev,
ret = mtd_device_register(mtd, NULL, 0);
if (ret) {
nand_release(chip);
nand_cleanup(chip);
return ret;
}
@ -387,13 +426,18 @@ static int ingenic_nand_init_chip(struct platform_device *pdev,
static void ingenic_nand_cleanup_chips(struct ingenic_nfc *nfc)
{
struct ingenic_nand *chip;
struct ingenic_nand *ingenic_chip;
struct nand_chip *chip;
int ret;
while (!list_empty(&nfc->chips)) {
chip = list_first_entry(&nfc->chips,
struct ingenic_nand, chip_list);
nand_release(&chip->chip);
list_del(&chip->chip_list);
ingenic_chip = list_first_entry(&nfc->chips,
struct ingenic_nand, chip_list);
chip = &ingenic_chip->chip;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
list_del(&ingenic_chip->chip_list);
}
}


@ -75,6 +75,9 @@ extern const struct nand_manufacturer_ops micron_nand_manuf_ops;
extern const struct nand_manufacturer_ops samsung_nand_manuf_ops;
extern const struct nand_manufacturer_ops toshiba_nand_manuf_ops;
/* MLC pairing schemes */
extern const struct mtd_pairing_scheme dist3_pairing_scheme;
/* Core functions */
const struct nand_manufacturer *nand_get_manufacturer(u8 id);
int nand_bbm_get_next_page(struct nand_chip *chip, int page);
@ -106,6 +109,15 @@ static inline bool nand_has_exec_op(struct nand_chip *chip)
return true;
}
static inline int nand_check_op(struct nand_chip *chip,
const struct nand_operation *op)
{
if (!nand_has_exec_op(chip))
return 0;
return chip->controller->ops->exec_op(chip, op, true);
}
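/*
 * Illustration only (not part of the diff): the point of the new
 * nand_check_op() path is to let callers probe whether the controller
 * supports an operation before relying on it. Combined with the
 * check_only parameter added to nand_read_data_op() below, the usual
 * pattern looks like this ('buf' and 'len' are placeholders):
 *
 *	bool use_datain = false;
 *
 *	if (!nand_has_exec_op(chip) ||
 *	    !nand_read_data_op(chip, buf, len, true, true))
 *		use_datain = true;
 *
 * i.e. the direct DATA_IN path is taken when the controller either has
 * no ->exec_op() (legacy path) or passes the dry-run check; otherwise
 * the caller falls back to something like a Change Read Column. The
 * Micron, ONFI and JEDEC changes later in this series follow exactly
 * this pattern.
 */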
static inline int nand_exec_op(struct nand_chip *chip,
const struct nand_operation *op)
{


@ -826,8 +826,13 @@ free_gpio:
static int lpc32xx_nand_remove(struct platform_device *pdev)
{
struct lpc32xx_nand_host *host = platform_get_drvdata(pdev);
struct nand_chip *chip = &host->nand_chip;
int ret;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
nand_release(&host->nand_chip);
free_irq(host->irq, host);
if (use_dma)
dma_release_channel(host->dma_chan);


@ -947,8 +947,12 @@ static int lpc32xx_nand_remove(struct platform_device *pdev)
{
uint32_t tmp;
struct lpc32xx_nand_host *host = platform_get_drvdata(pdev);
struct nand_chip *chip = &host->nand_chip;
int ret;
nand_release(&host->nand_chip);
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
dma_release_channel(host->dma_chan);
/* Force CE high */


@ -707,7 +707,7 @@ static int marvell_nfc_wait_op(struct nand_chip *chip, unsigned int timeout_ms)
* In case the interrupt was not served in the required time frame,
* check if the ISR was not served or if something went actually wrong.
*/
if (ret && !pending) {
if (!ret && !pending) {
dev_err(nfc->dev, "Timeout waiting for RB signal\n");
return -ETIMEDOUT;
}
@ -932,14 +932,14 @@ static void marvell_nfc_check_empty_chunk(struct nand_chip *chip,
}
/*
* Check a chunk is correct or not according to hardware ECC engine.
* Check if a chunk is correct or not according to the hardware ECC engine.
* mtd->ecc_stats.corrected is updated, as well as max_bitflips, however
* mtd->ecc_stats.failure is not; the function will instead return a non-zero
* value indicating that a check on the emptiness of the subpage must be
* performed before declaring the subpage corrupted.
* performed before actually declaring the subpage as "corrupted".
*/
static int marvell_nfc_hw_ecc_correct(struct nand_chip *chip,
unsigned int *max_bitflips)
static int marvell_nfc_hw_ecc_check_bitflips(struct nand_chip *chip,
unsigned int *max_bitflips)
{
struct mtd_info *mtd = nand_to_mtd(chip);
struct marvell_nfc *nfc = to_marvell_nfc(chip->controller);
@ -1053,7 +1053,7 @@ static int marvell_nfc_hw_ecc_hmg_read_page(struct nand_chip *chip, u8 *buf,
marvell_nfc_enable_hw_ecc(chip);
marvell_nfc_hw_ecc_hmg_do_read_page(chip, buf, chip->oob_poi, false,
page);
ret = marvell_nfc_hw_ecc_correct(chip, &max_bitflips);
ret = marvell_nfc_hw_ecc_check_bitflips(chip, &max_bitflips);
marvell_nfc_disable_hw_ecc(chip);
if (!ret)
@ -1224,12 +1224,12 @@ static int marvell_nfc_hw_ecc_bch_read_page_raw(struct nand_chip *chip, u8 *buf,
/* Read spare bytes */
nand_read_data_op(chip, oob + (lt->spare_bytes * chunk),
spare_len, false);
spare_len, false, false);
/* Read ECC bytes */
nand_read_data_op(chip, oob + ecc_offset +
(ALIGN(lt->ecc_bytes, 32) * chunk),
ecc_len, false);
ecc_len, false, false);
}
return 0;
@ -1336,7 +1336,7 @@ static int marvell_nfc_hw_ecc_bch_read_page(struct nand_chip *chip,
/* Read the chunk and detect number of bitflips */
marvell_nfc_hw_ecc_bch_read_chunk(chip, chunk, data, data_len,
spare, spare_len, page);
ret = marvell_nfc_hw_ecc_correct(chip, &max_bitflips);
ret = marvell_nfc_hw_ecc_check_bitflips(chip, &max_bitflips);
if (ret)
failure_mask |= BIT(chunk);
@ -1358,10 +1358,9 @@ static int marvell_nfc_hw_ecc_bch_read_page(struct nand_chip *chip,
*/
/*
* In case there is any subpage read error reported by ->correct(), we
* usually re-read only ECC bytes in raw mode and check if the whole
* page is empty. In this case, it is normal that the ECC check failed
* and we just ignore the error.
* In case there is any subpage read error, we usually re-read only ECC
* bytes in raw mode and check if the whole page is empty. In this case,
* it is normal that the ECC check failed and we just ignore the error.
*
* However, it has been empirically observed that for some layouts (e.g
* 2k page, 8b strength per 512B chunk), the controller tries to correct
@ -2107,7 +2106,8 @@ static int marvell_nfc_exec_op(struct nand_chip *chip,
{
struct marvell_nfc *nfc = to_marvell_nfc(chip->controller);
marvell_nfc_select_target(chip, op->cs);
if (!check_only)
marvell_nfc_select_target(chip, op->cs);
if (nfc->caps->is_nfcv2)
return nand_op_parser_exec_op(chip, &marvell_nfcv2_op_parser,
@ -2166,8 +2166,8 @@ static const struct mtd_ooblayout_ops marvell_nand_ooblayout_ops = {
.free = marvell_nand_ooblayout_free,
};
static int marvell_nand_hw_ecc_ctrl_init(struct mtd_info *mtd,
struct nand_ecc_ctrl *ecc)
static int marvell_nand_hw_ecc_controller_init(struct mtd_info *mtd,
struct nand_ecc_ctrl *ecc)
{
struct nand_chip *chip = mtd_to_nand(mtd);
struct marvell_nfc *nfc = to_marvell_nfc(chip->controller);
@ -2261,7 +2261,7 @@ static int marvell_nand_ecc_init(struct mtd_info *mtd,
switch (ecc->mode) {
case NAND_ECC_HW:
ret = marvell_nand_hw_ecc_ctrl_init(mtd, ecc);
ret = marvell_nand_hw_ecc_controller_init(mtd, ecc);
if (ret)
return ret;
break;
@ -2664,7 +2664,7 @@ static int marvell_nand_chip_init(struct device *dev, struct marvell_nfc *nfc,
ret = mtd_device_register(mtd, NULL, 0);
if (ret) {
dev_err(dev, "failed to register mtd device: %d\n", ret);
nand_release(chip);
nand_cleanup(chip);
return ret;
}
@ -2673,6 +2673,21 @@ static int marvell_nand_chip_init(struct device *dev, struct marvell_nfc *nfc,
return 0;
}
static void marvell_nand_chips_cleanup(struct marvell_nfc *nfc)
{
struct marvell_nand_chip *entry, *temp;
struct nand_chip *chip;
int ret;
list_for_each_entry_safe(entry, temp, &nfc->chips, node) {
chip = &entry->chip;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
list_del(&entry->node);
}
}
static int marvell_nand_chips_init(struct device *dev, struct marvell_nfc *nfc)
{
struct device_node *np = dev->of_node;
@ -2707,21 +2722,16 @@ static int marvell_nand_chips_init(struct device *dev, struct marvell_nfc *nfc)
ret = marvell_nand_chip_init(dev, nfc, nand_np);
if (ret) {
of_node_put(nand_np);
return ret;
goto cleanup_chips;
}
}
return 0;
}
static void marvell_nand_chips_cleanup(struct marvell_nfc *nfc)
{
struct marvell_nand_chip *entry, *temp;
cleanup_chips:
marvell_nand_chips_cleanup(nfc);
list_for_each_entry_safe(entry, temp, &nfc->chips, node) {
nand_release(&entry->chip);
list_del(&entry->node);
}
return ret;
}
static int marvell_nfc_init_dma(struct marvell_nfc *nfc)
@ -2854,7 +2864,6 @@ static int marvell_nfc_init(struct marvell_nfc *nfc)
static int marvell_nfc_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct resource *r;
struct marvell_nfc *nfc;
int ret;
int irq;
@ -2869,8 +2878,7 @@ static int marvell_nfc_probe(struct platform_device *pdev)
nfc->controller.ops = &marvell_nand_controller_ops;
INIT_LIST_HEAD(&nfc->chips);
r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
nfc->regs = devm_ioremap_resource(dev, r);
nfc->regs = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(nfc->regs))
return PTR_ERR(nfc->regs);


@ -899,6 +899,9 @@ static int meson_nfc_exec_op(struct nand_chip *nand,
u32 op_id, delay_idle, cmd;
int i;
if (check_only)
return 0;
meson_nfc_select_chip(nand, op->cs);
for (op_id = 0; op_id < op->ninstrs; op_id++) {
instr = &op->instrs[op_id];
@ -1266,7 +1269,7 @@ meson_nfc_nand_chip_init(struct device *dev,
nand_set_flash_node(nand, np);
nand_set_controller_data(nand, nfc);
nand->options |= NAND_USE_BOUNCE_BUFFER;
nand->options |= NAND_USES_DMA;
mtd = nand_to_mtd(nand);
mtd->owner = THIS_MODULE;
mtd->dev.parent = dev;


@ -805,8 +805,11 @@ static int mpc5121_nfc_remove(struct platform_device *op)
{
struct device *dev = &op->dev;
struct mtd_info *mtd = dev_get_drvdata(dev);
int ret;
nand_release(mtd_to_nand(mtd));
ret = mtd_device_unregister(mtd);
WARN_ON(ret);
nand_cleanup(mtd_to_nand(mtd));
mpc5121_nfc_free(dev, mtd);
return 0;


@ -1380,7 +1380,7 @@ static int mtk_nfc_nand_chip_init(struct device *dev, struct mtk_nfc *nfc,
nand_set_flash_node(nand, np);
nand_set_controller_data(nand, nfc);
nand->options |= NAND_USE_BOUNCE_BUFFER | NAND_SUBPAGE_READ;
nand->options |= NAND_USES_DMA | NAND_SUBPAGE_READ;
nand->legacy.dev_ready = mtk_nfc_dev_ready;
nand->legacy.select_chip = mtk_nfc_select_chip;
nand->legacy.write_byte = mtk_nfc_write_byte;
@ -1419,7 +1419,7 @@ static int mtk_nfc_nand_chip_init(struct device *dev, struct mtk_nfc *nfc,
ret = mtd_device_register(mtd, NULL, 0);
if (ret) {
dev_err(dev, "mtd parse partition error\n");
nand_release(nand);
nand_cleanup(nand);
return ret;
}
@ -1578,13 +1578,18 @@ release_ecc:
static int mtk_nfc_remove(struct platform_device *pdev)
{
struct mtk_nfc *nfc = platform_get_drvdata(pdev);
struct mtk_nfc_nand_chip *chip;
struct mtk_nfc_nand_chip *mtk_chip;
struct nand_chip *chip;
int ret;
while (!list_empty(&nfc->chips)) {
chip = list_first_entry(&nfc->chips, struct mtk_nfc_nand_chip,
node);
nand_release(&chip->nand);
list_del(&chip->node);
mtk_chip = list_first_entry(&nfc->chips,
struct mtk_nfc_nand_chip, node);
chip = &mtk_chip->nand;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
list_del(&mtk_chip->node);
}
mtk_ecc_release(nfc->ecc);


@ -1919,8 +1919,12 @@ escan:
static int mxcnd_remove(struct platform_device *pdev)
{
struct mxc_nand_host *host = platform_get_drvdata(pdev);
struct nand_chip *chip = &host->nand;
int ret;
nand_release(&host->nand);
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
if (host->clk_act)
clk_disable_unprepare(host->clk);


@ -393,6 +393,9 @@ static int mxic_nfc_exec_op(struct nand_chip *chip,
int ret = 0;
unsigned int op_id;
if (check_only)
return 0;
mxic_nfc_cs_enable(nfc);
init_completion(&nfc->complete);
for (op_id = 0; op_id < op->ninstrs; op_id++) {
@ -553,8 +556,13 @@ fail:
static int mxic_nfc_remove(struct platform_device *pdev)
{
struct mxic_nand_ctlr *nfc = platform_get_drvdata(pdev);
struct nand_chip *chip = &nfc->chip;
int ret;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
nand_release(&nfc->chip);
mxic_nfc_clk_disable(nfc);
return 0;
}


@ -205,6 +205,56 @@ static const struct mtd_ooblayout_ops nand_ooblayout_lp_hamming_ops = {
.free = nand_ooblayout_free_lp_hamming,
};
static int nand_pairing_dist3_get_info(struct mtd_info *mtd, int page,
struct mtd_pairing_info *info)
{
int lastpage = (mtd->erasesize / mtd->writesize) - 1;
int dist = 3;
if (page == lastpage)
dist = 2;
if (!page || (page & 1)) {
info->group = 0;
info->pair = (page + 1) / 2;
} else {
info->group = 1;
info->pair = (page + 1 - dist) / 2;
}
return 0;
}
static int nand_pairing_dist3_get_wunit(struct mtd_info *mtd,
const struct mtd_pairing_info *info)
{
int lastpair = ((mtd->erasesize / mtd->writesize) - 1) / 2;
int page = info->pair * 2;
int dist = 3;
if (!info->group && !info->pair)
return 0;
if (info->pair == lastpair && info->group)
dist = 2;
if (!info->group)
page--;
else if (info->pair)
page += dist - 1;
if (page >= mtd->erasesize / mtd->writesize)
return -EINVAL;
return page;
}
const struct mtd_pairing_scheme dist3_pairing_scheme = {
.ngroups = 2,
.get_info = nand_pairing_dist3_get_info,
.get_wunit = nand_pairing_dist3_get_wunit,
};
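/*
 * Worked example (illustration only, not part of the diff): with the
 * scheme above, nand_pairing_dist3_get_info() maps page 1 to
 * {group 0, pair 1} and page 4 to {group 1, pair 1}, page 3 to
 * {group 0, pair 2} and page 6 to {group 1, pair 2}, and so on: the
 * two pages of a pair are programmed three pages apart, hence the
 * "dist3" name, with the distance shrinking to 2 at the end of the
 * block per the lastpage/lastpair special cases above.
 */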
static int check_offs_len(struct nand_chip *chip, loff_t ofs, uint64_t len)
{
int ret = 0;
@ -224,6 +274,50 @@ static int check_offs_len(struct nand_chip *chip, loff_t ofs, uint64_t len)
return ret;
}
/**
* nand_extract_bits - Copy unaligned bits from one buffer to another one
* @dst: destination buffer
* @dst_off: bit offset at which the writing starts
* @src: source buffer
* @src_off: bit offset at which the reading starts
* @nbits: number of bits to copy from @src to @dst
*
* Copy bits from one memory region to another (overlap authorized).
*/
void nand_extract_bits(u8 *dst, unsigned int dst_off, const u8 *src,
unsigned int src_off, unsigned int nbits)
{
unsigned int tmp, n;
dst += dst_off / 8;
dst_off %= 8;
src += src_off / 8;
src_off %= 8;
while (nbits) {
n = min3(8 - dst_off, 8 - src_off, nbits);
tmp = (*src >> src_off) & GENMASK(n - 1, 0);
*dst &= ~GENMASK(n - 1 + dst_off, dst_off);
*dst |= tmp << dst_off;
dst_off += n;
if (dst_off >= 8) {
dst++;
dst_off -= 8;
}
src_off += n;
if (src_off >= 8) {
src++;
src_off -= 8;
}
nbits -= n;
}
}
EXPORT_SYMBOL_GPL(nand_extract_bits);
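/*
 * Usage sketch (illustration only, made-up values): copy a 13-bit
 * field that starts at bit offset 3 of src into dst, starting at
 * bit 0 of dst:
 *
 *	u8 src[3] = { 0xa5, 0x5a, 0xff };
 *	u8 dst[2] = { 0 };
 *
 *	nand_extract_bits(dst, 0, src, 3, 13);
 *
 * The GPMI changes earlier in this series use the helper this way to
 * de-interleave payload and ECC chunks that are not byte-aligned.
 */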
/**
* nand_select_target() - Select a NAND target (A.K.A. die)
* @chip: NAND chip object
@ -345,6 +439,9 @@ static int nand_block_bad(struct nand_chip *chip, loff_t ofs)
static int nand_isbad_bbm(struct nand_chip *chip, loff_t ofs)
{
if (chip->options & NAND_NO_BBM_QUIRK)
return 0;
if (chip->legacy.block_bad)
return chip->legacy.block_bad(chip, ofs);
@ -690,7 +787,8 @@ int nand_soft_waitrdy(struct nand_chip *chip, unsigned long timeout_ms)
*/
timeout_ms = jiffies + msecs_to_jiffies(timeout_ms) + 1;
do {
ret = nand_read_data_op(chip, &status, sizeof(status), true);
ret = nand_read_data_op(chip, &status, sizeof(status), true,
false);
if (ret)
break;
@ -736,8 +834,14 @@ EXPORT_SYMBOL_GPL(nand_soft_waitrdy);
int nand_gpio_waitrdy(struct nand_chip *chip, struct gpio_desc *gpiod,
unsigned long timeout_ms)
{
/* Wait until R/B pin indicates chip is ready or timeout occurs */
timeout_ms = jiffies + msecs_to_jiffies(timeout_ms);
/*
* Wait until R/B pin indicates chip is ready or timeout occurs.
* +1 below is necessary because if we are now in the last fraction
* of jiffy and msecs_to_jiffies is 1 then we will wait only that
* small jiffy fraction - possibly leading to false timeout.
*/
timeout_ms = jiffies + msecs_to_jiffies(timeout_ms) + 1;
do {
if (gpiod_get_value_cansleep(gpiod))
return 0;
@ -770,7 +874,7 @@ void panic_nand_wait(struct nand_chip *chip, unsigned long timeo)
u8 status;
ret = nand_read_data_op(chip, &status, sizeof(status),
true);
true, false);
if (ret)
return;
@ -1868,6 +1972,8 @@ EXPORT_SYMBOL_GPL(nand_reset_op);
* @buf: buffer used to store the data
* @len: length of the buffer
* @force_8bit: force 8-bit bus access
* @check_only: do not actually run the command, only check whether the
* controller driver supports it
*
* This function does a raw data read on the bus. Usually used after launching
* another NAND operation like nand_read_page_op().
@ -1876,7 +1982,7 @@ EXPORT_SYMBOL_GPL(nand_reset_op);
* Returns 0 on success, a negative error code otherwise.
*/
int nand_read_data_op(struct nand_chip *chip, void *buf, unsigned int len,
bool force_8bit)
bool force_8bit, bool check_only)
{
if (!len || !buf)
return -EINVAL;
@ -1889,9 +1995,15 @@ int nand_read_data_op(struct nand_chip *chip, void *buf, unsigned int len,
instrs[0].ctx.data.force_8bit = force_8bit;
if (check_only)
return nand_check_op(chip, &op);
return nand_exec_op(chip, &op);
}
if (check_only)
return 0;
if (force_8bit) {
u8 *p = buf;
unsigned int i;
@ -2112,7 +2224,7 @@ static void nand_op_parser_trace(const struct nand_op_parser_ctx *ctx)
char *prefix = " ";
unsigned int i;
pr_debug("executing subop:\n");
pr_debug("executing subop (CS%d):\n", ctx->subop.cs);
for (i = 0; i < ctx->ninstrs; i++) {
instr = &ctx->instrs[i];
@ -2176,6 +2288,7 @@ int nand_op_parser_exec_op(struct nand_chip *chip,
const struct nand_operation *op, bool check_only)
{
struct nand_op_parser_ctx ctx = {
.subop.cs = op->cs,
.subop.instrs = op->instrs,
.instrs = op->instrs,
.ninstrs = op->ninstrs,
@ -2620,7 +2733,7 @@ int nand_read_page_raw(struct nand_chip *chip, uint8_t *buf, int oob_required,
if (oob_required) {
ret = nand_read_data_op(chip, chip->oob_poi, mtd->oobsize,
false);
false, false);
if (ret)
return ret;
}
@ -2629,6 +2742,47 @@ int nand_read_page_raw(struct nand_chip *chip, uint8_t *buf, int oob_required,
}
EXPORT_SYMBOL(nand_read_page_raw);
/**
* nand_monolithic_read_page_raw - Monolithic page read in raw mode
* @chip: NAND chip info structure
* @buf: buffer to store read data
* @oob_required: caller requires OOB data read to chip->oob_poi
* @page: page number to read
*
* This is a raw page read, i.e. without any error detection/correction.
* Monolithic means we are requesting all the relevant data (main plus
* optionally OOB) to be loaded in the NAND cache and sent over the
* bus (from the NAND chip to the NAND controller) in a single
* operation. This is an alternative to nand_read_page_raw(), which
* first reads the main data, and if the OOB data is requested too,
* then reads more data on the bus.
*/
int nand_monolithic_read_page_raw(struct nand_chip *chip, u8 *buf,
int oob_required, int page)
{
struct mtd_info *mtd = nand_to_mtd(chip);
unsigned int size = mtd->writesize;
u8 *read_buf = buf;
int ret;
if (oob_required) {
size += mtd->oobsize;
if (buf != chip->data_buf)
read_buf = nand_get_data_buf(chip);
}
ret = nand_read_page_op(chip, page, 0, read_buf, size);
if (ret)
return ret;
if (buf != chip->data_buf)
memcpy(buf, read_buf, mtd->writesize);
return 0;
}
EXPORT_SYMBOL(nand_monolithic_read_page_raw);
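/*
 * Hookup sketch (illustration only): a controller driver whose data
 * path can transfer main data and OOB in one go may select this
 * helper instead of the default raw accessor, typically from its
 * ->attach_chip() callback:
 *
 *	chip->ecc.read_page_raw = nand_monolithic_read_page_raw;
 *
 * The write-side counterpart, nand_monolithic_write_page_raw(), is
 * introduced further down in this file.
 */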
/**
* nand_read_page_raw_syndrome - [INTERN] read raw page data without ecc
* @chip: nand chip info structure
@ -2652,7 +2806,7 @@ static int nand_read_page_raw_syndrome(struct nand_chip *chip, uint8_t *buf,
return ret;
for (steps = chip->ecc.steps; steps > 0; steps--) {
ret = nand_read_data_op(chip, buf, eccsize, false);
ret = nand_read_data_op(chip, buf, eccsize, false, false);
if (ret)
return ret;
@ -2660,14 +2814,14 @@ static int nand_read_page_raw_syndrome(struct nand_chip *chip, uint8_t *buf,
if (chip->ecc.prepad) {
ret = nand_read_data_op(chip, oob, chip->ecc.prepad,
false);
false, false);
if (ret)
return ret;
oob += chip->ecc.prepad;
}
ret = nand_read_data_op(chip, oob, eccbytes, false);
ret = nand_read_data_op(chip, oob, eccbytes, false, false);
if (ret)
return ret;
@ -2675,7 +2829,7 @@ static int nand_read_page_raw_syndrome(struct nand_chip *chip, uint8_t *buf,
if (chip->ecc.postpad) {
ret = nand_read_data_op(chip, oob, chip->ecc.postpad,
false);
false, false);
if (ret)
return ret;
@ -2685,7 +2839,7 @@ static int nand_read_page_raw_syndrome(struct nand_chip *chip, uint8_t *buf,
size = mtd->oobsize - (oob - chip->oob_poi);
if (size) {
ret = nand_read_data_op(chip, oob, size, false);
ret = nand_read_data_op(chip, oob, size, false, false);
if (ret)
return ret;
}
@ -2878,14 +3032,15 @@ static int nand_read_page_hwecc(struct nand_chip *chip, uint8_t *buf,
for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) {
chip->ecc.hwctl(chip, NAND_ECC_READ);
ret = nand_read_data_op(chip, p, eccsize, false);
ret = nand_read_data_op(chip, p, eccsize, false, false);
if (ret)
return ret;
chip->ecc.calculate(chip, p, &ecc_calc[i]);
}
ret = nand_read_data_op(chip, chip->oob_poi, mtd->oobsize, false);
ret = nand_read_data_op(chip, chip->oob_poi, mtd->oobsize, false,
false);
if (ret)
return ret;
@ -2920,76 +3075,6 @@ static int nand_read_page_hwecc(struct nand_chip *chip, uint8_t *buf,
return max_bitflips;
}
/**
* nand_read_page_hwecc_oob_first - [REPLACEABLE] hw ecc, read oob first
* @chip: nand chip info structure
* @buf: buffer to store read data
* @oob_required: caller requires OOB data read to chip->oob_poi
* @page: page number to read
*
* Hardware ECC for large page chips, require OOB to be read first. For this
* ECC mode, the write_page method is re-used from ECC_HW. These methods
* read/write ECC from the OOB area, unlike the ECC_HW_SYNDROME support with
* multiple ECC steps, follows the "infix ECC" scheme and reads/writes ECC from
* the data area, by overwriting the NAND manufacturer bad block markings.
*/
static int nand_read_page_hwecc_oob_first(struct nand_chip *chip, uint8_t *buf,
int oob_required, int page)
{
struct mtd_info *mtd = nand_to_mtd(chip);
int i, eccsize = chip->ecc.size, ret;
int eccbytes = chip->ecc.bytes;
int eccsteps = chip->ecc.steps;
uint8_t *p = buf;
uint8_t *ecc_code = chip->ecc.code_buf;
uint8_t *ecc_calc = chip->ecc.calc_buf;
unsigned int max_bitflips = 0;
/* Read the OOB area first */
ret = nand_read_oob_op(chip, page, 0, chip->oob_poi, mtd->oobsize);
if (ret)
return ret;
ret = nand_read_page_op(chip, page, 0, NULL, 0);
if (ret)
return ret;
ret = mtd_ooblayout_get_eccbytes(mtd, ecc_code, chip->oob_poi, 0,
chip->ecc.total);
if (ret)
return ret;
for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) {
int stat;
chip->ecc.hwctl(chip, NAND_ECC_READ);
ret = nand_read_data_op(chip, p, eccsize, false);
if (ret)
return ret;
chip->ecc.calculate(chip, p, &ecc_calc[i]);
stat = chip->ecc.correct(chip, p, &ecc_code[i], NULL);
if (stat == -EBADMSG &&
(chip->ecc.options & NAND_ECC_GENERIC_ERASED_CHECK)) {
/* check for empty pages with bitflips */
stat = nand_check_erased_ecc_chunk(p, eccsize,
&ecc_code[i], eccbytes,
NULL, 0,
chip->ecc.strength);
}
if (stat < 0) {
mtd->ecc_stats.failed++;
} else {
mtd->ecc_stats.corrected += stat;
max_bitflips = max_t(unsigned int, max_bitflips, stat);
}
}
return max_bitflips;
}
/**
* nand_read_page_syndrome - [REPLACEABLE] hardware ECC syndrome based page read
* @chip: nand chip info structure
@ -3021,13 +3106,13 @@ static int nand_read_page_syndrome(struct nand_chip *chip, uint8_t *buf,
chip->ecc.hwctl(chip, NAND_ECC_READ);
ret = nand_read_data_op(chip, p, eccsize, false);
ret = nand_read_data_op(chip, p, eccsize, false, false);
if (ret)
return ret;
if (chip->ecc.prepad) {
ret = nand_read_data_op(chip, oob, chip->ecc.prepad,
false);
false, false);
if (ret)
return ret;
@ -3036,7 +3121,7 @@ static int nand_read_page_syndrome(struct nand_chip *chip, uint8_t *buf,
chip->ecc.hwctl(chip, NAND_ECC_READSYN);
ret = nand_read_data_op(chip, oob, eccbytes, false);
ret = nand_read_data_op(chip, oob, eccbytes, false, false);
if (ret)
return ret;
@ -3046,7 +3131,7 @@ static int nand_read_page_syndrome(struct nand_chip *chip, uint8_t *buf,
if (chip->ecc.postpad) {
ret = nand_read_data_op(chip, oob, chip->ecc.postpad,
false);
false, false);
if (ret)
return ret;
@ -3074,7 +3159,7 @@ static int nand_read_page_syndrome(struct nand_chip *chip, uint8_t *buf,
/* Calculate remaining oob bytes */
i = mtd->oobsize - (oob - chip->oob_poi);
if (i) {
ret = nand_read_data_op(chip, oob, i, false);
ret = nand_read_data_op(chip, oob, i, false, false);
if (ret)
return ret;
}
@ -3166,7 +3251,7 @@ static int nand_do_read_ops(struct nand_chip *chip, loff_t from,
uint32_t max_oobsize = mtd_oobavail(mtd, ops);
uint8_t *bufpoi, *oob, *buf;
int use_bufpoi;
int use_bounce_buf;
unsigned int max_bitflips = 0;
int retry_mode = 0;
bool ecc_fail = false;
@ -3184,25 +3269,25 @@ static int nand_do_read_ops(struct nand_chip *chip, loff_t from,
oob_required = oob ? 1 : 0;
while (1) {
unsigned int ecc_failures = mtd->ecc_stats.failed;
struct mtd_ecc_stats ecc_stats = mtd->ecc_stats;
bytes = min(mtd->writesize - col, readlen);
aligned = (bytes == mtd->writesize);
if (!aligned)
use_bufpoi = 1;
else if (chip->options & NAND_USE_BOUNCE_BUFFER)
use_bufpoi = !virt_addr_valid(buf) ||
!IS_ALIGNED((unsigned long)buf,
chip->buf_align);
use_bounce_buf = 1;
else if (chip->options & NAND_USES_DMA)
use_bounce_buf = !virt_addr_valid(buf) ||
!IS_ALIGNED((unsigned long)buf,
chip->buf_align);
else
use_bufpoi = 0;
use_bounce_buf = 0;
/* Is the current page in the buffer? */
if (realpage != chip->pagecache.page || oob) {
bufpoi = use_bufpoi ? chip->data_buf : buf;
bufpoi = use_bounce_buf ? chip->data_buf : buf;
if (use_bufpoi && aligned)
if (use_bounce_buf && aligned)
pr_debug("%s: using read bounce buffer for buf@%p\n",
__func__, buf);
@ -3223,16 +3308,19 @@ read_retry:
ret = chip->ecc.read_page(chip, bufpoi,
oob_required, page);
if (ret < 0) {
if (use_bufpoi)
if (use_bounce_buf)
/* Invalidate page cache */
chip->pagecache.page = -1;
break;
}
/* Transfer not aligned data */
if (use_bufpoi) {
/*
* Copy back the data in the initial buffer when reading
* partial pages or when a bounce buffer is required.
*/
if (use_bounce_buf) {
if (!NAND_HAS_SUBPAGE_READ(chip) && !oob &&
!(mtd->ecc_stats.failed - ecc_failures) &&
!(mtd->ecc_stats.failed - ecc_stats.failed) &&
(ops->mode != MTD_OPS_RAW)) {
chip->pagecache.page = realpage;
chip->pagecache.bitflips = ret;
@ -3240,7 +3328,7 @@ read_retry:
/* Invalidate page cache */
chip->pagecache.page = -1;
}
memcpy(buf, chip->data_buf + col, bytes);
memcpy(buf, bufpoi + col, bytes);
}
if (unlikely(oob)) {
@ -3255,7 +3343,7 @@ read_retry:
nand_wait_readrdy(chip);
if (mtd->ecc_stats.failed - ecc_failures) {
if (mtd->ecc_stats.failed - ecc_stats.failed) {
if (retry_mode + 1 < chip->read_retries) {
retry_mode++;
ret = nand_setup_read_retry(chip,
@ -3263,8 +3351,8 @@ read_retry:
if (ret < 0)
break;
/* Reset failures; retry */
mtd->ecc_stats.failed = ecc_failures;
/* Reset ecc_stats; retry */
mtd->ecc_stats = ecc_stats;
goto read_retry;
} else {
/* No more retry modes; real failure */
@ -3373,7 +3461,7 @@ static int nand_read_oob_syndrome(struct nand_chip *chip, int page)
sndrnd = 1;
toread = min_t(int, length, chunk);
ret = nand_read_data_op(chip, bufpoi, toread, false);
ret = nand_read_data_op(chip, bufpoi, toread, false, false);
if (ret)
return ret;
@ -3381,7 +3469,7 @@ static int nand_read_oob_syndrome(struct nand_chip *chip, int page)
length -= toread;
}
if (length > 0) {
ret = nand_read_data_op(chip, bufpoi, length, false);
ret = nand_read_data_op(chip, bufpoi, length, false, false);
if (ret)
return ret;
}
@ -3633,6 +3721,42 @@ int nand_write_page_raw(struct nand_chip *chip, const uint8_t *buf,
}
EXPORT_SYMBOL(nand_write_page_raw);
/**
* nand_monolithic_write_page_raw - Monolithic page write in raw mode
* @chip: NAND chip info structure
* @buf: data buffer to write
* @oob_required: must write chip->oob_poi to OOB
* @page: page number to write
*
* This is a raw page write, i.e. without any error detection/correction.
* Monolithic means we are requesting all the relevant data (main plus
* optionally OOB) to be sent over the bus and effectively programmed
* into the NAND chip arrays in a single operation. This is an
* alternative to nand_write_page_raw(), which first sends the main
* data, then optionally sends the OOB data by latching more data
* cycles on the NAND bus, and finally sends the program command to
* synchronize the NAND chip cache.
*/
int nand_monolithic_write_page_raw(struct nand_chip *chip, const u8 *buf,
int oob_required, int page)
{
struct mtd_info *mtd = nand_to_mtd(chip);
unsigned int size = mtd->writesize;
u8 *write_buf = (u8 *)buf;
if (oob_required) {
size += mtd->oobsize;
if (buf != chip->data_buf) {
write_buf = nand_get_data_buf(chip);
memcpy(write_buf, buf, mtd->writesize);
}
}
return nand_prog_page_op(chip, page, 0, write_buf, size);
}
EXPORT_SYMBOL(nand_monolithic_write_page_raw);
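/*
 * Same idea as the read path above (illustration only): a driver that
 * can program main data and OOB in a single sequence may plug this in
 * as its raw page write:
 *
 *	chip->ecc.write_page_raw = nand_monolithic_write_page_raw;
 */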
/**
* nand_write_page_raw_syndrome - [INTERN] raw page write function
* @chip: nand chip info structure
@ -4012,20 +4136,23 @@ static int nand_do_write_ops(struct nand_chip *chip, loff_t to,
while (1) {
int bytes = mtd->writesize;
uint8_t *wbuf = buf;
int use_bufpoi;
int use_bounce_buf;
int part_pagewr = (column || writelen < mtd->writesize);
if (part_pagewr)
use_bufpoi = 1;
else if (chip->options & NAND_USE_BOUNCE_BUFFER)
use_bufpoi = !virt_addr_valid(buf) ||
!IS_ALIGNED((unsigned long)buf,
chip->buf_align);
use_bounce_buf = 1;
else if (chip->options & NAND_USES_DMA)
use_bounce_buf = !virt_addr_valid(buf) ||
!IS_ALIGNED((unsigned long)buf,
chip->buf_align);
else
use_bufpoi = 0;
use_bounce_buf = 0;
/* Partial page write?, or need to use bounce buffer */
if (use_bufpoi) {
/*
* Copy the data from the initial buffer when doing partial page
* writes or when a bounce buffer is required.
*/
if (use_bounce_buf) {
pr_debug("%s: using write bounce buffer for buf@%p\n",
__func__, buf);
if (part_pagewr)
@ -4883,7 +5010,6 @@ static const char * const nand_ecc_modes[] = {
[NAND_ECC_SOFT] = "soft",
[NAND_ECC_HW] = "hw",
[NAND_ECC_HW_SYNDROME] = "hw_syndrome",
[NAND_ECC_HW_OOB_FIRST] = "hw_oob_first",
[NAND_ECC_ON_DIE] = "on-die",
};
@ -4896,14 +5022,14 @@ static int of_get_nand_ecc_mode(struct device_node *np)
if (err < 0)
return err;
for (i = 0; i < ARRAY_SIZE(nand_ecc_modes); i++)
for (i = NAND_ECC_NONE; i < ARRAY_SIZE(nand_ecc_modes); i++)
if (!strcasecmp(pm, nand_ecc_modes[i]))
return i;
/*
* For backward compatibility we support few obsoleted values that don't
* have their mappings into nand_ecc_modes_t anymore (they were merged
* with other enums).
* have their mappings into the nand_ecc_mode enum anymore (they were
* merged with other enums).
*/
if (!strcasecmp(pm, "soft_bch"))
return NAND_ECC_SOFT;
@ -4917,17 +5043,20 @@ static const char * const nand_ecc_algos[] = {
[NAND_ECC_RS] = "rs",
};
static int of_get_nand_ecc_algo(struct device_node *np)
static enum nand_ecc_algo of_get_nand_ecc_algo(struct device_node *np)
{
enum nand_ecc_algo ecc_algo;
const char *pm;
int err, i;
int err;
err = of_property_read_string(np, "nand-ecc-algo", &pm);
if (!err) {
for (i = NAND_ECC_HAMMING; i < ARRAY_SIZE(nand_ecc_algos); i++)
if (!strcasecmp(pm, nand_ecc_algos[i]))
return i;
return -ENODEV;
for (ecc_algo = NAND_ECC_HAMMING;
ecc_algo < ARRAY_SIZE(nand_ecc_algos);
ecc_algo++) {
if (!strcasecmp(pm, nand_ecc_algos[ecc_algo]))
return ecc_algo;
}
}
/*
@ -4935,15 +5064,14 @@ static int of_get_nand_ecc_algo(struct device_node *np)
* for some obsoleted values that were specifying ECC algorithm.
*/
err = of_property_read_string(np, "nand-ecc-mode", &pm);
if (err < 0)
return err;
if (!err) {
if (!strcasecmp(pm, "soft"))
return NAND_ECC_HAMMING;
else if (!strcasecmp(pm, "soft_bch"))
return NAND_ECC_BCH;
}
if (!strcasecmp(pm, "soft"))
return NAND_ECC_HAMMING;
else if (!strcasecmp(pm, "soft_bch"))
return NAND_ECC_BCH;
return -ENODEV;
return NAND_ECC_UNKNOWN;
}
static int of_get_nand_ecc_step_size(struct device_node *np)
@ -4988,7 +5116,8 @@ static bool of_get_nand_on_flash_bbt(struct device_node *np)
static int nand_dt_init(struct nand_chip *chip)
{
struct device_node *dn = nand_get_flash_node(chip);
int ecc_mode, ecc_algo, ecc_strength, ecc_step;
enum nand_ecc_algo ecc_algo;
int ecc_mode, ecc_strength, ecc_step;
if (!dn)
return 0;
@ -5010,7 +5139,7 @@ static int nand_dt_init(struct nand_chip *chip)
if (ecc_mode >= 0)
chip->ecc.mode = ecc_mode;
if (ecc_algo >= 0)
if (ecc_algo != NAND_ECC_UNKNOWN)
chip->ecc.algo = ecc_algo;
if (ecc_strength >= 0)
@ -5140,8 +5269,10 @@ static int nand_set_ecc_soft_ops(struct nand_chip *chip)
ecc->read_page = nand_read_page_swecc;
ecc->read_subpage = nand_read_subpage;
ecc->write_page = nand_write_page_swecc;
ecc->read_page_raw = nand_read_page_raw;
ecc->write_page_raw = nand_write_page_raw;
if (!ecc->read_page_raw)
ecc->read_page_raw = nand_read_page_raw;
if (!ecc->write_page_raw)
ecc->write_page_raw = nand_write_page_raw;
ecc->read_oob = nand_read_oob_std;
ecc->write_oob = nand_write_oob_std;
if (!ecc->size)
@ -5163,8 +5294,10 @@ static int nand_set_ecc_soft_ops(struct nand_chip *chip)
ecc->read_page = nand_read_page_swecc;
ecc->read_subpage = nand_read_subpage;
ecc->write_page = nand_write_page_swecc;
ecc->read_page_raw = nand_read_page_raw;
ecc->write_page_raw = nand_write_page_raw;
if (!ecc->read_page_raw)
ecc->read_page_raw = nand_read_page_raw;
if (!ecc->write_page_raw)
ecc->write_page_raw = nand_write_page_raw;
ecc->read_oob = nand_read_oob_std;
ecc->write_oob = nand_write_oob_std;
@ -5628,16 +5761,6 @@ static int nand_scan_tail(struct nand_chip *chip)
*/
switch (ecc->mode) {
case NAND_ECC_HW_OOB_FIRST:
/* Similar to NAND_ECC_HW, but a separate read_page handle */
if (!ecc->calculate || !ecc->correct || !ecc->hwctl) {
WARN(1, "No ECC functions supplied; hardware ECC not possible\n");
ret = -EINVAL;
goto err_nand_manuf_cleanup;
}
if (!ecc->read_page)
ecc->read_page = nand_read_page_hwecc_oob_first;
fallthrough;
case NAND_ECC_HW:
/* Use standard hwecc read page function? */
if (!ecc->read_page)
@ -5781,8 +5904,10 @@ static int nand_scan_tail(struct nand_chip *chip)
/* ECC sanity check: warn if it's too weak */
if (!nand_ecc_strength_good(chip))
pr_warn("WARNING: %s: the ECC used on your system is too weak compared to the one required by the NAND chip\n",
mtd->name);
pr_warn("WARNING: %s: the ECC used on your system (%db/%dB) is too weak compared to the one required by the NAND chip (%db/%dB)\n",
mtd->name, chip->ecc.strength, chip->ecc.size,
chip->base.eccreq.strength,
chip->base.eccreq.step_size);
/* Allow subpage writes up to ecc.steps. Not possible for MLC flash */
if (!(chip->options & NAND_NO_SUBPAGE_WRITE) && nand_is_slc(chip)) {
@ -5975,18 +6100,6 @@ void nand_cleanup(struct nand_chip *chip)
EXPORT_SYMBOL_GPL(nand_cleanup);
/**
* nand_release - [NAND Interface] Unregister the MTD device and free resources
* held by the NAND device
* @chip: NAND chip object
*/
void nand_release(struct nand_chip *chip)
{
mtd_device_unregister(nand_to_mtd(chip));
nand_cleanup(chip);
}
EXPORT_SYMBOL_GPL(nand_release);
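/*
 * For reference (not part of the diff): with nand_release() removed,
 * driver remove/cleanup paths now open-code the two steps it used to
 * wrap, as seen throughout this series:
 *
 *	ret = mtd_device_unregister(nand_to_mtd(chip));
 *	WARN_ON(ret);
 *	nand_cleanup(chip);
 */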
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Steven J. Hill <sjhill@realitydiluted.com>");
MODULE_AUTHOR("Thomas Gleixner <tglx@linutronix.de>");


@ -41,7 +41,7 @@ int nand_bch_calculate_ecc(struct nand_chip *chip, const unsigned char *buf,
unsigned int i;
memset(code, 0, chip->ecc.bytes);
encode_bch(nbc->bch, buf, chip->ecc.size, code);
bch_encode(nbc->bch, buf, chip->ecc.size, code);
/* apply mask so that an erased page is a valid codeword */
for (i = 0; i < chip->ecc.bytes; i++)
@ -67,7 +67,7 @@ int nand_bch_correct_data(struct nand_chip *chip, unsigned char *buf,
unsigned int *errloc = nbc->errloc;
int i, count;
count = decode_bch(nbc->bch, NULL, chip->ecc.size, read_ecc, calc_ecc,
count = bch_decode(nbc->bch, NULL, chip->ecc.size, read_ecc, calc_ecc,
NULL, errloc);
if (count > 0) {
for (i = 0; i < count; i++) {
@ -130,7 +130,7 @@ struct nand_bch_control *nand_bch_init(struct mtd_info *mtd)
if (!nbc)
goto fail;
nbc->bch = init_bch(m, t, 0);
nbc->bch = bch_init(m, t, 0, false);
if (!nbc->bch)
goto fail;
@ -182,7 +182,7 @@ struct nand_bch_control *nand_bch_init(struct mtd_info *mtd)
goto fail;
memset(erased_page, 0xff, eccsize);
encode_bch(nbc->bch, erased_page, eccsize, nbc->eccmask);
bch_encode(nbc->bch, erased_page, eccsize, nbc->eccmask);
kfree(erased_page);
for (i = 0; i < eccbytes; i++)
@ -205,7 +205,7 @@ EXPORT_SYMBOL(nand_bch_init);
void nand_bch_free(struct nand_bch_control *nbc)
{
if (nbc) {
free_bch(nbc->bch);
bch_free(nbc->bch);
kfree(nbc->errloc);
kfree(nbc->eccmask);
kfree(nbc);
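/*
 * Summary sketch (illustration only) of the renamed BCH library API
 * used above: init_bch/encode_bch/decode_bch/free_bch become
 * bch_init/bch_encode/bch_decode/bch_free, and bch_init() gains a
 * trailing bit-swap flag (false here keeps the historical bit
 * ordering). For instance, with m=13 and t=4 (7 ECC bytes) over a
 * 512-byte buffer 'data':
 *
 *	struct bch_control *bch = bch_init(13, 4, 0, false);
 *
 *	if (bch) {
 *		u8 ecc[7] = { 0 };
 *
 *		bch_encode(bch, data, 512, ecc);
 *		bch_free(bch);
 *	}
 */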


@ -16,6 +16,8 @@
#include "internals.h"
#define JEDEC_PARAM_PAGES 3
/*
* Check if the NAND chip is JEDEC compliant, returns 1 if it is, 0 otherwise.
*/
@ -25,9 +27,11 @@ int nand_jedec_detect(struct nand_chip *chip)
struct nand_memory_organization *memorg;
struct nand_jedec_params *p;
struct jedec_ecc_info *ecc;
bool use_datain = false;
int jedec_version = 0;
char id[5];
int i, val, ret;
u16 crc;
memorg = nanddev_get_memorg(&chip->base);
@ -41,25 +45,31 @@ int nand_jedec_detect(struct nand_chip *chip)
if (!p)
return -ENOMEM;
ret = nand_read_param_page_op(chip, 0x40, NULL, 0);
if (ret) {
ret = 0;
goto free_jedec_param_page;
}
if (!nand_has_exec_op(chip) ||
!nand_read_data_op(chip, p, sizeof(*p), true, true))
use_datain = true;
for (i = 0; i < 3; i++) {
ret = nand_read_data_op(chip, p, sizeof(*p), true);
for (i = 0; i < JEDEC_PARAM_PAGES; i++) {
if (!i)
ret = nand_read_param_page_op(chip, 0x40, p,
sizeof(*p));
else if (use_datain)
ret = nand_read_data_op(chip, p, sizeof(*p), true,
false);
else
ret = nand_change_read_column_op(chip, sizeof(*p) * i,
p, sizeof(*p), true);
if (ret) {
ret = 0;
goto free_jedec_param_page;
}
if (onfi_crc16(ONFI_CRC_BASE, (uint8_t *)p, 510) ==
le16_to_cpu(p->crc))
crc = onfi_crc16(ONFI_CRC_BASE, (u8 *)p, 510);
if (crc == le16_to_cpu(p->crc))
break;
}
if (i == 3) {
if (i == JEDEC_PARAM_PAGES) {
pr_err("Could not find valid JEDEC parameter page; aborting\n");
goto free_jedec_param_page;
}


@ -225,7 +225,8 @@ static void nand_wait_status_ready(struct nand_chip *chip, unsigned long timeo)
do {
u8 status;
ret = nand_read_data_op(chip, &status, sizeof(status), true);
ret = nand_read_data_op(chip, &status, sizeof(status), true,
false);
if (ret)
return;
@ -552,7 +553,8 @@ static int nand_wait(struct nand_chip *chip)
break;
} else {
ret = nand_read_data_op(chip, &status,
sizeof(status), true);
sizeof(status), true,
false);
if (ret)
return ret;
@ -563,7 +565,7 @@ static int nand_wait(struct nand_chip *chip)
} while (time_before(jiffies, timeo));
}
ret = nand_read_data_op(chip, &status, sizeof(status), true);
ret = nand_read_data_op(chip, &status, sizeof(status), true, false);
if (ret)
return ret;


@ -192,6 +192,7 @@ static int micron_nand_on_die_ecc_status_4(struct nand_chip *chip, u8 status,
struct micron_nand *micron = nand_get_manufacturer_data(chip);
struct mtd_info *mtd = nand_to_mtd(chip);
unsigned int step, max_bitflips = 0;
bool use_datain = false;
int ret;
if (!(status & NAND_ECC_STATUS_WRITE_RECOMMENDED)) {
@ -211,8 +212,27 @@ static int micron_nand_on_die_ecc_status_4(struct nand_chip *chip, u8 status,
* in non-raw mode, even if the user did not request those bytes.
*/
if (!oob_required) {
ret = nand_read_data_op(chip, chip->oob_poi, mtd->oobsize,
false);
/*
* We first check which operation is supported by the controller
* before running it. This trick makes it possible to support
* all controllers, even the most constrained ones, with almost
* no performance hit.
*
* TODO: could be enhanced to avoid repeating the same check
* over and over in the fast path.
*/
if (!nand_has_exec_op(chip) ||
!nand_read_data_op(chip, chip->oob_poi, mtd->oobsize, false,
true))
use_datain = true;
if (use_datain)
ret = nand_read_data_op(chip, chip->oob_poi,
mtd->oobsize, false, false);
else
ret = nand_change_read_column_op(chip, mtd->writesize,
chip->oob_poi,
mtd->oobsize, false);
if (ret)
return ret;
}
@ -285,6 +305,7 @@ micron_nand_read_page_on_die_ecc(struct nand_chip *chip, uint8_t *buf,
int oob_required, int page)
{
struct mtd_info *mtd = nand_to_mtd(chip);
bool use_datain = false;
u8 status;
int ret, max_bitflips = 0;
@ -300,14 +321,36 @@ micron_nand_read_page_on_die_ecc(struct nand_chip *chip, uint8_t *buf,
if (ret)
goto out;
ret = nand_exit_status_op(chip);
if (ret)
goto out;
/*
* We first check which operation is supported by the controller before
* running it. This trick makes it possible to support all controllers,
* even the most constrained ones, with almost no performance hit.
*
* TODO: could be enhanced to avoid repeating the same check over and
* over in the fast path.
*/
if (!nand_has_exec_op(chip) ||
!nand_read_data_op(chip, buf, mtd->writesize, false, true))
use_datain = true;
ret = nand_read_data_op(chip, buf, mtd->writesize, false);
if (!ret && oob_required)
ret = nand_read_data_op(chip, chip->oob_poi, mtd->oobsize,
if (use_datain) {
ret = nand_exit_status_op(chip);
if (ret)
goto out;
ret = nand_read_data_op(chip, buf, mtd->writesize, false,
false);
if (!ret && oob_required)
ret = nand_read_data_op(chip, chip->oob_poi,
mtd->oobsize, false, false);
} else {
ret = nand_change_read_column_op(chip, 0, buf, mtd->writesize,
false);
if (!ret && oob_required)
ret = nand_change_read_column_op(chip, mtd->writesize,
chip->oob_poi,
mtd->oobsize, false);
}
if (chip->ecc.strength == 4)
max_bitflips = micron_nand_on_die_ecc_status_4(chip, status,
@ -508,8 +551,10 @@ static int micron_nand_init(struct nand_chip *chip)
chip->ecc.read_page_raw = nand_read_page_raw_notsupp;
chip->ecc.write_page_raw = nand_write_page_raw_notsupp;
} else {
chip->ecc.read_page_raw = nand_read_page_raw;
chip->ecc.write_page_raw = nand_write_page_raw;
if (!chip->ecc.read_page_raw)
chip->ecc.read_page_raw = nand_read_page_raw;
if (!chip->ecc.write_page_raw)
chip->ecc.write_page_raw = nand_write_page_raw;
}
}


@ -16,6 +16,8 @@
#include "internals.h"
#define ONFI_PARAM_PAGES 3
u16 onfi_crc16(u16 crc, u8 const *p, size_t len)
{
int i;
@ -45,12 +47,10 @@ static int nand_flash_detect_ext_param_page(struct nand_chip *chip,
if (!ep)
return -ENOMEM;
/* Send our own NAND_CMD_PARAM. */
ret = nand_read_param_page_op(chip, 0, NULL, 0);
if (ret)
goto ext_out;
/* Use the Change Read Column command to skip the ONFI param pages. */
/*
* Use the Change Read Column command to skip the ONFI param pages and
* ensure we read at the right location.
*/
ret = nand_change_read_column_op(chip,
sizeof(*p) * p->num_of_param_pages,
ep, len, true);
@ -141,11 +141,13 @@ int nand_onfi_detect(struct nand_chip *chip)
{
struct mtd_info *mtd = nand_to_mtd(chip);
struct nand_memory_organization *memorg;
struct nand_onfi_params *p;
struct nand_onfi_params *p = NULL, *pbuf;
struct onfi_params *onfi;
bool use_datain = false;
int onfi_version = 0;
char id[4];
int i, ret, val;
u16 crc;
memorg = nanddev_get_memorg(&chip->base);
@ -155,43 +157,54 @@ int nand_onfi_detect(struct nand_chip *chip)
return 0;
/* ONFI chip: allocate a buffer to hold its parameter page */
p = kzalloc((sizeof(*p) * 3), GFP_KERNEL);
if (!p)
pbuf = kzalloc((sizeof(*pbuf) * ONFI_PARAM_PAGES), GFP_KERNEL);
if (!pbuf)
return -ENOMEM;
ret = nand_read_param_page_op(chip, 0, NULL, 0);
if (ret) {
ret = 0;
goto free_onfi_param_page;
}
if (!nand_has_exec_op(chip) ||
!nand_read_data_op(chip, &pbuf[0], sizeof(*pbuf), true, true))
use_datain = true;
for (i = 0; i < 3; i++) {
ret = nand_read_data_op(chip, &p[i], sizeof(*p), true);
for (i = 0; i < ONFI_PARAM_PAGES; i++) {
if (!i)
ret = nand_read_param_page_op(chip, 0, &pbuf[i],
sizeof(*pbuf));
else if (use_datain)
ret = nand_read_data_op(chip, &pbuf[i], sizeof(*pbuf),
true, false);
else
ret = nand_change_read_column_op(chip, sizeof(*pbuf) * i,
&pbuf[i], sizeof(*pbuf),
true);
if (ret) {
ret = 0;
goto free_onfi_param_page;
}
if (onfi_crc16(ONFI_CRC_BASE, (u8 *)&p[i], 254) ==
le16_to_cpu(p->crc)) {
if (i)
memcpy(p, &p[i], sizeof(*p));
crc = onfi_crc16(ONFI_CRC_BASE, (u8 *)&pbuf[i], 254);
if (crc == le16_to_cpu(pbuf[i].crc)) {
p = &pbuf[i];
break;
}
}
if (i == 3) {
const void *srcbufs[3] = {p, p + 1, p + 2};
if (i == ONFI_PARAM_PAGES) {
const void *srcbufs[ONFI_PARAM_PAGES];
unsigned int j;
for (j = 0; j < ONFI_PARAM_PAGES; j++)
srcbufs[j] = pbuf + j;
pr_warn("Could not find a valid ONFI parameter page, trying bit-wise majority to recover it\n");
nand_bit_wise_majority(srcbufs, ARRAY_SIZE(srcbufs), p,
sizeof(*p));
nand_bit_wise_majority(srcbufs, ONFI_PARAM_PAGES, pbuf,
sizeof(*pbuf));
if (onfi_crc16(ONFI_CRC_BASE, (u8 *)p, 254) !=
le16_to_cpu(p->crc)) {
crc = onfi_crc16(ONFI_CRC_BASE, (u8 *)pbuf, 254);
if (crc != le16_to_cpu(pbuf->crc)) {
pr_err("ONFI parameter recovery failed, aborting\n");
goto free_onfi_param_page;
}
p = pbuf;
}
if (chip->manufacturer.desc && chip->manufacturer.desc->ops &&
@ -299,14 +312,14 @@ int nand_onfi_detect(struct nand_chip *chip)
chip->parameters.onfi = onfi;
/* Identification done, free the full ONFI parameter page and exit */
kfree(p);
kfree(pbuf);
return 1;
free_model:
kfree(chip->parameters.model);
free_onfi_param_page:
kfree(p);
kfree(pbuf);
return ret;
}


@ -16,6 +16,7 @@ static const struct nand_data_interface onfi_sdr_timings[] = {
/* Mode 0 */
{
.type = NAND_SDR_IFACE,
.timings.mode = 0,
.timings.sdr = {
.tCCS_min = 500000,
.tR_max = 200000000,
@ -58,6 +59,7 @@ static const struct nand_data_interface onfi_sdr_timings[] = {
/* Mode 1 */
{
.type = NAND_SDR_IFACE,
.timings.mode = 1,
.timings.sdr = {
.tCCS_min = 500000,
.tR_max = 200000000,
@ -100,6 +102,7 @@ static const struct nand_data_interface onfi_sdr_timings[] = {
/* Mode 2 */
{
.type = NAND_SDR_IFACE,
.timings.mode = 2,
.timings.sdr = {
.tCCS_min = 500000,
.tR_max = 200000000,
@ -142,6 +145,7 @@ static const struct nand_data_interface onfi_sdr_timings[] = {
/* Mode 3 */
{
.type = NAND_SDR_IFACE,
.timings.mode = 3,
.timings.sdr = {
.tCCS_min = 500000,
.tR_max = 200000000,
@ -184,6 +188,7 @@ static const struct nand_data_interface onfi_sdr_timings[] = {
/* Mode 4 */
{
.type = NAND_SDR_IFACE,
.timings.mode = 4,
.timings.sdr = {
.tCCS_min = 500000,
.tR_max = 200000000,
@ -226,6 +231,7 @@ static const struct nand_data_interface onfi_sdr_timings[] = {
/* Mode 5 */
{
.type = NAND_SDR_IFACE,
.timings.mode = 5,
.timings.sdr = {
.tCCS_min = 500000,
.tR_max = 200000000,
@ -314,10 +320,9 @@ int onfi_fill_data_interface(struct nand_chip *chip,
/* microseconds -> picoseconds */
timings->tPROG_max = 1000000ULL * ONFI_DYN_TIMING_MAX;
timings->tBERS_max = 1000000ULL * ONFI_DYN_TIMING_MAX;
timings->tR_max = 1000000ULL * 200000000ULL;
/* nanoseconds -> picoseconds */
timings->tCCS_min = 1000UL * 500000;
timings->tR_max = 200000000;
timings->tCCS_min = 500000;
}
return 0;
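/*
 * For scale (illustration only): these timings are expressed in
 * picoseconds, so the corrected defaults read 200000000 ps = 200 us
 * for tR_max and 500000 ps = 500 ns for tCCS_min, matching the SDR
 * timing mode 0 entries earlier in this file. The removed lines
 * applied an extra unit conversion to values that were already in
 * picoseconds, making the defaults several orders of magnitude too
 * large.
 */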


@ -194,6 +194,17 @@ static void toshiba_nand_decode_id(struct nand_chip *chip)
}
}
static int tc58teg5dclta00_init(struct nand_chip *chip)
{
struct mtd_info *mtd = nand_to_mtd(chip);
chip->onfi_timing_mode_default = 5;
chip->options |= NAND_NEED_SCRAMBLING;
mtd_set_pairing_scheme(mtd, &dist3_pairing_scheme);
return 0;
}
static int toshiba_nand_init(struct nand_chip *chip)
{
if (nand_is_slc(chip))
@ -204,6 +215,9 @@ static int toshiba_nand_init(struct nand_chip *chip)
chip->id.data[4] & TOSHIBA_NAND_ID4_IS_BENAND)
toshiba_nand_benand_init(chip);
if (!strcmp("TC58TEG5DCLTA00", chip->parameters.model))
tc58teg5dclta00_init(chip);
return 0;
}

[File diff suppressed because it is too large]


@ -244,9 +244,13 @@ static int ndfc_probe(struct platform_device *ofdev)
static int ndfc_remove(struct platform_device *ofdev)
{
struct ndfc_controller *ndfc = dev_get_drvdata(&ofdev->dev);
struct mtd_info *mtd = nand_to_mtd(&ndfc->chip);
struct nand_chip *chip = &ndfc->chip;
struct mtd_info *mtd = nand_to_mtd(chip);
int ret;
nand_release(&ndfc->chip);
ret = mtd_device_unregister(mtd);
WARN_ON(ret);
nand_cleanup(chip);
kfree(mtd->name);
return 0;


@ -2283,14 +2283,18 @@ static int omap_nand_remove(struct platform_device *pdev)
struct mtd_info *mtd = platform_get_drvdata(pdev);
struct nand_chip *nand_chip = mtd_to_nand(mtd);
struct omap_nand_info *info = mtd_to_omap(mtd);
int ret;
if (nand_chip->ecc.priv) {
nand_bch_free(nand_chip->ecc.priv);
nand_chip->ecc.priv = NULL;
}
if (info->dma)
dma_release_channel(info->dma);
nand_release(nand_chip);
return 0;
ret = mtd_device_unregister(mtd);
WARN_ON(ret);
nand_cleanup(nand_chip);
return ret;
}
static const struct of_device_id omap_nand_ids[] = {


@ -411,6 +411,7 @@ static int elm_probe(struct platform_device *pdev)
pm_runtime_enable(&pdev->dev);
if (pm_runtime_get_sync(&pdev->dev) < 0) {
ret = -EINVAL;
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
dev_err(&pdev->dev, "can't enable clock\n");
return ret;


@ -180,7 +180,7 @@ static int __init orion_nand_probe(struct platform_device *pdev)
mtd->name = "orion_nand";
ret = mtd_device_register(mtd, board->parts, board->nr_parts);
if (ret) {
nand_release(nc);
nand_cleanup(nc);
goto no_dev;
}
@ -195,8 +195,12 @@ static int orion_nand_remove(struct platform_device *pdev)
{
struct orion_nand_info *info = platform_get_drvdata(pdev);
struct nand_chip *chip = &info->chip;
int ret;
nand_release(chip);
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
clk_disable_unprepare(info->clk);


@ -32,6 +32,7 @@ struct oxnas_nand_ctrl {
void __iomem *io_base;
struct clk *clk;
struct nand_chip *chips[OXNAS_NAND_MAX_CHIPS];
unsigned int nchips;
};
static uint8_t oxnas_nand_read_byte(struct nand_chip *chip)
@ -79,9 +80,9 @@ static int oxnas_nand_probe(struct platform_device *pdev)
struct nand_chip *chip;
struct mtd_info *mtd;
struct resource *res;
int nchips = 0;
int count = 0;
int err = 0;
int i;
/* Allocate memory for the device structure (and zero it) */
oxnas = devm_kzalloc(&pdev->dev, sizeof(*oxnas),
@ -140,17 +141,15 @@ static int oxnas_nand_probe(struct platform_device *pdev)
goto err_release_child;
err = mtd_device_register(mtd, NULL, 0);
if (err) {
nand_release(chip);
goto err_release_child;
}
if (err)
goto err_cleanup_nand;
oxnas->chips[nchips] = chip;
++nchips;
oxnas->chips[oxnas->nchips] = chip;
++oxnas->nchips;
}
/* Exit if no chips found */
if (!nchips) {
if (!oxnas->nchips) {
err = -ENODEV;
goto err_clk_unprepare;
}
@ -159,8 +158,17 @@ static int oxnas_nand_probe(struct platform_device *pdev)
return 0;
err_cleanup_nand:
nand_cleanup(chip);
err_release_child:
of_node_put(nand_np);
for (i = 0; i < oxnas->nchips; i++) {
chip = oxnas->chips[i];
WARN_ON(mtd_device_unregister(nand_to_mtd(chip)));
nand_cleanup(chip);
}
err_clk_unprepare:
clk_disable_unprepare(oxnas->clk);
return err;
@ -169,9 +177,14 @@ err_clk_unprepare:
static int oxnas_nand_remove(struct platform_device *pdev)
{
struct oxnas_nand_ctrl *oxnas = platform_get_drvdata(pdev);
struct nand_chip *chip;
int i;
if (oxnas->chips[0])
nand_release(oxnas->chips[0]);
for (i = 0; i < oxnas->nchips; i++) {
chip = oxnas->chips[i];
WARN_ON(mtd_device_unregister(nand_to_mtd(chip)));
nand_cleanup(chip);
}
clk_disable_unprepare(oxnas->clk);


@ -146,7 +146,7 @@ static int pasemi_nand_probe(struct platform_device *ofdev)
if (mtd_device_register(pasemi_nand_mtd, NULL, 0)) {
dev_err(dev, "Unable to register MTD device\n");
err = -ENODEV;
goto out_lpc;
goto out_cleanup_nand;
}
dev_info(dev, "PA Semi NAND flash at %pR, control at I/O %x\n", &res,
@ -154,6 +154,8 @@ static int pasemi_nand_probe(struct platform_device *ofdev)
return 0;
out_cleanup_nand:
nand_cleanup(chip);
out_lpc:
release_region(lpcctl, 4);
out_ior:
@ -167,6 +169,7 @@ static int pasemi_nand_probe(struct platform_device *ofdev)
static int pasemi_nand_remove(struct platform_device *ofdev)
{
struct nand_chip *chip;
int ret;
if (!pasemi_nand_mtd)
return 0;
@ -174,7 +177,9 @@ static int pasemi_nand_remove(struct platform_device *ofdev)
chip = mtd_to_nand(pasemi_nand_mtd);
/* Release resources, unregister device */
nand_release(chip);
ret = mtd_device_unregister(pasemi_nand_mtd);
WARN_ON(ret);
nand_cleanup(chip);
release_region(lpcctl, 4);


@ -92,7 +92,7 @@ static int plat_nand_probe(struct platform_device *pdev)
if (!err)
return err;
nand_release(&data->chip);
nand_cleanup(&data->chip);
out:
if (pdata->ctrl.remove)
pdata->ctrl.remove(pdev);
@ -106,8 +106,12 @@ static int plat_nand_remove(struct platform_device *pdev)
{
struct plat_nand_data *data = platform_get_drvdata(pdev);
struct platform_nand_data *pdata = dev_get_platdata(&pdev->dev);
struct nand_chip *chip = &data->chip;
int ret;
nand_release(&data->chip);
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
if (pdata->ctrl.remove)
pdata->ctrl.remove(pdev);


@ -2836,7 +2836,7 @@ static int qcom_nand_host_init_and_register(struct qcom_nand_controller *nandc,
chip->legacy.block_markbad = qcom_nandc_block_markbad;
chip->controller = &nandc->controller;
chip->options |= NAND_NO_SUBPAGE_WRITE | NAND_USE_BOUNCE_BUFFER |
chip->options |= NAND_NO_SUBPAGE_WRITE | NAND_USES_DMA |
NAND_SKIP_BBTSCAN;
/* set up initial status value */
@ -3005,10 +3005,15 @@ static int qcom_nandc_remove(struct platform_device *pdev)
struct qcom_nand_controller *nandc = platform_get_drvdata(pdev);
struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
struct qcom_nand_host *host;
struct nand_chip *chip;
int ret;
list_for_each_entry(host, &nandc->host_list, node)
nand_release(&host->chip);
list_for_each_entry(host, &nandc->host_list, node) {
chip = &host->chip;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
}
qcom_nandc_unalloc(nandc);


@ -651,7 +651,8 @@ static int r852_register_nand_device(struct r852_device *dev)
dev->card_registered = 1;
return 0;
error3:
nand_release(dev->chip);
WARN_ON(mtd_device_unregister(nand_to_mtd(dev->chip)));
nand_cleanup(dev->chip);
error1:
/* Force card redetect */
dev->card_detected = 0;
@ -670,7 +671,8 @@ static void r852_unregister_nand_device(struct r852_device *dev)
return;
device_remove_file(&mtd->dev, &dev_attr_media_type);
nand_release(dev->chip);
WARN_ON(mtd_device_unregister(mtd));
nand_cleanup(dev->chip);
r852_engine_disable(dev);
dev->card_registered = 0;
}


@ -779,7 +779,8 @@ static int s3c24xx_nand_remove(struct platform_device *pdev)
for (mtdno = 0; mtdno < info->mtd_count; mtdno++, ptr++) {
pr_debug("releasing mtd %d (%p)\n", mtdno, ptr);
nand_release(&ptr->chip);
WARN_ON(mtd_device_unregister(nand_to_mtd(&ptr->chip)));
nand_cleanup(&ptr->chip);
}
}


@ -1204,9 +1204,13 @@ err_chip:
static int flctl_remove(struct platform_device *pdev)
{
struct sh_flctl *flctl = platform_get_drvdata(pdev);
struct nand_chip *chip = &flctl->chip;
int ret;
flctl_release_dma(flctl);
nand_release(&flctl->chip);
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
pm_runtime_disable(&pdev->dev);
return 0;


@ -183,7 +183,7 @@ static int sharpsl_nand_probe(struct platform_device *pdev)
return 0;
err_add:
nand_release(this);
nand_cleanup(this);
err_scan:
iounmap(sharpsl->io);
@ -199,13 +199,19 @@ err_get_res:
static int sharpsl_nand_remove(struct platform_device *pdev)
{
struct sharpsl_nand *sharpsl = platform_get_drvdata(pdev);
struct nand_chip *chip = &sharpsl->chip;
int ret;
/* Release resources, unregister device */
nand_release(&sharpsl->chip);
/* Unregister device */
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
/* Release resources */
nand_cleanup(chip);
iounmap(sharpsl->io);
/* Free the MTD device structure */
/* Free the driver's structure */
kfree(sharpsl);
return 0;


@ -169,7 +169,7 @@ static int socrates_nand_probe(struct platform_device *ofdev)
if (!res)
return res;
nand_release(nand_chip);
nand_cleanup(nand_chip);
out:
iounmap(host->io_base);
@ -182,8 +182,12 @@ out:
static int socrates_nand_remove(struct platform_device *ofdev)
{
struct socrates_nand_host *host = dev_get_drvdata(&ofdev->dev);
struct nand_chip *chip = &host->nand_chip;
int ret;
nand_release(&host->nand_chip);
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
iounmap(host->io_base);

(File diff suppressed because it is too large.)


@ -1698,7 +1698,7 @@ static int sunxi_nand_hw_ecc_ctrl_init(struct nand_chip *nand,
ecc->read_page = sunxi_nfc_hw_ecc_read_page_dma;
ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage_dma;
ecc->write_page = sunxi_nfc_hw_ecc_write_page_dma;
nand->options |= NAND_USE_BOUNCE_BUFFER;
nand->options |= NAND_USES_DMA;
} else {
ecc->read_page = sunxi_nfc_hw_ecc_read_page;
ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage;
@ -1907,7 +1907,8 @@ static int sunxi_nfc_exec_op(struct nand_chip *nand,
struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand);
const struct nand_op_parser *parser;
sunxi_nfc_select_chip(nand, op->cs);
if (!check_only)
sunxi_nfc_select_chip(nand, op->cs);
if (sunxi_nand->sels[op->cs].rb >= 0)
parser = &sunxi_nfc_op_parser;
@ -2003,7 +2004,7 @@ static int sunxi_nand_chip_init(struct device *dev, struct sunxi_nfc *nfc,
ret = mtd_device_register(mtd, NULL, 0);
if (ret) {
dev_err(dev, "failed to register mtd device: %d\n", ret);
nand_release(nand);
nand_cleanup(nand);
return ret;
}
@ -2038,13 +2039,18 @@ static int sunxi_nand_chips_init(struct device *dev, struct sunxi_nfc *nfc)
static void sunxi_nand_chips_cleanup(struct sunxi_nfc *nfc)
{
struct sunxi_nand_chip *sunxi_nand;
struct nand_chip *chip;
int ret;
while (!list_empty(&nfc->chips)) {
sunxi_nand = list_first_entry(&nfc->chips,
struct sunxi_nand_chip,
node);
nand_release(&sunxi_nand->nand);
sunxi_nand_ecc_cleanup(&sunxi_nand->nand.ecc);
chip = &sunxi_nand->nand;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
sunxi_nand_ecc_cleanup(&chip->ecc);
list_del(&sunxi_nand->node);
}
}


@ -568,7 +568,7 @@ static int chip_init(struct device *dev, struct device_node *np)
chip->legacy.select_chip = tango_select_chip;
chip->legacy.cmd_ctrl = tango_cmd_ctrl;
chip->legacy.dev_ready = tango_dev_ready;
chip->options = NAND_USE_BOUNCE_BUFFER |
chip->options = NAND_USES_DMA |
NAND_NO_SUBPAGE_WRITE |
NAND_WAIT_TCCS;
chip->controller = &nfc->hw;
@ -600,14 +600,19 @@ static int chip_init(struct device *dev, struct device_node *np)
static int tango_nand_remove(struct platform_device *pdev)
{
int cs;
struct tango_nfc *nfc = platform_get_drvdata(pdev);
struct nand_chip *chip;
int cs, ret;
dma_release_channel(nfc->chan);
for (cs = 0; cs < MAX_CS; ++cs) {
if (nfc->chips[cs])
nand_release(&nfc->chips[cs]->nand_chip);
if (nfc->chips[cs]) {
chip = &nfc->chips[cs]->nand_chip;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
}
}
return 0;


@ -467,7 +467,9 @@ static int tegra_nand_exec_op(struct nand_chip *chip,
const struct nand_operation *op,
bool check_only)
{
tegra_nand_select_target(chip, op->cs);
if (!check_only)
tegra_nand_select_target(chip, op->cs);
return nand_op_parser_exec_op(chip, &tegra_nand_op_parser, op,
check_only);
}
@ -1113,7 +1115,7 @@ static int tegra_nand_chips_init(struct device *dev,
if (!mtd->name)
mtd->name = "tegra_nand";
chip->options = NAND_NO_SUBPAGE_WRITE | NAND_USE_BOUNCE_BUFFER;
chip->options = NAND_NO_SUBPAGE_WRITE | NAND_USES_DMA;
ret = nand_scan(chip, 1);
if (ret)


@ -448,7 +448,7 @@ static int tmio_probe(struct platform_device *dev)
if (!retval)
return retval;
nand_release(nand_chip);
nand_cleanup(nand_chip);
err_irq:
tmio_hw_stop(dev, tmio);
@ -458,8 +458,12 @@ err_irq:
static int tmio_remove(struct platform_device *dev)
{
struct tmio_nand *tmio = platform_get_drvdata(dev);
struct nand_chip *chip = &tmio->chip;
int ret;
nand_release(&tmio->chip);
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
tmio_hw_stop(dev, tmio);
return 0;
}


@ -371,7 +371,7 @@ static int __init txx9ndfmc_probe(struct platform_device *dev)
static int __exit txx9ndfmc_remove(struct platform_device *dev)
{
struct txx9ndfmc_drvdata *drvdata = platform_get_drvdata(dev);
int i;
int ret, i;
if (!drvdata)
return 0;
@ -385,7 +385,9 @@ static int __exit txx9ndfmc_remove(struct platform_device *dev)
chip = mtd_to_nand(mtd);
txx9_priv = nand_get_controller_data(chip);
nand_release(chip);
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
kfree(txx9_priv->mtdname);
kfree(txx9_priv);
}


@ -502,7 +502,9 @@ static int vf610_nfc_exec_op(struct nand_chip *chip,
const struct nand_operation *op,
bool check_only)
{
vf610_nfc_select_target(chip, op->cs);
if (!check_only)
vf610_nfc_select_target(chip, op->cs);
return nand_op_parser_exec_op(chip, &vf610_nfc_op_parser, op,
check_only);
}
@ -915,8 +917,12 @@ err_disable_clk:
static int vf610_nfc_remove(struct platform_device *pdev)
{
struct vf610_nfc *nfc = platform_get_drvdata(pdev);
struct nand_chip *chip = &nfc->chip;
int ret;
nand_release(&nfc->chip);
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
clk_disable_unprepare(nfc->clk);
return 0;
}
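The sunxi, tegra and vf610 ->exec_op() hunks make the same behavioural change: when the core only asks whether an operation is supported (check_only), the controller must not touch the chip-select. A minimal sketch of that pattern, where my_nfc_select_target() and my_op_parser stand in for driver-specific pieces that would be defined elsewhere:

#include <linux/mtd/rawnand.h>

/* Hypothetical driver hooks, assumed to exist elsewhere in the driver. */
static void my_nfc_select_target(struct nand_chip *chip, unsigned int cs);
static const struct nand_op_parser my_op_parser;

static int my_nfc_exec_op(struct nand_chip *chip,
                          const struct nand_operation *op,
                          bool check_only)
{
    /* Touch the hardware only when the operation will really run. */
    if (!check_only)
        my_nfc_select_target(chip, op->cs);

    return nand_op_parser_exec_op(chip, &my_op_parser, op, check_only);
}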


@ -210,7 +210,7 @@ static int xway_nand_probe(struct platform_device *pdev)
err = mtd_device_register(mtd, NULL, 0);
if (err)
nand_release(&data->chip);
nand_cleanup(&data->chip);
return err;
}
@ -221,8 +221,12 @@ static int xway_nand_probe(struct platform_device *pdev)
static int xway_nand_remove(struct platform_device *pdev)
{
struct xway_nand_data *data = platform_get_drvdata(pdev);
struct nand_chip *chip = &data->chip;
int ret;
nand_release(&data->chip);
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
return 0;
}


@ -9,7 +9,7 @@
*
* mtdparts=<mtddef>[;<mtddef]
* <mtddef> := <mtd-id>:<partdef>[,<partdef>]
* <partdef> := <size>[@<offset>][<name>][ro][lk]
* <partdef> := <size>[@<offset>][<name>][ro][lk][slc]
* <mtd-id> := unique name used in mapping driver/device (mtd->name)
* <size> := standard linux memsize OR "-" to denote all remaining space
* size is automatically truncated at end of device
@ -92,7 +92,7 @@ static struct mtd_partition * newpart(char *s,
int name_len;
unsigned char *extra_mem;
char delim;
unsigned int mask_flags;
unsigned int mask_flags, add_flags;
/* fetch the partition size */
if (*s == '-') {
@ -109,6 +109,7 @@ static struct mtd_partition * newpart(char *s,
/* fetch partition name and flags */
mask_flags = 0; /* this is going to be a regular partition */
add_flags = 0;
delim = 0;
/* check for offset */
@ -152,6 +153,12 @@ static struct mtd_partition * newpart(char *s,
s += 2;
}
/* if slc is found use emulated SLC mode on this partition*/
if (!strncmp(s, "slc", 3)) {
add_flags |= MTD_SLC_ON_MLC_EMULATION;
s += 3;
}
/* test if more partitions are following */
if (*s == ',') {
if (size == SIZE_REMAINING) {
@ -184,6 +191,7 @@ static struct mtd_partition * newpart(char *s,
parts[this_part].size = size;
parts[this_part].offset = offset;
parts[this_part].mask_flags = mask_flags;
parts[this_part].add_flags = add_flags;
if (name)
strlcpy(extra_mem, name, name_len + 1);
else
@ -218,12 +226,29 @@ static int mtdpart_setup_real(char *s)
struct cmdline_mtd_partition *this_mtd;
struct mtd_partition *parts;
int mtd_id_len, num_parts;
char *p, *mtd_id;
char *p, *mtd_id, *semicol;
/*
* Replace the first ';' by a NULL char so strrchr can work
* properly.
*/
semicol = strchr(s, ';');
if (semicol)
*semicol = '\0';
mtd_id = s;
/* fetch <mtd-id> */
p = strchr(s, ':');
/*
* fetch <mtd-id>. We use strrchr to ignore all ':' that could
* be present in the MTD name, only the last one is interpreted
* as an <mtd-id>/<part-definition> separator.
*/
p = strrchr(s, ':');
/* Restore the ';' now. */
if (semicol)
*semicol = ';';
if (!p) {
pr_err("no mtd-id\n");
return -EINVAL;
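Both cmdlinepart changes above are small enough to reproduce in isolation. The sketch below is a plain user-space approximation of the new <mtd-id> lookup (hide everything after the first ';', then take the last ':'), not the kernel parser itself, and parse_mtd_id() is a made-up name; the "slc" suffix handling is just the strncmp() shown in the hunk and only sets MTD_SLC_ON_MLC_EMULATION in add_flags.

#include <stdio.h>
#include <string.h>

/*
 * Find the <mtd-id>/<partdef> separator in one "mtddef" string the same
 * way the updated parser does: hide everything after the first ';' (the
 * next mtddef), then take the *last* ':' so that MTD names which
 * themselves contain colons (e.g. "spi0.0:nand") keep working.
 */
static const char *parse_mtd_id(char *s, size_t *id_len)
{
    char *semicol = strchr(s, ';');
    char *p;

    if (semicol)
        *semicol = '\0';

    p = strrchr(s, ':');

    if (semicol)
        *semicol = ';';

    if (!p)
        return NULL;

    *id_len = (size_t)(p - s);
    return s;
}

int main(void)
{
    char cmdline[] = "spi0.0:nand:1m(boot)ro,-(rootfs)slc;ram0:4k(cfg)";
    size_t len;
    const char *id = parse_mtd_id(cmdline, &len);

    if (id)
        printf("mtd-id: %.*s\n", (int)len, id);  /* prints "spi0.0:nand" */
    return 0;
}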


@ -117,6 +117,9 @@ static int parse_fixed_partitions(struct mtd_info *master,
if (of_get_property(pp, "lock", &len))
parts[i].mask_flags |= MTD_POWERUP_LOCK;
if (of_property_read_bool(pp, "slc-mode"))
parts[i].add_flags |= MTD_SLC_ON_MLC_EMULATION;
i++;
}


@ -1,12 +1,12 @@
# SPDX-License-Identifier: GPL-2.0-only
menuconfig MTD_SPI_NOR
tristate "SPI-NOR device support"
tristate "SPI NOR device support"
depends on MTD
depends on MTD && SPI_MASTER
select SPI_MEM
help
This is the framework for the SPI NOR which can be used by the SPI
device drivers and the SPI-NOR device driver.
device drivers and the SPI NOR device driver.
if MTD_SPI_NOR


@ -21,11 +21,11 @@ config SPI_CADENCE_QUADSPI
Flash as an MTD device.
config SPI_HISI_SFC
tristate "Hisilicon FMC SPI-NOR Flash Controller(SFC)"
tristate "Hisilicon FMC SPI NOR Flash Controller(SFC)"
depends on ARCH_HISI || COMPILE_TEST
depends on HAS_IOMEM
help
This enables support for HiSilicon FMC SPI-NOR flash controller.
This enables support for HiSilicon FMC SPI NOR flash controller.
config SPI_NXP_SPIFI
tristate "NXP SPI Flash Interface (SPIFI)"


@ -727,7 +727,7 @@ static int aspeed_smc_chip_setup_finish(struct aspeed_smc_chip *chip)
/*
* TODO: Adjust clocks if fast read is supported and interpret
* SPI-NOR flags to adjust controller settings.
* SPI NOR flags to adjust controller settings.
*/
if (chip->nor.read_proto == SNOR_PROTO_1_1_1) {
if (chip->nor.read_dummy == 0)


@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* HiSilicon FMC SPI-NOR flash controller driver
* HiSilicon FMC SPI NOR flash controller driver
*
* Copyright (c) 2015-2016 HiSilicon Technologies Co., Ltd.
*/


@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* SPI-NOR driver for NXP SPI Flash Interface (SPIFI)
* SPI NOR driver for NXP SPI Flash Interface (SPIFI)
*
* Copyright (C) 2015 Joachim Eastwood <manabian@gmail.com>
*


@ -499,7 +499,7 @@ int spi_nor_xread_sr(struct spi_nor *nor, u8 *sr)
* the flash is ready for new commands.
* @nor: pointer to 'struct spi_nor'.
*
* Return: 0 on success, -errno otherwise.
* Return: 1 if ready, 0 if not ready, -errno on errors.
*/
static int spi_nor_xsr_ready(struct spi_nor *nor)
{
@ -542,7 +542,7 @@ static void spi_nor_clear_sr(struct spi_nor *nor)
* for new commands.
* @nor: pointer to 'struct spi_nor'.
*
* Return: 0 on success, -errno otherwise.
* Return: 1 if ready, 0 if not ready, -errno on errors.
*/
static int spi_nor_sr_ready(struct spi_nor *nor)
{
@ -606,7 +606,7 @@ static void spi_nor_clear_fsr(struct spi_nor *nor)
* ready for new commands.
* @nor: pointer to 'struct spi_nor'.
*
* Return: 0 on success, -errno otherwise.
* Return: 1 if ready, 0 if not ready, -errno on errors.
*/
static int spi_nor_fsr_ready(struct spi_nor *nor)
{
@ -640,14 +640,14 @@ static int spi_nor_fsr_ready(struct spi_nor *nor)
return -EIO;
}
return nor->bouncebuf[0] & FSR_READY;
return !!(nor->bouncebuf[0] & FSR_READY);
}
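The !!() conversion above is what makes the new kernel-doc ("1 if ready, 0 if not ready") literally true: FSR_READY is a single status bit, so the raw masked value is that bit's weight rather than 1. A stand-alone illustration, with the ready bit assumed to be bit 7 as in the SPI NOR definitions:

#include <stdio.h>

#define BIT(n)     (1u << (n))
#define FSR_READY  BIT(7)  /* assumption: "ready" is bit 7 of the FSR */

int main(void)
{
    unsigned int fsr = 0x80;              /* device reports ready */
    unsigned int raw = fsr & FSR_READY;   /* 128, not 1 */
    int norm = !!(fsr & FSR_READY);       /* normalized to exactly 1 */

    printf("raw mask  : %u\n", raw);
    printf("normalized: %d\n", norm);
    return 0;
}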
/**
* spi_nor_ready() - Query the flash to see if it is ready for new commands.
* @nor: pointer to 'struct spi_nor'.
*
* Return: 0 on success, -errno otherwise.
* Return: 1 if ready, 0 if not ready, -errno on errors.
*/
static int spi_nor_ready(struct spi_nor *nor)
{
@ -2469,7 +2469,7 @@ static int spi_nor_select_read(struct spi_nor *nor,
nor->read_proto = read->proto;
/*
* In the spi-nor framework, we don't need to make the difference
* In the SPI NOR framework, we don't need to make the difference
* between mode clock cycles and wait state clock cycles.
* Indeed, the value of the mode clock cycles is used by a QSPI
* flash memory to know whether it should enter or leave its 0-4-4
@ -2675,7 +2675,7 @@ static int spi_nor_setup(struct spi_nor *nor,
/**
* spi_nor_manufacturer_init_params() - Initialize the flash's parameters and
* settings based on MFR register and ->default_init() hook.
* @nor: pointer to a 'struct spi-nor'.
* @nor: pointer to a 'struct spi_nor'.
*/
static void spi_nor_manufacturer_init_params(struct spi_nor *nor)
{
@ -2690,7 +2690,7 @@ static void spi_nor_manufacturer_init_params(struct spi_nor *nor)
/**
* spi_nor_sfdp_init_params() - Initialize the flash's parameters and settings
* based on JESD216 SFDP standard.
* @nor: pointer to a 'struct spi-nor'.
* @nor: pointer to a 'struct spi_nor'.
*
* The method has a roll-back mechanism: in case the SFDP parsing fails, the
* legacy flash parameters and settings will be restored.
@ -2712,7 +2712,7 @@ static void spi_nor_sfdp_init_params(struct spi_nor *nor)
/**
* spi_nor_info_init_params() - Initialize the flash's parameters and settings
* based on nor->info data.
* @nor: pointer to a 'struct spi-nor'.
* @nor: pointer to a 'struct spi_nor'.
*/
static void spi_nor_info_init_params(struct spi_nor *nor)
{
@ -2841,7 +2841,7 @@ static void spi_nor_late_init_params(struct spi_nor *nor)
/**
* spi_nor_init_params() - Initialize the flash's parameters and settings.
* @nor: pointer to a 'struct spi-nor'.
* @nor: pointer to a 'struct spi_nor'.
*
* The flash parameters and settings are initialized based on a sequence of
* calls that are ordered by priority:
@ -3126,7 +3126,7 @@ int spi_nor_scan(struct spi_nor *nor, const char *name,
/*
* Make sure the XSR_RDY flag is set before calling
* spi_nor_wait_till_ready(). Xilinx S3AN share MFR
* with Atmel spi-nor
* with Atmel SPI NOR.
*/
if (info->flags & SPI_NOR_XSR_RDY)
nor->flags |= SNOR_F_READY_XSR_RDY;


@ -63,10 +63,16 @@ static const struct flash_info macronix_parts[] = {
.fixups = &mx25l25635_fixups },
{ "mx25u25635f", INFO(0xc22539, 0, 64 * 1024, 512,
SECT_4K | SPI_NOR_4B_OPCODES) },
{ "mx25u51245g", INFO(0xc2253a, 0, 64 * 1024, 1024,
SECT_4K | SPI_NOR_DUAL_READ |
SPI_NOR_QUAD_READ | SPI_NOR_4B_OPCODES) },
{ "mx25v8035f", INFO(0xc22314, 0, 64 * 1024, 16,
SECT_4K | SPI_NOR_DUAL_READ |
SPI_NOR_QUAD_READ) },
{ "mx25l25655e", INFO(0xc22619, 0, 64 * 1024, 512, 0) },
{ "mx25l51245g", INFO(0xc2201a, 0, 64 * 1024, 1024,
SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
SPI_NOR_4B_OPCODES) },
{ "mx66l51235l", INFO(0xc2201a, 0, 64 * 1024, 1024,
SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
SPI_NOR_4B_OPCODES) },


@ -29,7 +29,9 @@ static const struct flash_info st_parts[] = {
{ "n25q064a", INFO(0x20bb17, 0, 64 * 1024, 128,
SECT_4K | SPI_NOR_QUAD_READ) },
{ "n25q128a11", INFO(0x20bb18, 0, 64 * 1024, 256,
SECT_4K | USE_FSR | SPI_NOR_QUAD_READ) },
SECT_4K | USE_FSR | SPI_NOR_QUAD_READ |
SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB |
SPI_NOR_4BIT_BP | SPI_NOR_BP3_SR_BIT6) },
{ "n25q128a13", INFO(0x20ba18, 0, 64 * 1024, 256,
SECT_4K | USE_FSR | SPI_NOR_QUAD_READ) },
{ "mt25ql256a", INFO6(0x20ba19, 0x104400, 64 * 1024, 512,
@ -59,6 +61,8 @@ static const struct flash_info st_parts[] = {
SPI_NOR_4BIT_BP | SPI_NOR_BP3_SR_BIT6) },
{ "n25q00", INFO(0x20ba21, 0, 64 * 1024, 2048,
SECT_4K | USE_FSR | SPI_NOR_QUAD_READ |
SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB |
SPI_NOR_4BIT_BP | SPI_NOR_BP3_SR_BIT6 |
NO_CHIP_ERASE) },
{ "n25q00a", INFO(0x20bb21, 0, 64 * 1024, 2048,
SECT_4K | USE_FSR | SPI_NOR_QUAD_READ |


@ -21,10 +21,6 @@
#define SFDP_4BAIT_ID 0xff84 /* 4-byte Address Instruction Table */
#define SFDP_SIGNATURE 0x50444653U
#define SFDP_JESD216_MAJOR 1
#define SFDP_JESD216_MINOR 0
#define SFDP_JESD216A_MINOR 5
#define SFDP_JESD216B_MINOR 6
struct sfdp_header {
u32 signature; /* Ox50444653U <=> "SFDP" */
@ -437,7 +433,7 @@ static int spi_nor_parse_bfpt(struct spi_nor *nor,
struct sfdp_bfpt bfpt;
size_t len;
int i, cmd, err;
u32 addr;
u32 addr, val;
u16 half;
u8 erase_mask;
@ -460,6 +456,7 @@ static int spi_nor_parse_bfpt(struct spi_nor *nor,
/* Number of address bytes. */
switch (bfpt.dwords[BFPT_DWORD(1)] & BFPT_DWORD1_ADDRESS_BYTES_MASK) {
case BFPT_DWORD1_ADDRESS_BYTES_3_ONLY:
case BFPT_DWORD1_ADDRESS_BYTES_3_OR_4:
nor->addr_width = 3;
break;
@ -472,21 +469,21 @@ static int spi_nor_parse_bfpt(struct spi_nor *nor,
}
/* Flash Memory Density (in bits). */
params->size = bfpt.dwords[BFPT_DWORD(2)];
if (params->size & BIT(31)) {
params->size &= ~BIT(31);
val = bfpt.dwords[BFPT_DWORD(2)];
if (val & BIT(31)) {
val &= ~BIT(31);
/*
* Prevent overflows on params->size. Anyway, a NOR of 2^64
* bits is unlikely to exist so this error probably means
* the BFPT we are reading is corrupted/wrong.
*/
if (params->size > 63)
if (val > 63)
return -EINVAL;
params->size = 1ULL << params->size;
params->size = 1ULL << val;
} else {
params->size++;
params->size = val + 1;
}
params->size >>= 3; /* Convert to bytes. */
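Introducing the scratch variable also makes the BFPT density rule easier to read: DWORD 2 either stores the size in bits minus one (bit 31 clear) or an exponent N meaning 2^N bits (bit 31 set). A stand-alone sketch of that decoding, independent of the kernel structures; the sample values are only examples:

#include <stdint.h>
#include <stdio.h>

/* Decode BFPT DWORD 2 (flash memory density) into a size in bytes.
 * Returns 0 for values a sane table should never contain. */
static uint64_t bfpt_density_to_bytes(uint32_t dword2)
{
    uint64_t bits;

    if (dword2 & 0x80000000u) {
        uint32_t n = dword2 & 0x7fffffffu;

        /* 2^64 bits or more cannot be represented: treat as bogus. */
        if (n > 63)
            return 0;
        bits = 1ULL << n;
    } else {
        bits = (uint64_t)dword2 + 1;
    }

    return bits >> 3;  /* bits -> bytes */
}

int main(void)
{
    /* 128 Mbit part, linear encoding: density = size in bits - 1. */
    printf("%llu\n", (unsigned long long)bfpt_density_to_bytes(0x07ffffff));
    /* 4 Gbit part, exponent encoding: 2^32 bits. */
    printf("%llu\n", (unsigned long long)bfpt_density_to_bytes(0x80000020));
    return 0;
}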
@ -548,15 +545,15 @@ static int spi_nor_parse_bfpt(struct spi_nor *nor,
SNOR_ERASE_TYPE_MASK;
/* Stop here if not JESD216 rev A or later. */
if (bfpt_header->length < BFPT_DWORD_MAX)
if (bfpt_header->length == BFPT_DWORD_MAX_JESD216)
return spi_nor_post_bfpt_fixups(nor, bfpt_header, &bfpt,
params);
/* Page size: this field specifies 'N' so the page size = 2^N bytes. */
params->page_size = bfpt.dwords[BFPT_DWORD(11)];
params->page_size &= BFPT_DWORD11_PAGE_SIZE_MASK;
params->page_size >>= BFPT_DWORD11_PAGE_SIZE_SHIFT;
params->page_size = 1U << params->page_size;
val = bfpt.dwords[BFPT_DWORD(11)];
val &= BFPT_DWORD11_PAGE_SIZE_MASK;
val >>= BFPT_DWORD11_PAGE_SIZE_SHIFT;
params->page_size = 1U << val;
/* Quad Enable Requirements. */
switch (bfpt.dwords[BFPT_DWORD(15)] & BFPT_DWORD15_QER_MASK) {
@ -604,6 +601,11 @@ static int spi_nor_parse_bfpt(struct spi_nor *nor,
return -EINVAL;
}
/* Stop here if not JESD216 rev C or later. */
if (bfpt_header->length == BFPT_DWORD_MAX_JESD216B)
return spi_nor_post_bfpt_fixups(nor, bfpt_header, &bfpt,
params);
return spi_nor_post_bfpt_fixups(nor, bfpt_header, &bfpt, params);
}


@ -7,14 +7,20 @@
#ifndef __LINUX_MTD_SFDP_H
#define __LINUX_MTD_SFDP_H
/* SFDP revisions */
#define SFDP_JESD216_MAJOR 1
#define SFDP_JESD216_MINOR 0
#define SFDP_JESD216A_MINOR 5
#define SFDP_JESD216B_MINOR 6
/* Basic Flash Parameter Table */
/*
* JESD216 rev B defines a Basic Flash Parameter Table of 16 DWORDs.
* JESD216 rev D defines a Basic Flash Parameter Table of 20 DWORDs.
* They are indexed from 1 but C arrays are indexed from 0.
*/
#define BFPT_DWORD(i) ((i) - 1)
#define BFPT_DWORD_MAX 16
#define BFPT_DWORD_MAX 20
struct sfdp_bfpt {
u32 dwords[BFPT_DWORD_MAX];
@ -22,6 +28,7 @@ struct sfdp_bfpt {
/* The first version of JESD216 defined only 9 DWORDs. */
#define BFPT_DWORD_MAX_JESD216 9
#define BFPT_DWORD_MAX_JESD216B 16
/* 1st DWORD. */
#define BFPT_DWORD1_FAST_READ_1_1_2 BIT(16)
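Given the sizes above, the BFPT length alone tells the parser how much of the table it may trust: 9 DWORDs is the original JESD216, 16 is rev A/B, and 20 (the new BFPT_DWORD_MAX) is rev C or later. A stand-alone helper expressing that rule, reusing only the constants from this hunk:

#include <stdio.h>

#define BFPT_DWORD_MAX_JESD216   9   /* original JESD216 */
#define BFPT_DWORD_MAX_JESD216B  16  /* JESD216A / JESD216B */
#define BFPT_DWORD_MAX           20  /* JESD216C / JESD216D */

static const char *bfpt_generation(unsigned int ndwords)
{
    if (ndwords <= BFPT_DWORD_MAX_JESD216)
        return "JESD216 (9 DWORDs)";
    if (ndwords <= BFPT_DWORD_MAX_JESD216B)
        return "JESD216A/B (16 DWORDs)";
    return "JESD216C/D (20 DWORDs)";
}

int main(void)
{
    printf("%s\n", bfpt_generation(9));
    printf("%s\n", bfpt_generation(16));
    printf("%s\n", bfpt_generation(20));
    return 0;
}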


@ -8,6 +8,27 @@
#include "core.h"
static int
s25fs_s_post_bfpt_fixups(struct spi_nor *nor,
const struct sfdp_parameter_header *bfpt_header,
const struct sfdp_bfpt *bfpt,
struct spi_nor_flash_parameter *params)
{
/*
* The S25FS-S chip family reports 512-byte pages in BFPT but
* in reality the write buffer still wraps at the safe default
* of 256 bytes. Overwrite the page size advertised by BFPT
* to get the writes working.
*/
params->page_size = 256;
return 0;
}
static struct spi_nor_fixups s25fs_s_fixups = {
.post_bfpt = s25fs_s_post_bfpt_fixups,
};
static const struct flash_info spansion_parts[] = {
/* Spansion/Cypress -- single (large) sector size only, at least
* for the chips listed here (without boot sectors).
@ -22,16 +43,27 @@ static const struct flash_info spansion_parts[] = {
{ "s25fl128s1", INFO6(0x012018, 0x4d0180, 64 * 1024, 256,
SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
USE_CLSR) },
{ "s25fl256s0", INFO(0x010219, 0x4d00, 256 * 1024, 128, USE_CLSR) },
{ "s25fl256s1", INFO(0x010219, 0x4d01, 64 * 1024, 512,
SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
USE_CLSR) },
{ "s25fl256s0", INFO6(0x010219, 0x4d0080, 256 * 1024, 128,
SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
USE_CLSR) },
{ "s25fl256s1", INFO6(0x010219, 0x4d0180, 64 * 1024, 512,
SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
USE_CLSR) },
{ "s25fl512s", INFO6(0x010220, 0x4d0080, 256 * 1024, 256,
SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
SPI_NOR_HAS_LOCK | USE_CLSR) },
{ "s25fs512s", INFO6(0x010220, 0x4d0081, 256 * 1024, 256,
{ "s25fs128s1", INFO6(0x012018, 0x4d0181, 64 * 1024, 256,
SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ | USE_CLSR)
.fixups = &s25fs_s_fixups, },
{ "s25fs256s0", INFO6(0x010219, 0x4d0081, 256 * 1024, 128,
SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
USE_CLSR) },
{ "s25fs256s1", INFO6(0x010219, 0x4d0181, 64 * 1024, 512,
SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
USE_CLSR) },
{ "s25fs512s", INFO6(0x010220, 0x4d0081, 256 * 1024, 256,
SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ | USE_CLSR)
.fixups = &s25fs_s_fixups, },
{ "s70fl01gs", INFO(0x010221, 0x4d00, 256 * 1024, 256, 0) },
{ "s25sl12800", INFO(0x012018, 0x0300, 256 * 1024, 64, 0) },
{ "s25sl12801", INFO(0x012018, 0x0301, 64 * 1024, 256, 0) },
@ -70,6 +102,8 @@ static const struct flash_info spansion_parts[] = {
{ "s25fl256l", INFO(0x016019, 0, 64 * 1024, 512,
SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
SPI_NOR_4B_OPCODES) },
{ "cy15x104q", INFO6(0x042cc2, 0x7f7f7f, 512 * 1024, 1,
SPI_NOR_NO_ERASE) },
};
static void spansion_post_sfdp_fixups(struct spi_nor *nor)


@ -8,6 +8,31 @@
#include "core.h"
static int
w25q256_post_bfpt_fixups(struct spi_nor *nor,
const struct sfdp_parameter_header *bfpt_header,
const struct sfdp_bfpt *bfpt,
struct spi_nor_flash_parameter *params)
{
/*
* W25Q256JV supports 4B opcodes but W25Q256FV does not.
* Unfortunately, Winbond has re-used the same JEDEC ID for both
* variants which prevents us from defining a new entry in the parts
* table.
* To differentiate between W25Q256JV and W25Q256FV check SFDP header
* version: only JV has JESD216A compliant structure (version 5).
*/
if (bfpt_header->major == SFDP_JESD216_MAJOR &&
bfpt_header->minor == SFDP_JESD216A_MINOR)
nor->flags |= SNOR_F_4B_OPCODES;
return 0;
}
static struct spi_nor_fixups w25q256_fixups = {
.post_bfpt = w25q256_post_bfpt_fixups,
};
static const struct flash_info winbond_parts[] = {
/* Winbond -- w25x "blocks" are 64K, "sectors" are 4KiB */
{ "w25x05", INFO(0xef3010, 0, 64 * 1024, 1, SECT_4K) },
@ -53,8 +78,8 @@ static const struct flash_info winbond_parts[] = {
{ "w25q80bl", INFO(0xef4014, 0, 64 * 1024, 16, SECT_4K) },
{ "w25q128", INFO(0xef4018, 0, 64 * 1024, 256, SECT_4K) },
{ "w25q256", INFO(0xef4019, 0, 64 * 1024, 512,
SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
SPI_NOR_4B_OPCODES) },
SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ)
.fixups = &w25q256_fixups },
{ "w25q256jvm", INFO(0xef7019, 0, 64 * 1024, 512,
SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
{ "w25q256jw", INFO(0xef6019, 0, 64 * 1024, 512,


@ -867,8 +867,11 @@ int ubi_attach_mtd_dev(struct mtd_info *mtd, int ubi_num,
* Both UBI and UBIFS have been designed for SLC NAND and NOR flashes.
* MLC NAND is different and needs special care, otherwise UBI or UBIFS
* will die soon and you will lose all your data.
* Relax this rule if the partition we're attaching to operates in SLC
* mode.
*/
if (mtd->type == MTD_MLCNANDFLASH) {
if (mtd->type == MTD_MLCNANDFLASH &&
!(mtd->flags & MTD_SLC_ON_MLC_EMULATION)) {
pr_err("ubi: refuse attaching mtd%d - MLC NAND is not supported\n",
mtd->index);
return -EINVAL;


@ -33,6 +33,7 @@
* @cache: log-based polynomial representation buffer
* @elp: error locator polynomial
* @poly_2t: temporary polynomials of degree 2t
* @swap_bits: swap bits within data and syndrome bytes
*/
struct bch_control {
unsigned int m;
@ -51,16 +52,18 @@ struct bch_control {
int *cache;
struct gf_poly *elp;
struct gf_poly *poly_2t[4];
bool swap_bits;
};
struct bch_control *init_bch(int m, int t, unsigned int prim_poly);
struct bch_control *bch_init(int m, int t, unsigned int prim_poly,
bool swap_bits);
void free_bch(struct bch_control *bch);
void bch_free(struct bch_control *bch);
void encode_bch(struct bch_control *bch, const uint8_t *data,
void bch_encode(struct bch_control *bch, const uint8_t *data,
unsigned int len, uint8_t *ecc);
int decode_bch(struct bch_control *bch, const uint8_t *data, unsigned int len,
int bch_decode(struct bch_control *bch, const uint8_t *data, unsigned int len,
const uint8_t *recv_ecc, const uint8_t *calc_ecc,
const unsigned int *syn, unsigned int *errloc);
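For code built on lib/bch the rename is mechanical, plus one new bch_init() argument. A hedged usage sketch against the prototypes above; the geometry (m = 13, t = 4) and the roughly 1 KiB data limit it implies are only an example, and example_bch_roundtrip() is not an existing kernel function:

#include <linux/bch.h>
#include <linux/errno.h>
#include <linux/slab.h>

static int example_bch_roundtrip(const u8 *data, unsigned int len)
{
    struct bch_control *bch;
    unsigned int *errloc;
    u8 ecc[7] = { 0 };  /* 13 bits * 4 = 52 parity bits -> at most 7 bytes */
    int nerr;

    /* m = 13, t = 4, default primitive polynomial, no bit swapping. */
    bch = bch_init(13, 4, 0, false);
    if (!bch)
        return -EINVAL;

    errloc = kcalloc(4, sizeof(*errloc), GFP_KERNEL);
    if (!errloc) {
        bch_free(bch);
        return -ENOMEM;
    }

    bch_encode(bch, data, len, ecc);
    nerr = bch_decode(bch, data, len, ecc, NULL, NULL, errloc);

    kfree(errloc);
    bch_free(bch);
    return nerr;  /* number of bit errors found, or a negative errno */
}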


@ -98,7 +98,7 @@ struct nand_bbt_descr {
/*
* Flag set by nand_create_default_bbt_descr(), marking that the nand_bbt_descr
* was allocated dynamicaly and must be freed in nand_release(). Has no meaning
* was allocated dynamicaly and must be freed in nand_cleanup(). Has no meaning
* in nand_chip.bbt_options.
*/
#define NAND_BBT_DYNAMICSTRUCT 0x80000000


@ -138,7 +138,7 @@ struct cfi_ident {
uint16_t InterfaceDesc;
uint16_t MaxBufWriteSize;
uint8_t NumEraseRegions;
uint32_t EraseRegionInfo[0]; /* Not host ordered */
uint32_t EraseRegionInfo[]; /* Not host ordered */
} __packed;
/* Extended Query Structure for both PRI and ALT */
@ -165,7 +165,7 @@ struct cfi_pri_intelext {
uint16_t ProtRegAddr;
uint8_t FactProtRegSize;
uint8_t UserProtRegSize;
uint8_t extra[0];
uint8_t extra[];
} __packed;
struct cfi_intelext_otpinfo {
@ -286,7 +286,7 @@ struct cfi_private {
map_word sector_erase_cmd;
unsigned long chipshift; /* Because they're of the same type */
const char *im_name; /* inter_module name for cmdset_setup */
struct flchip chips[0]; /* per-chip data structure for each chip */
struct flchip chips[]; /* per-chip data structure for each chip */
};
uint32_t cfi_build_cmd_addr(uint32_t cmd_ofs,


@ -200,6 +200,8 @@ struct mtd_debug_info {
*
* @node: list node used to add an MTD partition to the parent partition list
* @offset: offset of the partition relatively to the parent offset
* @size: partition size. Should be equal to mtd->size unless
* MTD_SLC_ON_MLC_EMULATION is set
* @flags: original flags (before the mtdpart logic decided to tweak them based
* on flash constraints, like eraseblock/pagesize alignment)
*
@ -209,6 +211,7 @@ struct mtd_debug_info {
struct mtd_part {
struct list_head node;
u64 offset;
u64 size;
u32 flags;
};
@ -622,7 +625,9 @@ static inline uint32_t mtd_mod_by_ws(uint64_t sz, struct mtd_info *mtd)
static inline int mtd_wunit_per_eb(struct mtd_info *mtd)
{
return mtd->erasesize / mtd->writesize;
struct mtd_info *master = mtd_get_master(mtd);
return master->erasesize / mtd->writesize;
}
static inline int mtd_offset_to_wunit(struct mtd_info *mtd, loff_t offs)


@ -37,6 +37,7 @@
* master MTD flag set for the corresponding MTD partition.
* For example, to force a read-only partition, simply adding
* MTD_WRITEABLE to the mask_flags will do the trick.
* add_flags: contains flags to add to the parent flags
*
* Note: writeable partitions require their size and offset be
* erasesize aligned (e.g. use MTDPART_OFS_NEXTBLK).
@ -48,6 +49,7 @@ struct mtd_partition {
uint64_t size; /* partition size */
uint64_t offset; /* offset within the master MTD space */
uint32_t mask_flags; /* master MTD flags to mask out for this partition */
uint32_t add_flags; /* flags to add to the partition */
struct device_node *of_node;
};
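With the new field, a partition definition can request SLC emulation without touching mask_flags. A hypothetical static table (names, offsets and sizes invented) showing where add_flags sits:

#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
#include <linux/sizes.h>

/* Hypothetical layout: the first partition runs in emulated SLC mode. */
static const struct mtd_partition example_parts[] = {
    {
        .name      = "boot",
        .offset    = 0,
        .size      = SZ_8M,
        .add_flags = MTD_SLC_ON_MLC_EMULATION,
    },
    {
        .name      = "rootfs",
        .offset    = MTDPART_OFS_APPEND,
        .size      = MTDPART_SIZ_FULL,
    },
};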


@ -24,7 +24,7 @@ struct lpddr_private {
struct qinfo_chip *qinfo;
int numchips;
unsigned long chipshift;
struct flchip chips[0];
struct flchip chips[];
};
/* qinfo_query_info structure contains request information for


@ -83,14 +83,14 @@ struct nand_chip;
/*
* Constants for ECC_MODES
*/
typedef enum {
enum nand_ecc_mode {
NAND_ECC_INVALID,
NAND_ECC_NONE,
NAND_ECC_SOFT,
NAND_ECC_HW,
NAND_ECC_HW_SYNDROME,
NAND_ECC_HW_OOB_FIRST,
NAND_ECC_ON_DIE,
} nand_ecc_modes_t;
};
enum nand_ecc_algo {
NAND_ECC_UNKNOWN,
@ -118,86 +118,74 @@ enum nand_ecc_algo {
#define NAND_ECC_GENERIC_ERASED_CHECK BIT(0)
#define NAND_ECC_MAXIMIZE BIT(1)
/*
* Option constants for bizarre disfunctionality and real
* features.
*/
/* Buswidth is 16 bit */
#define NAND_BUSWIDTH_16 BIT(1)
/*
* When using software implementation of Hamming, we can specify which byte
* ordering should be used.
*/
#define NAND_ECC_SOFT_HAMMING_SM_ORDER BIT(2)
/*
* Option constants for bizarre disfunctionality and real
* features.
*/
/* Buswidth is 16 bit */
#define NAND_BUSWIDTH_16 0x00000002
/* Chip has cache program function */
#define NAND_CACHEPRG 0x00000008
#define NAND_CACHEPRG BIT(3)
/* Options valid for Samsung large page devices */
#define NAND_SAMSUNG_LP_OPTIONS NAND_CACHEPRG
/*
* Chip requires ready check on read (for auto-incremented sequential read).
* True only for small page devices; large page devices do not support
* autoincrement.
*/
#define NAND_NEED_READRDY 0x00000100
#define NAND_NEED_READRDY BIT(8)
/* Chip does not allow subpage writes */
#define NAND_NO_SUBPAGE_WRITE 0x00000200
#define NAND_NO_SUBPAGE_WRITE BIT(9)
/* Device is one of 'new' xD cards that expose fake nand command set */
#define NAND_BROKEN_XD 0x00000400
#define NAND_BROKEN_XD BIT(10)
/* Device behaves just like nand, but is readonly */
#define NAND_ROM 0x00000800
#define NAND_ROM BIT(11)
/* Device supports subpage reads */
#define NAND_SUBPAGE_READ 0x00001000
#define NAND_SUBPAGE_READ BIT(12)
/* Macros to identify the above */
#define NAND_HAS_SUBPAGE_READ(chip) ((chip->options & NAND_SUBPAGE_READ))
/*
* Some MLC NANDs need data scrambling to limit bitflips caused by repeated
* patterns.
*/
#define NAND_NEED_SCRAMBLING 0x00002000
#define NAND_NEED_SCRAMBLING BIT(13)
/* Device needs 3rd row address cycle */
#define NAND_ROW_ADDR_3 0x00004000
/* Options valid for Samsung large page devices */
#define NAND_SAMSUNG_LP_OPTIONS NAND_CACHEPRG
/* Macros to identify the above */
#define NAND_HAS_SUBPAGE_READ(chip) ((chip->options & NAND_SUBPAGE_READ))
/*
* There are different places where the manufacturer stores the factory bad
* block markers.
*
* Position within the block: Each of these pages needs to be checked for a
* bad block marking pattern.
*/
#define NAND_BBM_FIRSTPAGE 0x01000000
#define NAND_BBM_SECONDPAGE 0x02000000
#define NAND_BBM_LASTPAGE 0x04000000
/* Position within the OOB data of the page */
#define NAND_BBM_POS_SMALL 5
#define NAND_BBM_POS_LARGE 0
#define NAND_ROW_ADDR_3 BIT(14)
/* Non chip related options */
/* This option skips the bbt scan during initialization. */
#define NAND_SKIP_BBTSCAN 0x00010000
#define NAND_SKIP_BBTSCAN BIT(16)
/* Chip may not exist, so silence any errors in scan */
#define NAND_SCAN_SILENT_NODEV 0x00040000
#define NAND_SCAN_SILENT_NODEV BIT(18)
/*
* Autodetect nand buswidth with readid/onfi.
* This suppose the driver will configure the hardware in 8 bits mode
* when calling nand_scan_ident, and update its configuration
* before calling nand_scan_tail.
*/
#define NAND_BUSWIDTH_AUTO 0x00080000
#define NAND_BUSWIDTH_AUTO BIT(19)
/*
* This option could be defined by controller drivers to protect against
* kmap'ed, vmalloc'ed highmem buffers being passed from upper layers
*/
#define NAND_USE_BOUNCE_BUFFER 0x00100000
#define NAND_USES_DMA BIT(20)
/*
* In case your controller is implementing ->legacy.cmd_ctrl() and is relying
@ -207,26 +195,49 @@ enum nand_ecc_algo {
* If your controller already takes care of this delay, you don't need to set
* this flag.
*/
#define NAND_WAIT_TCCS 0x00200000
#define NAND_WAIT_TCCS BIT(21)
/*
* Whether the NAND chip is a boot medium. Drivers might use this information
* to select ECC algorithms supported by the boot ROM or similar restrictions.
*/
#define NAND_IS_BOOT_MEDIUM 0x00400000
#define NAND_IS_BOOT_MEDIUM BIT(22)
/*
* Do not try to tweak the timings at runtime. This is needed when the
* controller initializes the timings on itself or when it relies on
* configuration done by the bootloader.
*/
#define NAND_KEEP_TIMINGS 0x00800000
#define NAND_KEEP_TIMINGS BIT(23)
/*
* There are different places where the manufacturer stores the factory bad
* block markers.
*
* Position within the block: Each of these pages needs to be checked for a
* bad block marking pattern.
*/
#define NAND_BBM_FIRSTPAGE BIT(24)
#define NAND_BBM_SECONDPAGE BIT(25)
#define NAND_BBM_LASTPAGE BIT(26)
/*
* Some controllers with pipelined ECC engines override the BBM marker with
* data or ECC bytes, thus making bad block detection through bad block marker
* impossible. Let's flag those chips so the core knows it shouldn't check the
* BBM and consider all blocks good.
*/
#define NAND_NO_BBM_QUIRK BIT(27)
/* Cell info constants */
#define NAND_CI_CHIPNR_MSK 0x03
#define NAND_CI_CELLTYPE_MSK 0x0C
#define NAND_CI_CELLTYPE_SHIFT 2
/* Position within the OOB data of the page */
#define NAND_BBM_POS_SMALL 5
#define NAND_BBM_POS_LARGE 0
/**
* struct nand_parameters - NAND generic parameters from the parameter page
* @model: Model name
@ -351,7 +362,7 @@ static const struct nand_ecc_caps __name = { \
* @write_oob: function to write chip OOB data
*/
struct nand_ecc_ctrl {
nand_ecc_modes_t mode;
enum nand_ecc_mode mode;
enum nand_ecc_algo algo;
int steps;
int size;
@ -491,13 +502,17 @@ enum nand_data_interface_type {
/**
* struct nand_data_interface - NAND interface timing
* @type: type of the timing
* @timings: The timing, type according to @type
* @timings: The timing information
* @timings.mode: Timing mode as defined in the specification
* @timings.sdr: Use it when @type is %NAND_SDR_IFACE.
*/
struct nand_data_interface {
enum nand_data_interface_type type;
union {
struct nand_sdr_timings sdr;
struct nand_timings {
unsigned int mode;
union {
struct nand_sdr_timings sdr;
};
} timings;
};
@ -694,6 +709,7 @@ struct nand_op_instr {
/**
* struct nand_subop - a sub operation
* @cs: the CS line to select for this NAND sub-operation
* @instrs: array of instructions
* @ninstrs: length of the @instrs array
* @first_instr_start_off: offset to start from for the first instruction
@ -709,6 +725,7 @@ struct nand_op_instr {
* controller driver.
*/
struct nand_subop {
unsigned int cs;
const struct nand_op_instr *instrs;
unsigned int ninstrs;
unsigned int first_instr_start_off;
@ -1321,13 +1338,17 @@ int nand_read_oob_std(struct nand_chip *chip, int page);
int nand_get_set_features_notsupp(struct nand_chip *chip, int addr,
u8 *subfeature_param);
/* Default read_page_raw implementation */
/* read_page_raw implementations */
int nand_read_page_raw(struct nand_chip *chip, uint8_t *buf, int oob_required,
int page);
int nand_monolithic_read_page_raw(struct nand_chip *chip, uint8_t *buf,
int oob_required, int page);
/* Default write_page_raw implementation */
/* write_page_raw implementations */
int nand_write_page_raw(struct nand_chip *chip, const uint8_t *buf,
int oob_required, int page);
int nand_monolithic_write_page_raw(struct nand_chip *chip, const uint8_t *buf,
int oob_required, int page);
/* Reset and initialize a NAND device */
int nand_reset(struct nand_chip *chip, int chipnr);
@ -1356,7 +1377,7 @@ int nand_change_write_column_op(struct nand_chip *chip,
unsigned int offset_in_page, const void *buf,
unsigned int len, bool force_8bit);
int nand_read_data_op(struct nand_chip *chip, void *buf, unsigned int len,
bool force_8bit);
bool force_8bit, bool check_only);
int nand_write_data_op(struct nand_chip *chip, const void *buf,
unsigned int len, bool force_8bit);
@ -1377,8 +1398,6 @@ void nand_wait_ready(struct nand_chip *chip);
* sucessful nand_scan().
*/
void nand_cleanup(struct nand_chip *chip);
/* Unregister the MTD device and calls nand_cleanup() */
void nand_release(struct nand_chip *chip);
/*
* External helper for controller drivers that have to implement the WAITRDY
@ -1393,6 +1412,10 @@ int nand_gpio_waitrdy(struct nand_chip *chip, struct gpio_desc *gpiod,
void nand_select_target(struct nand_chip *chip, unsigned int cs);
void nand_deselect_target(struct nand_chip *chip);
/* Bitops */
void nand_extract_bits(u8 *dst, unsigned int dst_off, const u8 *src,
unsigned int src_off, unsigned int nbits);
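nand_extract_bits() copies an arbitrary, not necessarily byte-aligned, run of bits from one buffer into another. The kernel implementation works a byte at a time; the bit-at-a-time user-space version below is only meant to illustrate the dst_off/src_off/nbits semantics (bit 0 being the least significant bit of byte 0), not to mirror the kernel code:

#include <stdint.h>
#include <stdio.h>

/* Copy nbits bits from src (starting at bit src_off) into dst
 * (starting at bit dst_off), one bit at a time. */
static void extract_bits(uint8_t *dst, unsigned int dst_off,
                         const uint8_t *src, unsigned int src_off,
                         unsigned int nbits)
{
    unsigned int i;

    for (i = 0; i < nbits; i++) {
        unsigned int sbit = src_off + i;
        unsigned int dbit = dst_off + i;
        unsigned int bit = (src[sbit / 8] >> (sbit % 8)) & 1;

        dst[dbit / 8] &= ~(1u << (dbit % 8));
        dst[dbit / 8] |= bit << (dbit % 8);
    }
}

int main(void)
{
    const uint8_t src[2] = { 0xab, 0xcd };
    uint8_t dst[2] = { 0, 0 };

    extract_bits(dst, 0, src, 4, 8);  /* take 8 bits starting at bit 4 */
    printf("0x%02x\n", dst[0]);       /* prints 0xda */
    return 0;
}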
/**
* nand_get_data_buf() - Get the internal page buffer
* @chip: NAND chip object


@ -20,6 +20,7 @@
*/
/* Flash opcodes. */
#define SPINOR_OP_WRDI 0x04 /* Write disable */
#define SPINOR_OP_WREN 0x06 /* Write enable */
#define SPINOR_OP_RDSR 0x05 /* Read status register */
#define SPINOR_OP_WRSR 0x01 /* Write status register 1 byte */
@ -80,7 +81,6 @@
/* Used for SST flashes only. */
#define SPINOR_OP_BP 0x02 /* Byte program */
#define SPINOR_OP_WRDI 0x04 /* Write disable */
#define SPINOR_OP_AAI_WP 0xad /* Auto address increment word program */
/* Used for S3AN flashes only */
@ -302,7 +302,7 @@ struct spi_nor;
* @read: read data from the SPI NOR.
* @write: write data to the SPI NOR.
* @erase: erase a sector of the SPI NOR at the offset @offs; if
* not provided by the driver, spi-nor will send the erase
* not provided by the driver, SPI NOR will send the erase
* opcode via write_reg().
*/
struct spi_nor_controller_ops {
@ -327,16 +327,16 @@ struct spi_nor_manufacturer;
struct spi_nor_flash_parameter;
/**
* struct spi_nor - Structure for defining a the SPI NOR layer
* @mtd: point to a mtd_info structure
* struct spi_nor - Structure for defining the SPI NOR layer
* @mtd: an mtd_info structure
* @lock: the lock for the read/write/erase/lock/unlock operations
* @dev: point to a spi device, or a spi nor controller device.
* @spimem: point to the spi mem device
* @dev: pointer to an SPI device or an SPI NOR controller device
* @spimem: pointer to the SPI memory device
* @bouncebuf: bounce buffer used when the buffer passed by the MTD
* layer is not DMA-able
* @bouncebuf_size: size of the bounce buffer
* @info: spi-nor part JDEC MFR id and other info
* @manufacturer: spi-nor manufacturer
* @info: SPI NOR part JEDEC MFR ID and other info
* @manufacturer: SPI NOR manufacturer
* @page_size: the page size of the SPI NOR
* @addr_width: number of address bytes
* @erase_opcode: the opcode for erasing a sector
@ -344,17 +344,17 @@ struct spi_nor_flash_parameter;
* @read_dummy: the dummy needed by the read operation
* @program_opcode: the program opcode
* @sst_write_second: used by the SST write operation
* @flags: flag options for the current SPI-NOR (SNOR_F_*)
* @flags: flag options for the current SPI NOR (SNOR_F_*)
* @read_proto: the SPI protocol for read operations
* @write_proto: the SPI protocol for write operations
* @reg_proto the SPI protocol for read_reg/write_reg/erase operations
* @reg_proto: the SPI protocol for read_reg/write_reg/erase operations
* @controller_ops: SPI NOR controller driver specific operations.
* @params: [FLASH-SPECIFIC] SPI-NOR flash parameters and settings.
* @params: [FLASH-SPECIFIC] SPI NOR flash parameters and settings.
* The structure includes legacy flash parameters and
* settings that can be overwritten by the spi_nor_fixups
* hooks, or dynamically when parsing the SFDP tables.
* @dirmap: pointers to struct spi_mem_dirmap_desc for reads/writes.
* @priv: the private data
* @priv: pointer to the private data
*/
struct spi_nor {
struct mtd_info mtd;


@ -68,7 +68,7 @@ struct davinci_nand_pdata { /* platform_data */
* Newer ones also support 4-bit ECC, but are awkward
* using it with large page chips.
*/
nand_ecc_modes_t ecc_mode;
enum nand_ecc_mode ecc_mode;
u8 ecc_bits;
/* e.g. NAND_BUSWIDTH_16 */


@ -49,7 +49,7 @@ struct s3c2410_platform_nand {
unsigned int ignore_unset_ecc:1;
nand_ecc_modes_t ecc_mode;
enum nand_ecc_mode ecc_mode;
int nr_sets;
struct s3c2410_nand_set *sets;


@ -104,6 +104,7 @@ struct mtd_write_req {
#define MTD_BIT_WRITEABLE 0x800 /* Single bits can be flipped */
#define MTD_NO_ERASE 0x1000 /* No erase necessary */
#define MTD_POWERUP_LOCK 0x2000 /* Always locked after reset */
#define MTD_SLC_ON_MLC_EMULATION 0x4000 /* Emulate SLC behavior on MLC NANDs */
/* Some common devices / combinations of capabilities */
#define MTD_CAP_ROM 0

lib/bch.c

@ -23,15 +23,15 @@
* This library provides runtime configurable encoding/decoding of binary
* Bose-Chaudhuri-Hocquenghem (BCH) codes.
*
* Call init_bch to get a pointer to a newly allocated bch_control structure for
* Call bch_init to get a pointer to a newly allocated bch_control structure for
* the given m (Galois field order), t (error correction capability) and
* (optional) primitive polynomial parameters.
*
* Call encode_bch to compute and store ecc parity bytes to a given buffer.
* Call decode_bch to detect and locate errors in received data.
* Call bch_encode to compute and store ecc parity bytes to a given buffer.
* Call bch_decode to detect and locate errors in received data.
*
* On systems supporting hw BCH features, intermediate results may be provided
* to decode_bch in order to skip certain steps. See decode_bch() documentation
* to bch_decode in order to skip certain steps. See bch_decode() documentation
* for details.
*
* Option CONFIG_BCH_CONST_PARAMS can be used to force fixed values of
@ -114,10 +114,53 @@ struct gf_poly_deg1 {
unsigned int c[2];
};
static u8 swap_bits_table[] = {
0x00, 0x80, 0x40, 0xc0, 0x20, 0xa0, 0x60, 0xe0,
0x10, 0x90, 0x50, 0xd0, 0x30, 0xb0, 0x70, 0xf0,
0x08, 0x88, 0x48, 0xc8, 0x28, 0xa8, 0x68, 0xe8,
0x18, 0x98, 0x58, 0xd8, 0x38, 0xb8, 0x78, 0xf8,
0x04, 0x84, 0x44, 0xc4, 0x24, 0xa4, 0x64, 0xe4,
0x14, 0x94, 0x54, 0xd4, 0x34, 0xb4, 0x74, 0xf4,
0x0c, 0x8c, 0x4c, 0xcc, 0x2c, 0xac, 0x6c, 0xec,
0x1c, 0x9c, 0x5c, 0xdc, 0x3c, 0xbc, 0x7c, 0xfc,
0x02, 0x82, 0x42, 0xc2, 0x22, 0xa2, 0x62, 0xe2,
0x12, 0x92, 0x52, 0xd2, 0x32, 0xb2, 0x72, 0xf2,
0x0a, 0x8a, 0x4a, 0xca, 0x2a, 0xaa, 0x6a, 0xea,
0x1a, 0x9a, 0x5a, 0xda, 0x3a, 0xba, 0x7a, 0xfa,
0x06, 0x86, 0x46, 0xc6, 0x26, 0xa6, 0x66, 0xe6,
0x16, 0x96, 0x56, 0xd6, 0x36, 0xb6, 0x76, 0xf6,
0x0e, 0x8e, 0x4e, 0xce, 0x2e, 0xae, 0x6e, 0xee,
0x1e, 0x9e, 0x5e, 0xde, 0x3e, 0xbe, 0x7e, 0xfe,
0x01, 0x81, 0x41, 0xc1, 0x21, 0xa1, 0x61, 0xe1,
0x11, 0x91, 0x51, 0xd1, 0x31, 0xb1, 0x71, 0xf1,
0x09, 0x89, 0x49, 0xc9, 0x29, 0xa9, 0x69, 0xe9,
0x19, 0x99, 0x59, 0xd9, 0x39, 0xb9, 0x79, 0xf9,
0x05, 0x85, 0x45, 0xc5, 0x25, 0xa5, 0x65, 0xe5,
0x15, 0x95, 0x55, 0xd5, 0x35, 0xb5, 0x75, 0xf5,
0x0d, 0x8d, 0x4d, 0xcd, 0x2d, 0xad, 0x6d, 0xed,
0x1d, 0x9d, 0x5d, 0xdd, 0x3d, 0xbd, 0x7d, 0xfd,
0x03, 0x83, 0x43, 0xc3, 0x23, 0xa3, 0x63, 0xe3,
0x13, 0x93, 0x53, 0xd3, 0x33, 0xb3, 0x73, 0xf3,
0x0b, 0x8b, 0x4b, 0xcb, 0x2b, 0xab, 0x6b, 0xeb,
0x1b, 0x9b, 0x5b, 0xdb, 0x3b, 0xbb, 0x7b, 0xfb,
0x07, 0x87, 0x47, 0xc7, 0x27, 0xa7, 0x67, 0xe7,
0x17, 0x97, 0x57, 0xd7, 0x37, 0xb7, 0x77, 0xf7,
0x0f, 0x8f, 0x4f, 0xcf, 0x2f, 0xaf, 0x6f, 0xef,
0x1f, 0x9f, 0x5f, 0xdf, 0x3f, 0xbf, 0x7f, 0xff,
};
static u8 swap_bits(struct bch_control *bch, u8 in)
{
if (!bch->swap_bits)
return in;
return swap_bits_table[in];
}
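swap_bits_table[] is simply a 256-entry bit-reversal table, letting controllers that shift data out MSB-first share the same BCH core. The stand-alone check below recomputes the reversal with shifts and compares a few inputs against values visible in the table above; it is user-space illustration code, not part of lib/bch:

#include <stdint.h>
#include <stdio.h>

/* Reverse the bit order of one byte (bit 7 <-> bit 0, and so on). */
static uint8_t reverse8(uint8_t in)
{
    uint8_t out = 0;
    int i;

    for (i = 0; i < 8; i++)
        if (in & (1u << i))
            out |= 1u << (7 - i);
    return out;
}

int main(void)
{
    /* Expected values taken from the first rows of swap_bits_table[]. */
    const uint8_t in[]  = { 0x00, 0x01, 0x02, 0x03, 0x07, 0x0f };
    const uint8_t ref[] = { 0x00, 0x80, 0x40, 0xc0, 0xe0, 0xf0 };
    unsigned int i;

    for (i = 0; i < sizeof(in); i++)
        printf("reverse8(0x%02x) = 0x%02x (expected 0x%02x)\n",
               in[i], reverse8(in[i]), ref[i]);
    return 0;
}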
/*
* same as encode_bch(), but process input data one byte at a time
* same as bch_encode(), but process input data one byte at a time
*/
static void encode_bch_unaligned(struct bch_control *bch,
static void bch_encode_unaligned(struct bch_control *bch,
const unsigned char *data, unsigned int len,
uint32_t *ecc)
{
@ -126,7 +169,9 @@ static void encode_bch_unaligned(struct bch_control *bch,
const int l = BCH_ECC_WORDS(bch)-1;
while (len--) {
p = bch->mod8_tab + (l+1)*(((ecc[0] >> 24)^(*data++)) & 0xff);
u8 tmp = swap_bits(bch, *data++);
p = bch->mod8_tab + (l+1)*(((ecc[0] >> 24)^(tmp)) & 0xff);
for (i = 0; i < l; i++)
ecc[i] = ((ecc[i] << 8)|(ecc[i+1] >> 24))^(*p++);
@ -145,10 +190,16 @@ static void load_ecc8(struct bch_control *bch, uint32_t *dst,
unsigned int i, nwords = BCH_ECC_WORDS(bch)-1;
for (i = 0; i < nwords; i++, src += 4)
dst[i] = (src[0] << 24)|(src[1] << 16)|(src[2] << 8)|src[3];
dst[i] = ((u32)swap_bits(bch, src[0]) << 24) |
((u32)swap_bits(bch, src[1]) << 16) |
((u32)swap_bits(bch, src[2]) << 8) |
swap_bits(bch, src[3]);
memcpy(pad, src, BCH_ECC_BYTES(bch)-4*nwords);
dst[nwords] = (pad[0] << 24)|(pad[1] << 16)|(pad[2] << 8)|pad[3];
dst[nwords] = ((u32)swap_bits(bch, pad[0]) << 24) |
((u32)swap_bits(bch, pad[1]) << 16) |
((u32)swap_bits(bch, pad[2]) << 8) |
swap_bits(bch, pad[3]);
}
/*
@ -161,20 +212,20 @@ static void store_ecc8(struct bch_control *bch, uint8_t *dst,
unsigned int i, nwords = BCH_ECC_WORDS(bch)-1;
for (i = 0; i < nwords; i++) {
*dst++ = (src[i] >> 24);
*dst++ = (src[i] >> 16) & 0xff;
*dst++ = (src[i] >> 8) & 0xff;
*dst++ = (src[i] >> 0) & 0xff;
*dst++ = swap_bits(bch, src[i] >> 24);
*dst++ = swap_bits(bch, src[i] >> 16);
*dst++ = swap_bits(bch, src[i] >> 8);
*dst++ = swap_bits(bch, src[i]);
}
pad[0] = (src[nwords] >> 24);
pad[1] = (src[nwords] >> 16) & 0xff;
pad[2] = (src[nwords] >> 8) & 0xff;
pad[3] = (src[nwords] >> 0) & 0xff;
pad[0] = swap_bits(bch, src[nwords] >> 24);
pad[1] = swap_bits(bch, src[nwords] >> 16);
pad[2] = swap_bits(bch, src[nwords] >> 8);
pad[3] = swap_bits(bch, src[nwords]);
memcpy(dst, pad, BCH_ECC_BYTES(bch)-4*nwords);
}
/**
* encode_bch - calculate BCH ecc parity of data
* bch_encode - calculate BCH ecc parity of data
* @bch: BCH control structure
* @data: data to encode
* @len: data length in bytes
@ -187,7 +238,7 @@ static void store_ecc8(struct bch_control *bch, uint8_t *dst,
* The exact number of computed ecc parity bits is given by member @ecc_bits of
* @bch; it may be less than m*t for large values of t.
*/
void encode_bch(struct bch_control *bch, const uint8_t *data,
void bch_encode(struct bch_control *bch, const uint8_t *data,
unsigned int len, uint8_t *ecc)
{
const unsigned int l = BCH_ECC_WORDS(bch)-1;
@ -215,7 +266,7 @@ void encode_bch(struct bch_control *bch, const uint8_t *data,
m = ((unsigned long)data) & 3;
if (m) {
mlen = (len < (4-m)) ? len : 4-m;
encode_bch_unaligned(bch, data, mlen, bch->ecc_buf);
bch_encode_unaligned(bch, data, mlen, bch->ecc_buf);
data += mlen;
len -= mlen;
}
@ -240,7 +291,13 @@ void encode_bch(struct bch_control *bch, const uint8_t *data,
*/
while (mlen--) {
/* input data is read in big-endian format */
w = r[0]^cpu_to_be32(*pdata++);
w = cpu_to_be32(*pdata++);
if (bch->swap_bits)
w = (u32)swap_bits(bch, w) |
((u32)swap_bits(bch, w >> 8) << 8) |
((u32)swap_bits(bch, w >> 16) << 16) |
((u32)swap_bits(bch, w >> 24) << 24);
w ^= r[0];
p0 = tab0 + (l+1)*((w >> 0) & 0xff);
p1 = tab1 + (l+1)*((w >> 8) & 0xff);
p2 = tab2 + (l+1)*((w >> 16) & 0xff);
@ -255,13 +312,13 @@ void encode_bch(struct bch_control *bch, const uint8_t *data,
/* process last unaligned bytes */
if (len)
encode_bch_unaligned(bch, data, len, bch->ecc_buf);
bch_encode_unaligned(bch, data, len, bch->ecc_buf);
/* store ecc parity bytes into original parity buffer */
if (ecc)
store_ecc8(bch, ecc, bch->ecc_buf);
}
EXPORT_SYMBOL_GPL(encode_bch);
EXPORT_SYMBOL_GPL(bch_encode);
static inline int modulo(struct bch_control *bch, unsigned int v)
{
@ -952,7 +1009,7 @@ static int chien_search(struct bch_control *bch, unsigned int len,
#endif /* USE_CHIEN_SEARCH */
/**
* decode_bch - decode received codeword and find bit error locations
* bch_decode - decode received codeword and find bit error locations
* @bch: BCH control structure
* @data: received data, ignored if @calc_ecc is provided
* @len: data length in bytes, must always be provided
@ -966,22 +1023,22 @@ static int chien_search(struct bch_control *bch, unsigned int len,
* invalid parameters were provided
*
* Depending on the available hw BCH support and the need to compute @calc_ecc
* separately (using encode_bch()), this function should be called with one of
* separately (using bch_encode()), this function should be called with one of
* the following parameter configurations -
*
* by providing @data and @recv_ecc only:
* decode_bch(@bch, @data, @len, @recv_ecc, NULL, NULL, @errloc)
* bch_decode(@bch, @data, @len, @recv_ecc, NULL, NULL, @errloc)
*
* by providing @recv_ecc and @calc_ecc:
* decode_bch(@bch, NULL, @len, @recv_ecc, @calc_ecc, NULL, @errloc)
* bch_decode(@bch, NULL, @len, @recv_ecc, @calc_ecc, NULL, @errloc)
*
* by providing ecc = recv_ecc XOR calc_ecc:
* decode_bch(@bch, NULL, @len, NULL, ecc, NULL, @errloc)
* bch_decode(@bch, NULL, @len, NULL, ecc, NULL, @errloc)
*
* by providing syndrome results @syn:
* decode_bch(@bch, NULL, @len, NULL, NULL, @syn, @errloc)
* bch_decode(@bch, NULL, @len, NULL, NULL, @syn, @errloc)
*
* Once decode_bch() has successfully returned with a positive value, error
* Once bch_decode() has successfully returned with a positive value, error
* locations returned in array @errloc should be interpreted as follows -
*
* if (errloc[n] >= 8*len), then n-th error is located in ecc (no need for
@ -993,7 +1050,7 @@ static int chien_search(struct bch_control *bch, unsigned int len,
* Note that this function does not perform any data correction by itself, it
* merely indicates error locations.
*/
int decode_bch(struct bch_control *bch, const uint8_t *data, unsigned int len,
int bch_decode(struct bch_control *bch, const uint8_t *data, unsigned int len,
const uint8_t *recv_ecc, const uint8_t *calc_ecc,
const unsigned int *syn, unsigned int *errloc)
{
@ -1012,7 +1069,7 @@ int decode_bch(struct bch_control *bch, const uint8_t *data, unsigned int len,
/* compute received data ecc into an internal buffer */
if (!data || !recv_ecc)
return -EINVAL;
encode_bch(bch, data, len, NULL);
bch_encode(bch, data, len, NULL);
} else {
/* load provided calculated ecc */
load_ecc8(bch, bch->ecc_buf, calc_ecc);
@ -1048,12 +1105,14 @@ int decode_bch(struct bch_control *bch, const uint8_t *data, unsigned int len,
break;
}
errloc[i] = nbits-1-errloc[i];
errloc[i] = (errloc[i] & ~7)|(7-(errloc[i] & 7));
if (!bch->swap_bits)
errloc[i] = (errloc[i] & ~7) |
(7-(errloc[i] & 7));
}
}
return (err >= 0) ? err : -EBADMSG;
}
EXPORT_SYMBOL_GPL(decode_bch);
EXPORT_SYMBOL_GPL(bch_decode);
/*
* generate Galois field lookup tables
@ -1236,27 +1295,29 @@ finish:
}
/**
* init_bch - initialize a BCH encoder/decoder
* bch_init - initialize a BCH encoder/decoder
* @m: Galois field order, should be in the range 5-15
* @t: maximum error correction capability, in bits
* @prim_poly: user-provided primitive polynomial (or 0 to use default)
* @swap_bits: swap bits within data and syndrome bytes
*
* Returns:
* a newly allocated BCH control structure if successful, NULL otherwise
*
* This initialization can take some time, as lookup tables are built for fast
* encoding/decoding; make sure not to call this function from a time critical
* path. Usually, init_bch() should be called on module/driver init and
* free_bch() should be called to release memory on exit.
* path. Usually, bch_init() should be called on module/driver init and
* bch_free() should be called to release memory on exit.
*
* You may provide your own primitive polynomial of degree @m in argument
* @prim_poly, or let init_bch() use its default polynomial.
* @prim_poly, or let bch_init() use its default polynomial.
*
* Once init_bch() has successfully returned a pointer to a newly allocated
* Once bch_init() has successfully returned a pointer to a newly allocated
* BCH control structure, ecc length in bytes is given by member @ecc_bytes of
* the structure.
*/
struct bch_control *init_bch(int m, int t, unsigned int prim_poly)
struct bch_control *bch_init(int m, int t, unsigned int prim_poly,
bool swap_bits)
{
int err = 0;
unsigned int i, words;
@ -1321,6 +1382,7 @@ struct bch_control *init_bch(int m, int t, unsigned int prim_poly)
bch->syn = bch_alloc(2*t*sizeof(*bch->syn), &err);
bch->cache = bch_alloc(2*t*sizeof(*bch->cache), &err);
bch->elp = bch_alloc((t+1)*sizeof(struct gf_poly_deg1), &err);
bch->swap_bits = swap_bits;
for (i = 0; i < ARRAY_SIZE(bch->poly_2t); i++)
bch->poly_2t[i] = bch_alloc(GF_POLY_SZ(2*t), &err);
@ -1347,16 +1409,16 @@ struct bch_control *init_bch(int m, int t, unsigned int prim_poly)
return bch;
fail:
free_bch(bch);
bch_free(bch);
return NULL;
}
EXPORT_SYMBOL_GPL(init_bch);
EXPORT_SYMBOL_GPL(bch_init);
/**
* free_bch - free the BCH control structure
* bch_free - free the BCH control structure
* @bch: BCH control structure to release
*/
void free_bch(struct bch_control *bch)
void bch_free(struct bch_control *bch)
{
unsigned int i;
@ -1377,7 +1439,7 @@ void free_bch(struct bch_control *bch)
kfree(bch);
}
}
EXPORT_SYMBOL_GPL(free_bch);
EXPORT_SYMBOL_GPL(bch_free);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Ivan Djelic <ivan.djelic@parrot.com>");