
Merge tag 'mtd/for-5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux

Pull MTD updates from Richard Weinberger:
 "NAND core changes:
   - Drop useless 'depends on' in Kconfig
   - Add an extra level in the Kconfig hierarchy
   - Trivial spellings
   - Dynamic allocation of the interface configurations
   - Dropping the default ONFI timing mode
   - Various cleanup (types, structures, naming, comments)
   - Hide the chip->data_interface indirection
   - Add the generic rb-gpios property
   - Add the ->choose_interface_config() hook
   - Introduce nand_choose_best_sdr_timings()
   - Use default values for tPROG_max and tBERS_max
   - Avoid redefining tR_max and tCCS_min
   - Add a helper to find the closest ONFI mode
   - bcm63xx MTD parsers: simplify CFE detection

  Raw NAND controller drivers changes:
   - fsl-upm: Deprecation of specific DT properties
   - fsl_upm: Driver rework and cleanup in favor of ->exec_op()
   - Ingenic: Cleanup ARRAY_SIZE() vs sizeof() use
   - brcmnand: ECC error handling on EDU transfers
   - brcmnand: Don't default to EDU transfers
   - qcom: Set BAM mode only if not set already
   - qcom: Avoid write to unavailable register
   - gpio: Driver rework in favor of ->exec_op()
   - tango: ->exec_op() conversion
   - mtk: ->exec_op() conversion

  Raw NAND chip drivers changes:
   - toshiba: Implement ->choose_interface_config() for TH58NVG2S3HBAI4
   - toshiba: Implement ->choose_interface_config() for TC58NVG0S3E
   - toshiba: Implement ->choose_interface_config() for TC58TEG5DCLTA00
   - hynix: Implement ->choose_interface_config() for H27UCG8T2ATR-BC

  HyperBus changes:
   - DMA support for TI's AM654 HyperBus controller driver.
   - HyperBus frontend for the Renesas RPC-IF driver

  SPI NOR core changes:
   - Support for Winbond w25q64jwm flash
   - Enable 4K sector support for mx25l12805d

  SPI NOR controller drivers changes:
   - intel-spi: Add Alder Lake-S PCI ID

  MTD Core changes:
   - mtdoops: Don't run panic write twice
   - mtdconcat: Correctly handle panic write
   - Use DEFINE_SHOW_ATTRIBUTE"

* tag 'mtd/for-5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux: (76 commits)
  mtd: hyperbus: Fix build failure when only RPCIF_HYPERBUS is enabled
  mtd: hyperbus: add Renesas RPC-IF driver
  Revert "mtd: spi-nor: Prefer asynchronous probe"
  mtd: parsers: bcm63xx: Do not make it modular
  mtd: spear_smi: Enable compile testing
  mtd: maps: vmu-flash: fix typos for struct memcard
  mtd: physmap: Add Baikal-T1 physically mapped ROM support
  mtd: maps: vmu-flash: simplify the return expression of probe_maple_vmu
  mtd: onenand: simplify the return expression of onenand_transfer_auto_oob
  mtd: rawnand: cadence: remove a redundant dev_err call
  mtd: rawnand: ams-delta: Fix non-OF build warning
  mtd: rawnand: Don't overwrite the error code from nand_set_ecc_soft_ops()
  mtd: rawnand: Introduce nand_set_ecc_on_host_ops()
  mtd: rawnand: atmel: Check return values for nand_read_data_op
  mtd: rawnand: vf610: Remove unused function vf610_nfc_transfer_size()
  mtd: rawnand: qcom: Simplify with dev_err_probe()
  mtd: rawnand: marvell: Fix and update kerneldoc
  mtd: rawnand: marvell: Simplify with dev_err_probe()
  mtd: rawnand: gpmi: Simplify with dev_err_probe()
  mtd: rawnand: atmel: Simplify with dev_err_probe()
  ...
Linus Torvalds 2020-10-17 10:45:42 -07:00
commit 6f78b9acf0
121 changed files with 2352 additions and 1112 deletions
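
Two of the interfaces named in the summary above are easier to follow with a small illustration. First, the new per-chip ->choose_interface_config() hook lets a vendor chip driver select timings the device does not advertise through ONFI. The snippet below is a hedged sketch, not code from this merge: nand_choose_best_sdr_timings() is named in the summary, while the function name, the chosen timing mode and the onfi_fill_interface_config() helper are assumptions about this series.

#include <linux/mtd/rawnand.h>

/*
 * Hedged sketch of a vendor ->choose_interface_config() implementation.
 * The chip ("example_nand") and the ONFI timing mode are made up.
 */
static int example_nand_choose_interface_config(struct nand_chip *chip,
                                                struct nand_interface_config *iface)
{
        /* Base the interface configuration on ONFI SDR timing mode 4 */
        onfi_fill_interface_config(chip, iface, NAND_SDR_IFACE, 4);

        /* Let the core downgrade it if the controller cannot keep up */
        return nand_choose_best_sdr_timings(chip, iface, NULL);
}

Second, several controller drivers in this pull (gpio, tango, mtk, fsl_upm, and the beginning of the atmel rework below) move to the ->exec_op() interface. The skeleton below only shows the general shape of such a handler: the struct nand_operation / nand_op_instr walk is the existing rawnand API as assumed here, the handler is not one of the converted drivers, and the per-instruction bodies are left as comments.

#include <linux/mtd/rawnand.h>

/*
 * Generic shape of an ->exec_op() handler: walk the instruction list and
 * map each instruction onto controller-specific accesses.
 */
static int example_nfc_exec_op(struct nand_chip *chip,
                               const struct nand_operation *op, bool check_only)
{
        unsigned int i;

        if (check_only)
                return 0;       /* every instruction type below is supported */

        for (i = 0; i < op->ninstrs; i++) {
                const struct nand_op_instr *instr = &op->instrs[i];

                switch (instr->type) {
                case NAND_OP_CMD_INSTR:
                        /* latch instr->ctx.cmd.opcode on a command cycle */
                        break;
                case NAND_OP_ADDR_INSTR:
                        /* issue instr->ctx.addr.naddrs address cycles from instr->ctx.addr.addrs */
                        break;
                case NAND_OP_DATA_IN_INSTR:
                        /* read instr->ctx.data.len bytes into instr->ctx.data.buf.in */
                        break;
                case NAND_OP_DATA_OUT_INSTR:
                        /* write instr->ctx.data.len bytes from instr->ctx.data.buf.out */
                        break;
                case NAND_OP_WAITRDY_INSTR:
                        /* wait for R/B within instr->ctx.waitrdy.timeout_ms */
                        break;
                }
        }

        return 0;
}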


@ -55,6 +55,37 @@ patternProperties:
$ref: /schemas/types.yaml#/definitions/string
enum: [none, soft, hw, hw_syndrome, hw_oob_first, on-die]
nand-ecc-engine:
allOf:
- $ref: /schemas/types.yaml#/definitions/phandle
description: |
A phandle on the hardware ECC engine if any. There are
basically three possibilities:
1/ The ECC engine is part of the NAND controller, in this
case the phandle should reference the parent node.
2/ The ECC engine is part of the NAND part (on-die), in this
case the phandle should reference the node itself.
3/ The ECC engine is external, in this case the phandle should
reference the specific ECC engine node.
nand-use-soft-ecc-engine:
type: boolean
description: Use a software ECC engine.
nand-no-ecc-engine:
type: boolean
description: Do not use any ECC correction.
nand-ecc-placement:
allOf:
- $ref: /schemas/types.yaml#/definitions/string
- enum: [ oob, interleaved ]
description:
Location of the ECC bytes. This location is unknown by default
but can be explicitly set to "oob", if all ECC bytes are
known to be stored in the OOB area, or "interleaved" if ECC
bytes will be interleaved with regular data in the main area.
nand-ecc-algo:
description:
Desired ECC algorithm.
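
These properties are consumed by the generic ECC core added later in this series (see of_get_nand_ecc_user_config() in the new ECC engine file further down). As a hedged sketch of the consumer side, and not code from this commit (the function name is hypothetical), a controller driver could react to the parsed preferences along these lines:

#include <linux/mtd/nand.h>

/* Hedged sketch only: parse the nand-ecc-* properties and pick an engine. */
static int example_setup_ecc_from_dt(struct nand_device *nand)
{
        /* Fills nand->ecc.user_conf from the device tree node */
        of_get_nand_ecc_user_config(nand);

        switch (nand->ecc.user_conf.engine_type) {
        case NAND_ECC_ENGINE_TYPE_NONE:
                return 0;               /* correction explicitly disabled */
        case NAND_ECC_ENGINE_TYPE_SOFT:
        case NAND_ECC_ENGINE_TYPE_ON_DIE:
        case NAND_ECC_ENGINE_TYPE_ON_HOST:
                /* instantiate the matching engine here */
                return 0;
        default:
                /* nothing specified: keep the controller's default */
                return 0;
        }
}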


@ -306,7 +306,7 @@ static struct davinci_nand_pdata da830_evm_nand_pdata = {
.core_chipsel = 1,
.parts = da830_evm_nand_partitions,
.nr_parts = ARRAY_SIZE(da830_evm_nand_partitions),
.ecc_mode = NAND_ECC_HW,
.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST,
.ecc_bits = 4,
.bbt_options = NAND_BBT_USE_FLASH,
.bbt_td = &da830_evm_nand_bbt_main_descr,


@ -239,7 +239,7 @@ static struct davinci_nand_pdata da850_evm_nandflash_data = {
.core_chipsel = 1,
.parts = da850_evm_nandflash_partition,
.nr_parts = ARRAY_SIZE(da850_evm_nandflash_partition),
.ecc_mode = NAND_ECC_HW,
.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST,
.ecc_bits = 4,
.bbt_options = NAND_BBT_USE_FLASH,
.timing = &da850_evm_nandflash_timing,


@ -82,7 +82,7 @@ static struct davinci_nand_pdata davinci_nand_data = {
.mask_chipsel = BIT(14),
.parts = davinci_nand_partitions,
.nr_parts = ARRAY_SIZE(davinci_nand_partitions),
.ecc_mode = NAND_ECC_HW,
.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST,
.bbt_options = NAND_BBT_USE_FLASH,
.ecc_bits = 4,
};


@ -76,7 +76,8 @@ static struct davinci_nand_pdata davinci_nand_data = {
.mask_chipsel = BIT(14),
.parts = davinci_nand_partitions,
.nr_parts = ARRAY_SIZE(davinci_nand_partitions),
.ecc_mode = NAND_ECC_HW_SYNDROME,
.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST,
.ecc_placement = NAND_ECC_PLACEMENT_INTERLEAVED,
.ecc_bits = 4,
.bbt_options = NAND_BBT_USE_FLASH,
};


@ -146,7 +146,7 @@ static struct davinci_nand_pdata davinci_nand_data = {
.mask_chipsel = BIT(14),
.parts = davinci_nand_partitions,
.nr_parts = ARRAY_SIZE(davinci_nand_partitions),
.ecc_mode = NAND_ECC_HW,
.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST,
.bbt_options = NAND_BBT_USE_FLASH,
.ecc_bits = 4,
};


@ -162,7 +162,7 @@ static struct davinci_nand_pdata davinci_evm_nandflash_data = {
.core_chipsel = 0,
.parts = davinci_evm_nandflash_partition,
.nr_parts = ARRAY_SIZE(davinci_evm_nandflash_partition),
.ecc_mode = NAND_ECC_HW,
.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST,
.ecc_bits = 1,
.bbt_options = NAND_BBT_USE_FLASH,
.timing = &davinci_evm_nandflash_timing,


@ -91,7 +91,7 @@ static struct davinci_nand_pdata davinci_nand_data = {
.mask_ale = 0x40000,
.parts = davinci_nand_partitions,
.nr_parts = ARRAY_SIZE(davinci_nand_partitions),
.ecc_mode = NAND_ECC_HW,
.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST,
.ecc_bits = 1,
.options = 0,
};


@ -432,7 +432,7 @@ static struct davinci_nand_pdata mityomapl138_nandflash_data = {
.core_chipsel = 1,
.parts = mityomapl138_nandflash_partition,
.nr_parts = ARRAY_SIZE(mityomapl138_nandflash_partition),
.ecc_mode = NAND_ECC_HW,
.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST,
.bbt_options = NAND_BBT_USE_FLASH,
.options = NAND_BUSWIDTH_16,
.ecc_bits = 1, /* 4 bit mode is not supported with 16 bit NAND */


@ -90,7 +90,7 @@ static struct davinci_nand_pdata davinci_ntosd2_nandflash_data = {
.core_chipsel = 0,
.parts = davinci_ntosd2_nandflash_partition,
.nr_parts = ARRAY_SIZE(davinci_ntosd2_nandflash_partition),
.ecc_mode = NAND_ECC_HW,
.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST,
.ecc_bits = 1,
.bbt_options = NAND_BBT_USE_FLASH,
};


@ -206,7 +206,7 @@ static struct davinci_nand_pdata omapl138_hawk_nandflash_data = {
.core_chipsel = 1,
.parts = omapl138_hawk_nandflash_partition,
.nr_parts = ARRAY_SIZE(omapl138_hawk_nandflash_partition),
.ecc_mode = NAND_ECC_HW,
.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST,
.ecc_bits = 4,
.bbt_options = NAND_BBT_USE_FLASH,
.options = NAND_BUSWIDTH_16,


@ -191,7 +191,7 @@ static struct s3c2410_platform_nand smdk_nand_info = {
.twrph1 = 20,
.nr_sets = ARRAY_SIZE(smdk_nand_sets),
.sets = smdk_nand_sets,
.ecc_mode = NAND_ECC_SOFT,
.engine_type = NAND_ECC_ENGINE_TYPE_SOFT,
};
/* devices we initialise */


@ -218,7 +218,7 @@ static struct s3c2410_platform_nand __initdata anubis_nand_info = {
.nr_sets = ARRAY_SIZE(anubis_nand_sets),
.sets = anubis_nand_sets,
.select_chip = anubis_nand_select,
.ecc_mode = NAND_ECC_SOFT,
.engine_type = NAND_ECC_ENGINE_TYPE_SOFT,
};
/* IDE channels */


@ -109,7 +109,7 @@ static struct s3c2410_platform_nand __initdata at2440evb_nand_info = {
.twrph1 = 40,
.nr_sets = ARRAY_SIZE(at2440evb_nand_sets),
.sets = at2440evb_nand_sets,
.ecc_mode = NAND_ECC_SOFT,
.engine_type = NAND_ECC_ENGINE_TYPE_SOFT,
};
/* DM9000AEP 10/100 ethernet controller */


@ -294,7 +294,7 @@ static struct s3c2410_platform_nand __initdata bast_nand_info = {
.nr_sets = ARRAY_SIZE(bast_nand_sets),
.sets = bast_nand_sets,
.select_chip = bast_nand_select,
.ecc_mode = NAND_ECC_SOFT,
.engine_type = NAND_ECC_ENGINE_TYPE_SOFT,
};
/* DM9000 */


@ -417,7 +417,7 @@ static struct s3c2410_platform_nand __initdata gta02_nand_info = {
.twrph1 = 15,
.nr_sets = ARRAY_SIZE(gta02_nand_sets),
.sets = gta02_nand_sets,
.ecc_mode = NAND_ECC_SOFT,
.engine_type = NAND_ECC_ENGINE_TYPE_SOFT,
};


@ -228,7 +228,7 @@ static struct s3c2410_platform_nand __initdata jive_nand_info = {
.twrph1 = 40,
.sets = jive_nand_sets,
.nr_sets = ARRAY_SIZE(jive_nand_sets),
.ecc_mode = NAND_ECC_SOFT,
.engine_type = NAND_ECC_ENGINE_TYPE_SOFT,
};
static int __init jive_mtdset(char *options)


@ -296,7 +296,7 @@ static struct s3c2410_platform_nand mini2440_nand_info __initdata = {
.nr_sets = ARRAY_SIZE(mini2440_nand_sets),
.sets = mini2440_nand_sets,
.ignore_unset_ecc = 1,
.ecc_mode = NAND_ECC_HW,
.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST,
};
/* DM9000AEP 10/100 ethernet controller */


@ -234,7 +234,7 @@ static struct s3c2410_platform_nand __initdata osiris_nand_info = {
.nr_sets = ARRAY_SIZE(osiris_nand_sets),
.sets = osiris_nand_sets,
.select_chip = osiris_nand_select,
.ecc_mode = NAND_ECC_SOFT,
.engine_type = NAND_ECC_ENGINE_TYPE_SOFT,
};
/* PCMCIA control and configuration */


@ -287,7 +287,7 @@ static struct s3c2410_platform_nand __initdata qt2410_nand_info = {
.twrph1 = 20,
.nr_sets = ARRAY_SIZE(qt2410_nand_sets),
.sets = qt2410_nand_sets,
.ecc_mode = NAND_ECC_SOFT,
.engine_type = NAND_ECC_ENGINE_TYPE_SOFT,
};
/* UDC */


@ -620,7 +620,7 @@ static struct s3c2410_platform_nand rx1950_nand_info = {
.twrph1 = 15,
.nr_sets = ARRAY_SIZE(rx1950_nand_sets),
.sets = rx1950_nand_sets,
.ecc_mode = NAND_ECC_SOFT,
.engine_type = NAND_ECC_ENGINE_TYPE_SOFT,
};
static struct s3c2410_udc_mach_info rx1950_udc_cfg __initdata = {


@ -158,7 +158,7 @@ static struct s3c2410_platform_nand __initdata rx3715_nand_info = {
.twrph1 = 15,
.nr_sets = ARRAY_SIZE(rx3715_nand_sets),
.sets = rx3715_nand_sets,
.ecc_mode = NAND_ECC_SOFT,
.engine_type = NAND_ECC_ENGINE_TYPE_SOFT,
};
static struct platform_device *rx3715_devices[] __initdata = {


@ -112,7 +112,7 @@ static struct s3c2410_platform_nand __initdata vstms_nand_info = {
.twrph1 = 20,
.nr_sets = ARRAY_SIZE(vstms_nand_sets),
.sets = vstms_nand_sets,
.ecc_mode = NAND_ECC_SOFT,
.engine_type = NAND_ECC_ENGINE_TYPE_SOFT,
};
static struct platform_device *vstms_devices[] __initdata = {


@ -199,7 +199,7 @@ static struct s3c2410_platform_nand hmt_nand_info = {
.twrph1 = 40,
.nr_sets = ARRAY_SIZE(hmt_nand_sets),
.sets = hmt_nand_sets,
.ecc_mode = NAND_ECC_SOFT,
.engine_type = NAND_ECC_ENGINE_TYPE_SOFT,
};
static struct gpio_led hmt_leds[] = {


@ -136,7 +136,7 @@ static struct s3c2410_platform_nand mini6410_nand_info = {
.twrph1 = 40,
.nr_sets = ARRAY_SIZE(mini6410_nand_sets),
.sets = mini6410_nand_sets,
.ecc_mode = NAND_ECC_SOFT,
.engine_type = NAND_ECC_ENGINE_TYPE_SOFT,
};
static struct s3c_fb_pd_win mini6410_lcd_type0_fb_win = {


@ -188,7 +188,7 @@ static struct s3c2410_platform_nand real6410_nand_info = {
.twrph1 = 40,
.nr_sets = ARRAY_SIZE(real6410_nand_sets),
.sets = real6410_nand_sets,
.ecc_mode = NAND_ECC_SOFT,
.engine_type = NAND_ECC_ENGINE_TYPE_SOFT,
};
static struct platform_device *real6410_devices[] __initdata = {


@ -91,7 +91,7 @@ config MTD_MCHP23K256
config MTD_SPEAR_SMI
tristate "SPEAR MTD NOR Support through SMI controller"
depends on PLAT_SPEAR
depends on PLAT_SPEAR || COMPILE_TEST
default y
help
This enable SNOR support on SPEAR platforms using SMI controller


@ -436,7 +436,10 @@ static int flash_read (struct mtd_info *mtd,loff_t from,size_t len,size_t *retle
{
int gap = BUSWIDTH - (from & (BUSWIDTH - 1));
while (len && gap--) *buf++ = read8 (from++), len--;
while (len && gap--) {
*buf++ = read8 (from++);
len--;
}
}
/* now we read dwords until we reach a non-dword boundary */
@ -518,7 +521,10 @@ static int flash_write (struct mtd_info *mtd,loff_t to,size_t len,size_t *retlen
i = n = 0;
while (gap--) tmp[i++] = 0xFF;
while (len && i < BUSWIDTH) tmp[i++] = buf[n++], len--;
while (len && i < BUSWIDTH) {
tmp[i++] = buf[n++];
len--;
}
while (i < BUSWIDTH) tmp[i++] = 0xFF;
if (!write_dword (aligned,*((__u32 *) tmp))) return (-EIO);


@ -793,7 +793,7 @@ static int spear_smi_probe_config_dt(struct platform_device *pdev,
struct device_node *np)
{
struct spear_smi_plat_data *pdata = dev_get_platdata(&pdev->dev);
struct device_node *pp = NULL;
struct device_node *pp;
const __be32 *addr;
u32 val;
int len;
@ -812,7 +812,7 @@ static int spear_smi_probe_config_dt(struct platform_device *pdev,
return -ENOMEM;
/* Fill structs for each subnode (flash device) */
while ((pp = of_get_next_child(np, pp))) {
for_each_child_of_node(np, pp) {
pdata->np[i] = pp;
/* Read base-addr and size from DT */


@ -22,4 +22,11 @@ config HBMC_AM654
This is the driver for HyperBus controller on TI's AM65x and
other SoCs
config RPCIF_HYPERBUS
tristate "Renesas RPC-IF HyperBus driver"
depends on RENESAS_RPCIF
depends on MTD_CFI_BE_BYTE_SWAP
help
This option includes Renesas RPC-IF HyperBus support.
endif # MTD_HYPERBUS


@ -2,3 +2,4 @@
obj-$(CONFIG_MTD_HYPERBUS) += hyperbus-core.o
obj-$(CONFIG_HBMC_AM654) += hbmc-am654.o
obj-$(CONFIG_RPCIF_HYPERBUS) += rpc-if.o


@ -3,6 +3,10 @@
// Copyright (C) 2019 Texas Instruments Incorporated - https://www.ti.com/
// Author: Vignesh Raghavendra <vigneshr@ti.com>
#include <linux/completion.h>
#include <linux/dma-direction.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/module.h>
@ -13,11 +17,18 @@
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/sched/task_stack.h>
#include <linux/types.h>
#define AM654_HBMC_CALIB_COUNT 25
struct am654_hbmc_device_priv {
struct completion rx_dma_complete;
phys_addr_t device_base;
struct hyperbus_ctlr *ctlr;
struct dma_chan *rx_chan;
};
struct am654_hbmc_priv {
struct hyperbus_ctlr ctlr;
struct hyperbus_device hbdev;
@ -52,13 +63,103 @@ static int am654_hbmc_calibrate(struct hyperbus_device *hbdev)
return ret;
}
static void am654_hbmc_dma_callback(void *param)
{
struct am654_hbmc_device_priv *priv = param;
complete(&priv->rx_dma_complete);
}
static int am654_hbmc_dma_read(struct am654_hbmc_device_priv *priv, void *to,
unsigned long from, ssize_t len)
{
enum dma_ctrl_flags flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
struct dma_chan *rx_chan = priv->rx_chan;
struct dma_async_tx_descriptor *tx;
dma_addr_t dma_dst, dma_src;
dma_cookie_t cookie;
int ret;
if (!priv->rx_chan || !virt_addr_valid(to) || object_is_on_stack(to))
return -EINVAL;
dma_dst = dma_map_single(rx_chan->device->dev, to, len, DMA_FROM_DEVICE);
if (dma_mapping_error(rx_chan->device->dev, dma_dst)) {
dev_dbg(priv->ctlr->dev, "DMA mapping failed\n");
return -EIO;
}
dma_src = priv->device_base + from;
tx = dmaengine_prep_dma_memcpy(rx_chan, dma_dst, dma_src, len, flags);
if (!tx) {
dev_err(priv->ctlr->dev, "device_prep_dma_memcpy error\n");
ret = -EIO;
goto unmap_dma;
}
reinit_completion(&priv->rx_dma_complete);
tx->callback = am654_hbmc_dma_callback;
tx->callback_param = priv;
cookie = dmaengine_submit(tx);
ret = dma_submit_error(cookie);
if (ret) {
dev_err(priv->ctlr->dev, "dma_submit_error %d\n", cookie);
goto unmap_dma;
}
dma_async_issue_pending(rx_chan);
if (!wait_for_completion_timeout(&priv->rx_dma_complete, msecs_to_jiffies(len + 1000))) {
dmaengine_terminate_sync(rx_chan);
dev_err(priv->ctlr->dev, "DMA wait_for_completion_timeout\n");
ret = -ETIMEDOUT;
}
unmap_dma:
dma_unmap_single(rx_chan->device->dev, dma_dst, len, DMA_FROM_DEVICE);
return ret;
}
static void am654_hbmc_read(struct hyperbus_device *hbdev, void *to,
unsigned long from, ssize_t len)
{
struct am654_hbmc_device_priv *priv = hbdev->priv;
if (len < SZ_1K || am654_hbmc_dma_read(priv, to, from, len))
memcpy_fromio(to, hbdev->map.virt + from, len);
}
static const struct hyperbus_ops am654_hbmc_ops = {
.calibrate = am654_hbmc_calibrate,
.copy_from = am654_hbmc_read,
};
static int am654_hbmc_request_mmap_dma(struct am654_hbmc_device_priv *priv)
{
struct dma_chan *rx_chan;
dma_cap_mask_t mask;
dma_cap_zero(mask);
dma_cap_set(DMA_MEMCPY, mask);
rx_chan = dma_request_chan_by_mask(&mask);
if (IS_ERR(rx_chan)) {
if (PTR_ERR(rx_chan) == -EPROBE_DEFER)
return -EPROBE_DEFER;
dev_dbg(priv->ctlr->dev, "No DMA channel available\n");
return 0;
}
priv->rx_chan = rx_chan;
init_completion(&priv->rx_dma_complete);
return 0;
}
static int am654_hbmc_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
struct am654_hbmc_device_priv *dev_priv;
struct device *dev = &pdev->dev;
struct am654_hbmc_priv *priv;
struct resource res;
@ -70,7 +171,8 @@ static int am654_hbmc_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, priv);
ret = of_address_to_resource(np, 0, &res);
priv->hbdev.np = of_get_next_child(np, NULL);
ret = of_address_to_resource(priv->hbdev.np, 0, &res);
if (ret)
return ret;
@ -88,13 +190,6 @@ static int am654_hbmc_probe(struct platform_device *pdev)
priv->mux_ctrl = control;
}
pm_runtime_enable(dev);
ret = pm_runtime_get_sync(dev);
if (ret < 0) {
pm_runtime_put_noidle(dev);
goto disable_pm;
}
priv->hbdev.map.size = resource_size(&res);
priv->hbdev.map.virt = devm_ioremap_resource(dev, &res);
if (IS_ERR(priv->hbdev.map.virt))
@ -103,17 +198,32 @@ static int am654_hbmc_probe(struct platform_device *pdev)
priv->ctlr.dev = dev;
priv->ctlr.ops = &am654_hbmc_ops;
priv->hbdev.ctlr = &priv->ctlr;
priv->hbdev.np = of_get_next_child(dev->of_node, NULL);
dev_priv = devm_kzalloc(dev, sizeof(*dev_priv), GFP_KERNEL);
if (!dev_priv) {
ret = -ENOMEM;
goto disable_mux;
}
priv->hbdev.priv = dev_priv;
dev_priv->device_base = res.start;
dev_priv->ctlr = &priv->ctlr;
ret = am654_hbmc_request_mmap_dma(dev_priv);
if (ret)
goto disable_mux;
ret = hyperbus_register_device(&priv->hbdev);
if (ret) {
dev_err(dev, "failed to register controller\n");
pm_runtime_put_sync(&pdev->dev);
goto disable_pm;
goto release_dma;
}
return 0;
disable_pm:
pm_runtime_disable(dev);
release_dma:
if (dev_priv->rx_chan)
dma_release_channel(dev_priv->rx_chan);
disable_mux:
if (priv->mux_ctrl)
mux_control_deselect(priv->mux_ctrl);
return ret;
@ -122,13 +232,15 @@ disable_pm:
static int am654_hbmc_remove(struct platform_device *pdev)
{
struct am654_hbmc_priv *priv = platform_get_drvdata(pdev);
struct am654_hbmc_device_priv *dev_priv = priv->hbdev.priv;
int ret;
ret = hyperbus_unregister_device(&priv->hbdev);
if (priv->mux_ctrl)
mux_control_deselect(priv->mux_ctrl);
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
if (dev_priv->rx_chan)
dma_release_channel(dev_priv->rx_chan);
return ret;
}


@ -0,0 +1,170 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Linux driver for RPC-IF HyperFlash
*
* Copyright (C) 2019-2020 Cogent Embedded, Inc.
*/
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mtd/hyperbus.h>
#include <linux/mtd/mtd.h>
#include <linux/mux/consumer.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/types.h>
#include <memory/renesas-rpc-if.h>
struct rpcif_hyperbus {
struct rpcif rpc;
struct hyperbus_ctlr ctlr;
struct hyperbus_device hbdev;
};
static const struct rpcif_op rpcif_op_tmpl = {
.cmd = {
.buswidth = 8,
.ddr = true,
},
.ocmd = {
.buswidth = 8,
.ddr = true,
},
.addr = {
.nbytes = 1,
.buswidth = 8,
.ddr = true,
},
.data = {
.buswidth = 8,
.ddr = true,
},
};
static void rpcif_hb_prepare_read(struct rpcif *rpc, void *to,
unsigned long from, ssize_t len)
{
struct rpcif_op op = rpcif_op_tmpl;
op.cmd.opcode = HYPERBUS_RW_READ | HYPERBUS_AS_MEM;
op.addr.val = from >> 1;
op.dummy.buswidth = 1;
op.dummy.ncycles = 15;
op.data.dir = RPCIF_DATA_IN;
op.data.nbytes = len;
op.data.buf.in = to;
rpcif_prepare(rpc, &op, NULL, NULL);
}
static void rpcif_hb_prepare_write(struct rpcif *rpc, unsigned long to,
void *from, ssize_t len)
{
struct rpcif_op op = rpcif_op_tmpl;
op.cmd.opcode = HYPERBUS_RW_WRITE | HYPERBUS_AS_MEM;
op.addr.val = to >> 1;
op.data.dir = RPCIF_DATA_OUT;
op.data.nbytes = len;
op.data.buf.out = from;
rpcif_prepare(rpc, &op, NULL, NULL);
}
static u16 rpcif_hb_read16(struct hyperbus_device *hbdev, unsigned long addr)
{
struct rpcif_hyperbus *hyperbus =
container_of(hbdev, struct rpcif_hyperbus, hbdev);
map_word data;
rpcif_hb_prepare_read(&hyperbus->rpc, &data, addr, 2);
rpcif_manual_xfer(&hyperbus->rpc);
return data.x[0];
}
static void rpcif_hb_write16(struct hyperbus_device *hbdev, unsigned long addr,
u16 data)
{
struct rpcif_hyperbus *hyperbus =
container_of(hbdev, struct rpcif_hyperbus, hbdev);
rpcif_hb_prepare_write(&hyperbus->rpc, addr, &data, 2);
rpcif_manual_xfer(&hyperbus->rpc);
}
static void rpcif_hb_copy_from(struct hyperbus_device *hbdev, void *to,
unsigned long from, ssize_t len)
{
struct rpcif_hyperbus *hyperbus =
container_of(hbdev, struct rpcif_hyperbus, hbdev);
rpcif_hb_prepare_read(&hyperbus->rpc, to, from, len);
rpcif_dirmap_read(&hyperbus->rpc, from, len, to);
}
static const struct hyperbus_ops rpcif_hb_ops = {
.read16 = rpcif_hb_read16,
.write16 = rpcif_hb_write16,
.copy_from = rpcif_hb_copy_from,
};
static int rpcif_hb_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct rpcif_hyperbus *hyperbus;
int error;
hyperbus = devm_kzalloc(dev, sizeof(*hyperbus), GFP_KERNEL);
if (!hyperbus)
return -ENOMEM;
rpcif_sw_init(&hyperbus->rpc, pdev->dev.parent);
platform_set_drvdata(pdev, hyperbus);
rpcif_enable_rpm(&hyperbus->rpc);
rpcif_hw_init(&hyperbus->rpc, true);
hyperbus->hbdev.map.size = hyperbus->rpc.size;
hyperbus->hbdev.map.virt = hyperbus->rpc.dirmap;
hyperbus->ctlr.dev = dev;
hyperbus->ctlr.ops = &rpcif_hb_ops;
hyperbus->hbdev.ctlr = &hyperbus->ctlr;
hyperbus->hbdev.np = of_get_next_child(pdev->dev.parent->of_node, NULL);
error = hyperbus_register_device(&hyperbus->hbdev);
if (error)
rpcif_disable_rpm(&hyperbus->rpc);
return error;
}
static int rpcif_hb_remove(struct platform_device *pdev)
{
struct rpcif_hyperbus *hyperbus = platform_get_drvdata(pdev);
int error = hyperbus_unregister_device(&hyperbus->hbdev);
struct rpcif *rpc = dev_get_drvdata(pdev->dev.parent);
rpcif_disable_rpm(rpc);
return error;
}
static struct platform_driver rpcif_platform_driver = {
.probe = rpcif_hb_probe,
.remove = rpcif_hb_remove,
.driver = {
.name = "rpc-if-hyperflash",
},
};
module_platform_driver(rpcif_platform_driver);
MODULE_DESCRIPTION("Renesas RPC-IF HyperFlash driver");
MODULE_LICENSE("GPL v2");


@ -393,6 +393,17 @@ static int lpddr2_nvm_lock(struct mtd_info *mtd, loff_t start_add,
return lpddr2_nvm_do_block_op(mtd, start_add, len, LPDDR2_NVM_LOCK);
}
static const struct mtd_info lpddr2_nvm_mtd_info = {
.type = MTD_RAM,
.writesize = 1,
.flags = (MTD_CAP_NVRAM | MTD_POWERUP_LOCK),
._read = lpddr2_nvm_read,
._write = lpddr2_nvm_write,
._erase = lpddr2_nvm_erase,
._unlock = lpddr2_nvm_unlock,
._lock = lpddr2_nvm_lock,
};
/*
* lpddr2_nvm driver probe method
*/
@ -433,6 +444,7 @@ static int lpddr2_nvm_probe(struct platform_device *pdev)
.pfow_base = OW_BASE_ADDRESS,
.fldrv_priv = pcm_data,
};
if (IS_ERR(map->virt))
return PTR_ERR(map->virt);
@ -444,22 +456,13 @@ static int lpddr2_nvm_probe(struct platform_device *pdev)
return PTR_ERR(pcm_data->ctl_regs);
/* Populate mtd_info data structure */
*mtd = (struct mtd_info) {
.dev = { .parent = &pdev->dev },
.name = pdev->dev.init_name,
.type = MTD_RAM,
.priv = map,
.size = resource_size(add_range),
.erasesize = ERASE_BLOCKSIZE * pcm_data->bus_width,
.writesize = 1,
.writebufsize = WRITE_BUFFSIZE * pcm_data->bus_width,
.flags = (MTD_CAP_NVRAM | MTD_POWERUP_LOCK),
._read = lpddr2_nvm_read,
._write = lpddr2_nvm_write,
._erase = lpddr2_nvm_erase,
._unlock = lpddr2_nvm_unlock,
._lock = lpddr2_nvm_lock,
};
*mtd = lpddr2_nvm_mtd_info;
mtd->dev.parent = &pdev->dev;
mtd->name = pdev->dev.init_name;
mtd->priv = map;
mtd->size = resource_size(add_range);
mtd->erasesize = ERASE_BLOCKSIZE * pcm_data->bus_width;
mtd->writebufsize = WRITE_BUFFSIZE * pcm_data->bus_width;
/* Verify the presence of the device looking for PFOW string */
if (!lpddr2_nvm_pfow_present(map)) {


@ -94,6 +94,34 @@ struct mtd_info *lpddr_cmdset(struct map_info *map)
}
EXPORT_SYMBOL(lpddr_cmdset);
static void print_drs_error(unsigned int dsr)
{
int prog_status = (dsr & DSR_RPS) >> 8;
if (!(dsr & DSR_AVAILABLE))
pr_notice("DSR.15: (0) Device not Available\n");
if ((prog_status & 0x03) == 0x03)
pr_notice("DSR.9,8: (11) Attempt to program invalid half with 41h command\n");
else if (prog_status & 0x02)
pr_notice("DSR.9,8: (10) Object Mode Program attempt in region with Control Mode data\n");
else if (prog_status & 0x01)
pr_notice("DSR.9,8: (01) Program attempt in region with Object Mode data\n");
if (!(dsr & DSR_READY_STATUS))
pr_notice("DSR.7: (0) Device is Busy\n");
if (dsr & DSR_ESS)
pr_notice("DSR.6: (1) Erase Suspended\n");
if (dsr & DSR_ERASE_STATUS)
pr_notice("DSR.5: (1) Erase/Blank check error\n");
if (dsr & DSR_PROGRAM_STATUS)
pr_notice("DSR.4: (1) Program Error\n");
if (dsr & DSR_VPPS)
pr_notice("DSR.3: (1) Vpp low detect, operation aborted\n");
if (dsr & DSR_PSS)
pr_notice("DSR.2: (1) Program suspended\n");
if (dsr & DSR_DPS)
pr_notice("DSR.1: (1) Aborted Erase/Program attempt on locked block\n");
}
static int wait_for_ready(struct map_info *map, struct flchip *chip,
unsigned int chip_op_time)
{


@ -75,6 +75,17 @@ config MTD_PHYSMAP_OF
physically into the CPU's memory. The mapping description here is
taken from OF device tree.
config MTD_PHYSMAP_BT1_ROM
bool "Baikal-T1 Boot ROMs OF-based physical memory map handling"
depends on MTD_PHYSMAP_OF
depends on MIPS_BAIKAL_T1 || COMPILE_TEST
select MTD_COMPLEX_MAPPINGS
select MULTIPLEXER
select MUX_MMIO
help
This provides some extra DT physmap parsing for the Baikal-T1
platforms, some detection and setting up ROMs-specific accessors.
config MTD_PHYSMAP_VERSATILE
bool "ARM Versatile OF-based physical memory map handling"
depends on MTD_PHYSMAP_OF


@ -18,6 +18,7 @@ obj-$(CONFIG_MTD_CK804XROM) += ck804xrom.o
obj-$(CONFIG_MTD_TSUNAMI) += tsunami_flash.o
obj-$(CONFIG_MTD_PXA2XX) += pxa2xx-flash.o
physmap-objs-y += physmap-core.o
physmap-objs-$(CONFIG_MTD_PHYSMAP_BT1_ROM) += physmap-bt1-rom.o
physmap-objs-$(CONFIG_MTD_PHYSMAP_VERSATILE) += physmap-versatile.o
physmap-objs-$(CONFIG_MTD_PHYSMAP_GEMINI) += physmap-gemini.o
physmap-objs-$(CONFIG_MTD_PHYSMAP_IXP4XX) += physmap-ixp4xx.o


@ -0,0 +1,126 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2020 BAIKAL ELECTRONICS, JSC
*
* Authors:
* Serge Semin <Sergey.Semin@baikalelectronics.ru>
*
* Baikal-T1 Physically Mapped Internal ROM driver
*/
#include <linux/bits.h>
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/mtd/map.h>
#include <linux/mtd/xip.h>
#include <linux/mux/consumer.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/string.h>
#include <linux/types.h>
#include "physmap-bt1-rom.h"
/*
* Baikal-T1 SoC ROMs are only accessible by the dword-aligned instructions.
* We have to take this into account when implementing the data read-methods.
* Note there is no need in bothering with endianness, since both Baikal-T1
* CPU and MMIO are LE.
*/
static map_word __xipram bt1_rom_map_read(struct map_info *map,
unsigned long ofs)
{
void __iomem *src = map->virt + ofs;
unsigned long shift;
map_word ret;
u32 data;
/* Read data within offset dword. */
shift = (unsigned long)src & 0x3;
data = readl_relaxed(src - shift);
if (!shift) {
ret.x[0] = data;
return ret;
}
ret.x[0] = data >> (shift * BITS_PER_BYTE);
/* Read data from the next dword. */
shift = 4 - shift;
if (ofs + shift >= map->size)
return ret;
data = readl_relaxed(src + shift);
ret.x[0] |= data << (shift * BITS_PER_BYTE);
return ret;
}
static void __xipram bt1_rom_map_copy_from(struct map_info *map,
void *to, unsigned long from,
ssize_t len)
{
void __iomem *src = map->virt + from;
ssize_t shift, chunk;
u32 data;
if (len <= 0 || from >= map->size)
return;
/* Make sure we don't go over the map limit. */
len = min_t(ssize_t, map->size - from, len);
/*
* Since requested data size can be pretty big we have to implement
* the copy procedure as optimal as possible. That's why it's split
* up into the next three stages: unaligned head, aligned body,
* unaligned tail.
*/
shift = (ssize_t)src & 0x3;
if (shift) {
chunk = min_t(ssize_t, 4 - shift, len);
data = readl_relaxed(src - shift);
memcpy(to, &data + shift, chunk);
src += chunk;
to += chunk;
len -= chunk;
}
while (len >= 4) {
data = readl_relaxed(src);
memcpy(to, &data, 4);
src += 4;
to += 4;
len -= 4;
}
if (len) {
data = readl_relaxed(src);
memcpy(to, &data, len);
}
}
int of_flash_probe_bt1_rom(struct platform_device *pdev,
struct device_node *np,
struct map_info *map)
{
struct device *dev = &pdev->dev;
/* It's supposed to be read-only MTD. */
if (!of_device_is_compatible(np, "mtd-rom")) {
dev_info(dev, "No mtd-rom compatible string\n");
return 0;
}
/* Multiplatform guard. */
if (!of_device_is_compatible(np, "baikal,bt1-int-rom"))
return 0;
/* Sanity check the device parameters retrieved from DTB. */
if (map->bankwidth != 4)
dev_warn(dev, "Bank width is supposed to be 32 bits wide\n");
map->read = bt1_rom_map_read;
map->copy_from = bt1_rom_map_copy_from;
return 0;
}


@ -0,0 +1,17 @@
/* SPDX-License-Identifier: GPL-2.0-only */
#include <linux/mtd/map.h>
#include <linux/of.h>
#ifdef CONFIG_MTD_PHYSMAP_BT1_ROM
int of_flash_probe_bt1_rom(struct platform_device *pdev,
struct device_node *np,
struct map_info *map);
#else
static inline
int of_flash_probe_bt1_rom(struct platform_device *pdev,
struct device_node *np,
struct map_info *map)
{
return 0;
}
#endif


@ -41,6 +41,7 @@
#include <linux/pm_runtime.h>
#include <linux/gpio/consumer.h>
#include "physmap-bt1-rom.h"
#include "physmap-gemini.h"
#include "physmap-ixp4xx.h"
#include "physmap-versatile.h"
@ -371,6 +372,10 @@ static int physmap_flash_of_init(struct platform_device *dev)
info->maps[i].bankwidth = bankwidth;
info->maps[i].device_node = dp;
err = of_flash_probe_bt1_rom(dev, dp, &info->maps[i]);
if (err)
return err;
err = of_flash_probe_gemini(dev, dp, &info->maps[i]);
if (err)
return err;
@ -515,7 +520,8 @@ static int physmap_flash_probe(struct platform_device *dev)
dev_notice(&dev->dev, "physmap platform flash device: %pR\n",
res);
info->maps[i].name = dev_name(&dev->dev);
if (!info->maps[i].name)
info->maps[i].name = dev_name(&dev->dev);
if (!info->maps[i].phys)
info->maps[i].phys = res->start;


@ -40,7 +40,7 @@ struct memcard {
u32 blocklen;
u32 writecnt;
u32 readcnt;
u32 removeable;
u32 removable;
int partition;
int read;
unsigned char *blockread;
@ -619,7 +619,7 @@ static int vmu_connect(struct maple_device *mdev)
card->blocklen = ((basic_flash_data >> 16 & 0xFF) + 1) << 5;
card->writecnt = basic_flash_data >> 12 & 0xF;
card->readcnt = basic_flash_data >> 8 & 0xF;
card->removeable = basic_flash_data >> 7 & 1;
card->removable = basic_flash_data >> 7 & 1;
card->partition = 0;
@ -772,7 +772,6 @@ static void vmu_file_error(struct maple_device *mdev, void *recvbuf)
static int probe_maple_vmu(struct device *dev)
{
int error;
struct maple_device *mdev = to_maple_dev(dev);
struct maple_driver *mdrv = to_maple_driver(dev->driver);
@ -780,11 +779,7 @@ static int probe_maple_vmu(struct device *dev)
mdev->fileerr_handler = vmu_file_error;
mdev->driver = mdrv;
error = vmu_connect(mdev);
if (error)
return error;
return 0;
return vmu_connect(mdev);
}
static int remove_maple_vmu(struct device *dev)


@ -102,6 +102,47 @@ concat_read(struct mtd_info *mtd, loff_t from, size_t len,
return -EINVAL;
}
static int
concat_panic_write(struct mtd_info *mtd, loff_t to, size_t len,
size_t * retlen, const u_char * buf)
{
struct mtd_concat *concat = CONCAT(mtd);
int err = -EINVAL;
int i;
for (i = 0; i < concat->num_subdev; i++) {
struct mtd_info *subdev = concat->subdev[i];
size_t size, retsize;
if (to >= subdev->size) {
to -= subdev->size;
continue;
}
if (to + len > subdev->size)
size = subdev->size - to;
else
size = len;
err = mtd_panic_write(subdev, to, size, &retsize, buf);
if (err == -EOPNOTSUPP) {
printk(KERN_ERR "mtdconcat: Cannot write from panic without panic_write\n");
return err;
}
if (err)
break;
*retlen += retsize;
len -= size;
if (len == 0)
break;
err = -EINVAL;
buf += size;
to = 0;
}
return err;
}
static int
concat_write(struct mtd_info *mtd, loff_t to, size_t len,
size_t * retlen, const u_char * buf)
@ -648,6 +689,8 @@ struct mtd_info *mtd_concat_create(struct mtd_info *subdev[], /* subdevices to c
concat->mtd._block_isbad = concat_block_isbad;
if (subdev[0]->_block_markbad)
concat->mtd._block_markbad = concat_block_markbad;
if (subdev[0]->_panic_write)
concat->mtd._panic_write = concat_panic_write;
concat->mtd.ecc_stats.badblocks = subdev[0]->ecc_stats.badblocks;


@ -335,7 +335,7 @@ static const struct device_type mtd_devtype = {
.release = mtd_release,
};
static int mtd_partid_show(struct seq_file *s, void *p)
static int mtd_partid_debug_show(struct seq_file *s, void *p)
{
struct mtd_info *mtd = s->private;
@ -344,19 +344,9 @@ static int mtd_partid_show(struct seq_file *s, void *p)
return 0;
}
static int mtd_partid_debugfs_open(struct inode *inode, struct file *file)
{
return single_open(file, mtd_partid_show, inode->i_private);
}
DEFINE_SHOW_ATTRIBUTE(mtd_partid_debug);
static const struct file_operations mtd_partid_debug_fops = {
.open = mtd_partid_debugfs_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int mtd_partname_show(struct seq_file *s, void *p)
static int mtd_partname_debug_show(struct seq_file *s, void *p)
{
struct mtd_info *mtd = s->private;
@ -365,17 +355,7 @@ static int mtd_partname_show(struct seq_file *s, void *p)
return 0;
}
static int mtd_partname_debugfs_open(struct inode *inode, struct file *file)
{
return single_open(file, mtd_partname_show, inode->i_private);
}
static const struct file_operations mtd_partname_debug_fops = {
.open = mtd_partname_debugfs_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
DEFINE_SHOW_ATTRIBUTE(mtd_partname_debug);
static struct dentry *dfs_dir_mtd;
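
Both attributes now rely on DEFINE_SHOW_ATTRIBUTE() from <linux/seq_file.h>, which generates the open handler and file_operations that the removed lines spelled out by hand (plus an .owner field). Approximately, for the first attribute:

/* Rough expansion of DEFINE_SHOW_ATTRIBUTE(mtd_partid_debug) */
static int mtd_partid_debug_open(struct inode *inode, struct file *file)
{
        return single_open(file, mtd_partid_debug_show, inode->i_private);
}

static const struct file_operations mtd_partid_debug_fops = {
        .owner          = THIS_MODULE,
        .open           = mtd_partid_debug_open,
        .read           = seq_read,
        .llseek         = seq_lseek,
        .release        = single_release,
};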


@ -279,12 +279,13 @@ static void mtdoops_do_dump(struct kmsg_dumper *dumper,
kmsg_dump_get_buffer(dumper, true, cxt->oops_buf + MTDOOPS_HEADER_SIZE,
record_size - MTDOOPS_HEADER_SIZE, NULL);
/* Panics must be written immediately */
if (reason != KMSG_DUMP_OOPS)
if (reason != KMSG_DUMP_OOPS) {
/* Panics must be written immediately */
mtdoops_write(cxt, 1);
/* For other cases, schedule work to write it "nicely" */
schedule_work(&cxt->work_write);
} else {
/* For other cases, schedule work to write it "nicely" */
schedule_work(&cxt->work_write);
}
}
static void mtdoops_notify_add(struct mtd_info *mtd)


@ -9,4 +9,12 @@ source "drivers/mtd/nand/onenand/Kconfig"
source "drivers/mtd/nand/raw/Kconfig"
source "drivers/mtd/nand/spi/Kconfig"
menu "ECC engine support"
config MTD_NAND_ECC
bool
depends on MTD_NAND_CORE
endmenu
endmenu


@ -6,3 +6,5 @@ obj-$(CONFIG_MTD_NAND_CORE) += nandcore.o
obj-y += onenand/
obj-y += raw/
obj-y += spi/
nandcore-$(CONFIG_MTD_NAND_ECC) += ecc.o


@ -0,0 +1,484 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Generic Error-Correcting Code (ECC) engine
*
* Copyright (C) 2019 Macronix
* Author:
* Miquèl RAYNAL <miquel.raynal@bootlin.com>
*
*
* This file describes the abstraction of any NAND ECC engine. It has been
* designed to fit most cases, including parallel NANDs and SPI-NANDs.
*
* There are three main situations where instantiating this ECC engine makes
* sense:
* - external: The ECC engine is outside the NAND pipeline, typically this
* is a software ECC engine, or an hardware engine that is
* outside the NAND controller pipeline.
* - pipelined: The ECC engine is inside the NAND pipeline, ie. on the
* controller's side. This is the case of most of the raw NAND
* controllers. In the pipeline case, the ECC bytes are
* generated/data corrected on the fly when a page is
* written/read.
* - ondie: The ECC engine is inside the NAND pipeline, on the chip's side.
* Some NAND chips can correct themselves the data.
*
* Besides the initial setup and final cleanups, the interfaces are rather
* simple:
* - prepare: Prepare an I/O request. Enable/disable the ECC engine based on
* the I/O request type. In case of software correction or external
* engine, this step may involve to derive the ECC bytes and place
* them in the OOB area before a write.
* - finish: Finish an I/O request. Correct the data in case of a read
* request and report the number of corrected bits/uncorrectable
* errors. Most likely empty for write operations, unless you have
* hardware specific stuff to do, like shutting down the engine to
* save power.
*
* The I/O request should be enclosed in a prepare()/finish() pair of calls
* and will behave differently depending on the requested I/O type:
* - raw: Correction disabled
* - ecc: Correction enabled
*
* The request direction is impacting the logic as well:
* - read: Load data from the NAND chip
* - write: Store data in the NAND chip
*
* Mixing all this combinations together gives the following behavior.
* Those are just examples, drivers are free to add custom steps in their
* prepare/finish hook.
*
* [external ECC engine]
* - external + prepare + raw + read: do nothing
* - external + finish + raw + read: do nothing
* - external + prepare + raw + write: do nothing
* - external + finish + raw + write: do nothing
* - external + prepare + ecc + read: do nothing
* - external + finish + ecc + read: calculate expected ECC bytes, extract
* ECC bytes from OOB buffer, correct
* and report any bitflip/error
* - external + prepare + ecc + write: calculate ECC bytes and store them at
* the right place in the OOB buffer based
* on the OOB layout
* - external + finish + ecc + write: do nothing
*
* [pipelined ECC engine]
* - pipelined + prepare + raw + read: disable the controller's ECC engine if
* activated
* - pipelined + finish + raw + read: do nothing
* - pipelined + prepare + raw + write: disable the controller's ECC engine if
* activated
* - pipelined + finish + raw + write: do nothing
* - pipelined + prepare + ecc + read: enable the controller's ECC engine if
* deactivated
* - pipelined + finish + ecc + read: check the status, report any
* error/bitflip
* - pipelined + prepare + ecc + write: enable the controller's ECC engine if
* deactivated
* - pipelined + finish + ecc + write: do nothing
*
* [ondie ECC engine]
* - ondie + prepare + raw + read: send commands to disable the on-chip ECC
* engine if activated
* - ondie + finish + raw + read: do nothing
* - ondie + prepare + raw + write: send commands to disable the on-chip ECC
* engine if activated
* - ondie + finish + raw + write: do nothing
* - ondie + prepare + ecc + read: send commands to enable the on-chip ECC
* engine if deactivated
* - ondie + finish + ecc + read: send commands to check the status, report
* any error/bitflip
* - ondie + prepare + ecc + write: send commands to enable the on-chip ECC
* engine if deactivated
* - ondie + finish + ecc + write: do nothing
*/
#include <linux/module.h>
#include <linux/mtd/nand.h>
/**
* nand_ecc_init_ctx - Init the ECC engine context
* @nand: the NAND device
*
* On success, the caller is responsible of calling @nand_ecc_cleanup_ctx().
*/
int nand_ecc_init_ctx(struct nand_device *nand)
{
if (!nand->ecc.engine->ops->init_ctx)
return 0;
return nand->ecc.engine->ops->init_ctx(nand);
}
EXPORT_SYMBOL(nand_ecc_init_ctx);
/**
* nand_ecc_cleanup_ctx - Cleanup the ECC engine context
* @nand: the NAND device
*/
void nand_ecc_cleanup_ctx(struct nand_device *nand)
{
if (nand->ecc.engine->ops->cleanup_ctx)
nand->ecc.engine->ops->cleanup_ctx(nand);
}
EXPORT_SYMBOL(nand_ecc_cleanup_ctx);
/**
* nand_ecc_prepare_io_req - Prepare an I/O request
* @nand: the NAND device
* @req: the I/O request
*/
int nand_ecc_prepare_io_req(struct nand_device *nand,
struct nand_page_io_req *req)
{
if (!nand->ecc.engine->ops->prepare_io_req)
return 0;
return nand->ecc.engine->ops->prepare_io_req(nand, req);
}
EXPORT_SYMBOL(nand_ecc_prepare_io_req);
/**
* nand_ecc_finish_io_req - Finish an I/O request
* @nand: the NAND device
* @req: the I/O request
*/
int nand_ecc_finish_io_req(struct nand_device *nand,
struct nand_page_io_req *req)
{
if (!nand->ecc.engine->ops->finish_io_req)
return 0;
return nand->ecc.engine->ops->finish_io_req(nand, req);
}
EXPORT_SYMBOL(nand_ecc_finish_io_req);
/* Define default OOB placement schemes for large and small page devices */
static int nand_ooblayout_ecc_sp(struct mtd_info *mtd, int section,
struct mtd_oob_region *oobregion)
{
struct nand_device *nand = mtd_to_nanddev(mtd);
unsigned int total_ecc_bytes = nand->ecc.ctx.total;
if (section > 1)
return -ERANGE;
if (!section) {
oobregion->offset = 0;
if (mtd->oobsize == 16)
oobregion->length = 4;
else
oobregion->length = 3;
} else {
if (mtd->oobsize == 8)
return -ERANGE;
oobregion->offset = 6;
oobregion->length = total_ecc_bytes - 4;
}
return 0;
}
static int nand_ooblayout_free_sp(struct mtd_info *mtd, int section,
struct mtd_oob_region *oobregion)
{
if (section > 1)
return -ERANGE;
if (mtd->oobsize == 16) {
if (section)
return -ERANGE;
oobregion->length = 8;
oobregion->offset = 8;
} else {
oobregion->length = 2;
if (!section)
oobregion->offset = 3;
else
oobregion->offset = 6;
}
return 0;
}
static const struct mtd_ooblayout_ops nand_ooblayout_sp_ops = {
.ecc = nand_ooblayout_ecc_sp,
.free = nand_ooblayout_free_sp,
};
const struct mtd_ooblayout_ops *nand_get_small_page_ooblayout(void)
{
return &nand_ooblayout_sp_ops;
}
EXPORT_SYMBOL_GPL(nand_get_small_page_ooblayout);
static int nand_ooblayout_ecc_lp(struct mtd_info *mtd, int section,
struct mtd_oob_region *oobregion)
{
struct nand_device *nand = mtd_to_nanddev(mtd);
unsigned int total_ecc_bytes = nand->ecc.ctx.total;
if (section || !total_ecc_bytes)
return -ERANGE;
oobregion->length = total_ecc_bytes;
oobregion->offset = mtd->oobsize - oobregion->length;
return 0;
}
static int nand_ooblayout_free_lp(struct mtd_info *mtd, int section,
struct mtd_oob_region *oobregion)
{
struct nand_device *nand = mtd_to_nanddev(mtd);
unsigned int total_ecc_bytes = nand->ecc.ctx.total;
if (section)
return -ERANGE;
oobregion->length = mtd->oobsize - total_ecc_bytes - 2;
oobregion->offset = 2;
return 0;
}
static const struct mtd_ooblayout_ops nand_ooblayout_lp_ops = {
.ecc = nand_ooblayout_ecc_lp,
.free = nand_ooblayout_free_lp,
};
const struct mtd_ooblayout_ops *nand_get_large_page_ooblayout(void)
{
return &nand_ooblayout_lp_ops;
}
EXPORT_SYMBOL_GPL(nand_get_large_page_ooblayout);
/*
* Support the old "large page" layout used for 1-bit Hamming ECC where ECC
* are placed at a fixed offset.
*/
static int nand_ooblayout_ecc_lp_hamming(struct mtd_info *mtd, int section,
struct mtd_oob_region *oobregion)
{
struct nand_device *nand = mtd_to_nanddev(mtd);
unsigned int total_ecc_bytes = nand->ecc.ctx.total;
if (section)
return -ERANGE;
switch (mtd->oobsize) {
case 64:
oobregion->offset = 40;
break;
case 128:
oobregion->offset = 80;
break;
default:
return -EINVAL;
}
oobregion->length = total_ecc_bytes;
if (oobregion->offset + oobregion->length > mtd->oobsize)
return -ERANGE;
return 0;
}
static int nand_ooblayout_free_lp_hamming(struct mtd_info *mtd, int section,
struct mtd_oob_region *oobregion)
{
struct nand_device *nand = mtd_to_nanddev(mtd);
unsigned int total_ecc_bytes = nand->ecc.ctx.total;
int ecc_offset = 0;
if (section < 0 || section > 1)
return -ERANGE;
switch (mtd->oobsize) {
case 64:
ecc_offset = 40;
break;
case 128:
ecc_offset = 80;
break;
default:
return -EINVAL;
}
if (section == 0) {
oobregion->offset = 2;
oobregion->length = ecc_offset - 2;
} else {
oobregion->offset = ecc_offset + total_ecc_bytes;
oobregion->length = mtd->oobsize - oobregion->offset;
}
return 0;
}
static const struct mtd_ooblayout_ops nand_ooblayout_lp_hamming_ops = {
.ecc = nand_ooblayout_ecc_lp_hamming,
.free = nand_ooblayout_free_lp_hamming,
};
const struct mtd_ooblayout_ops *nand_get_large_page_hamming_ooblayout(void)
{
return &nand_ooblayout_lp_hamming_ops;
}
EXPORT_SYMBOL_GPL(nand_get_large_page_hamming_ooblayout);
static enum nand_ecc_engine_type
of_get_nand_ecc_engine_type(struct device_node *np)
{
struct device_node *eng_np;
if (of_property_read_bool(np, "nand-no-ecc-engine"))
return NAND_ECC_ENGINE_TYPE_NONE;
if (of_property_read_bool(np, "nand-use-soft-ecc-engine"))
return NAND_ECC_ENGINE_TYPE_SOFT;
eng_np = of_parse_phandle(np, "nand-ecc-engine", 0);
of_node_put(eng_np);
if (eng_np) {
if (eng_np == np)
return NAND_ECC_ENGINE_TYPE_ON_DIE;
else
return NAND_ECC_ENGINE_TYPE_ON_HOST;
}
return NAND_ECC_ENGINE_TYPE_INVALID;
}
static const char * const nand_ecc_placement[] = {
[NAND_ECC_PLACEMENT_OOB] = "oob",
[NAND_ECC_PLACEMENT_INTERLEAVED] = "interleaved",
};
static enum nand_ecc_placement of_get_nand_ecc_placement(struct device_node *np)
{
enum nand_ecc_placement placement;
const char *pm;
int err;
err = of_property_read_string(np, "nand-ecc-placement", &pm);
if (!err) {
for (placement = NAND_ECC_PLACEMENT_OOB;
placement < ARRAY_SIZE(nand_ecc_placement); placement++) {
if (!strcasecmp(pm, nand_ecc_placement[placement]))
return placement;
}
}
return NAND_ECC_PLACEMENT_UNKNOWN;
}
static const char * const nand_ecc_algos[] = {
[NAND_ECC_ALGO_HAMMING] = "hamming",
[NAND_ECC_ALGO_BCH] = "bch",
[NAND_ECC_ALGO_RS] = "rs",
};
static enum nand_ecc_algo of_get_nand_ecc_algo(struct device_node *np)
{
enum nand_ecc_algo ecc_algo;
const char *pm;
int err;
err = of_property_read_string(np, "nand-ecc-algo", &pm);
if (!err) {
for (ecc_algo = NAND_ECC_ALGO_HAMMING;
ecc_algo < ARRAY_SIZE(nand_ecc_algos);
ecc_algo++) {
if (!strcasecmp(pm, nand_ecc_algos[ecc_algo]))
return ecc_algo;
}
}
return NAND_ECC_ALGO_UNKNOWN;
}
static int of_get_nand_ecc_step_size(struct device_node *np)
{
int ret;
u32 val;
ret = of_property_read_u32(np, "nand-ecc-step-size", &val);
return ret ? ret : val;
}
static int of_get_nand_ecc_strength(struct device_node *np)
{
int ret;
u32 val;
ret = of_property_read_u32(np, "nand-ecc-strength", &val);
return ret ? ret : val;
}
void of_get_nand_ecc_user_config(struct nand_device *nand)
{
struct device_node *dn = nanddev_get_of_node(nand);
int strength, size;
nand->ecc.user_conf.engine_type = of_get_nand_ecc_engine_type(dn);
nand->ecc.user_conf.algo = of_get_nand_ecc_algo(dn);
nand->ecc.user_conf.placement = of_get_nand_ecc_placement(dn);
strength = of_get_nand_ecc_strength(dn);
if (strength >= 0)
nand->ecc.user_conf.strength = strength;
size = of_get_nand_ecc_step_size(dn);
if (size >= 0)
nand->ecc.user_conf.step_size = size;
if (of_property_read_bool(dn, "nand-ecc-maximize"))
nand->ecc.user_conf.flags |= NAND_ECC_MAXIMIZE_STRENGTH;
}
EXPORT_SYMBOL(of_get_nand_ecc_user_config);
/**
* nand_ecc_is_strong_enough - Check if the chip configuration meets the
* datasheet requirements.
*
* @nand: Device to check
*
* If our configuration corrects A bits per B bytes and the minimum
* required correction level is X bits per Y bytes, then we must ensure
* both of the following are true:
*
* (1) A / B >= X / Y
* (2) A >= X
*
* Requirement (1) ensures we can correct for the required bitflip density.
* Requirement (2) ensures we can correct even when all bitflips are clumped
* in the same sector.
*/
bool nand_ecc_is_strong_enough(struct nand_device *nand)
{
const struct nand_ecc_props *reqs = nanddev_get_ecc_requirements(nand);
const struct nand_ecc_props *conf = nanddev_get_ecc_conf(nand);
struct mtd_info *mtd = nanddev_to_mtd(nand);
int corr, ds_corr;
if (conf->step_size == 0 || reqs->step_size == 0)
/* Not enough information */
return true;
/*
* We get the number of corrected bits per page to compare
* the correction density.
*/
corr = (mtd->writesize * conf->strength) / conf->step_size;
ds_corr = (mtd->writesize * reqs->strength) / reqs->step_size;
return corr >= ds_corr && conf->strength >= reqs->strength;
}
EXPORT_SYMBOL(nand_ecc_is_strong_enough);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Miquel Raynal <miquel.raynal@bootlin.com>");
MODULE_DESCRIPTION("Generic ECC engine");
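
To make the prepare()/finish() contract described at the top of this new file concrete, here is a hedged sketch of a page read framed by the two helpers; example_read_page() and the transfer step are placeholders, not part of the commit.

#include <linux/mtd/nand.h>

/* Illustration only: frame a page read with the generic ECC helpers. */
static int example_read_page(struct nand_device *nand,
                             struct nand_page_io_req *req)
{
        int ret;

        /* Enables or disables the engine depending on the request (raw vs ecc) */
        ret = nand_ecc_prepare_io_req(nand, req);
        if (ret)
                return ret;

        /* ... controller- or bus-specific data transfer happens here ... */

        /* Corrects read data and reports bitflips/uncorrectable errors */
        return nand_ecc_finish_io_req(nand, req);
}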


@ -1052,16 +1052,11 @@ static int onenand_transfer_auto_oob(struct mtd_info *mtd, uint8_t *buf, int col
int thislen)
{
struct onenand_chip *this = mtd->priv;
int ret;
this->read_bufferram(mtd, ONENAND_SPARERAM, this->oob_buf, 0,
mtd->oobsize);
ret = mtd_ooblayout_get_databytes(mtd, buf, this->oob_buf,
column, thislen);
if (ret)
return ret;
return 0;
return mtd_ooblayout_get_databytes(mtd, buf, this->oob_buf,
column, thislen);
}
/**


@ -494,11 +494,8 @@ static int omap2_onenand_probe(struct platform_device *pdev)
c->int_gpiod = devm_gpiod_get_optional(dev, "int", GPIOD_IN);
if (IS_ERR(c->int_gpiod)) {
r = PTR_ERR(c->int_gpiod);
/* Just try again if this happens */
if (r != -EPROBE_DEFER)
dev_err(dev, "error getting gpio: %d\n", r);
return r;
return dev_err_probe(dev, PTR_ERR(c->int_gpiod), "error getting gpio\n");
}
if (c->int_gpiod) {


@ -13,6 +13,7 @@ config MTD_NAND_ECC_SW_HAMMING_SMC
menuconfig MTD_RAW_NAND
tristate "Raw/Parallel NAND Device Support"
select MTD_NAND_CORE
select MTD_NAND_ECC
select MTD_NAND_ECC_SW_HAMMING
help
This enables support for accessing all type of raw/parallel


@ -260,8 +260,8 @@ static int gpio_nand_probe(struct platform_device *pdev)
return err;
}
this->ecc.mode = NAND_ECC_SOFT;
this->ecc.algo = NAND_ECC_HAMMING;
this->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
this->ecc.algo = NAND_ECC_ALGO_HAMMING;
platform_set_drvdata(pdev, priv);
@ -400,12 +400,14 @@ static int gpio_nand_remove(struct platform_device *pdev)
return 0;
}
#ifdef CONFIG_OF
static const struct of_device_id gpio_nand_of_id_table[] = {
{
/* sentinel */
},
};
MODULE_DEVICE_TABLE(of, gpio_nand_of_id_table);
#endif
static const struct platform_device_id gpio_nand_plat_id_table[] = {
{


@ -980,10 +980,10 @@ static int anfc_init_hw_ecc_controller(struct arasan_nfc *nfc,
return -EINVAL;
}
mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops);
mtd_set_ooblayout(mtd, nand_get_large_page_ooblayout());
ecc->steps = mtd->writesize / ecc->size;
ecc->algo = NAND_ECC_BCH;
ecc->algo = NAND_ECC_ALGO_BCH;
anand->ecc_bits = bch_gf_mag * ecc->strength;
ecc->bytes = DIV_ROUND_UP(anand->ecc_bits, 8);
anand->ecc_total = DIV_ROUND_UP(anand->ecc_bits * ecc->steps, 8);
@ -1056,17 +1056,17 @@ static int anfc_attach_chip(struct nand_chip *chip)
chip->ecc.read_page_raw = nand_monolithic_read_page_raw;
chip->ecc.write_page_raw = nand_monolithic_write_page_raw;
switch (chip->ecc.mode) {
case NAND_ECC_NONE:
case NAND_ECC_SOFT:
case NAND_ECC_ON_DIE:
switch (chip->ecc.engine_type) {
case NAND_ECC_ENGINE_TYPE_NONE:
case NAND_ECC_ENGINE_TYPE_SOFT:
case NAND_ECC_ENGINE_TYPE_ON_DIE:
break;
case NAND_ECC_HW:
case NAND_ECC_ENGINE_TYPE_ON_HOST:
ret = anfc_init_hw_ecc_controller(nfc, chip);
break;
default:
dev_err(nfc->dev, "Unsupported ECC mode: %d\n",
chip->ecc.mode);
chip->ecc.engine_type);
return -EINVAL;
}


@ -202,6 +202,8 @@ struct atmel_nand_controller_ops {
int (*ecc_init)(struct nand_chip *chip);
int (*setup_interface)(struct atmel_nand *nand, int csline,
const struct nand_interface_config *conf);
int (*exec_op)(struct atmel_nand *nand,
const struct nand_operation *op, bool check_only);
};
struct atmel_nand_controller_caps {
@ -259,6 +261,7 @@ struct atmel_hsmc_nand_controller {
struct regmap *io;
struct atmel_nfc_op op;
struct completion complete;
u32 cfg;
int irq;
/* Only used when instantiating from legacy DT bindings. */
@ -414,138 +417,6 @@ err:
return -EIO;
}
static u8 atmel_nand_read_byte(struct nand_chip *chip)
{
struct atmel_nand *nand = to_atmel_nand(chip);
return ioread8(nand->activecs->io.virt);
}
static void atmel_nand_write_byte(struct nand_chip *chip, u8 byte)
{
struct atmel_nand *nand = to_atmel_nand(chip);
if (chip->options & NAND_BUSWIDTH_16)
iowrite16(byte | (byte << 8), nand->activecs->io.virt);
else
iowrite8(byte, nand->activecs->io.virt);
}
static void atmel_nand_read_buf(struct nand_chip *chip, u8 *buf, int len)
{
struct atmel_nand *nand = to_atmel_nand(chip);
struct atmel_nand_controller *nc;
nc = to_nand_controller(chip->controller);
/*
* If the controller supports DMA, the buffer address is DMA-able and
* len is long enough to make DMA transfers profitable, let's trigger
* a DMA transfer. If it fails, fallback to PIO mode.
*/
if (nc->dmac && virt_addr_valid(buf) &&
len >= MIN_DMA_LEN &&
!atmel_nand_dma_transfer(nc, buf, nand->activecs->io.dma, len,
DMA_FROM_DEVICE))
return;
if (chip->options & NAND_BUSWIDTH_16)
ioread16_rep(nand->activecs->io.virt, buf, len / 2);
else
ioread8_rep(nand->activecs->io.virt, buf, len);
}
static void atmel_nand_write_buf(struct nand_chip *chip, const u8 *buf, int len)
{
struct atmel_nand *nand = to_atmel_nand(chip);
struct atmel_nand_controller *nc;
nc = to_nand_controller(chip->controller);
/*
* If the controller supports DMA, the buffer address is DMA-able and
* len is long enough to make DMA transfers profitable, let's trigger
* a DMA transfer. If it fails, fallback to PIO mode.
*/
if (nc->dmac && virt_addr_valid(buf) &&
len >= MIN_DMA_LEN &&
!atmel_nand_dma_transfer(nc, (void *)buf, nand->activecs->io.dma,
len, DMA_TO_DEVICE))
return;
if (chip->options & NAND_BUSWIDTH_16)
iowrite16_rep(nand->activecs->io.virt, buf, len / 2);
else
iowrite8_rep(nand->activecs->io.virt, buf, len);
}
static int atmel_nand_dev_ready(struct nand_chip *chip)
{
struct atmel_nand *nand = to_atmel_nand(chip);
return gpiod_get_value(nand->activecs->rb.gpio);
}
static void atmel_nand_select_chip(struct nand_chip *chip, int cs)
{
struct atmel_nand *nand = to_atmel_nand(chip);
if (cs < 0 || cs >= nand->numcs) {
nand->activecs = NULL;
chip->legacy.dev_ready = NULL;
return;
}
nand->activecs = &nand->cs[cs];
if (nand->activecs->rb.type == ATMEL_NAND_GPIO_RB)
chip->legacy.dev_ready = atmel_nand_dev_ready;
}
static int atmel_hsmc_nand_dev_ready(struct nand_chip *chip)
{
struct atmel_nand *nand = to_atmel_nand(chip);
struct atmel_hsmc_nand_controller *nc;
u32 status;
nc = to_hsmc_nand_controller(chip->controller);
regmap_read(nc->base.smc, ATMEL_HSMC_NFC_SR, &status);
return status & ATMEL_HSMC_NFC_SR_RBEDGE(nand->activecs->rb.id);
}
static void atmel_hsmc_nand_select_chip(struct nand_chip *chip, int cs)
{
struct mtd_info *mtd = nand_to_mtd(chip);
struct atmel_nand *nand = to_atmel_nand(chip);
struct atmel_hsmc_nand_controller *nc;
nc = to_hsmc_nand_controller(chip->controller);
atmel_nand_select_chip(chip, cs);
if (!nand->activecs) {
regmap_write(nc->base.smc, ATMEL_HSMC_NFC_CTRL,
ATMEL_HSMC_NFC_CTRL_DIS);
return;
}
if (nand->activecs->rb.type == ATMEL_NAND_NATIVE_RB)
chip->legacy.dev_ready = atmel_hsmc_nand_dev_ready;
regmap_update_bits(nc->base.smc, ATMEL_HSMC_NFC_CFG,
ATMEL_HSMC_NFC_CFG_PAGESIZE_MASK |
ATMEL_HSMC_NFC_CFG_SPARESIZE_MASK |
ATMEL_HSMC_NFC_CFG_RSPARE |
ATMEL_HSMC_NFC_CFG_WSPARE,
ATMEL_HSMC_NFC_CFG_PAGESIZE(mtd->writesize) |
ATMEL_HSMC_NFC_CFG_SPARESIZE(mtd->oobsize) |
ATMEL_HSMC_NFC_CFG_RSPARE);
regmap_write(nc->base.smc, ATMEL_HSMC_NFC_CTRL,
ATMEL_HSMC_NFC_CTRL_EN);
}
static int atmel_nfc_exec_op(struct atmel_hsmc_nand_controller *nc, bool poll)
{
u8 *addrs = nc->op.addrs;
@ -596,51 +467,250 @@ static int atmel_nfc_exec_op(struct atmel_hsmc_nand_controller *nc, bool poll)
return ret;
}
static void atmel_hsmc_nand_cmd_ctrl(struct nand_chip *chip, int dat,
unsigned int ctrl)
static void atmel_nand_data_in(struct atmel_nand *nand, void *buf,
unsigned int len, bool force_8bit)
{
struct atmel_nand_controller *nc;
nc = to_nand_controller(nand->base.controller);
/*
* If the controller supports DMA, the buffer address is DMA-able and
* len is long enough to make DMA transfers profitable, let's trigger
* a DMA transfer. If it fails, fallback to PIO mode.
*/
if (nc->dmac && virt_addr_valid(buf) &&
len >= MIN_DMA_LEN && !force_8bit &&
!atmel_nand_dma_transfer(nc, buf, nand->activecs->io.dma, len,
DMA_FROM_DEVICE))
return;
if ((nand->base.options & NAND_BUSWIDTH_16) && !force_8bit)
ioread16_rep(nand->activecs->io.virt, buf, len / 2);
else
ioread8_rep(nand->activecs->io.virt, buf, len);
}
static void atmel_nand_data_out(struct atmel_nand *nand, const void *buf,
unsigned int len, bool force_8bit)
{
struct atmel_nand_controller *nc;
nc = to_nand_controller(nand->base.controller);
/*
* If the controller supports DMA, the buffer address is DMA-able and
* len is long enough to make DMA transfers profitable, let's trigger
* a DMA transfer. If it fails, fallback to PIO mode.
*/
if (nc->dmac && virt_addr_valid(buf) &&
len >= MIN_DMA_LEN && !force_8bit &&
!atmel_nand_dma_transfer(nc, (void *)buf, nand->activecs->io.dma,
len, DMA_TO_DEVICE))
return;
if ((nand->base.options & NAND_BUSWIDTH_16) && !force_8bit)
iowrite16_rep(nand->activecs->io.virt, buf, len / 2);
else
iowrite8_rep(nand->activecs->io.virt, buf, len);
}
static int atmel_nand_waitrdy(struct atmel_nand *nand, unsigned int timeout_ms)
{
if (nand->activecs->rb.type == ATMEL_NAND_NO_RB)
return nand_soft_waitrdy(&nand->base, timeout_ms);
return nand_gpio_waitrdy(&nand->base, nand->activecs->rb.gpio,
timeout_ms);
}
static int atmel_hsmc_nand_waitrdy(struct atmel_nand *nand,
unsigned int timeout_ms)
{
struct atmel_hsmc_nand_controller *nc;
u32 status, mask;
if (nand->activecs->rb.type != ATMEL_NAND_NATIVE_RB)
return atmel_nand_waitrdy(nand, timeout_ms);
nc = to_hsmc_nand_controller(nand->base.controller);
mask = ATMEL_HSMC_NFC_SR_RBEDGE(nand->activecs->rb.id);
return regmap_read_poll_timeout_atomic(nc->base.smc, ATMEL_HSMC_NFC_SR,
status, status & mask,
10, timeout_ms * 1000);
}
static void atmel_nand_select_target(struct atmel_nand *nand,
unsigned int cs)
{
nand->activecs = &nand->cs[cs];
}
static void atmel_hsmc_nand_select_target(struct atmel_nand *nand,
unsigned int cs)
{
struct mtd_info *mtd = nand_to_mtd(&nand->base);
struct atmel_hsmc_nand_controller *nc;
u32 cfg = ATMEL_HSMC_NFC_CFG_PAGESIZE(mtd->writesize) |
ATMEL_HSMC_NFC_CFG_SPARESIZE(mtd->oobsize) |
ATMEL_HSMC_NFC_CFG_RSPARE;
nand->activecs = &nand->cs[cs];
nc = to_hsmc_nand_controller(nand->base.controller);
if (nc->cfg == cfg)
return;
regmap_update_bits(nc->base.smc, ATMEL_HSMC_NFC_CFG,
ATMEL_HSMC_NFC_CFG_PAGESIZE_MASK |
ATMEL_HSMC_NFC_CFG_SPARESIZE_MASK |
ATMEL_HSMC_NFC_CFG_RSPARE |
ATMEL_HSMC_NFC_CFG_WSPARE,
cfg);
nc->cfg = cfg;
}
static int atmel_smc_nand_exec_instr(struct atmel_nand *nand,
const struct nand_op_instr *instr)
{
struct atmel_nand_controller *nc;
unsigned int i;
nc = to_nand_controller(nand->base.controller);
switch (instr->type) {
case NAND_OP_CMD_INSTR:
writeb(instr->ctx.cmd.opcode,
nand->activecs->io.virt + nc->caps->cle_offs);
return 0;
case NAND_OP_ADDR_INSTR:
for (i = 0; i < instr->ctx.addr.naddrs; i++)
writeb(instr->ctx.addr.addrs[i],
nand->activecs->io.virt + nc->caps->ale_offs);
return 0;
case NAND_OP_DATA_IN_INSTR:
atmel_nand_data_in(nand, instr->ctx.data.buf.in,
instr->ctx.data.len,
instr->ctx.data.force_8bit);
return 0;
case NAND_OP_DATA_OUT_INSTR:
atmel_nand_data_out(nand, instr->ctx.data.buf.out,
instr->ctx.data.len,
instr->ctx.data.force_8bit);
return 0;
case NAND_OP_WAITRDY_INSTR:
return atmel_nand_waitrdy(nand,
instr->ctx.waitrdy.timeout_ms);
default:
break;
}
return -EINVAL;
}
static int atmel_smc_nand_exec_op(struct atmel_nand *nand,
const struct nand_operation *op,
bool check_only)
{
unsigned int i;
int ret = 0;
if (check_only)
return 0;
atmel_nand_select_target(nand, op->cs);
gpiod_set_value(nand->activecs->csgpio, 0);
for (i = 0; i < op->ninstrs; i++) {
ret = atmel_smc_nand_exec_instr(nand, &op->instrs[i]);
if (ret)
break;
}
gpiod_set_value(nand->activecs->csgpio, 1);
return ret;
}
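For reference, a hedged sketch of the kind of nand_operation such an ->exec_op() implementation receives. Drivers do not normally build these arrays themselves, the core helpers (nand_read_page_op() and friends) do; the function name, the all-zero address cycles and the 200 ms timeout below are purely illustrative:

static int foo_issue_read_page(struct nand_chip *chip, void *buf,
			       unsigned int len)
{
	u8 addrs[5] = { };	/* column and row cycles, all zero here */
	struct nand_op_instr instrs[] = {
		NAND_OP_CMD(NAND_CMD_READ0, 0),
		NAND_OP_ADDR(ARRAY_SIZE(addrs), addrs, 0),
		NAND_OP_CMD(NAND_CMD_READSTART, 0),
		NAND_OP_WAIT_RDY(200, 0),
		NAND_OP_DATA_IN(len, buf, 0),
	};
	struct nand_operation op = NAND_OPERATION(chip->cur_cs, instrs);

	/* The core dispatches this through its own helper; shown directly here. */
	return chip->controller->ops->exec_op(chip, &op, false);
}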
static int atmel_hsmc_exec_cmd_addr(struct nand_chip *chip,
const struct nand_subop *subop)
{
struct atmel_nand *nand = to_atmel_nand(chip);
struct atmel_hsmc_nand_controller *nc;
unsigned int i, j;
nc = to_hsmc_nand_controller(chip->controller);
if (ctrl & NAND_ALE) {
if (nc->op.naddrs == ATMEL_NFC_MAX_ADDR_CYCLES)
return;
nc->op.cs = nand->activecs->id;
for (i = 0; i < subop->ninstrs; i++) {
const struct nand_op_instr *instr = &subop->instrs[i];
nc->op.addrs[nc->op.naddrs++] = dat;
} else if (ctrl & NAND_CLE) {
if (nc->op.ncmds > 1)
return;
if (instr->type == NAND_OP_CMD_INSTR) {
nc->op.cmds[nc->op.ncmds++] = instr->ctx.cmd.opcode;
continue;
}
nc->op.cmds[nc->op.ncmds++] = dat;
for (j = nand_subop_get_addr_start_off(subop, i);
j < nand_subop_get_num_addr_cyc(subop, i); j++) {
nc->op.addrs[nc->op.naddrs] = instr->ctx.addr.addrs[j];
nc->op.naddrs++;
}
}
if (dat == NAND_CMD_NONE) {
nc->op.cs = nand->activecs->id;
atmel_nfc_exec_op(nc, true);
}
return atmel_nfc_exec_op(nc, true);
}
static void atmel_nand_cmd_ctrl(struct nand_chip *chip, int cmd,
unsigned int ctrl)
static int atmel_hsmc_exec_rw(struct nand_chip *chip,
const struct nand_subop *subop)
{
const struct nand_op_instr *instr = subop->instrs;
struct atmel_nand *nand = to_atmel_nand(chip);
struct atmel_nand_controller *nc;
nc = to_nand_controller(chip->controller);
if (instr->type == NAND_OP_DATA_IN_INSTR)
atmel_nand_data_in(nand, instr->ctx.data.buf.in,
instr->ctx.data.len,
instr->ctx.data.force_8bit);
else
atmel_nand_data_out(nand, instr->ctx.data.buf.out,
instr->ctx.data.len,
instr->ctx.data.force_8bit);
if ((ctrl & NAND_CTRL_CHANGE) && nand->activecs->csgpio) {
if (ctrl & NAND_NCE)
gpiod_set_value(nand->activecs->csgpio, 0);
else
gpiod_set_value(nand->activecs->csgpio, 1);
}
return 0;
}
if (ctrl & NAND_ALE)
writeb(cmd, nand->activecs->io.virt + nc->caps->ale_offs);
else if (ctrl & NAND_CLE)
writeb(cmd, nand->activecs->io.virt + nc->caps->cle_offs);
static int atmel_hsmc_exec_waitrdy(struct nand_chip *chip,
const struct nand_subop *subop)
{
const struct nand_op_instr *instr = subop->instrs;
struct atmel_nand *nand = to_atmel_nand(chip);
return atmel_hsmc_nand_waitrdy(nand, instr->ctx.waitrdy.timeout_ms);
}
static const struct nand_op_parser atmel_hsmc_op_parser = NAND_OP_PARSER(
NAND_OP_PARSER_PATTERN(atmel_hsmc_exec_cmd_addr,
NAND_OP_PARSER_PAT_CMD_ELEM(true),
NAND_OP_PARSER_PAT_ADDR_ELEM(true, 5),
NAND_OP_PARSER_PAT_CMD_ELEM(true)),
NAND_OP_PARSER_PATTERN(atmel_hsmc_exec_rw,
NAND_OP_PARSER_PAT_DATA_IN_ELEM(false, 0)),
NAND_OP_PARSER_PATTERN(atmel_hsmc_exec_rw,
NAND_OP_PARSER_PAT_DATA_OUT_ELEM(false, 0)),
NAND_OP_PARSER_PATTERN(atmel_hsmc_exec_waitrdy,
NAND_OP_PARSER_PAT_WAITRDY_ELEM(false)),
);
static int atmel_hsmc_nand_exec_op(struct atmel_nand *nand,
const struct nand_operation *op,
bool check_only)
{
int ret;
if (check_only)
return nand_op_parser_exec_op(&nand->base,
&atmel_hsmc_op_parser, op, true);
atmel_hsmc_nand_select_target(nand, op->cs);
ret = nand_op_parser_exec_op(&nand->base, &atmel_hsmc_op_parser, op,
false);
return ret;
}
static void atmel_nfc_copy_to_sram(struct nand_chip *chip, const u8 *buf,
@ -838,7 +908,7 @@ static int atmel_nand_pmecc_write_pg(struct nand_chip *chip, const u8 *buf,
if (ret)
return ret;
atmel_nand_write_buf(chip, buf, mtd->writesize);
nand_write_data_op(chip, buf, mtd->writesize, false);
ret = atmel_nand_pmecc_generate_eccbytes(chip, raw);
if (ret) {
@ -848,7 +918,7 @@ static int atmel_nand_pmecc_write_pg(struct nand_chip *chip, const u8 *buf,
atmel_nand_pmecc_disable(chip, raw);
atmel_nand_write_buf(chip, chip->oob_poi, mtd->oobsize);
nand_write_data_op(chip, chip->oob_poi, mtd->oobsize, false);
return nand_prog_page_end_op(chip);
}
@ -878,11 +948,17 @@ static int atmel_nand_pmecc_read_pg(struct nand_chip *chip, u8 *buf,
if (ret)
return ret;
atmel_nand_read_buf(chip, buf, mtd->writesize);
atmel_nand_read_buf(chip, chip->oob_poi, mtd->oobsize);
ret = nand_read_data_op(chip, buf, mtd->writesize, false, false);
if (ret)
goto out_disable;
ret = nand_read_data_op(chip, chip->oob_poi, mtd->oobsize, false, false);
if (ret)
goto out_disable;
ret = atmel_nand_pmecc_correct_data(chip, buf, raw);
out_disable:
atmel_nand_pmecc_disable(chip, raw);
return ret;
@ -907,8 +983,9 @@ static int atmel_hsmc_nand_pmecc_write_pg(struct nand_chip *chip,
struct mtd_info *mtd = nand_to_mtd(chip);
struct atmel_nand *nand = to_atmel_nand(chip);
struct atmel_hsmc_nand_controller *nc;
int ret, status;
int ret;
atmel_hsmc_nand_select_target(nand, chip->cur_cs);
nc = to_hsmc_nand_controller(chip->controller);
atmel_nfc_copy_to_sram(chip, buf, false);
@ -939,21 +1016,9 @@ static int atmel_hsmc_nand_pmecc_write_pg(struct nand_chip *chip,
if (ret)
return ret;
atmel_nand_write_buf(chip, chip->oob_poi, mtd->oobsize);
nand_write_data_op(chip, chip->oob_poi, mtd->oobsize, false);
nc->op.cmds[0] = NAND_CMD_PAGEPROG;
nc->op.ncmds = 1;
nc->op.cs = nand->activecs->id;
ret = atmel_nfc_exec_op(nc, false);
if (ret)
dev_err(nc->base.dev, "Failed to program NAND page (err = %d)\n",
ret);
status = chip->legacy.waitfunc(chip);
if (status & NAND_STATUS_FAIL)
return -EIO;
return ret;
return nand_prog_page_end_op(chip);
}
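A hedged note on the helper used above: nand_prog_page_end_op() issues NAND_CMD_PAGEPROG, waits for the device to become ready and reads back the STATUS register, returning -EIO when NAND_STATUS_FAIL is set, which is the sequence the removed waitfunc()-based code open-coded. A typical call site therefore only needs:

ret = nand_prog_page_end_op(chip);
if (ret)
	return ret;	/* covers both timeouts and NAND_STATUS_FAIL */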
static int atmel_hsmc_nand_pmecc_write_page(struct nand_chip *chip,
@ -981,6 +1046,7 @@ static int atmel_hsmc_nand_pmecc_read_pg(struct nand_chip *chip, u8 *buf,
struct atmel_hsmc_nand_controller *nc;
int ret;
atmel_hsmc_nand_select_target(nand, chip->cur_cs);
nc = to_hsmc_nand_controller(chip->controller);
/*
@ -988,12 +1054,9 @@ static int atmel_hsmc_nand_pmecc_read_pg(struct nand_chip *chip, u8 *buf,
* connected to a native SoC R/B pin. If that's not the case, fallback
* to the non-optimized one.
*/
if (nand->activecs->rb.type != ATMEL_NAND_NATIVE_RB) {
nand_read_page_op(chip, page, 0, NULL, 0);
if (nand->activecs->rb.type != ATMEL_NAND_NATIVE_RB)
return atmel_nand_pmecc_read_pg(chip, buf, oob_required, page,
raw);
}
nc->op.cmds[nc->op.ncmds++] = NAND_CMD_READ0;
@ -1043,7 +1106,10 @@ static int atmel_hsmc_nand_pmecc_read_page_raw(struct nand_chip *chip,
static int atmel_nand_pmecc_init(struct nand_chip *chip)
{
const struct nand_ecc_props *requirements =
nanddev_get_ecc_requirements(&chip->base);
struct mtd_info *mtd = nand_to_mtd(chip);
struct nand_device *nanddev = mtd_to_nanddev(mtd);
struct atmel_nand *nand = to_atmel_nand(chip);
struct atmel_nand_controller *nc;
struct atmel_pmecc_user_req req;
@ -1068,19 +1134,19 @@ static int atmel_nand_pmecc_init(struct nand_chip *chip)
chip->ecc.size = val;
}
if (chip->ecc.options & NAND_ECC_MAXIMIZE)
if (nanddev->ecc.user_conf.flags & NAND_ECC_MAXIMIZE_STRENGTH)
req.ecc.strength = ATMEL_PMECC_MAXIMIZE_ECC_STRENGTH;
else if (chip->ecc.strength)
req.ecc.strength = chip->ecc.strength;
else if (chip->base.eccreq.strength)
req.ecc.strength = chip->base.eccreq.strength;
else if (requirements->strength)
req.ecc.strength = requirements->strength;
else
req.ecc.strength = ATMEL_PMECC_MAXIMIZE_ECC_STRENGTH;
if (chip->ecc.size)
req.ecc.sectorsize = chip->ecc.size;
else if (chip->base.eccreq.step_size)
req.ecc.sectorsize = chip->base.eccreq.step_size;
else if (requirements->step_size)
req.ecc.sectorsize = requirements->step_size;
else
req.ecc.sectorsize = ATMEL_PMECC_SECTOR_SIZE_AUTO;
@ -1099,14 +1165,14 @@ static int atmel_nand_pmecc_init(struct nand_chip *chip)
if (IS_ERR(nand->pmecc))
return PTR_ERR(nand->pmecc);
chip->ecc.algo = NAND_ECC_BCH;
chip->ecc.algo = NAND_ECC_ALGO_BCH;
chip->ecc.size = req.ecc.sectorsize;
chip->ecc.bytes = req.ecc.bytes / req.ecc.nsectors;
chip->ecc.strength = req.ecc.strength;
chip->options |= NAND_NO_SUBPAGE_WRITE;
mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops);
mtd_set_ooblayout(mtd, nand_get_large_page_ooblayout());
return 0;
}
@ -1118,15 +1184,15 @@ static int atmel_nand_ecc_init(struct nand_chip *chip)
nc = to_nand_controller(chip->controller);
switch (chip->ecc.mode) {
case NAND_ECC_NONE:
case NAND_ECC_SOFT:
switch (chip->ecc.engine_type) {
case NAND_ECC_ENGINE_TYPE_NONE:
case NAND_ECC_ENGINE_TYPE_SOFT:
/*
* Nothing to do, the core will initialize everything for us.
*/
break;
case NAND_ECC_HW:
case NAND_ECC_ENGINE_TYPE_ON_HOST:
ret = atmel_nand_pmecc_init(chip);
if (ret)
return ret;
@ -1140,7 +1206,7 @@ static int atmel_nand_ecc_init(struct nand_chip *chip)
default:
/* Other modes are not supported. */
dev_err(nc->dev, "Unsupported ECC mode: %d\n",
chip->ecc.mode);
chip->ecc.engine_type);
return -ENOTSUPP;
}
@ -1155,7 +1221,7 @@ static int atmel_hsmc_nand_ecc_init(struct nand_chip *chip)
if (ret)
return ret;
if (chip->ecc.mode != NAND_ECC_HW)
if (chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_ON_HOST)
return 0;
/* Adjust the ECC operations for the HSMC IP. */
@ -1467,6 +1533,18 @@ static int atmel_nand_setup_interface(struct nand_chip *chip, int csline,
return nc->caps->ops->setup_interface(nand, csline, conf);
}
static int atmel_nand_exec_op(struct nand_chip *chip,
const struct nand_operation *op,
bool check_only)
{
struct atmel_nand *nand = to_atmel_nand(chip);
struct atmel_nand_controller *nc;
nc = to_nand_controller(nand->base.controller);
return nc->caps->ops->exec_op(nand, op, check_only);
}
static void atmel_nand_init(struct atmel_nand_controller *nc,
struct atmel_nand *nand)
{
@ -1476,19 +1554,9 @@ static void atmel_nand_init(struct atmel_nand_controller *nc,
mtd->dev.parent = nc->dev;
nand->base.controller = &nc->base;
chip->legacy.cmd_ctrl = atmel_nand_cmd_ctrl;
chip->legacy.read_byte = atmel_nand_read_byte;
chip->legacy.write_byte = atmel_nand_write_byte;
chip->legacy.read_buf = atmel_nand_read_buf;
chip->legacy.write_buf = atmel_nand_write_buf;
chip->legacy.select_chip = atmel_nand_select_chip;
if (!nc->mck || !nc->caps->ops->setup_interface)
chip->options |= NAND_KEEP_TIMINGS;
/* Some NANDs require a longer delay than the default one (20us). */
chip->legacy.chip_delay = 40;
/*
* Use a bounce buffer when the buffer passed by the MTD user is not
* suitable for DMA.
@ -1498,7 +1566,7 @@ static void atmel_nand_init(struct atmel_nand_controller *nc,
/* Default to HW ECC if pmecc is available. */
if (nc->pmecc)
chip->ecc.mode = NAND_ECC_HW;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
}
static void atmel_smc_nand_init(struct atmel_nand_controller *nc,
@ -1527,18 +1595,6 @@ static void atmel_smc_nand_init(struct atmel_nand_controller *nc,
smc_nc->ebi_csa->nfd0_on_d16);
}
static void atmel_hsmc_nand_init(struct atmel_nand_controller *nc,
struct atmel_nand *nand)
{
struct nand_chip *chip = &nand->base;
atmel_nand_init(nc, nand);
/* Overload some methods for the HSMC controller. */
chip->legacy.cmd_ctrl = atmel_hsmc_nand_cmd_ctrl;
chip->legacy.select_chip = atmel_hsmc_nand_select_chip;
}
static int atmel_nand_controller_remove_nand(struct atmel_nand *nand)
{
struct nand_chip *chip = &nand->base;
@ -1957,6 +2013,7 @@ static int atmel_nand_attach_chip(struct nand_chip *chip)
static const struct nand_controller_ops atmel_nand_controller_ops = {
.attach_chip = atmel_nand_attach_chip,
.setup_interface = atmel_nand_setup_interface,
.exec_op = atmel_nand_exec_op,
};
static int atmel_nand_controller_init(struct atmel_nand_controller *nc,
@ -1976,13 +2033,9 @@ static int atmel_nand_controller_init(struct atmel_nand_controller *nc,
platform_set_drvdata(pdev, nc);
nc->pmecc = devm_atmel_pmecc_get(dev);
if (IS_ERR(nc->pmecc)) {
ret = PTR_ERR(nc->pmecc);
if (ret != -EPROBE_DEFER)
dev_err(dev, "Could not get PMECC object (err = %d)\n",
ret);
return ret;
}
if (IS_ERR(nc->pmecc))
return dev_err_probe(dev, PTR_ERR(nc->pmecc),
"Could not get PMECC object\n");
if (nc->caps->has_dma && !atmel_nand_avoid_dma) {
dma_cap_mask_t mask;
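The probe-error rework above relies on dev_err_probe(), which returns the error it is given, logs it for real failures and records a deferral reason instead of printing when the error is -EPROBE_DEFER. An illustrative use with a made-up clock name, not taken from this driver:

struct clk *clk = devm_clk_get(dev, "nfc_clk");

if (IS_ERR(clk))
	return dev_err_probe(dev, PTR_ERR(clk),
			     "could not get the controller clock\n");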
@ -2248,6 +2301,9 @@ atmel_hsmc_nand_controller_remove(struct atmel_nand_controller *nc)
return ret;
hsmc_nc = container_of(nc, struct atmel_hsmc_nand_controller, base);
regmap_write(hsmc_nc->base.smc, ATMEL_HSMC_NFC_CTRL,
ATMEL_HSMC_NFC_CTRL_DIS);
if (hsmc_nc->sram.pool)
gen_pool_free(hsmc_nc->sram.pool,
(unsigned long)hsmc_nc->sram.virt,
@ -2300,6 +2356,8 @@ static int atmel_hsmc_nand_controller_probe(struct platform_device *pdev,
/* Initial NFC configuration. */
regmap_write(nc->base.smc, ATMEL_HSMC_NFC_CFG,
ATMEL_HSMC_NFC_CFG_DTO_MAX);
regmap_write(nc->base.smc, ATMEL_HSMC_NFC_CTRL,
ATMEL_HSMC_NFC_CTRL_EN);
ret = atmel_nand_controller_add_nands(&nc->base);
if (ret)
@ -2317,8 +2375,9 @@ static const struct atmel_nand_controller_ops atmel_hsmc_nc_ops = {
.probe = atmel_hsmc_nand_controller_probe,
.remove = atmel_hsmc_nand_controller_remove,
.ecc_init = atmel_hsmc_nand_ecc_init,
.nand_init = atmel_hsmc_nand_init,
.nand_init = atmel_nand_init,
.setup_interface = atmel_hsmc_nand_setup_interface,
.exec_op = atmel_hsmc_nand_exec_op,
};
static const struct atmel_nand_controller_caps atmel_sama5_nc_caps = {
@ -2385,6 +2444,7 @@ static const struct atmel_nand_controller_ops at91rm9200_nc_ops = {
.remove = atmel_smc_nand_controller_remove,
.ecc_init = atmel_nand_ecc_init,
.nand_init = atmel_smc_nand_init,
.exec_op = atmel_smc_nand_exec_op,
};
static const struct atmel_nand_controller_caps atmel_rm9200_nc_caps = {
@ -2400,6 +2460,7 @@ static const struct atmel_nand_controller_ops atmel_smc_nc_ops = {
.ecc_init = atmel_nand_ecc_init,
.nand_init = atmel_smc_nand_init,
.setup_interface = atmel_smc_nand_setup_interface,
.exec_op = atmel_smc_nand_exec_op,
};
static const struct atmel_nand_controller_caps atmel_sam9260_nc_caps = {

View File

@ -294,8 +294,8 @@ static int au1550nd_probe(struct platform_device *pdev)
nand_controller_init(&ctx->controller);
ctx->controller.ops = &au1550nd_ops;
this->controller = &ctx->controller;
this->ecc.mode = NAND_ECC_SOFT;
this->ecc.algo = NAND_ECC_HAMMING;
this->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
this->ecc.algo = NAND_ECC_ALGO_HAMMING;
if (pd->devwidth)
this->options |= NAND_BUSWIDTH_16;

View File

@ -391,7 +391,8 @@ int bcm47xxnflash_ops_bcm4706_init(struct bcm47xxnflash *b47n)
nand_chip->legacy.chip_delay = 50;
b47n->nand_chip.bbt_options = NAND_BBT_USE_FLASH;
b47n->nand_chip.ecc.mode = NAND_ECC_NONE; /* TODO: implement ECC */
/* TODO: implement ECC */
b47n->nand_chip.ecc.engine_type = NAND_ECC_ENGINE_TYPE_NONE;
/* Enable NAND flash access */
bcma_cc_set32(b47n->cc, BCMA_CC_4706_FLASHSCFG,

View File

@ -2532,6 +2532,8 @@ static int brcmnand_setup_dev(struct brcmnand_host *host)
{
struct mtd_info *mtd = nand_to_mtd(&host->chip);
struct nand_chip *chip = &host->chip;
const struct nand_ecc_props *requirements =
nanddev_get_ecc_requirements(&chip->base);
struct brcmnand_controller *ctrl = host->ctrl;
struct brcmnand_cfg *cfg = &host->hwcfg;
char msg[128];
@ -2565,34 +2567,34 @@ static int brcmnand_setup_dev(struct brcmnand_host *host)
cfg->col_adr_bytes = 2;
cfg->blk_adr_bytes = get_blk_adr_bytes(mtd->size, mtd->writesize);
if (chip->ecc.mode != NAND_ECC_HW) {
if (chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_ON_HOST) {
dev_err(ctrl->dev, "only HW ECC supported; selected: %d\n",
chip->ecc.mode);
chip->ecc.engine_type);
return -EINVAL;
}
if (chip->ecc.algo == NAND_ECC_UNKNOWN) {
if (chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN) {
if (chip->ecc.strength == 1 && chip->ecc.size == 512)
/* Default to Hamming for 1-bit ECC, if unspecified */
chip->ecc.algo = NAND_ECC_HAMMING;
chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
else
/* Otherwise, BCH */
chip->ecc.algo = NAND_ECC_BCH;
chip->ecc.algo = NAND_ECC_ALGO_BCH;
}
if (chip->ecc.algo == NAND_ECC_HAMMING && (chip->ecc.strength != 1 ||
chip->ecc.size != 512)) {
if (chip->ecc.algo == NAND_ECC_ALGO_HAMMING &&
(chip->ecc.strength != 1 || chip->ecc.size != 512)) {
dev_err(ctrl->dev, "invalid Hamming params: %d bits per %d bytes\n",
chip->ecc.strength, chip->ecc.size);
return -EINVAL;
}
if (chip->ecc.mode != NAND_ECC_NONE &&
if (chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_NONE &&
(!chip->ecc.size || !chip->ecc.strength)) {
if (chip->base.eccreq.step_size && chip->base.eccreq.strength) {
if (requirements->step_size && requirements->strength) {
/* use detected ECC parameters */
chip->ecc.size = chip->base.eccreq.step_size;
chip->ecc.strength = chip->base.eccreq.strength;
chip->ecc.size = requirements->step_size;
chip->ecc.strength = requirements->strength;
dev_info(ctrl->dev, "Using ECC step-size %d, strength %d\n",
chip->ecc.size, chip->ecc.strength);
}
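Same accessor pattern as in the other drivers of this series: the ECC requirements advertised by the chip are no longer reached through chip->base.eccreq but through nanddev_get_ecc_requirements(). A minimal sketch of the usual fallback logic built on it:

const struct nand_ecc_props *req = nanddev_get_ecc_requirements(&chip->base);

if (!chip->ecc.size || !chip->ecc.strength) {
	chip->ecc.size = req->step_size;
	chip->ecc.strength = req->strength;
}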
@ -2600,7 +2602,7 @@ static int brcmnand_setup_dev(struct brcmnand_host *host)
switch (chip->ecc.size) {
case 512:
if (chip->ecc.algo == NAND_ECC_HAMMING)
if (chip->ecc.algo == NAND_ECC_ALGO_HAMMING)
cfg->ecc_level = 15;
else
cfg->ecc_level = chip->ecc.strength;
@ -2728,7 +2730,7 @@ static int brcmnand_init_cs(struct brcmnand_host *host, struct device_node *dn)
chip->legacy.read_buf = brcmnand_read_buf;
chip->legacy.write_buf = brcmnand_write_buf;
chip->ecc.mode = NAND_ECC_HW;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
chip->ecc.read_page = brcmnand_read_page;
chip->ecc.write_page = brcmnand_write_page;
chip->ecc.read_page_raw = brcmnand_read_page_raw;

View File

@ -2611,7 +2611,7 @@ static int cadence_nand_attach_chip(struct nand_chip *chip)
chip->bbt_options |= NAND_BBT_USE_FLASH;
chip->bbt_options |= NAND_BBT_NO_OOB;
chip->ecc.mode = NAND_ECC_HW;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
chip->options |= NAND_NO_SUBPAGE_WRITE;
@ -2757,7 +2757,7 @@ static int cadence_nand_chip_init(struct cdns_nand_ctrl *cdns_ctrl,
* Default to HW ECC engine mode. If the nand-ecc-mode property is given
* in the DT node, this entry will be overwritten in nand_scan_ident().
*/
chip->ecc.mode = NAND_ECC_HW;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
ret = nand_scan(chip, cdns_chip->nsels);
if (ret) {
@ -2980,18 +2980,14 @@ static int cadence_nand_dt_probe(struct platform_device *ofdev)
dev_info(cdns_ctrl->dev, "IRQ: nr %d\n", cdns_ctrl->irq);
cdns_ctrl->reg = devm_platform_ioremap_resource(ofdev, 0);
if (IS_ERR(cdns_ctrl->reg)) {
dev_err(&ofdev->dev, "devm_ioremap_resource res 0 failed\n");
if (IS_ERR(cdns_ctrl->reg))
return PTR_ERR(cdns_ctrl->reg);
}
res = platform_get_resource(ofdev, IORESOURCE_MEM, 1);
cdns_ctrl->io.dma = res->start;
cdns_ctrl->io.virt = devm_ioremap_resource(&ofdev->dev, res);
if (IS_ERR(cdns_ctrl->io.virt)) {
dev_err(cdns_ctrl->dev, "devm_ioremap_resource res 1 failed\n");
if (IS_ERR(cdns_ctrl->io.virt))
return PTR_ERR(cdns_ctrl->io.virt);
}
dt->clk = devm_clk_get(cdns_ctrl->dev, "nf_clk");
if (IS_ERR(dt->clk))

View File

@ -629,7 +629,8 @@ static int cafe_nand_attach_chip(struct nand_chip *chip)
goto out_free_dma;
}
cafe->nand.ecc.mode = NAND_ECC_HW_SYNDROME;
cafe->nand.ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
cafe->nand.ecc.placement = NAND_ECC_PLACEMENT_INTERLEAVED;
cafe->nand.ecc.size = mtd->writesize;
cafe->nand.ecc.bytes = 14;
cafe->nand.ecc.strength = 4;

View File

@ -286,7 +286,7 @@ static int __init cs553x_init_one(int cs, int mmio, unsigned long adr)
goto out_mtd;
}
this->ecc.mode = NAND_ECC_HW;
this->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
this->ecc.size = 256;
this->ecc.bytes = 3;
this->ecc.hwctl = cs_enable_hwecc;

View File

@ -168,7 +168,7 @@ static int nand_davinci_correct_1bit(struct nand_chip *chip, u_char *dat,
/*
* 4-bit hardware ECC ... context maintained over entire AEMIF
*
* This is a syndrome engine, but we avoid NAND_ECC_HW_SYNDROME
* This is a syndrome engine, but we avoid NAND_ECC_PLACEMENT_INTERLEAVED
* since that forces use of a problematic "infix OOB" layout.
* Among other things, it trashes manufacturer bad block markers.
* Also, and specific to this hardware, it ECC-protects the "prepad"
@ -530,11 +530,11 @@ static struct davinci_nand_pdata
if (!of_property_read_string(pdev->dev.of_node,
"ti,davinci-ecc-mode", &mode)) {
if (!strncmp("none", mode, 4))
pdata->ecc_mode = NAND_ECC_NONE;
pdata->engine_type = NAND_ECC_ENGINE_TYPE_NONE;
if (!strncmp("soft", mode, 4))
pdata->ecc_mode = NAND_ECC_SOFT;
pdata->engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
if (!strncmp("hw", mode, 2))
pdata->ecc_mode = NAND_ECC_HW;
pdata->engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
}
if (!of_property_read_u32(pdev->dev.of_node,
"ti,davinci-ecc-bits", &prop))
@ -585,21 +585,21 @@ static int davinci_nand_attach_chip(struct nand_chip *chip)
if (IS_ERR(pdata))
return PTR_ERR(pdata);
switch (info->chip.ecc.mode) {
case NAND_ECC_NONE:
switch (info->chip.ecc.engine_type) {
case NAND_ECC_ENGINE_TYPE_NONE:
pdata->ecc_bits = 0;
break;
case NAND_ECC_SOFT:
case NAND_ECC_ENGINE_TYPE_SOFT:
pdata->ecc_bits = 0;
/*
* This driver expects Hamming based ECC when ecc_mode is set
* to NAND_ECC_SOFT. Force ecc.algo to NAND_ECC_HAMMING to
* avoid adding an extra ->ecc_algo field to
* davinci_nand_pdata.
* This driver expects Hamming based ECC when engine_type is set
* to NAND_ECC_ENGINE_TYPE_SOFT. Force ecc.algo to
* NAND_ECC_ALGO_HAMMING to avoid adding an extra ->ecc_algo
* field to davinci_nand_pdata.
*/
info->chip.ecc.algo = NAND_ECC_HAMMING;
info->chip.ecc.algo = NAND_ECC_ALGO_HAMMING;
break;
case NAND_ECC_HW:
case NAND_ECC_ENGINE_TYPE_ON_HOST:
if (pdata->ecc_bits == 4) {
int chunks = mtd->writesize / 512;
@ -629,7 +629,7 @@ static int davinci_nand_attach_chip(struct nand_chip *chip)
info->chip.ecc.hwctl = nand_davinci_hwctl_4bit;
info->chip.ecc.bytes = 10;
info->chip.ecc.options = NAND_ECC_GENERIC_ERASED_CHECK;
info->chip.ecc.algo = NAND_ECC_BCH;
info->chip.ecc.algo = NAND_ECC_ALGO_BCH;
/*
* Update ECC layout if needed ... for 1-bit HW ECC, the
@ -645,7 +645,8 @@ static int davinci_nand_attach_chip(struct nand_chip *chip)
mtd_set_ooblayout(mtd,
&hwecc4_small_ooblayout_ops);
} else if (chunks == 4 || chunks == 8) {
mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops);
mtd_set_ooblayout(mtd,
nand_get_large_page_ooblayout());
info->chip.ecc.read_page = nand_davinci_read_page_hwecc_oob_first;
} else {
return -EIO;
@ -656,7 +657,7 @@ static int davinci_nand_attach_chip(struct nand_chip *chip)
info->chip.ecc.correct = nand_davinci_correct_1bit;
info->chip.ecc.hwctl = nand_davinci_hwctl_1bit;
info->chip.ecc.bytes = 3;
info->chip.ecc.algo = NAND_ECC_HAMMING;
info->chip.ecc.algo = NAND_ECC_ALGO_HAMMING;
}
info->chip.ecc.size = 512;
info->chip.ecc.strength = pdata->ecc_bits;
@ -850,7 +851,8 @@ static int nand_davinci_probe(struct platform_device *pdev)
info->mask_cle = pdata->mask_cle ? : MASK_CLE;
/* Use board-specific ECC config */
info->chip.ecc.mode = pdata->ecc_mode;
info->chip.ecc.engine_type = pdata->engine_type;
info->chip.ecc.placement = pdata->ecc_placement;
spin_lock_irq(&davinci_nand_lock);
@ -897,7 +899,7 @@ static int nand_davinci_remove(struct platform_device *pdev)
int ret;
spin_lock_irq(&davinci_nand_lock);
if (info->chip.ecc.mode == NAND_ECC_HW_SYNDROME)
if (info->chip.ecc.placement == NAND_ECC_PLACEMENT_INTERLEAVED)
ecc4_busy = false;
spin_unlock_irq(&davinci_nand_lock);

View File

@ -1237,7 +1237,8 @@ int denali_chip_init(struct denali_controller *denali,
chip->bbt_options |= NAND_BBT_USE_FLASH;
chip->bbt_options |= NAND_BBT_NO_OOB;
chip->options |= NAND_NO_SUBPAGE_WRITE;
chip->ecc.mode = NAND_ECC_HW_SYNDROME;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
chip->ecc.placement = NAND_ECC_PLACEMENT_INTERLEAVED;
chip->ecc.read_page = denali_read_page;
chip->ecc.write_page = denali_write_page;
chip->ecc.read_page_raw = denali_read_page_raw;

View File

@ -100,7 +100,7 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
goto out_remove_denali;
}
dchip->chip.ecc.options |= NAND_ECC_MAXIMIZE;
dchip->chip.base.ecc.user_conf.flags |= NAND_ECC_MAXIMIZE_STRENGTH;
dchip->nsels = nsels;

View File

@ -1456,7 +1456,8 @@ static int __init doc_probe(unsigned long physadr)
nand->ecc.calculate = doc200x_calculate_ecc;
nand->ecc.correct = doc200x_correct_data;
nand->ecc.mode = NAND_ECC_HW_SYNDROME;
nand->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
nand->ecc.placement = NAND_ECC_PLACEMENT_INTERLEAVED;
nand->ecc.size = 512;
nand->ecc.bytes = 6;
nand->ecc.strength = 2;

View File

@ -244,7 +244,7 @@ static int fsl_elbc_run_command(struct mtd_info *mtd)
return -EIO;
}
if (chip->ecc.mode != NAND_ECC_HW)
if (chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_ON_HOST)
return 0;
elbc_fcm_ctrl->max_bitflips = 0;
@ -727,12 +727,12 @@ static int fsl_elbc_attach_chip(struct nand_chip *chip)
struct fsl_lbc_regs __iomem *lbc = ctrl->regs;
unsigned int al;
switch (chip->ecc.mode) {
switch (chip->ecc.engine_type) {
/*
* if ECC was not chosen in DT, decide whether to use HW or SW ECC from
* CS Base Register
*/
case NAND_ECC_NONE:
case NAND_ECC_ENGINE_TYPE_NONE:
/* If CS Base Register selects full hardware ECC then use it */
if ((in_be32(&lbc->bank[priv->bank].br) & BR_DECC) ==
BR_DECC_CHK_GEN) {
@ -740,23 +740,23 @@ static int fsl_elbc_attach_chip(struct nand_chip *chip)
chip->ecc.write_page = fsl_elbc_write_page;
chip->ecc.write_subpage = fsl_elbc_write_subpage;
chip->ecc.mode = NAND_ECC_HW;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
mtd_set_ooblayout(mtd, &fsl_elbc_ooblayout_ops);
chip->ecc.size = 512;
chip->ecc.bytes = 3;
chip->ecc.strength = 1;
} else {
/* otherwise fall back to default software ECC */
chip->ecc.mode = NAND_ECC_SOFT;
chip->ecc.algo = NAND_ECC_HAMMING;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
}
break;
/* if SW ECC was chosen in DT, we do not need to set anything here */
case NAND_ECC_SOFT:
case NAND_ECC_ENGINE_TYPE_SOFT:
break;
/* should we also implement NAND_ECC_HW to do as the code above? */
/* should we also implement *_ECC_ENGINE_CONTROLLER to do as above? */
default:
return -EINVAL;
}
@ -786,8 +786,8 @@ static int fsl_elbc_attach_chip(struct nand_chip *chip)
chip->page_shift);
dev_dbg(priv->dev, "fsl_elbc_init: nand->phys_erase_shift = %d\n",
chip->phys_erase_shift);
dev_dbg(priv->dev, "fsl_elbc_init: nand->ecc.mode = %d\n",
chip->ecc.mode);
dev_dbg(priv->dev, "fsl_elbc_init: nand->ecc.engine_type = %d\n",
chip->ecc.engine_type);
dev_dbg(priv->dev, "fsl_elbc_init: nand->ecc.steps = %d\n",
chip->ecc.steps);
dev_dbg(priv->dev, "fsl_elbc_init: nand->ecc.bytes = %d\n",

View File

@ -309,7 +309,7 @@ static void fsl_ifc_cmdfunc(struct nand_chip *chip, unsigned int command,
ifc_nand_ctrl->read_bytes = mtd->writesize + mtd->oobsize;
ifc_nand_ctrl->index += column;
if (chip->ecc.mode == NAND_ECC_HW)
if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_ON_HOST)
ifc_nand_ctrl->eccread = 1;
fsl_ifc_do_read(chip, 0, mtd);
@ -724,8 +724,8 @@ static int fsl_ifc_attach_chip(struct nand_chip *chip)
chip->page_shift);
dev_dbg(priv->dev, "%s: nand->phys_erase_shift = %d\n", __func__,
chip->phys_erase_shift);
dev_dbg(priv->dev, "%s: nand->ecc.mode = %d\n", __func__,
chip->ecc.mode);
dev_dbg(priv->dev, "%s: nand->ecc.engine_type = %d\n", __func__,
chip->ecc.engine_type);
dev_dbg(priv->dev, "%s: nand->ecc.steps = %d\n", __func__,
chip->ecc.steps);
dev_dbg(priv->dev, "%s: nand->ecc.bytes = %d\n", __func__,
@ -912,7 +912,7 @@ static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
/* Must also set CSOR_NAND_ECC_ENC_EN if DEC_EN set */
if (csor & CSOR_NAND_ECC_DEC_EN) {
chip->ecc.mode = NAND_ECC_HW;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
mtd_set_ooblayout(mtd, &fsl_ifc_ooblayout_ops);
/* Hardware generates ECC per 512 Bytes */
@ -925,8 +925,8 @@ static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
chip->ecc.strength = 8;
}
} else {
chip->ecc.mode = NAND_ECC_SOFT;
chip->ecc.algo = NAND_ECC_HAMMING;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
}
ret = fsl_ifc_sram_init(priv);

View File

@ -47,8 +47,8 @@ static int fun_chip_init(struct fsl_upm_nand *fun,
int ret;
struct device_node *flash_np;
fun->chip.ecc.mode = NAND_ECC_SOFT;
fun->chip.ecc.algo = NAND_ECC_HAMMING;
fun->chip.ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
fun->chip.ecc.algo = NAND_ECC_ALGO_HAMMING;
fun->chip.controller = &fun->base;
mtd->dev.parent = fun->dev;

View File

@ -900,8 +900,8 @@ static int fsmc_nand_attach_chip(struct nand_chip *nand)
return 0;
}
switch (nand->ecc.mode) {
case NAND_ECC_HW:
switch (nand->ecc.engine_type) {
case NAND_ECC_ENGINE_TYPE_ON_HOST:
dev_info(host->dev, "Using 1-bit HW ECC scheme\n");
nand->ecc.calculate = fsmc_read_hwecc_ecc1;
nand->ecc.correct = nand_correct_data;
@ -910,14 +910,14 @@ static int fsmc_nand_attach_chip(struct nand_chip *nand)
nand->ecc.options |= NAND_ECC_SOFT_HAMMING_SM_ORDER;
break;
case NAND_ECC_SOFT:
if (nand->ecc.algo == NAND_ECC_BCH) {
case NAND_ECC_ENGINE_TYPE_SOFT:
if (nand->ecc.algo == NAND_ECC_ALGO_BCH) {
dev_info(host->dev,
"Using 4-bit SW BCH ECC scheme\n");
break;
}
case NAND_ECC_ON_DIE:
case NAND_ECC_ENGINE_TYPE_ON_DIE:
break;
default:
@ -929,7 +929,7 @@ static int fsmc_nand_attach_chip(struct nand_chip *nand)
* Don't set layout for BCH4 SW ECC. This will be
* generated later in nand_bch_init() later.
*/
if (nand->ecc.mode == NAND_ECC_HW) {
if (nand->ecc.engine_type == NAND_ECC_ENGINE_TYPE_ON_HOST) {
switch (mtd->oobsize) {
case 16:
case 64:
@ -1059,7 +1059,7 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
* Setup default ECC mode. nand_dt_init() called from nand_scan_ident()
* can overwrite this value if the DT provides a different value.
*/
nand->ecc.mode = NAND_ECC_HW;
nand->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
nand->ecc.hwctl = fsmc_enable_hwecc;
nand->ecc.size = 512;
nand->badblockbits = 7;

View File

@ -342,8 +342,8 @@ static int gpio_nand_probe(struct platform_device *pdev)
gpiomtd->base.ops = &gpio_nand_ops;
nand_set_flash_node(chip, pdev->dev.of_node);
chip->ecc.mode = NAND_ECC_SOFT;
chip->ecc.algo = NAND_ECC_HAMMING;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
chip->options = gpiomtd->plat.options;
chip->controller = &gpiomtd->base;

View File

@ -272,8 +272,8 @@ static int set_geometry_by_ecc_info(struct gpmi_nand_data *this,
default:
dev_err(this->dev,
"unsupported nand chip. ecc bits : %d, ecc size : %d\n",
chip->base.eccreq.strength,
chip->base.eccreq.step_size);
nanddev_get_ecc_requirements(&chip->base)->strength,
nanddev_get_ecc_requirements(&chip->base)->step_size);
return -EINVAL;
}
geo->ecc_chunk_size = ecc_step;
@ -510,6 +510,8 @@ static int legacy_set_geometry(struct gpmi_nand_data *this)
static int common_nfc_set_geometry(struct gpmi_nand_data *this)
{
struct nand_chip *chip = &this->nand;
const struct nand_ecc_props *requirements =
nanddev_get_ecc_requirements(&chip->base);
if (chip->ecc.strength > 0 && chip->ecc.size > 0)
return set_geometry_by_ecc_info(this, chip->ecc.strength,
@ -517,13 +519,12 @@ static int common_nfc_set_geometry(struct gpmi_nand_data *this)
if ((of_property_read_bool(this->dev->of_node, "fsl,use-minimum-ecc"))
|| legacy_set_geometry(this)) {
if (!(chip->base.eccreq.strength > 0 &&
chip->base.eccreq.step_size > 0))
if (!(requirements->strength > 0 && requirements->step_size > 0))
return -EINVAL;
return set_geometry_by_ecc_info(this,
chip->base.eccreq.strength,
chip->base.eccreq.step_size);
requirements->strength,
requirements->step_size);
}
return 0;
@ -1003,10 +1004,8 @@ static int acquire_dma_channels(struct gpmi_nand_data *this)
/* request dma channel */
dma_chan = dma_request_chan(&pdev->dev, "rx-tx");
if (IS_ERR(dma_chan)) {
ret = PTR_ERR(dma_chan);
if (ret != -EPROBE_DEFER)
dev_err(this->dev, "DMA channel request failed: %d\n",
ret);
ret = dev_err_probe(this->dev, PTR_ERR(dma_chan),
"DMA channel request failed\n");
release_dma_channels(this);
} else {
this->dma_chans[0] = dma_chan;
@ -2032,7 +2031,7 @@ static int gpmi_init_last(struct gpmi_nand_data *this)
ecc->write_page_raw = gpmi_ecc_write_page_raw;
ecc->read_oob_raw = gpmi_ecc_read_oob_raw;
ecc->write_oob_raw = gpmi_ecc_write_oob_raw;
ecc->mode = NAND_ECC_HW;
ecc->engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
ecc->size = bch_geo->ecc_chunk_size;
ecc->strength = bch_geo->ecc_strength;
mtd_set_ooblayout(mtd, &gpmi_ooblayout_ops);

View File

@ -186,7 +186,7 @@ static void hisi_nfc_dma_transfer(struct hinfc_host *host, int todev)
hinfc_write(host, host->dma_buffer, HINFC504_DMA_ADDR_DATA);
hinfc_write(host, host->dma_oob, HINFC504_DMA_ADDR_OOB);
if (chip->ecc.mode == NAND_ECC_NONE) {
if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_NONE) {
hinfc_write(host, ((mtd->oobsize & HINFC504_DMA_LEN_OOB_MASK)
<< HINFC504_DMA_LEN_OOB_SHIFT), HINFC504_DMA_LEN);
@ -468,7 +468,7 @@ static void hisi_nfc_cmdfunc(struct nand_chip *chip, unsigned command,
case NAND_CMD_STATUS:
flag = hinfc_read(host, HINFC504_CON);
if (chip->ecc.mode == NAND_ECC_HW)
if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_ON_HOST)
hinfc_write(host,
flag & ~(HINFC504_CON_ECCTYPE_MASK <<
HINFC504_CON_ECCTYPE_SHIFT), HINFC504_CON);
@ -721,7 +721,7 @@ static int hisi_nfc_attach_chip(struct nand_chip *chip)
}
hinfc_write(host, flag, HINFC504_CON);
if (chip->ecc.mode == NAND_ECC_HW)
if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_ON_HOST)
hisi_nfc_ecc_probe(host);
return 0;

View File

@ -194,8 +194,8 @@ static int ingenic_nand_attach_chip(struct nand_chip *chip)
(chip->ecc.strength / 8);
}
switch (chip->ecc.mode) {
case NAND_ECC_HW:
switch (chip->ecc.engine_type) {
case NAND_ECC_ENGINE_TYPE_ON_HOST:
if (!nfc->ecc) {
dev_err(nfc->dev, "HW ECC selected, but ECC controller not found\n");
return -ENODEV;
@ -205,22 +205,22 @@ static int ingenic_nand_attach_chip(struct nand_chip *chip)
chip->ecc.calculate = ingenic_nand_ecc_calculate;
chip->ecc.correct = ingenic_nand_ecc_correct;
fallthrough;
case NAND_ECC_SOFT:
case NAND_ECC_ENGINE_TYPE_SOFT:
dev_info(nfc->dev, "using %s (strength %d, size %d, bytes %d)\n",
(nfc->ecc) ? "hardware ECC" : "software ECC",
chip->ecc.strength, chip->ecc.size, chip->ecc.bytes);
break;
case NAND_ECC_NONE:
case NAND_ECC_ENGINE_TYPE_NONE:
dev_info(nfc->dev, "not using ECC\n");
break;
default:
dev_err(nfc->dev, "ECC mode %d not supported\n",
chip->ecc.mode);
chip->ecc.engine_type);
return -EINVAL;
}
/* The NAND core will generate the ECC layout for SW ECC */
if (chip->ecc.mode != NAND_ECC_HW)
if (chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_ON_HOST)
return 0;
/* Generate ECC layout. ECC codes are right aligned in the OOB area. */
@ -243,8 +243,10 @@ static int ingenic_nand_attach_chip(struct nand_chip *chip)
/* For legacy reasons we use a different layout on the qi,lb60 board. */
if (of_machine_is_compatible("qi,lb60"))
mtd_set_ooblayout(mtd, &qi_lb60_ooblayout_ops);
else
else if (nfc->soc_info->oob_layout)
mtd_set_ooblayout(mtd, nfc->soc_info->oob_layout);
else
mtd_set_ooblayout(mtd, nand_get_large_page_ooblayout());
return 0;
}
@ -404,7 +406,7 @@ static int ingenic_nand_init_chip(struct platform_device *pdev,
mtd->dev.parent = dev;
chip->options = NAND_NO_SUBPAGE_WRITE;
chip->ecc.mode = NAND_ECC_HW;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
chip->controller = &nfc->controller;
nand_set_flash_node(chip, np);
@ -532,7 +534,6 @@ static const struct jz_soc_info jz4740_soc_info = {
.data_offset = 0x00000000,
.cmd_offset = 0x00008000,
.addr_offset = 0x00010000,
.oob_layout = &nand_ooblayout_lp_ops,
};
static const struct jz_soc_info jz4725b_soc_info = {
@ -546,7 +547,6 @@ static const struct jz_soc_info jz4780_soc_info = {
.data_offset = 0x00000000,
.cmd_offset = 0x00400000,
.addr_offset = 0x00800000,
.oob_layout = &nand_ooblayout_lp_ops,
};
static const struct of_device_id ingenic_nand_dt_match[] = {

View File

@ -656,7 +656,7 @@ static int lpc32xx_nand_attach_chip(struct nand_chip *chip)
if (!host->dummy_buf)
return -ENOMEM;
chip->ecc.mode = NAND_ECC_HW;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
chip->ecc.size = 512;
mtd_set_ooblayout(mtd, &lpc32xx_ooblayout_ops);
host->mlcsubpages = mtd->writesize / 512;

View File

@ -881,7 +881,8 @@ static int lpc32xx_nand_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, host);
/* NAND callbacks for LPC32xx SLC hardware */
chip->ecc.mode = NAND_ECC_HW_SYNDROME;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
chip->ecc.placement = NAND_ECC_PLACEMENT_INTERLEAVED;
chip->legacy.read_byte = lpc32xx_nand_read_byte;
chip->legacy.read_buf = lpc32xx_nand_read_buf;
chip->legacy.write_buf = lpc32xx_nand_write_buf;

View File

@ -227,6 +227,8 @@
#define XTYPE_MASK 7
/**
* struct marvell_hw_ecc_layout - layout of Marvell ECC
*
* Marvell ECC engine works differently than the others, in order to limit the
* size of the IP, hardware engineers chose to set a fixed strength at 16 bits
* per subpage, and depending on the desired strength needed by the NAND chip,
@ -292,6 +294,8 @@ static const struct marvell_hw_ecc_layout marvell_nfc_layouts[] = {
};
/**
* struct marvell_nand_chip_sel - CS line description
*
* The Nand Flash Controller has up to 4 CE and 2 RB pins. The CE selection
* is made by a field in NDCB0 register, and in another field in NDCB2 register.
* The datasheet describes the logic with an error: ADDR5 field is once
@ -312,14 +316,15 @@ struct marvell_nand_chip_sel {
};
/**
* NAND chip structure: stores NAND chip device related information
* struct marvell_nand_chip - stores NAND chip device related information
*
* @chip: Base NAND chip structure
* @node: Used to store NAND chips into a list
* @layout NAND layout when using hardware ECC
* @layout: NAND layout when using hardware ECC
* @ndcr: Controller register value for this NAND chip
* @ndtr0: Timing registers 0 value for this NAND chip
* @ndtr1: Timing registers 1 value for this NAND chip
* @addr_cyc: Amount of cycles needed to pass column address
* @selected_die: Current active CS
* @nsels: Number of CS lines required by the NAND chip
* @sels: Array of CS lines descriptions
@ -349,7 +354,8 @@ static inline struct marvell_nand_chip_sel *to_nand_sel(struct marvell_nand_chip
}
/**
* NAND controller capabilities for distinction between compatible strings
* struct marvell_nfc_caps - NAND controller capabilities for distinction
* between compatible strings
*
* @max_cs_nb: Number of Chip Select lines available
* @max_rb_nb: Number of Ready/Busy lines available
@ -372,7 +378,7 @@ struct marvell_nfc_caps {
};
/**
* NAND controller structure: stores Marvell NAND controller information
* struct marvell_nfc - stores Marvell NAND controller information
*
* @controller: Base controller structure
* @dev: Parent device (used to print error messages)
@ -383,7 +389,9 @@ struct marvell_nfc_caps {
* @assigned_cs: Bitmask describing already assigned CS lines
* @chips: List containing all the NAND chips attached to
* this NAND controller
* @selected_chip: Currently selected target chip
* @caps: NAND controller capabilities for each compatible string
* @use_dma: Whether DMA is used
* @dma_chan: DMA channel (NFCv1 only)
* @dma_buf: 32-bit aligned buffer for DMA transfers (NFCv1 only)
*/
@ -411,7 +419,8 @@ static inline struct marvell_nfc *to_marvell_nfc(struct nand_controller *ctrl)
}
/**
* NAND controller timings expressed in NAND Controller clock cycles
* struct marvell_nfc_timings - NAND controller timings expressed in NAND
* Controller clock cycles
*
* @tRP: ND_nRE pulse width
* @tRH: ND_nRE high duration
@ -455,8 +464,8 @@ struct marvell_nfc_timings {
period_ns))
/**
* NAND driver structure filled during the parsing of the ->exec_op() subop
* subset of instructions.
* struct marvell_nfc_op - filled during the parsing of the ->exec_op()
* subop subset of instructions.
*
* @ndcb: Array of values written to NDCBx registers
* @cle_ale_delay_ns: Optional delay after the last CMD or ADDR cycle
@ -685,9 +694,31 @@ static int marvell_nfc_wait_cmdd(struct nand_chip *chip)
return marvell_nfc_end_cmd(chip, cs_flag, "CMDD");
}
static int marvell_nfc_poll_status(struct marvell_nfc *nfc, u32 mask,
u32 expected_val, unsigned long timeout_ms)
{
unsigned long limit;
u32 st;
limit = jiffies + msecs_to_jiffies(timeout_ms);
do {
st = readl_relaxed(nfc->regs + NDSR);
if (st & NDSR_RDY(1))
st |= NDSR_RDY(0);
if ((st & mask) == expected_val)
return 0;
cpu_relax();
} while (time_after(limit, jiffies));
return -ETIMEDOUT;
}
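Hedged context for this helper: on the panic-write path (flagged by mtd->oops_panic_write) interrupts cannot be relied upon, so marvell_nfc_wait_op() below busy-polls NDSR through marvell_nfc_poll_status() instead of sleeping on the RDY completion. The driver-side check boils down to:

if (mtd->oops_panic_write)
	ret = marvell_nfc_poll_status(nfc, NDSR_RDY(0), NDSR_RDY(0),
				      timeout_ms);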
static int marvell_nfc_wait_op(struct nand_chip *chip, unsigned int timeout_ms)
{
struct marvell_nfc *nfc = to_marvell_nfc(chip->controller);
struct mtd_info *mtd = nand_to_mtd(chip);
u32 pending;
int ret;
@ -695,12 +726,18 @@ static int marvell_nfc_wait_op(struct nand_chip *chip, unsigned int timeout_ms)
if (!timeout_ms)
timeout_ms = IRQ_TIMEOUT;
init_completion(&nfc->complete);
if (mtd->oops_panic_write) {
ret = marvell_nfc_poll_status(nfc, NDSR_RDY(0),
NDSR_RDY(0),
timeout_ms);
} else {
init_completion(&nfc->complete);
marvell_nfc_enable_int(nfc, NDCR_RDYM);
ret = wait_for_completion_timeout(&nfc->complete,
msecs_to_jiffies(timeout_ms));
marvell_nfc_disable_int(nfc, NDCR_RDYM);
marvell_nfc_enable_int(nfc, NDCR_RDYM);
ret = wait_for_completion_timeout(&nfc->complete,
msecs_to_jiffies(timeout_ms));
marvell_nfc_disable_int(nfc, NDCR_RDYM);
}
pending = marvell_nfc_clear_int(nfc, NDSR_RDY(0) | NDSR_RDY(1));
/*
@ -780,7 +817,7 @@ static void marvell_nfc_enable_hw_ecc(struct nand_chip *chip)
* When enabling BCH, set threshold to 0 to always know the
* number of corrected bitflips.
*/
if (chip->ecc.algo == NAND_ECC_BCH)
if (chip->ecc.algo == NAND_ECC_ALGO_BCH)
writel_relaxed(NDECCCTRL_BCH_EN, nfc->regs + NDECCCTRL);
}
}
@ -792,7 +829,7 @@ static void marvell_nfc_disable_hw_ecc(struct nand_chip *chip)
if (ndcr & NDCR_ECC_EN) {
writel_relaxed(ndcr & ~NDCR_ECC_EN, nfc->regs + NDCR);
if (chip->ecc.algo == NAND_ECC_BCH)
if (chip->ecc.algo == NAND_ECC_ALGO_BCH)
writel_relaxed(0, nfc->regs + NDECCCTRL);
}
}
@ -966,7 +1003,7 @@ static int marvell_nfc_hw_ecc_check_bitflips(struct nand_chip *chip,
if (ndsr & NDSR_CORERR) {
writel_relaxed(ndsr, nfc->regs + NDSR);
if (chip->ecc.algo == NAND_ECC_BCH)
if (chip->ecc.algo == NAND_ECC_ALGO_BCH)
bf = NDSR_ERRCNT(ndsr);
else
bf = 1;
@ -2218,7 +2255,7 @@ static int marvell_nand_hw_ecc_controller_init(struct mtd_info *mtd,
ecc->size = l->data_bytes;
if (ecc->strength == 1) {
chip->ecc.algo = NAND_ECC_HAMMING;
chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
ecc->read_page_raw = marvell_nfc_hw_ecc_hmg_read_page_raw;
ecc->read_page = marvell_nfc_hw_ecc_hmg_read_page;
ecc->read_oob_raw = marvell_nfc_hw_ecc_hmg_read_oob_raw;
@ -2228,7 +2265,7 @@ static int marvell_nand_hw_ecc_controller_init(struct mtd_info *mtd,
ecc->write_oob_raw = marvell_nfc_hw_ecc_hmg_write_oob_raw;
ecc->write_oob = ecc->write_oob_raw;
} else {
chip->ecc.algo = NAND_ECC_BCH;
chip->ecc.algo = NAND_ECC_ALGO_BCH;
ecc->strength = 16;
ecc->read_page_raw = marvell_nfc_hw_ecc_bch_read_page_raw;
ecc->read_page = marvell_nfc_hw_ecc_bch_read_page;
@ -2247,13 +2284,16 @@ static int marvell_nand_ecc_init(struct mtd_info *mtd,
struct nand_ecc_ctrl *ecc)
{
struct nand_chip *chip = mtd_to_nand(mtd);
const struct nand_ecc_props *requirements =
nanddev_get_ecc_requirements(&chip->base);
struct marvell_nfc *nfc = to_marvell_nfc(chip->controller);
int ret;
if (ecc->mode != NAND_ECC_NONE && (!ecc->size || !ecc->strength)) {
if (chip->base.eccreq.step_size && chip->base.eccreq.strength) {
ecc->size = chip->base.eccreq.step_size;
ecc->strength = chip->base.eccreq.strength;
if (ecc->engine_type != NAND_ECC_ENGINE_TYPE_NONE &&
(!ecc->size || !ecc->strength)) {
if (requirements->step_size && requirements->strength) {
ecc->size = requirements->step_size;
ecc->strength = requirements->strength;
} else {
dev_info(nfc->dev,
"No minimum ECC strength, using 1b/512B\n");
@ -2262,15 +2302,15 @@ static int marvell_nand_ecc_init(struct mtd_info *mtd,
}
}
switch (ecc->mode) {
case NAND_ECC_HW:
switch (ecc->engine_type) {
case NAND_ECC_ENGINE_TYPE_ON_HOST:
ret = marvell_nand_hw_ecc_controller_init(mtd, ecc);
if (ret)
return ret;
break;
case NAND_ECC_NONE:
case NAND_ECC_SOFT:
case NAND_ECC_ON_DIE:
case NAND_ECC_ENGINE_TYPE_NONE:
case NAND_ECC_ENGINE_TYPE_SOFT:
case NAND_ECC_ENGINE_TYPE_ON_DIE:
if (!nfc->caps->is_nfcv2 && mtd->writesize != SZ_512 &&
mtd->writesize != SZ_2K) {
dev_err(nfc->dev, "NFCv1 cannot write %d bytes pages\n",
@ -2467,7 +2507,7 @@ static int marvell_nand_attach_chip(struct nand_chip *chip)
return ret;
}
if (chip->ecc.mode == NAND_ECC_HW) {
if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_ON_HOST) {
/*
* Subpage write not available with hardware ECC, prohibit also
* subpage read as in userspace subpage access would still be
@ -2642,7 +2682,7 @@ static int marvell_nand_chip_init(struct device *dev, struct marvell_nfc *nfc,
* Default to HW ECC engine mode. If the nand-ecc-mode property is given
* in the DT node, this entry will be overwritten in nand_scan_ident().
*/
chip->ecc.mode = NAND_ECC_HW;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
/*
* Save a reference value for timing registers before
@ -2759,10 +2799,7 @@ static int marvell_nfc_init_dma(struct marvell_nfc *nfc)
if (IS_ERR(nfc->dma_chan)) {
ret = PTR_ERR(nfc->dma_chan);
nfc->dma_chan = NULL;
if (ret != -EPROBE_DEFER)
dev_err(nfc->dev, "DMA channel request failed: %d\n",
ret);
return ret;
return dev_err_probe(nfc->dev, ret, "DMA channel request failed\n");
}
r = platform_get_resource(pdev, IORESOURCE_MEM, 0);

View File

@ -1197,7 +1197,7 @@ static int meson_nand_attach_chip(struct nand_chip *nand)
if (ret)
return -EINVAL;
nand->ecc.mode = NAND_ECC_HW;
nand->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
nand->ecc.write_page_raw = meson_nfc_write_page_raw;
nand->ecc.write_page = meson_nfc_write_page_hwecc;
nand->ecc.write_oob_raw = nand_write_oob_std;

View File

@ -688,8 +688,8 @@ static int mpc5121_nfc_probe(struct platform_device *op)
chip->legacy.set_features = nand_get_set_features_notsupp;
chip->legacy.get_features = nand_get_set_features_notsupp;
chip->bbt_options = NAND_BBT_USE_FLASH;
chip->ecc.mode = NAND_ECC_SOFT;
chip->ecc.algo = NAND_ECC_HAMMING;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
/* Support external chip-select logic on ADS5121 board */
if (of_machine_is_compatible("fsl,mpc5121ads")) {

View File

@ -1253,21 +1253,23 @@ static int mtk_nfc_set_spare_per_sector(u32 *sps, struct mtd_info *mtd)
static int mtk_nfc_ecc_init(struct device *dev, struct mtd_info *mtd)
{
struct nand_chip *nand = mtd_to_nand(mtd);
const struct nand_ecc_props *requirements =
nanddev_get_ecc_requirements(&nand->base);
struct mtk_nfc *nfc = nand_get_controller_data(nand);
u32 spare;
int free, ret;
/* support only ecc hw mode */
if (nand->ecc.mode != NAND_ECC_HW) {
dev_err(dev, "ecc.mode not supported\n");
if (nand->ecc.engine_type != NAND_ECC_ENGINE_TYPE_ON_HOST) {
dev_err(dev, "ecc.engine_type not supported\n");
return -EINVAL;
}
/* if optional dt settings not present */
if (!nand->ecc.size || !nand->ecc.strength) {
/* use datasheet requirements */
nand->ecc.strength = nand->base.eccreq.strength;
nand->ecc.size = nand->base.eccreq.step_size;
nand->ecc.strength = requirements->strength;
nand->ecc.size = requirements->step_size;
/*
* align eccstrength and eccsize
@ -1416,7 +1418,7 @@ static int mtk_nfc_nand_chip_init(struct device *dev, struct mtk_nfc *nfc,
nand->options |= NAND_USES_DMA | NAND_SUBPAGE_READ;
/* set default mode in case dt entry is missing */
nand->ecc.mode = NAND_ECC_HW;
nand->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
nand->ecc.write_subpage = mtk_nfc_write_subpage_hwecc;
nand->ecc.write_page_raw = mtk_nfc_write_page_raw;

View File

@ -669,7 +669,7 @@ static void mxc_nand_enable_hwecc_v1_v2(struct nand_chip *chip, bool enable)
struct mxc_nand_host *host = nand_get_controller_data(chip);
uint16_t config1;
if (chip->ecc.mode != NAND_ECC_HW)
if (chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_ON_HOST)
return;
config1 = readw(NFC_V1_V2_CONFIG1);
@ -687,7 +687,7 @@ static void mxc_nand_enable_hwecc_v3(struct nand_chip *chip, bool enable)
struct mxc_nand_host *host = nand_get_controller_data(chip);
uint32_t config2;
if (chip->ecc.mode != NAND_ECC_HW)
if (chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_ON_HOST)
return;
config2 = readl(NFC_V3_CONFIG2);
@ -1117,7 +1117,8 @@ static void preset_v1(struct mtd_info *mtd)
struct mxc_nand_host *host = nand_get_controller_data(nand_chip);
uint16_t config1 = 0;
if (nand_chip->ecc.mode == NAND_ECC_HW && mtd->writesize)
if (nand_chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_ON_HOST &&
mtd->writesize)
config1 |= NFC_V1_V2_CONFIG1_ECC_EN;
if (!host->devtype_data->irqpending_quirk)
@ -1227,7 +1228,7 @@ static void preset_v2(struct mtd_info *mtd)
if (mtd->writesize) {
uint16_t pages_per_block = mtd->erasesize / mtd->writesize;
if (nand_chip->ecc.mode == NAND_ECC_HW)
if (nand_chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_ON_HOST)
config1 |= NFC_V1_V2_CONFIG1_ECC_EN;
host->eccsize = get_eccsize(mtd);
@ -1303,7 +1304,7 @@ static void preset_v3(struct mtd_info *mtd)
}
if (mtd->writesize) {
if (chip->ecc.mode == NAND_ECC_HW)
if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_ON_HOST)
config2 |= NFC_V3_CONFIG2_ECC_EN;
config2 |= NFC_V3_CONFIG2_PPB(
@ -1680,8 +1681,8 @@ static int mxcnd_attach_chip(struct nand_chip *chip)
struct mxc_nand_host *host = nand_get_controller_data(chip);
struct device *dev = mtd->dev.parent;
switch (chip->ecc.mode) {
case NAND_ECC_HW:
switch (chip->ecc.engine_type) {
case NAND_ECC_ENGINE_TYPE_ON_HOST:
chip->ecc.read_page = mxc_nand_read_page;
chip->ecc.read_page_raw = mxc_nand_read_page_raw;
chip->ecc.read_oob = mxc_nand_read_oob;
@ -1690,7 +1691,7 @@ static int mxcnd_attach_chip(struct nand_chip *chip)
chip->ecc.write_oob = mxc_nand_write_oob;
break;
case NAND_ECC_SOFT:
case NAND_ECC_ENGINE_TYPE_SOFT:
break;
default:
@ -1728,7 +1729,7 @@ static int mxcnd_attach_chip(struct nand_chip *chip)
*/
host->used_oobsize = min(mtd->oobsize, 218U);
if (chip->ecc.mode == NAND_ECC_HW) {
if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_ON_HOST) {
if (is_imx21_nfc(host) || is_imx27_nfc(host))
chip->ecc.strength = 1;
else
@ -1843,10 +1844,10 @@ static int mxcnd_probe(struct platform_device *pdev)
mtd_set_ooblayout(mtd, host->devtype_data->ooblayout);
if (host->pdata.hw_ecc) {
this->ecc.mode = NAND_ECC_HW;
this->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
} else {
this->ecc.mode = NAND_ECC_SOFT;
this->ecc.algo = NAND_ECC_HAMMING;
this->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
this->ecc.algo = NAND_ECC_ALGO_HAMMING;
}
/* NAND bus width determines access functions used by upper layer */

View File

@ -34,6 +34,7 @@
#include <linux/mm.h>
#include <linux/types.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/nand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/nand_bch.h>
#include <linux/interrupt.h>
@ -45,166 +46,6 @@
#include "internals.h"
/* Define default oob placement schemes for large and small page devices */
static int nand_ooblayout_ecc_sp(struct mtd_info *mtd, int section,
struct mtd_oob_region *oobregion)
{
struct nand_chip *chip = mtd_to_nand(mtd);
struct nand_ecc_ctrl *ecc = &chip->ecc;
if (section > 1)
return -ERANGE;
if (!section) {
oobregion->offset = 0;
if (mtd->oobsize == 16)
oobregion->length = 4;
else
oobregion->length = 3;
} else {
if (mtd->oobsize == 8)
return -ERANGE;
oobregion->offset = 6;
oobregion->length = ecc->total - 4;
}
return 0;
}
static int nand_ooblayout_free_sp(struct mtd_info *mtd, int section,
struct mtd_oob_region *oobregion)
{
if (section > 1)
return -ERANGE;
if (mtd->oobsize == 16) {
if (section)
return -ERANGE;
oobregion->length = 8;
oobregion->offset = 8;
} else {
oobregion->length = 2;
if (!section)
oobregion->offset = 3;
else
oobregion->offset = 6;
}
return 0;
}
const struct mtd_ooblayout_ops nand_ooblayout_sp_ops = {
.ecc = nand_ooblayout_ecc_sp,
.free = nand_ooblayout_free_sp,
};
EXPORT_SYMBOL_GPL(nand_ooblayout_sp_ops);
static int nand_ooblayout_ecc_lp(struct mtd_info *mtd, int section,
struct mtd_oob_region *oobregion)
{
struct nand_chip *chip = mtd_to_nand(mtd);
struct nand_ecc_ctrl *ecc = &chip->ecc;
if (section || !ecc->total)
return -ERANGE;
oobregion->length = ecc->total;
oobregion->offset = mtd->oobsize - oobregion->length;
return 0;
}
static int nand_ooblayout_free_lp(struct mtd_info *mtd, int section,
struct mtd_oob_region *oobregion)
{
struct nand_chip *chip = mtd_to_nand(mtd);
struct nand_ecc_ctrl *ecc = &chip->ecc;
if (section)
return -ERANGE;
oobregion->length = mtd->oobsize - ecc->total - 2;
oobregion->offset = 2;
return 0;
}
const struct mtd_ooblayout_ops nand_ooblayout_lp_ops = {
.ecc = nand_ooblayout_ecc_lp,
.free = nand_ooblayout_free_lp,
};
EXPORT_SYMBOL_GPL(nand_ooblayout_lp_ops);
/*
* Support the old "large page" layout used for 1-bit Hamming ECC where ECC
* are placed at a fixed offset.
*/
static int nand_ooblayout_ecc_lp_hamming(struct mtd_info *mtd, int section,
struct mtd_oob_region *oobregion)
{
struct nand_chip *chip = mtd_to_nand(mtd);
struct nand_ecc_ctrl *ecc = &chip->ecc;
if (section)
return -ERANGE;
switch (mtd->oobsize) {
case 64:
oobregion->offset = 40;
break;
case 128:
oobregion->offset = 80;
break;
default:
return -EINVAL;
}
oobregion->length = ecc->total;
if (oobregion->offset + oobregion->length > mtd->oobsize)
return -ERANGE;
return 0;
}
static int nand_ooblayout_free_lp_hamming(struct mtd_info *mtd, int section,
struct mtd_oob_region *oobregion)
{
struct nand_chip *chip = mtd_to_nand(mtd);
struct nand_ecc_ctrl *ecc = &chip->ecc;
int ecc_offset = 0;
if (section < 0 || section > 1)
return -ERANGE;
switch (mtd->oobsize) {
case 64:
ecc_offset = 40;
break;
case 128:
ecc_offset = 80;
break;
default:
return -EINVAL;
}
if (section == 0) {
oobregion->offset = 2;
oobregion->length = ecc_offset - 2;
} else {
oobregion->offset = ecc_offset + ecc->total;
oobregion->length = mtd->oobsize - oobregion->offset;
}
return 0;
}
static const struct mtd_ooblayout_ops nand_ooblayout_lp_hamming_ops = {
.ecc = nand_ooblayout_ecc_lp_hamming,
.free = nand_ooblayout_free_lp_hamming,
};
static int nand_pairing_dist3_get_info(struct mtd_info *mtd, int page,
struct mtd_pairing_info *info)
{
@ -4750,6 +4591,8 @@ static inline bool is_full_id_nand(struct nand_flash_dev *type)
static bool find_full_id_nand(struct nand_chip *chip,
struct nand_flash_dev *type)
{
struct nand_device *base = &chip->base;
struct nand_ecc_props requirements;
struct mtd_info *mtd = nand_to_mtd(chip);
struct nand_memory_organization *memorg;
u8 *id_data = chip->id.data;
@ -4771,8 +4614,9 @@ static bool find_full_id_nand(struct nand_chip *chip,
memorg->pagesize *
memorg->pages_per_eraseblock);
chip->options |= type->options;
chip->base.eccreq.strength = NAND_ECC_STRENGTH(type);
chip->base.eccreq.step_size = NAND_ECC_STEP(type);
requirements.strength = NAND_ECC_STRENGTH(type);
requirements.step_size = NAND_ECC_STEP(type);
nanddev_set_ecc_requirements(base, &requirements);
chip->parameters.model = kstrdup(type->name, GFP_KERNEL);
if (!chip->parameters.model)
@ -5033,91 +4877,101 @@ free_detect_allocation:
return ret;
}
static const char * const nand_ecc_modes[] = {
[NAND_ECC_NONE] = "none",
[NAND_ECC_SOFT] = "soft",
[NAND_ECC_HW] = "hw",
[NAND_ECC_HW_SYNDROME] = "hw_syndrome",
[NAND_ECC_ON_DIE] = "on-die",
};
static int of_get_nand_ecc_mode(struct device_node *np)
static enum nand_ecc_engine_type
of_get_rawnand_ecc_engine_type_legacy(struct device_node *np)
{
const char *pm;
int err, i;
err = of_property_read_string(np, "nand-ecc-mode", &pm);
if (err < 0)
return err;
for (i = NAND_ECC_NONE; i < ARRAY_SIZE(nand_ecc_modes); i++)
if (!strcasecmp(pm, nand_ecc_modes[i]))
return i;
/*
* For backward compatibility we support few obsoleted values that don't
* have their mappings into the nand_ecc_mode enum anymore (they were
* merged with other enums).
*/
if (!strcasecmp(pm, "soft_bch"))
return NAND_ECC_SOFT;
return -ENODEV;
}
static const char * const nand_ecc_algos[] = {
[NAND_ECC_HAMMING] = "hamming",
[NAND_ECC_BCH] = "bch",
[NAND_ECC_RS] = "rs",
};
static enum nand_ecc_algo of_get_nand_ecc_algo(struct device_node *np)
{
enum nand_ecc_algo ecc_algo;
enum nand_ecc_legacy_mode {
NAND_ECC_INVALID,
NAND_ECC_NONE,
NAND_ECC_SOFT,
NAND_ECC_SOFT_BCH,
NAND_ECC_HW,
NAND_ECC_HW_SYNDROME,
NAND_ECC_ON_DIE,
};
const char * const nand_ecc_legacy_modes[] = {
[NAND_ECC_NONE] = "none",
[NAND_ECC_SOFT] = "soft",
[NAND_ECC_SOFT_BCH] = "soft_bch",
[NAND_ECC_HW] = "hw",
[NAND_ECC_HW_SYNDROME] = "hw_syndrome",
[NAND_ECC_ON_DIE] = "on-die",
};
enum nand_ecc_legacy_mode eng_type;
const char *pm;
int err;
err = of_property_read_string(np, "nand-ecc-algo", &pm);
if (!err) {
for (ecc_algo = NAND_ECC_HAMMING;
ecc_algo < ARRAY_SIZE(nand_ecc_algos);
ecc_algo++) {
if (!strcasecmp(pm, nand_ecc_algos[ecc_algo]))
return ecc_algo;
err = of_property_read_string(np, "nand-ecc-mode", &pm);
if (err)
return NAND_ECC_ENGINE_TYPE_INVALID;
for (eng_type = NAND_ECC_NONE;
eng_type < ARRAY_SIZE(nand_ecc_legacy_modes); eng_type++) {
if (!strcasecmp(pm, nand_ecc_legacy_modes[eng_type])) {
switch (eng_type) {
case NAND_ECC_NONE:
return NAND_ECC_ENGINE_TYPE_NONE;
case NAND_ECC_SOFT:
case NAND_ECC_SOFT_BCH:
return NAND_ECC_ENGINE_TYPE_SOFT;
case NAND_ECC_HW:
case NAND_ECC_HW_SYNDROME:
return NAND_ECC_ENGINE_TYPE_ON_HOST;
case NAND_ECC_ON_DIE:
return NAND_ECC_ENGINE_TYPE_ON_DIE;
default:
break;
}
}
}
/*
* For backward compatibility we also read "nand-ecc-mode" checking
* for some obsoleted values that were specifying ECC algorithm.
*/
return NAND_ECC_ENGINE_TYPE_INVALID;
}
static enum nand_ecc_placement
of_get_rawnand_ecc_placement_legacy(struct device_node *np)
{
const char *pm;
int err;
err = of_property_read_string(np, "nand-ecc-mode", &pm);
if (!err) {
if (!strcasecmp(pm, "hw_syndrome"))
return NAND_ECC_PLACEMENT_INTERLEAVED;
}
return NAND_ECC_PLACEMENT_UNKNOWN;
}
static enum nand_ecc_algo of_get_rawnand_ecc_algo_legacy(struct device_node *np)
{
const char *pm;
int err;
err = of_property_read_string(np, "nand-ecc-mode", &pm);
if (!err) {
if (!strcasecmp(pm, "soft"))
return NAND_ECC_HAMMING;
return NAND_ECC_ALGO_HAMMING;
else if (!strcasecmp(pm, "soft_bch"))
return NAND_ECC_BCH;
return NAND_ECC_ALGO_BCH;
}
return NAND_ECC_UNKNOWN;
return NAND_ECC_ALGO_UNKNOWN;
}
static int of_get_nand_ecc_step_size(struct device_node *np)
static void of_get_nand_ecc_legacy_user_config(struct nand_chip *chip)
{
int ret;
u32 val;
struct device_node *dn = nand_get_flash_node(chip);
struct nand_ecc_props *user_conf = &chip->base.ecc.user_conf;
ret = of_property_read_u32(np, "nand-ecc-step-size", &val);
return ret ? ret : val;
}
if (user_conf->engine_type == NAND_ECC_ENGINE_TYPE_INVALID)
user_conf->engine_type = of_get_rawnand_ecc_engine_type_legacy(dn);
static int of_get_nand_ecc_strength(struct device_node *np)
{
int ret;
u32 val;
if (user_conf->algo == NAND_ECC_ALGO_UNKNOWN)
user_conf->algo = of_get_rawnand_ecc_algo_legacy(dn);
ret = of_property_read_u32(np, "nand-ecc-strength", &val);
return ret ? ret : val;
if (user_conf->placement == NAND_ECC_PLACEMENT_UNKNOWN)
user_conf->placement = of_get_rawnand_ecc_placement_legacy(dn);
}
static int of_get_nand_bus_width(struct device_node *np)
@ -5141,11 +4995,10 @@ static bool of_get_nand_on_flash_bbt(struct device_node *np)
return of_property_read_bool(np, "nand-on-flash-bbt");
}
static int nand_dt_init(struct nand_chip *chip)
static int rawnand_dt_init(struct nand_chip *chip)
{
struct nand_device *nand = mtd_to_nanddev(nand_to_mtd(chip));
struct device_node *dn = nand_get_flash_node(chip);
enum nand_ecc_algo ecc_algo;
int ecc_mode, ecc_strength, ecc_step;
if (!dn)
return 0;
@ -5159,25 +5012,29 @@ static int nand_dt_init(struct nand_chip *chip)
if (of_get_nand_on_flash_bbt(dn))
chip->bbt_options |= NAND_BBT_USE_FLASH;
ecc_mode = of_get_nand_ecc_mode(dn);
ecc_algo = of_get_nand_ecc_algo(dn);
ecc_strength = of_get_nand_ecc_strength(dn);
ecc_step = of_get_nand_ecc_step_size(dn);
of_get_nand_ecc_user_config(nand);
of_get_nand_ecc_legacy_user_config(chip);
if (ecc_mode >= 0)
chip->ecc.mode = ecc_mode;
/*
* If neither the user nor the NAND controller have requested a specific
* ECC engine type, we will default to NAND_ECC_ENGINE_TYPE_ON_HOST.
*/
nand->ecc.defaults.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
if (ecc_algo != NAND_ECC_UNKNOWN)
chip->ecc.algo = ecc_algo;
/*
* Use the user requested engine type, unless there is none, in this
* case default to the NAND controller choice, otherwise fallback to
* the raw NAND default one.
*/
if (nand->ecc.user_conf.engine_type != NAND_ECC_ENGINE_TYPE_INVALID)
chip->ecc.engine_type = nand->ecc.user_conf.engine_type;
if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_INVALID)
chip->ecc.engine_type = nand->ecc.defaults.engine_type;
if (ecc_strength >= 0)
chip->ecc.strength = ecc_strength;
if (ecc_step > 0)
chip->ecc.size = ecc_step;
if (of_property_read_bool(dn, "nand-ecc-maximize"))
chip->ecc.options |= NAND_ECC_MAXIMIZE;
chip->ecc.placement = nand->ecc.user_conf.placement;
chip->ecc.algo = nand->ecc.user_conf.algo;
chip->ecc.strength = nand->ecc.user_conf.strength;
chip->ecc.size = nand->ecc.user_conf.step_size;
return 0;
}
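The precedence implemented just above can be exercised in isolation. A minimal sketch (illustrative only; the enum names below are simplified stand-ins for the kernel's NAND_ECC_ENGINE_TYPE_* constants):

#include <stdio.h>

enum engine { INVALID, NONE, SOFT, ON_HOST, ON_DIE };

/*
 * DT user config wins, then the controller driver's choice, then the
 * raw NAND default (ON_HOST), mirroring rawnand_dt_init() above.
 */
static enum engine pick_engine(enum engine user, enum engine controller)
{
	enum engine res = controller;

	if (user != INVALID)
		res = user;
	if (res == INVALID)
		res = ON_HOST;
	return res;
}

int main(void)
{
	printf("%d\n", pick_engine(SOFT, ON_HOST));	/* DT wins: SOFT (2) */
	printf("%d\n", pick_engine(INVALID, ON_DIE));	/* controller choice: ON_DIE (4) */
	printf("%d\n", pick_engine(INVALID, INVALID));	/* fallback: ON_HOST (3) */
	return 0;
}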
@ -5215,7 +5072,7 @@ static int nand_scan_ident(struct nand_chip *chip, unsigned int maxchips,
/* Enforce the right timings for reset/detection */
chip->current_interface_config = nand_get_reset_interface_config();
ret = nand_dt_init(chip);
ret = rawnand_dt_init(chip);
if (ret)
return ret;
@ -5282,16 +5139,76 @@ static void nand_scan_ident_cleanup(struct nand_chip *chip)
kfree(chip->parameters.onfi);
}
static int nand_set_ecc_on_host_ops(struct nand_chip *chip)
{
struct nand_ecc_ctrl *ecc = &chip->ecc;
switch (ecc->placement) {
case NAND_ECC_PLACEMENT_UNKNOWN:
case NAND_ECC_PLACEMENT_OOB:
/* Use standard hwecc read page function? */
if (!ecc->read_page)
ecc->read_page = nand_read_page_hwecc;
if (!ecc->write_page)
ecc->write_page = nand_write_page_hwecc;
if (!ecc->read_page_raw)
ecc->read_page_raw = nand_read_page_raw;
if (!ecc->write_page_raw)
ecc->write_page_raw = nand_write_page_raw;
if (!ecc->read_oob)
ecc->read_oob = nand_read_oob_std;
if (!ecc->write_oob)
ecc->write_oob = nand_write_oob_std;
if (!ecc->read_subpage)
ecc->read_subpage = nand_read_subpage;
if (!ecc->write_subpage && ecc->hwctl && ecc->calculate)
ecc->write_subpage = nand_write_subpage_hwecc;
fallthrough;
case NAND_ECC_PLACEMENT_INTERLEAVED:
if ((!ecc->calculate || !ecc->correct || !ecc->hwctl) &&
(!ecc->read_page ||
ecc->read_page == nand_read_page_hwecc ||
!ecc->write_page ||
ecc->write_page == nand_write_page_hwecc)) {
WARN(1, "No ECC functions supplied; hardware ECC not possible\n");
return -EINVAL;
}
/* Use standard syndrome read/write page function? */
if (!ecc->read_page)
ecc->read_page = nand_read_page_syndrome;
if (!ecc->write_page)
ecc->write_page = nand_write_page_syndrome;
if (!ecc->read_page_raw)
ecc->read_page_raw = nand_read_page_raw_syndrome;
if (!ecc->write_page_raw)
ecc->write_page_raw = nand_write_page_raw_syndrome;
if (!ecc->read_oob)
ecc->read_oob = nand_read_oob_syndrome;
if (!ecc->write_oob)
ecc->write_oob = nand_write_oob_syndrome;
break;
default:
pr_warn("Invalid NAND_ECC_PLACEMENT %d\n",
ecc->placement);
return -EINVAL;
}
return 0;
}
static int nand_set_ecc_soft_ops(struct nand_chip *chip)
{
struct mtd_info *mtd = nand_to_mtd(chip);
struct nand_device *nanddev = mtd_to_nanddev(mtd);
struct nand_ecc_ctrl *ecc = &chip->ecc;
if (WARN_ON(ecc->mode != NAND_ECC_SOFT))
if (WARN_ON(ecc->engine_type != NAND_ECC_ENGINE_TYPE_SOFT))
return -EINVAL;
switch (ecc->algo) {
case NAND_ECC_HAMMING:
case NAND_ECC_ALGO_HAMMING:
ecc->calculate = nand_calculate_ecc;
ecc->correct = nand_correct_data;
ecc->read_page = nand_read_page_swecc;
@ -5312,7 +5229,7 @@ static int nand_set_ecc_soft_ops(struct nand_chip *chip)
ecc->options |= NAND_ECC_SOFT_HAMMING_SM_ORDER;
return 0;
case NAND_ECC_BCH:
case NAND_ECC_ALGO_BCH:
if (!mtd_nand_has_bch()) {
WARN(1, "CONFIG_MTD_NAND_ECC_SW_BCH not enabled\n");
return -EINVAL;
@ -5350,7 +5267,7 @@ static int nand_set_ecc_soft_ops(struct nand_chip *chip)
return -EINVAL;
}
mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops);
mtd_set_ooblayout(mtd, nand_get_large_page_ooblayout());
}
@ -5359,8 +5276,8 @@ static int nand_set_ecc_soft_ops(struct nand_chip *chip)
* used, otherwise we don't know how many bytes can really be
* used.
*/
if (mtd->ooblayout == &nand_ooblayout_lp_ops &&
ecc->options & NAND_ECC_MAXIMIZE) {
if (mtd->ooblayout == nand_get_large_page_ooblayout() &&
nanddev->ecc.user_conf.flags & NAND_ECC_MAXIMIZE_STRENGTH) {
int steps, bytes;
/* Always prefer 1k blocks over 512bytes ones */
@ -5454,10 +5371,12 @@ static int
nand_match_ecc_req(struct nand_chip *chip,
const struct nand_ecc_caps *caps, int oobavail)
{
const struct nand_ecc_props *requirements =
nanddev_get_ecc_requirements(&chip->base);
struct mtd_info *mtd = nand_to_mtd(chip);
const struct nand_ecc_step_info *stepinfo;
int req_step = chip->base.eccreq.step_size;
int req_strength = chip->base.eccreq.strength;
int req_step = requirements->step_size;
int req_strength = requirements->strength;
int req_corr, step_size, strength, nsteps, ecc_bytes, ecc_bytes_total;
int best_step, best_strength, best_ecc_bytes;
int best_ecc_bytes_total = INT_MAX;
@ -5598,11 +5517,12 @@ nand_maximize_ecc(struct nand_chip *chip,
* @caps: ECC engine caps info structure
* @oobavail: OOB size that the ECC engine can use
*
* Choose the ECC configuration according to following logic
* Choose the ECC configuration according to following logic.
*
* 1. If both ECC step size and ECC strength are already set (usually by DT)
* then check if it is supported by this controller.
* 2. If NAND_ECC_MAXIMIZE is set, then select maximum ECC strength.
* 2. If the user provided the nand-ecc-maximize property, then select maximum
* ECC strength.
* 3. Otherwise, try to match the ECC step size and ECC strength closest
* to the chip's requirement. If available OOB size can't fit the chip
* requirement then fallback to the maximum ECC step size and ECC strength.
@ -5613,6 +5533,7 @@ int nand_ecc_choose_conf(struct nand_chip *chip,
const struct nand_ecc_caps *caps, int oobavail)
{
struct mtd_info *mtd = nand_to_mtd(chip);
struct nand_device *nanddev = mtd_to_nanddev(mtd);
if (WARN_ON(oobavail < 0 || oobavail > mtd->oobsize))
return -EINVAL;
@ -5620,7 +5541,7 @@ int nand_ecc_choose_conf(struct nand_chip *chip,
if (chip->ecc.size && chip->ecc.strength)
return nand_check_ecc_caps(chip, caps, oobavail);
if (chip->ecc.options & NAND_ECC_MAXIMIZE)
if (nanddev->ecc.user_conf.flags & NAND_ECC_MAXIMIZE_STRENGTH)
return nand_maximize_ecc(chip, caps, oobavail);
if (!nand_match_ecc_req(chip, caps, oobavail))
@ -5630,41 +5551,6 @@ int nand_ecc_choose_conf(struct nand_chip *chip,
}
EXPORT_SYMBOL_GPL(nand_ecc_choose_conf);
/*
* Check if the chip configuration meet the datasheet requirements.
* If our configuration corrects A bits per B bytes and the minimum
* required correction level is X bits per Y bytes, then we must ensure
* both of the following are true:
*
* (1) A / B >= X / Y
* (2) A >= X
*
* Requirement (1) ensures we can correct for the required bitflip density.
* Requirement (2) ensures we can correct even when all bitflips are clumped
* in the same sector.
*/
static bool nand_ecc_strength_good(struct nand_chip *chip)
{
struct mtd_info *mtd = nand_to_mtd(chip);
struct nand_ecc_ctrl *ecc = &chip->ecc;
int corr, ds_corr;
if (ecc->size == 0 || chip->base.eccreq.step_size == 0)
/* Not enough information */
return true;
/*
* We get the number of corrected bits per page to compare
* the correction density.
*/
corr = (mtd->writesize * ecc->strength) / ecc->size;
ds_corr = (mtd->writesize * chip->base.eccreq.strength) /
chip->base.eccreq.step_size;
return corr >= ds_corr && ecc->strength >= chip->base.eccreq.strength;
}
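The two requirements in the comment above reduce to integer arithmetic on bits-per-page. A self-contained sketch (not kernel code; the page and step sizes below are purely illustrative):

#include <stdbool.h>
#include <stdio.h>

/* Bits correctable per page for a strength of S bits per T-byte step. */
static int bits_per_page(int writesize, int strength, int step_size)
{
	return (writesize * strength) / step_size;
}

static bool strength_good(int writesize, int conf_strength, int conf_step,
			  int req_strength, int req_step)
{
	/* (1) enough correction density, (2) enough for clumped bitflips */
	return bits_per_page(writesize, conf_strength, conf_step) >=
	       bits_per_page(writesize, req_strength, req_step) &&
	       conf_strength >= req_strength;
}

int main(void)
{
	/* 4 bits/512 B configured vs. 8 bits/512 B required: too weak (0) */
	printf("%d\n", strength_good(4096, 4, 512, 8, 512));
	/* 8 bits/512 B configured vs. 4 bits/1024 B required: fine (1) */
	printf("%d\n", strength_good(4096, 8, 512, 4, 1024));
	return 0;
}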
static int rawnand_erase(struct nand_device *nand, const struct nand_pos *pos)
{
struct nand_chip *chip = container_of(nand, struct nand_chip,
@ -5752,15 +5638,17 @@ static int nand_scan_tail(struct nand_chip *chip)
* If no default placement scheme is given, select an appropriate one.
*/
if (!mtd->ooblayout &&
!(ecc->mode == NAND_ECC_SOFT && ecc->algo == NAND_ECC_BCH)) {
!(ecc->engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
ecc->algo == NAND_ECC_ALGO_BCH)) {
switch (mtd->oobsize) {
case 8:
case 16:
mtd_set_ooblayout(mtd, &nand_ooblayout_sp_ops);
mtd_set_ooblayout(mtd, nand_get_small_page_ooblayout());
break;
case 64:
case 128:
mtd_set_ooblayout(mtd, &nand_ooblayout_lp_hamming_ops);
mtd_set_ooblayout(mtd,
nand_get_large_page_hamming_ooblayout());
break;
default:
/*
@ -5770,9 +5658,9 @@ static int nand_scan_tail(struct nand_chip *chip)
* page with ECC layout when ->oobsize <= 128 for
* compatibility reasons.
*/
if (ecc->mode == NAND_ECC_NONE) {
if (ecc->engine_type == NAND_ECC_ENGINE_TYPE_NONE) {
mtd_set_ooblayout(mtd,
&nand_ooblayout_lp_ops);
nand_get_large_page_ooblayout());
break;
}
@ -5788,49 +5676,11 @@ static int nand_scan_tail(struct nand_chip *chip)
* selected and we have 256 byte pagesize fallback to software ECC
*/
switch (ecc->mode) {
case NAND_ECC_HW:
/* Use standard hwecc read page function? */
if (!ecc->read_page)
ecc->read_page = nand_read_page_hwecc;
if (!ecc->write_page)
ecc->write_page = nand_write_page_hwecc;
if (!ecc->read_page_raw)
ecc->read_page_raw = nand_read_page_raw;
if (!ecc->write_page_raw)
ecc->write_page_raw = nand_write_page_raw;
if (!ecc->read_oob)
ecc->read_oob = nand_read_oob_std;
if (!ecc->write_oob)
ecc->write_oob = nand_write_oob_std;
if (!ecc->read_subpage)
ecc->read_subpage = nand_read_subpage;
if (!ecc->write_subpage && ecc->hwctl && ecc->calculate)
ecc->write_subpage = nand_write_subpage_hwecc;
fallthrough;
case NAND_ECC_HW_SYNDROME:
if ((!ecc->calculate || !ecc->correct || !ecc->hwctl) &&
(!ecc->read_page ||
ecc->read_page == nand_read_page_hwecc ||
!ecc->write_page ||
ecc->write_page == nand_write_page_hwecc)) {
WARN(1, "No ECC functions supplied; hardware ECC not possible\n");
ret = -EINVAL;
switch (ecc->engine_type) {
case NAND_ECC_ENGINE_TYPE_ON_HOST:
ret = nand_set_ecc_on_host_ops(chip);
if (ret)
goto err_nand_manuf_cleanup;
}
/* Use standard syndrome read/write page function? */
if (!ecc->read_page)
ecc->read_page = nand_read_page_syndrome;
if (!ecc->write_page)
ecc->write_page = nand_write_page_syndrome;
if (!ecc->read_page_raw)
ecc->read_page_raw = nand_read_page_raw_syndrome;
if (!ecc->write_page_raw)
ecc->write_page_raw = nand_write_page_raw_syndrome;
if (!ecc->read_oob)
ecc->read_oob = nand_read_oob_syndrome;
if (!ecc->write_oob)
ecc->write_oob = nand_write_oob_syndrome;
if (mtd->writesize >= ecc->size) {
if (!ecc->strength) {
@ -5842,18 +5692,17 @@ static int nand_scan_tail(struct nand_chip *chip)
}
pr_warn("%d byte HW ECC not possible on %d byte page size, fallback to SW ECC\n",
ecc->size, mtd->writesize);
ecc->mode = NAND_ECC_SOFT;
ecc->algo = NAND_ECC_HAMMING;
ecc->engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
ecc->algo = NAND_ECC_ALGO_HAMMING;
fallthrough;
case NAND_ECC_SOFT:
case NAND_ECC_ENGINE_TYPE_SOFT:
ret = nand_set_ecc_soft_ops(chip);
if (ret) {
ret = -EINVAL;
if (ret)
goto err_nand_manuf_cleanup;
}
break;
case NAND_ECC_ON_DIE:
case NAND_ECC_ENGINE_TYPE_ON_DIE:
if (!ecc->read_page || !ecc->write_page) {
WARN(1, "No ECC functions supplied; on-die ECC not possible\n");
ret = -EINVAL;
@ -5865,8 +5714,8 @@ static int nand_scan_tail(struct nand_chip *chip)
ecc->write_oob = nand_write_oob_std;
break;
case NAND_ECC_NONE:
pr_warn("NAND_ECC_NONE selected by board driver. This is not recommended!\n");
case NAND_ECC_ENGINE_TYPE_NONE:
pr_warn("NAND_ECC_ENGINE_TYPE_NONE selected by board driver. This is not recommended!\n");
ecc->read_page = nand_read_page_raw;
ecc->write_page = nand_write_page_raw;
ecc->read_oob = nand_read_oob_std;
@ -5879,7 +5728,7 @@ static int nand_scan_tail(struct nand_chip *chip)
break;
default:
WARN(1, "Invalid NAND_ECC_MODE %d\n", ecc->mode);
WARN(1, "Invalid NAND_ECC_MODE %d\n", ecc->engine_type);
ret = -EINVAL;
goto err_nand_manuf_cleanup;
}
@ -5913,7 +5762,10 @@ static int nand_scan_tail(struct nand_chip *chip)
ret = -EINVAL;
goto err_nand_manuf_cleanup;
}
ecc->total = ecc->steps * ecc->bytes;
chip->base.ecc.ctx.total = ecc->total;
if (ecc->total > mtd->oobsize) {
WARN(1, "Total number of ECC bytes exceeded oobsize\n");
ret = -EINVAL;
@ -5931,11 +5783,11 @@ static int nand_scan_tail(struct nand_chip *chip)
mtd->oobavail = ret;
/* ECC sanity check: warn if it's too weak */
if (!nand_ecc_strength_good(chip))
if (!nand_ecc_is_strong_enough(&chip->base))
pr_warn("WARNING: %s: the ECC used on your system (%db/%dB) is too weak compared to the one required by the NAND chip (%db/%dB)\n",
mtd->name, chip->ecc.strength, chip->ecc.size,
chip->base.eccreq.strength,
chip->base.eccreq.step_size);
nanddev_get_ecc_requirements(&chip->base)->strength,
nanddev_get_ecc_requirements(&chip->base)->step_size);
/* Allow subpage writes up to ecc.steps. Not possible for MLC flash */
if (!(chip->options & NAND_NO_SUBPAGE_WRITE) && nand_is_slc(chip)) {
@ -5956,8 +5808,8 @@ static int nand_scan_tail(struct nand_chip *chip)
chip->pagecache.page = -1;
/* Large page NAND with SOFT_ECC should support subpage reads */
switch (ecc->mode) {
case NAND_ECC_SOFT:
switch (ecc->engine_type) {
case NAND_ECC_ENGINE_TYPE_SOFT:
if (chip->page_shift > 9)
chip->options |= NAND_SUBPAGE_READ;
break;
@ -6101,8 +5953,8 @@ EXPORT_SYMBOL(nand_scan_with_ids);
*/
void nand_cleanup(struct nand_chip *chip)
{
if (chip->ecc.mode == NAND_ECC_SOFT &&
chip->ecc.algo == NAND_ECC_BCH)
if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
chip->ecc.algo == NAND_ECC_ALGO_BCH)
nand_bch_free((struct nand_bch_control *)chip->ecc.priv);
nanddev_cleanup(&chip->base);


@ -165,6 +165,7 @@ struct nand_bch_control *nand_bch_init(struct mtd_info *mtd)
*/
nand->ecc.steps = eccsteps;
nand->ecc.total = eccsteps * eccbytes;
nand->base.ecc.ctx.total = nand->ecc.total;
if (mtd_ooblayout_count_eccbytes(mtd) != (eccsteps*eccbytes)) {
pr_warn("invalid ecc layout\n");
goto fail;


@ -10,27 +10,32 @@
static void esmt_nand_decode_id(struct nand_chip *chip)
{
struct nand_device *base = &chip->base;
struct nand_ecc_props requirements = {};
nand_decode_ext_id(chip);
/* Extract ECC requirements from 5th id byte. */
if (chip->id.len >= 5 && nand_is_slc(chip)) {
chip->base.eccreq.step_size = 512;
requirements.step_size = 512;
switch (chip->id.data[4] & 0x3) {
case 0x0:
chip->base.eccreq.strength = 4;
requirements.strength = 4;
break;
case 0x1:
chip->base.eccreq.strength = 2;
requirements.strength = 2;
break;
case 0x2:
chip->base.eccreq.strength = 1;
requirements.strength = 1;
break;
default:
WARN(1, "Could not get ECC info");
chip->base.eccreq.step_size = 0;
requirements.step_size = 0;
break;
}
}
nanddev_set_ecc_requirements(base, &requirements);
}
static int esmt_nand_init(struct nand_chip *chip)


@ -495,34 +495,36 @@ static void hynix_nand_extract_oobsize(struct nand_chip *chip,
static void hynix_nand_extract_ecc_requirements(struct nand_chip *chip,
bool valid_jedecid)
{
struct nand_device *base = &chip->base;
struct nand_ecc_props requirements = {};
u8 ecc_level = (chip->id.data[4] >> 4) & 0x7;
if (valid_jedecid) {
/* Reference: H27UCG8T2E datasheet */
chip->base.eccreq.step_size = 1024;
requirements.step_size = 1024;
switch (ecc_level) {
case 0:
chip->base.eccreq.step_size = 0;
chip->base.eccreq.strength = 0;
requirements.step_size = 0;
requirements.strength = 0;
break;
case 1:
chip->base.eccreq.strength = 4;
requirements.strength = 4;
break;
case 2:
chip->base.eccreq.strength = 24;
requirements.strength = 24;
break;
case 3:
chip->base.eccreq.strength = 32;
requirements.strength = 32;
break;
case 4:
chip->base.eccreq.strength = 40;
requirements.strength = 40;
break;
case 5:
chip->base.eccreq.strength = 50;
requirements.strength = 50;
break;
case 6:
chip->base.eccreq.strength = 60;
requirements.strength = 60;
break;
default:
/*
@ -543,14 +545,14 @@ static void hynix_nand_extract_ecc_requirements(struct nand_chip *chip,
if (nand_tech < 3) {
/* > 26nm, reference: H27UBG8T2A datasheet */
if (ecc_level < 5) {
chip->base.eccreq.step_size = 512;
chip->base.eccreq.strength = 1 << ecc_level;
requirements.step_size = 512;
requirements.strength = 1 << ecc_level;
} else if (ecc_level < 7) {
if (ecc_level == 5)
chip->base.eccreq.step_size = 2048;
requirements.step_size = 2048;
else
chip->base.eccreq.step_size = 1024;
chip->base.eccreq.strength = 24;
requirements.step_size = 1024;
requirements.strength = 24;
} else {
/*
* We should never reach this case, but if that
@ -563,18 +565,20 @@ static void hynix_nand_extract_ecc_requirements(struct nand_chip *chip,
} else {
/* <= 26nm, reference: H27UBG8T2B datasheet */
if (!ecc_level) {
chip->base.eccreq.step_size = 0;
chip->base.eccreq.strength = 0;
requirements.step_size = 0;
requirements.strength = 0;
} else if (ecc_level < 5) {
chip->base.eccreq.step_size = 512;
chip->base.eccreq.strength = 1 << (ecc_level - 1);
requirements.step_size = 512;
requirements.strength = 1 << (ecc_level - 1);
} else {
chip->base.eccreq.step_size = 1024;
chip->base.eccreq.strength = 24 +
requirements.step_size = 1024;
requirements.strength = 24 +
(8 * (ecc_level - 5));
}
}
}
nanddev_set_ecc_requirements(base, &requirements);
}
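The <= 26nm branch above encodes a non-obvious mapping from the ECC level nibble to strength and step size. A standalone sketch (illustrative only, not kernel code) that tabulates it:

#include <stdio.h>

int main(void)
{
	int ecc_level;

	/* Same mapping as the H27UBG8T2B (<= 26nm) branch above */
	for (ecc_level = 0; ecc_level < 8; ecc_level++) {
		int step_size, strength;

		if (!ecc_level) {
			step_size = 0;
			strength = 0;
		} else if (ecc_level < 5) {
			step_size = 512;
			strength = 1 << (ecc_level - 1);	/* 1, 2, 4, 8 */
		} else {
			step_size = 1024;
			strength = 24 + 8 * (ecc_level - 5);	/* 24, 32, 40 */
		}
		printf("level %d: %d bit(s) per %d bytes\n",
		       ecc_level, strength, step_size);
	}
	return 0;
}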
static void hynix_nand_extract_scrambling_requirements(struct nand_chip *chip,


@ -23,6 +23,7 @@
*/
int nand_jedec_detect(struct nand_chip *chip)
{
struct nand_device *base = &chip->base;
struct mtd_info *mtd = nand_to_mtd(chip);
struct nand_memory_organization *memorg;
struct nand_jedec_params *p;
@ -120,8 +121,12 @@ int nand_jedec_detect(struct nand_chip *chip)
ecc = &p->ecc_info[0];
if (ecc->codeword_size >= 9) {
chip->base.eccreq.strength = ecc->ecc_bits;
chip->base.eccreq.step_size = 1 << ecc->codeword_size;
struct nand_ecc_props requirements = {
.strength = ecc->ecc_bits,
.step_size = 1 << ecc->codeword_size,
};
nanddev_set_ecc_requirements(base, &requirements);
} else {
pr_warn("Invalid codeword size\n");
}


@ -413,6 +413,8 @@ enum {
*/
static int micron_supports_on_die_ecc(struct nand_chip *chip)
{
const struct nand_ecc_props *requirements =
nanddev_get_ecc_requirements(&chip->base);
u8 id[5];
int ret;
@ -425,7 +427,7 @@ static int micron_supports_on_die_ecc(struct nand_chip *chip)
/*
* We only support on-die ECC of 4/512 or 8/512
*/
if (chip->base.eccreq.strength != 4 && chip->base.eccreq.strength != 8)
if (requirements->strength != 4 && requirements->strength != 8)
return MICRON_ON_DIE_UNSUPPORTED;
/* 0x2 means on-die ECC is available. */
@ -466,7 +468,7 @@ static int micron_supports_on_die_ecc(struct nand_chip *chip)
/*
* We only support on-die ECC of 4/512 or 8/512
*/
if (chip->base.eccreq.strength != 4 && chip->base.eccreq.strength != 8)
if (requirements->strength != 4 && requirements->strength != 8)
return MICRON_ON_DIE_UNSUPPORTED;
return MICRON_ON_DIE_SUPPORTED;
@ -474,6 +476,9 @@ static int micron_supports_on_die_ecc(struct nand_chip *chip)
static int micron_nand_init(struct nand_chip *chip)
{
struct nand_device *base = &chip->base;
const struct nand_ecc_props *requirements =
nanddev_get_ecc_requirements(base);
struct mtd_info *mtd = nand_to_mtd(chip);
struct micron_nand *micron;
int ondie;
@ -497,13 +502,13 @@ static int micron_nand_init(struct nand_chip *chip)
ondie = micron_supports_on_die_ecc(chip);
if (ondie == MICRON_ON_DIE_MANDATORY &&
chip->ecc.mode != NAND_ECC_ON_DIE) {
chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_ON_DIE) {
pr_err("On-die ECC forcefully enabled, not supported\n");
ret = -EINVAL;
goto err_free_manuf_data;
}
if (chip->ecc.mode == NAND_ECC_ON_DIE) {
if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_ON_DIE) {
if (ondie == MICRON_ON_DIE_UNSUPPORTED) {
pr_err("On-die ECC selected but not supported\n");
ret = -EINVAL;
@ -523,7 +528,7 @@ static int micron_nand_init(struct nand_chip *chip)
* That's not needed for 8-bit ECC, because the status expose
* a better approximation of the number of bitflips in a page.
*/
if (chip->base.eccreq.strength == 4) {
if (requirements->strength == 4) {
micron->ecc.rawbuf = kmalloc(mtd->writesize +
mtd->oobsize,
GFP_KERNEL);
@ -533,17 +538,17 @@ static int micron_nand_init(struct nand_chip *chip)
}
}
if (chip->base.eccreq.strength == 4)
if (requirements->strength == 4)
mtd_set_ooblayout(mtd,
&micron_nand_on_die_4_ooblayout_ops);
else
mtd_set_ooblayout(mtd,
&micron_nand_on_die_8_ooblayout_ops);
chip->ecc.bytes = chip->base.eccreq.strength * 2;
chip->ecc.bytes = requirements->strength * 2;
chip->ecc.size = 512;
chip->ecc.strength = chip->base.eccreq.strength;
chip->ecc.algo = NAND_ECC_BCH;
chip->ecc.strength = requirements->strength;
chip->ecc.algo = NAND_ECC_ALGO_BCH;
chip->ecc.read_page = micron_nand_read_page_on_die_ecc;
chip->ecc.write_page = micron_nand_write_page_on_die_ecc;


@ -34,6 +34,8 @@ u16 onfi_crc16(u16 crc, u8 const *p, size_t len)
static int nand_flash_detect_ext_param_page(struct nand_chip *chip,
struct nand_onfi_params *p)
{
struct nand_device *base = &chip->base;
struct nand_ecc_props requirements;
struct onfi_ext_param_page *ep;
struct onfi_ext_section *s;
struct onfi_ext_ecc_info *ecc;
@ -94,8 +96,10 @@ static int nand_flash_detect_ext_param_page(struct nand_chip *chip,
goto ext_out;
}
chip->base.eccreq.strength = ecc->ecc_bits;
chip->base.eccreq.step_size = 1 << ecc->codeword_size;
requirements.strength = ecc->ecc_bits;
requirements.step_size = 1 << ecc->codeword_size;
nanddev_set_ecc_requirements(base, &requirements);
ret = 0;
ext_out:
@ -139,6 +143,7 @@ static void nand_bit_wise_majority(const void **srcbufs,
*/
int nand_onfi_detect(struct nand_chip *chip)
{
struct nand_device *base = &chip->base;
struct mtd_info *mtd = nand_to_mtd(chip);
struct nand_memory_organization *memorg;
struct nand_onfi_params *p = NULL, *pbuf;
@ -265,8 +270,12 @@ int nand_onfi_detect(struct nand_chip *chip)
chip->options |= NAND_BUSWIDTH_16;
if (p->ecc_bits != 0xff) {
chip->base.eccreq.strength = p->ecc_bits;
chip->base.eccreq.step_size = 512;
struct nand_ecc_props requirements = {
.strength = p->ecc_bits,
.step_size = 512,
};
nanddev_set_ecc_requirements(base, &requirements);
} else if (onfi_version >= 21 &&
(le16_to_cpu(p->features) & ONFI_FEATURE_EXT_PARAM_PAGE)) {


@ -10,6 +10,8 @@
static void samsung_nand_decode_id(struct nand_chip *chip)
{
struct nand_device *base = &chip->base;
struct nand_ecc_props requirements = {};
struct mtd_info *mtd = nand_to_mtd(chip);
struct nand_memory_organization *memorg;
@ -71,23 +73,23 @@ static void samsung_nand_decode_id(struct nand_chip *chip)
/* Extract ECC requirements from 5th id byte*/
extid = (chip->id.data[4] >> 4) & 0x07;
if (extid < 5) {
chip->base.eccreq.step_size = 512;
chip->base.eccreq.strength = 1 << extid;
requirements.step_size = 512;
requirements.strength = 1 << extid;
} else {
chip->base.eccreq.step_size = 1024;
requirements.step_size = 1024;
switch (extid) {
case 5:
chip->base.eccreq.strength = 24;
requirements.strength = 24;
break;
case 6:
chip->base.eccreq.strength = 40;
requirements.strength = 40;
break;
case 7:
chip->base.eccreq.strength = 60;
requirements.strength = 60;
break;
default:
WARN(1, "Could not decode ECC info");
chip->base.eccreq.step_size = 0;
requirements.step_size = 0;
}
}
} else {
@ -97,8 +99,8 @@ static void samsung_nand_decode_id(struct nand_chip *chip)
switch (chip->id.data[1]) {
/* K9F4G08U0D-S[I|C]B0(T00) */
case 0xDC:
chip->base.eccreq.step_size = 512;
chip->base.eccreq.strength = 1;
requirements.step_size = 512;
requirements.strength = 1;
break;
/* K9F1G08U0E 21nm chips do not support subpage write */
@ -112,6 +114,8 @@ static void samsung_nand_decode_id(struct nand_chip *chip)
}
}
}
nanddev_set_ecc_requirements(base, &requirements);
}
static int samsung_nand_init(struct nand_chip *chip)


@ -140,11 +140,13 @@ static void toshiba_nand_benand_init(struct nand_chip *chip)
chip->options |= NAND_SUBPAGE_READ;
mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops);
mtd_set_ooblayout(mtd, nand_get_large_page_ooblayout());
}
static void toshiba_nand_decode_id(struct nand_chip *chip)
{
struct nand_device *base = &chip->base;
struct nand_ecc_props requirements = {};
struct mtd_info *mtd = nand_to_mtd(chip);
struct nand_memory_organization *memorg;
@ -175,23 +177,25 @@ static void toshiba_nand_decode_id(struct nand_chip *chip)
* - 24nm: 8 bit ECC for each 512Byte is required.
*/
if (chip->id.len >= 6 && nand_is_slc(chip)) {
chip->base.eccreq.step_size = 512;
requirements.step_size = 512;
switch (chip->id.data[5] & 0x7) {
case 0x4:
chip->base.eccreq.strength = 1;
requirements.strength = 1;
break;
case 0x5:
chip->base.eccreq.strength = 4;
requirements.strength = 4;
break;
case 0x6:
chip->base.eccreq.strength = 8;
requirements.strength = 8;
break;
default:
WARN(1, "Could not get ECC info");
chip->base.eccreq.step_size = 0;
requirements.step_size = 0;
break;
}
}
nanddev_set_ecc_requirements(base, &requirements);
}
static int
@ -273,7 +277,8 @@ static int toshiba_nand_init(struct nand_chip *chip)
chip->options |= NAND_BBM_FIRSTPAGE | NAND_BBM_SECONDPAGE;
/* Check that chip is BENAND and ECC mode is on-die */
if (nand_is_slc(chip) && chip->ecc.mode == NAND_ECC_ON_DIE &&
if (nand_is_slc(chip) &&
chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_ON_DIE &&
chip->id.data[4] & TOSHIBA_NAND_ID4_IS_BENAND)
toshiba_nand_benand_init(chip);


@ -2234,8 +2234,8 @@ static int ns_attach_chip(struct nand_chip *chip)
return -EINVAL;
}
chip->ecc.mode = NAND_ECC_SOFT;
chip->ecc.algo = NAND_ECC_BCH;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
chip->ecc.algo = NAND_ECC_ALGO_BCH;
chip->ecc.size = 512;
chip->ecc.strength = bch;
chip->ecc.bytes = eccbytes;
@ -2274,8 +2274,8 @@ static int __init ns_init_module(void)
nsmtd = nand_to_mtd(chip);
nand_set_controller_data(chip, (void *)ns);
chip->ecc.mode = NAND_ECC_SOFT;
chip->ecc.algo = NAND_ECC_HAMMING;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
/* The NAND_SKIP_BBTSCAN option is necessary for 'overridesize' */
/* and 'badblocks' parameters to work */
chip->options |= NAND_SKIP_BBTSCAN;


@ -149,7 +149,7 @@ static int ndfc_chip_init(struct ndfc_controller *ndfc,
chip->ecc.correct = nand_correct_data;
chip->ecc.hwctl = ndfc_enable_hwecc;
chip->ecc.calculate = ndfc_calculate_ecc;
chip->ecc.mode = NAND_ECC_HW;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
chip->ecc.size = 256;
chip->ecc.bytes = 3;
chip->ecc.strength = 1;


@ -884,8 +884,8 @@ static int omap_correct_data(struct nand_chip *chip, u_char *dat,
int stat = 0;
/* Ex NAND_ECC_HW12_2048 */
if ((info->nand.ecc.mode == NAND_ECC_HW) &&
(info->nand.ecc.size == 2048))
if (info->nand.ecc.engine_type == NAND_ECC_ENGINE_TYPE_ON_HOST &&
info->nand.ecc.size == 2048)
blockCnt = 4;
else
blockCnt = 1;
@ -2006,12 +2006,12 @@ static int omap_nand_attach_chip(struct nand_chip *chip)
return -EINVAL;
/*
* Bail out earlier to let NAND_ECC_SOFT code create its own
* Bail out earlier to let NAND_ECC_ENGINE_TYPE_SOFT code create its own
* ooblayout instead of using ours.
*/
if (info->ecc_opt == OMAP_ECC_HAM1_CODE_SW) {
chip->ecc.mode = NAND_ECC_SOFT;
chip->ecc.algo = NAND_ECC_HAMMING;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
return 0;
}
@ -2019,7 +2019,7 @@ static int omap_nand_attach_chip(struct nand_chip *chip)
switch (info->ecc_opt) {
case OMAP_ECC_HAM1_CODE_HW:
dev_info(dev, "nand: using OMAP_ECC_HAM1_CODE_HW\n");
chip->ecc.mode = NAND_ECC_HW;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
chip->ecc.bytes = 3;
chip->ecc.size = 512;
chip->ecc.strength = 1;
@ -2036,7 +2036,7 @@ static int omap_nand_attach_chip(struct nand_chip *chip)
case OMAP_ECC_BCH4_CODE_HW_DETECTION_SW:
pr_info("nand: using OMAP_ECC_BCH4_CODE_HW_DETECTION_SW\n");
chip->ecc.mode = NAND_ECC_HW;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
chip->ecc.size = 512;
chip->ecc.bytes = 7;
chip->ecc.strength = 4;
@ -2056,7 +2056,7 @@ static int omap_nand_attach_chip(struct nand_chip *chip)
case OMAP_ECC_BCH4_CODE_HW:
pr_info("nand: using OMAP_ECC_BCH4_CODE_HW ECC scheme\n");
chip->ecc.mode = NAND_ECC_HW;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
chip->ecc.size = 512;
/* 14th bit is kept reserved for ROM-code compatibility */
chip->ecc.bytes = 7 + 1;
@ -2078,7 +2078,7 @@ static int omap_nand_attach_chip(struct nand_chip *chip)
case OMAP_ECC_BCH8_CODE_HW_DETECTION_SW:
pr_info("nand: using OMAP_ECC_BCH8_CODE_HW_DETECTION_SW\n");
chip->ecc.mode = NAND_ECC_HW;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
chip->ecc.size = 512;
chip->ecc.bytes = 13;
chip->ecc.strength = 8;
@ -2098,7 +2098,7 @@ static int omap_nand_attach_chip(struct nand_chip *chip)
case OMAP_ECC_BCH8_CODE_HW:
pr_info("nand: using OMAP_ECC_BCH8_CODE_HW ECC scheme\n");
chip->ecc.mode = NAND_ECC_HW;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
chip->ecc.size = 512;
/* 14th bit is kept reserved for ROM-code compatibility */
chip->ecc.bytes = 13 + 1;
@ -2121,7 +2121,7 @@ static int omap_nand_attach_chip(struct nand_chip *chip)
case OMAP_ECC_BCH16_CODE_HW:
pr_info("Using OMAP_ECC_BCH16_CODE_HW ECC scheme\n");
chip->ecc.mode = NAND_ECC_HW;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
chip->ecc.size = 512;
chip->ecc.bytes = 26;
chip->ecc.strength = 16;


@ -139,8 +139,8 @@ static int __init orion_nand_probe(struct platform_device *pdev)
nc->legacy.IO_ADDR_R = nc->legacy.IO_ADDR_W = io_base;
nc->legacy.cmd_ctrl = orion_nand_cmd_ctrl;
nc->legacy.read_buf = orion_nand_read_buf;
nc->ecc.mode = NAND_ECC_SOFT;
nc->ecc.algo = NAND_ECC_HAMMING;
nc->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
nc->ecc.algo = NAND_ECC_ALGO_HAMMING;
if (board->chip_delay)
nc->legacy.chip_delay = board->chip_delay;


@ -68,7 +68,7 @@ static void pasemi_hwcontrol(struct nand_chip *chip, int cmd,
inl(lpcctl);
}
int pasemi_device_ready(struct nand_chip *chip)
static int pasemi_device_ready(struct nand_chip *chip)
{
return !!(inl(lpcctl) & LBICTRL_LPCCTL_NR);
}
@ -132,8 +132,8 @@ static int pasemi_nand_probe(struct platform_device *ofdev)
chip->legacy.read_buf = pasemi_read_buf;
chip->legacy.write_buf = pasemi_write_buf;
chip->legacy.chip_delay = 0;
chip->ecc.mode = NAND_ECC_SOFT;
chip->ecc.algo = NAND_ECC_HAMMING;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
/* Enable the following for a flash based bad block table */
chip->bbt_options = NAND_BBT_USE_FLASH;


@ -66,8 +66,8 @@ static int plat_nand_probe(struct platform_device *pdev)
data->chip.options |= pdata->chip.options;
data->chip.bbt_options |= pdata->chip.bbt_options;
data->chip.ecc.mode = NAND_ECC_SOFT;
data->chip.ecc.algo = NAND_ECC_HAMMING;
data->chip.ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
data->chip.ecc.algo = NAND_ECC_ALGO_HAMMING;
platform_set_drvdata(pdev, data);


@ -2550,7 +2550,7 @@ static int qcom_nand_attach_chip(struct nand_chip *chip)
ecc->write_page_raw = qcom_nandc_write_page_raw;
ecc->write_oob = qcom_nandc_write_oob;
ecc->mode = NAND_ECC_HW;
ecc->engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
mtd_set_ooblayout(mtd, &qcom_nand_ooblayout_ops);
@ -2702,10 +2702,8 @@ static int qcom_nandc_alloc(struct qcom_nand_controller *nandc)
if (IS_ERR(nandc->tx_chan)) {
ret = PTR_ERR(nandc->tx_chan);
nandc->tx_chan = NULL;
if (ret != -EPROBE_DEFER)
dev_err(nandc->dev,
"tx DMA channel request failed: %d\n",
ret);
dev_err_probe(nandc->dev, ret,
"tx DMA channel request failed\n");
goto unalloc;
}
@ -2713,10 +2711,8 @@ static int qcom_nandc_alloc(struct qcom_nand_controller *nandc)
if (IS_ERR(nandc->rx_chan)) {
ret = PTR_ERR(nandc->rx_chan);
nandc->rx_chan = NULL;
if (ret != -EPROBE_DEFER)
dev_err(nandc->dev,
"rx DMA channel request failed: %d\n",
ret);
dev_err_probe(nandc->dev, ret,
"rx DMA channel request failed\n");
goto unalloc;
}
@ -2724,10 +2720,8 @@ static int qcom_nandc_alloc(struct qcom_nand_controller *nandc)
if (IS_ERR(nandc->cmd_chan)) {
ret = PTR_ERR(nandc->cmd_chan);
nandc->cmd_chan = NULL;
if (ret != -EPROBE_DEFER)
dev_err(nandc->dev,
"cmd DMA channel request failed: %d\n",
ret);
dev_err_probe(nandc->dev, ret,
"cmd DMA channel request failed\n");
goto unalloc;
}
@ -2750,10 +2744,8 @@ static int qcom_nandc_alloc(struct qcom_nand_controller *nandc)
if (IS_ERR(nandc->chan)) {
ret = PTR_ERR(nandc->chan);
nandc->chan = NULL;
if (ret != -EPROBE_DEFER)
dev_err(nandc->dev,
"rxtx DMA channel request failed: %d\n",
ret);
dev_err_probe(nandc->dev, ret,
"rxtx DMA channel request failed\n");
return ret;
}
}


@ -859,7 +859,8 @@ static int r852_probe(struct pci_dev *pci_dev, const struct pci_device_id *id)
chip->legacy.write_buf = r852_write_buf;
/* ecc */
chip->ecc.mode = NAND_ECC_HW_SYNDROME;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
chip->ecc.placement = NAND_ECC_PLACEMENT_INTERLEAVED;
chip->ecc.size = R852_DMA_LEN;
chip->ecc.bytes = SM_OOB_SIZE;
chip->ecc.strength = 2;


@ -904,7 +904,7 @@ static void s3c2410_nand_init_chip(struct s3c2410_nand_info *info,
nmtd->info = info;
nmtd->set = set;
chip->ecc.mode = info->platform->ecc_mode;
chip->ecc.engine_type = info->platform->engine_type;
/*
* If you use u-boot BBT creation code, specifying this flag will
@ -929,24 +929,24 @@ static int s3c2410_nand_attach_chip(struct nand_chip *chip)
struct mtd_info *mtd = nand_to_mtd(chip);
struct s3c2410_nand_info *info = s3c2410_nand_mtd_toinfo(mtd);
switch (chip->ecc.mode) {
switch (chip->ecc.engine_type) {
case NAND_ECC_NONE:
case NAND_ECC_ENGINE_TYPE_NONE:
dev_info(info->device, "ECC disabled\n");
break;
case NAND_ECC_SOFT:
case NAND_ECC_ENGINE_TYPE_SOFT:
/*
* This driver expects Hamming based ECC when ecc_mode is set
* to NAND_ECC_SOFT. Force ecc.algo to NAND_ECC_HAMMING to
* avoid adding an extra ecc_algo field to
* s3c2410_platform_nand.
* This driver expects Hamming based ECC when engine_type is set
* to NAND_ECC_ENGINE_TYPE_SOFT. Force ecc.algo to
* NAND_ECC_ALGO_HAMMING to avoid adding an extra ecc_algo field
* to s3c2410_platform_nand.
*/
chip->ecc.algo = NAND_ECC_HAMMING;
chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
dev_info(info->device, "soft ECC\n");
break;
case NAND_ECC_HW:
case NAND_ECC_ENGINE_TYPE_ON_HOST:
chip->ecc.calculate = s3c2410_nand_calculate_ecc;
chip->ecc.correct = s3c2410_nand_correct_data;
chip->ecc.strength = 1;


@ -1039,13 +1039,13 @@ static int flctl_chip_attach_chip(struct nand_chip *chip)
chip->ecc.strength = 4;
chip->ecc.read_page = flctl_read_page_hwecc;
chip->ecc.write_page = flctl_write_page_hwecc;
chip->ecc.mode = NAND_ECC_HW;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
/* 4 symbols ECC enabled */
flctl->flcmncr_base |= _4ECCEN;
} else {
chip->ecc.mode = NAND_ECC_SOFT;
chip->ecc.algo = NAND_ECC_HAMMING;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
}
return 0;


@ -157,7 +157,7 @@ static int sharpsl_nand_probe(struct platform_device *pdev)
/* 15 us command delay time */
this->legacy.chip_delay = 15;
/* set eccmode using hardware ECC */
this->ecc.mode = NAND_ECC_HW;
this->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
this->ecc.size = 256;
this->ecc.bytes = 3;
this->ecc.strength = 1;


@ -153,8 +153,9 @@ static int socrates_nand_probe(struct platform_device *ofdev)
nand_chip->legacy.read_buf = socrates_nand_read_buf;
nand_chip->legacy.dev_ready = socrates_nand_device_ready;
nand_chip->ecc.mode = NAND_ECC_SOFT; /* enable ECC */
nand_chip->ecc.algo = NAND_ECC_HAMMING;
/* enable ECC */
nand_chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
nand_chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
/* TODO: I have no idea what real delay is. */
nand_chip->legacy.chip_delay = 20; /* 20us command delay time */


@ -1696,14 +1696,15 @@ static int stm32_fmc2_nfc_attach_chip(struct nand_chip *chip)
int ret;
/*
* Only NAND_ECC_HW mode is actually supported
* Only NAND_ECC_ENGINE_TYPE_ON_HOST mode is actually supported
* Hamming => ecc.strength = 1
* BCH4 => ecc.strength = 4
* BCH8 => ecc.strength = 8
* ECC sector size = 512
*/
if (chip->ecc.mode != NAND_ECC_HW) {
dev_err(nfc->dev, "nand_ecc_mode is not well defined in the DT\n");
if (chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_ON_HOST) {
dev_err(nfc->dev,
"nand_ecc_engine_type is not well defined in the DT\n");
return -EINVAL;
}
@ -1762,7 +1763,7 @@ static int stm32_fmc2_nfc_parse_child(struct stm32_fmc2_nfc *nfc,
return ret;
}
if (cs > FMC2_MAX_CE) {
if (cs >= FMC2_MAX_CE) {
dev_err(nfc->dev, "invalid reg value: %d\n", cs);
return -EINVAL;
}
@ -1952,7 +1953,7 @@ static int stm32_fmc2_nfc_probe(struct platform_device *pdev)
NAND_USES_DMA;
/* Default ECC settings */
chip->ecc.mode = NAND_ECC_HW;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
chip->ecc.size = FMC2_ECC_STEP_SIZE;
chip->ecc.strength = FMC2_ECC_BCH8;
