Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mtd/for-7.1' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux

Pull MTD updates from Miquel Raynal:
"MTD changes:

- mtdconcat finally makes it in, after several years of being merged
and reverted

- Baikal SoC support is being removed, so MTD bits are being removed
as well

- misc cleanups

NAND changes:

- SunXi driver support for new versions of the Allwinner NAND
controller.

- DT-binding improvements and cleanups.

- A few fixes (Realtek ECC and Winbond SPI NAND), alongside the
usual load of misc changes.

SPI NOR fixes:

- Enable die erase on MT35XU02GCBA. We knew this flash needed this
fixup since 7f77c561e227 ("mtd: spi-nor: micron-st: add TODO for
fixing mt35xu02gcba") but did not add it due to lack of hardware to
test on.

- Fix locking on some Winbond w25q series flashes.

- Fix Auto Address Increment (AAI) writes on SST flashes that
start on an odd address. The write enable latch needs to be set
again after the single byte program"

* tag 'mtd/for-7.1' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux: (44 commits)
mtd: spinand: winbond: Declare the QE bit on W25NxxJW
mtd: spi-nor: micron-st: Enable die erase support for MT35XU02GCBA
mtd: spi-nor: winbond: Fix locking support for w25q256jw
mtd: spi-nor: sst: Fix write enable before AAI sequence
mtd: spi-nor: winbond: Fix locking support for w25q64jvm
mtd: spi-nor: winbond: Fix locking support for w25q256jwm
dt-bindings: mtd: mxc-nand: add missing compatible string and ref to nand-controller-legacy.yaml
dt-bindings: mtd: gpmi-nand: ref to nand-controller-legacy.yaml
dt-bindings: mtd: refactor NAND bindings and add nand-controller-legacy.yaml
mtd: spinand: winbond: Clarify when to enable the HS bit
mtd: rawnand: sunxi: introduce maximize variable user data length
mtd: rawnand: sunxi: fix typos in comments
mtd: rawnand: sunxi: change error prone variable name
mtd: rawnand: sunxi: remove dead code
mtd: rawnand: sunxi: make the code more self-explanatory
mtd: rawnand: sunxi: replace hard coded value by a define - take2
mtd: rawnand: sunxi: do not count BBM bytes twice
mtd: rawnand: sunxi: fix sunxi_nfc_hw_ecc_read_extra_oob
mtd: rawnand: sunxi: sunxi_nand_ooblayout_free code clarification
mtd: cmdlinepart: use a flexible array member
...

+1126 -440
+1 -1
Documentation/devicetree/bindings/mtd/gpmi-nand.yaml
···
 unevaluatedProperties: false

 allOf:
-  - $ref: nand-controller.yaml
+  - $ref: nand-controller-legacy.yaml

   - if:
       properties:
+24 -3
Documentation/devicetree/bindings/mtd/mxc-nand.yaml
···
   - Uwe Kleine-König <u.kleine-koenig@pengutronix.de>

 allOf:
-  - $ref: nand-controller.yaml
+  - $ref: nand-controller-legacy.yaml

 properties:
   compatible:
     oneOf:
-      - const: fsl,imx27-nand
+      - enum:
+          - fsl,imx25-nand
+          - fsl,imx27-nand
+          - fsl,imx51-nand
+          - fsl,imx53-nand
+      - items:
+          - enum:
+              - fsl,imx35-nand
+          - const: fsl,imx25-nand
       - items:
           - enum:
               - fsl,imx31-nand
           - const: fsl,imx27-nand
   reg:
-    maxItems: 1
+    minItems: 1
+    items:
+      - description: IP register space
+      - description: Nand flash internal buffer space

   interrupts:
     maxItems: 1
+
+  clocks:
+    maxItems: 1
+
+  dmas:
+    maxItems: 1
+
+  dma-names:
+    items:
+      - const: rx-tx

 required:
   - compatible
+1 -45
Documentation/devicetree/bindings/mtd/nand-chip.yaml
···
 allOf:
   - $ref: mtd.yaml#
+  - $ref: nand-property.yaml

 description: |
   This file covers the generic description of a NAND chip. It implies that the
···
   reg:
     description:
       Contains the chip-select IDs.
-
-  nand-ecc-engine:
-    description: |
-      A phandle on the hardware ECC engine if any. There are
-      basically three possibilities:
-      1/ The ECC engine is part of the NAND controller, in this
-         case the phandle should reference the parent node.
-      2/ The ECC engine is part of the NAND part (on-die), in this
-         case the phandle should reference the node itself.
-      3/ The ECC engine is external, in this case the phandle should
-         reference the specific ECC engine node.
-    $ref: /schemas/types.yaml#/definitions/phandle
-
-  nand-use-soft-ecc-engine:
-    description: Use a software ECC engine.
-    type: boolean
-
-  nand-no-ecc-engine:
-    description: Do not use any ECC correction.
-    type: boolean
-
-  nand-ecc-algo:
-    description:
-      Desired ECC algorithm.
-    $ref: /schemas/types.yaml#/definitions/string
-    enum: [hamming, bch, rs]
-
-  nand-ecc-strength:
-    description:
-      Maximum number of bits that can be corrected per ECC step.
-    $ref: /schemas/types.yaml#/definitions/uint32
-    minimum: 1
-
-  nand-ecc-step-size:
-    description:
-      Number of data bytes covered by a single ECC step.
-    $ref: /schemas/types.yaml#/definitions/uint32
-    minimum: 1
-
-  secure-regions:
-    description:
-      Regions in the NAND chip which are protected using a secure element
-      like Trustzone. This property contains the start address and size of
-      the secure regions present.
-    $ref: /schemas/types.yaml#/definitions/uint64-matrix

 required:
   - reg
+65
Documentation/devicetree/bindings/mtd/nand-controller-legacy.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/mtd/nand-controller-legacy.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: NAND Controller Common Properties
+
+maintainers:
+  - Miquel Raynal <miquel.raynal@bootlin.com>
+  - Richard Weinberger <richard@nod.at>
+
+description: >
+  The NAND controller should be represented with its own DT node, and
+  all NAND chips attached to this controller should be defined as
+  children nodes of the NAND controller. This representation should be
+  enforced even for simple controllers supporting only one chip.
+
+  This is only for legacy NAND controllers; new controllers should use
+  nand-controller.yaml.
+
+properties:
+
+  "#address-cells":
+    const: 1
+
+  "#size-cells":
+    enum: [0, 1]
+
+  ranges: true
+
+  cs-gpios:
+    description:
+      Array of chip-select available to the controller. The first
+      entries are a 1:1 mapping of the available chip-select on the
+      NAND controller (even if they are not used). As many additional
+      chip-select as needed may follow and should be phandles of GPIO
+      lines. 'reg' entries of the NAND chip subnodes become indexes of
+      this array when this property is present.
+    minItems: 1
+    maxItems: 8
+
+  partitions:
+    type: object
+
+    required:
+      - compatible
+
+patternProperties:
+  "^nand@[a-f0-9]$":
+    type: object
+    $ref: raw-nand-chip.yaml#
+
+  "^partition@[0-9a-f]+$":
+    type: object
+    $ref: /schemas/mtd/partitions/partition.yaml#/$defs/partition-node
+    deprecated: true
+
+allOf:
+  - $ref: raw-nand-property.yaml#
+  - $ref: nand-property.yaml#
+
+# This is a generic file other bindings inherit from and extend
+additionalProperties: true
+
+2
Documentation/devicetree/bindings/mtd/nand-controller.yaml
···
   children nodes of the NAND controller. This representation should be
   enforced even for simple controllers supporting only one chip.

+select: false
+
 properties:
   $nodename:
     pattern: "^nand-controller(@.*)?"
+64
Documentation/devicetree/bindings/mtd/nand-property.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/mtd/nand-property.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: NAND Chip Common Properties
+
+maintainers:
+  - Miquel Raynal <miquel.raynal@bootlin.com>
+
+description: |
+  This file covers the generic properties of a NAND chip. It implies that the
+  bus interface should not be taken into account: both raw NAND devices and
+  SPI-NAND devices are concerned by this description.
+
+properties:
+  nand-ecc-engine:
+    description: |
+      A phandle on the hardware ECC engine if any. There are
+      basically three possibilities:
+      1/ The ECC engine is part of the NAND controller, in this
+         case the phandle should reference the parent node.
+      2/ The ECC engine is part of the NAND part (on-die), in this
+         case the phandle should reference the node itself.
+      3/ The ECC engine is external, in this case the phandle should
+         reference the specific ECC engine node.
+    $ref: /schemas/types.yaml#/definitions/phandle
+
+  nand-use-soft-ecc-engine:
+    description: Use a software ECC engine.
+    type: boolean
+
+  nand-no-ecc-engine:
+    description: Do not use any ECC correction.
+    type: boolean
+
+  nand-ecc-algo:
+    description:
+      Desired ECC algorithm.
+    $ref: /schemas/types.yaml#/definitions/string
+    enum: [hamming, bch, rs]
+
+  nand-ecc-strength:
+    description:
+      Maximum number of bits that can be corrected per ECC step.
+    $ref: /schemas/types.yaml#/definitions/uint32
+    minimum: 1
+
+  nand-ecc-step-size:
+    description:
+      Number of data bytes covered by a single ECC step.
+    $ref: /schemas/types.yaml#/definitions/uint32
+    minimum: 1
+
+  secure-regions:
+    description:
+      Regions in the NAND chip which are protected using a secure element
+      like Trustzone. This property contains the start address and size of
+      the secure regions present.
+    $ref: /schemas/types.yaml#/definitions/uint64-matrix
+
+# This file can be referenced by more specific devices (like spi-nands)
+additionalProperties: true
+20
Documentation/devicetree/bindings/mtd/partitions/partition.yaml
···
       user space from
     type: boolean

+  part-concat-next:
+    description: List of phandles to MTD partitions that need to be
+      concatenated with the current partition.
+    $ref: /schemas/types.yaml#/definitions/phandle-array
+    minItems: 1
+    maxItems: 16
+    items:
+      maxItems: 1
+
   align:
     $ref: /schemas/types.yaml#/definitions/uint32
     minimum: 2
···
         compatible = "tfa-bl31";
         reg = <0x200000 0x100000>;
         align = <0x4000>;
+      };
+
+      part0: partition@400000 {
+        part-concat-next = <&part1>;
+        label = "part0_0";
+        reg = <0x400000 0x100000>;
+      };
+
+      part1: partition@800000 {
+        label = "part0_1";
+        reg = <0x800000 0x800000>;
       };
     };
+1 -73
Documentation/devicetree/bindings/mtd/raw-nand-chip.yaml
···
 allOf:
   - $ref: nand-chip.yaml#
+  - $ref: raw-nand-property.yaml#

 description: |
   The ECC strength and ECC step size properties define the user
···
   reg:
     description:
       Contains the chip-select IDs.
-
-  nand-ecc-placement:
-    description:
-      Location of the ECC bytes. This location is unknown by default
-      but can be explicitly set to "oob", if all ECC bytes are
-      known to be stored in the OOB area, or "interleaved" if ECC
-      bytes will be interleaved with regular data in the main area.
-    $ref: /schemas/types.yaml#/definitions/string
-    enum: [ oob, interleaved ]
-    deprecated: true
-
-  nand-ecc-mode:
-    description:
-      Legacy ECC configuration mixing the ECC engine choice and
-      configuration.
-    $ref: /schemas/types.yaml#/definitions/string
-    enum: [none, soft, soft_bch, hw, hw_syndrome, on-die]
-    deprecated: true
-
-  nand-bus-width:
-    description:
-      Bus width to the NAND chip
-    $ref: /schemas/types.yaml#/definitions/uint32
-    enum: [8, 16]
-    default: 8
-
-  nand-on-flash-bbt:
-    description:
-      With this property, the OS will search the device for a Bad
-      Block Table (BBT). If not found, it will create one, reserve
-      a few blocks at the end of the device to store it and update
-      it as the device ages. Otherwise, the out-of-band area of a
-      few pages of all the blocks will be scanned at boot time to
-      find Bad Block Markers (BBM). These markers will help to
-      build a volatile BBT in RAM.
-    $ref: /schemas/types.yaml#/definitions/flag
-
-  nand-ecc-maximize:
-    description:
-      Whether or not the ECC strength should be maximized. The
-      maximum ECC strength is both controller and chip
-      dependent. The ECC engine has to select the ECC config
-      providing the best strength and taking the OOB area size
-      constraint into account. This is particularly useful when
-      only the in-band area is used by the upper layers, and you
-      want to make your NAND as reliable as possible.
-    $ref: /schemas/types.yaml#/definitions/flag
-
-  nand-is-boot-medium:
-    description:
-      Whether or not the NAND chip is a boot medium. Drivers might
-      use this information to select ECC algorithms supported by
-      the boot ROM or similar restrictions.
-    $ref: /schemas/types.yaml#/definitions/flag
-
-  nand-rb:
-    description:
-      Contains the native Ready/Busy IDs.
-    $ref: /schemas/types.yaml#/definitions/uint32-array
-
-  rb-gpios:
-    description:
-      Contains one or more GPIO descriptor (the numper of descriptor
-      depends on the number of R/B pins exposed by the flash) for the
-      Ready/Busy pins. Active state refers to the NAND ready state and
-      should be set to GPIOD_ACTIVE_HIGH unless the signal is inverted.
-
-  wp-gpios:
-    description:
-      Contains one GPIO descriptor for the Write Protect pin.
-      Active state refers to the NAND Write Protect state and should be
-      set to GPIOD_ACTIVE_LOW unless the signal is inverted.
-    maxItems: 1

 required:
   - reg
+98
Documentation/devicetree/bindings/mtd/raw-nand-property.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/mtd/raw-nand-property.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Raw NAND Chip Common Properties
+
+maintainers:
+  - Miquel Raynal <miquel.raynal@bootlin.com>
+
+description: |
+  The ECC strength and ECC step size properties define the user
+  desires in terms of correction capability of a controller. Together,
+  they request the ECC engine to correct {strength} bit errors per
+  {size} bytes for a particular raw NAND chip.
+
+  The interpretation of these parameters is implementation-defined, so
+  not all implementations must support all possible
+  combinations. However, implementations are encouraged to further
+  specify the value(s) they support.
+
+properties:
+  nand-ecc-placement:
+    description:
+      Location of the ECC bytes. This location is unknown by default
+      but can be explicitly set to "oob", if all ECC bytes are
+      known to be stored in the OOB area, or "interleaved" if ECC
+      bytes will be interleaved with regular data in the main area.
+    $ref: /schemas/types.yaml#/definitions/string
+    enum: [ oob, interleaved ]
+    deprecated: true
+
+  nand-ecc-mode:
+    description:
+      Legacy ECC configuration mixing the ECC engine choice and
+      configuration.
+    $ref: /schemas/types.yaml#/definitions/string
+    enum: [none, soft, soft_bch, hw, hw_syndrome, on-die]
+    deprecated: true
+
+  nand-bus-width:
+    description:
+      Bus width to the NAND chip
+    $ref: /schemas/types.yaml#/definitions/uint32
+    enum: [8, 16]
+    default: 8
+
+  nand-on-flash-bbt:
+    description:
+      With this property, the OS will search the device for a Bad
+      Block Table (BBT). If not found, it will create one, reserve
+      a few blocks at the end of the device to store it and update
+      it as the device ages. Otherwise, the out-of-band area of a
+      few pages of all the blocks will be scanned at boot time to
+      find Bad Block Markers (BBM). These markers will help to
+      build a volatile BBT in RAM.
+    $ref: /schemas/types.yaml#/definitions/flag
+
+  nand-ecc-maximize:
+    description:
+      Whether or not the ECC strength should be maximized. The
+      maximum ECC strength is both controller and chip
+      dependent. The ECC engine has to select the ECC config
+      providing the best strength and taking the OOB area size
+      constraint into account. This is particularly useful when
+      only the in-band area is used by the upper layers, and you
+      want to make your NAND as reliable as possible.
+    $ref: /schemas/types.yaml#/definitions/flag
+
+  nand-is-boot-medium:
+    description:
+      Whether or not the NAND chip is a boot medium. Drivers might
+      use this information to select ECC algorithms supported by
+      the boot ROM or similar restrictions.
+    $ref: /schemas/types.yaml#/definitions/flag
+
+  nand-rb:
+    description:
+      Contains the native Ready/Busy IDs.
+    $ref: /schemas/types.yaml#/definitions/uint32-array
+
+  rb-gpios:
+    description:
+      Contains one or more GPIO descriptors (the number of descriptors
+      depends on the number of R/B pins exposed by the flash) for the
+      Ready/Busy pins. Active state refers to the NAND ready state and
+      should be set to GPIOD_ACTIVE_HIGH unless the signal is inverted.
+
+  wp-gpios:
+    description:
+      Contains one GPIO descriptor for the Write Protect pin.
+      Active state refers to the NAND Write Protect state and should be
+      set to GPIOD_ACTIVE_LOW unless the signal is inverted.
+    maxItems: 1
+
+# This is a generic file other bindings inherit from and extend
+additionalProperties: true
+9
drivers/mtd/Kconfig
···
 	  the parent of the partition device be the master device, rather than
 	  what lies behind the master.

+config MTD_VIRT_CONCAT
+	bool "Virtual concatenated MTD devices"
+	depends on MTD_PARTITIONED_MASTER
+	help
+	  This driver enables the creation of a virtual MTD device by
+	  concatenating multiple physical MTD devices into a single
+	  entity. This allows for the creation of partitions larger than
+	  the individual physical chips, extending across chip boundaries.
+
 source "drivers/mtd/chips/Kconfig"

 source "drivers/mtd/maps/Kconfig"
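The Kconfig help text above can be made concrete with a sketch of the core idea behind concatenation: a linear offset into the virtual device is resolved to the physical chip that owns it, plus a chip-local offset. The struct and function names below are hypothetical illustrations, not the driver's API:

```c
#include <stdint.h>

/* Minimal model of an MTD concatenation: the virtual device is the
 * subdevices laid end to end, in order. */
struct subdev {
	uint64_t size;	/* size of this physical MTD device */
};

/* Map a virtual offset to a (subdevice index, local offset) pair by
 * walking the subdevice sizes until the offset lands in the owning
 * chip. Returns the subdevice index, or -1 if ofs is out of range. */
static int concat_map(const struct subdev *devs, int ndev,
		      uint64_t ofs, uint64_t *local_ofs)
{
	for (int i = 0; i < ndev; i++) {
		if (ofs < devs[i].size) {
			*local_ofs = ofs;
			return i;
		}
		ofs -= devs[i].size;
	}
	return -1;	/* past the end of the virtual device */
}
```

A read or write that crosses a chip boundary is then simply split into per-subdevice operations, which is what lets partitions extend across chips.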
+1
drivers/mtd/Makefile
···
 # Core functionality.
 obj-$(CONFIG_MTD)		+= mtd.o
 mtd-y				:= mtdcore.o mtdsuper.o mtdconcat.o mtdpart.o mtdchar.o
+mtd-$(CONFIG_MTD_VIRT_CONCAT)	+= mtd_virt_concat.o

 obj-y		+= parsers/
+1 -2
drivers/mtd/devices/docg3.c
···
 static void docg3_release(struct platform_device *pdev)
 {
 	struct docg3_cascade *cascade = platform_get_drvdata(pdev);
-	struct docg3 *docg3 = cascade->floors[0]->priv;
 	int floor;

 	doc_unregister_sysfs(pdev, cascade);
···
 		if (cascade->floors[floor])
 			doc_release_device(cascade->floors[floor]);

-	bch_free(docg3->cascade->bch);
+	bch_free(cascade->bch);
 }

 #ifdef CONFIG_OF
-11
drivers/mtd/maps/Kconfig
···
 	  physically into the CPU's memory. The mapping description here is
 	  taken from OF device tree.

-config MTD_PHYSMAP_BT1_ROM
-	bool "Baikal-T1 Boot ROMs OF-based physical memory map handling"
-	depends on MTD_PHYSMAP_OF
-	depends on MIPS_BAIKAL_T1 || COMPILE_TEST
-	select MTD_COMPLEX_MAPPINGS
-	select MULTIPLEXER
-	select MUX_MMIO
-	help
-	  This provides some extra DT physmap parsing for the Baikal-T1
-	  platforms, some detection and setting up ROMs-specific accessors.
-
 config MTD_PHYSMAP_VERSATILE
 	bool "ARM Versatile OF-based physical memory map handling"
 	depends on MTD_PHYSMAP_OF
-1
drivers/mtd/maps/Makefile
···
 obj-$(CONFIG_MTD_PXA2XX)	+= pxa2xx-flash.o
 obj-$(CONFIG_MTD_PHYSMAP)	+= physmap.o
 physmap-y			:= physmap-core.o
-physmap-$(CONFIG_MTD_PHYSMAP_BT1_ROM)	+= physmap-bt1-rom.o
 physmap-$(CONFIG_MTD_PHYSMAP_VERSATILE)	+= physmap-versatile.o
 physmap-$(CONFIG_MTD_PHYSMAP_GEMINI)	+= physmap-gemini.o
 physmap-$(CONFIG_MTD_PHYSMAP_IXP4XX)	+= physmap-ixp4xx.o
-125
drivers/mtd/maps/physmap-bt1-rom.c
···
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (C) 2020 BAIKAL ELECTRONICS, JSC
- *
- * Authors:
- *   Serge Semin <Sergey.Semin@baikalelectronics.ru>
- *
- * Baikal-T1 Physically Mapped Internal ROM driver
- */
-#include <linux/bits.h>
-#include <linux/device.h>
-#include <linux/kernel.h>
-#include <linux/mtd/map.h>
-#include <linux/mtd/xip.h>
-#include <linux/mux/consumer.h>
-#include <linux/of.h>
-#include <linux/platform_device.h>
-#include <linux/string.h>
-#include <linux/types.h>
-
-#include "physmap-bt1-rom.h"
-
-/*
- * Baikal-T1 SoC ROMs are only accessible by the dword-aligned instructions.
- * We have to take this into account when implementing the data read-methods.
- * Note there is no need in bothering with endianness, since both Baikal-T1
- * CPU and MMIO are LE.
- */
-static map_word __xipram bt1_rom_map_read(struct map_info *map,
-					  unsigned long ofs)
-{
-	void __iomem *src = map->virt + ofs;
-	unsigned int shift;
-	map_word ret;
-	u32 data;
-
-	/* Read data within offset dword. */
-	shift = (uintptr_t)src & 0x3;
-	data = readl_relaxed(src - shift);
-	if (!shift) {
-		ret.x[0] = data;
-		return ret;
-	}
-	ret.x[0] = data >> (shift * BITS_PER_BYTE);
-
-	/* Read data from the next dword. */
-	shift = 4 - shift;
-	if (ofs + shift >= map->size)
-		return ret;
-
-	data = readl_relaxed(src + shift);
-	ret.x[0] |= data << (shift * BITS_PER_BYTE);
-
-	return ret;
-}
-
-static void __xipram bt1_rom_map_copy_from(struct map_info *map,
-					   void *to, unsigned long from,
-					   ssize_t len)
-{
-	void __iomem *src = map->virt + from;
-	unsigned int shift, chunk;
-	u32 data;
-
-	if (len <= 0 || from >= map->size)
-		return;
-
-	/* Make sure we don't go over the map limit. */
-	len = min_t(ssize_t, map->size - from, len);
-
-	/*
-	 * Since requested data size can be pretty big we have to implement
-	 * the copy procedure as optimal as possible. That's why it's split
-	 * up into the next three stages: unaligned head, aligned body,
-	 * unaligned tail.
-	 */
-	shift = (uintptr_t)src & 0x3;
-	if (shift) {
-		chunk = min_t(ssize_t, 4 - shift, len);
-		data = readl_relaxed(src - shift);
-		memcpy(to, (char *)&data + shift, chunk);
-		src += chunk;
-		to += chunk;
-		len -= chunk;
-	}
-
-	while (len >= 4) {
-		data = readl_relaxed(src);
-		memcpy(to, &data, 4);
-		src += 4;
-		to += 4;
-		len -= 4;
-	}
-
-	if (len) {
-		data = readl_relaxed(src);
-		memcpy(to, &data, len);
-	}
-}
-
-int of_flash_probe_bt1_rom(struct platform_device *pdev,
-			   struct device_node *np,
-			   struct map_info *map)
-{
-	struct device *dev = &pdev->dev;
-
-	/* It's supposed to be read-only MTD. */
-	if (!of_device_is_compatible(np, "mtd-rom")) {
-		dev_info(dev, "No mtd-rom compatible string\n");
-		return 0;
-	}
-
-	/* Multiplatform guard. */
-	if (!of_device_is_compatible(np, "baikal,bt1-int-rom"))
-		return 0;
-
-	/* Sanity check the device parameters retrieved from DTB. */
-	if (map->bankwidth != 4)
-		dev_warn(dev, "Bank width is supposed to be 32 bits wide\n");
-
-	map->read = bt1_rom_map_read;
-	map->copy_from = bt1_rom_map_copy_from;
-
-	return 0;
-}
-17
drivers/mtd/maps/physmap-bt1-rom.h
···
-/* SPDX-License-Identifier: GPL-2.0-only */
-#include <linux/mtd/map.h>
-#include <linux/of.h>
-
-#ifdef CONFIG_MTD_PHYSMAP_BT1_ROM
-int of_flash_probe_bt1_rom(struct platform_device *pdev,
-			   struct device_node *np,
-			   struct map_info *map);
-#else
-static inline
-int of_flash_probe_bt1_rom(struct platform_device *pdev,
-			   struct device_node *np,
-			   struct map_info *map)
-{
-	return 0;
-}
-#endif
-5
drivers/mtd/maps/physmap-core.c
···
 #include <linux/pm_runtime.h>
 #include <linux/gpio/consumer.h>

-#include "physmap-bt1-rom.h"
 #include "physmap-gemini.h"
 #include "physmap-ixp4xx.h"
 #include "physmap-versatile.h"
···
 		info->maps[i].swap = swap;
 		info->maps[i].bankwidth = bankwidth;
 		info->maps[i].device_node = dp;
-
-		err = of_flash_probe_bt1_rom(dev, dp, &info->maps[i]);
-		if (err)
-			return err;

 		err = of_flash_probe_gemini(dev, dp, &info->maps[i]);
 		if (err)
+1 -1
drivers/mtd/maps/physmap-gemini.c
···
 		dev_err(dev, "no enabled pin control state\n");

 	gf->disabled_state = pinctrl_lookup_state(gf->p, "disabled");
-	if (IS_ERR(gf->enabled_state)) {
+	if (IS_ERR(gf->disabled_state)) {
 		dev_err(dev, "no disabled pin control state\n");
 	} else {
 		ret = pinctrl_select_state(gf->p, gf->disabled_state);
+350
drivers/mtd/mtd_virt_concat.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * Virtual concat MTD device driver 4 + * 5 + * Copyright (C) 2018 Bernhard Frauendienst 6 + * Author: Bernhard Frauendienst <kernel@nospam.obeliks.de> 7 + */ 8 + 9 + #include <linux/device.h> 10 + #include <linux/mtd/mtd.h> 11 + #include "mtdcore.h" 12 + #include <linux/mtd/partitions.h> 13 + #include <linux/of.h> 14 + #include <linux/of_platform.h> 15 + #include <linux/slab.h> 16 + #include <linux/mtd/concat.h> 17 + 18 + #define CONCAT_PROP "part-concat-next" 19 + #define CONCAT_POSTFIX "concat" 20 + #define MIN_DEV_PER_CONCAT 1 21 + 22 + static LIST_HEAD(concat_node_list); 23 + 24 + /** 25 + * struct mtd_virt_concat_node - components of a concatenation 26 + * @head: List handle 27 + * @count: Number of nodes 28 + * @nodes: Pointer to the nodes (partitions) to concatenate 29 + * @concat: Concatenation container 30 + */ 31 + struct mtd_virt_concat_node { 32 + struct list_head head; 33 + unsigned int count; 34 + struct mtd_concat *concat; 35 + struct device_node *nodes[] __counted_by(count); 36 + }; 37 + 38 + /** 39 + * mtd_is_part_concat - Check if the device is already part 40 + * of a concatenated device 41 + * @dev: pointer to 'device_node' 42 + * 43 + * Return: true if the device is already part of a concatenation, 44 + * false otherwise. 
45 + */ 46 + static bool mtd_is_part_concat(struct device_node *dev) 47 + { 48 + struct mtd_virt_concat_node *item; 49 + int idx; 50 + 51 + list_for_each_entry(item, &concat_node_list, head) { 52 + for (idx = 0; idx < item->count; idx++) { 53 + if (item->nodes[idx] == dev) 54 + return true; 55 + } 56 + } 57 + return false; 58 + } 59 + 60 + static void mtd_virt_concat_put_mtd_devices(struct mtd_concat *concat) 61 + { 62 + int i; 63 + 64 + for (i = 0; i < concat->num_subdev; i++) 65 + put_mtd_device(concat->subdev[i]); 66 + } 67 + 68 + void mtd_virt_concat_destroy_joins(void) 69 + { 70 + struct mtd_virt_concat_node *item, *tmp; 71 + struct mtd_info *mtd; 72 + 73 + list_for_each_entry_safe(item, tmp, &concat_node_list, head) { 74 + mtd = &item->concat->mtd; 75 + if (item->concat) { 76 + mtd_device_unregister(mtd); 77 + kfree(mtd->name); 78 + mtd_concat_destroy(mtd); 79 + mtd_virt_concat_put_mtd_devices(item->concat); 80 + } 81 + } 82 + } 83 + 84 + /** 85 + * mtd_virt_concat_destroy - Destroy the concat that includes the mtd object 86 + * @mtd: pointer to 'mtd_info' 87 + * 88 + * Return: 0 on success, -error otherwise. 89 + */ 90 + int mtd_virt_concat_destroy(struct mtd_info *mtd) 91 + { 92 + struct mtd_info *child, *master = mtd_get_master(mtd); 93 + struct mtd_virt_concat_node *item, *tmp; 94 + struct mtd_concat *concat; 95 + int idx, ret = 0; 96 + bool is_mtd_found; 97 + 98 + list_for_each_entry_safe(item, tmp, &concat_node_list, head) { 99 + is_mtd_found = false; 100 + 101 + /* Find the concat item that hold the mtd device */ 102 + for (idx = 0; idx < item->count; idx++) { 103 + if (item->nodes[idx] == mtd->dev.of_node) { 104 + is_mtd_found = true; 105 + break; 106 + } 107 + } 108 + if (!is_mtd_found) 109 + continue; 110 + concat = item->concat; 111 + 112 + /* 113 + * Since this concatenated device is being removed, retrieve 114 + * all MTD devices that are part of it and register them 115 + * individually. 
+	 */
+		for (idx = 0; idx < concat->num_subdev; idx++) {
+			child = concat->subdev[idx];
+			if (child->dev.of_node != mtd->dev.of_node) {
+				ret = add_mtd_device(child);
+				if (ret)
+					goto out;
+			}
+		}
+		/* Destroy the concat */
+		if (concat->mtd.name) {
+			del_mtd_device(&concat->mtd);
+			kfree(concat->mtd.name);
+			mtd_concat_destroy(&concat->mtd);
+			mtd_virt_concat_put_mtd_devices(item->concat);
+		}
+
+		for (idx = 0; idx < item->count; idx++)
+			of_node_put(item->nodes[idx]);
+
+		kfree(item);
+	}
+	return 0;
+out:
+	mutex_lock(&master->master.partitions_lock);
+	list_del(&child->part.node);
+	mutex_unlock(&master->master.partitions_lock);
+	kfree(mtd->name);
+	kfree(mtd);
+
+	return ret;
+}
+
+/**
+ * mtd_virt_concat_create_item - Create a concat item
+ * @parts: pointer to 'device_node'
+ * @count: number of mtd devices that make up
+ *	the concatenated device.
+ *
+ * Return: 0 on success, -error otherwise.
+ */
+static int mtd_virt_concat_create_item(struct device_node *parts,
+				       unsigned int count)
+{
+	struct mtd_virt_concat_node *item;
+	struct mtd_concat *concat;
+	int i;
+
+	for (i = 0; i < (count - 1); i++) {
+		if (mtd_is_part_concat(of_parse_phandle(parts, CONCAT_PROP, i)))
+			return 0;
+	}
+
+	item = kzalloc_flex(*item, nodes, count, GFP_KERNEL);
+	if (!item)
+		return -ENOMEM;
+
+	item->count = count;
+
+	/*
+	 * The partition in which the "part-concat-next" property
+	 * is defined is the first device in the list of concat
+	 * devices.
+	 */
+	item->nodes[0] = parts;
+
+	for (i = 1; i < count; i++)
+		item->nodes[i] = of_parse_phandle(parts, CONCAT_PROP, (i - 1));
+
+	concat = kzalloc_flex(*concat, subdev, count, GFP_KERNEL);
+	if (!concat) {
+		kfree(item);
+		return -ENOMEM;
+	}
+
+	item->concat = concat;
+
+	list_add_tail(&item->head, &concat_node_list);
+
+	return 0;
+}
+
+void mtd_virt_concat_destroy_items(void)
+{
+	struct mtd_virt_concat_node *item, *temp;
+	int i;
+
+	list_for_each_entry_safe(item, temp, &concat_node_list, head) {
+		for (i = 0; i < item->count; i++)
+			of_node_put(item->nodes[i]);
+
+		kfree(item);
+	}
+}
+
+/**
+ * mtd_virt_concat_add - Add a mtd device to the concat list
+ * @mtd: pointer to 'mtd_info'
+ *
+ * Return: true on success, false otherwise.
+ */
+bool mtd_virt_concat_add(struct mtd_info *mtd)
+{
+	struct mtd_virt_concat_node *item;
+	struct mtd_concat *concat;
+	int idx;
+
+	list_for_each_entry(item, &concat_node_list, head) {
+		concat = item->concat;
+		for (idx = 0; idx < item->count; idx++) {
+			if (item->nodes[idx] == mtd->dev.of_node) {
+				concat->subdev[concat->num_subdev++] = mtd;
+				return true;
+			}
+		}
+	}
+	return false;
+}
+
+/**
+ * mtd_virt_concat_node_create - List all the concatenations found in DT
+ *
+ * Return: 0 on success, -error otherwise.
+ */
+int mtd_virt_concat_node_create(void)
+{
+	struct device_node *parts = NULL;
+	int ret = 0, count = 0;
+
+	/* List all the concatenations found in DT */
+	do {
+		parts = of_find_node_with_property(parts, CONCAT_PROP);
+		if (!of_device_is_available(parts))
+			continue;
+
+		if (mtd_is_part_concat(parts))
+			continue;
+
+		count = of_count_phandle_with_args(parts, CONCAT_PROP, NULL);
+		if (count < MIN_DEV_PER_CONCAT)
+			continue;
+
+		/*
+		 * The partition in which the "part-concat-next" property is
+		 * defined is also part of the concat device, so increment
+		 * count by 1.
+		 */
+		count++;
+
+		ret = mtd_virt_concat_create_item(parts, count);
+		if (ret) {
+			of_node_put(parts);
+			goto destroy_items;
+		}
+	} while (parts);
+
+	return ret;
+
+destroy_items:
+	mtd_virt_concat_destroy_items();
+
+	return ret;
+}
+
+/**
+ * mtd_virt_concat_create_join - Create and register the concatenated
+ *	MTD device.
+ *
+ * Return: 0 on success, -error otherwise.
+ */
+int mtd_virt_concat_create_join(void)
+{
+	struct mtd_virt_concat_node *item;
+	struct mtd_concat *concat;
+	struct mtd_info *mtd;
+	ssize_t name_sz;
+	int ret, idx;
+	char *name;
+
+	list_for_each_entry(item, &concat_node_list, head) {
+		concat = item->concat;
+		/*
+		 * If item->count != concat->num_subdev, the MTD information
+		 * for some of the devices included in the concatenation is
+		 * not available yet, so the concat MTD device can't be
+		 * created; skip to the next concat device.
+		 */
+		if (item->count != concat->num_subdev) {
+			continue;
+		} else {
+			/* Calculate the length of the name of the virtual device */
+			for (idx = 0, name_sz = 0; idx < concat->num_subdev; idx++)
+				name_sz += (strlen(concat->subdev[idx]->name) + 1);
+			name_sz += strlen(CONCAT_POSTFIX);
+			name = kmalloc(name_sz + 1, GFP_KERNEL);
+			if (!name) {
+				mtd_virt_concat_put_mtd_devices(concat);
+				return -ENOMEM;
+			}
+
+			ret = 0;
+			for (idx = 0; idx < concat->num_subdev; idx++) {
+				ret += sprintf((name + ret), "%s-",
+					       concat->subdev[idx]->name);
+			}
+			sprintf((name + ret), CONCAT_POSTFIX);
+
+			if (concat->mtd.name) {
+				ret = memcmp(concat->mtd.name, name, name_sz);
+				if (ret == 0)
+					continue;
+			}
+			mtd = mtd_concat_create(concat->subdev, concat->num_subdev, name);
+			if (!mtd) {
+				kfree(name);
+				return -ENXIO;
+			}
+			concat->mtd = *mtd;
+			/* Arbitrarily set the first device as parent */
+			concat->mtd.dev.parent = concat->subdev[0]->dev.parent;
+			concat->mtd.dev = concat->subdev[0]->dev;
+
+			/* Add the mtd device */
+			ret = add_mtd_device(&concat->mtd);
+			if (ret)
+				goto destroy_concat;
+		}
+	}
+
+	return 0;
+
+destroy_concat:
+	mtd_concat_destroy(mtd);
+
+	return ret;
+}
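The name-building loop in mtd_virt_concat_create_join() can be sketched in plain userspace C. This is a minimal illustration, not the kernel code: `CONCAT_POSTFIX` is assumed here to be the literal string "concat", and `build_concat_name()` is a hypothetical stand-in for the inline loop above.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CONCAT_POSTFIX "concat" /* assumed value, for illustration only */

/* Build "<dev0>-<dev1>-...-concat", as the join loop above does:
 * first size the buffer (each name plus a '-' separator, then the
 * postfix), then emit the pieces with sprintf(). */
static char *build_concat_name(const char *const *names, int count)
{
	size_t sz = strlen(CONCAT_POSTFIX);
	int i, off = 0;
	char *name;

	for (i = 0; i < count; i++)
		sz += strlen(names[i]) + 1; /* +1 for the '-' separator */

	name = malloc(sz + 1);
	if (!name)
		return NULL;

	for (i = 0; i < count; i++)
		off += sprintf(name + off, "%s-", names[i]);
	sprintf(name + off, "%s", CONCAT_POSTFIX);

	return name;
}
```

Two subdevices named "nand0" and "nand1" would yield the virtual device name "nand0-nand1-concat", which is also what the duplicate-name memcmp() check above compares against.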
+1 -16
drivers/mtd/mtdconcat.c
···
 #include <asm/div64.h>

 /*
- * Our storage structure:
- * Subdev points to an array of pointers to struct mtd_info objects
- * which is allocated along with this structure
- *
- */
-struct mtd_concat {
-	struct mtd_info mtd;
-	int num_subdev;
-	struct mtd_info **subdev;
-};
-
-/*
  * how to calculate the size required for the above structure,
  * including the pointer array subdev points to:
  */
···
 				   const char *name)
 {				/* name for the new device */
 	int i;
-	size_t size;
 	struct mtd_concat *concat;
 	struct mtd_info *subdev_master = NULL;
 	uint32_t max_erasesize, curr_erasesize;
···
 	printk(KERN_NOTICE "into device \"%s\"\n", name);

 	/* allocate the device structure */
-	size = SIZEOF_STRUCT_MTD_CONCAT(num_devs);
-	concat = kzalloc(size, GFP_KERNEL);
+	concat = kzalloc_flex(*concat, subdev, num_devs, GFP_KERNEL);
 	if (!concat) {
 		printk
 		    ("memory allocation error while creating concatenated device \"%s\"\n",
 		     name);
 		return NULL;
 	}
-	concat->subdev = (struct mtd_info **) (concat + 1);

 	/*
 	 * Set up the new "super" device's MTD object structure, check for
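The kzalloc_flex() conversion above replaces a hand-computed allocation size plus a pointer fix-up with a single allocation of a struct that ends in a C99 flexible array member. A userspace sketch of the same pattern (the struct and helper names here are illustrative, and `calloc` stands in for the kernel's zeroing allocator):

```c
#include <assert.h>
#include <stdlib.h>

struct mtd_info; /* opaque, only pointers are stored */

/* One allocation covers both the header and the trailing pointer
 * array, mirroring kzalloc_flex(*concat, subdev, num_devs, ...). */
struct concat {
	int num_subdev;
	struct mtd_info *subdev[]; /* flexible array member */
};

static struct concat *concat_alloc(int num_devs)
{
	/* sizeof(*c) excludes the flexible array, so the array length
	 * is added explicitly; calloc() zeroes everything. */
	struct concat *c = calloc(1, sizeof(*c) +
				     num_devs * sizeof(c->subdev[0]));

	if (c)
		c->num_subdev = num_devs;
	return c;
}
```

Compared with the removed `concat->subdev = (struct mtd_info **)(concat + 1)` idiom, the flexible array member lets the compiler do the pointer arithmetic and keeps the allocation size overflow-checked by the kernel helper.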
+21
drivers/mtd/mtdcore.c
···
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/partitions.h>
+#include <linux/mtd/concat.h>

 #include "mtdcore.h"
···
 		goto out;
 	}

+	if (IS_REACHABLE(CONFIG_MTD_VIRT_CONCAT)) {
+		ret = mtd_virt_concat_node_create();
+		if (ret < 0)
+			goto out;
+	}
+
 	/* Prefer parsed partitions over driver-provided fallback */
 	ret = parse_mtd_partitions(mtd, types, parser_data);
 	if (ret == -EPROBE_DEFER)
···
 	if (ret)
 		goto out;

+	if (IS_REACHABLE(CONFIG_MTD_VIRT_CONCAT)) {
+		ret = mtd_virt_concat_create_join();
+		if (ret < 0)
+			goto out;
+	}
 	/*
 	 * FIXME: some drivers unfortunately call this function more than once.
 	 * So we have to check if we've already assigned the reboot notifier.
···
 	nvmem_unregister(master->otp_user_nvmem);
 	nvmem_unregister(master->otp_factory_nvmem);

+	if (IS_REACHABLE(CONFIG_MTD_VIRT_CONCAT)) {
+		err = mtd_virt_concat_destroy(master);
+		if (err)
+			return err;
+	}
 	err = del_mtd_partitions(master);
 	if (err)
 		return err;
···
 static void __exit cleanup_mtd(void)
 {
+	if (IS_REACHABLE(CONFIG_MTD_VIRT_CONCAT)) {
+		mtd_virt_concat_destroy_joins();
+		mtd_virt_concat_destroy_items();
+	}
 	debugfs_remove_recursive(dfs_dir_mtd);
 	cleanup_mtdchar();
 	if (proc_mtd)
+6
drivers/mtd/mtdpart.c
···
 #include <linux/err.h>
 #include <linux/of.h>
 #include <linux/of_platform.h>
+#include <linux/mtd/concat.h>

 #include "mtdcore.h"
···
 		if (IS_ERR(child)) {
 			ret = PTR_ERR(child);
 			goto err_del_partitions;
+		}
+
+		if (IS_REACHABLE(CONFIG_MTD_VIRT_CONCAT)) {
+			if (mtd_virt_concat_add(child))
+				continue;
 		}

 		mutex_lock(&master->master.partitions_lock);
+10 -8
drivers/mtd/nand/ecc-realtek.c
···
  * - BCH12 : Generate 20 ECC bytes from 512 data bytes plus 6 free bytes
  *
  * It can run for arbitrary NAND flash chips with different block and OOB sizes. Currently there
- * are only two known devices in the wild that have NAND flash and make use of this ECC engine
- * (Linksys LGS328C & LGS352C). To keep compatibility with vendor firmware, new modes can only
- * be added when new data layouts have been analyzed. For now allow BCH6 on flash with 2048 byte
- * blocks and 64 bytes oob.
+ * are a few known devices in the wild that make use of this ECC engine
+ * (Linksys LGS328C, LGS352C & Netlink HG323DAC). To keep compatibility with vendor firmware,
+ * new modes can only be added when new data layouts have been analyzed. For now allow BCH6 on
+ * flash with 2048 byte blocks and at least 64 bytes oob. Some vendors make use of
+ * 128 bytes OOB NAND chips (e.g. Macronix MX35LF1G24AD) but only use BCH6 and thus the first
+ * 64 bytes of the OOB area. In this case the engine leaves any extra bytes unused.
  *
  * This driver aligns with kernel ECC naming conventions. Neverthless a short notice on the
  * Realtek naming conventions for the different structures in the OOB area.
···
  */

 #define RTL_ECC_ALLOWED_PAGE_SIZE	2048
-#define RTL_ECC_ALLOWED_OOB_SIZE	64
+#define RTL_ECC_ALLOWED_MIN_OOB_SIZE	64
 #define RTL_ECC_ALLOWED_STRENGTH	6

 #define RTL_ECC_BLOCK_SIZE		512
···
 	struct mtd_info *mtd = nanddev_to_mtd(nand);
 	struct device *dev = nand->ecc.engine->dev;

-	if (mtd->oobsize != RTL_ECC_ALLOWED_OOB_SIZE ||
+	if (mtd->oobsize < RTL_ECC_ALLOWED_MIN_OOB_SIZE ||
 	    mtd->writesize != RTL_ECC_ALLOWED_PAGE_SIZE) {
-		dev_err(dev, "only flash geometry data=%d, oob=%d supported\n",
-			RTL_ECC_ALLOWED_PAGE_SIZE, RTL_ECC_ALLOWED_OOB_SIZE);
+		dev_err(dev, "only flash geometry data=%d, oob>=%d supported\n",
+			RTL_ECC_ALLOWED_PAGE_SIZE, RTL_ECC_ALLOWED_MIN_OOB_SIZE);
 		return -EINVAL;
 	}
+5 -2
drivers/mtd/nand/raw/cafe_nand.c
···
 MODULE_DEVICE_TABLE(pci, cafe_nand_tbl);

-static int cafe_nand_resume(struct pci_dev *pdev)
+static int cafe_nand_resume(struct device *dev)
 {
 	uint32_t ctrl;
+	struct pci_dev *pdev = to_pci_dev(dev);
 	struct mtd_info *mtd = pci_get_drvdata(pdev);
 	struct nand_chip *chip = mtd_to_nand(mtd);
 	struct cafe_priv *cafe = nand_get_controller_data(chip);
···
 	return 0;
 }

+static DEFINE_SIMPLE_DEV_PM_OPS(cafe_nand_ops, NULL, cafe_nand_resume);
+
 static struct pci_driver cafe_nand_pci_driver = {
 	.name = "CAFÉ NAND",
 	.id_table = cafe_nand_tbl,
 	.probe = cafe_nand_probe,
 	.remove = cafe_nand_remove,
-	.resume = cafe_nand_resume,
+	.driver.pm = &cafe_nand_ops,
 };

 module_pci_driver(cafe_nand_pci_driver);
+9 -1
drivers/mtd/nand/raw/fsl_ifc_nand.c
···
  * Author: Dipen Dudhat <Dipen.Dudhat@freescale.com>
  */

+#include <linux/cleanup.h>
 #include <linux/module.h>
 #include <linux/platform_device.h>
 #include <linux/types.h>
···
 	/* Fill in fsl_ifc_mtd structure */
 	mtd->dev.parent = priv->dev;
-	nand_set_flash_node(chip, priv->dev->of_node);
+
+	struct device_node *np __free(device_node) =
+		of_get_next_child_with_prefix(priv->dev->of_node, NULL, "nand");
+
+	if (np)
+		nand_set_flash_node(chip, np);
+	else
+		nand_set_flash_node(chip, priv->dev->of_node);

 	/* fill in nand_chip structure */
 	/* set up function call table */
+10 -1
drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
···
  * Copyright (C) 2010-2015 Freescale Semiconductor, Inc.
  * Copyright (C) 2008 Embedded Alley Solutions, Inc.
  */
+#include <linux/cleanup.h>
 #include <linux/clk.h>
 #include <linux/delay.h>
 #include <linux/slab.h>
···
 	/* init the nand_chip{}, we don't support a 16-bit NAND Flash bus. */
 	nand_set_controller_data(chip, this);
-	nand_set_flash_node(chip, this->pdev->dev.of_node);
+
+	struct device_node *np __free(device_node) =
+		of_get_next_child_with_prefix(this->pdev->dev.of_node, NULL, "nand");
+
+	if (np)
+		nand_set_flash_node(chip, np);
+	else
+		nand_set_flash_node(chip, this->pdev->dev.of_node);
+
 	chip->legacy.block_markbad = gpmi_block_markbad;
 	chip->badblock_pattern = &gpmi_bbt_descr;
 	chip->options |= NAND_NO_SUBPAGE_WRITE;
+9 -1
drivers/mtd/nand/raw/mxc_nand.c
···
  * Copyright 2008 Sascha Hauer, kernel@pengutronix.de
  */

+#include <linux/cleanup.h>
 #include <linux/delay.h>
 #include <linux/slab.h>
 #include <linux/init.h>
···
 	this->legacy.chip_delay = 5;

 	nand_set_controller_data(this, host);
-	nand_set_flash_node(this, pdev->dev.of_node);
+
+	struct device_node *np __free(device_node) =
+		of_get_next_child_with_prefix(pdev->dev.of_node, NULL, "nand");
+
+	if (np)
+		nand_set_flash_node(this, np);
+	else
+		nand_set_flash_node(this, pdev->dev.of_node);

 	host->clk = devm_clk_get(&pdev->dev, NULL);
 	if (IS_ERR(host->clk))
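The `__free(device_node)` conversions in the fsl_ifc, gpmi and mxc hunks above rely on the kernel's scope-based cleanup helpers (linux/cleanup.h), which are built on the compiler's `cleanup` variable attribute: the annotated variable's release function runs automatically on every exit path of the scope. A hedged userspace sketch of the same mechanism (GCC/Clang only; `put_node()` is a hypothetical stand-in for of_node_put(), and a heap `int` stands in for a device_node):

```c
#include <assert.h>
#include <stdlib.h>

static int put_count; /* counts how many times cleanup ran */

/* Stand-in for of_node_put(): invoked automatically with a pointer
 * to the annotated variable when it goes out of scope. */
static void put_node(int **p)
{
	if (*p) {
		put_count++;
		free(*p);
	}
}

#define __free_node __attribute__((cleanup(put_node)))

static int lookup(int create)
{
	/* Reference is dropped on every return path, no explicit put. */
	__free_node int *np = create ? calloc(1, sizeof(int)) : NULL;

	return np ? 1 : 0;
}
```

This is why the hunks above can take the child-node reference and return through either branch of the `if (np)` without a matching of_node_put(): the cleanup handler fires when `np` leaves scope, and it is a no-op for the NULL case.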
+10 -9
drivers/mtd/nand/raw/nand_base.c
···
 #include <linux/mtd/partitions.h>
 #include <linux/of.h>
 #include <linux/gpio/consumer.h>
+#include <linux/cleanup.h>

 #include "internals.h"
···
 {
 	struct nand_chip *chip = mtd_to_nand(mtd);

-	mutex_lock(&chip->lock);
-	if (chip->suspended) {
-		if (chip->ops.resume)
-			chip->ops.resume(chip);
-		chip->suspended = 0;
-	} else {
-		pr_err("%s called for a chip which is not in suspended state\n",
-			__func__);
+	scoped_guard(mutex, &chip->lock) {
+		if (chip->suspended) {
+			if (chip->ops.resume)
+				chip->ops.resume(chip);
+			chip->suspended = 0;
+		} else {
+			pr_err("%s called for a chip which is not in suspended state\n",
+			       __func__);
+		}
 	}
-	mutex_unlock(&chip->lock);

 	wake_up_all(&chip->resume_wq);
 }
+294 -87
drivers/mtd/nand/raw/sunxi_nand.c
··· 209 209 210 210 /* 211 211 * On A10/A23, this is the size of the NDFC User Data Register, containing the 212 - * mandatory user data bytes following the ECC for each ECC step. 212 + * mandatory user data bytes preceding the ECC for each ECC step. 213 213 * Thus, for each ECC step, we need the ECC bytes + USER_DATA_SZ. 214 - * Those bits are currently unsused, and kept as default value 0xffffffff. 215 214 * 216 215 * On H6/H616, this size became configurable, from 0 bytes to 32, via the 217 216 * USER_DATA_LEN registers. ··· 248 249 * @timing_ctl: TIMING_CTL register value for this NAND chip 249 250 * @nsels: number of CS lines required by the NAND chip 250 251 * @sels: array of CS lines descriptions 252 + * @user_data_bytes: array of user data lengths for all ECC steps 251 253 */ 252 254 struct sunxi_nand_chip { 253 255 struct list_head node; ··· 257 257 unsigned long clk_rate; 258 258 u32 timing_cfg; 259 259 u32 timing_ctl; 260 + u8 *user_data_bytes; 260 261 int nsels; 261 262 struct sunxi_nand_chip_sel sels[] __counted_by(nsels); 262 263 }; ··· 273 272 * 274 273 * @has_mdma: Use mbus dma mode, otherwise general dma 275 274 * through MBUS on A23/A33 needs extra configuration. 276 - * @has_ecc_block_512: If the ECC can handle 512B or only 1024B chuncks 275 + * @has_ecc_block_512: If the ECC can handle 512B or only 1024B chunks 277 276 * @has_ecc_clk: If the controller needs an ECC clock. 278 277 * @has_mbus_clk: If the controller needs a mbus clock. 
278 + * @legacy_max_strength:If the maximize strength function was off by 2 bytes 279 + * NB: this should not be used in new controllers 279 280 * @reg_io_data: I/O data register 280 281 * @reg_ecc_err_cnt: ECC error counter register 281 282 * @reg_user_data: User data register ··· 295 292 * @nstrengths: Size of @ecc_strengths 296 293 * @max_ecc_steps: Maximum supported steps for ECC, this is also the 297 294 * number of user data registers 298 - * @user_data_len_tab: Table of lenghts supported by USER_DATA_LEN register 295 + * @user_data_len_tab: Table of lengths supported by USER_DATA_LEN register 299 296 * The table index is the value to set in NFC_USER_DATA_LEN 300 297 * registers, and the corresponding value is the number of 301 298 * bytes to write ··· 307 304 bool has_ecc_block_512; 308 305 bool has_ecc_clk; 309 306 bool has_mbus_clk; 307 + bool legacy_max_strength; 310 308 unsigned int reg_io_data; 311 309 unsigned int reg_ecc_err_cnt; 312 310 unsigned int reg_user_data; ··· 824 820 return buf[0] | (buf[1] << 8) | (buf[2] << 16) | (buf[3] << 24); 825 821 } 826 822 827 - static void sunxi_nfc_hw_ecc_get_prot_oob_bytes(struct nand_chip *nand, u8 *oob, 828 - int step, bool bbm, int page) 823 + static u8 sunxi_nfc_user_data_sz(struct sunxi_nand_chip *sunxi_nand, int step) 829 824 { 830 - struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller); 825 + if (!sunxi_nand->user_data_bytes) 826 + return USER_DATA_SZ; 831 827 832 - sunxi_nfc_user_data_to_buf(readl(nfc->regs + NFC_REG_USER_DATA(nfc, step)), oob); 828 + return sunxi_nand->user_data_bytes[step]; 829 + } 830 + 831 + static void sunxi_nfc_hw_ecc_get_prot_oob_bytes(struct nand_chip *nand, u8 *oob, 832 + int step, bool bbm, int page, 833 + unsigned int user_data_sz) 834 + { 835 + struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand); 836 + struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller); 837 + u32 user_data; 838 + 839 + if (!nfc->caps->reg_user_data_len) { 840 + /* 841 + * For A10, the user data for 
step n is in the nth 842 + * REG_USER_DATA 843 + */ 844 + user_data = readl(nfc->regs + NFC_REG_USER_DATA(nfc, step)); 845 + sunxi_nfc_user_data_to_buf(user_data, oob); 846 + } else { 847 + /* 848 + * For H6 NAND controller, the user data for all steps is 849 + * contained in 32 user data registers, but not at a specific 850 + * offset for each step, they are just concatenated. 851 + */ 852 + unsigned int user_data_off = 0; 853 + unsigned int reg_off; 854 + u8 *ptr = oob; 855 + unsigned int i; 856 + 857 + for (i = 0; i < step; i++) 858 + user_data_off += sunxi_nfc_user_data_sz(sunxi_nand, i); 859 + 860 + user_data_off /= 4; 861 + for (i = 0; i < user_data_sz / 4; i++, ptr += 4) { 862 + reg_off = NFC_REG_USER_DATA(nfc, user_data_off + i); 863 + user_data = readl(nfc->regs + reg_off); 864 + sunxi_nfc_user_data_to_buf(user_data, ptr); 865 + } 866 + } 833 867 834 868 /* De-randomize the Bad Block Marker. */ 835 869 if (bbm && (nand->options & NAND_NEED_SCRAMBLING)) ··· 926 884 bool bbm, int page) 927 885 { 928 886 struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller); 929 - u8 user_data[USER_DATA_SZ]; 887 + struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand); 888 + unsigned int user_data_sz = sunxi_nfc_user_data_sz(sunxi_nand, step); 889 + u8 *user_data = NULL; 930 890 931 891 /* Randomize the Bad Block Marker. 
*/ 932 892 if (bbm && (nand->options & NAND_NEED_SCRAMBLING)) { 933 - memcpy(user_data, oob, sizeof(user_data)); 893 + user_data = kmalloc(user_data_sz, GFP_KERNEL); 894 + memcpy(user_data, oob, user_data_sz); 934 895 sunxi_nfc_randomize_bbm(nand, page, user_data); 935 896 oob = user_data; 936 897 } 937 898 938 - writel(sunxi_nfc_buf_to_user_data(oob), 939 - nfc->regs + NFC_REG_USER_DATA(nfc, step)); 899 + if (!nfc->caps->reg_user_data_len) { 900 + /* 901 + * For A10, the user data for step n is in the nth 902 + * REG_USER_DATA 903 + */ 904 + writel(sunxi_nfc_buf_to_user_data(oob), 905 + nfc->regs + NFC_REG_USER_DATA(nfc, step)); 906 + } else { 907 + /* 908 + * For H6 NAND controller, the user data for all steps is 909 + * contained in 32 user data registers, but not at a specific 910 + * offset for each step, they are just concatenated. 911 + */ 912 + unsigned int user_data_off = 0; 913 + const u8 *ptr = oob; 914 + unsigned int i; 915 + 916 + for (i = 0; i < step; i++) 917 + user_data_off += sunxi_nfc_user_data_sz(sunxi_nand, i); 918 + 919 + user_data_off /= 4; 920 + for (i = 0; i < user_data_sz / 4; i++, ptr += 4) { 921 + writel(sunxi_nfc_buf_to_user_data(ptr), 922 + nfc->regs + NFC_REG_USER_DATA(nfc, user_data_off + i)); 923 + } 924 + } 925 + 926 + kfree(user_data); 940 927 } 941 928 942 929 static void sunxi_nfc_hw_ecc_update_stats(struct nand_chip *nand, ··· 986 915 bool *erased) 987 916 { 988 917 struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller); 918 + struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand); 919 + unsigned int user_data_sz = sunxi_nfc_user_data_sz(sunxi_nand, step); 989 920 struct nand_ecc_ctrl *ecc = &nand->ecc; 990 921 u32 tmp; 991 922 ··· 1010 937 memset(data, pattern, ecc->size); 1011 938 1012 939 if (oob) 1013 - memset(oob, pattern, ecc->bytes + USER_DATA_SZ); 940 + memset(oob, pattern, ecc->bytes + user_data_sz); 1014 941 1015 942 return 0; 1016 943 } ··· 1025 952 u8 *oob, int oob_off, 1026 953 int *cur_off, 1027 954 unsigned int 
*max_bitflips, 1028 - bool bbm, bool oob_required, int page) 955 + int step, bool oob_required, int page) 1029 956 { 1030 957 struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller); 958 + struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand); 959 + unsigned int user_data_sz = sunxi_nfc_user_data_sz(sunxi_nand, step); 1031 960 struct nand_ecc_ctrl *ecc = &nand->ecc; 1032 961 int raw_mode = 0; 1033 962 u32 pattern_found; 963 + bool bbm = !step; 1034 964 bool erased; 1035 965 int ret; 966 + /* From the controller point of view, we are at step 0 */ 967 + const int nfc_step = 0; 1036 968 1037 969 if (*cur_off != data_off) 1038 970 nand_change_read_column_op(nand, data_off, NULL, 0, false); ··· 1051 973 if (ret) 1052 974 return ret; 1053 975 1054 - sunxi_nfc_reset_user_data_len(nfc); 1055 - sunxi_nfc_set_user_data_len(nfc, USER_DATA_SZ, 0); 976 + sunxi_nfc_set_user_data_len(nfc, user_data_sz, nfc_step); 1056 977 sunxi_nfc_randomizer_config(nand, page, false); 1057 978 sunxi_nfc_randomizer_enable(nand); 1058 979 writel(NFC_DATA_TRANS | NFC_DATA_SWAP_METHOD | NFC_ECC_OP, ··· 1062 985 if (ret) 1063 986 return ret; 1064 987 1065 - *cur_off = oob_off + ecc->bytes + USER_DATA_SZ; 988 + *cur_off = oob_off + ecc->bytes + user_data_sz; 1066 989 1067 990 pattern_found = readl(nfc->regs + nfc->caps->reg_pat_found); 1068 991 pattern_found = field_get(NFC_ECC_PAT_FOUND_MSK(nfc), pattern_found); 1069 992 1070 - ret = sunxi_nfc_hw_ecc_correct(nand, data, oob_required ? oob : NULL, 0, 1071 - readl(nfc->regs + NFC_REG_ECC_ST), 1072 - pattern_found, 1073 - &erased); 993 + ret = sunxi_nfc_hw_ecc_correct(nand, data, oob_required ? 
oob : NULL, 994 + nfc_step, readl(nfc->regs + NFC_REG_ECC_ST), 995 + pattern_found, &erased); 1074 996 if (erased) 1075 997 return 1; 1076 998 ··· 1086 1010 ecc->size); 1087 1011 1088 1012 nand_change_read_column_op(nand, oob_off, oob, 1089 - ecc->bytes + USER_DATA_SZ, false); 1013 + ecc->bytes + user_data_sz, false); 1090 1014 1091 1015 ret = nand_check_erased_ecc_chunk(data, ecc->size, oob, 1092 - ecc->bytes + USER_DATA_SZ, 1016 + ecc->bytes + user_data_sz, 1093 1017 NULL, 0, ecc->strength); 1094 1018 if (ret >= 0) 1095 1019 raw_mode = 1; ··· 1099 1023 if (oob_required) { 1100 1024 nand_change_read_column_op(nand, oob_off, NULL, 0, 1101 1025 false); 1102 - sunxi_nfc_randomizer_read_buf(nand, oob, ecc->bytes + USER_DATA_SZ, 1026 + sunxi_nfc_randomizer_read_buf(nand, oob, ecc->bytes + user_data_sz, 1103 1027 true, page); 1104 1028 1105 - sunxi_nfc_hw_ecc_get_prot_oob_bytes(nand, oob, 0, 1106 - bbm, page); 1029 + sunxi_nfc_hw_ecc_get_prot_oob_bytes(nand, oob, nfc_step, 1030 + bbm, page, user_data_sz); 1107 1031 } 1108 1032 } 1109 1033 ··· 1112 1036 return raw_mode; 1113 1037 } 1114 1038 1039 + /* 1040 + * Returns the offset of the OOB for each step. 1041 + * (it includes the user data before the ECC data.) 1042 + */ 1043 + static int sunxi_get_oob_offset(struct sunxi_nand_chip *sunxi_nand, 1044 + struct nand_ecc_ctrl *ecc, int step) 1045 + { 1046 + int ecc_off = step * ecc->bytes; 1047 + int i; 1048 + 1049 + for (i = 0; i < step; i++) 1050 + ecc_off += sunxi_nfc_user_data_sz(sunxi_nand, i); 1051 + 1052 + return ecc_off; 1053 + } 1054 + 1055 + /* 1056 + * Returns the offset of the ECC for each step. 1057 + * So, it's the same as sunxi_get_oob_offset(), 1058 + * but it skips the next user data. 
1059 + */ 1060 + static int sunxi_get_ecc_offset(struct sunxi_nand_chip *sunxi_nand, 1061 + struct nand_ecc_ctrl *ecc, int step) 1062 + { 1063 + return sunxi_get_oob_offset(sunxi_nand, ecc, step) + 1064 + sunxi_nfc_user_data_sz(sunxi_nand, step); 1065 + } 1066 + 1115 1067 static void sunxi_nfc_hw_ecc_read_extra_oob(struct nand_chip *nand, 1116 1068 u8 *oob, int *cur_off, 1117 1069 bool randomize, int page) 1118 1070 { 1071 + struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand); 1119 1072 struct mtd_info *mtd = nand_to_mtd(nand); 1120 1073 struct nand_ecc_ctrl *ecc = &nand->ecc; 1121 - int offset = ((ecc->bytes + 4) * ecc->steps); 1074 + int offset = sunxi_get_oob_offset(sunxi_nand, ecc, ecc->steps); 1122 1075 int len = mtd->oobsize - offset; 1123 1076 1124 1077 if (len <= 0) 1125 1078 return; 1126 1079 1127 - if (!cur_off || *cur_off != offset) 1128 - nand_change_read_column_op(nand, mtd->writesize, NULL, 0, 1129 - false); 1080 + if (!cur_off || *cur_off != (offset + mtd->writesize)) 1081 + nand_change_read_column_op(nand, mtd->writesize + offset, 1082 + NULL, 0, false); 1130 1083 1131 1084 if (!randomize) 1132 1085 sunxi_nfc_read_buf(nand, oob + offset, len); ··· 1172 1067 int nchunks) 1173 1068 { 1174 1069 bool randomized = nand->options & NAND_NEED_SCRAMBLING; 1070 + struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand); 1175 1071 struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller); 1176 1072 struct mtd_info *mtd = nand_to_mtd(nand); 1177 1073 struct nand_ecc_ctrl *ecc = &nand->ecc; ··· 1192 1086 1193 1087 sunxi_nfc_hw_ecc_enable(nand); 1194 1088 sunxi_nfc_reset_user_data_len(nfc); 1195 - sunxi_nfc_set_user_data_len(nfc, USER_DATA_SZ, 0); 1089 + for (i = 0; i < nchunks; i++) 1090 + sunxi_nfc_set_user_data_len(nfc, sunxi_nfc_user_data_sz(sunxi_nand, i), i); 1196 1091 sunxi_nfc_randomizer_config(nand, page, false); 1197 1092 sunxi_nfc_randomizer_enable(nand); 1198 1093 ··· 1228 1121 1229 1122 for (i = 0; i < nchunks; i++) { 1230 1123 int data_off = i * 
ecc->size; 1231 - int oob_off = i * (ecc->bytes + USER_DATA_SZ); 1124 + unsigned int user_data_sz = sunxi_nfc_user_data_sz(sunxi_nand, i); 1125 + int oob_off = sunxi_get_oob_offset(sunxi_nand, ecc, i); 1232 1126 u8 *data = buf + data_off; 1233 1127 u8 *oob = nand->oob_poi + oob_off; 1234 1128 bool erased; ··· 1247 1139 /* TODO: use DMA to retrieve OOB */ 1248 1140 nand_change_read_column_op(nand, 1249 1141 mtd->writesize + oob_off, 1250 - oob, ecc->bytes + USER_DATA_SZ, false); 1142 + oob, ecc->bytes + user_data_sz, false); 1251 1143 1252 - sunxi_nfc_hw_ecc_get_prot_oob_bytes(nand, oob, i, 1253 - !i, page); 1144 + sunxi_nfc_hw_ecc_get_prot_oob_bytes(nand, oob, i, !i, 1145 + page, user_data_sz); 1254 1146 } 1255 1147 1256 1148 if (erased) ··· 1262 1154 if (status & NFC_ECC_ERR_MSK(nfc)) { 1263 1155 for (i = 0; i < nchunks; i++) { 1264 1156 int data_off = i * ecc->size; 1265 - int oob_off = i * (ecc->bytes + USER_DATA_SZ); 1157 + unsigned int user_data_sz = sunxi_nfc_user_data_sz(sunxi_nand, i); 1158 + int oob_off = sunxi_get_oob_offset(sunxi_nand, ecc, i); 1266 1159 u8 *data = buf + data_off; 1267 1160 u8 *oob = nand->oob_poi + oob_off; 1268 1161 ··· 1283 1174 /* TODO: use DMA to retrieve OOB */ 1284 1175 nand_change_read_column_op(nand, 1285 1176 mtd->writesize + oob_off, 1286 - oob, ecc->bytes + USER_DATA_SZ, false); 1177 + oob, ecc->bytes + user_data_sz, false); 1287 1178 1288 1179 ret = nand_check_erased_ecc_chunk(data, ecc->size, oob, 1289 - ecc->bytes + USER_DATA_SZ, 1180 + ecc->bytes + user_data_sz, 1290 1181 NULL, 0, 1291 1182 ecc->strength); 1292 1183 if (ret >= 0) ··· 1307 1198 static int sunxi_nfc_hw_ecc_write_chunk(struct nand_chip *nand, 1308 1199 const u8 *data, int data_off, 1309 1200 const u8 *oob, int oob_off, 1310 - int *cur_off, bool bbm, 1201 + int *cur_off, int step, 1311 1202 int page) 1312 1203 { 1313 1204 struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller); 1205 + struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand); 1206 + unsigned 
int user_data_sz = sunxi_nfc_user_data_sz(sunxi_nand, step); 1314 1207 struct nand_ecc_ctrl *ecc = &nand->ecc; 1208 + bool bbm = !step; 1315 1209 int ret; 1210 + /* From the controller point of view, we are at step 0 */ 1211 + const int nfc_step = 0; 1316 1212 1317 1213 if (data_off != *cur_off) 1318 1214 nand_change_write_column_op(nand, data_off, NULL, 0, false); ··· 1333 1219 1334 1220 sunxi_nfc_randomizer_config(nand, page, false); 1335 1221 sunxi_nfc_randomizer_enable(nand); 1336 - sunxi_nfc_reset_user_data_len(nfc); 1337 - sunxi_nfc_set_user_data_len(nfc, USER_DATA_SZ, 0); 1338 - sunxi_nfc_hw_ecc_set_prot_oob_bytes(nand, oob, 0, bbm, page); 1222 + sunxi_nfc_set_user_data_len(nfc, user_data_sz, nfc_step); 1223 + sunxi_nfc_hw_ecc_set_prot_oob_bytes(nand, oob, nfc_step, bbm, page); 1339 1224 1340 1225 writel(NFC_DATA_TRANS | NFC_DATA_SWAP_METHOD | 1341 1226 NFC_ACCESS_DIR | NFC_ECC_OP, ··· 1345 1232 if (ret) 1346 1233 return ret; 1347 1234 1348 - *cur_off = oob_off + ecc->bytes + USER_DATA_SZ; 1235 + *cur_off = oob_off + ecc->bytes + user_data_sz; 1349 1236 1350 1237 return 0; 1351 1238 } ··· 1355 1242 int page) 1356 1243 { 1357 1244 struct mtd_info *mtd = nand_to_mtd(nand); 1245 + struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand); 1358 1246 struct nand_ecc_ctrl *ecc = &nand->ecc; 1359 - int offset = ((ecc->bytes + USER_DATA_SZ) * ecc->steps); 1247 + int offset = sunxi_get_oob_offset(sunxi_nand, ecc, ecc->steps); 1360 1248 int len = mtd->oobsize - offset; 1361 1249 1362 1250 if (len <= 0) ··· 1376 1262 static int sunxi_nfc_hw_ecc_read_page(struct nand_chip *nand, uint8_t *buf, 1377 1263 int oob_required, int page) 1378 1264 { 1265 + struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller); 1266 + struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand); 1379 1267 struct mtd_info *mtd = nand_to_mtd(nand); 1380 1268 struct nand_ecc_ctrl *ecc = &nand->ecc; 1381 1269 unsigned int max_bitflips = 0; ··· 1390 1274 1391 1275 sunxi_nfc_hw_ecc_enable(nand); 1392 1276 
1277 + sunxi_nfc_reset_user_data_len(nfc); 1393 1278 for (i = 0; i < ecc->steps; i++) { 1394 1279 int data_off = i * ecc->size; 1395 - int oob_off = i * (ecc->bytes + USER_DATA_SZ); 1280 + int oob_off = sunxi_get_oob_offset(sunxi_nand, ecc, i); 1396 1281 u8 *data = buf + data_off; 1397 1282 u8 *oob = nand->oob_poi + oob_off; 1398 1283 1399 1284 ret = sunxi_nfc_hw_ecc_read_chunk(nand, data, data_off, oob, 1400 1285 oob_off + mtd->writesize, 1401 1286 &cur_off, &max_bitflips, 1402 - !i, oob_required, page); 1287 + i, oob_required, page); 1403 1288 if (ret < 0) 1404 1289 return ret; 1405 1290 else if (ret) ··· 1438 1321 u32 data_offs, u32 readlen, 1439 1322 u8 *bufpoi, int page) 1440 1323 { 1324 + struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller); 1325 + struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand); 1441 1326 struct mtd_info *mtd = nand_to_mtd(nand); 1442 1327 struct nand_ecc_ctrl *ecc = &nand->ecc; 1443 1328 int ret, i, cur_off = 0; ··· 1451 1332 1452 1333 sunxi_nfc_hw_ecc_enable(nand); 1453 1334 1335 + sunxi_nfc_reset_user_data_len(nfc); 1454 1336 for (i = data_offs / ecc->size; 1455 1337 i < DIV_ROUND_UP(data_offs + readlen, ecc->size); i++) { 1456 1338 int data_off = i * ecc->size; 1457 - int oob_off = i * (ecc->bytes + USER_DATA_SZ); 1339 + int oob_off = sunxi_get_oob_offset(sunxi_nand, ecc, i); 1458 1340 u8 *data = bufpoi + data_off; 1459 1341 u8 *oob = nand->oob_poi + oob_off; 1460 1342 1461 1343 ret = sunxi_nfc_hw_ecc_read_chunk(nand, data, data_off, 1462 1344 oob, 1463 1345 oob_off + mtd->writesize, 1464 - &cur_off, &max_bitflips, !i, 1346 + &cur_off, &max_bitflips, i, 1465 1347 false, page); 1466 1348 if (ret < 0) 1467 1349 return ret; ··· 1497 1377 const uint8_t *buf, int oob_required, 1498 1378 int page) 1499 1379 { 1380 + struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller); 1381 + struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand); 1500 1382 struct mtd_info *mtd = nand_to_mtd(nand); 1501 1383 struct nand_ecc_ctrl *ecc = 
&nand->ecc; 1502 1384 int ret, i, cur_off = 0; ··· 1509 1387 1510 1388 sunxi_nfc_hw_ecc_enable(nand); 1511 1389 1390 + sunxi_nfc_reset_user_data_len(nfc); 1512 1391 for (i = 0; i < ecc->steps; i++) { 1513 1392 int data_off = i * ecc->size; 1514 - int oob_off = i * (ecc->bytes + USER_DATA_SZ); 1393 + int oob_off = sunxi_get_oob_offset(sunxi_nand, ecc, i); 1515 1394 const u8 *data = buf + data_off; 1516 1395 const u8 *oob = nand->oob_poi + oob_off; 1517 1396 1518 1397 ret = sunxi_nfc_hw_ecc_write_chunk(nand, data, data_off, oob, 1519 1398 oob_off + mtd->writesize, 1520 - &cur_off, !i, page); 1399 + &cur_off, i, page); 1521 1400 if (ret) 1522 1401 return ret; 1523 1402 } ··· 1537 1414 const u8 *buf, int oob_required, 1538 1415 int page) 1539 1416 { 1417 + struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller); 1418 + struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand); 1540 1419 struct mtd_info *mtd = nand_to_mtd(nand); 1541 1420 struct nand_ecc_ctrl *ecc = &nand->ecc; 1542 1421 int ret, i, cur_off = 0; ··· 1549 1424 1550 1425 sunxi_nfc_hw_ecc_enable(nand); 1551 1426 1427 + sunxi_nfc_reset_user_data_len(nfc); 1552 1428 for (i = data_offs / ecc->size; 1553 1429 i < DIV_ROUND_UP(data_offs + data_len, ecc->size); i++) { 1554 1430 int data_off = i * ecc->size; 1555 - int oob_off = i * (ecc->bytes + USER_DATA_SZ); 1431 + int oob_off = sunxi_get_oob_offset(sunxi_nand, ecc, i); 1556 1432 const u8 *data = buf + data_off; 1557 1433 const u8 *oob = nand->oob_poi + oob_off; 1558 1434 1559 1435 ret = sunxi_nfc_hw_ecc_write_chunk(nand, data, data_off, oob, 1560 1436 oob_off + mtd->writesize, 1561 - &cur_off, !i, page); 1437 + &cur_off, i, page); 1562 1438 if (ret) 1563 1439 return ret; 1564 1440 } ··· 1575 1449 int page) 1576 1450 { 1577 1451 struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller); 1452 + struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand); 1578 1453 struct nand_ecc_ctrl *ecc = &nand->ecc; 1579 1454 struct scatterlist sg; 1580 1455 u32 wait; ··· 1594 1467 
 	sunxi_nfc_reset_user_data_len(nfc);
 	for (i = 0; i < ecc->steps; i++) {
-		const u8 *oob = nand->oob_poi + (i * (ecc->bytes + USER_DATA_SZ));
+		unsigned int user_data_sz = sunxi_nfc_user_data_sz(sunxi_nand, i);
+		int oob_off = sunxi_get_oob_offset(sunxi_nand, ecc, i);
+		const u8 *oob = nand->oob_poi + oob_off;
 
 		sunxi_nfc_hw_ecc_set_prot_oob_bytes(nand, oob, i, !i, page);
-		sunxi_nfc_set_user_data_len(nfc, USER_DATA_SZ, i);
+		sunxi_nfc_set_user_data_len(nfc, user_data_sz, i);
 	}
 
 	nand_prog_page_begin_op(nand, page, 0, NULL, 0);
···
 {
 	struct nand_chip *nand = mtd_to_nand(mtd);
 	struct nand_ecc_ctrl *ecc = &nand->ecc;
+	struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand);
 
 	if (section >= ecc->steps)
 		return -ERANGE;
 
-	oobregion->offset = section * (ecc->bytes + USER_DATA_SZ) + 4;
+	oobregion->offset = sunxi_get_ecc_offset(sunxi_nand, ecc, section);
 	oobregion->length = ecc->bytes;
 
 	return 0;
···
 {
 	struct nand_chip *nand = mtd_to_nand(mtd);
 	struct nand_ecc_ctrl *ecc = &nand->ecc;
-
-	if (section > ecc->steps)
-		return -ERANGE;
-
-	/*
-	 * The first 2 bytes are used for BB markers, hence we
-	 * only have 2 bytes available in the first user data
-	 * section.
-	 */
-	if (!section && ecc->engine_type == NAND_ECC_ENGINE_TYPE_ON_HOST) {
-		oobregion->offset = 2;
-		oobregion->length = 2;
-
-		return 0;
-	}
+	struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand);
+	unsigned int user_data_sz = sunxi_nfc_user_data_sz(sunxi_nand, section);
 
 	/*
 	 * The controller does not provide access to OOB bytes
 	 * past the end of the ECC data.
 	 */
-	if (section == ecc->steps && ecc->engine_type == NAND_ECC_ENGINE_TYPE_ON_HOST)
+	if (section >= ecc->steps)
 		return -ERANGE;
 
-	oobregion->offset = section * (ecc->bytes + USER_DATA_SZ);
+	/*
+	 * The first 2 bytes are used for BB markers, hence we
+	 * only have user_data_sz - 2 bytes available in the first user data
+	 * section.
+	 */
+	if (section == 0) {
+		oobregion->offset = 2;
+		oobregion->length = user_data_sz - 2;
 
-	if (section < ecc->steps)
-		oobregion->length = USER_DATA_SZ;
-	else
-		oobregion->length = mtd->oobsize - oobregion->offset;
+		return 0;
+	}
+
+	oobregion->offset = sunxi_get_ecc_offset(sunxi_nand, ecc, section);
+	oobregion->length = user_data_sz;
 
 	return 0;
 }
···
 	.ecc = sunxi_nand_ooblayout_ecc,
 	.free = sunxi_nand_ooblayout_free,
 };
+
+static void sunxi_nand_detach_chip(struct nand_chip *nand)
+{
+	struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+
+	devm_kfree(nfc->dev, sunxi_nand->user_data_bytes);
+	sunxi_nand->user_data_bytes = NULL;
+}
+
+static int sunxi_nfc_maximize_user_data(struct nand_chip *nand, uint32_t oobsize,
+					int ecc_bytes, int nsectors)
+{
+	struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	const struct sunxi_nfc_caps *c = nfc->caps;
+	int remaining_bytes = oobsize - (ecc_bytes * nsectors);
+	int i, step;
+
+	sunxi_nand->user_data_bytes = devm_kzalloc(nfc->dev, nsectors,
+						   GFP_KERNEL);
+	if (!sunxi_nand->user_data_bytes)
+		return -ENOMEM;
+
+	for (step = 0; (step < nsectors) && (remaining_bytes > 0); step++) {
+		for (i = 0; i < c->nuser_data_tab; i++) {
+			if (c->user_data_len_tab[i] > remaining_bytes)
+				break;
+			sunxi_nand->user_data_bytes[step] = c->user_data_len_tab[i];
+		}
+		remaining_bytes -= sunxi_nand->user_data_bytes[step];
+		if (sunxi_nand->user_data_bytes[step] == 0)
+			break;
+	}
+
+	return 0;
+}
 
 static int sunxi_nand_hw_ecc_ctrl_init(struct nand_chip *nand,
 				       struct nand_ecc_ctrl *ecc,
···
 	const u8 *strengths = nfc->caps->ecc_strengths;
 	struct mtd_info *mtd = nand_to_mtd(nand);
 	struct nand_device *nanddev = mtd_to_nanddev(mtd);
+	int total_user_data_sz = 0;
 	int nsectors;
+	int ecc_mode;
 	int i;
 
 	if (nanddev->ecc.user_conf.flags & NAND_ECC_MAXIMIZE_STRENGTH) {
-		int bytes;
+		int bytes = mtd->oobsize;
 
 		ecc->size = 1024;
 		nsectors = mtd->writesize / ecc->size;
 
-		/* Reserve 2 bytes for the BBM */
-		bytes = (mtd->oobsize - 2) / nsectors;
+		if (!nfc->caps->reg_user_data_len) {
+			/*
+			 * If there's a fixed user data length, subtract it before
+			 * computing the max ECC strength
+			 */
 
-		/* 4 non-ECC bytes are added before each ECC bytes section */
-		bytes -= USER_DATA_SZ;
+			for (i = 0; i < nsectors; i++)
+				total_user_data_sz += sunxi_nfc_user_data_sz(sunxi_nand, i);
+
+			/*
+			 * The 2 BBM bytes should not be removed from the grand total,
+			 * because they are part of the USER_DATA_SZ.
+			 * But we can't modify that for older platform since it may
+			 * result in a stronger ECC at the end, and break the
+			 * compatibility.
+			 */
+			if (nfc->caps->legacy_max_strength)
+				bytes -= 2;
+
+			bytes -= total_user_data_sz;
+		} else {
+			/*
+			 * remove at least the BBM size before computing the
+			 * max ECC
+			 */
+			bytes -= 2;
+		}
+
+		/*
+		 * Once all user data has been subtracted, the rest can be used
+		 * for ECC bytes
+		 */
+		bytes /= nsectors;
 
 		/* and bytes has to be even. */
 		if (bytes % 2)
···
 	}
 
 	/* Add ECC info retrieval from DT */
-	for (i = 0; i < nfc->caps->nstrengths; i++) {
-		if (ecc->strength <= strengths[i]) {
+	for (ecc_mode = 0; ecc_mode < nfc->caps->nstrengths; ecc_mode++) {
+		if (ecc->strength <= strengths[ecc_mode]) {
 			/*
 			 * Update ecc->strength value with the actual strength
 			 * that will be used by the ECC engine.
 			 */
-			ecc->strength = strengths[i];
+			ecc->strength = strengths[ecc_mode];
 			break;
 		}
 	}
 
-	if (i >= nfc->caps->nstrengths) {
+	if (ecc_mode >= nfc->caps->nstrengths) {
 		dev_err(nfc->dev, "unsupported strength\n");
 		return -ENOTSUPP;
 	}
···
 	nsectors = mtd->writesize / ecc->size;
 
-	if (mtd->oobsize < ((ecc->bytes + USER_DATA_SZ) * nsectors))
+	/*
+	 * The rationale for variable data length is to prioritize maximum ECC
+	 * strength, and then use the remaining space for user data.
+	 */
+	if (nfc->caps->reg_user_data_len)
+		sunxi_nfc_maximize_user_data(nand, mtd->oobsize, ecc->bytes,
+					     nsectors);
+
+	if (total_user_data_sz == 0)
+		for (i = 0; i < nsectors; i++)
+			total_user_data_sz += sunxi_nfc_user_data_sz(sunxi_nand, i);
+
+	if (mtd->oobsize < (ecc->bytes * nsectors + total_user_data_sz))
 		return -EINVAL;
 
 	ecc->read_oob = sunxi_nfc_hw_ecc_read_oob;
···
 	ecc->read_oob_raw = nand_read_oob_std;
 	ecc->write_oob_raw = nand_write_oob_std;
 
-	sunxi_nand->ecc.ecc_ctl = NFC_ECC_MODE(nfc, i) | NFC_ECC_EXCEPTION |
+	sunxi_nand->ecc.ecc_ctl = NFC_ECC_MODE(nfc, ecc_mode) | NFC_ECC_EXCEPTION |
 				  NFC_ECC_PIPELINE | NFC_ECC_EN;
 
 	if (ecc->size == 512) {
···
 
 static const struct nand_controller_ops sunxi_nand_controller_ops = {
 	.attach_chip = sunxi_nand_attach_chip,
+	.detach_chip = sunxi_nand_detach_chip,
 	.setup_interface = sunxi_nfc_setup_interface,
 	.exec_op = sunxi_nfc_exec_op,
 };
···
 
 static const struct sunxi_nfc_caps sunxi_nfc_a10_caps = {
 	.has_ecc_block_512 = true,
+	.legacy_max_strength = true,
 	.reg_io_data = NFC_REG_A10_IO_DATA,
 	.reg_ecc_err_cnt = NFC_REG_A10_ECC_ERR_CNT,
 	.reg_user_data = NFC_REG_A10_USER_DATA,
···
 static const struct sunxi_nfc_caps sunxi_nfc_a23_caps = {
 	.has_mdma = true,
 	.has_ecc_block_512 = true,
+	.legacy_max_strength = true,
 	.reg_io_data = NFC_REG_A23_IO_DATA,
 	.reg_ecc_err_cnt = NFC_REG_A10_ECC_ERR_CNT,
 	.reg_user_data = NFC_REG_A10_USER_DATA,
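The new `sunxi_nfc_maximize_user_data()` helper budgets the OOB area greedily: for each ECC step it keeps the largest entry of a controller-supported length table that still fits in the remaining space. A minimal userspace sketch of that budgeting, with an illustrative length table and function name (not the driver's actual values):

```c
#include <assert.h>

/*
 * Greedy per-step user-data sizing, in the spirit of
 * sunxi_nfc_maximize_user_data(). Assumes len_tab is sorted ascending,
 * so the last fitting entry is the largest one.
 */
static void maximize_user_data(int oobsize, int ecc_bytes, int nsectors,
			       const int *len_tab, int ntab, int *out)
{
	int remaining = oobsize - ecc_bytes * nsectors;
	int step, i;

	for (step = 0; step < nsectors; step++)
		out[step] = 0;

	for (step = 0; step < nsectors && remaining > 0; step++) {
		for (i = 0; i < ntab; i++) {
			if (len_tab[i] > remaining)
				break;
			out[step] = len_tab[i]; /* largest entry that fits */
		}
		remaining -= out[step];
		if (out[step] == 0)
			break;
	}
}
```

With a 64-byte OOB, 7 ECC bytes per step and 4 steps, 36 bytes remain, so the steps get 16, 16, 4 and 0 user-data bytes respectively.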
+10 -7
drivers/mtd/nand/spi/winbond.c
···
 	if (iface != SSDR)
 		return -EOPNOTSUPP;
 
+	/*
+	 * SDR dual and quad I/O operations over 104MHz require the HS bit to
+	 * enable a few more dummy cycles.
+	 */
 	op = spinand->op_templates->read_cache;
 	if (op->cmd.dtr || op->addr.dtr || op->dummy.dtr || op->data.dtr)
 		hs = false;
-	else if (op->cmd.buswidth == 1 && op->addr.buswidth == 1 &&
-		 op->dummy.buswidth == 1 && op->data.buswidth == 1)
+	else if (op->cmd.buswidth != 1 || op->addr.buswidth == 1)
 		hs = false;
-	else if (!op->max_freq)
-		hs = true;
+	else if (op->max_freq && op->max_freq <= 104 * HZ_PER_MHZ)
+		hs = false;
 	else
-		hs = false;
+		hs = true;
 
 	ret = spinand_read_reg_op(spinand, W25N0XJW_SR4, &sr4);
 	if (ret)
···
 		     SPINAND_INFO_OP_VARIANTS(&read_cache_dual_quad_dtr_variants,
 					      &write_cache_variants,
 					      &update_cache_variants),
-		     0,
+		     SPINAND_HAS_QE_BIT,
 		     SPINAND_ECCINFO(&w25n01jw_ooblayout, NULL),
 		     SPINAND_CONFIGURE_CHIP(w25n0xjw_hs_cfg)),
 	SPINAND_INFO("W25N01KV", /* 3.3V */
···
 		     SPINAND_INFO_OP_VARIANTS(&read_cache_dual_quad_dtr_variants,
 					      &write_cache_variants,
 					      &update_cache_variants),
-		     0,
+		     SPINAND_HAS_QE_BIT,
 		     SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL),
 		     SPINAND_CONFIGURE_CHIP(w25n0xjw_hs_cfg)),
 	SPINAND_INFO("W25N02KV", /* 3.3V */
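Per the comment added above, the HS bit only matters for SDR dual/quad I/O operations running above 104 MHz. A simplified predicate capturing that stated rule (this is an illustration of the comment, not the driver function, and the names are invented):

```c
#include <assert.h>
#include <stdbool.h>

#define HZ_PER_MHZ 1000000UL

/*
 * Sketch of the HS-bit decision described in the new comment:
 * only fast (> 104 MHz) SDR dual/quad I/O needs the extra dummy
 * cycles the HS bit enables.
 */
static bool w25n0xjw_needs_hs(bool any_dtr, unsigned int data_buswidth,
			      unsigned long max_freq_hz)
{
	if (any_dtr)
		return false;		/* DTR ops never use HS */
	if (data_buswidth == 1)
		return false;		/* single I/O is unaffected */
	if (max_freq_hz && max_freq_hz <= 104 * HZ_PER_MHZ)
		return false;		/* slow enough without HS */
	return true;			/* fast dual/quad SDR: enable HS */
}
```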
+1 -2
drivers/mtd/parsers/cmdlinepart.c
···
 
 struct cmdline_mtd_partition {
 	struct cmdline_mtd_partition *next;
-	char *mtd_id;
 	int num_parts;
 	struct mtd_partition *parts;
+	char mtd_id[];
 };
 
 /* mtdpart_setup() parses into here */
···
 	/* enter results */
 	this_mtd->parts = parts;
 	this_mtd->num_parts = num_parts;
-	this_mtd->mtd_id = (char*)(this_mtd + 1);
 	strscpy(this_mtd->mtd_id, mtd_id, mtd_id_len + 1);
 
 	/* link into chain */
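The cmdlinepart change above replaces a manually fixed-up `char *` (pointing just past the struct) with a C99 flexible array member, so the string storage is part of the struct itself. A small userspace illustration of that pattern (the struct and helper here are stand-ins, not the kernel's):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* The flexible array member must be the last field of the struct. */
struct part_entry {
	int num_parts;
	char mtd_id[];
};

/* Allocate struct and string in one block; no (char *)(p + 1) fixup. */
static struct part_entry *part_entry_alloc(const char *id, int num_parts)
{
	size_t id_len = strlen(id);
	struct part_entry *p = malloc(sizeof(*p) + id_len + 1);

	if (!p)
		return NULL;
	p->num_parts = num_parts;
	memcpy(p->mtd_id, id, id_len + 1);
	return p;
}
```

Besides dropping the pointer field, this removes the easy-to-get-wrong aliasing arithmetic and one word of storage per entry.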
+2 -2
drivers/mtd/parsers/ofpart_core.c
···
 			dedicated = false;
 		}
 	} else { /* Partition */
-		ofpart_node = mtd_node;
+		ofpart_node = of_node_get(mtd_node);
 	}
 
 	of_id = of_match_node(parse_ofpart_match_table, ofpart_node);
···
 ofpart_fail:
 	pr_err("%s: error parsing ofpart partition %pOF (%pOF)\n",
 	       master->name, pp, mtd_node);
+	of_node_put(pp);
 	ret = -EINVAL;
 ofpart_none:
 	if (dedicated)
 		of_node_put(ofpart_node);
-	of_node_put(pp);
 	kfree(parts);
 	return ret;
 }
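The ofpart fix above is about reference-count discipline: every path that takes a node reference must drop it exactly once, including error paths. A toy model of that invariant (the struct and helpers are stand-ins, not the kernel's `of_node` API):

```c
#include <assert.h>
#include <stddef.h>

struct toy_node {
	int refcount;
};

static struct toy_node *toy_node_get(struct toy_node *n)
{
	if (n)
		n->refcount++;
	return n;
}

static void toy_node_put(struct toy_node *n)
{
	if (n)
		n->refcount--;
}

/*
 * Mirrors the fixed control flow: take a reference up front and drop it
 * exactly once, whether parsing succeeds or fails.
 */
static int parse_partition(struct toy_node *mtd_node, int fail)
{
	struct toy_node *ofpart_node = toy_node_get(mtd_node);
	int ret = 0;

	if (fail)
		ret = -1;	/* error path still reaches the put below */

	toy_node_put(ofpart_node);
	return ret;
}
```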
+1 -1
drivers/mtd/spi-nor/core.c
···
 	/* convert the dummy cycles to the number of bytes */
 	op.dummy.nbytes = (read->num_mode_clocks + read->num_wait_states) *
 			  op.dummy.buswidth / 8;
-	if (spi_nor_protocol_is_dtr(nor->read_proto))
+	if (spi_nor_protocol_is_dtr(read->proto))
 		op.dummy.nbytes *= 2;
 
 	return spi_nor_spimem_check_read_pp_op(nor, &op);
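The fixed line converts dummy clock cycles into bytes for the read operation being checked (rather than the currently selected protocol): bytes = cycles * buswidth / 8, doubled for DTR because data is transferred on both clock edges. A standalone sketch of that arithmetic (not the kernel function itself):

```c
#include <assert.h>

/* Dummy-cycle to byte conversion, doubled for double transfer rate. */
static unsigned int dummy_cycles_to_nbytes(unsigned int cycles,
					   unsigned int buswidth, int is_dtr)
{
	unsigned int nbytes = cycles * buswidth / 8;

	if (is_dtr)
		nbytes *= 2;
	return nbytes;
}
```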
+1 -1
drivers/mtd/spi-nor/core.h
···
  *               number of dummy cycles in read register ops.
  * @smpt_map_id: called after map ID in SMPT table has been determined for the
  *               case the map ID is wrong and needs to be fixed.
- * @post_sfdp: called after SFDP has been parsed (is also called for SPI NORs
+ * @post_sfdp: called after SFDP has been parsed (is not called for SPI NORs
  *             that do not support RDSFDP). Typically used to tweak various
  *             parameters that could not be extracted by other means (i.e.
  *             when information provided by the SFDP/flash_info tables are
+14 -13
drivers/mtd/spi-nor/micron-st.c
···
 					  0, 20, SPINOR_OP_MT_DTR_RD,
 					  SNOR_PROTO_8_8_8_DTR);
 
+	/*
+	 * Some batches of mt35xu512aba do not contain the OCT DTR command
+	 * information, but do support OCT DTR mode. Add the settings for
+	 * SNOR_CMD_PP_8_8_8_DTR here. This also makes sure the flash can switch
+	 * to OCT DTR mode.
+	 */
+	nor->params->hwcaps.mask |= SNOR_HWCAPS_PP_8_8_8_DTR;
+	spi_nor_set_pp_settings(&nor->params->page_programs[SNOR_CMD_PP_8_8_8_DTR],
+				SPINOR_OP_PP_4B, SNOR_PROTO_8_8_8_DTR);
+
 	nor->cmd_ext_type = SPI_NOR_EXT_REPEAT;
 	nor->params->rdsr_dummy = 8;
 	nor->params->rdsr_addr_nbytes = 0;
···
 	.post_sfdp = mt35xu512aba_post_sfdp_fixup,
 };
 
-static const struct spi_nor_fixups mt35xu01gbba_fixups = {
+static const struct spi_nor_fixups mt35_two_die_fixups = {
 	.post_sfdp = mt35xu512aba_post_sfdp_fixup,
 	.late_init = micron_st_nor_two_die_late_init,
 };
···
 		.id = SNOR_ID(0x2c, 0x5b, 0x1b),
 		.mfr_flags = USE_FSR,
 		.fixup_flags = SPI_NOR_IO_MODE_EN_VOLATILE,
-		.fixups = &mt35xu01gbba_fixups,
+		.fixups = &mt35_two_die_fixups,
 	}, {
-		/*
-		 * The MT35XU02GCBA flash device does not support chip erase,
-		 * according to its datasheet. It supports die erase, which
-		 * means the current driver implementation will likely need to
-		 * be converted to use die erase. Furthermore, similar to the
-		 * MT35XU01GBBA, the SPI_NOR_IO_MODE_EN_VOLATILE flag probably
-		 * needs to be enabled.
-		 *
-		 * TODO: Fix these and test on real hardware.
-		 */
 		.id = SNOR_ID(0x2c, 0x5b, 0x1c),
 		.name = "mt35xu02g",
 		.sector_size = SZ_128K,
 		.size = SZ_256M,
 		.no_sfdp_flags = SECT_4K | SPI_NOR_OCTAL_READ,
 		.mfr_flags = USE_FSR,
-		.fixup_flags = SPI_NOR_4B_OPCODES,
+		.fixup_flags = SPI_NOR_4B_OPCODES | SPI_NOR_IO_MODE_EN_VOLATILE,
+		.fixups = &mt35_two_die_fixups,
 	},
 };
+13
drivers/mtd/spi-nor/sst.c
···
 
 	/* Start write from odd address. */
 	if (to % 2) {
+		bool needs_write_enable = (len > 1);
+
 		/* write one byte. */
 		ret = sst_nor_write_data(nor, to, 1, buf);
 		if (ret < 0)
···
 		to++;
 		actual++;
+
+		/*
+		 * Byte program clears the write enable latch. If more
+		 * data needs to be written using the AAI sequence,
+		 * re-enable writes.
+		 */
+		if (needs_write_enable) {
+			ret = spi_nor_write_enable(nor);
+			if (ret)
+				goto out;
+		}
 	}
 
 	/* Write out most of the data here. */
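The bug the SST fix addresses is a latch-state issue: Byte Program clears the write enable latch (WEL), so a leading odd-address byte must be followed by another Write Enable before the AAI word sequence can start. A toy state machine showing the failure mode (all names and state here are illustrative, not the driver or the flash command set):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the flash's write enable latch behaviour. */
struct toy_flash {
	bool wel;
};

static void toy_write_enable(struct toy_flash *f)
{
	f->wel = true;
}

static int toy_byte_program(struct toy_flash *f)
{
	if (!f->wel)
		return -1;
	f->wel = false;	/* the program operation clears WEL */
	return 0;
}

static int toy_aai_start(struct toy_flash *f)
{
	return f->wel ? 0 : -1;	/* AAI cannot start without WEL set */
}
```

Without the re-enable between the single-byte program and the AAI sequence, the second step is rejected, which is exactly what the added `spi_nor_write_enable()` call prevents.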
+3 -1
drivers/mtd/spi-nor/swp.c
···
 {
 	if (nor->flags & SNOR_F_HAS_SR_TB_BIT6)
 		return SR_TB_BIT6;
-	else
+	else if (nor->flags & SNOR_F_HAS_SR_TB)
 		return SR_TB_BIT5;
+	else
+		return 0;
 }
 
 static u64 spi_nor_get_min_prot_length_sr(struct spi_nor *nor)
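The swp change makes the Top/Bottom mask selection explicit: bit 6 when the flash uses it, bit 5 only when the flash actually has a TB bit, and 0 otherwise instead of unconditionally claiming bit 5. A standalone mirror of the fixed logic (the flag values here are invented for the sketch; the real kernel constants differ):

```c
#include <assert.h>

#define SR_TB_BIT5		(1U << 5)
#define SR_TB_BIT6		(1U << 6)
#define SNOR_F_HAS_SR_TB	(1U << 0)
#define SNOR_F_HAS_SR_TB_BIT6	(1U << 1)

/* Pick the TB mask from the flash's capability flags; 0 if no TB bit. */
static unsigned int get_sr_tb_mask(unsigned int flags)
{
	if (flags & SNOR_F_HAS_SR_TB_BIT6)
		return SR_TB_BIT6;
	else if (flags & SNOR_F_HAS_SR_TB)
		return SR_TB_BIT5;
	else
		return 0;
}
```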
+3 -1
drivers/mtd/spi-nor/winbond.c
···
 		.id = SNOR_ID(0xef, 0x60, 0x19),
 		.name = "w25q256jw",
 		.size = SZ_32M,
+		.flags = SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB | SPI_NOR_TB_SR_BIT6 | SPI_NOR_4BIT_BP,
 		.no_sfdp_flags = SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ,
 	}, {
 		.id = SNOR_ID(0xef, 0x60, 0x20),
···
 		.id = SNOR_ID(0xef, 0x70, 0x17),
 		.name = "w25q64jvm",
 		.size = SZ_8M,
+		.flags = SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB,
 		.no_sfdp_flags = SECT_4K,
 	}, {
 		.id = SNOR_ID(0xef, 0x70, 0x18),
···
 		.id = SNOR_ID(0xef, 0x80, 0x19),
 		.name = "w25q256jwm",
 		.size = SZ_32M,
-		.flags = SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB,
+		.flags = SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB | SPI_NOR_TB_SR_BIT6 | SPI_NOR_4BIT_BP,
 		.no_sfdp_flags = SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ,
 	}, {
 		.id = SNOR_ID(0xef, 0x80, 0x20),
+62 -1
include/linux/mtd/concat.h
···
 #define MTD_CONCAT_H
 
 
+/*
+ * Our storage structure:
+ * Subdev points to an array of pointers to struct mtd_info objects
+ * which is allocated along with this structure
+ */
+struct mtd_concat {
+	struct mtd_info mtd;
+	int num_subdev;
+	struct mtd_info *subdev[];
+};
+
 struct mtd_info *mtd_concat_create(
 	struct mtd_info *subdev[],  /* subdevices to concatenate */
 	int num_devs,               /* number of subdevices      */
···
 
 void mtd_concat_destroy(struct mtd_info *mtd);
 
-#endif
+/**
+ * mtd_virt_concat_node_create - Create a component for concatenation
+ *
+ * Returns a positive number representing the number of devices found for
+ * concatenation, or a negative error code.
+ *
+ * List all the devices for concatenation found in DT and create a
+ * component for concatenation.
+ */
+int mtd_virt_concat_node_create(void);
 
+/**
+ * mtd_virt_concat_add - add mtd_info object to the list of subdevices for concatenation
+ * @mtd: pointer to new MTD device info structure
+ *
+ * Returns true if the mtd_info object is added successfully, else returns false.
+ *
+ * The mtd_info object is added to the list of subdevices for concatenation.
+ * It returns true if a match is found, and false if all subdevices have
+ * already been added or if the mtd_info object does not match any of the
+ * intended MTD devices.
+ */
+bool mtd_virt_concat_add(struct mtd_info *mtd);
+
+/**
+ * mtd_virt_concat_create_join - Create and register the concatenated MTD device
+ *
+ * Returns 0 on success, or a negative error code.
+ */
+int mtd_virt_concat_create_join(void);
+
+/**
+ * mtd_virt_concat_destroy - Remove the concat that includes a specific MTD device
+ * as one of its components.
+ * @mtd: pointer to MTD device info structure.
+ *
+ * Returns 0 on success, or a negative error code.
+ *
+ * If the mtd_info object is part of a concatenated device, all other MTD devices
+ * within that concat are registered individually. The concatenated device is then
+ * removed, along with its concatenation component.
+ */
+int mtd_virt_concat_destroy(struct mtd_info *mtd);
+
+void mtd_virt_concat_destroy_joins(void);
+void mtd_virt_concat_destroy_items(void);
+
+#endif
+3 -2
include/linux/mtd/spinand.h
···
 	const struct mtd_ooblayout_ops *ooblayout;
 };
 
-#define SPINAND_HAS_QE_BIT		BIT(0)
-#define SPINAND_HAS_CR_FEAT_BIT		BIT(1)
+/* SPI NAND flags */
+#define SPINAND_HAS_QE_BIT			BIT(0)
+#define SPINAND_HAS_CR_FEAT_BIT			BIT(1)
 #define SPINAND_HAS_PROG_PLANE_SELECT_BIT	BIT(2)
 #define SPINAND_HAS_READ_PLANE_SELECT_BIT	BIT(3)
 #define SPINAND_NO_RAW_ACCESS			BIT(4)