
Staging/IIO driver updates for 4.14-rc1

Here is the big staging and IIO driver update for 4.14-rc1.
 
 Lots of staging driver fixes and cleanups, including some reorganizing
 of the lustre header files to try to impose some sanity on what is, and
 what is not, the uapi for that filesystem.
 
 There are some tty core changes in here as well, as the speakup drivers
 need them, and that's ok with me, they are sane and the speakup code is
 getting nicer because of it.
 
 There is also the addition of the obligatory new wifi driver, just
 because it has been a release or two since we added our last one...
 
 Other than that, lots and lots of small coding style fixes, as usual.
 
 All of these have been in linux-next for a while with no reported
 issues.
 
 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 -----BEGIN PGP SIGNATURE-----
 
 iG0EABECAC0WIQT0tgzFv3jCIUoxPcsxR9QN2y37KQUCWa2AbA8cZ3JlZ0Brcm9h
 aC5jb20ACgkQMUfUDdst+ymboACfUsNhw+cJlVb25J70NULkye3y1PAAoJ+Ayq30
 ckkLGakZayKcYEx50ffH
 =KJwg
 -----END PGP SIGNATURE-----

Merge tag 'staging-4.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging

Pull staging/IIO driver updates from Greg KH:
 "Here is the big staging and IIO driver update for 4.14-rc1.

  Lots of staging driver fixes and cleanups, including some reorganizing
  of the lustre header files to try to impose some sanity on what is,
  and what is not, the uapi for that filesystem.

  There are some tty core changes in here as well, as the speakup
  drivers need them, and that's ok with me, they are sane and the
  speakup code is getting nicer because of it.

  There is also the addition of the obligatory new wifi driver, just
  because it has been a release or two since we added our last one...

  Other than that, lots and lots of small coding style fixes, as usual.

  All of these have been in linux-next for a while with no reported
  issues"

* tag 'staging-4.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging: (612 commits)
  staging:rtl8188eu:core Fix remove unneccessary else block
  staging: typec: fusb302: make structure fusb302_psy_desc static
  staging: unisys: visorbus: make two functions static
  staging: fsl-dpaa2/eth: fix off-by-one FD ctrl bitmaks
  staging: r8822be: Simplify deinit_priv()
  staging: r8822be: Remove some dead code
  staging: vboxvideo: Use CONFIG_DRM_KMS_FB_HELPER to check for fbdefio availability
  staging:rtl8188eu Fix comparison to NULL
  staging: rts5208: rename mmc_ddr_tunning_rx_cmd to mmc_ddr_tuning_rx_cmd
  Staging: Pi433: style fix - tabs and spaces
  staging: pi433: fix spelling mistake: "preample" -> "preamble"
  staging:rtl8188eu:core Fix Code Indent
  staging: typec: fusb302: Export current-limit through a power_supply class dev
  staging: typec: fusb302: Add support for USB2 charger detection through extcon
  staging: typec: fusb302: Use client->irq as irq if set
  staging: typec: fusb302: Get max snk mv/ma/mw from device-properties
  staging: typec: fusb302: Set max supply voltage to 5V
  staging: typec: tcpm: Add get_current_limit tcpc_dev callback
  staging:rtl8188eu Use __func__ instead of function name
  staging: lustre: coding style fixes found by checkpatch.pl
  ...
Linus Torvalds 2017-09-05 10:36:26 -07:00
commit bf1d6b2c76
725 changed files with 141855 additions and 11622 deletions


@ -119,6 +119,15 @@ Description:
unique to allow association with event codes. Units after
application of scale and offset are milliamps.
What: /sys/bus/iio/devices/iio:deviceX/in_powerY_raw
KernelVersion: 4.5
Contact: linux-iio@vger.kernel.org
Description:
Raw (unscaled no bias removal etc.) power measurement from
channel Y. The number must always be specified and
unique to allow association with event codes. Units after
application of scale and offset are milliwatts.
What: /sys/bus/iio/devices/iio:deviceX/in_capacitanceY_raw
KernelVersion: 3.2
Contact: linux-iio@vger.kernel.org
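
The in_powerY_raw entry added above follows the usual IIO convention: userspace reads the raw value and applies the channel's scale and offset itself. A minimal userspace sketch of that conversion, assuming a hypothetical iio:device0 with power channel 0 (the offset and scale files may be absent, in which case 0 and 1 are used):

#include <stdio.h>

/* Read a sysfs attribute as a double, falling back to a default if the file
 * does not exist (offset defaults to 0, scale to 1 per the IIO ABI).
 */
static double read_sysfs_double(const char *path, double def)
{
	FILE *f = fopen(path, "r");
	double v = def;

	if (f) {
		if (fscanf(f, "%lf", &v) != 1)
			v = def;
		fclose(f);
	}
	return v;
}

int main(void)
{
	/* Hypothetical device/channel; adjust to the actual iio:deviceX */
	const char *base = "/sys/bus/iio/devices/iio:device0";
	char path[128];
	double raw, scale, offset;

	snprintf(path, sizeof(path), "%s/in_power0_raw", base);
	raw = read_sysfs_double(path, 0.0);
	snprintf(path, sizeof(path), "%s/in_power0_offset", base);
	offset = read_sysfs_double(path, 0.0);
	snprintf(path, sizeof(path), "%s/in_power0_scale", base);
	scale = read_sysfs_double(path, 1.0);

	/* Per the ABI text above, units after scale and offset are milliwatts */
	printf("power: %f mW\n", (raw + offset) * scale);
	return 0;
}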


@ -11,6 +11,11 @@ Required properties:
- atmel,min-sample-rate-hz: Minimum sampling rate, it depends on SoC.
- atmel,max-sample-rate-hz: Maximum sampling rate, it depends on SoC.
- atmel,startup-time-ms: Startup time expressed in ms, it depends on SoC.
- atmel,trigger-edge-type: One of possible edge types for the ADTRG hardware
trigger pin. When the specific edge type is detected, the conversion will
start. Possible values are rising, falling, or both.
This property uses the IRQ edge types values: IRQ_TYPE_EDGE_RISING ,
IRQ_TYPE_EDGE_FALLING or IRQ_TYPE_EDGE_BOTH
Example:
@ -25,4 +30,5 @@ adc: adc@fc030000 {
atmel,startup-time-ms = <4>;
vddana-supply = <&vdd_3v3_lp_reg>;
vref-supply = <&vdd_3v3_lp_reg>;
atmel,trigger-edge-type = <IRQ_TYPE_EDGE_BOTH>;
}


@ -12,6 +12,7 @@ for the Thermal Controller which holds a phandle to the AUXADC.
Required properties:
- compatible: Should be one of:
- "mediatek,mt2701-auxadc": For MT2701 family of SoCs
- "mediatek,mt7622-auxadc": For MT7622 family of SoCs
- "mediatek,mt8173-auxadc": For MT8173 family of SoCs
- reg: Address range of the AUXADC unit.
- clocks: Should contain a clock specifier for each entry in clock-names


@ -6,6 +6,7 @@ Required properties:
- "rockchip,rk3066-tsadc": for rk3036
- "rockchip,rk3328-saradc", "rockchip,rk3399-saradc": for rk3328
- "rockchip,rk3399-saradc": for rk3399
- "rockchip,rv1108-saradc", "rockchip,rk3399-saradc": for rv1108
- reg: physical base address of the controller and length of memory mapped
region.


@ -74,6 +74,11 @@ Optional properties:
* can be 6, 8, 10 or 12 on stm32f4
* can be 8, 10, 12, 14 or 16 on stm32h7
Default is maximum resolution if unset.
- st,min-sample-time-nsecs: Minimum sampling time in nanoseconds.
Depending on hardware (board) e.g. high/low analog input source impedance,
fine tune of ADC sampling time may be recommended.
This can be either one value or an array that matches 'st,adc-channels' list,
to set sample time resp. for all channels, or independently for each channel.
Example:
adc: adc@40012000 {
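
The st,min-sample-time-nsecs property described above may be either a single value applied to all channels or an array matching the st,adc-channels list. As an illustration only (a sketch, not the actual stm32-adc implementation; parse_min_sample_time is a made-up name), a driver could handle such a scalar-or-array property like this:

#include <linux/errno.h>
#include <linux/of.h>
#include <linux/types.h>

/* Illustrative only: fill smp[0..nchans-1] with per-channel sample times. */
static int parse_min_sample_time(struct device_node *np, u32 *smp, int nchans)
{
	int i, ret, count;

	count = of_property_count_u32_elems(np, "st,min-sample-time-nsecs");
	if (count <= 0)
		return 0;		/* optional property: keep driver defaults */

	if (count == 1) {		/* one value applies to every channel */
		u32 val;

		ret = of_property_read_u32(np, "st,min-sample-time-nsecs", &val);
		if (ret)
			return ret;
		for (i = 0; i < nchans; i++)
			smp[i] = val;
		return 0;
	}

	if (count != nchans)		/* must match the st,adc-channels list */
		return -EINVAL;

	return of_property_read_u32_array(np, "st,min-sample-time-nsecs",
					  smp, nchans);
}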


@ -10,7 +10,9 @@ current.
Contents of a stm32 dac root node:
-----------------------------------
Required properties:
- compatible: Must be "st,stm32h7-dac-core".
- compatible: Should be one of:
"st,stm32f4-dac-core"
"st,stm32h7-dac-core"
- reg: Offset and length of the device's register set.
- clocks: Must contain an entry for pclk (which feeds the peripheral bus
interface)


@ -0,0 +1,17 @@
* HDC100x temperature + humidity sensors
Required properties:
- compatible: Should contain one of the following:
ti,hdc1000
ti,hdc1008
ti,hdc1010
ti,hdc1050
ti,hdc1080
- reg: i2c address of the sensor
Example:
hdc100x@40 {
compatible = "ti,hdc1000";
reg = <0x40>;
};


@ -5,9 +5,18 @@ Required properties:
- reg: i2c address of the sensor / spi cs line
Optional properties:
- drive-open-drain: the interrupt/data ready line will be configured
as open drain, which is useful if several sensors share the same
interrupt line. This is a boolean property.
If the requested interrupt is configured as IRQ_TYPE_LEVEL_HIGH or
IRQ_TYPE_EDGE_RISING a pull-down resistor is needed to drive the line
when it is not active, whereas a pull-up is needed when the interrupt
line is configured as IRQ_TYPE_LEVEL_LOW or IRQ_TYPE_EDGE_FALLING.
Refer to pinctrl/pinctrl-bindings.txt for the property description.
- interrupt-parent: should be the phandle for the interrupt controller
- interrupts: interrupt mapping for IRQ. It should be configured with
flags IRQ_TYPE_LEVEL_HIGH or IRQ_TYPE_EDGE_RISING.
flags IRQ_TYPE_LEVEL_HIGH, IRQ_TYPE_EDGE_RISING, IRQ_TYPE_LEVEL_LOW or
IRQ_TYPE_EDGE_FALLING.
Refer to interrupt-controller/interrupts.txt for generic interrupt
client node bindings.


@ -0,0 +1,13 @@
*HTU21 - Measurement-Specialties htu21 temperature & humidity sensor and humidity part of MS8607 sensor
Required properties:
- compatible: should be "meas,htu21" or "meas,ms8607-humidity"
- reg: I2C address of the sensor
Example:
htu21@40 {
compatible = "meas,htu21";
reg = <0x40>;
};


@ -11,6 +11,14 @@ Required properties:
Optional properties:
- st,drdy-int-pin: the pin on the package that will be used to signal
"data ready" (valid values: 1 or 2).
- drive-open-drain: the interrupt/data ready line will be configured
as open drain, which is useful if several sensors share the same
interrupt line. This is a boolean property.
(This binding is taken from pinctrl/pinctrl-bindings.txt)
If the requested interrupt is configured as IRQ_TYPE_LEVEL_HIGH or
IRQ_TYPE_EDGE_RISING a pull-down resistor is needed to drive the line
when it is not active, whereas a pull-up is needed when the interrupt
line is configured as IRQ_TYPE_LEVEL_LOW or IRQ_TYPE_EDGE_FALLING.
- interrupt-parent: should be the phandle for the interrupt controller
- interrupts: interrupt mapping for IRQ. It should be configured with
flags IRQ_TYPE_LEVEL_HIGH, IRQ_TYPE_EDGE_RISING, IRQ_TYPE_LEVEL_LOW or


@ -0,0 +1,17 @@
* MS5637 - Measurement-Specialties MS5637, MS5805, MS5837 and MS8607 pressure & temperature sensor
Required properties:
-compatible: should be one of the following
meas,ms5637
meas,ms5805
meas,ms5837
meas,ms8607-temppressure
-reg: I2C address of the sensor
Example:
ms5637@76 {
compatible = "meas,ms5637";
reg = <0x76>;
};


@ -45,6 +45,7 @@ Accelerometers:
- st,lis2dh12-accel
- st,h3lis331dl-accel
- st,lng2dm-accel
- st,lis3l02dq
Gyroscopes:
- st,l3g4200d-gyro
@ -52,6 +53,7 @@ Gyroscopes:
- st,lsm330dl-gyro
- st,lsm330dlc-gyro
- st,l3gd20-gyro
- st,l3gd20h-gyro
- st,l3g4is-gyro
- st,lsm330-gyro
- st,lsm9ds0-gyro
@ -62,6 +64,7 @@ Magnetometers:
- st,lsm303dlhc-magn
- st,lsm303dlm-magn
- st,lis3mdl-magn
- st,lis2mdl
Pressure sensors:
- st,lps001wp-press


@ -0,0 +1,19 @@
* TSYS01 - Measurement Specialties temperature sensor
Required properties:
- compatible: should be "meas,tsys01"
- reg: I2C address of the sensor (changeable via CSB pin)
------------------------
| CSB | Device Address |
------------------------
|  1  |      0x76      |
|  0  |      0x77      |
Example:
tsys01@76 {
compatible = "meas,tsys01";
reg = <0x76>;
};


@ -4,7 +4,9 @@ Must be a sub-node of an STM32 Timers device tree node.
See ../mfd/stm32-timers.txt for details about the parent node.
Required parameters:
- compatible: Must be "st,stm32-timer-trigger".
- compatible: Must be one of:
"st,stm32-timer-trigger"
"st,stm32h7-timer-trigger"
- reg: Identify trigger hardware block.
Example:
@ -14,7 +16,7 @@ Example:
compatible = "st,stm32-timers";
reg = <0x40010000 0x400>;
clocks = <&rcc 0 160>;
clock-names = "clk_int";
clock-names = "int";
timer@0 {
compatible = "st,stm32-timer-trigger";


@ -0,0 +1,29 @@
Fairchild FUSB302 Type-C Port controllers
Required properties :
- compatible : "fcs,fusb302"
- reg : I2C slave address
- interrupts : Interrupt specifier
Optional properties :
- fcs,max-sink-microvolt : Maximum voltage to negotiate when configured as sink
- fcs,max-sink-microamp : Maximum current to negotiate when configured as sink
- fcs,max-sink-microwatt : Maximum power to negotiate when configured as sink
If this is less than max-sink-microvolt *
max-sink-microamp then the configured current will
be clamped.
- fcs,operating-sink-microwatt :
Minimum amount of power accepted from a sink
when negotiating
Example:
fusb302: typec-portc@54 {
compatible = "fcs,fusb302";
reg = <0x54>;
interrupt-parent = <&nmi_intc>;
interrupts = <0 IRQ_TYPE_LEVEL_LOW>;
fcs,max-sink-microvolt = <12000000>;
fcs,max-sink-microamp = <3000000>;
fcs,max-sink-microwatt = <36000000>;
};
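
To illustrate the clamping rule described for fcs,max-sink-microwatt above, here is a small standalone sketch of the arithmetic (a hypothetical helper, not code from the fusb302 driver). With the example values above, 12 V * 3 A exactly matches the 36 W cap, so nothing is clamped; a lower cap reduces the current limit:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: clamp the negotiated current so that
 * voltage * current never exceeds the max-sink-microwatt cap.
 */
static uint32_t effective_sink_ua(uint32_t max_uv, uint32_t max_ua, uint64_t max_uw)
{
	uint64_t requested_pw = (uint64_t)max_uv * max_ua;	/* uV * uA = pW */
	uint64_t limit_pw = max_uw * 1000000ULL;		/* uW -> pW */

	if (requested_pw <= limit_pw)
		return max_ua;
	return (uint32_t)(limit_pw / max_uv);	/* clamp current to honor the power cap */
}

int main(void)
{
	/* Values from the example node: 12 V, 3 A, 36 W -> no clamping (3000000 uA) */
	printf("%u uA\n", (unsigned int)effective_sink_ua(12000000, 3000000, 36000000));
	/* A 30 W cap would clamp the current to 2.5 A (2500000 uA) */
	printf("%u uA\n", (unsigned int)effective_sink_ua(12000000, 3000000, 30000000));
	return 0;
}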


@ -0,0 +1,29 @@
Cirrus Logic EP93xx ADC driver.
1. Overview
The driver is intended to work on both low-end (EP9301, EP9302) devices with
5-channel ADC and high-end (EP9307, EP9312, EP9315) devices with 10-channel
touchscreen/ADC module.
2. Channel numbering
Numbering scheme for channels 0..4 is defined in EP9301 and EP9302 datasheets.
EP9307, EP9312 and EP9315 have 3 more channels (8 in total), but their numbering is
not defined, so the last three are numbered arbitrarily here.
Assuming ep93xx_adc is IIO device0, you'd find the following entries under
/sys/bus/iio/devices/iio:device0/:
+-----------------+---------------+
| sysfs entry | ball/pin name |
+-----------------+---------------+
| in_voltage0_raw | YM |
| in_voltage1_raw | SXP |
| in_voltage2_raw | SXM |
| in_voltage3_raw | SYP |
| in_voltage4_raw | SYM |
| in_voltage5_raw | XP |
| in_voltage6_raw | XM |
| in_voltage7_raw | YP |
+-----------------+---------------+


@ -842,7 +842,7 @@ static SIMPLE_DEV_PM_OPS(bma180_pm_ops, bma180_suspend, bma180_resume);
#define BMA180_PM_OPS NULL
#endif
static struct i2c_device_id bma180_ids[] = {
static const struct i2c_device_id bma180_ids[] = {
{ "bma180", BMA180 },
{ "bma250", BMA250 },
{ }


@ -64,6 +64,7 @@ static const struct acpi_device_id bmc150_accel_acpi_match[] = {
{"BMA250E", bma250e},
{"BMA222E", bma222e},
{"BMA0280", bma280},
{"BOSC0200"},
{ },
};
MODULE_DEVICE_TABLE(acpi, bmc150_accel_acpi_match);


@ -139,7 +139,7 @@ static int da311_register_mask_write(struct i2c_client *client, u16 addr,
/* Init sequence taken from the android driver */
static int da311_reset(struct i2c_client *client)
{
const struct {
static const struct {
u16 addr;
u8 mask;
u8 data;


@ -36,7 +36,7 @@
#define SCA3000_LOCKED BIT(5)
#define SCA3000_EEPROM_CS_ERROR BIT(1)
#define SCA3000_SPI_FRAME_ERROR BIT(0)
/* All reads done using register decrement so no need to directly access LSBs */
#define SCA3000_REG_X_MSB_ADDR 0x05
#define SCA3000_REG_Y_MSB_ADDR 0x07
@ -74,7 +74,7 @@
#define SCA3000_REG_INT_STATUS_ADDR 0x16
#define SCA3000_REG_INT_STATUS_THREE_QUARTERS BIT(7)
#define SCA3000_REG_INT_STATUS_HALF BIT(6)
#define SCA3000_INT_STATUS_FREE_FALL BIT(3)
#define SCA3000_INT_STATUS_Y_TRIGGER BIT(2)
#define SCA3000_INT_STATUS_X_TRIGGER BIT(1)
@ -124,7 +124,7 @@
#define SCA3000_REG_INT_MASK_ADDR 0x21
#define SCA3000_REG_INT_MASK_PROT_MASK 0x1C
#define SCA3000_REG_INT_MASK_RING_THREE_QUARTER BIT(7)
#define SCA3000_REG_INT_MASK_RING_HALF BIT(6)


@ -29,10 +29,13 @@ enum st_accel_type {
LIS2DH12,
LIS3L02DQ,
LNG2DM,
H3LIS331DL,
LIS331DL,
LIS3LV02DL,
ST_ACCEL_MAX,
};
#define H3LIS331DL_DRIVER_NAME "h3lis331dl_accel"
#define H3LIS331DL_ACCEL_DEV_NAME "h3lis331dl_accel"
#define LIS3LV02DL_ACCEL_DEV_NAME "lis3lv02dl_accel"
#define LSM303DLHC_ACCEL_DEV_NAME "lsm303dlhc_accel"
#define LIS3DH_ACCEL_DEV_NAME "lis3dh"


@ -161,7 +161,7 @@ static const struct st_sensor_settings st_accel_sensors_settings[] = {
.drdy_irq = {
.addr = 0x22,
.mask_int1 = 0x10,
.mask_int2 = 0x08,
.mask_int2 = 0x00,
.addr_ihl = 0x25,
.mask_ihl = 0x02,
.addr_stat_drdy = ST_SENSORS_DEFAULT_STAT_ADDR,
@ -464,7 +464,7 @@ static const struct st_sensor_settings st_accel_sensors_settings[] = {
.wai = 0x32,
.wai_addr = ST_SENSORS_DEFAULT_WAI_ADDRESS,
.sensors_supported = {
[0] = H3LIS331DL_DRIVER_NAME,
[0] = H3LIS331DL_ACCEL_DEV_NAME,
},
.ch = (struct iio_chan_spec *)st_accel_12bit_channels,
.odr = {
@ -637,7 +637,7 @@ static const struct st_sensor_settings st_accel_sensors_settings[] = {
.drdy_irq = {
.addr = 0x22,
.mask_int1 = 0x10,
.mask_int2 = 0x08,
.mask_int2 = 0x00,
.addr_ihl = 0x25,
.mask_ihl = 0x02,
.addr_stat_drdy = ST_SENSORS_DEFAULT_STAT_ADDR,


@ -84,7 +84,7 @@ static const struct of_device_id st_accel_of_match[] = {
},
{
.compatible = "st,h3lis331dl-accel",
.data = H3LIS331DL_DRIVER_NAME,
.data = H3LIS331DL_ACCEL_DEV_NAME,
},
{
.compatible = "st,lis3l02dq",
@ -126,6 +126,9 @@ static const struct i2c_device_id st_accel_id_table[] = {
{ LIS2DH12_ACCEL_DEV_NAME, LIS2DH12 },
{ LIS3L02DQ_ACCEL_DEV_NAME, LIS3L02DQ },
{ LNG2DM_ACCEL_DEV_NAME, LNG2DM },
{ H3LIS331DL_ACCEL_DEV_NAME, H3LIS331DL },
{ LIS331DL_ACCEL_DEV_NAME, LIS331DL },
{ LIS3LV02DL_ACCEL_DEV_NAME, LIS3LV02DL },
{},
};
MODULE_DEVICE_TABLE(i2c, st_accel_id_table);
@ -144,7 +147,8 @@ static int st_accel_i2c_probe(struct i2c_client *client,
adata = iio_priv(indio_dev);
if (client->dev.of_node) {
st_sensors_of_i2c_probe(client, st_accel_of_match);
st_sensors_of_name_probe(&client->dev, st_accel_of_match,
client->name, sizeof(client->name));
} else if (ACPI_HANDLE(&client->dev)) {
ret = st_sensors_match_acpi_device(&client->dev);
if ((ret < 0) || (ret >= ST_ACCEL_MAX))


@ -18,6 +18,77 @@
#include <linux/iio/common/st_sensors_spi.h>
#include "st_accel.h"
#ifdef CONFIG_OF
/*
* For new single-chip sensors use <device_name> as compatible string.
* For old single-chip devices keep <device_name>-accel to maintain
* compatibility
*/
static const struct of_device_id st_accel_of_match[] = {
{
/* An older compatible */
.compatible = "st,lis302dl-spi",
.data = LIS3LV02DL_ACCEL_DEV_NAME,
},
{
.compatible = "st,lis3lv02dl-accel",
.data = LIS3LV02DL_ACCEL_DEV_NAME,
},
{
.compatible = "st,lis3dh-accel",
.data = LIS3DH_ACCEL_DEV_NAME,
},
{
.compatible = "st,lsm330d-accel",
.data = LSM330D_ACCEL_DEV_NAME,
},
{
.compatible = "st,lsm330dl-accel",
.data = LSM330DL_ACCEL_DEV_NAME,
},
{
.compatible = "st,lsm330dlc-accel",
.data = LSM330DLC_ACCEL_DEV_NAME,
},
{
.compatible = "st,lis331dlh-accel",
.data = LIS331DLH_ACCEL_DEV_NAME,
},
{
.compatible = "st,lsm330-accel",
.data = LSM330_ACCEL_DEV_NAME,
},
{
.compatible = "st,lsm303agr-accel",
.data = LSM303AGR_ACCEL_DEV_NAME,
},
{
.compatible = "st,lis2dh12-accel",
.data = LIS2DH12_ACCEL_DEV_NAME,
},
{
.compatible = "st,lis3l02dq",
.data = LIS3L02DQ_ACCEL_DEV_NAME,
},
{
.compatible = "st,lng2dm-accel",
.data = LNG2DM_ACCEL_DEV_NAME,
},
{
.compatible = "st,h3lis331dl-accel",
.data = H3LIS331DL_ACCEL_DEV_NAME,
},
{
.compatible = "st,lis331dl-accel",
.data = LIS331DL_ACCEL_DEV_NAME,
},
{}
};
MODULE_DEVICE_TABLE(of, st_accel_of_match);
#else
#define st_accel_of_match NULL
#endif
static int st_accel_spi_probe(struct spi_device *spi)
{
struct iio_dev *indio_dev;
@ -30,6 +101,8 @@ static int st_accel_spi_probe(struct spi_device *spi)
adata = iio_priv(indio_dev);
st_sensors_of_name_probe(&spi->dev, st_accel_of_match,
spi->modalias, sizeof(spi->modalias));
st_sensors_spi_configure(indio_dev, spi, adata);
err = st_accel_common_probe(indio_dev);
@ -57,22 +130,17 @@ static const struct spi_device_id st_accel_id_table[] = {
{ LIS2DH12_ACCEL_DEV_NAME },
{ LIS3L02DQ_ACCEL_DEV_NAME },
{ LNG2DM_ACCEL_DEV_NAME },
{ H3LIS331DL_ACCEL_DEV_NAME },
{ LIS331DL_ACCEL_DEV_NAME },
{ LIS3LV02DL_ACCEL_DEV_NAME },
{},
};
MODULE_DEVICE_TABLE(spi, st_accel_id_table);
#ifdef CONFIG_OF
static const struct of_device_id lis302dl_spi_dt_ids[] = {
{ .compatible = "st,lis302dl-spi" },
{}
};
MODULE_DEVICE_TABLE(of, lis302dl_spi_dt_ids);
#endif
static struct spi_driver st_accel_driver = {
.driver = {
.name = "st-accel-spi",
.of_match_table = of_match_ptr(lis302dl_spi_dt_ids),
.of_match_table = of_match_ptr(st_accel_of_match),
},
.probe = st_accel_spi_probe,
.remove = st_accel_spi_remove,


@ -158,6 +158,7 @@ config AT91_SAMA5D2_ADC
tristate "Atmel AT91 SAMA5D2 ADC"
depends on ARCH_AT91 || COMPILE_TEST
depends on HAS_IOMEM
select IIO_TRIGGERED_BUFFER
help
Say yes here to build support for Atmel SAMA5D2 ADC which is
available on SAMA5D2 SoC family.
@ -239,6 +240,15 @@ config DA9150_GPADC
To compile this driver as a module, choose M here: the module will be
called berlin2-adc.
config DLN2_ADC
tristate "Diolan DLN-2 ADC driver support"
depends on MFD_DLN2
help
Say yes here to build support for Diolan DLN-2 ADC.
This driver can also be built as a module. If so, the module will be
called dln2-adc.
config ENVELOPE_DETECTOR
tristate "Envelope detector using a DAC and a comparator"
depends on OF
@ -249,6 +259,17 @@ config ENVELOPE_DETECTOR
To compile this driver as a module, choose M here: the module will be
called envelope-detector.
config EP93XX_ADC
tristate "Cirrus Logic EP93XX ADC driver"
depends on ARCH_EP93XX
help
Driver for the ADC module on the EP93XX series of SoCs from Cirrus Logic.
It's recommended to switch on the CONFIG_HIGH_RES_TIMERS option; in that
case the driver will reduce its CPU usage by 90% in some use cases.
To compile this driver as a module, choose M here: the module will be
called ep93xx_adc.
config EXYNOS_ADC
tristate "Exynos ADC driver support"
depends on ARCH_EXYNOS || ARCH_S3C24XX || ARCH_S3C64XX || (OF && COMPILE_TEST)
@ -322,7 +343,7 @@ config INA2XX_ADC
This driver is mutually exclusive with the HWMON version.
config IMX7D_ADC
tristate "IMX7D ADC driver"
tristate "Freescale IMX7D ADC driver"
depends on ARCH_MXC || COMPILE_TEST
depends on HAS_IOMEM
help
@ -362,6 +383,16 @@ config LPC32XX_ADC
activate only one via device tree selection. Provides direct access
via sysfs.
config LTC2471
tristate "Linear Technology LTC2471 and LTC2473 ADC driver"
depends on I2C
help
Say yes here to build support for Linear Technology LTC2471 and
LTC2473 16-bit I2C ADC.
This driver can also be built as a module. If so, the module will
be called ltc2471.
config LTC2485
tristate "Linear Technology LTC2485 ADC driver"
depends on I2C


@ -24,7 +24,9 @@ obj-$(CONFIG_BERLIN2_ADC) += berlin2-adc.o
obj-$(CONFIG_CC10001_ADC) += cc10001_adc.o
obj-$(CONFIG_CPCAP_ADC) += cpcap-adc.o
obj-$(CONFIG_DA9150_GPADC) += da9150-gpadc.o
obj-$(CONFIG_DLN2_ADC) += dln2-adc.o
obj-$(CONFIG_ENVELOPE_DETECTOR) += envelope-detector.o
obj-$(CONFIG_EP93XX_ADC) += ep93xx_adc.o
obj-$(CONFIG_EXYNOS_ADC) += exynos_adc.o
obj-$(CONFIG_FSL_MX25_ADC) += fsl-imx25-gcq.o
obj-$(CONFIG_HI8435) += hi8435.o
@ -34,6 +36,7 @@ obj-$(CONFIG_INA2XX_ADC) += ina2xx-adc.o
obj-$(CONFIG_LP8788_ADC) += lp8788_adc.o
obj-$(CONFIG_LPC18XX_ADC) += lpc18xx_adc.o
obj-$(CONFIG_LPC32XX_ADC) += lpc32xx_adc.o
obj-$(CONFIG_LTC2471) += ltc2471.o
obj-$(CONFIG_LTC2485) += ltc2485.o
obj-$(CONFIG_LTC2497) += ltc2497.o
obj-$(CONFIG_MAX1027) += max1027.o


@ -103,8 +103,7 @@ static int ad7766_preenable(struct iio_dev *indio_dev)
return ret;
}
if (ad7766->pd_gpio)
gpiod_set_value(ad7766->pd_gpio, 0);
gpiod_set_value(ad7766->pd_gpio, 0);
return 0;
}
@ -113,8 +112,7 @@ static int ad7766_postdisable(struct iio_dev *indio_dev)
{
struct ad7766 *ad7766 = iio_priv(indio_dev);
if (ad7766->pd_gpio)
gpiod_set_value(ad7766->pd_gpio, 1);
gpiod_set_value(ad7766->pd_gpio, 1);
/*
* The PD pin is synchronous to the clock, so give it some time to


@ -25,6 +25,11 @@
#include <linux/wait.h>
#include <linux/iio/iio.h>
#include <linux/iio/sysfs.h>
#include <linux/iio/buffer.h>
#include <linux/iio/trigger.h>
#include <linux/iio/trigger_consumer.h>
#include <linux/iio/triggered_buffer.h>
#include <linux/pinctrl/consumer.h>
#include <linux/regulator/consumer.h>
/* Control Register */
@ -132,6 +137,17 @@
#define AT91_SAMA5D2_PRESSR 0xbc
/* Trigger Register */
#define AT91_SAMA5D2_TRGR 0xc0
/* Mask for TRGMOD field of TRGR register */
#define AT91_SAMA5D2_TRGR_TRGMOD_MASK GENMASK(2, 0)
/* No trigger, only software trigger can start conversions */
#define AT91_SAMA5D2_TRGR_TRGMOD_NO_TRIGGER 0
/* Trigger Mode external trigger rising edge */
#define AT91_SAMA5D2_TRGR_TRGMOD_EXT_TRIG_RISE 1
/* Trigger Mode external trigger falling edge */
#define AT91_SAMA5D2_TRGR_TRGMOD_EXT_TRIG_FALL 2
/* Trigger Mode external trigger any edge */
#define AT91_SAMA5D2_TRGR_TRGMOD_EXT_TRIG_ANY 3
/* Correction Select Register */
#define AT91_SAMA5D2_COSR 0xd0
/* Correction Value Register */
@ -145,14 +161,29 @@
/* Version Register */
#define AT91_SAMA5D2_VERSION 0xfc
#define AT91_SAMA5D2_HW_TRIG_CNT 3
#define AT91_SAMA5D2_SINGLE_CHAN_CNT 12
#define AT91_SAMA5D2_DIFF_CHAN_CNT 6
/*
* Maximum number of bytes to hold conversion from all channels
* plus the timestamp
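* ((12 + 6) channels * 2 bytes each, plus 8 bytes of timestamp = 44 bytes,
* i.e. 22 halfwords)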
*/
#define AT91_BUFFER_MAX_BYTES ((AT91_SAMA5D2_SINGLE_CHAN_CNT + \
AT91_SAMA5D2_DIFF_CHAN_CNT) * 2 + 8)
#define AT91_BUFFER_MAX_HWORDS (AT91_BUFFER_MAX_BYTES / 2)
#define AT91_SAMA5D2_CHAN_SINGLE(num, addr) \
{ \
.type = IIO_VOLTAGE, \
.channel = num, \
.address = addr, \
.scan_index = num, \
.scan_type = { \
.sign = 'u', \
.realbits = 12, \
.storagebits = 16, \
}, \
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \
.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), \
@ -168,9 +199,11 @@
.channel = num, \
.channel2 = num2, \
.address = addr, \
.scan_index = num + AT91_SAMA5D2_SINGLE_CHAN_CNT, \
.scan_type = { \
.sign = 's', \
.realbits = 12, \
.storagebits = 16, \
}, \
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \
.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), \
@ -188,6 +221,12 @@ struct at91_adc_soc_info {
unsigned max_sample_rate;
};
struct at91_adc_trigger {
char *name;
unsigned int trgmod_value;
unsigned int edge_type;
};
struct at91_adc_state {
void __iomem *base;
int irq;
@ -195,11 +234,14 @@ struct at91_adc_state {
struct regulator *reg;
struct regulator *vref;
int vref_uv;
struct iio_trigger *trig;
const struct at91_adc_trigger *selected_trig;
const struct iio_chan_spec *chan;
bool conversion_done;
u32 conversion_value;
struct at91_adc_soc_info soc_info;
wait_queue_head_t wq_data_available;
u16 buffer[AT91_BUFFER_MAX_HWORDS];
/*
* lock to prevent concurrent 'single conversion' requests through
* sysfs.
@ -207,6 +249,24 @@ struct at91_adc_state {
struct mutex lock;
};
static const struct at91_adc_trigger at91_adc_trigger_list[] = {
{
.name = "external_rising",
.trgmod_value = AT91_SAMA5D2_TRGR_TRGMOD_EXT_TRIG_RISE,
.edge_type = IRQ_TYPE_EDGE_RISING,
},
{
.name = "external_falling",
.trgmod_value = AT91_SAMA5D2_TRGR_TRGMOD_EXT_TRIG_FALL,
.edge_type = IRQ_TYPE_EDGE_FALLING,
},
{
.name = "external_any",
.trgmod_value = AT91_SAMA5D2_TRGR_TRGMOD_EXT_TRIG_ANY,
.edge_type = IRQ_TYPE_EDGE_BOTH,
},
};
static const struct iio_chan_spec at91_adc_channels[] = {
AT91_SAMA5D2_CHAN_SINGLE(0, 0x50),
AT91_SAMA5D2_CHAN_SINGLE(1, 0x54),
@ -226,12 +286,132 @@ static const struct iio_chan_spec at91_adc_channels[] = {
AT91_SAMA5D2_CHAN_DIFF(6, 7, 0x68),
AT91_SAMA5D2_CHAN_DIFF(8, 9, 0x70),
AT91_SAMA5D2_CHAN_DIFF(10, 11, 0x78),
IIO_CHAN_SOFT_TIMESTAMP(AT91_SAMA5D2_SINGLE_CHAN_CNT
+ AT91_SAMA5D2_DIFF_CHAN_CNT + 1),
};
static int at91_adc_configure_trigger(struct iio_trigger *trig, bool state)
{
struct iio_dev *indio = iio_trigger_get_drvdata(trig);
struct at91_adc_state *st = iio_priv(indio);
u32 status = at91_adc_readl(st, AT91_SAMA5D2_TRGR);
u8 bit;
/* clear TRGMOD */
status &= ~AT91_SAMA5D2_TRGR_TRGMOD_MASK;
if (state)
status |= st->selected_trig->trgmod_value;
/* set/unset hw trigger */
at91_adc_writel(st, AT91_SAMA5D2_TRGR, status);
for_each_set_bit(bit, indio->active_scan_mask, indio->num_channels) {
struct iio_chan_spec const *chan = indio->channels + bit;
if (state) {
at91_adc_writel(st, AT91_SAMA5D2_CHER,
BIT(chan->channel));
at91_adc_writel(st, AT91_SAMA5D2_IER,
BIT(chan->channel));
} else {
at91_adc_writel(st, AT91_SAMA5D2_IDR,
BIT(chan->channel));
at91_adc_writel(st, AT91_SAMA5D2_CHDR,
BIT(chan->channel));
}
}
return 0;
}
static int at91_adc_reenable_trigger(struct iio_trigger *trig)
{
struct iio_dev *indio = iio_trigger_get_drvdata(trig);
struct at91_adc_state *st = iio_priv(indio);
enable_irq(st->irq);
/* Needed to ACK the DRDY interruption */
at91_adc_readl(st, AT91_SAMA5D2_LCDR);
return 0;
}
static const struct iio_trigger_ops at91_adc_trigger_ops = {
.owner = THIS_MODULE,
.set_trigger_state = &at91_adc_configure_trigger,
.try_reenable = &at91_adc_reenable_trigger,
};
static struct iio_trigger *at91_adc_allocate_trigger(struct iio_dev *indio,
char *trigger_name)
{
struct iio_trigger *trig;
int ret;
trig = devm_iio_trigger_alloc(&indio->dev, "%s-dev%d-%s", indio->name,
indio->id, trigger_name);
if (!trig)
return NULL;
trig->dev.parent = indio->dev.parent;
iio_trigger_set_drvdata(trig, indio);
trig->ops = &at91_adc_trigger_ops;
ret = devm_iio_trigger_register(&indio->dev, trig);
if (ret)
return ERR_PTR(ret);
return trig;
}
static int at91_adc_trigger_init(struct iio_dev *indio)
{
struct at91_adc_state *st = iio_priv(indio);
st->trig = at91_adc_allocate_trigger(indio, st->selected_trig->name);
if (IS_ERR(st->trig)) {
dev_err(&indio->dev,
"could not allocate trigger\n");
return PTR_ERR(st->trig);
}
return 0;
}
static irqreturn_t at91_adc_trigger_handler(int irq, void *p)
{
struct iio_poll_func *pf = p;
struct iio_dev *indio = pf->indio_dev;
struct at91_adc_state *st = iio_priv(indio);
int i = 0;
u8 bit;
for_each_set_bit(bit, indio->active_scan_mask, indio->num_channels) {
struct iio_chan_spec const *chan = indio->channels + bit;
st->buffer[i] = at91_adc_readl(st, chan->address);
i++;
}
iio_push_to_buffers_with_timestamp(indio, st->buffer, pf->timestamp);
iio_trigger_notify_done(indio->trig);
return IRQ_HANDLED;
}
static int at91_adc_buffer_init(struct iio_dev *indio)
{
return devm_iio_triggered_buffer_setup(&indio->dev, indio,
&iio_pollfunc_store_time,
&at91_adc_trigger_handler, NULL);
}
static unsigned at91_adc_startup_time(unsigned startup_time_min,
unsigned adc_clk_khz)
{
const unsigned startup_lookup[] = {
static const unsigned int startup_lookup[] = {
0, 8, 16, 24,
64, 80, 96, 112,
512, 576, 640, 704,
@ -293,14 +473,18 @@ static irqreturn_t at91_adc_interrupt(int irq, void *private)
u32 status = at91_adc_readl(st, AT91_SAMA5D2_ISR);
u32 imr = at91_adc_readl(st, AT91_SAMA5D2_IMR);
if (status & imr) {
if (!(status & imr))
return IRQ_NONE;
if (iio_buffer_enabled(indio)) {
disable_irq_nosync(irq);
iio_trigger_poll(indio->trig);
} else {
st->conversion_value = at91_adc_readl(st, st->chan->address);
st->conversion_done = true;
wake_up_interruptible(&st->wq_data_available);
return IRQ_HANDLED;
}
return IRQ_NONE;
return IRQ_HANDLED;
}
static int at91_adc_read_raw(struct iio_dev *indio_dev,
@ -313,6 +497,11 @@ static int at91_adc_read_raw(struct iio_dev *indio_dev,
switch (mask) {
case IIO_CHAN_INFO_RAW:
/* we cannot use software trigger if hw trigger enabled */
ret = iio_device_claim_direct_mode(indio_dev);
if (ret)
return ret;
mutex_lock(&st->lock);
st->chan = chan;
@ -344,6 +533,8 @@ static int at91_adc_read_raw(struct iio_dev *indio_dev,
at91_adc_writel(st, AT91_SAMA5D2_CHDR, BIT(chan->channel));
mutex_unlock(&st->lock);
iio_device_release_direct_mode(indio_dev);
return ret;
case IIO_CHAN_INFO_SCALE:
@ -386,12 +577,27 @@ static const struct iio_info at91_adc_info = {
.driver_module = THIS_MODULE,
};
static void at91_adc_hw_init(struct at91_adc_state *st)
{
at91_adc_writel(st, AT91_SAMA5D2_CR, AT91_SAMA5D2_CR_SWRST);
at91_adc_writel(st, AT91_SAMA5D2_IDR, 0xffffffff);
/*
* Transfer field must be set to 2 according to the datasheet and
* allows different analog settings for each channel.
*/
at91_adc_writel(st, AT91_SAMA5D2_MR,
AT91_SAMA5D2_MR_TRANSFER(2) | AT91_SAMA5D2_MR_ANACH);
at91_adc_setup_samp_freq(st, st->soc_info.min_sample_rate);
}
static int at91_adc_probe(struct platform_device *pdev)
{
struct iio_dev *indio_dev;
struct at91_adc_state *st;
struct resource *res;
int ret;
int ret, i;
u32 edge_type;
indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*st));
if (!indio_dev)
@ -432,6 +638,27 @@ static int at91_adc_probe(struct platform_device *pdev)
return ret;
}
ret = of_property_read_u32(pdev->dev.of_node,
"atmel,trigger-edge-type", &edge_type);
if (ret) {
dev_err(&pdev->dev,
"invalid or missing value for atmel,trigger-edge-type\n");
return ret;
}
st->selected_trig = NULL;
for (i = 0; i < AT91_SAMA5D2_HW_TRIG_CNT; i++)
if (at91_adc_trigger_list[i].edge_type == edge_type) {
st->selected_trig = &at91_adc_trigger_list[i];
break;
}
if (!st->selected_trig) {
dev_err(&pdev->dev, "invalid external trigger edge value\n");
return -EINVAL;
}
init_waitqueue_head(&st->wq_data_available);
mutex_init(&st->lock);
@ -482,16 +709,7 @@ static int at91_adc_probe(struct platform_device *pdev)
goto vref_disable;
}
at91_adc_writel(st, AT91_SAMA5D2_CR, AT91_SAMA5D2_CR_SWRST);
at91_adc_writel(st, AT91_SAMA5D2_IDR, 0xffffffff);
/*
* Transfer field must be set to 2 according to the datasheet and
* allows different analog settings for each channel.
*/
at91_adc_writel(st, AT91_SAMA5D2_MR,
AT91_SAMA5D2_MR_TRANSFER(2) | AT91_SAMA5D2_MR_ANACH);
at91_adc_setup_samp_freq(st, st->soc_info.min_sample_rate);
at91_adc_hw_init(st);
ret = clk_prepare_enable(st->per_clk);
if (ret)
@ -499,10 +717,25 @@ static int at91_adc_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, indio_dev);
ret = at91_adc_buffer_init(indio_dev);
if (ret < 0) {
dev_err(&pdev->dev, "couldn't initialize the buffer.\n");
goto per_clk_disable_unprepare;
}
ret = at91_adc_trigger_init(indio_dev);
if (ret < 0) {
dev_err(&pdev->dev, "couldn't setup the triggers.\n");
goto per_clk_disable_unprepare;
}
ret = iio_device_register(indio_dev);
if (ret < 0)
goto per_clk_disable_unprepare;
dev_info(&pdev->dev, "setting up trigger as %s\n",
st->selected_trig->name);
dev_info(&pdev->dev, "version: %x\n",
readl_relaxed(st->base + AT91_SAMA5D2_VERSION));
@ -532,6 +765,69 @@ static int at91_adc_remove(struct platform_device *pdev)
return 0;
}
static __maybe_unused int at91_adc_suspend(struct device *dev)
{
struct iio_dev *indio_dev =
platform_get_drvdata(to_platform_device(dev));
struct at91_adc_state *st = iio_priv(indio_dev);
/*
* Do a software reset of the ADC before we go to suspend.
* This will ensure that all pins are freed from being muxed by the ADC
* and can be used by other devices.
* Otherwise, the ADC will hog them and we can't go to suspend mode.
*/
at91_adc_writel(st, AT91_SAMA5D2_CR, AT91_SAMA5D2_CR_SWRST);
clk_disable_unprepare(st->per_clk);
regulator_disable(st->vref);
regulator_disable(st->reg);
return pinctrl_pm_select_sleep_state(dev);
}
static __maybe_unused int at91_adc_resume(struct device *dev)
{
struct iio_dev *indio_dev =
platform_get_drvdata(to_platform_device(dev));
struct at91_adc_state *st = iio_priv(indio_dev);
int ret;
ret = pinctrl_pm_select_default_state(dev);
if (ret)
goto resume_failed;
ret = regulator_enable(st->reg);
if (ret)
goto resume_failed;
ret = regulator_enable(st->vref);
if (ret)
goto reg_disable_resume;
ret = clk_prepare_enable(st->per_clk);
if (ret)
goto vref_disable_resume;
at91_adc_hw_init(st);
/* reconfiguring trigger hardware state */
if (iio_buffer_enabled(indio_dev))
at91_adc_configure_trigger(st->trig, true);
return 0;
vref_disable_resume:
regulator_disable(st->vref);
reg_disable_resume:
regulator_disable(st->reg);
resume_failed:
dev_err(&indio_dev->dev, "failed to resume\n");
return ret;
}
static SIMPLE_DEV_PM_OPS(at91_adc_pm_ops, at91_adc_suspend, at91_adc_resume);
static const struct of_device_id at91_adc_dt_match[] = {
{
.compatible = "atmel,sama5d2-adc",
@ -547,6 +843,7 @@ static struct platform_driver at91_adc_driver = {
.driver = {
.name = "at91-sama5d2_adc",
.of_match_table = at91_adc_dt_match,
.pm = &at91_adc_pm_ops,
},
};
module_platform_driver(at91_adc_driver)


@ -799,7 +799,7 @@ static u32 calc_startup_ticks_9x5(u32 startup_time, u32 adc_clk_khz)
* For sama5d3x and at91sam9x5, the formula changes to:
* Startup Time = <lookup_table_value> / ADC Clock
*/
const int startup_lookup[] = {
static const int startup_lookup[] = {
0, 8, 16, 24,
64, 80, 96, 112,
512, 576, 640, 704,


@ -0,0 +1,722 @@
/*
* Driver for the Diolan DLN-2 USB-ADC adapter
*
* Copyright (c) 2017 Jack Andersen
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation, version 2.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/types.h>
#include <linux/platform_device.h>
#include <linux/mfd/dln2.h>
#include <linux/iio/iio.h>
#include <linux/iio/sysfs.h>
#include <linux/iio/trigger.h>
#include <linux/iio/trigger_consumer.h>
#include <linux/iio/triggered_buffer.h>
#include <linux/iio/buffer.h>
#include <linux/iio/kfifo_buf.h>
#define DLN2_ADC_MOD_NAME "dln2-adc"
#define DLN2_ADC_ID 0x06
#define DLN2_ADC_GET_CHANNEL_COUNT DLN2_CMD(0x01, DLN2_ADC_ID)
#define DLN2_ADC_ENABLE DLN2_CMD(0x02, DLN2_ADC_ID)
#define DLN2_ADC_DISABLE DLN2_CMD(0x03, DLN2_ADC_ID)
#define DLN2_ADC_CHANNEL_ENABLE DLN2_CMD(0x05, DLN2_ADC_ID)
#define DLN2_ADC_CHANNEL_DISABLE DLN2_CMD(0x06, DLN2_ADC_ID)
#define DLN2_ADC_SET_RESOLUTION DLN2_CMD(0x08, DLN2_ADC_ID)
#define DLN2_ADC_CHANNEL_GET_VAL DLN2_CMD(0x0A, DLN2_ADC_ID)
#define DLN2_ADC_CHANNEL_GET_ALL_VAL DLN2_CMD(0x0B, DLN2_ADC_ID)
#define DLN2_ADC_CHANNEL_SET_CFG DLN2_CMD(0x0C, DLN2_ADC_ID)
#define DLN2_ADC_CHANNEL_GET_CFG DLN2_CMD(0x0D, DLN2_ADC_ID)
#define DLN2_ADC_CONDITION_MET_EV DLN2_CMD(0x10, DLN2_ADC_ID)
#define DLN2_ADC_EVENT_NONE 0
#define DLN2_ADC_EVENT_BELOW 1
#define DLN2_ADC_EVENT_LEVEL_ABOVE 2
#define DLN2_ADC_EVENT_OUTSIDE 3
#define DLN2_ADC_EVENT_INSIDE 4
#define DLN2_ADC_EVENT_ALWAYS 5
#define DLN2_ADC_MAX_CHANNELS 8
#define DLN2_ADC_DATA_BITS 10
/*
* Plays similar role to iio_demux_table in subsystem core; except allocated
* in a fixed 8-element array.
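* Each entry copies 'length' bytes from offset 'from' in the raw 8-channel
* reading to offset 'to' in the demuxed scan, e.g. with only channels 1 and 3
* enabled: bytes 2..3 -> 0..1 and bytes 6..7 -> 2..3.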
*/
struct dln2_adc_demux_table {
unsigned int from;
unsigned int to;
unsigned int length;
};
struct dln2_adc {
struct platform_device *pdev;
struct iio_chan_spec iio_channels[DLN2_ADC_MAX_CHANNELS + 1];
int port, trigger_chan;
struct iio_trigger *trig;
struct mutex mutex;
/* Cached sample period in milliseconds */
unsigned int sample_period;
/* Demux table */
unsigned int demux_count;
struct dln2_adc_demux_table demux[DLN2_ADC_MAX_CHANNELS];
/* Precomputed timestamp padding offset and length */
unsigned int ts_pad_offset, ts_pad_length;
};
struct dln2_adc_port_chan {
u8 port;
u8 chan;
};
struct dln2_adc_get_all_vals {
__le16 channel_mask;
__le16 values[DLN2_ADC_MAX_CHANNELS];
};
static void dln2_adc_add_demux(struct dln2_adc *dln2,
unsigned int in_loc, unsigned int out_loc,
unsigned int length)
{
struct dln2_adc_demux_table *p = dln2->demux_count ?
&dln2->demux[dln2->demux_count - 1] : NULL;
if (p && p->from + p->length == in_loc &&
p->to + p->length == out_loc) {
p->length += length;
} else if (dln2->demux_count < DLN2_ADC_MAX_CHANNELS) {
p = &dln2->demux[dln2->demux_count++];
p->from = in_loc;
p->to = out_loc;
p->length = length;
}
}
static void dln2_adc_update_demux(struct dln2_adc *dln2)
{
int in_ind = -1, out_ind;
unsigned int in_loc = 0, out_loc = 0;
struct iio_dev *indio_dev = platform_get_drvdata(dln2->pdev);
/* Clear out any old demux */
dln2->demux_count = 0;
/* Optimize all 8-channels case */
if (indio_dev->masklength &&
(*indio_dev->active_scan_mask & 0xff) == 0xff) {
dln2_adc_add_demux(dln2, 0, 0, 16);
dln2->ts_pad_offset = 0;
dln2->ts_pad_length = 0;
return;
}
/* Build demux table from fixed 8-channels to active_scan_mask */
for_each_set_bit(out_ind,
indio_dev->active_scan_mask,
indio_dev->masklength) {
/* Handle timestamp separately */
if (out_ind == DLN2_ADC_MAX_CHANNELS)
break;
for (++in_ind; in_ind != out_ind; ++in_ind)
in_loc += 2;
dln2_adc_add_demux(dln2, in_loc, out_loc, 2);
out_loc += 2;
in_loc += 2;
}
if (indio_dev->scan_timestamp) {
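/*
* The timestamp occupies the last 64-bit slot of the scan buffer, so pad
* from the end of the demuxed sample data up to that 8-byte aligned slot.
*/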
size_t ts_offset = indio_dev->scan_bytes / sizeof(int64_t) - 1;
dln2->ts_pad_offset = out_loc;
dln2->ts_pad_length = ts_offset * sizeof(int64_t) - out_loc;
} else {
dln2->ts_pad_offset = 0;
dln2->ts_pad_length = 0;
}
}
static int dln2_adc_get_chan_count(struct dln2_adc *dln2)
{
int ret;
u8 port = dln2->port;
u8 count;
int olen = sizeof(count);
ret = dln2_transfer(dln2->pdev, DLN2_ADC_GET_CHANNEL_COUNT,
&port, sizeof(port), &count, &olen);
if (ret < 0) {
dev_dbg(&dln2->pdev->dev, "Problem in %s\n", __func__);
return ret;
}
if (olen < sizeof(count))
return -EPROTO;
return count;
}
static int dln2_adc_set_port_resolution(struct dln2_adc *dln2)
{
int ret;
struct dln2_adc_port_chan port_chan = {
.port = dln2->port,
.chan = DLN2_ADC_DATA_BITS,
};
ret = dln2_transfer_tx(dln2->pdev, DLN2_ADC_SET_RESOLUTION,
&port_chan, sizeof(port_chan));
if (ret < 0)
dev_dbg(&dln2->pdev->dev, "Problem in %s\n", __func__);
return ret;
}
static int dln2_adc_set_chan_enabled(struct dln2_adc *dln2,
int channel, bool enable)
{
int ret;
struct dln2_adc_port_chan port_chan = {
.port = dln2->port,
.chan = channel,
};
u16 cmd = enable ? DLN2_ADC_CHANNEL_ENABLE : DLN2_ADC_CHANNEL_DISABLE;
ret = dln2_transfer_tx(dln2->pdev, cmd, &port_chan, sizeof(port_chan));
if (ret < 0)
dev_dbg(&dln2->pdev->dev, "Problem in %s\n", __func__);
return ret;
}
static int dln2_adc_set_port_enabled(struct dln2_adc *dln2, bool enable,
u16 *conflict_out)
{
int ret;
u8 port = dln2->port;
__le16 conflict;
int olen = sizeof(conflict);
u16 cmd = enable ? DLN2_ADC_ENABLE : DLN2_ADC_DISABLE;
if (conflict_out)
*conflict_out = 0;
ret = dln2_transfer(dln2->pdev, cmd, &port, sizeof(port),
&conflict, &olen);
if (ret < 0) {
dev_dbg(&dln2->pdev->dev, "Problem in %s(%d)\n",
__func__, (int)enable);
if (conflict_out && enable && olen >= sizeof(conflict))
*conflict_out = le16_to_cpu(conflict);
return ret;
}
if (enable && olen < sizeof(conflict))
return -EPROTO;
return ret;
}
static int dln2_adc_set_chan_period(struct dln2_adc *dln2,
unsigned int channel, unsigned int period)
{
int ret;
struct {
struct dln2_adc_port_chan port_chan;
__u8 type;
__le16 period;
__le16 low;
__le16 high;
} __packed set_cfg = {
.port_chan.port = dln2->port,
.port_chan.chan = channel,
.type = period ? DLN2_ADC_EVENT_ALWAYS : DLN2_ADC_EVENT_NONE,
.period = cpu_to_le16(period)
};
ret = dln2_transfer_tx(dln2->pdev, DLN2_ADC_CHANNEL_SET_CFG,
&set_cfg, sizeof(set_cfg));
if (ret < 0)
dev_dbg(&dln2->pdev->dev, "Problem in %s\n", __func__);
return ret;
}
static int dln2_adc_read(struct dln2_adc *dln2, unsigned int channel)
{
int ret, i;
struct iio_dev *indio_dev = platform_get_drvdata(dln2->pdev);
u16 conflict;
__le16 value;
int olen = sizeof(value);
struct dln2_adc_port_chan port_chan = {
.port = dln2->port,
.chan = channel,
};
ret = iio_device_claim_direct_mode(indio_dev);
if (ret < 0)
return ret;
ret = dln2_adc_set_chan_enabled(dln2, channel, true);
if (ret < 0)
goto release_direct;
ret = dln2_adc_set_port_enabled(dln2, true, &conflict);
if (ret < 0) {
if (conflict) {
dev_err(&dln2->pdev->dev,
"ADC pins conflict with mask %04X\n",
(int)conflict);
ret = -EBUSY;
}
goto disable_chan;
}
/*
* Call GET_VAL twice due to initial zero-return immediately after
* enabling channel.
*/
for (i = 0; i < 2; ++i) {
ret = dln2_transfer(dln2->pdev, DLN2_ADC_CHANNEL_GET_VAL,
&port_chan, sizeof(port_chan),
&value, &olen);
if (ret < 0) {
dev_dbg(&dln2->pdev->dev, "Problem in %s\n", __func__);
goto disable_port;
}
if (olen < sizeof(value)) {
ret = -EPROTO;
goto disable_port;
}
}
ret = le16_to_cpu(value);
disable_port:
dln2_adc_set_port_enabled(dln2, false, NULL);
disable_chan:
dln2_adc_set_chan_enabled(dln2, channel, false);
release_direct:
iio_device_release_direct_mode(indio_dev);
return ret;
}
static int dln2_adc_read_all(struct dln2_adc *dln2,
struct dln2_adc_get_all_vals *get_all_vals)
{
int ret;
__u8 port = dln2->port;
int olen = sizeof(*get_all_vals);
ret = dln2_transfer(dln2->pdev, DLN2_ADC_CHANNEL_GET_ALL_VAL,
&port, sizeof(port), get_all_vals, &olen);
if (ret < 0) {
dev_dbg(&dln2->pdev->dev, "Problem in %s\n", __func__);
return ret;
}
if (olen < sizeof(*get_all_vals))
return -EPROTO;
return ret;
}
static int dln2_adc_read_raw(struct iio_dev *indio_dev,
struct iio_chan_spec const *chan,
int *val,
int *val2,
long mask)
{
int ret;
unsigned int microhertz;
struct dln2_adc *dln2 = iio_priv(indio_dev);
switch (mask) {
case IIO_CHAN_INFO_RAW:
mutex_lock(&dln2->mutex);
ret = dln2_adc_read(dln2, chan->channel);
mutex_unlock(&dln2->mutex);
if (ret < 0)
return ret;
*val = ret;
return IIO_VAL_INT;
case IIO_CHAN_INFO_SCALE:
/*
* Voltage reference is fixed at 3.3v
* 3.3 / (1 << 10) * 1000000000
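* = 3222656.25, i.e. ~3222656 nV per LSB, hence the
* IIO_VAL_INT_PLUS_NANO result of 0 + 3222656 below.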
*/
*val = 0;
*val2 = 3222656;
return IIO_VAL_INT_PLUS_NANO;
case IIO_CHAN_INFO_SAMP_FREQ:
if (dln2->sample_period) {
microhertz = 1000000000 / dln2->sample_period;
*val = microhertz / 1000000;
*val2 = microhertz % 1000000;
} else {
*val = 0;
*val2 = 0;
}
return IIO_VAL_INT_PLUS_MICRO;
default:
return -EINVAL;
}
}
static int dln2_adc_write_raw(struct iio_dev *indio_dev,
struct iio_chan_spec const *chan,
int val,
int val2,
long mask)
{
int ret;
unsigned int microhertz;
struct dln2_adc *dln2 = iio_priv(indio_dev);
switch (mask) {
case IIO_CHAN_INFO_SAMP_FREQ:
microhertz = 1000000 * val + val2;
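/*
* val is whole Hz and val2 the IIO_VAL_INT_PLUS_MICRO fraction, so
* microhertz holds the requested rate in uHz; 1000000000 / microhertz
* below gives the sampling period in milliseconds (1 Hz -> 1000 ms).
*/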
mutex_lock(&dln2->mutex);
dln2->sample_period =
microhertz ? 1000000000 / microhertz : UINT_MAX;
if (dln2->sample_period > 65535) {
dln2->sample_period = 65535;
dev_warn(&dln2->pdev->dev,
"clamping period to 65535ms\n");
}
/*
* The first requested channel is arbitrated as a shared
* trigger source, so only one event is registered with the
* DLN. The event handler will then read all enabled channel
* values using DLN2_ADC_CHANNEL_GET_ALL_VAL to maintain
* synchronization between ADC readings.
*/
if (dln2->trigger_chan != -1)
ret = dln2_adc_set_chan_period(dln2,
dln2->trigger_chan, dln2->sample_period);
else
ret = 0;
mutex_unlock(&dln2->mutex);
return ret;
default:
return -EINVAL;
}
}
static int dln2_update_scan_mode(struct iio_dev *indio_dev,
const unsigned long *scan_mask)
{
struct dln2_adc *dln2 = iio_priv(indio_dev);
int chan_count = indio_dev->num_channels - 1;
int ret, i, j;
mutex_lock(&dln2->mutex);
for (i = 0; i < chan_count; ++i) {
ret = dln2_adc_set_chan_enabled(dln2, i,
test_bit(i, scan_mask));
if (ret < 0) {
for (j = 0; j < i; ++j)
dln2_adc_set_chan_enabled(dln2, j, false);
mutex_unlock(&dln2->mutex);
dev_err(&dln2->pdev->dev,
"Unable to enable ADC channel %d\n", i);
return -EBUSY;
}
}
dln2_adc_update_demux(dln2);
mutex_unlock(&dln2->mutex);
return 0;
}
#define DLN2_ADC_CHAN(lval, idx) { \
lval.type = IIO_VOLTAGE; \
lval.channel = idx; \
lval.indexed = 1; \
lval.info_mask_separate = BIT(IIO_CHAN_INFO_RAW); \
lval.info_mask_shared_by_all = BIT(IIO_CHAN_INFO_SCALE) | \
BIT(IIO_CHAN_INFO_SAMP_FREQ); \
lval.scan_index = idx; \
lval.scan_type.sign = 'u'; \
lval.scan_type.realbits = DLN2_ADC_DATA_BITS; \
lval.scan_type.storagebits = 16; \
lval.scan_type.endianness = IIO_LE; \
}
/* Assignment version of IIO_CHAN_SOFT_TIMESTAMP */
#define IIO_CHAN_SOFT_TIMESTAMP_ASSIGN(lval, _si) { \
lval.type = IIO_TIMESTAMP; \
lval.channel = -1; \
lval.scan_index = _si; \
lval.scan_type.sign = 's'; \
lval.scan_type.realbits = 64; \
lval.scan_type.storagebits = 64; \
}
static const struct iio_info dln2_adc_info = {
.read_raw = dln2_adc_read_raw,
.write_raw = dln2_adc_write_raw,
.update_scan_mode = dln2_update_scan_mode,
.driver_module = THIS_MODULE,
};
static irqreturn_t dln2_adc_trigger_h(int irq, void *p)
{
struct iio_poll_func *pf = p;
struct iio_dev *indio_dev = pf->indio_dev;
struct {
__le16 values[DLN2_ADC_MAX_CHANNELS];
int64_t timestamp_space;
} data;
struct dln2_adc_get_all_vals dev_data;
struct dln2_adc *dln2 = iio_priv(indio_dev);
const struct dln2_adc_demux_table *t;
int ret, i;
mutex_lock(&dln2->mutex);
ret = dln2_adc_read_all(dln2, &dev_data);
mutex_unlock(&dln2->mutex);
if (ret < 0)
goto done;
/* Demux operation */
for (i = 0; i < dln2->demux_count; ++i) {
t = &dln2->demux[i];
memcpy((void *)data.values + t->to,
(void *)dev_data.values + t->from, t->length);
}
/* Zero padding space between values and timestamp */
if (dln2->ts_pad_length)
memset((void *)data.values + dln2->ts_pad_offset,
0, dln2->ts_pad_length);
iio_push_to_buffers_with_timestamp(indio_dev, &data,
iio_get_time_ns(indio_dev));
done:
iio_trigger_notify_done(indio_dev->trig);
return IRQ_HANDLED;
}
static int dln2_adc_triggered_buffer_postenable(struct iio_dev *indio_dev)
{
int ret;
struct dln2_adc *dln2 = iio_priv(indio_dev);
u16 conflict;
unsigned int trigger_chan;
mutex_lock(&dln2->mutex);
/* Enable ADC */
ret = dln2_adc_set_port_enabled(dln2, true, &conflict);
if (ret < 0) {
mutex_unlock(&dln2->mutex);
dev_dbg(&dln2->pdev->dev, "Problem in %s\n", __func__);
if (conflict) {
dev_err(&dln2->pdev->dev,
"ADC pins conflict with mask %04X\n",
(int)conflict);
ret = -EBUSY;
}
return ret;
}
/* Assign trigger channel based on first enabled channel */
trigger_chan = find_first_bit(indio_dev->active_scan_mask,
indio_dev->masklength);
if (trigger_chan < DLN2_ADC_MAX_CHANNELS) {
dln2->trigger_chan = trigger_chan;
ret = dln2_adc_set_chan_period(dln2, dln2->trigger_chan,
dln2->sample_period);
mutex_unlock(&dln2->mutex);
if (ret < 0) {
dev_dbg(&dln2->pdev->dev, "Problem in %s\n", __func__);
return ret;
}
} else {
dln2->trigger_chan = -1;
mutex_unlock(&dln2->mutex);
}
return iio_triggered_buffer_postenable(indio_dev);
}
static int dln2_adc_triggered_buffer_predisable(struct iio_dev *indio_dev)
{
int ret;
struct dln2_adc *dln2 = iio_priv(indio_dev);
mutex_lock(&dln2->mutex);
/* Disable trigger channel */
if (dln2->trigger_chan != -1) {
dln2_adc_set_chan_period(dln2, dln2->trigger_chan, 0);
dln2->trigger_chan = -1;
}
/* Disable ADC */
ret = dln2_adc_set_port_enabled(dln2, false, NULL);
mutex_unlock(&dln2->mutex);
if (ret < 0) {
dev_dbg(&dln2->pdev->dev, "Problem in %s\n", __func__);
return ret;
}
return iio_triggered_buffer_predisable(indio_dev);
}
static const struct iio_buffer_setup_ops dln2_adc_buffer_setup_ops = {
.postenable = dln2_adc_triggered_buffer_postenable,
.predisable = dln2_adc_triggered_buffer_predisable,
};
static void dln2_adc_event(struct platform_device *pdev, u16 echo,
const void *data, int len)
{
struct iio_dev *indio_dev = platform_get_drvdata(pdev);
struct dln2_adc *dln2 = iio_priv(indio_dev);
/* Called via URB completion handler */
iio_trigger_poll(dln2->trig);
}
static const struct iio_trigger_ops dln2_adc_trigger_ops = {
.owner = THIS_MODULE,
};
static int dln2_adc_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct dln2_adc *dln2;
struct dln2_platform_data *pdata = dev_get_platdata(&pdev->dev);
struct iio_dev *indio_dev;
int i, ret, chans;
indio_dev = devm_iio_device_alloc(dev, sizeof(*dln2));
if (!indio_dev) {
dev_err(dev, "failed allocating iio device\n");
return -ENOMEM;
}
dln2 = iio_priv(indio_dev);
dln2->pdev = pdev;
dln2->port = pdata->port;
dln2->trigger_chan = -1;
mutex_init(&dln2->mutex);
platform_set_drvdata(pdev, indio_dev);
ret = dln2_adc_set_port_resolution(dln2);
if (ret < 0) {
dev_err(dev, "failed to set ADC resolution to 10 bits\n");
return ret;
}
chans = dln2_adc_get_chan_count(dln2);
if (chans < 0) {
dev_err(dev, "failed to get channel count: %d\n", chans);
return chans;
}
if (chans > DLN2_ADC_MAX_CHANNELS) {
chans = DLN2_ADC_MAX_CHANNELS;
dev_warn(dev, "clamping channels to %d\n",
DLN2_ADC_MAX_CHANNELS);
}
for (i = 0; i < chans; ++i)
DLN2_ADC_CHAN(dln2->iio_channels[i], i)
IIO_CHAN_SOFT_TIMESTAMP_ASSIGN(dln2->iio_channels[i], i);
indio_dev->name = DLN2_ADC_MOD_NAME;
indio_dev->dev.parent = dev;
indio_dev->info = &dln2_adc_info;
indio_dev->modes = INDIO_DIRECT_MODE;
indio_dev->channels = dln2->iio_channels;
indio_dev->num_channels = chans + 1;
indio_dev->setup_ops = &dln2_adc_buffer_setup_ops;
dln2->trig = devm_iio_trigger_alloc(dev, "%s-dev%d",
indio_dev->name, indio_dev->id);
if (!dln2->trig) {
dev_err(dev, "failed to allocate trigger\n");
return -ENOMEM;
}
dln2->trig->ops = &dln2_adc_trigger_ops;
iio_trigger_set_drvdata(dln2->trig, dln2);
devm_iio_trigger_register(dev, dln2->trig);
iio_trigger_set_immutable(indio_dev, dln2->trig);
ret = devm_iio_triggered_buffer_setup(dev, indio_dev, NULL,
dln2_adc_trigger_h,
&dln2_adc_buffer_setup_ops);
if (ret) {
dev_err(dev, "failed to allocate triggered buffer: %d\n", ret);
return ret;
}
ret = dln2_register_event_cb(pdev, DLN2_ADC_CONDITION_MET_EV,
dln2_adc_event);
if (ret) {
dev_err(dev, "failed to setup DLN2 periodic event: %d\n", ret);
return ret;
}
ret = iio_device_register(indio_dev);
if (ret) {
dev_err(dev, "failed to register iio device: %d\n", ret);
goto unregister_event;
}
return ret;
unregister_event:
dln2_unregister_event_cb(pdev, DLN2_ADC_CONDITION_MET_EV);
return ret;
}
static int dln2_adc_remove(struct platform_device *pdev)
{
struct iio_dev *indio_dev = platform_get_drvdata(pdev);
iio_device_unregister(indio_dev);
dln2_unregister_event_cb(pdev, DLN2_ADC_CONDITION_MET_EV);
return 0;
}
static struct platform_driver dln2_adc_driver = {
.driver.name = DLN2_ADC_MOD_NAME,
.probe = dln2_adc_probe,
.remove = dln2_adc_remove,
};
module_platform_driver(dln2_adc_driver);
MODULE_AUTHOR("Jack Andersen <jackoalan@gmail.com");
MODULE_DESCRIPTION("Driver for the Diolan DLN2 ADC interface");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("platform:dln2-adc");


@ -0,0 +1,255 @@
/*
* Driver for ADC module on the Cirrus Logic EP93xx series of SoCs
*
* Copyright (C) 2015 Alexander Sverdlin
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* The driver uses polling to get the conversion status. According to EP93xx
* datasheets, reading the ADCResult register starts the conversion, but the user
* is also responsible for ensuring that the delay between adjacent conversion
* triggers is long enough that the maximum allowed conversion rate is not
* exceeded. This basically renders IRQ mode unusable.
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/iio/iio.h>
#include <linux/io.h>
#include <linux/irqflags.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/platform_device.h>
/*
* This code could benefit from real HR Timers, but jiffy granularity would
* lower the ADC conversion rate down to CONFIG_HZ, so we fall back to busy
* waiting in that case.
*
* HR Timers-based version loads CPU only up to 10% during back to back ADC
* conversion, while busy wait-based version consumes whole CPU power.
*/
#ifdef CONFIG_HIGH_RES_TIMERS
#define ep93xx_adc_delay(usmin, usmax) usleep_range(usmin, usmax)
#else
#define ep93xx_adc_delay(usmin, usmax) udelay(usmin)
#endif
#define EP93XX_ADC_RESULT 0x08
#define EP93XX_ADC_SDR BIT(31)
#define EP93XX_ADC_SWITCH 0x18
#define EP93XX_ADC_SW_LOCK 0x20
struct ep93xx_adc_priv {
struct clk *clk;
void __iomem *base;
int lastch;
struct mutex lock;
};
#define EP93XX_ADC_CH(index, dname, swcfg) { \
.type = IIO_VOLTAGE, \
.indexed = 1, \
.channel = index, \
.address = swcfg, \
.datasheet_name = dname, \
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \
.info_mask_shared_by_all = BIT(IIO_CHAN_INFO_SCALE) | \
BIT(IIO_CHAN_INFO_OFFSET), \
}
/*
* Numbering scheme for channels 0..4 is defined in EP9301 and EP9302 datasheets.
* EP9307, EP9312 and EP9315 have 3 more channels (8 in total), but their numbering is
* not defined, so the last three are numbered arbitrarily here.
*/
static const struct iio_chan_spec ep93xx_adc_channels[8] = {
EP93XX_ADC_CH(0, "YM", 0x608),
EP93XX_ADC_CH(1, "SXP", 0x680),
EP93XX_ADC_CH(2, "SXM", 0x640),
EP93XX_ADC_CH(3, "SYP", 0x620),
EP93XX_ADC_CH(4, "SYM", 0x610),
EP93XX_ADC_CH(5, "XP", 0x601),
EP93XX_ADC_CH(6, "XM", 0x602),
EP93XX_ADC_CH(7, "YP", 0x604),
};
static int ep93xx_read_raw(struct iio_dev *iiodev,
struct iio_chan_spec const *channel, int *value,
int *shift, long mask)
{
struct ep93xx_adc_priv *priv = iio_priv(iiodev);
unsigned long timeout;
int ret;
switch (mask) {
case IIO_CHAN_INFO_RAW:
mutex_lock(&priv->lock);
if (priv->lastch != channel->channel) {
priv->lastch = channel->channel;
/*
* Switch register is software-locked, unlocking must be
* immediately followed by write
*/
local_irq_disable();
writel_relaxed(0xAA, priv->base + EP93XX_ADC_SW_LOCK);
writel_relaxed(channel->address,
priv->base + EP93XX_ADC_SWITCH);
local_irq_enable();
/*
* Settling delay depends on module clock and could be
* 2ms or 500us
*/
ep93xx_adc_delay(2000, 2000);
}
/* Start the conversion, eventually discarding old result */
readl_relaxed(priv->base + EP93XX_ADC_RESULT);
/* Ensure maximum conversion rate is not exceeded */
ep93xx_adc_delay(DIV_ROUND_UP(1000000, 925),
DIV_ROUND_UP(1000000, 925));
/* At this point conversion must be completed, but anyway... */
ret = IIO_VAL_INT;
timeout = jiffies + msecs_to_jiffies(1) + 1;
while (1) {
u32 t;
t = readl_relaxed(priv->base + EP93XX_ADC_RESULT);
if (t & EP93XX_ADC_SDR) {
*value = sign_extend32(t, 15);
break;
}
if (time_after(jiffies, timeout)) {
dev_err(&iiodev->dev, "Conversion timeout\n");
ret = -ETIMEDOUT;
break;
}
cpu_relax();
}
mutex_unlock(&priv->lock);
return ret;
case IIO_CHAN_INFO_OFFSET:
/* According to datasheet, range is -25000..25000 */
*value = 25000;
return IIO_VAL_INT;
case IIO_CHAN_INFO_SCALE:
/* Typical supply voltage is 3.3v */
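/*
* The offset range above spans 50000 counts, so scale = 3300 mV / 50000,
* encoded as a 32-bit binary fraction for IIO_VAL_FRACTIONAL_LOG2.
*/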
*value = (1ULL << 32) * 3300 / 50000;
*shift = 32;
return IIO_VAL_FRACTIONAL_LOG2;
}
return -EINVAL;
}
static const struct iio_info ep93xx_adc_info = {
.driver_module = THIS_MODULE,
.read_raw = ep93xx_read_raw,
};
static int ep93xx_adc_probe(struct platform_device *pdev)
{
int ret;
struct iio_dev *iiodev;
struct ep93xx_adc_priv *priv;
struct clk *pclk;
struct resource *res;
iiodev = devm_iio_device_alloc(&pdev->dev, sizeof(*priv));
if (!iiodev)
return -ENOMEM;
priv = iio_priv(iiodev);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
dev_err(&pdev->dev, "Cannot obtain memory resource\n");
return -ENXIO;
}
priv->base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(priv->base)) {
dev_err(&pdev->dev, "Cannot map memory resource\n");
return PTR_ERR(priv->base);
}
iiodev->dev.parent = &pdev->dev;
iiodev->name = dev_name(&pdev->dev);
iiodev->modes = INDIO_DIRECT_MODE;
iiodev->info = &ep93xx_adc_info;
iiodev->num_channels = ARRAY_SIZE(ep93xx_adc_channels);
iiodev->channels = ep93xx_adc_channels;
priv->lastch = -1;
mutex_init(&priv->lock);
platform_set_drvdata(pdev, iiodev);
priv->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(priv->clk)) {
dev_err(&pdev->dev, "Cannot obtain clock\n");
return PTR_ERR(priv->clk);
}
pclk = clk_get_parent(priv->clk);
if (!pclk) {
dev_warn(&pdev->dev, "Cannot obtain parent clock\n");
} else {
/*
* This is actually a place for improvement:
* EP93xx ADC supports two clock divisors -- 4 and 16,
* resulting in conversion rates 3750 and 925 samples per second
* with 500us or 2ms settling time respectively.
* One might find this interesting enough to be configurable.
*/
ret = clk_set_rate(priv->clk, clk_get_rate(pclk) / 16);
if (ret)
dev_warn(&pdev->dev, "Cannot set clock rate\n");
/*
* We can tolerate rate setting failure because the module should
* work in any case.
*/
}
ret = clk_enable(priv->clk);
if (ret) {
dev_err(&pdev->dev, "Cannot enable clock\n");
return ret;
}
ret = iio_device_register(iiodev);
if (ret)
clk_disable(priv->clk);
return ret;
}
static int ep93xx_adc_remove(struct platform_device *pdev)
{
struct iio_dev *iiodev = platform_get_drvdata(pdev);
struct ep93xx_adc_priv *priv = iio_priv(iiodev);
iio_device_unregister(iiodev);
clk_disable(priv->clk);
return 0;
}
static struct platform_driver ep93xx_adc_driver = {
.driver = {
.name = "ep93xx-adc",
},
.probe = ep93xx_adc_probe,
.remove = ep93xx_adc_remove,
};
module_platform_driver(ep93xx_adc_driver);
MODULE_AUTHOR("Alexander Sverdlin <alexander.sverdlin@gmail.com>");
MODULE_DESCRIPTION("Cirrus Logic EP93XX ADC driver");
MODULE_LICENSE("GPL");
MODULE_ALIAS("platform:ep93xx-adc");

View File

@ -44,6 +44,7 @@
#define INA226_MASK_ENABLE 0x06
#define INA226_CVRF BIT(3)
#define INA219_CNVR BIT(1)
#define INA2XX_MAX_REGISTERS 8
@ -592,6 +593,7 @@ static int ina2xx_work_buffer(struct iio_dev *indio_dev)
int bit, ret, i = 0;
s64 time_a, time_b;
unsigned int alert;
int cnvr_need_clear = 0;
time_a = iio_get_time_ns(indio_dev);
@ -603,22 +605,30 @@ static int ina2xx_work_buffer(struct iio_dev *indio_dev)
* we check the ConVersionReadyFlag.
* On hardware that supports using the ALERT pin to toggle a
* GPIO a triggered buffer could be used instead.
* For now, we pay for that extra read of the ALERT register
* For now, we do an extra read of the MASK_ENABLE register (INA226)
* or the BUS_VOLTAGE register (INA219), respectively.
*/
if (!chip->allow_async_readout)
do {
ret = regmap_read(chip->regmap, INA226_MASK_ENABLE,
&alert);
if (chip->config->chip_id == ina226) {
ret = regmap_read(chip->regmap,
INA226_MASK_ENABLE, &alert);
alert &= INA226_CVRF;
} else {
ret = regmap_read(chip->regmap,
INA2XX_BUS_VOLTAGE, &alert);
alert &= INA219_CNVR;
cnvr_need_clear = alert;
}
if (ret < 0)
return ret;
alert &= INA226_CVRF;
} while (!alert);
/*
* Single register reads: bulk_read will not work with ina226
* as there is no auto-increment of the address register for
* data length longer than 16bits.
* Single register reads: bulk_read will not work with ina226/219
* as there is no auto-increment of the register pointer.
*/
for_each_set_bit(bit, indio_dev->active_scan_mask,
indio_dev->masklength) {
@ -630,6 +640,18 @@ static int ina2xx_work_buffer(struct iio_dev *indio_dev)
return ret;
data[i++] = val;
if (INA2XX_SHUNT_VOLTAGE + bit == INA2XX_POWER)
cnvr_need_clear = 0;
}
/* Dummy read on INA219 power register to clear CNVR flag */
if (cnvr_need_clear && chip->config->chip_id == ina219) {
unsigned int val;
ret = regmap_read(chip->regmap, INA2XX_POWER, &val);
if (ret < 0)
return ret;
}
time_b = iio_get_time_ns(indio_dev);
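The reworked poll above picks a chip-specific status register and bit, and the INA219 additionally latches its conversion-ready flag until the POWER register is read. A stand-alone model of that flow follows; read_reg() is a made-up stub standing in for regmap_read() and simply pretends the flag appears on the third poll, while the BUS_VOLTAGE and POWER addresses (0x02 and 0x03) come from the INA2xx register map.

/* Stand-alone model of the INA226/INA219 conversion-ready poll; not kernel code. */
#include <stdint.h>
#include <stdio.h>

#define INA226_MASK_ENABLE	0x06
#define INA2XX_BUS_VOLTAGE	0x02
#define INA2XX_POWER		0x03
#define INA226_CVRF		(1u << 3)
#define INA219_CNVR		(1u << 1)

enum ina2xx_id { ina219, ina226 };

/* Hypothetical register read: pretends the flag shows up on the third poll. */
static int read_reg(uint8_t reg, uint16_t *val)
{
	static int calls;

	*val = 0;
	if (reg == INA2XX_BUS_VOLTAGE && ++calls >= 3)
		*val = INA219_CNVR;
	return 0;
}

int main(void)
{
	enum ina2xx_id id = ina219;
	uint16_t val;
	int polls = 0;

	do {
		read_reg(id == ina226 ? INA226_MASK_ENABLE : INA2XX_BUS_VOLTAGE, &val);
		polls++;
	} while (!(val & (id == ina226 ? INA226_CVRF : INA219_CNVR)));

	/* INA219 keeps CNVR set until the POWER register has been read */
	if (id == ina219)
		read_reg(INA2XX_POWER, &val);

	printf("conversion ready after %d poll(s)\n", polls);
	return 0;
}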

View File

@ -0,0 +1,160 @@
/*
* Driver for Linear Technology LTC2471 and LTC2473 voltage monitors
* The LTC2473 is identical to the 2471, but reports a differential signal.
*
* Copyright (C) 2017 Topic Embedded Products
* Author: Mike Looijmans <mike.looijmans@topic.nl>
*
* License: GPLv2
*/
#include <linux/err.h>
#include <linux/i2c.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/iio/iio.h>
#include <linux/iio/sysfs.h>
enum ltc2471_chips {
ltc2471,
ltc2473,
};
struct ltc2471_data {
struct i2c_client *client;
};
/* Reference voltage is 1.25V */
#define LTC2471_VREF 1250
/* Read two bytes from the I2C bus to obtain the ADC result */
static int ltc2471_get_value(struct i2c_client *client)
{
int ret;
__be16 buf;
ret = i2c_master_recv(client, (char *)&buf, sizeof(buf));
if (ret < 0)
return ret;
if (ret != sizeof(buf))
return -EIO;
/* MSB first */
return be16_to_cpu(buf);
}
static int ltc2471_read_raw(struct iio_dev *indio_dev,
struct iio_chan_spec const *chan,
int *val, int *val2, long info)
{
struct ltc2471_data *data = iio_priv(indio_dev);
int ret;
switch (info) {
case IIO_CHAN_INFO_RAW:
ret = ltc2471_get_value(data->client);
if (ret < 0)
return ret;
*val = ret;
return IIO_VAL_INT;
case IIO_CHAN_INFO_SCALE:
if (chan->differential)
/* Output ranges from -VREF to +VREF */
*val = 2 * LTC2471_VREF;
else
/* Output ranges from 0 to VREF */
*val = LTC2471_VREF;
*val2 = 16; /* 16 data bits */
return IIO_VAL_FRACTIONAL_LOG2;
case IIO_CHAN_INFO_OFFSET:
/* Only differential chip has this property */
*val = -LTC2471_VREF;
return IIO_VAL_INT;
default:
return -EINVAL;
}
}
static const struct iio_chan_spec ltc2471_channel[] = {
{
.type = IIO_VOLTAGE,
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
},
};
static const struct iio_chan_spec ltc2473_channel[] = {
{
.type = IIO_VOLTAGE,
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE) |
BIT(IIO_CHAN_INFO_OFFSET),
.differential = 1,
},
};
static const struct iio_info ltc2471_info = {
.read_raw = ltc2471_read_raw,
.driver_module = THIS_MODULE,
};
static int ltc2471_i2c_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
struct iio_dev *indio_dev;
struct ltc2471_data *data;
int ret;
if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C))
return -EOPNOTSUPP;
indio_dev = devm_iio_device_alloc(&client->dev, sizeof(*data));
if (!indio_dev)
return -ENOMEM;
data = iio_priv(indio_dev);
data->client = client;
indio_dev->dev.parent = &client->dev;
indio_dev->name = id->name;
indio_dev->info = &ltc2471_info;
indio_dev->modes = INDIO_DIRECT_MODE;
if (id->driver_data == ltc2473)
indio_dev->channels = ltc2473_channel;
else
indio_dev->channels = ltc2471_channel;
indio_dev->num_channels = 1;
/* Trigger once to start conversion and check if chip is there */
ret = ltc2471_get_value(client);
if (ret < 0) {
dev_err(&client->dev, "Cannot read from device.\n");
return ret;
}
return devm_iio_device_register(&client->dev, indio_dev);
}
static const struct i2c_device_id ltc2471_i2c_id[] = {
{ "ltc2471", ltc2471 },
{ "ltc2473", ltc2473 },
{}
};
MODULE_DEVICE_TABLE(i2c, ltc2471_i2c_id);
static struct i2c_driver ltc2471_i2c_driver = {
.driver = {
.name = "ltc2471",
},
.probe = ltc2471_i2c_probe,
.id_table = ltc2471_i2c_id,
};
module_i2c_driver(ltc2471_i2c_driver);
MODULE_DESCRIPTION("LTC2471/LTC2473 ADC driver");
MODULE_AUTHOR("Topic Embedded Products");
MODULE_LICENSE("GPL v2");

View File

@ -11,6 +11,7 @@
#include <linux/delay.h>
#include <linux/i2c.h>
#include <linux/iio/iio.h>
#include <linux/iio/driver.h>
#include <linux/iio/sysfs.h>
#include <linux/module.h>
#include <linux/of.h>
@ -127,13 +128,14 @@ static int ltc2497_read_raw(struct iio_dev *indio_dev,
}
}
#define LTC2497_CHAN(_chan, _addr) { \
#define LTC2497_CHAN(_chan, _addr, _ds_name) { \
.type = IIO_VOLTAGE, \
.indexed = 1, \
.channel = (_chan), \
.address = (_addr | (_chan / 2) | ((_chan & 1) ? LTC2497_SIGN : 0)), \
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \
.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), \
.datasheet_name = (_ds_name), \
}
#define LTC2497_CHAN_DIFF(_chan, _addr) { \
@ -148,22 +150,22 @@ static int ltc2497_read_raw(struct iio_dev *indio_dev,
}
static const struct iio_chan_spec ltc2497_channel[] = {
LTC2497_CHAN(0, LTC2497_SGL),
LTC2497_CHAN(1, LTC2497_SGL),
LTC2497_CHAN(2, LTC2497_SGL),
LTC2497_CHAN(3, LTC2497_SGL),
LTC2497_CHAN(4, LTC2497_SGL),
LTC2497_CHAN(5, LTC2497_SGL),
LTC2497_CHAN(6, LTC2497_SGL),
LTC2497_CHAN(7, LTC2497_SGL),
LTC2497_CHAN(8, LTC2497_SGL),
LTC2497_CHAN(9, LTC2497_SGL),
LTC2497_CHAN(10, LTC2497_SGL),
LTC2497_CHAN(11, LTC2497_SGL),
LTC2497_CHAN(12, LTC2497_SGL),
LTC2497_CHAN(13, LTC2497_SGL),
LTC2497_CHAN(14, LTC2497_SGL),
LTC2497_CHAN(15, LTC2497_SGL),
LTC2497_CHAN(0, LTC2497_SGL, "CH0"),
LTC2497_CHAN(1, LTC2497_SGL, "CH1"),
LTC2497_CHAN(2, LTC2497_SGL, "CH2"),
LTC2497_CHAN(3, LTC2497_SGL, "CH3"),
LTC2497_CHAN(4, LTC2497_SGL, "CH4"),
LTC2497_CHAN(5, LTC2497_SGL, "CH5"),
LTC2497_CHAN(6, LTC2497_SGL, "CH6"),
LTC2497_CHAN(7, LTC2497_SGL, "CH7"),
LTC2497_CHAN(8, LTC2497_SGL, "CH8"),
LTC2497_CHAN(9, LTC2497_SGL, "CH9"),
LTC2497_CHAN(10, LTC2497_SGL, "CH10"),
LTC2497_CHAN(11, LTC2497_SGL, "CH11"),
LTC2497_CHAN(12, LTC2497_SGL, "CH12"),
LTC2497_CHAN(13, LTC2497_SGL, "CH13"),
LTC2497_CHAN(14, LTC2497_SGL, "CH14"),
LTC2497_CHAN(15, LTC2497_SGL, "CH15"),
LTC2497_CHAN_DIFF(0, LTC2497_DIFF),
LTC2497_CHAN_DIFF(1, LTC2497_DIFF),
LTC2497_CHAN_DIFF(2, LTC2497_DIFF),
@ -192,6 +194,7 @@ static int ltc2497_probe(struct i2c_client *client,
{
struct iio_dev *indio_dev;
struct ltc2497_st *st;
struct iio_map *plat_data;
int ret;
if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C |
@ -221,19 +224,31 @@ static int ltc2497_probe(struct i2c_client *client,
if (ret < 0)
return ret;
if (client->dev.platform_data) {
plat_data = ((struct iio_map *)client->dev.platform_data);
ret = iio_map_array_register(indio_dev, plat_data);
if (ret) {
dev_err(&indio_dev->dev, "iio map err: %d\n", ret);
goto err_regulator_disable;
}
}
ret = i2c_smbus_write_byte(st->client, LTC2497_CONFIG_DEFAULT);
if (ret < 0)
goto err_regulator_disable;
goto err_array_unregister;
st->addr_prev = LTC2497_CONFIG_DEFAULT;
st->time_prev = ktime_get();
ret = iio_device_register(indio_dev);
if (ret < 0)
goto err_regulator_disable;
goto err_array_unregister;
return 0;
err_array_unregister:
iio_map_array_unregister(indio_dev);
err_regulator_disable:
regulator_disable(st->ref);
@ -245,6 +260,7 @@ static int ltc2497_remove(struct i2c_client *client)
struct iio_dev *indio_dev = i2c_get_clientdata(client);
struct ltc2497_st *st = iio_priv(indio_dev);
iio_map_array_unregister(indio_dev);
iio_device_unregister(indio_dev);
regulator_disable(st->ref);

View File

@ -549,8 +549,8 @@ static int max9611_probe(struct i2c_client *client,
ret = of_property_read_u32(of_node, shunt_res_prop, &of_shunt);
if (ret) {
dev_err(&client->dev,
"Missing %s property for %s node\n",
shunt_res_prop, of_node->full_name);
"Missing %s property for %pOF node\n",
shunt_res_prop, of_node);
return ret;
}
max9611->shunt_resistor_uohm = of_shunt;

View File

@ -379,10 +379,12 @@ static int mcp3422_probe(struct i2c_client *client,
/* meaningful default configuration */
config = (MCP3422_CONT_SAMPLING
| MCP3422_CHANNEL_VALUE(1)
| MCP3422_CHANNEL_VALUE(0)
| MCP3422_PGA_VALUE(MCP3422_PGA_1)
| MCP3422_SAMPLE_RATE_VALUE(MCP3422_SRATE_240));
mcp3422_update_config(adc, config);
err = mcp3422_update_config(adc, config);
if (err < 0)
return err;
err = devm_iio_device_register(&client->dev, indio_dev);
if (err < 0)

View File

@ -572,8 +572,8 @@ static int meson_sar_adc_clk_init(struct iio_dev *indio_dev,
struct clk_init_data init;
const char *clk_parents[1];
init.name = devm_kasprintf(&indio_dev->dev, GFP_KERNEL, "%s#adc_div",
of_node_full_name(indio_dev->dev.of_node));
init.name = devm_kasprintf(&indio_dev->dev, GFP_KERNEL, "%pOF#adc_div",
indio_dev->dev.of_node);
init.flags = 0;
init.ops = &clk_divider_ops;
clk_parents[0] = __clk_get_name(priv->clkin);
@ -591,8 +591,8 @@ static int meson_sar_adc_clk_init(struct iio_dev *indio_dev,
if (WARN_ON(IS_ERR(priv->adc_div_clk)))
return PTR_ERR(priv->adc_div_clk);
init.name = devm_kasprintf(&indio_dev->dev, GFP_KERNEL, "%s#adc_en",
of_node_full_name(indio_dev->dev.of_node));
init.name = devm_kasprintf(&indio_dev->dev, GFP_KERNEL, "%pOF#adc_en",
indio_dev->dev.of_node);
init.flags = CLK_SET_RATE_PARENT;
init.ops = &clk_gate_ops;
clk_parents[0] = __clk_get_name(priv->adc_div_clk);
@ -915,6 +915,11 @@ static int meson_sar_adc_probe(struct platform_device *pdev)
init_completion(&priv->done);
match = of_match_device(meson_sar_adc_of_match, &pdev->dev);
if (!match) {
dev_err(&pdev->dev, "failed to match device\n");
return -ENODEV;
}
priv->data = match->data;
indio_dev->name = priv->data->name;

View File

@ -184,6 +184,37 @@ static const struct iio_info mt6577_auxadc_info = {
.read_raw = &mt6577_auxadc_read_raw,
};
static int __maybe_unused mt6577_auxadc_resume(struct device *dev)
{
struct iio_dev *indio_dev = dev_get_drvdata(dev);
struct mt6577_auxadc_device *adc_dev = iio_priv(indio_dev);
int ret;
ret = clk_prepare_enable(adc_dev->adc_clk);
if (ret) {
pr_err("failed to enable auxadc clock\n");
return ret;
}
mt6577_auxadc_mod_reg(adc_dev->reg_base + MT6577_AUXADC_MISC,
MT6577_AUXADC_PDN_EN, 0);
mdelay(MT6577_AUXADC_POWER_READY_MS);
return 0;
}
static int __maybe_unused mt6577_auxadc_suspend(struct device *dev)
{
struct iio_dev *indio_dev = dev_get_drvdata(dev);
struct mt6577_auxadc_device *adc_dev = iio_priv(indio_dev);
mt6577_auxadc_mod_reg(adc_dev->reg_base + MT6577_AUXADC_MISC,
0, MT6577_AUXADC_PDN_EN);
clk_disable_unprepare(adc_dev->adc_clk);
return 0;
}
static int mt6577_auxadc_probe(struct platform_device *pdev)
{
struct mt6577_auxadc_device *adc_dev;
@ -269,8 +300,13 @@ static int mt6577_auxadc_remove(struct platform_device *pdev)
return 0;
}
static SIMPLE_DEV_PM_OPS(mt6577_auxadc_pm_ops,
mt6577_auxadc_suspend,
mt6577_auxadc_resume);
static const struct of_device_id mt6577_auxadc_of_match[] = {
{ .compatible = "mediatek,mt2701-auxadc", },
{ .compatible = "mediatek,mt7622-auxadc", },
{ .compatible = "mediatek,mt8173-auxadc", },
{ }
};
@ -280,6 +316,7 @@ static struct platform_driver mt6577_auxadc_driver = {
.driver = {
.name = "mt6577-auxadc",
.of_match_table = mt6577_auxadc_of_match,
.pm = &mt6577_auxadc_pm_ops,
},
.probe = mt6577_auxadc_probe,
.remove = mt6577_auxadc_remove,

View File

@ -224,6 +224,11 @@ static int rockchip_saradc_probe(struct platform_device *pdev)
info = iio_priv(indio_dev);
match = of_match_device(rockchip_saradc_match, &pdev->dev);
if (!match) {
dev_err(&pdev->dev, "failed to match device\n");
return -ENODEV;
}
info->data = match->data;
mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
@ -235,7 +240,8 @@ static int rockchip_saradc_probe(struct platform_device *pdev)
* The reset should be an optional property, as it should work
* with old devicetrees as well
*/
info->reset = devm_reset_control_get(&pdev->dev, "saradc-apb");
info->reset = devm_reset_control_get_exclusive(&pdev->dev,
"saradc-apb");
if (IS_ERR(info->reset)) {
ret = PTR_ERR(info->reset);
if (ret != -ENOENT)

View File

@ -172,7 +172,7 @@ struct stm32h7_adc_ck_spec {
int div;
};
const struct stm32h7_adc_ck_spec stm32h7_adc_ckmodes_spec[] = {
static const struct stm32h7_adc_ck_spec stm32h7_adc_ckmodes_spec[] = {
/* 00: CK_ADC[1..3]: Asynchronous clock modes */
{ 0, 0, 1 },
{ 0, 1, 2 },

View File

@ -83,6 +83,8 @@
#define STM32H7_ADC_IER 0x04
#define STM32H7_ADC_CR 0x08
#define STM32H7_ADC_CFGR 0x0C
#define STM32H7_ADC_SMPR1 0x14
#define STM32H7_ADC_SMPR2 0x18
#define STM32H7_ADC_PCSEL 0x1C
#define STM32H7_ADC_SQR1 0x30
#define STM32H7_ADC_SQR2 0x34
@ -151,6 +153,7 @@ enum stm32h7_adc_dmngt {
#define STM32H7_BOOST_CLKRATE 20000000UL
#define STM32_ADC_MAX_SQ 16 /* SQ1..SQ16 */
#define STM32_ADC_MAX_SMP 7 /* SMPx range is [0..7] */
#define STM32_ADC_TIMEOUT_US 100000
#define STM32_ADC_TIMEOUT (msecs_to_jiffies(STM32_ADC_TIMEOUT_US / 1000))
@ -227,6 +230,8 @@ struct stm32_adc_regs {
* @exten: trigger control register & bitfield
* @extsel: trigger selection register & bitfield
* @res: resolution selection register & bitfield
* @smpr: smpr1 & smpr2 registers offset array
* @smp_bits: smpr1 & smpr2 index and bitfields
*/
struct stm32_adc_regspec {
const u32 dr;
@ -236,6 +241,8 @@ struct stm32_adc_regspec {
const struct stm32_adc_regs exten;
const struct stm32_adc_regs extsel;
const struct stm32_adc_regs res;
const u32 smpr[2];
const struct stm32_adc_regs *smp_bits;
};
struct stm32_adc;
@ -251,6 +258,7 @@ struct stm32_adc;
* @start_conv: routine to start conversions
* @stop_conv: routine to stop conversions
* @unprepare: optional unprepare routine (disable, power-down)
* @smp_cycles: programmable sampling time (ADC clock cycles)
*/
struct stm32_adc_cfg {
const struct stm32_adc_regspec *regs;
@ -262,6 +270,7 @@ struct stm32_adc_cfg {
void (*start_conv)(struct stm32_adc *, bool dma);
void (*stop_conv)(struct stm32_adc *);
void (*unprepare)(struct stm32_adc *);
const unsigned int *smp_cycles;
};
/**
@ -283,6 +292,7 @@ struct stm32_adc_cfg {
* @rx_dma_buf: dma rx buffer bus address
* @rx_buf_sz: dma rx buffer size
* @pcsel bitmask to preselect channels on some devices
* @smpr_val: sampling time settings (e.g. smpr1 / smpr2)
* @cal: optional calibration data on some devices
*/
struct stm32_adc {
@ -303,6 +313,7 @@ struct stm32_adc {
dma_addr_t rx_dma_buf;
unsigned int rx_buf_sz;
u32 pcsel;
u32 smpr_val[2];
struct stm32_adc_calib cal;
};
@ -431,6 +442,39 @@ static struct stm32_adc_trig_info stm32f4_adc_trigs[] = {
{}, /* sentinel */
};
/**
* stm32f4_smp_bits[] - describe sampling time register index & bit fields
* Sorted so it can be indexed by channel number.
*/
static const struct stm32_adc_regs stm32f4_smp_bits[] = {
/* STM32F4_ADC_SMPR2: smpr[] index, mask, shift for SMP0 to SMP9 */
{ 1, GENMASK(2, 0), 0 },
{ 1, GENMASK(5, 3), 3 },
{ 1, GENMASK(8, 6), 6 },
{ 1, GENMASK(11, 9), 9 },
{ 1, GENMASK(14, 12), 12 },
{ 1, GENMASK(17, 15), 15 },
{ 1, GENMASK(20, 18), 18 },
{ 1, GENMASK(23, 21), 21 },
{ 1, GENMASK(26, 24), 24 },
{ 1, GENMASK(29, 27), 27 },
/* STM32F4_ADC_SMPR1, smpr[] index, mask, shift for SMP10 to SMP18 */
{ 0, GENMASK(2, 0), 0 },
{ 0, GENMASK(5, 3), 3 },
{ 0, GENMASK(8, 6), 6 },
{ 0, GENMASK(11, 9), 9 },
{ 0, GENMASK(14, 12), 12 },
{ 0, GENMASK(17, 15), 15 },
{ 0, GENMASK(20, 18), 18 },
{ 0, GENMASK(23, 21), 21 },
{ 0, GENMASK(26, 24), 24 },
};
/* STM32F4 programmable sampling time (ADC clock cycles) */
static const unsigned int stm32f4_adc_smp_cycles[STM32_ADC_MAX_SMP + 1] = {
3, 15, 28, 56, 84, 112, 144, 480,
};
static const struct stm32_adc_regspec stm32f4_adc_regspec = {
.dr = STM32F4_ADC_DR,
.ier_eoc = { STM32F4_ADC_CR1, STM32F4_EOCIE },
@ -440,6 +484,8 @@ static const struct stm32_adc_regspec stm32f4_adc_regspec = {
.extsel = { STM32F4_ADC_CR2, STM32F4_EXTSEL_MASK,
STM32F4_EXTSEL_SHIFT },
.res = { STM32F4_ADC_CR1, STM32F4_RES_MASK, STM32F4_RES_SHIFT },
.smpr = { STM32F4_ADC_SMPR1, STM32F4_ADC_SMPR2 },
.smp_bits = stm32f4_smp_bits,
};
static const struct stm32_adc_regs stm32h7_sq[STM32_ADC_MAX_SQ + 1] = {
@ -483,6 +529,40 @@ static struct stm32_adc_trig_info stm32h7_adc_trigs[] = {
{},
};
/**
* stm32h7_smp_bits - describe sampling time register index & bit fields
* Sorted so it can be indexed by channel number.
*/
static const struct stm32_adc_regs stm32h7_smp_bits[] = {
/* STM32H7_ADC_SMPR1, smpr[] index, mask, shift for SMP0 to SMP9 */
{ 0, GENMASK(2, 0), 0 },
{ 0, GENMASK(5, 3), 3 },
{ 0, GENMASK(8, 6), 6 },
{ 0, GENMASK(11, 9), 9 },
{ 0, GENMASK(14, 12), 12 },
{ 0, GENMASK(17, 15), 15 },
{ 0, GENMASK(20, 18), 18 },
{ 0, GENMASK(23, 21), 21 },
{ 0, GENMASK(26, 24), 24 },
{ 0, GENMASK(29, 27), 27 },
/* STM32H7_ADC_SMPR2, smpr[] index, mask, shift for SMP10 to SMP19 */
{ 1, GENMASK(2, 0), 0 },
{ 1, GENMASK(5, 3), 3 },
{ 1, GENMASK(8, 6), 6 },
{ 1, GENMASK(11, 9), 9 },
{ 1, GENMASK(14, 12), 12 },
{ 1, GENMASK(17, 15), 15 },
{ 1, GENMASK(20, 18), 18 },
{ 1, GENMASK(23, 21), 21 },
{ 1, GENMASK(26, 24), 24 },
{ 1, GENMASK(29, 27), 27 },
};
/* STM32H7 programmable sampling time (ADC clock cycles, rounded down) */
static const unsigned int stm32h7_adc_smp_cycles[STM32_ADC_MAX_SMP + 1] = {
1, 2, 8, 16, 32, 64, 387, 810,
};
static const struct stm32_adc_regspec stm32h7_adc_regspec = {
.dr = STM32H7_ADC_DR,
.ier_eoc = { STM32H7_ADC_IER, STM32H7_EOCIE },
@ -492,6 +572,8 @@ static const struct stm32_adc_regspec stm32h7_adc_regspec = {
.extsel = { STM32H7_ADC_CFGR, STM32H7_EXTSEL_MASK,
STM32H7_EXTSEL_SHIFT },
.res = { STM32H7_ADC_CFGR, STM32H7_RES_MASK, STM32H7_RES_SHIFT },
.smpr = { STM32H7_ADC_SMPR1, STM32H7_ADC_SMPR2 },
.smp_bits = stm32h7_smp_bits,
};
/**
@ -933,6 +1015,7 @@ static void stm32h7_adc_unprepare(struct stm32_adc *adc)
* @scan_mask: channels to be converted
*
* Conversion sequence :
* Apply sampling time settings for all channels.
* Configure ADC scan sequence based on selected channels in scan_mask.
* Add channels to SQR registers, from scan_mask LSB to MSB, then
* program sequence len.
@ -946,6 +1029,10 @@ static int stm32_adc_conf_scan_seq(struct iio_dev *indio_dev,
u32 val, bit;
int i = 0;
/* Apply sampling time settings */
stm32_adc_writel(adc, adc->cfg->regs->smpr[0], adc->smpr_val[0]);
stm32_adc_writel(adc, adc->cfg->regs->smpr[1], adc->smpr_val[1]);
for_each_set_bit(bit, scan_mask, indio_dev->masklength) {
chan = indio_dev->channels + bit;
/*
@ -1079,6 +1166,7 @@ static const struct iio_enum stm32_adc_trig_pol = {
* @res: conversion result
*
* The function performs a single conversion on a given channel:
* - Apply sampling time settings
* - Program sequencer with one channel (e.g. in SQ1 with len = 1)
* - Use SW trigger
* - Start conversion, then wait for interrupt completion.
@ -1103,6 +1191,10 @@ static int stm32_adc_single_conv(struct iio_dev *indio_dev,
return ret;
}
/* Apply sampling time settings */
stm32_adc_writel(adc, regs->smpr[0], adc->smpr_val[0]);
stm32_adc_writel(adc, regs->smpr[1], adc->smpr_val[1]);
/* Program chan number in regular sequence (SQ1) */
val = stm32_adc_readl(adc, regs->sqr[1].reg);
val &= ~regs->sqr[1].mask;
@ -1507,10 +1599,28 @@ static int stm32_adc_of_get_resolution(struct iio_dev *indio_dev)
return 0;
}
static void stm32_adc_smpr_init(struct stm32_adc *adc, int channel, u32 smp_ns)
{
const struct stm32_adc_regs *smpr = &adc->cfg->regs->smp_bits[channel];
u32 period_ns, shift = smpr->shift, mask = smpr->mask;
unsigned int smp, r = smpr->reg;
/* Determine sampling time (ADC clock cycles) */
period_ns = NSEC_PER_SEC / adc->common->rate;
for (smp = 0; smp <= STM32_ADC_MAX_SMP; smp++)
if ((period_ns * adc->cfg->smp_cycles[smp]) >= smp_ns)
break;
if (smp > STM32_ADC_MAX_SMP)
smp = STM32_ADC_MAX_SMP;
/* pre-build sampling time registers (e.g. smpr1, smpr2) */
adc->smpr_val[r] = (adc->smpr_val[r] & ~mask) | (smp << shift);
}
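The loop in stm32_adc_smpr_init() above picks the smallest sampling-time setting whose duration in ADC clock cycles covers the requested minimum in nanoseconds. A stand-alone sketch of the same search follows; the 30 MHz clock and the 1000 ns request are arbitrary example values, only the STM32F4 cycle table comes from the driver.

/* Stand-alone model of the sampling-time selection above; not kernel code. */
#include <stdio.h>

#define NSEC_PER_SEC 1000000000UL
#define STM32_ADC_MAX_SMP 7

/* STM32F4 programmable sampling times in ADC clock cycles (from the driver) */
static const unsigned int stm32f4_adc_smp_cycles[STM32_ADC_MAX_SMP + 1] = {
	3, 15, 28, 56, 84, 112, 144, 480,
};

int main(void)
{
	unsigned long rate = 30000000;	/* hypothetical ADC clock: 30 MHz */
	unsigned int smp_ns = 1000;	/* requested st,min-sample-time-nsecs */
	unsigned int period_ns = NSEC_PER_SEC / rate;	/* 33 ns per cycle */
	unsigned int smp;

	for (smp = 0; smp <= STM32_ADC_MAX_SMP; smp++)
		if (period_ns * stm32f4_adc_smp_cycles[smp] >= smp_ns)
			break;
	if (smp > STM32_ADC_MAX_SMP)
		smp = STM32_ADC_MAX_SMP;

	/* 56 cycles (index 3) is the first setting covering 1000 ns at 30 MHz */
	printf("selected SMP index %u (%u cycles)\n",
	       smp, stm32f4_adc_smp_cycles[smp]);
	return 0;
}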
static void stm32_adc_chan_init_one(struct iio_dev *indio_dev,
struct iio_chan_spec *chan,
const struct stm32_adc_chan_spec *channel,
int scan_index)
int scan_index, u32 smp)
{
struct stm32_adc *adc = iio_priv(indio_dev);
@ -1526,6 +1636,9 @@ static void stm32_adc_chan_init_one(struct iio_dev *indio_dev,
chan->scan_type.storagebits = 16;
chan->ext_info = stm32_adc_ext_info;
/* Prepare sampling time settings */
stm32_adc_smpr_init(adc, chan->channel, smp);
/* pre-build selected channels mask */
adc->pcsel |= BIT(chan->channel);
}
@ -1538,8 +1651,8 @@ static int stm32_adc_chan_of_init(struct iio_dev *indio_dev)
struct property *prop;
const __be32 *cur;
struct iio_chan_spec *channels;
int scan_index = 0, num_channels;
u32 val;
int scan_index = 0, num_channels, ret;
u32 val, smp = 0;
num_channels = of_property_count_u32_elems(node, "st,adc-channels");
if (num_channels < 0 ||
@ -1548,6 +1661,13 @@ static int stm32_adc_chan_of_init(struct iio_dev *indio_dev)
return num_channels < 0 ? num_channels : -EINVAL;
}
/* Optional sample time is provided either for each, or all channels */
ret = of_property_count_u32_elems(node, "st,min-sample-time-nsecs");
if (ret > 1 && ret != num_channels) {
dev_err(&indio_dev->dev, "Invalid st,min-sample-time-nsecs\n");
return -EINVAL;
}
channels = devm_kcalloc(&indio_dev->dev, num_channels,
sizeof(struct iio_chan_spec), GFP_KERNEL);
if (!channels)
@ -1558,9 +1678,19 @@ static int stm32_adc_chan_of_init(struct iio_dev *indio_dev)
dev_err(&indio_dev->dev, "Invalid channel %d\n", val);
return -EINVAL;
}
/*
* Using of_property_read_u32_index(), the smp value is only
* modified when a valid u32 can be decoded. This allows either
* no value, one shared value for all channels, or one value per
* channel.
*/
of_property_read_u32_index(node, "st,min-sample-time-nsecs",
scan_index, &smp);
stm32_adc_chan_init_one(indio_dev, &channels[scan_index],
&adc_info->channels[val],
scan_index);
scan_index, smp);
scan_index++;
}
@ -1755,6 +1885,7 @@ static const struct stm32_adc_cfg stm32f4_adc_cfg = {
.clk_required = true,
.start_conv = stm32f4_adc_start_conv,
.stop_conv = stm32f4_adc_stop_conv,
.smp_cycles = stm32f4_adc_smp_cycles,
};
static const struct stm32_adc_cfg stm32h7_adc_cfg = {
@ -1766,6 +1897,7 @@ static const struct stm32_adc_cfg stm32h7_adc_cfg = {
.stop_conv = stm32h7_adc_stop_conv,
.prepare = stm32h7_adc_prepare,
.unprepare = stm32h7_adc_unprepare,
.smp_cycles = stm32h7_adc_smp_cycles,
};
static const struct of_device_id stm32_adc_of_match[] = {

View File

@ -17,6 +17,7 @@
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/init.h>
#include <linux/irq.h>
#include <linux/i2c.h>
#include <linux/regmap.h>
#include <linux/pm_runtime.h>
@ -28,6 +29,7 @@
#include <linux/iio/iio.h>
#include <linux/iio/types.h>
#include <linux/iio/sysfs.h>
#include <linux/iio/events.h>
#include <linux/iio/buffer.h>
#include <linux/iio/triggered_buffer.h>
#include <linux/iio/trigger_consumer.h>
@ -36,17 +38,38 @@
#define ADS1015_CONV_REG 0x00
#define ADS1015_CFG_REG 0x01
#define ADS1015_LO_THRESH_REG 0x02
#define ADS1015_HI_THRESH_REG 0x03
#define ADS1015_CFG_COMP_QUE_SHIFT 0
#define ADS1015_CFG_COMP_LAT_SHIFT 2
#define ADS1015_CFG_COMP_POL_SHIFT 3
#define ADS1015_CFG_COMP_MODE_SHIFT 4
#define ADS1015_CFG_DR_SHIFT 5
#define ADS1015_CFG_MOD_SHIFT 8
#define ADS1015_CFG_PGA_SHIFT 9
#define ADS1015_CFG_MUX_SHIFT 12
#define ADS1015_CFG_COMP_QUE_MASK GENMASK(1, 0)
#define ADS1015_CFG_COMP_LAT_MASK BIT(2)
#define ADS1015_CFG_COMP_POL_MASK BIT(2)
#define ADS1015_CFG_COMP_MODE_MASK BIT(4)
#define ADS1015_CFG_DR_MASK GENMASK(7, 5)
#define ADS1015_CFG_MOD_MASK BIT(8)
#define ADS1015_CFG_PGA_MASK GENMASK(11, 9)
#define ADS1015_CFG_MUX_MASK GENMASK(14, 12)
/* Comparator queue and disable field */
#define ADS1015_CFG_COMP_DISABLE 3
/* Comparator polarity field */
#define ADS1015_CFG_COMP_POL_LOW 0
#define ADS1015_CFG_COMP_POL_HIGH 1
/* Comparator mode field */
#define ADS1015_CFG_COMP_MODE_TRAD 0
#define ADS1015_CFG_COMP_MODE_WINDOW 1
/* device operating modes */
#define ADS1015_CONTINUOUS 0
#define ADS1015_SINGLESHOT 1
@ -81,18 +104,36 @@ static const unsigned int ads1115_data_rate[] = {
8, 16, 32, 64, 128, 250, 475, 860
};
static const struct {
int scale;
int uscale;
} ads1015_scale[] = {
{3, 0},
{2, 0},
{1, 0},
{0, 500000},
{0, 250000},
{0, 125000},
{0, 125000},
{0, 125000},
/*
* Translation from PGA bits to full-scale positive and negative input voltage
* range in mV
*/
static int ads1015_fullscale_range[] = {
6144, 4096, 2048, 1024, 512, 256, 256, 256
};
/*
* Translation from COMP_QUE field value to the number of successive readings
* that must exceed the threshold values before an interrupt is generated
*/
static const int ads1015_comp_queue[] = { 1, 2, 4 };
static const struct iio_event_spec ads1015_events[] = {
{
.type = IIO_EV_TYPE_THRESH,
.dir = IIO_EV_DIR_RISING,
.mask_separate = BIT(IIO_EV_INFO_VALUE) |
BIT(IIO_EV_INFO_ENABLE),
}, {
.type = IIO_EV_TYPE_THRESH,
.dir = IIO_EV_DIR_FALLING,
.mask_separate = BIT(IIO_EV_INFO_VALUE),
}, {
.type = IIO_EV_TYPE_THRESH,
.dir = IIO_EV_DIR_EITHER,
.mask_separate = BIT(IIO_EV_INFO_ENABLE) |
BIT(IIO_EV_INFO_PERIOD),
},
};
#define ADS1015_V_CHAN(_chan, _addr) { \
@ -111,6 +152,8 @@ static const struct {
.shift = 4, \
.endianness = IIO_CPU, \
}, \
.event_spec = ads1015_events, \
.num_event_specs = ARRAY_SIZE(ads1015_events), \
.datasheet_name = "AIN"#_chan, \
}
@ -132,6 +175,8 @@ static const struct {
.shift = 4, \
.endianness = IIO_CPU, \
}, \
.event_spec = ads1015_events, \
.num_event_specs = ARRAY_SIZE(ads1015_events), \
.datasheet_name = "AIN"#_chan"-AIN"#_chan2, \
}
@ -150,6 +195,8 @@ static const struct {
.storagebits = 16, \
.endianness = IIO_CPU, \
}, \
.event_spec = ads1015_events, \
.num_event_specs = ARRAY_SIZE(ads1015_events), \
.datasheet_name = "AIN"#_chan, \
}
@ -170,9 +217,17 @@ static const struct {
.storagebits = 16, \
.endianness = IIO_CPU, \
}, \
.event_spec = ads1015_events, \
.num_event_specs = ARRAY_SIZE(ads1015_events), \
.datasheet_name = "AIN"#_chan"-AIN"#_chan2, \
}
struct ads1015_thresh_data {
unsigned int comp_queue;
int high_thresh;
int low_thresh;
};
struct ads1015_data {
struct regmap *regmap;
/*
@ -182,18 +237,54 @@ struct ads1015_data {
struct mutex lock;
struct ads1015_channel_data channel_data[ADS1015_CHANNELS];
unsigned int event_channel;
unsigned int comp_mode;
struct ads1015_thresh_data thresh_data[ADS1015_CHANNELS];
unsigned int *data_rate;
/*
* Set to true when the ADC is switched to the continuous-conversion
* mode and exits from a power-down state. This flag is used to avoid
* getting the stale result from the conversion register.
*/
bool conv_invalid;
};
static bool ads1015_event_channel_enabled(struct ads1015_data *data)
{
return (data->event_channel != ADS1015_CHANNELS);
}
static void ads1015_event_channel_enable(struct ads1015_data *data, int chan,
int comp_mode)
{
WARN_ON(ads1015_event_channel_enabled(data));
data->event_channel = chan;
data->comp_mode = comp_mode;
}
static void ads1015_event_channel_disable(struct ads1015_data *data, int chan)
{
data->event_channel = ADS1015_CHANNELS;
}
static bool ads1015_is_writeable_reg(struct device *dev, unsigned int reg)
{
return (reg == ADS1015_CFG_REG);
switch (reg) {
case ADS1015_CFG_REG:
case ADS1015_LO_THRESH_REG:
case ADS1015_HI_THRESH_REG:
return true;
default:
return false;
}
}
static const struct regmap_config ads1015_regmap_config = {
.reg_bits = 8,
.val_bits = 16,
.max_register = ADS1015_CFG_REG,
.max_register = ADS1015_HI_THRESH_REG,
.writeable_reg = ads1015_is_writeable_reg,
};
@ -235,33 +326,51 @@ static int ads1015_set_power_state(struct ads1015_data *data, bool on)
ret = pm_runtime_put_autosuspend(dev);
}
return ret;
return ret < 0 ? ret : 0;
}
static
int ads1015_get_adc_result(struct ads1015_data *data, int chan, int *val)
{
int ret, pga, dr, conv_time;
bool change;
unsigned int old, mask, cfg;
if (chan < 0 || chan >= ADS1015_CHANNELS)
return -EINVAL;
pga = data->channel_data[chan].pga;
dr = data->channel_data[chan].data_rate;
ret = regmap_update_bits_check(data->regmap, ADS1015_CFG_REG,
ADS1015_CFG_MUX_MASK |
ADS1015_CFG_PGA_MASK,
chan << ADS1015_CFG_MUX_SHIFT |
pga << ADS1015_CFG_PGA_SHIFT,
&change);
if (ret < 0)
ret = regmap_read(data->regmap, ADS1015_CFG_REG, &old);
if (ret)
return ret;
if (change) {
conv_time = DIV_ROUND_UP(USEC_PER_SEC, data->data_rate[dr]);
pga = data->channel_data[chan].pga;
dr = data->channel_data[chan].data_rate;
mask = ADS1015_CFG_MUX_MASK | ADS1015_CFG_PGA_MASK |
ADS1015_CFG_DR_MASK;
cfg = chan << ADS1015_CFG_MUX_SHIFT | pga << ADS1015_CFG_PGA_SHIFT |
dr << ADS1015_CFG_DR_SHIFT;
if (ads1015_event_channel_enabled(data)) {
mask |= ADS1015_CFG_COMP_QUE_MASK | ADS1015_CFG_COMP_MODE_MASK;
cfg |= data->thresh_data[chan].comp_queue <<
ADS1015_CFG_COMP_QUE_SHIFT |
data->comp_mode <<
ADS1015_CFG_COMP_MODE_SHIFT;
}
cfg = (old & ~mask) | (cfg & mask);
ret = regmap_write(data->regmap, ADS1015_CFG_REG, cfg);
if (ret)
return ret;
if (old != cfg || data->conv_invalid) {
int dr_old = (old & ADS1015_CFG_DR_MASK) >>
ADS1015_CFG_DR_SHIFT;
conv_time = DIV_ROUND_UP(USEC_PER_SEC, data->data_rate[dr_old]);
conv_time += DIV_ROUND_UP(USEC_PER_SEC, data->data_rate[dr]);
usleep_range(conv_time, conv_time + 1);
data->conv_invalid = false;
}
return regmap_read(data->regmap, ADS1015_CONV_REG, val);
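The wait above adds one conversion period at the previously programmed data rate, to flush a conversion that may still be running, to one period at the newly programmed rate. Worked numbers for a switch from 1600 SPS down to 128 SPS follow, as a throwaway sketch rather than driver code; the two rate indices are just an example.

/* Worked settle-time numbers for the ADS1015 wait above; not kernel code. */
#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
#define USEC_PER_SEC 1000000

/* ADS1015 data rates in samples per second, as listed in the driver */
static const unsigned int ads1015_data_rate[] = {
	128, 250, 490, 920, 1600, 2400, 3300
};

int main(void)
{
	int dr_old = 4;		/* e.g. previous setting: 1600 SPS */
	int dr = 0;		/* e.g. new setting: 128 SPS */
	unsigned int conv_time;

	/* one period at the old rate (a conversion may still be running)... */
	conv_time = DIV_ROUND_UP(USEC_PER_SEC, ads1015_data_rate[dr_old]);
	/* ...plus one period at the new rate for a fresh result */
	conv_time += DIV_ROUND_UP(USEC_PER_SEC, ads1015_data_rate[dr]);

	printf("sleep at least %u us before reading\n", conv_time);	/* 8438 */
	return 0;
}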
@ -298,52 +407,36 @@ err:
return IRQ_HANDLED;
}
static int ads1015_set_scale(struct ads1015_data *data, int chan,
static int ads1015_set_scale(struct ads1015_data *data,
struct iio_chan_spec const *chan,
int scale, int uscale)
{
int i, ret, rindex = -1;
int i;
int fullscale = div_s64((scale * 1000000LL + uscale) <<
(chan->scan_type.realbits - 1), 1000000);
for (i = 0; i < ARRAY_SIZE(ads1015_scale); i++)
if (ads1015_scale[i].scale == scale &&
ads1015_scale[i].uscale == uscale) {
rindex = i;
break;
for (i = 0; i < ARRAY_SIZE(ads1015_fullscale_range); i++) {
if (ads1015_fullscale_range[i] == fullscale) {
data->channel_data[chan->address].pga = i;
return 0;
}
if (rindex < 0)
return -EINVAL;
}
ret = regmap_update_bits(data->regmap, ADS1015_CFG_REG,
ADS1015_CFG_PGA_MASK,
rindex << ADS1015_CFG_PGA_SHIFT);
if (ret < 0)
return ret;
data->channel_data[chan].pga = rindex;
return 0;
return -EINVAL;
}
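The rewritten ads1015_set_scale() above maps a requested scale in millivolts per LSB back to a PGA index by multiplying it by 2^(realbits - 1) and looking the product up in ads1015_fullscale_range[]. A small user-space sketch of that mapping; scale_to_pga() is a made-up helper and the floating-point rounding stands in for the driver's fixed-point div_s64() arithmetic.

/* Worked scale <-> full-scale mapping for ads1015_set_scale(); a sketch only. */
#include <stdio.h>

/* Full-scale input range in mV per PGA setting, as tabled in the driver */
static const int ads1015_fullscale_range[] = {
	6144, 4096, 2048, 1024, 512, 256, 256, 256
};

/* Return the PGA index matching a requested scale (mV per LSB), or -1 */
static int scale_to_pga(double scale_mv, int realbits)
{
	int fullscale = (int)(scale_mv * (1 << (realbits - 1)) + 0.5);
	unsigned int i;

	for (i = 0; i < sizeof(ads1015_fullscale_range) / sizeof(int); i++)
		if (ads1015_fullscale_range[i] == fullscale)
			return i;
	return -1;
}

int main(void)
{
	/* ADS1015 results are 12 bits wide, ADS1115 results are 16 bits */
	printf("ads1015, scale 0.5    -> PGA %d\n", scale_to_pga(0.5, 12));	/* 1024 mV -> 3 */
	printf("ads1115, scale 0.0625 -> PGA %d\n", scale_to_pga(0.0625, 16));	/* 2048 mV -> 2 */
	return 0;
}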
static int ads1015_set_data_rate(struct ads1015_data *data, int chan, int rate)
{
int i, ret, rindex = -1;
int i;
for (i = 0; i < ARRAY_SIZE(ads1015_data_rate); i++)
for (i = 0; i < ARRAY_SIZE(ads1015_data_rate); i++) {
if (data->data_rate[i] == rate) {
rindex = i;
break;
data->channel_data[chan].data_rate = i;
return 0;
}
if (rindex < 0)
return -EINVAL;
}
ret = regmap_update_bits(data->regmap, ADS1015_CFG_REG,
ADS1015_CFG_DR_MASK,
rindex << ADS1015_CFG_DR_SHIFT);
if (ret < 0)
return ret;
data->channel_data[chan].data_rate = rindex;
return 0;
return -EINVAL;
}
static int ads1015_read_raw(struct iio_dev *indio_dev,
@ -353,41 +446,47 @@ static int ads1015_read_raw(struct iio_dev *indio_dev,
int ret, idx;
struct ads1015_data *data = iio_priv(indio_dev);
mutex_lock(&indio_dev->mlock);
mutex_lock(&data->lock);
switch (mask) {
case IIO_CHAN_INFO_RAW: {
int shift = chan->scan_type.shift;
if (iio_buffer_enabled(indio_dev)) {
ret = -EBUSY;
ret = iio_device_claim_direct_mode(indio_dev);
if (ret)
break;
if (ads1015_event_channel_enabled(data) &&
data->event_channel != chan->address) {
ret = -EBUSY;
goto release_direct;
}
ret = ads1015_set_power_state(data, true);
if (ret < 0)
break;
goto release_direct;
ret = ads1015_get_adc_result(data, chan->address, val);
if (ret < 0) {
ads1015_set_power_state(data, false);
break;
goto release_direct;
}
*val = sign_extend32(*val >> shift, 15 - shift);
ret = ads1015_set_power_state(data, false);
if (ret < 0)
break;
goto release_direct;
ret = IIO_VAL_INT;
release_direct:
iio_device_release_direct_mode(indio_dev);
break;
}
case IIO_CHAN_INFO_SCALE:
idx = data->channel_data[chan->address].pga;
*val = ads1015_scale[idx].scale;
*val2 = ads1015_scale[idx].uscale;
ret = IIO_VAL_INT_PLUS_MICRO;
*val = ads1015_fullscale_range[idx];
*val2 = chan->scan_type.realbits - 1;
ret = IIO_VAL_FRACTIONAL_LOG2;
break;
case IIO_CHAN_INFO_SAMP_FREQ:
idx = data->channel_data[chan->address].data_rate;
@ -399,7 +498,6 @@ static int ads1015_read_raw(struct iio_dev *indio_dev,
break;
}
mutex_unlock(&data->lock);
mutex_unlock(&indio_dev->mlock);
return ret;
}
@ -414,7 +512,7 @@ static int ads1015_write_raw(struct iio_dev *indio_dev,
mutex_lock(&data->lock);
switch (mask) {
case IIO_CHAN_INFO_SCALE:
ret = ads1015_set_scale(data, chan->address, val, val2);
ret = ads1015_set_scale(data, chan, val, val2);
break;
case IIO_CHAN_INFO_SAMP_FREQ:
ret = ads1015_set_data_rate(data, chan->address, val);
@ -428,8 +526,254 @@ static int ads1015_write_raw(struct iio_dev *indio_dev,
return ret;
}
static int ads1015_read_event(struct iio_dev *indio_dev,
const struct iio_chan_spec *chan, enum iio_event_type type,
enum iio_event_direction dir, enum iio_event_info info, int *val,
int *val2)
{
struct ads1015_data *data = iio_priv(indio_dev);
int ret;
unsigned int comp_queue;
int period;
int dr;
mutex_lock(&data->lock);
switch (info) {
case IIO_EV_INFO_VALUE:
*val = (dir == IIO_EV_DIR_RISING) ?
data->thresh_data[chan->address].high_thresh :
data->thresh_data[chan->address].low_thresh;
ret = IIO_VAL_INT;
break;
case IIO_EV_INFO_PERIOD:
dr = data->channel_data[chan->address].data_rate;
comp_queue = data->thresh_data[chan->address].comp_queue;
period = ads1015_comp_queue[comp_queue] *
USEC_PER_SEC / data->data_rate[dr];
*val = period / USEC_PER_SEC;
*val2 = period % USEC_PER_SEC;
ret = IIO_VAL_INT_PLUS_MICRO;
break;
default:
ret = -EINVAL;
break;
}
mutex_unlock(&data->lock);
return ret;
}
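The event period read back above is just the comparator queue length times one conversion period, and the write path picks the smallest queue that covers the request, capped at four readings. Worked numbers for a 2 ms request at 1600 SPS, as a throwaway sketch rather than driver code.

/* Worked event-period numbers for the comparator queue mapping above. */
#include <stdio.h>

#define USEC_PER_SEC 1000000

/* Successive out-of-range readings required before ALERT asserts */
static const int ads1015_comp_queue[] = { 1, 2, 4 };

int main(void)
{
	int data_rate = 1600;		/* samples per second */
	long long wanted_us = 2000;	/* requested event period: 2 ms */
	unsigned int i, n = sizeof(ads1015_comp_queue) / sizeof(int);

	/* smallest queue whose duration covers the request, capped at the last */
	for (i = 0; i < n - 1; i++)
		if (wanted_us <= ads1015_comp_queue[i] * USEC_PER_SEC / data_rate)
			break;

	printf("queue of %d reading(s), effective period %d us\n",
	       ads1015_comp_queue[i],
	       ads1015_comp_queue[i] * USEC_PER_SEC / data_rate);
	/* -> queue of 4 readings, 2500 us */
	return 0;
}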
static int ads1015_write_event(struct iio_dev *indio_dev,
const struct iio_chan_spec *chan, enum iio_event_type type,
enum iio_event_direction dir, enum iio_event_info info, int val,
int val2)
{
struct ads1015_data *data = iio_priv(indio_dev);
int realbits = chan->scan_type.realbits;
int ret = 0;
long long period;
int i;
int dr;
mutex_lock(&data->lock);
switch (info) {
case IIO_EV_INFO_VALUE:
if (val >= 1 << (realbits - 1) || val < -1 << (realbits - 1)) {
ret = -EINVAL;
break;
}
if (dir == IIO_EV_DIR_RISING)
data->thresh_data[chan->address].high_thresh = val;
else
data->thresh_data[chan->address].low_thresh = val;
break;
case IIO_EV_INFO_PERIOD:
dr = data->channel_data[chan->address].data_rate;
period = val * USEC_PER_SEC + val2;
for (i = 0; i < ARRAY_SIZE(ads1015_comp_queue) - 1; i++) {
if (period <= ads1015_comp_queue[i] *
USEC_PER_SEC / data->data_rate[dr])
break;
}
data->thresh_data[chan->address].comp_queue = i;
break;
default:
ret = -EINVAL;
break;
}
mutex_unlock(&data->lock);
return ret;
}
static int ads1015_read_event_config(struct iio_dev *indio_dev,
const struct iio_chan_spec *chan, enum iio_event_type type,
enum iio_event_direction dir)
{
struct ads1015_data *data = iio_priv(indio_dev);
int ret = 0;
mutex_lock(&data->lock);
if (data->event_channel == chan->address) {
switch (dir) {
case IIO_EV_DIR_RISING:
ret = 1;
break;
case IIO_EV_DIR_EITHER:
ret = (data->comp_mode == ADS1015_CFG_COMP_MODE_WINDOW);
break;
default:
ret = -EINVAL;
break;
}
}
mutex_unlock(&data->lock);
return ret;
}
static int ads1015_enable_event_config(struct ads1015_data *data,
const struct iio_chan_spec *chan, int comp_mode)
{
int low_thresh = data->thresh_data[chan->address].low_thresh;
int high_thresh = data->thresh_data[chan->address].high_thresh;
int ret;
unsigned int val;
if (ads1015_event_channel_enabled(data)) {
if (data->event_channel != chan->address ||
(data->comp_mode == ADS1015_CFG_COMP_MODE_TRAD &&
comp_mode == ADS1015_CFG_COMP_MODE_WINDOW))
return -EBUSY;
return 0;
}
if (comp_mode == ADS1015_CFG_COMP_MODE_TRAD) {
low_thresh = max(-1 << (chan->scan_type.realbits - 1),
high_thresh - 1);
}
ret = regmap_write(data->regmap, ADS1015_LO_THRESH_REG,
low_thresh << chan->scan_type.shift);
if (ret)
return ret;
ret = regmap_write(data->regmap, ADS1015_HI_THRESH_REG,
high_thresh << chan->scan_type.shift);
if (ret)
return ret;
ret = ads1015_set_power_state(data, true);
if (ret < 0)
return ret;
ads1015_event_channel_enable(data, chan->address, comp_mode);
ret = ads1015_get_adc_result(data, chan->address, &val);
if (ret) {
ads1015_event_channel_disable(data, chan->address);
ads1015_set_power_state(data, false);
}
return ret;
}
static int ads1015_disable_event_config(struct ads1015_data *data,
const struct iio_chan_spec *chan, int comp_mode)
{
int ret;
if (!ads1015_event_channel_enabled(data))
return 0;
if (data->event_channel != chan->address)
return 0;
if (data->comp_mode == ADS1015_CFG_COMP_MODE_TRAD &&
comp_mode == ADS1015_CFG_COMP_MODE_WINDOW)
return 0;
ret = regmap_update_bits(data->regmap, ADS1015_CFG_REG,
ADS1015_CFG_COMP_QUE_MASK,
ADS1015_CFG_COMP_DISABLE <<
ADS1015_CFG_COMP_QUE_SHIFT);
if (ret)
return ret;
ads1015_event_channel_disable(data, chan->address);
return ads1015_set_power_state(data, false);
}
static int ads1015_write_event_config(struct iio_dev *indio_dev,
const struct iio_chan_spec *chan, enum iio_event_type type,
enum iio_event_direction dir, int state)
{
struct ads1015_data *data = iio_priv(indio_dev);
int ret;
int comp_mode = (dir == IIO_EV_DIR_EITHER) ?
ADS1015_CFG_COMP_MODE_WINDOW : ADS1015_CFG_COMP_MODE_TRAD;
mutex_lock(&data->lock);
/* Prevent enabling both the buffer and an event at the same time */
ret = iio_device_claim_direct_mode(indio_dev);
if (ret) {
mutex_unlock(&data->lock);
return ret;
}
if (state)
ret = ads1015_enable_event_config(data, chan, comp_mode);
else
ret = ads1015_disable_event_config(data, chan, comp_mode);
iio_device_release_direct_mode(indio_dev);
mutex_unlock(&data->lock);
return ret;
}
static irqreturn_t ads1015_event_handler(int irq, void *priv)
{
struct iio_dev *indio_dev = priv;
struct ads1015_data *data = iio_priv(indio_dev);
int val;
int ret;
/* Clear the latched ALERT/RDY pin */
ret = regmap_read(data->regmap, ADS1015_CONV_REG, &val);
if (ret)
return IRQ_HANDLED;
if (ads1015_event_channel_enabled(data)) {
enum iio_event_direction dir;
u64 code;
dir = data->comp_mode == ADS1015_CFG_COMP_MODE_TRAD ?
IIO_EV_DIR_RISING : IIO_EV_DIR_EITHER;
code = IIO_UNMOD_EVENT_CODE(IIO_VOLTAGE, data->event_channel,
IIO_EV_TYPE_THRESH, dir);
iio_push_event(indio_dev, code, iio_get_time_ns(indio_dev));
}
return IRQ_HANDLED;
}
static int ads1015_buffer_preenable(struct iio_dev *indio_dev)
{
struct ads1015_data *data = iio_priv(indio_dev);
/* Prevent enabling both the buffer and an event at the same time */
if (ads1015_event_channel_enabled(data))
return -EBUSY;
return ads1015_set_power_state(iio_priv(indio_dev), true);
}
@ -446,7 +790,10 @@ static const struct iio_buffer_setup_ops ads1015_buffer_setup_ops = {
.validate_scan_mask = &iio_validate_scan_mask_onehot,
};
static IIO_CONST_ATTR(scale_available, "3 2 1 0.5 0.25 0.125");
static IIO_CONST_ATTR_NAMED(ads1015_scale_available, scale_available,
"3 2 1 0.5 0.25 0.125");
static IIO_CONST_ATTR_NAMED(ads1115_scale_available, scale_available,
"0.1875 0.125 0.0625 0.03125 0.015625 0.007813");
static IIO_CONST_ATTR_NAMED(ads1015_sampling_frequency_available,
sampling_frequency_available, "128 250 490 920 1600 2400 3300");
@ -454,7 +801,7 @@ static IIO_CONST_ATTR_NAMED(ads1115_sampling_frequency_available,
sampling_frequency_available, "8 16 32 64 128 250 475 860");
static struct attribute *ads1015_attributes[] = {
&iio_const_attr_scale_available.dev_attr.attr,
&iio_const_attr_ads1015_scale_available.dev_attr.attr,
&iio_const_attr_ads1015_sampling_frequency_available.dev_attr.attr,
NULL,
};
@ -464,7 +811,7 @@ static const struct attribute_group ads1015_attribute_group = {
};
static struct attribute *ads1115_attributes[] = {
&iio_const_attr_scale_available.dev_attr.attr,
&iio_const_attr_ads1115_scale_available.dev_attr.attr,
&iio_const_attr_ads1115_sampling_frequency_available.dev_attr.attr,
NULL,
};
@ -477,6 +824,10 @@ static const struct iio_info ads1015_info = {
.driver_module = THIS_MODULE,
.read_raw = ads1015_read_raw,
.write_raw = ads1015_write_raw,
.read_event_value = ads1015_read_event,
.write_event_value = ads1015_write_event,
.read_event_config = ads1015_read_event_config,
.write_event_config = ads1015_write_event_config,
.attrs = &ads1015_attribute_group,
};
@ -484,6 +835,10 @@ static const struct iio_info ads1115_info = {
.driver_module = THIS_MODULE,
.read_raw = ads1015_read_raw,
.write_raw = ads1015_write_raw,
.read_event_value = ads1015_read_event,
.write_event_value = ads1015_write_event,
.read_event_config = ads1015_read_event_config,
.write_event_config = ads1015_write_event_config,
.attrs = &ads1115_attribute_group,
};
@ -505,24 +860,24 @@ static int ads1015_get_channels_config_of(struct i2c_client *client)
unsigned int data_rate = ADS1015_DEFAULT_DATA_RATE;
if (of_property_read_u32(node, "reg", &pval)) {
dev_err(&client->dev, "invalid reg on %s\n",
node->full_name);
dev_err(&client->dev, "invalid reg on %pOF\n",
node);
continue;
}
channel = pval;
if (channel >= ADS1015_CHANNELS) {
dev_err(&client->dev,
"invalid channel index %d on %s\n",
channel, node->full_name);
"invalid channel index %d on %pOF\n",
channel, node);
continue;
}
if (!of_property_read_u32(node, "ti,gain", &pval)) {
pga = pval;
if (pga > 6) {
dev_err(&client->dev, "invalid gain on %s\n",
node->full_name);
dev_err(&client->dev, "invalid gain on %pOF\n",
node);
of_node_put(node);
return -EINVAL;
}
@ -532,8 +887,8 @@ static int ads1015_get_channels_config_of(struct i2c_client *client)
data_rate = pval;
if (data_rate > 7) {
dev_err(&client->dev,
"invalid data_rate on %s\n",
node->full_name);
"invalid data_rate on %pOF\n",
node);
of_node_put(node);
return -EINVAL;
}
@ -573,6 +928,13 @@ static void ads1015_get_channels_config(struct i2c_client *client)
}
}
static int ads1015_set_conv_mode(struct ads1015_data *data, int mode)
{
return regmap_update_bits(data->regmap, ADS1015_CFG_REG,
ADS1015_CFG_MOD_MASK,
mode << ADS1015_CFG_MOD_SHIFT);
}
static int ads1015_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
@ -580,6 +942,7 @@ static int ads1015_probe(struct i2c_client *client,
struct ads1015_data *data;
int ret;
enum chip_ids chip;
int i;
indio_dev = devm_iio_device_alloc(&client->dev, sizeof(*data));
if (!indio_dev)
@ -614,6 +977,18 @@ static int ads1015_probe(struct i2c_client *client,
break;
}
data->event_channel = ADS1015_CHANNELS;
/*
* Set default lower and upper threshold to min and max value
* respectively.
*/
for (i = 0; i < ADS1015_CHANNELS; i++) {
int realbits = indio_dev->channels[i].scan_type.realbits;
data->thresh_data[i].low_thresh = -1 << (realbits - 1);
data->thresh_data[i].high_thresh = (1 << (realbits - 1)) - 1;
}
/* we need to keep this ABI the same as used by hwmon ADS1015 driver */
ads1015_get_channels_config(client);
@ -623,16 +998,56 @@ static int ads1015_probe(struct i2c_client *client,
return PTR_ERR(data->regmap);
}
ret = iio_triggered_buffer_setup(indio_dev, NULL,
ads1015_trigger_handler,
&ads1015_buffer_setup_ops);
ret = devm_iio_triggered_buffer_setup(&client->dev, indio_dev, NULL,
ads1015_trigger_handler,
&ads1015_buffer_setup_ops);
if (ret < 0) {
dev_err(&client->dev, "iio triggered buffer setup failed\n");
return ret;
}
if (client->irq) {
unsigned long irq_trig =
irqd_get_trigger_type(irq_get_irq_data(client->irq));
unsigned int cfg_comp_mask = ADS1015_CFG_COMP_QUE_MASK |
ADS1015_CFG_COMP_LAT_MASK | ADS1015_CFG_COMP_POL_MASK;
unsigned int cfg_comp =
ADS1015_CFG_COMP_DISABLE << ADS1015_CFG_COMP_QUE_SHIFT |
1 << ADS1015_CFG_COMP_LAT_SHIFT;
switch (irq_trig) {
case IRQF_TRIGGER_LOW:
cfg_comp |= ADS1015_CFG_COMP_POL_LOW;
break;
case IRQF_TRIGGER_HIGH:
cfg_comp |= ADS1015_CFG_COMP_POL_HIGH;
break;
default:
return -EINVAL;
}
ret = regmap_update_bits(data->regmap, ADS1015_CFG_REG,
cfg_comp_mask, cfg_comp);
if (ret)
return ret;
ret = devm_request_threaded_irq(&client->dev, client->irq,
NULL, ads1015_event_handler,
irq_trig | IRQF_ONESHOT,
client->name, indio_dev);
if (ret)
return ret;
}
ret = ads1015_set_conv_mode(data, ADS1015_CONTINUOUS);
if (ret)
return ret;
data->conv_invalid = true;
ret = pm_runtime_set_active(&client->dev);
if (ret)
goto err_buffer_cleanup;
return ret;
pm_runtime_set_autosuspend_delay(&client->dev, ADS1015_SLEEP_DELAY_MS);
pm_runtime_use_autosuspend(&client->dev);
pm_runtime_enable(&client->dev);
@ -640,15 +1055,10 @@ static int ads1015_probe(struct i2c_client *client,
ret = iio_device_register(indio_dev);
if (ret < 0) {
dev_err(&client->dev, "Failed to register IIO device\n");
goto err_buffer_cleanup;
return ret;
}
return 0;
err_buffer_cleanup:
iio_triggered_buffer_cleanup(indio_dev);
return ret;
}
static int ads1015_remove(struct i2c_client *client)
@ -662,12 +1072,8 @@ static int ads1015_remove(struct i2c_client *client)
pm_runtime_set_suspended(&client->dev);
pm_runtime_put_noidle(&client->dev);
iio_triggered_buffer_cleanup(indio_dev);
/* power down single shot mode */
return regmap_update_bits(data->regmap, ADS1015_CFG_REG,
ADS1015_CFG_MOD_MASK,
ADS1015_SINGLESHOT << ADS1015_CFG_MOD_SHIFT);
return ads1015_set_conv_mode(data, ADS1015_SINGLESHOT);
}
#ifdef CONFIG_PM
@ -676,19 +1082,20 @@ static int ads1015_runtime_suspend(struct device *dev)
struct iio_dev *indio_dev = i2c_get_clientdata(to_i2c_client(dev));
struct ads1015_data *data = iio_priv(indio_dev);
return regmap_update_bits(data->regmap, ADS1015_CFG_REG,
ADS1015_CFG_MOD_MASK,
ADS1015_SINGLESHOT << ADS1015_CFG_MOD_SHIFT);
return ads1015_set_conv_mode(data, ADS1015_SINGLESHOT);
}
static int ads1015_runtime_resume(struct device *dev)
{
struct iio_dev *indio_dev = i2c_get_clientdata(to_i2c_client(dev));
struct ads1015_data *data = iio_priv(indio_dev);
int ret;
return regmap_update_bits(data->regmap, ADS1015_CFG_REG,
ADS1015_CFG_MOD_MASK,
ADS1015_CONTINUOUS << ADS1015_CFG_MOD_SHIFT);
ret = ads1015_set_conv_mode(data, ADS1015_CONTINUOUS);
if (!ret)
data->conv_invalid = true;
return ret;
}
#endif

View File

@ -21,6 +21,7 @@
* GNU General Public License for more details.
*/
#include <linux/acpi.h>
#include <linux/bitops.h>
#include <linux/device.h>
#include <linux/err.h>
@ -37,6 +38,12 @@
#include <linux/iio/trigger_consumer.h>
#include <linux/iio/triggered_buffer.h>
/*
* In the ACPI case, we use 5000 mV as the default for the reference pin.
* Device tree users encode that via the vref-supply regulator.
*/
#define TI_ADS7950_VA_MV_ACPI_DEFAULT 5000
#define TI_ADS7950_CR_MANUAL BIT(12)
#define TI_ADS7950_CR_WRITE BIT(11)
#define TI_ADS7950_CR_CHAN(ch) ((ch) << 7)
@ -58,6 +65,7 @@ struct ti_ads7950_state {
struct spi_message scan_single_msg;
struct regulator *reg;
unsigned int vref_mv;
unsigned int settings;
@ -305,11 +313,15 @@ static int ti_ads7950_get_range(struct ti_ads7950_state *st)
{
int vref;
vref = regulator_get_voltage(st->reg);
if (vref < 0)
return vref;
if (st->vref_mv) {
vref = st->vref_mv;
} else {
vref = regulator_get_voltage(st->reg);
if (vref < 0)
return vref;
vref /= 1000;
vref /= 1000;
}
if (st->settings & TI_ADS7950_CR_RANGE_5V)
vref *= 2;
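Pulling the reference-voltage handling above together: the hard-coded ACPI default or the vref regulator voltage (microvolts scaled down to millivolts), doubled when the 2x input range is selected. A stand-alone sketch, not driver code; the 2.5 V regulator value is hypothetical.

/* Stand-alone model of the ADS7950 reference selection above. */
#include <stdbool.h>
#include <stdio.h>

#define TI_ADS7950_VA_MV_ACPI_DEFAULT 5000

int main(void)
{
	bool acpi = false;		/* enumerated via ACPI? */
	bool range_5v = true;		/* TI_ADS7950_CR_RANGE_5V selected */
	int regulator_uv = 2500000;	/* hypothetical vref-supply voltage */
	int vref;

	/* ACPI firmware provides no regulator, so fall back to 5000 mV */
	vref = acpi ? TI_ADS7950_VA_MV_ACPI_DEFAULT : regulator_uv / 1000;

	/* the 2x range setting doubles the usable input range */
	if (range_5v)
		vref *= 2;

	printf("full-scale input range: %d mV\n", vref);	/* 5000 */
	return 0;
}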
@ -411,6 +423,10 @@ static int ti_ads7950_probe(struct spi_device *spi)
spi_message_init_with_transfers(&st->scan_single_msg,
st->scan_single_xfer, 3);
/* Use hard coded value for reference voltage in ACPI case */
if (ACPI_COMPANION(&spi->dev))
st->vref_mv = TI_ADS7950_VA_MV_ACPI_DEFAULT;
st->reg = devm_regulator_get(&spi->dev, "vref");
if (IS_ERR(st->reg)) {
dev_err(&spi->dev, "Failed get get regulator \"vref\"\n");
@ -475,9 +491,27 @@ static const struct spi_device_id ti_ads7950_id[] = {
};
MODULE_DEVICE_TABLE(spi, ti_ads7950_id);
static const struct of_device_id ads7950_of_table[] = {
{ .compatible = "ti,ads7950", .data = &ti_ads7950_chip_info[TI_ADS7950] },
{ .compatible = "ti,ads7951", .data = &ti_ads7950_chip_info[TI_ADS7951] },
{ .compatible = "ti,ads7952", .data = &ti_ads7950_chip_info[TI_ADS7952] },
{ .compatible = "ti,ads7953", .data = &ti_ads7950_chip_info[TI_ADS7953] },
{ .compatible = "ti,ads7954", .data = &ti_ads7950_chip_info[TI_ADS7954] },
{ .compatible = "ti,ads7955", .data = &ti_ads7950_chip_info[TI_ADS7955] },
{ .compatible = "ti,ads7956", .data = &ti_ads7950_chip_info[TI_ADS7956] },
{ .compatible = "ti,ads7957", .data = &ti_ads7950_chip_info[TI_ADS7957] },
{ .compatible = "ti,ads7958", .data = &ti_ads7950_chip_info[TI_ADS7958] },
{ .compatible = "ti,ads7959", .data = &ti_ads7950_chip_info[TI_ADS7959] },
{ .compatible = "ti,ads7960", .data = &ti_ads7950_chip_info[TI_ADS7960] },
{ .compatible = "ti,ads7961", .data = &ti_ads7950_chip_info[TI_ADS7961] },
{ },
};
MODULE_DEVICE_TABLE(of, ads7950_of_table);
static struct spi_driver ti_ads7950_driver = {
.driver = {
.name = "ads7950",
.of_match_table = ads7950_of_table,
},
.probe = ti_ads7950_probe,
.remove = ti_ads7950_remove,

View File

@ -68,7 +68,7 @@ void xadc_handle_events(struct iio_dev *indio_dev, unsigned long events)
xadc_handle_event(indio_dev, i);
}
static unsigned xadc_get_threshold_offset(const struct iio_chan_spec *chan,
static unsigned int xadc_get_threshold_offset(const struct iio_chan_spec *chan,
enum iio_event_direction dir)
{
unsigned int offset;
@ -90,26 +90,24 @@ static unsigned xadc_get_threshold_offset(const struct iio_chan_spec *chan,
static unsigned int xadc_get_alarm_mask(const struct iio_chan_spec *chan)
{
if (chan->type == IIO_TEMP) {
if (chan->type == IIO_TEMP)
return XADC_ALARM_OT_MASK;
} else {
switch (chan->channel) {
case 0:
return XADC_ALARM_VCCINT_MASK;
case 1:
return XADC_ALARM_VCCAUX_MASK;
case 2:
return XADC_ALARM_VCCBRAM_MASK;
case 3:
return XADC_ALARM_VCCPINT_MASK;
case 4:
return XADC_ALARM_VCCPAUX_MASK;
case 5:
return XADC_ALARM_VCCODDR_MASK;
default:
/* We will never get here */
return 0;
}
switch (chan->channel) {
case 0:
return XADC_ALARM_VCCINT_MASK;
case 1:
return XADC_ALARM_VCCAUX_MASK;
case 2:
return XADC_ALARM_VCCBRAM_MASK;
case 3:
return XADC_ALARM_VCCPINT_MASK;
case 4:
return XADC_ALARM_VCCPAUX_MASK;
case 5:
return XADC_ALARM_VCCODDR_MASK;
default:
/* We will never get here */
return 0;
}
}

View File

@ -71,13 +71,13 @@ struct xadc {
};
struct xadc_ops {
int (*read)(struct xadc *, unsigned int, uint16_t *);
int (*write)(struct xadc *, unsigned int, uint16_t);
int (*read)(struct xadc *xadc, unsigned int reg, uint16_t *val);
int (*write)(struct xadc *xadc, unsigned int reg, uint16_t val);
int (*setup)(struct platform_device *pdev, struct iio_dev *indio_dev,
int irq);
void (*update_alarm)(struct xadc *, unsigned int);
unsigned long (*get_dclk_rate)(struct xadc *);
irqreturn_t (*interrupt_handler)(int, void *);
void (*update_alarm)(struct xadc *xadc, unsigned int alarm);
unsigned long (*get_dclk_rate)(struct xadc *xadc);
irqreturn_t (*interrupt_handler)(int irq, void *devid);
unsigned int flags;
};

View File

@ -21,6 +21,15 @@ config ATLAS_PH_SENSOR
To compile this driver as module, choose M here: the
module will be called atlas-ph-sensor.
config CCS811
tristate "AMS CCS811 VOC sensor"
depends on I2C
select IIO_BUFFER
select IIO_TRIGGERED_BUFFER
help
Say Y here to build I2C interface support for the AMS
CCS811 VOC (Volatile Organic Compounds) sensor
config IAQCORE
tristate "AMS iAQ-Core VOC sensors"
depends on I2C

View File

@ -4,5 +4,6 @@
# When adding new entries keep the list in alphabetical order
obj-$(CONFIG_ATLAS_PH_SENSOR) += atlas-ph-sensor.o
obj-$(CONFIG_CCS811) += ccs811.o
obj-$(CONFIG_IAQCORE) += ams-iaq-core.o
obj-$(CONFIG_VZ89X) += vz89x.o

View File

@ -0,0 +1,405 @@
/*
* ccs811.c - Support for AMS CCS811 VOC Sensor
*
* Copyright (C) 2017 Narcisa Vasile <narcisaanamaria12@gmail.com>
*
* Datasheet: ams.com/content/download/951091/2269479/CCS811_DS000459_3-00.pdf
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* IIO driver for AMS CCS811 (I2C address 0x5A/0x5B set by ADDR Low/High)
*
* TODO:
* 1. Make the drive mode selectable from userspace
* 2. Add support for interrupts
* 3. Adjust time to wait for data to be ready based on selected operation mode
* 4. Read error register and put the information in logs
*/
#include <linux/delay.h>
#include <linux/i2c.h>
#include <linux/iio/iio.h>
#include <linux/iio/buffer.h>
#include <linux/iio/triggered_buffer.h>
#include <linux/iio/trigger_consumer.h>
#include <linux/module.h>
#define CCS811_STATUS 0x00
#define CCS811_MEAS_MODE 0x01
#define CCS811_ALG_RESULT_DATA 0x02
#define CCS811_RAW_DATA 0x03
#define CCS811_HW_ID 0x20
#define CCS881_HW_ID_VALUE 0x81
#define CCS811_HW_VERSION 0x21
#define CCS811_HW_VERSION_VALUE 0x10
#define CCS811_HW_VERSION_MASK 0xF0
#define CCS811_ERR 0xE0
/* Used to transition from boot to application mode */
#define CCS811_APP_START 0xF4
/* Status register flags */
#define CCS811_STATUS_ERROR BIT(0)
#define CCS811_STATUS_DATA_READY BIT(3)
#define CCS811_STATUS_APP_VALID_MASK BIT(4)
#define CCS811_STATUS_APP_VALID_LOADED BIT(4)
/*
* Value of FW_MODE bit of STATUS register describes the sensor's state:
* 0: Firmware is in boot mode, this allows new firmware to be loaded
* 1: Firmware is in application mode. CCS811 is ready to take ADC measurements
*/
#define CCS811_STATUS_FW_MODE_MASK BIT(7)
#define CCS811_STATUS_FW_MODE_APPLICATION BIT(7)
/* Measurement modes */
#define CCS811_MODE_IDLE 0x00
#define CCS811_MODE_IAQ_1SEC 0x10
#define CCS811_MODE_IAQ_10SEC 0x20
#define CCS811_MODE_IAQ_60SEC 0x30
#define CCS811_MODE_RAW_DATA 0x40
#define CCS811_VOLTAGE_MASK 0x3FF
struct ccs811_reading {
__be16 co2;
__be16 voc;
u8 status;
u8 error;
__be16 resistance;
} __attribute__((__packed__));
struct ccs811_data {
struct i2c_client *client;
struct mutex lock; /* Protect readings */
struct ccs811_reading buffer;
};
static const struct iio_chan_spec ccs811_channels[] = {
{
.type = IIO_CURRENT,
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
BIT(IIO_CHAN_INFO_SCALE),
.scan_index = -1,
}, {
.type = IIO_VOLTAGE,
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
BIT(IIO_CHAN_INFO_SCALE),
.scan_index = -1,
}, {
.type = IIO_CONCENTRATION,
.channel2 = IIO_MOD_CO2,
.modified = 1,
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
BIT(IIO_CHAN_INFO_OFFSET) |
BIT(IIO_CHAN_INFO_SCALE),
.scan_index = 0,
.scan_type = {
.sign = 'u',
.realbits = 16,
.storagebits = 16,
.endianness = IIO_BE,
},
}, {
.type = IIO_CONCENTRATION,
.channel2 = IIO_MOD_VOC,
.modified = 1,
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
BIT(IIO_CHAN_INFO_SCALE),
.scan_index = 1,
.scan_type = {
.sign = 'u',
.realbits = 16,
.storagebits = 16,
.endianness = IIO_BE,
},
},
IIO_CHAN_SOFT_TIMESTAMP(2),
};
/*
* The CCS811 powers-up in boot mode. A setup write to CCS811_APP_START will
* transition the sensor to application mode.
*/
static int ccs811_start_sensor_application(struct i2c_client *client)
{
int ret;
ret = i2c_smbus_read_byte_data(client, CCS811_STATUS);
if (ret < 0)
return ret;
if ((ret & CCS811_STATUS_APP_VALID_MASK) !=
CCS811_STATUS_APP_VALID_LOADED)
return -EIO;
ret = i2c_smbus_write_byte(client, CCS811_APP_START);
if (ret < 0)
return ret;
ret = i2c_smbus_read_byte_data(client, CCS811_STATUS);
if (ret < 0)
return ret;
if ((ret & CCS811_STATUS_FW_MODE_MASK) !=
CCS811_STATUS_FW_MODE_APPLICATION) {
dev_err(&client->dev, "Application failed to start. Sensor is still in boot mode.\n");
return -EIO;
}
return 0;
}
static int ccs811_setup(struct i2c_client *client)
{
int ret;
ret = ccs811_start_sensor_application(client);
if (ret < 0)
return ret;
return i2c_smbus_write_byte_data(client, CCS811_MEAS_MODE,
CCS811_MODE_IAQ_1SEC);
}
static int ccs811_get_measurement(struct ccs811_data *data)
{
int ret, tries = 11;
/* Maximum waiting time: 1s, as measurements are made every second */
while (tries-- > 0) {
ret = i2c_smbus_read_byte_data(data->client, CCS811_STATUS);
if (ret < 0)
return ret;
if ((ret & CCS811_STATUS_DATA_READY) || tries == 0)
break;
msleep(100);
}
if (!(ret & CCS811_STATUS_DATA_READY))
return -EIO;
return i2c_smbus_read_i2c_block_data(data->client,
CCS811_ALG_RESULT_DATA, 8,
(char *)&data->buffer);
}
static int ccs811_read_raw(struct iio_dev *indio_dev,
struct iio_chan_spec const *chan,
int *val, int *val2, long mask)
{
struct ccs811_data *data = iio_priv(indio_dev);
int ret;
switch (mask) {
case IIO_CHAN_INFO_RAW:
mutex_lock(&data->lock);
ret = ccs811_get_measurement(data);
if (ret < 0) {
mutex_unlock(&data->lock);
return ret;
}
switch (chan->type) {
case IIO_VOLTAGE:
*val = be16_to_cpu(data->buffer.resistance) &
CCS811_VOLTAGE_MASK;
ret = IIO_VAL_INT;
break;
case IIO_CURRENT:
*val = be16_to_cpu(data->buffer.resistance) >> 10;
ret = IIO_VAL_INT;
break;
case IIO_CONCENTRATION:
switch (chan->channel2) {
case IIO_MOD_CO2:
*val = be16_to_cpu(data->buffer.co2);
ret = IIO_VAL_INT;
break;
case IIO_MOD_VOC:
*val = be16_to_cpu(data->buffer.voc);
ret = IIO_VAL_INT;
break;
default:
ret = -EINVAL;
}
break;
default:
ret = -EINVAL;
}
mutex_unlock(&data->lock);
return ret;
case IIO_CHAN_INFO_SCALE:
switch (chan->type) {
case IIO_VOLTAGE:
*val = 1;
*val2 = 612903;
return IIO_VAL_INT_PLUS_MICRO;
case IIO_CURRENT:
*val = 0;
*val2 = 1000;
return IIO_VAL_INT_PLUS_MICRO;
case IIO_CONCENTRATION:
switch (chan->channel2) {
case IIO_MOD_CO2:
*val = 0;
*val2 = 12834;
return IIO_VAL_INT_PLUS_MICRO;
case IIO_MOD_VOC:
*val = 0;
*val2 = 84246;
return IIO_VAL_INT_PLUS_MICRO;
default:
return -EINVAL;
}
default:
return -EINVAL;
}
case IIO_CHAN_INFO_OFFSET:
if (!(chan->type == IIO_CONCENTRATION &&
chan->channel2 == IIO_MOD_CO2))
return -EINVAL;
*val = -400;
return IIO_VAL_INT;
default:
return -EINVAL;
}
}
static const struct iio_info ccs811_info = {
.read_raw = ccs811_read_raw,
.driver_module = THIS_MODULE,
};
static irqreturn_t ccs811_trigger_handler(int irq, void *p)
{
struct iio_poll_func *pf = p;
struct iio_dev *indio_dev = pf->indio_dev;
struct ccs811_data *data = iio_priv(indio_dev);
struct i2c_client *client = data->client;
s16 buf[8]; /* s16 eCO2 + s16 TVOC + padding + 8 byte timestamp */
int ret;
ret = i2c_smbus_read_i2c_block_data(client, CCS811_ALG_RESULT_DATA, 4,
(u8 *)&buf);
if (ret != 4) {
dev_err(&client->dev, "cannot read sensor data\n");
goto err;
}
iio_push_to_buffers_with_timestamp(indio_dev, buf,
iio_get_time_ns(indio_dev));
err:
iio_trigger_notify_done(indio_dev->trig);
return IRQ_HANDLED;
}
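The 16-byte buffer declared in ccs811_trigger_handler() above packs the two enabled 16-bit channels, padding, and an 8-byte-aligned timestamp, which is the layout iio_push_to_buffers_with_timestamp() expects. A stand-alone illustration follows; the struct and field names are purely illustrative and not part of the driver.

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative mirror of "s16 buf[8]" above: 2 x 16-bit samples, 4 bytes of
 * padding, then the 64-bit timestamp at an 8-byte-aligned offset. */
struct ccs811_scan {
	uint16_t co2;       /* scan_index 0, device sends big-endian */
	uint16_t voc;       /* scan_index 1 */
	uint8_t  pad[4];    /* keeps the timestamp 8-byte aligned */
	int64_t  timestamp; /* filled by iio_push_to_buffers_with_timestamp() */
};

int main(void)
{
	static_assert(sizeof(struct ccs811_scan) == 16, "same size as s16 buf[8]");
	static_assert(offsetof(struct ccs811_scan, timestamp) == 8, "aligned timestamp");
	return 0;
}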
static int ccs811_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
struct iio_dev *indio_dev;
struct ccs811_data *data;
int ret;
if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_WRITE_BYTE
| I2C_FUNC_SMBUS_BYTE_DATA
| I2C_FUNC_SMBUS_READ_I2C_BLOCK))
return -EOPNOTSUPP;
/* Check hardware id (should be 0x81 for this family of devices) */
ret = i2c_smbus_read_byte_data(client, CCS811_HW_ID);
if (ret < 0)
return ret;
if (ret != CCS881_HW_ID_VALUE) {
dev_err(&client->dev, "hardware id doesn't match CCS81x\n");
return -ENODEV;
}
ret = i2c_smbus_read_byte_data(client, CCS811_HW_VERSION);
if (ret < 0)
return ret;
if ((ret & CCS811_HW_VERSION_MASK) != CCS811_HW_VERSION_VALUE) {
dev_err(&client->dev, "no CCS811 sensor\n");
return -ENODEV;
}
indio_dev = devm_iio_device_alloc(&client->dev, sizeof(*data));
if (!indio_dev)
return -ENOMEM;
ret = ccs811_setup(client);
if (ret < 0)
return ret;
data = iio_priv(indio_dev);
i2c_set_clientdata(client, indio_dev);
data->client = client;
mutex_init(&data->lock);
indio_dev->dev.parent = &client->dev;
indio_dev->name = id->name;
indio_dev->info = &ccs811_info;
indio_dev->channels = ccs811_channels;
indio_dev->num_channels = ARRAY_SIZE(ccs811_channels);
ret = iio_triggered_buffer_setup(indio_dev, NULL,
ccs811_trigger_handler, NULL);
if (ret < 0) {
dev_err(&client->dev, "triggered buffer setup failed\n");
goto err_poweroff;
}
ret = iio_device_register(indio_dev);
if (ret < 0) {
dev_err(&client->dev, "unable to register iio device\n");
goto err_buffer_cleanup;
}
return 0;
err_buffer_cleanup:
iio_triggered_buffer_cleanup(indio_dev);
err_poweroff:
i2c_smbus_write_byte_data(client, CCS811_MEAS_MODE, CCS811_MODE_IDLE);
return ret;
}
static int ccs811_remove(struct i2c_client *client)
{
struct iio_dev *indio_dev = i2c_get_clientdata(client);
iio_device_unregister(indio_dev);
iio_triggered_buffer_cleanup(indio_dev);
return i2c_smbus_write_byte_data(client, CCS811_MEAS_MODE,
CCS811_MODE_IDLE);
}
static const struct i2c_device_id ccs811_id[] = {
{"ccs811", 0},
{ }
};
MODULE_DEVICE_TABLE(i2c, ccs811_id);
static struct i2c_driver ccs811_driver = {
.driver = {
.name = "ccs811",
},
.probe = ccs811_probe,
.remove = ccs811_remove,
.id_table = ccs811_id,
};
module_i2c_driver(ccs811_driver);
MODULE_AUTHOR("Narcisa Vasile <narcisaanamaria12@gmail.com>");
MODULE_DESCRIPTION("CCS811 volatile organic compounds sensor");
MODULE_LICENSE("GPL v2");
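As a quick illustration of how the raw/offset/scale triplet exposed by ccs811_read_raw() is meant to be consumed, here is a minimal user-space sketch. It assumes the standard IIO sysfs convention value = (raw + offset) * scale and the usual in_concentration_co2_* attribute names; the device path is a placeholder, not something defined by this driver.

#include <stdio.h>

/* Read one numeric sysfs attribute; returns 0.0 on failure for brevity. */
static double read_attr(const char *dev, const char *name)
{
	char path[256];
	double v = 0.0;
	FILE *f;

	snprintf(path, sizeof(path), "%s/%s", dev, name);
	f = fopen(path, "r");
	if (f) {
		if (fscanf(f, "%lf", &v) != 1)
			v = 0.0;
		fclose(f);
	}
	return v;
}

int main(void)
{
	const char *dev = "/sys/bus/iio/devices/iio:device0"; /* placeholder */
	double raw = read_attr(dev, "in_concentration_co2_raw");
	double off = read_attr(dev, "in_concentration_co2_offset");
	double scale = read_attr(dev, "in_concentration_co2_scale");

	printf("co2 = %f\n", (raw + off) * scale);
	return 0;
}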

View File

@ -15,6 +15,7 @@
#include <linux/iio/iio.h>
#include <linux/regulator/consumer.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <asm/unaligned.h>
#include <linux/iio/common/st_sensors.h>
@ -345,6 +346,36 @@ static struct st_sensors_platform_data *st_sensors_of_probe(struct device *dev,
return pdata;
}
/**
* st_sensors_of_name_probe() - device tree probe for ST sensor name
* @dev: driver model representation of the device.
* @match: the OF match table for the device, containing compatible strings
* but also a .data field with the corresponding internal kernel name
* used by this sensor.
* @name: device name buffer reference.
* @len: device name buffer length.
*
* In effect this function matches a compatible string to an internal kernel
* name for a certain sensor device, so that the rest of the autodetection can
* rely on that name from this point on. I2C/SPI devices will be renamed
* to match the internal kernel convention.
*/
void st_sensors_of_name_probe(struct device *dev,
const struct of_device_id *match,
char *name, int len)
{
const struct of_device_id *of_id;
of_id = of_match_device(match, dev);
if (!of_id || !of_id->data)
return;
/* The name from the OF match takes precedence if present */
strncpy(name, of_id->data, len);
name[len - 1] = '\0';
}
EXPORT_SYMBOL(st_sensors_of_name_probe);
#else
static struct st_sensors_platform_data *st_sensors_of_probe(struct device *dev,
struct st_sensors_platform_data *defdata)
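To make the intent of st_sensors_of_name_probe() concrete, here is a sketch of how an I2C sensor driver calls it during probe; it simply mirrors the call sites added elsewhere in this series (the st_gyro_of_match table from the gyro driver is reused here purely as an example, and the surrounding probe code is elided).

/* Sketch only: rename the I2C client after the OF compatible string so the
 * rest of the ST common code can key off the internal kernel name. */
static int st_gyro_i2c_probe_sketch(struct i2c_client *client)
{
	st_sensors_of_name_probe(&client->dev, st_gyro_of_match,
				 client->name, sizeof(client->name));
	/* ... continue with st_sensors_i2c_configure() and common probe ... */
	return 0;
}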

View File

@ -79,35 +79,6 @@ void st_sensors_i2c_configure(struct iio_dev *indio_dev,
}
EXPORT_SYMBOL(st_sensors_i2c_configure);
#ifdef CONFIG_OF
/**
* st_sensors_of_i2c_probe() - device tree probe for ST I2C sensors
* @client: the I2C client device for the sensor
* @match: the OF match table for the device, containing compatible strings
* but also a .data field with the corresponding internal kernel name
* used by this sensor.
*
* In effect this function matches a compatible string to an internal kernel
* name for a certain sensor device, so that the rest of the autodetection can
* rely on that name from this point on. I2C client devices will be renamed
* to match the internal kernel convention.
*/
void st_sensors_of_i2c_probe(struct i2c_client *client,
const struct of_device_id *match)
{
const struct of_device_id *of_id;
of_id = of_match_device(match, &client->dev);
if (!of_id)
return;
/* The name from the OF match takes precedence if present */
strncpy(client->name, of_id->data, sizeof(client->name));
client->name[sizeof(client->name) - 1] = '\0';
}
EXPORT_SYMBOL(st_sensors_of_i2c_probe);
#endif
#ifdef CONFIG_ACPI
int st_sensors_match_acpi_device(struct device *dev)
{

View File

@ -42,6 +42,14 @@ struct stm32_dac_priv {
struct stm32_dac_common common;
};
/**
* struct stm32_dac_cfg - DAC configuration
* @has_hfsel: DAC has high frequency control
*/
struct stm32_dac_cfg {
bool has_hfsel;
};
static struct stm32_dac_priv *to_stm32_dac_priv(struct stm32_dac_common *com)
{
return container_of(com, struct stm32_dac_priv, common);
@ -57,6 +65,7 @@ static const struct regmap_config stm32_dac_regmap_cfg = {
static int stm32_dac_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
const struct stm32_dac_cfg *cfg;
struct stm32_dac_priv *priv;
struct regmap *regmap;
struct resource *res;
@ -69,6 +78,8 @@ static int stm32_dac_probe(struct platform_device *pdev)
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
return -ENOMEM;
cfg = (const struct stm32_dac_cfg *)
of_match_device(dev->driver->of_match_table, dev)->data;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
mmio = devm_ioremap_resource(dev, res);
@ -114,19 +125,23 @@ static int stm32_dac_probe(struct platform_device *pdev)
goto err_vref;
}
priv->rst = devm_reset_control_get(dev, NULL);
priv->rst = devm_reset_control_get_exclusive(dev, NULL);
if (!IS_ERR(priv->rst)) {
reset_control_assert(priv->rst);
udelay(2);
reset_control_deassert(priv->rst);
}
/* When clock speed is higher than 80MHz, set HFSEL */
priv->common.hfsel = (clk_get_rate(priv->pclk) > 80000000UL);
ret = regmap_update_bits(regmap, STM32_DAC_CR, STM32H7_DAC_CR_HFSEL,
priv->common.hfsel ? STM32H7_DAC_CR_HFSEL : 0);
if (ret)
goto err_pclk;
if (cfg && cfg->has_hfsel) {
/* When clock speed is higher than 80MHz, set HFSEL */
priv->common.hfsel = (clk_get_rate(priv->pclk) > 80000000UL);
ret = regmap_update_bits(regmap, STM32_DAC_CR,
STM32H7_DAC_CR_HFSEL,
priv->common.hfsel ?
STM32H7_DAC_CR_HFSEL : 0);
if (ret)
goto err_pclk;
}
platform_set_drvdata(pdev, &priv->common);
@ -158,8 +173,17 @@ static int stm32_dac_remove(struct platform_device *pdev)
return 0;
}
static const struct stm32_dac_cfg stm32h7_dac_cfg = {
.has_hfsel = true,
};
static const struct of_device_id stm32_dac_of_match[] = {
{ .compatible = "st,stm32h7-dac-core", },
{
.compatible = "st,stm32f4-dac-core",
}, {
.compatible = "st,stm32h7-dac-core",
.data = (void *)&stm32h7_dac_cfg,
},
{},
};
MODULE_DEVICE_TABLE(of, stm32_dac_of_match);

View File

@ -268,7 +268,7 @@ static int stm32_dac_chan_of_init(struct iio_dev *indio_dev)
break;
}
if (i >= ARRAY_SIZE(stm32_dac_channels)) {
dev_err(&indio_dev->dev, "Invalid st,dac-channel\n");
dev_err(&indio_dev->dev, "Invalid reg property\n");
return -EINVAL;
}

View File

@ -1063,11 +1063,6 @@ static int mpu3050_trigger_probe(struct iio_dev *indio_dev, int irq)
case IRQF_TRIGGER_RISING:
dev_info(&indio_dev->dev,
"pulse interrupts on the rising edge\n");
if (mpu3050->irq_opendrain) {
dev_info(&indio_dev->dev,
"rising edge incompatible with open drain\n");
mpu3050->irq_opendrain = false;
}
break;
case IRQF_TRIGGER_FALLING:
mpu3050->irq_actl = true;
@ -1078,11 +1073,6 @@ static int mpu3050_trigger_probe(struct iio_dev *indio_dev, int irq)
mpu3050->irq_latch = true;
dev_info(&indio_dev->dev,
"interrupts active high level\n");
if (mpu3050->irq_opendrain) {
dev_info(&indio_dev->dev,
"active high incompatible with open drain\n");
mpu3050->irq_opendrain = false;
}
/*
* With level IRQs, we mask the IRQ until it is processed,
* but with edge IRQs (pulses) we can queue several interrupts

View File

@ -19,6 +19,7 @@
#define LSM330DL_GYRO_DEV_NAME "lsm330dl_gyro"
#define LSM330DLC_GYRO_DEV_NAME "lsm330dlc_gyro"
#define L3GD20_GYRO_DEV_NAME "l3gd20"
#define L3GD20H_GYRO_DEV_NAME "l3gd20h"
#define L3G4IS_GYRO_DEV_NAME "l3g4is_ui"
#define LSM330_GYRO_DEV_NAME "lsm330_gyro"
#define LSM9DS0_GYRO_DEV_NAME "lsm9ds0_gyro"

View File

@ -35,6 +35,7 @@
#define ST_GYRO_DEFAULT_OUT_Z_L_ADDR 0x2c
/* FULLSCALE */
#define ST_GYRO_FS_AVL_245DPS 245
#define ST_GYRO_FS_AVL_250DPS 250
#define ST_GYRO_FS_AVL_500DPS 500
#define ST_GYRO_FS_AVL_2000DPS 2000
@ -196,17 +197,17 @@ static const struct st_sensor_settings st_gyro_sensors_settings[] = {
.wai = 0xd7,
.wai_addr = ST_SENSORS_DEFAULT_WAI_ADDRESS,
.sensors_supported = {
[0] = L3GD20_GYRO_DEV_NAME,
[0] = L3GD20H_GYRO_DEV_NAME,
},
.ch = (struct iio_chan_spec *)st_gyro_16bit_channels,
.odr = {
.addr = 0x20,
.mask = 0xc0,
.odr_avl = {
{ .hz = 95, .value = 0x00, },
{ .hz = 190, .value = 0x01, },
{ .hz = 380, .value = 0x02, },
{ .hz = 760, .value = 0x03, },
{ .hz = 100, .value = 0x00, },
{ .hz = 200, .value = 0x01, },
{ .hz = 400, .value = 0x02, },
{ .hz = 800, .value = 0x03, },
},
},
.pw = {
@ -224,7 +225,7 @@ static const struct st_sensor_settings st_gyro_sensors_settings[] = {
.mask = 0x30,
.fs_avl = {
[0] = {
.num = ST_GYRO_FS_AVL_250DPS,
.num = ST_GYRO_FS_AVL_245DPS,
.value = 0x00,
.gain = IIO_DEGREE_TO_RAD(8750),
},

View File

@ -40,6 +40,10 @@ static const struct of_device_id st_gyro_of_match[] = {
.compatible = "st,l3gd20-gyro",
.data = L3GD20_GYRO_DEV_NAME,
},
{
.compatible = "st,l3gd20h-gyro",
.data = L3GD20H_GYRO_DEV_NAME,
},
{
.compatible = "st,l3g4is-gyro",
.data = L3G4IS_GYRO_DEV_NAME,
@ -71,7 +75,8 @@ static int st_gyro_i2c_probe(struct i2c_client *client,
return -ENOMEM;
gdata = iio_priv(indio_dev);
st_sensors_of_i2c_probe(client, st_gyro_of_match);
st_sensors_of_name_probe(&client->dev, st_gyro_of_match,
client->name, sizeof(client->name));
st_sensors_i2c_configure(indio_dev, client, gdata);
@ -95,6 +100,7 @@ static const struct i2c_device_id st_gyro_id_table[] = {
{ LSM330DL_GYRO_DEV_NAME },
{ LSM330DLC_GYRO_DEV_NAME },
{ L3GD20_GYRO_DEV_NAME },
{ L3GD20H_GYRO_DEV_NAME },
{ L3G4IS_GYRO_DEV_NAME },
{ LSM330_GYRO_DEV_NAME },
{ LSM9DS0_GYRO_DEV_NAME },

View File

@ -18,6 +18,56 @@
#include <linux/iio/common/st_sensors_spi.h>
#include "st_gyro.h"
#ifdef CONFIG_OF
/*
* For new single-chip sensors use <device_name> as compatible string.
* For old single-chip devices keep <device_name>-gyro to maintain
* compatibility
*/
static const struct of_device_id st_gyro_of_match[] = {
{
.compatible = "st,l3g4200d-gyro",
.data = L3G4200D_GYRO_DEV_NAME,
},
{
.compatible = "st,lsm330d-gyro",
.data = LSM330D_GYRO_DEV_NAME,
},
{
.compatible = "st,lsm330dl-gyro",
.data = LSM330DL_GYRO_DEV_NAME,
},
{
.compatible = "st,lsm330dlc-gyro",
.data = LSM330DLC_GYRO_DEV_NAME,
},
{
.compatible = "st,l3gd20-gyro",
.data = L3GD20_GYRO_DEV_NAME,
},
{
.compatible = "st,l3gd20h-gyro",
.data = L3GD20H_GYRO_DEV_NAME,
},
{
.compatible = "st,l3g4is-gyro",
.data = L3G4IS_GYRO_DEV_NAME,
},
{
.compatible = "st,lsm330-gyro",
.data = LSM330_GYRO_DEV_NAME,
},
{
.compatible = "st,lsm9ds0-gyro",
.data = LSM9DS0_GYRO_DEV_NAME,
},
{},
};
MODULE_DEVICE_TABLE(of, st_gyro_of_match);
#else
#define st_gyro_of_match NULL
#endif
static int st_gyro_spi_probe(struct spi_device *spi)
{
struct iio_dev *indio_dev;
@ -30,6 +80,8 @@ static int st_gyro_spi_probe(struct spi_device *spi)
gdata = iio_priv(indio_dev);
st_sensors_of_name_probe(&spi->dev, st_gyro_of_match,
spi->modalias, sizeof(spi->modalias));
st_sensors_spi_configure(indio_dev, spi, gdata);
err = st_gyro_common_probe(indio_dev);
@ -52,6 +104,7 @@ static const struct spi_device_id st_gyro_id_table[] = {
{ LSM330DL_GYRO_DEV_NAME },
{ LSM330DLC_GYRO_DEV_NAME },
{ L3GD20_GYRO_DEV_NAME },
{ L3GD20H_GYRO_DEV_NAME },
{ L3G4IS_GYRO_DEV_NAME },
{ LSM330_GYRO_DEV_NAME },
{ LSM9DS0_GYRO_DEV_NAME },
@ -62,6 +115,7 @@ MODULE_DEVICE_TABLE(spi, st_gyro_id_table);
static struct spi_driver st_gyro_driver = {
.driver = {
.name = "st-gyro-spi",
.of_match_table = of_match_ptr(st_gyro_of_match),
},
.probe = st_gyro_spi_probe,
.remove = st_gyro_spi_remove,

View File

@ -31,7 +31,8 @@ config HDC100X
select IIO_TRIGGERED_BUFFER
help
Say yes here to build support for the Texas Instruments
HDC1000 and HDC1008 relative humidity and temperature sensors.
HDC1000, HDC1008, HDC1010, HDC1050, and HDC1080 relative
humidity and temperature sensors.
To compile this driver as a module, choose M here: the module
will be called hdc100x.

View File

@ -13,6 +13,12 @@
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* Datasheets:
* http://www.ti.com/product/HDC1000/datasheet
* http://www.ti.com/product/HDC1008/datasheet
* http://www.ti.com/product/HDC1010/datasheet
* http://www.ti.com/product/HDC1050/datasheet
* http://www.ti.com/product/HDC1080/datasheet
*/
#include <linux/delay.h>
@ -414,13 +420,29 @@ static int hdc100x_remove(struct i2c_client *client)
static const struct i2c_device_id hdc100x_id[] = {
{ "hdc100x", 0 },
{ "hdc1000", 0 },
{ "hdc1008", 0 },
{ "hdc1010", 0 },
{ "hdc1050", 0 },
{ "hdc1080", 0 },
{ }
};
MODULE_DEVICE_TABLE(i2c, hdc100x_id);
static const struct of_device_id hdc100x_dt_ids[] = {
{ .compatible = "ti,hdc1000" },
{ .compatible = "ti,hdc1008" },
{ .compatible = "ti,hdc1010" },
{ .compatible = "ti,hdc1050" },
{ .compatible = "ti,hdc1080" },
{ }
};
MODULE_DEVICE_TABLE(of, hdc100x_dt_ids);
static struct i2c_driver hdc100x_driver = {
.driver = {
.name = "hdc100x",
.of_match_table = of_match_ptr(hdc100x_dt_ids),
},
.probe = hdc100x_probe,
.remove = hdc100x_remove,

View File

@ -30,12 +30,6 @@ struct hts221_transfer_function {
int (*write)(struct device *dev, u8 addr, int len, u8 *data);
};
#define HTS221_AVG_DEPTH 8
struct hts221_avg_avl {
u16 avg;
u8 val;
};
enum hts221_sensor_type {
HTS221_SENSOR_H,
HTS221_SENSOR_T,
@ -66,10 +60,9 @@ struct hts221_hw {
extern const struct dev_pm_ops hts221_pm_ops;
int hts221_config_drdy(struct hts221_hw *hw, bool enable);
int hts221_write_with_mask(struct hts221_hw *hw, u8 addr, u8 mask, u8 val);
int hts221_probe(struct iio_dev *iio_dev);
int hts221_power_on(struct hts221_hw *hw);
int hts221_power_off(struct hts221_hw *hw);
int hts221_set_enable(struct hts221_hw *hw, bool enable);
int hts221_allocate_buffers(struct hts221_hw *hw);
int hts221_allocate_trigger(struct hts221_hw *hw);

View File

@ -20,8 +20,16 @@
#include <linux/iio/triggered_buffer.h>
#include <linux/iio/buffer.h>
#include <linux/platform_data/st_sensors_pdata.h>
#include "hts221.h"
#define HTS221_REG_DRDY_HL_ADDR 0x22
#define HTS221_REG_DRDY_HL_MASK BIT(7)
#define HTS221_REG_DRDY_PP_OD_ADDR 0x22
#define HTS221_REG_DRDY_PP_OD_MASK BIT(6)
#define HTS221_REG_DRDY_EN_ADDR 0x22
#define HTS221_REG_DRDY_EN_MASK BIT(2)
#define HTS221_REG_STATUS_ADDR 0x27
#define HTS221_RH_DRDY_MASK BIT(1)
#define HTS221_TEMP_DRDY_MASK BIT(0)
@ -30,8 +38,12 @@ static int hts221_trig_set_state(struct iio_trigger *trig, bool state)
{
struct iio_dev *iio_dev = iio_trigger_get_drvdata(trig);
struct hts221_hw *hw = iio_priv(iio_dev);
int err;
return hts221_config_drdy(hw, state);
err = hts221_write_with_mask(hw, HTS221_REG_DRDY_EN_ADDR,
HTS221_REG_DRDY_EN_MASK, state);
return err < 0 ? err : 0;
}
static const struct iio_trigger_ops hts221_trigger_ops = {
@ -67,6 +79,9 @@ static irqreturn_t hts221_trigger_handler_thread(int irq, void *private)
int hts221_allocate_trigger(struct hts221_hw *hw)
{
struct iio_dev *iio_dev = iio_priv_to_dev(hw);
bool irq_active_low = false, open_drain = false;
struct device_node *np = hw->dev->of_node;
struct st_sensors_platform_data *pdata;
unsigned long irq_type;
int err;
@ -76,6 +91,10 @@ int hts221_allocate_trigger(struct hts221_hw *hw)
case IRQF_TRIGGER_HIGH:
case IRQF_TRIGGER_RISING:
break;
case IRQF_TRIGGER_LOW:
case IRQF_TRIGGER_FALLING:
irq_active_low = true;
break;
default:
dev_info(hw->dev,
"mode %lx unsupported, using IRQF_TRIGGER_RISING\n",
@ -84,6 +103,24 @@ int hts221_allocate_trigger(struct hts221_hw *hw)
break;
}
err = hts221_write_with_mask(hw, HTS221_REG_DRDY_HL_ADDR,
HTS221_REG_DRDY_HL_MASK, irq_active_low);
if (err < 0)
return err;
pdata = (struct st_sensors_platform_data *)hw->dev->platform_data;
if ((np && of_property_read_bool(np, "drive-open-drain")) ||
(pdata && pdata->open_drain)) {
irq_type |= IRQF_SHARED;
open_drain = true;
}
err = hts221_write_with_mask(hw, HTS221_REG_DRDY_PP_OD_ADDR,
HTS221_REG_DRDY_PP_OD_MASK,
open_drain);
if (err < 0)
return err;
err = devm_request_threaded_irq(hw->dev, hw->irq, NULL,
hts221_trigger_handler_thread,
irq_type | IRQF_ONESHOT,
@ -109,12 +146,12 @@ int hts221_allocate_trigger(struct hts221_hw *hw)
static int hts221_buffer_preenable(struct iio_dev *iio_dev)
{
return hts221_power_on(iio_priv(iio_dev));
return hts221_set_enable(iio_priv(iio_dev), true);
}
static int hts221_buffer_postdisable(struct iio_dev *iio_dev)
{
return hts221_power_off(iio_priv(iio_dev));
return hts221_set_enable(iio_priv(iio_dev), false);
}
static const struct iio_buffer_setup_ops hts221_buffer_ops = {

View File

@ -23,7 +23,6 @@
#define HTS221_REG_CNTRL1_ADDR 0x20
#define HTS221_REG_CNTRL2_ADDR 0x21
#define HTS221_REG_CNTRL3_ADDR 0x22
#define HTS221_REG_AVG_ADDR 0x10
#define HTS221_REG_H_OUT_L 0x28
@ -32,30 +31,9 @@
#define HTS221_HUMIDITY_AVG_MASK 0x07
#define HTS221_TEMP_AVG_MASK 0x38
#define HTS221_ODR_MASK 0x87
#define HTS221_ODR_MASK 0x03
#define HTS221_BDU_MASK BIT(2)
#define HTS221_DRDY_MASK BIT(2)
#define HTS221_ENABLE_SENSOR BIT(7)
#define HTS221_HUMIDITY_AVG_4 0x00 /* 0.4 %RH */
#define HTS221_HUMIDITY_AVG_8 0x01 /* 0.3 %RH */
#define HTS221_HUMIDITY_AVG_16 0x02 /* 0.2 %RH */
#define HTS221_HUMIDITY_AVG_32 0x03 /* 0.15 %RH */
#define HTS221_HUMIDITY_AVG_64 0x04 /* 0.1 %RH */
#define HTS221_HUMIDITY_AVG_128 0x05 /* 0.07 %RH */
#define HTS221_HUMIDITY_AVG_256 0x06 /* 0.05 %RH */
#define HTS221_HUMIDITY_AVG_512 0x07 /* 0.03 %RH */
#define HTS221_TEMP_AVG_2 0x00 /* 0.08 degC */
#define HTS221_TEMP_AVG_4 0x08 /* 0.05 degC */
#define HTS221_TEMP_AVG_8 0x10 /* 0.04 degC */
#define HTS221_TEMP_AVG_16 0x18 /* 0.03 degC */
#define HTS221_TEMP_AVG_32 0x20 /* 0.02 degC */
#define HTS221_TEMP_AVG_64 0x28 /* 0.015 degC */
#define HTS221_TEMP_AVG_128 0x30 /* 0.01 degC */
#define HTS221_TEMP_AVG_256 0x38 /* 0.007 degC */
#define HTS221_ENABLE_MASK BIT(7)
/* calibration registers */
#define HTS221_REG_0RH_CAL_X_H 0x36
@ -73,10 +51,11 @@ struct hts221_odr {
u8 val;
};
#define HTS221_AVG_DEPTH 8
struct hts221_avg {
u8 addr;
u8 mask;
struct hts221_avg_avl avg_avl[HTS221_AVG_DEPTH];
u16 avg_avl[HTS221_AVG_DEPTH];
};
static const struct hts221_odr hts221_odr_table[] = {
@ -90,28 +69,28 @@ static const struct hts221_avg hts221_avg_list[] = {
.addr = HTS221_REG_AVG_ADDR,
.mask = HTS221_HUMIDITY_AVG_MASK,
.avg_avl = {
{ 4, HTS221_HUMIDITY_AVG_4 },
{ 8, HTS221_HUMIDITY_AVG_8 },
{ 16, HTS221_HUMIDITY_AVG_16 },
{ 32, HTS221_HUMIDITY_AVG_32 },
{ 64, HTS221_HUMIDITY_AVG_64 },
{ 128, HTS221_HUMIDITY_AVG_128 },
{ 256, HTS221_HUMIDITY_AVG_256 },
{ 512, HTS221_HUMIDITY_AVG_512 },
4, /* 0.4 %RH */
8, /* 0.3 %RH */
16, /* 0.2 %RH */
32, /* 0.15 %RH */
64, /* 0.1 %RH */
128, /* 0.07 %RH */
256, /* 0.05 %RH */
512, /* 0.03 %RH */
},
},
{
.addr = HTS221_REG_AVG_ADDR,
.mask = HTS221_TEMP_AVG_MASK,
.avg_avl = {
{ 2, HTS221_TEMP_AVG_2 },
{ 4, HTS221_TEMP_AVG_4 },
{ 8, HTS221_TEMP_AVG_8 },
{ 16, HTS221_TEMP_AVG_16 },
{ 32, HTS221_TEMP_AVG_32 },
{ 64, HTS221_TEMP_AVG_64 },
{ 128, HTS221_TEMP_AVG_128 },
{ 256, HTS221_TEMP_AVG_256 },
2, /* 0.08 degC */
4, /* 0.05 degC */
8, /* 0.04 degC */
16, /* 0.03 degC */
32, /* 0.02 degC */
64, /* 0.015 degC */
128, /* 0.01 degC */
256, /* 0.007 degC */
},
},
};
@ -152,8 +131,7 @@ static const struct iio_chan_spec hts221_channels[] = {
IIO_CHAN_SOFT_TIMESTAMP(2),
};
static int hts221_write_with_mask(struct hts221_hw *hw, u8 addr, u8 mask,
u8 val)
int hts221_write_with_mask(struct hts221_hw *hw, u8 addr, u8 mask, u8 val)
{
u8 data;
int err;
@ -166,7 +144,7 @@ static int hts221_write_with_mask(struct hts221_hw *hw, u8 addr, u8 mask,
goto unlock;
}
data = (data & ~mask) | (val & mask);
data = (data & ~mask) | ((val << __ffs(mask)) & mask);
err = hw->tf->write(hw->dev, addr, sizeof(data), &data);
if (err < 0)
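The rewritten line above changes the helper's contract: callers now pass the field value unshifted and hts221_write_with_mask() shifts it into position at the mask's lowest set bit. A stand-alone sketch of that bit manipulation, using __builtin_ctz() as a user-space stand-in for the kernel's __ffs():

#include <stdio.h>

/* Same arithmetic as the updated helper: clear the field, then place the
 * unshifted value at the mask's lowest set bit. */
static unsigned int put_field(unsigned int reg, unsigned int mask,
			      unsigned int val)
{
	return (reg & ~mask) | ((val << __builtin_ctz(mask)) & mask);
}

int main(void)
{
	/* e.g. temperature averaging field (mask 0x38): index 3 -> bits 0x18 */
	printf("0x%02x\n", put_field(0x00, 0x38, 3));
	return 0;
}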
@ -199,21 +177,9 @@ static int hts221_check_whoami(struct hts221_hw *hw)
return 0;
}
int hts221_config_drdy(struct hts221_hw *hw, bool enable)
{
u8 val = enable ? BIT(2) : 0;
int err;
err = hts221_write_with_mask(hw, HTS221_REG_CNTRL3_ADDR,
HTS221_DRDY_MASK, val);
return err < 0 ? err : 0;
}
static int hts221_update_odr(struct hts221_hw *hw, u8 odr)
{
int i, err;
u8 val;
for (i = 0; i < ARRAY_SIZE(hts221_odr_table); i++)
if (hts221_odr_table[i].hz == odr)
@ -222,9 +188,8 @@ static int hts221_update_odr(struct hts221_hw *hw, u8 odr)
if (i == ARRAY_SIZE(hts221_odr_table))
return -EINVAL;
val = HTS221_ENABLE_SENSOR | HTS221_BDU_MASK | hts221_odr_table[i].val;
err = hts221_write_with_mask(hw, HTS221_REG_CNTRL1_ADDR,
HTS221_ODR_MASK, val);
HTS221_ODR_MASK, hts221_odr_table[i].val);
if (err < 0)
return err;
@ -241,14 +206,13 @@ static int hts221_update_avg(struct hts221_hw *hw,
const struct hts221_avg *avg = &hts221_avg_list[type];
for (i = 0; i < HTS221_AVG_DEPTH; i++)
if (avg->avg_avl[i].avg == val)
if (avg->avg_avl[i] == val)
break;
if (i == HTS221_AVG_DEPTH)
return -EINVAL;
err = hts221_write_with_mask(hw, avg->addr, avg->mask,
avg->avg_avl[i].val);
err = hts221_write_with_mask(hw, avg->addr, avg->mask, i);
if (err < 0)
return err;
@ -283,7 +247,7 @@ hts221_sysfs_rh_oversampling_avail(struct device *dev,
for (i = 0; i < ARRAY_SIZE(avg->avg_avl); i++)
len += scnprintf(buf + len, PAGE_SIZE - len, "%d ",
avg->avg_avl[i].avg);
avg->avg_avl[i]);
buf[len - 1] = '\n';
return len;
@ -300,36 +264,22 @@ hts221_sysfs_temp_oversampling_avail(struct device *dev,
for (i = 0; i < ARRAY_SIZE(avg->avg_avl); i++)
len += scnprintf(buf + len, PAGE_SIZE - len, "%d ",
avg->avg_avl[i].avg);
avg->avg_avl[i]);
buf[len - 1] = '\n';
return len;
}
int hts221_power_on(struct hts221_hw *hw)
int hts221_set_enable(struct hts221_hw *hw, bool enable)
{
int err;
err = hts221_update_odr(hw, hw->odr);
err = hts221_write_with_mask(hw, HTS221_REG_CNTRL1_ADDR,
HTS221_ENABLE_MASK, enable);
if (err < 0)
return err;
hw->enabled = true;
return 0;
}
int hts221_power_off(struct hts221_hw *hw)
{
__le16 data = 0;
int err;
err = hw->tf->write(hw->dev, HTS221_REG_CNTRL1_ADDR, sizeof(data),
(u8 *)&data);
if (err < 0)
return err;
hw->enabled = false;
hw->enabled = enable;
return 0;
}
@ -484,7 +434,7 @@ static int hts221_read_oneshot(struct hts221_hw *hw, u8 addr, int *val)
u8 data[HTS221_DATA_SIZE];
int err;
err = hts221_power_on(hw);
err = hts221_set_enable(hw, true);
if (err < 0)
return err;
@ -494,7 +444,7 @@ static int hts221_read_oneshot(struct hts221_hw *hw, u8 addr, int *val)
if (err < 0)
return err;
hts221_power_off(hw);
hts221_set_enable(hw, false);
*val = (s16)get_unaligned_le16(data);
@ -534,13 +484,13 @@ static int hts221_read_raw(struct iio_dev *iio_dev,
case IIO_HUMIDITYRELATIVE:
avg = &hts221_avg_list[HTS221_SENSOR_H];
idx = hw->sensors[HTS221_SENSOR_H].cur_avg_idx;
*val = avg->avg_avl[idx].avg;
*val = avg->avg_avl[idx];
ret = IIO_VAL_INT;
break;
case IIO_TEMP:
avg = &hts221_avg_list[HTS221_SENSOR_T];
idx = hw->sensors[HTS221_SENSOR_T].cur_avg_idx;
*val = avg->avg_avl[idx].avg;
*val = avg->avg_avl[idx];
ret = IIO_VAL_INT;
break;
default:
@ -644,8 +594,6 @@ int hts221_probe(struct iio_dev *iio_dev)
if (err < 0)
return err;
hw->odr = hts221_odr_table[0].hz;
iio_dev->modes = INDIO_DIRECT_MODE;
iio_dev->dev.parent = hw->dev;
iio_dev->available_scan_masks = hts221_scan_masks;
@ -654,6 +602,16 @@ int hts221_probe(struct iio_dev *iio_dev)
iio_dev->name = HTS221_DEV_NAME;
iio_dev->info = &hts221_info;
/* enable Block Data Update */
err = hts221_write_with_mask(hw, HTS221_REG_CNTRL1_ADDR,
HTS221_BDU_MASK, 1);
if (err < 0)
return err;
err = hts221_update_odr(hw, hts221_odr_table[0].hz);
if (err < 0)
return err;
/* configure humidity sensor */
err = hts221_parse_rh_caldata(hw);
if (err < 0) {
@ -661,7 +619,7 @@ int hts221_probe(struct iio_dev *iio_dev)
return err;
}
data = hts221_avg_list[HTS221_SENSOR_H].avg_avl[3].avg;
data = hts221_avg_list[HTS221_SENSOR_H].avg_avl[3];
err = hts221_update_avg(hw, HTS221_SENSOR_H, data);
if (err < 0) {
dev_err(hw->dev, "failed to set rh oversampling ratio\n");
@ -676,7 +634,7 @@ int hts221_probe(struct iio_dev *iio_dev)
return err;
}
data = hts221_avg_list[HTS221_SENSOR_T].avg_avl[3].avg;
data = hts221_avg_list[HTS221_SENSOR_T].avg_avl[3];
err = hts221_update_avg(hw, HTS221_SENSOR_T, data);
if (err < 0) {
dev_err(hw->dev,
@ -702,11 +660,10 @@ static int __maybe_unused hts221_suspend(struct device *dev)
{
struct iio_dev *iio_dev = dev_get_drvdata(dev);
struct hts221_hw *hw = iio_priv(iio_dev);
__le16 data = 0;
int err;
err = hw->tf->write(hw->dev, HTS221_REG_CNTRL1_ADDR, sizeof(data),
(u8 *)&data);
err = hts221_write_with_mask(hw, HTS221_REG_CNTRL1_ADDR,
HTS221_ENABLE_MASK, false);
return err < 0 ? err : 0;
}
@ -718,7 +675,8 @@ static int __maybe_unused hts221_resume(struct device *dev)
int err = 0;
if (hw->enabled)
err = hts221_update_odr(hw, hw->odr);
err = hts221_write_with_mask(hw, HTS221_REG_CNTRL1_ADDR,
HTS221_ENABLE_MASK, true);
return err;
}

View File

@ -238,11 +238,19 @@ static const struct i2c_device_id htu21_id[] = {
};
MODULE_DEVICE_TABLE(i2c, htu21_id);
static const struct of_device_id htu21_of_match[] = {
{ .compatible = "meas,htu21", },
{ .compatible = "meas,ms8607-humidity", },
{ },
};
MODULE_DEVICE_TABLE(of, htu21_of_match);
static struct i2c_driver htu21_driver = {
.probe = htu21_probe,
.id_table = htu21_id,
.driver = {
.name = "htu21",
.of_match_table = of_match_ptr(htu21_of_match),
},
};

View File

@ -217,7 +217,7 @@ static int adis16400_set_freq(struct adis16400_state *st, unsigned int freq)
return adis_write_reg_8(&st->adis, ADIS16400_SMPL_PRD, val);
}
static const unsigned adis16400_3db_divisors[] = {
static const unsigned int adis16400_3db_divisors[] = {
[0] = 2, /* Special case */
[1] = 6,
[2] = 12,
@ -890,7 +890,7 @@ static const struct adis_data adis16400_data = {
static void adis16400_setup_chan_mask(struct adis16400_state *st)
{
const struct adis16400_chip_info *chip_info = st->variant;
unsigned i;
unsigned int i;
for (i = 0; i < chip_info->num_channels; i++) {
const struct iio_chan_spec *ch = &chip_info->channels[i];

View File

@ -31,6 +31,8 @@
#include <linux/iio/iio.h>
#include <linux/iio/buffer.h>
#include <linux/platform_data/st_sensors_pdata.h>
#include "st_lsm6dsx.h"
#define ST_LSM6DSX_REG_FIFO_THL_ADDR 0x06
@ -39,6 +41,8 @@
#define ST_LSM6DSX_REG_FIFO_DEC_GXL_ADDR 0x08
#define ST_LSM6DSX_REG_HLACTIVE_ADDR 0x12
#define ST_LSM6DSX_REG_HLACTIVE_MASK BIT(5)
#define ST_LSM6DSX_REG_PP_OD_ADDR 0x12
#define ST_LSM6DSX_REG_PP_OD_MASK BIT(4)
#define ST_LSM6DSX_REG_FIFO_MODE_ADDR 0x0a
#define ST_LSM6DSX_FIFO_MODE_MASK GENMASK(2, 0)
#define ST_LSM6DSX_FIFO_ODR_MASK GENMASK(6, 3)
@ -417,6 +421,8 @@ static const struct iio_buffer_setup_ops st_lsm6dsx_buffer_ops = {
int st_lsm6dsx_fifo_setup(struct st_lsm6dsx_hw *hw)
{
struct device_node *np = hw->dev->of_node;
struct st_sensors_platform_data *pdata;
struct iio_buffer *buffer;
unsigned long irq_type;
bool irq_active_low;
@ -444,6 +450,17 @@ int st_lsm6dsx_fifo_setup(struct st_lsm6dsx_hw *hw)
if (err < 0)
return err;
pdata = (struct st_sensors_platform_data *)hw->dev->platform_data;
if ((np && of_property_read_bool(np, "drive-open-drain")) ||
(pdata && pdata->open_drain)) {
err = st_lsm6dsx_write_with_mask(hw, ST_LSM6DSX_REG_PP_OD_ADDR,
ST_LSM6DSX_REG_PP_OD_MASK, 1);
if (err < 0)
return err;
irq_type |= IRQF_SHARED;
}
err = devm_request_threaded_irq(hw->dev, hw->irq,
st_lsm6dsx_handler_irq,
st_lsm6dsx_handler_thread,

View File

@ -44,7 +44,7 @@ int iio_map_array_register(struct iio_dev *indio_dev, struct iio_map *maps)
}
mapi->map = &maps[i];
mapi->indio_dev = indio_dev;
list_add(&mapi->l, &iio_map_list);
list_add_tail(&mapi->l, &iio_map_list);
i++;
}
error_ret:
@ -205,8 +205,8 @@ static struct iio_channel *of_iio_channel_get_by_name(struct device_node *np,
if (!IS_ERR(chan) || PTR_ERR(chan) == -EPROBE_DEFER)
break;
else if (name && index >= 0) {
pr_err("ERROR: could not get IIO channel %s:%s(%i)\n",
np->full_name, name ? name : "", index);
pr_err("ERROR: could not get IIO channel %pOF:%s(%i)\n",
np, name ? name : "", index);
return NULL;
}

View File

@ -505,7 +505,7 @@ static SIMPLE_DEV_PM_OPS(apds9300_pm_ops, apds9300_suspend, apds9300_resume);
#define APDS9300_PM_OPS NULL
#endif
static struct i2c_device_id apds9300_id[] = {
static const struct i2c_device_id apds9300_id[] = {
{ APDS9300_DRV_NAME, 0 },
{ }
};

View File

@ -9,7 +9,7 @@
*
* IIO driver for RPR-0521RS (7-bit I2C slave address 0x38).
*
* TODO: illuminance channel, buffer
* TODO: illuminance channel
*/
#include <linux/module.h>
@ -20,6 +20,10 @@
#include <linux/acpi.h>
#include <linux/iio/iio.h>
#include <linux/iio/buffer.h>
#include <linux/iio/trigger.h>
#include <linux/iio/trigger_consumer.h>
#include <linux/iio/triggered_buffer.h>
#include <linux/iio/sysfs.h>
#include <linux/pm_runtime.h>
@ -30,6 +34,7 @@
#define RPR0521_REG_PXS_DATA 0x44 /* 16-bit, little endian */
#define RPR0521_REG_ALS_DATA0 0x46 /* 16-bit, little endian */
#define RPR0521_REG_ALS_DATA1 0x48 /* 16-bit, little endian */
#define RPR0521_REG_INTERRUPT 0x4A
#define RPR0521_REG_PS_OFFSET_LSB 0x53
#define RPR0521_REG_ID 0x92
@ -42,16 +47,31 @@
#define RPR0521_ALS_DATA1_GAIN_SHIFT 2
#define RPR0521_PXS_GAIN_MASK GENMASK(5, 4)
#define RPR0521_PXS_GAIN_SHIFT 4
#define RPR0521_PXS_PERSISTENCE_MASK GENMASK(3, 0)
#define RPR0521_INTERRUPT_INT_TRIG_PS_MASK BIT(0)
#define RPR0521_INTERRUPT_INT_TRIG_ALS_MASK BIT(1)
#define RPR0521_INTERRUPT_INT_REASSERT_MASK BIT(3)
#define RPR0521_INTERRUPT_ALS_INT_STATUS_MASK BIT(6)
#define RPR0521_INTERRUPT_PS_INT_STATUS_MASK BIT(7)
#define RPR0521_MODE_ALS_ENABLE BIT(7)
#define RPR0521_MODE_ALS_DISABLE 0x00
#define RPR0521_MODE_PXS_ENABLE BIT(6)
#define RPR0521_MODE_PXS_DISABLE 0x00
#define RPR0521_PXS_PERSISTENCE_DRDY 0x00
#define RPR0521_INTERRUPT_INT_TRIG_PS_ENABLE BIT(0)
#define RPR0521_INTERRUPT_INT_TRIG_PS_DISABLE 0x00
#define RPR0521_INTERRUPT_INT_TRIG_ALS_ENABLE BIT(1)
#define RPR0521_INTERRUPT_INT_TRIG_ALS_DISABLE 0x00
#define RPR0521_INTERRUPT_INT_REASSERT_ENABLE BIT(3)
#define RPR0521_INTERRUPT_INT_REASSERT_DISABLE 0x00
#define RPR0521_MANUFACT_ID 0xE0
#define RPR0521_DEFAULT_MEAS_TIME 0x06 /* ALS - 100ms, PXS - 100ms */
#define RPR0521_DRV_NAME "RPR0521"
#define RPR0521_IRQ_NAME "rpr0521_event"
#define RPR0521_REGMAP_NAME "rpr0521_regmap"
#define RPR0521_SLEEP_DELAY_MS 2000
@ -167,6 +187,9 @@ struct rpr0521_data {
bool als_dev_en;
bool pxs_dev_en;
struct iio_trigger *drdy_trigger0;
s64 irq_timestamp;
/* optimize runtime pm ops - enable/disable device only if needed */
bool als_ps_need_en;
bool pxs_ps_need_en;
@ -196,6 +219,19 @@ static const struct attribute_group rpr0521_attribute_group = {
.attrs = rpr0521_attributes,
};
/* Order of the channel data in buffer */
enum rpr0521_scan_index_order {
RPR0521_CHAN_INDEX_PXS,
RPR0521_CHAN_INDEX_BOTH,
RPR0521_CHAN_INDEX_IR,
};
static const unsigned long rpr0521_available_scan_masks[] = {
BIT(RPR0521_CHAN_INDEX_PXS) | BIT(RPR0521_CHAN_INDEX_BOTH) |
BIT(RPR0521_CHAN_INDEX_IR),
0
};
static const struct iio_chan_spec rpr0521_channels[] = {
{
.type = IIO_PROXIMITY,
@ -204,6 +240,13 @@ static const struct iio_chan_spec rpr0521_channels[] = {
BIT(IIO_CHAN_INFO_OFFSET) |
BIT(IIO_CHAN_INFO_SCALE),
.info_mask_shared_by_all = BIT(IIO_CHAN_INFO_SAMP_FREQ),
.scan_index = RPR0521_CHAN_INDEX_PXS,
.scan_type = {
.sign = 'u',
.realbits = 16,
.storagebits = 16,
.endianness = IIO_LE,
},
},
{
.type = IIO_INTENSITY,
@ -213,6 +256,13 @@ static const struct iio_chan_spec rpr0521_channels[] = {
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
BIT(IIO_CHAN_INFO_SCALE),
.info_mask_shared_by_all = BIT(IIO_CHAN_INFO_SAMP_FREQ),
.scan_index = RPR0521_CHAN_INDEX_BOTH,
.scan_type = {
.sign = 'u',
.realbits = 16,
.storagebits = 16,
.endianness = IIO_LE,
},
},
{
.type = IIO_INTENSITY,
@ -222,6 +272,13 @@ static const struct iio_chan_spec rpr0521_channels[] = {
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
BIT(IIO_CHAN_INFO_SCALE),
.info_mask_shared_by_all = BIT(IIO_CHAN_INFO_SAMP_FREQ),
.scan_index = RPR0521_CHAN_INDEX_IR,
.scan_type = {
.sign = 'u',
.realbits = 16,
.storagebits = 16,
.endianness = IIO_LE,
},
},
};
@ -330,6 +387,198 @@ static int rpr0521_set_power_state(struct rpr0521_data *data, bool on,
return 0;
}
/* Interrupt register tells if this sensor caused the interrupt or not. */
static inline bool rpr0521_is_triggered(struct rpr0521_data *data)
{
int ret;
int reg;
ret = regmap_read(data->regmap, RPR0521_REG_INTERRUPT, &reg);
if (ret < 0)
return false; /* Reg read failed. */
if (reg &
(RPR0521_INTERRUPT_ALS_INT_STATUS_MASK |
RPR0521_INTERRUPT_PS_INT_STATUS_MASK))
return true;
else
return false; /* Int not from this sensor. */
}
/* IRQ to trigger handler */
static irqreturn_t rpr0521_drdy_irq_handler(int irq, void *private)
{
struct iio_dev *indio_dev = private;
struct rpr0521_data *data = iio_priv(indio_dev);
data->irq_timestamp = iio_get_time_ns(indio_dev);
/*
* We need to wake the thread to read the interrupt reg. It
* is not possible to do that here because regmap_read takes a
* mutex.
*/
return IRQ_WAKE_THREAD;
}
static irqreturn_t rpr0521_drdy_irq_thread(int irq, void *private)
{
struct iio_dev *indio_dev = private;
struct rpr0521_data *data = iio_priv(indio_dev);
if (rpr0521_is_triggered(data)) {
iio_trigger_poll_chained(data->drdy_trigger0);
return IRQ_HANDLED;
}
return IRQ_NONE;
}
static irqreturn_t rpr0521_trigger_consumer_store_time(int irq, void *p)
{
struct iio_poll_func *pf = p;
struct iio_dev *indio_dev = pf->indio_dev;
/* Other trigger polls store time here. */
if (!iio_trigger_using_own(indio_dev))
pf->timestamp = iio_get_time_ns(indio_dev);
return IRQ_WAKE_THREAD;
}
static irqreturn_t rpr0521_trigger_consumer_handler(int irq, void *p)
{
struct iio_poll_func *pf = p;
struct iio_dev *indio_dev = pf->indio_dev;
struct rpr0521_data *data = iio_priv(indio_dev);
int err;
u8 buffer[16]; /* 3 16-bit channels + padding + ts */
/* Use irq timestamp when reasonable. */
if (iio_trigger_using_own(indio_dev) && data->irq_timestamp) {
pf->timestamp = data->irq_timestamp;
data->irq_timestamp = 0;
}
/* Other chained trigger polls get timestamp only here. */
if (!pf->timestamp)
pf->timestamp = iio_get_time_ns(indio_dev);
err = regmap_bulk_read(data->regmap, RPR0521_REG_PXS_DATA,
&buffer,
(3 * 2) + 1); /* 3 * 16-bit + (discarded) int clear reg. */
if (!err)
iio_push_to_buffers_with_timestamp(indio_dev,
buffer, pf->timestamp);
else
dev_err(&data->client->dev,
"Trigger consumer can't read from sensor.\n");
pf->timestamp = 0;
iio_trigger_notify_done(indio_dev->trig);
return IRQ_HANDLED;
}
static int rpr0521_write_int_enable(struct rpr0521_data *data)
{
int err;
/* Interrupt after each measurement */
err = regmap_update_bits(data->regmap, RPR0521_REG_PXS_CTRL,
RPR0521_PXS_PERSISTENCE_MASK,
RPR0521_PXS_PERSISTENCE_DRDY);
if (err) {
dev_err(&data->client->dev, "PS control reg write fail.\n");
return -EBUSY;
}
/* Ignore latch and mode because of drdy */
err = regmap_write(data->regmap, RPR0521_REG_INTERRUPT,
RPR0521_INTERRUPT_INT_REASSERT_DISABLE |
RPR0521_INTERRUPT_INT_TRIG_ALS_DISABLE |
RPR0521_INTERRUPT_INT_TRIG_PS_ENABLE
);
if (err) {
dev_err(&data->client->dev, "Interrupt setup write fail.\n");
return -EBUSY;
}
return 0;
}
static int rpr0521_write_int_disable(struct rpr0521_data *data)
{
/* Don't care about clearing mode, assert and latch. */
return regmap_write(data->regmap, RPR0521_REG_INTERRUPT,
RPR0521_INTERRUPT_INT_TRIG_ALS_DISABLE |
RPR0521_INTERRUPT_INT_TRIG_PS_DISABLE
);
}
/*
* Trigger producer enable / disable. Note that triggers fire only when
* measurement data is ready to be read.
*/
static int rpr0521_pxs_drdy_set_state(struct iio_trigger *trigger,
bool enable_drdy)
{
struct iio_dev *indio_dev = iio_trigger_get_drvdata(trigger);
struct rpr0521_data *data = iio_priv(indio_dev);
int err;
if (enable_drdy)
err = rpr0521_write_int_enable(data);
else
err = rpr0521_write_int_disable(data);
if (err)
dev_err(&data->client->dev, "rpr0521_pxs_drdy_set_state failed\n");
return err;
}
static const struct iio_trigger_ops rpr0521_trigger_ops = {
.set_trigger_state = rpr0521_pxs_drdy_set_state,
.owner = THIS_MODULE,
};
static int rpr0521_buffer_preenable(struct iio_dev *indio_dev)
{
int err;
struct rpr0521_data *data = iio_priv(indio_dev);
mutex_lock(&data->lock);
err = rpr0521_set_power_state(data, true,
(RPR0521_MODE_PXS_MASK | RPR0521_MODE_ALS_MASK));
mutex_unlock(&data->lock);
if (err)
dev_err(&data->client->dev, "_buffer_preenable fail\n");
return err;
}
static int rpr0521_buffer_postdisable(struct iio_dev *indio_dev)
{
int err;
struct rpr0521_data *data = iio_priv(indio_dev);
mutex_lock(&data->lock);
err = rpr0521_set_power_state(data, false,
(RPR0521_MODE_PXS_MASK | RPR0521_MODE_ALS_MASK));
mutex_unlock(&data->lock);
if (err)
dev_err(&data->client->dev, "_buffer_postdisable fail\n");
return err;
}
static const struct iio_buffer_setup_ops rpr0521_buffer_setup_ops = {
.preenable = rpr0521_buffer_preenable,
.postenable = iio_triggered_buffer_postenable,
.predisable = iio_triggered_buffer_predisable,
.postdisable = rpr0521_buffer_postdisable,
};
static int rpr0521_get_gain(struct rpr0521_data *data, int chan,
int *val, int *val2)
{
@ -473,6 +722,7 @@ static int rpr0521_read_raw(struct iio_dev *indio_dev,
{
struct rpr0521_data *data = iio_priv(indio_dev);
int ret;
int busy;
u8 device_mask;
__le16 raw_data;
@ -481,26 +731,30 @@ static int rpr0521_read_raw(struct iio_dev *indio_dev,
if (chan->type != IIO_INTENSITY && chan->type != IIO_PROXIMITY)
return -EINVAL;
busy = iio_device_claim_direct_mode(indio_dev);
if (busy)
return -EBUSY;
device_mask = rpr0521_data_reg[chan->address].device_mask;
mutex_lock(&data->lock);
ret = rpr0521_set_power_state(data, true, device_mask);
if (ret < 0) {
mutex_unlock(&data->lock);
return ret;
}
if (ret < 0)
goto rpr0521_read_raw_out;
ret = regmap_bulk_read(data->regmap,
rpr0521_data_reg[chan->address].address,
&raw_data, sizeof(raw_data));
if (ret < 0) {
rpr0521_set_power_state(data, false, device_mask);
mutex_unlock(&data->lock);
return ret;
goto rpr0521_read_raw_out;
}
ret = rpr0521_set_power_state(data, false, device_mask);
rpr0521_read_raw_out:
mutex_unlock(&data->lock);
iio_device_release_direct_mode(indio_dev);
if (ret < 0)
return ret;
@ -617,12 +871,15 @@ static int rpr0521_init(struct rpr0521_data *data)
return ret;
#endif
data->irq_timestamp = 0;
return 0;
}
static int rpr0521_poweroff(struct rpr0521_data *data)
{
int ret;
int tmp;
ret = regmap_update_bits(data->regmap, RPR0521_REG_MODE_CTRL,
RPR0521_MODE_ALS_MASK |
@ -635,6 +892,16 @@ static int rpr0521_poweroff(struct rpr0521_data *data)
data->als_dev_en = false;
data->pxs_dev_en = false;
/*
* Int pin keeps state after power off. Set pin to high impedance
* mode to prevent power drain.
*/
ret = regmap_read(data->regmap, RPR0521_REG_INTERRUPT, &tmp);
if (ret) {
dev_err(&data->client->dev, "Failed to reset int pin.\n");
return ret;
}
return 0;
}
@ -707,6 +974,61 @@ static int rpr0521_probe(struct i2c_client *client,
pm_runtime_set_autosuspend_delay(&client->dev, RPR0521_SLEEP_DELAY_MS);
pm_runtime_use_autosuspend(&client->dev);
/*
* If sensor write/read is needed in _probe after _use_autosuspend,
* sensor needs to be _resumed first using rpr0521_set_power_state().
*/
/* IRQ to trigger setup */
if (client->irq) {
/* Trigger0 producer setup */
data->drdy_trigger0 = devm_iio_trigger_alloc(
indio_dev->dev.parent,
"%s-dev%d", indio_dev->name, indio_dev->id);
if (!data->drdy_trigger0) {
ret = -ENOMEM;
goto err_pm_disable;
}
data->drdy_trigger0->dev.parent = indio_dev->dev.parent;
data->drdy_trigger0->ops = &rpr0521_trigger_ops;
indio_dev->available_scan_masks = rpr0521_available_scan_masks;
iio_trigger_set_drvdata(data->drdy_trigger0, indio_dev);
/* Ties irq to trigger producer handler. */
ret = devm_request_threaded_irq(&client->dev, client->irq,
rpr0521_drdy_irq_handler, rpr0521_drdy_irq_thread,
IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
RPR0521_IRQ_NAME, indio_dev);
if (ret < 0) {
dev_err(&client->dev, "request irq %d for trigger0 failed\n",
client->irq);
goto err_pm_disable;
}
ret = devm_iio_trigger_register(indio_dev->dev.parent,
data->drdy_trigger0);
if (ret) {
dev_err(&client->dev, "iio trigger register failed\n");
goto err_pm_disable;
}
/*
* The whole pipeline from the physical interrupt (the irq assigned to
* the device, e.g. via devicetree) to the trigger0 output is now set up.
*/
/* Trigger consumer setup */
ret = devm_iio_triggered_buffer_setup(indio_dev->dev.parent,
indio_dev,
rpr0521_trigger_consumer_store_time,
rpr0521_trigger_consumer_handler,
&rpr0521_buffer_setup_ops);
if (ret < 0) {
dev_err(&client->dev, "iio triggered buffer setup failed\n");
goto err_pm_disable;
}
}
ret = iio_device_register(indio_dev);
if (ret)
goto err_pm_disable;

View File

@ -11,6 +11,8 @@
* 7-bit I2C slave address 0x39 (TCS34721, TCS34723) or 0x29 (TCS34725,
* TCS34727)
*
* Datasheet: http://ams.com/eng/content/download/319364/1117183/file/TCS3472_Datasheet_EN_v2.pdf
*
* TODO: interrupt support, thresholds, wait time
*/
@ -169,7 +171,7 @@ static int tcs3472_write_raw(struct iio_dev *indio_dev,
for (i = 0; i < 256; i++) {
if (val2 == (256 - i) * 2400) {
data->atime = i;
return i2c_smbus_write_word_data(
return i2c_smbus_write_byte_data(
data->client, TCS3472_ATIME,
data->atime);
}
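The loop above encodes the requested integration time into the single-byte ATIME register using the relation integration time (in microseconds) = (256 - ATIME) * 2400, which is also why i2c_smbus_write_byte_data() is the correct call here. A small stand-alone sketch of the encoding, with values chosen purely for illustration:

#include <stdio.h>

int main(void)
{
	int want_us = 64 * 2400;          /* request 153600 us */
	int atime = 256 - want_us / 2400; /* -> 192 (0xC0)     */

	printf("ATIME = 0x%02x -> %d us\n", atime, (256 - atime) * 2400);
	return 0;
}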

View File

@ -924,7 +924,7 @@ static const struct dev_pm_ops tsl2583_pm_ops = {
SET_RUNTIME_PM_OPS(tsl2583_suspend, tsl2583_resume, NULL)
};
static struct i2c_device_id tsl2583_idtable[] = {
static const struct i2c_device_id tsl2583_idtable[] = {
{ "tsl2580", 0 },
{ "tsl2581", 1 },
{ "tsl2583", 2 },

View File

@ -13,8 +13,8 @@ config AK8974
select IIO_BUFFER
select IIO_TRIGGERED_BUFFER
help
Say yes here to build support for Asahi Kasei AK8974 or
AMI305 I2C-based 3-axis magnetometer chips.
Say yes here to build support for Asahi Kasei AK8974, AMI305 or
AMI306 I2C-based 3-axis magnetometer chips.
To compile this driver as a module, choose M here: the module
will be called ak8974.

View File

@ -20,6 +20,7 @@
#include <linux/mutex.h>
#include <linux/delay.h>
#include <linux/bitops.h>
#include <linux/random.h>
#include <linux/regmap.h>
#include <linux/regulator/consumer.h>
#include <linux/pm_runtime.h>
@ -36,7 +37,7 @@
* and MSB is at the next higher address.
*/
/* These registers are common for AK8974 and AMI305 */
/* These registers are common for AK8974 and AMI30x */
#define AK8974_SELFTEST 0x0C
#define AK8974_SELFTEST_IDLE 0x55
#define AK8974_SELFTEST_OK 0xAA
@ -44,6 +45,7 @@
#define AK8974_INFO 0x0D
#define AK8974_WHOAMI 0x0F
#define AK8974_WHOAMI_VALUE_AMI306 0x46
#define AK8974_WHOAMI_VALUE_AMI305 0x47
#define AK8974_WHOAMI_VALUE_AK8974 0x48
@ -73,6 +75,35 @@
#define AK8974_TEMP 0x31
#define AMI305_TEMP 0x60
/* AMI306-specific control register */
#define AMI306_CTRL4 0x5C
/* AMI306 factory calibration data */
/* fine axis sensitivity */
#define AMI306_FINEOUTPUT_X 0x90
#define AMI306_FINEOUTPUT_Y 0x92
#define AMI306_FINEOUTPUT_Z 0x94
/* axis sensitivity */
#define AMI306_SENS_X 0x96
#define AMI306_SENS_Y 0x98
#define AMI306_SENS_Z 0x9A
/* axis cross-interference */
#define AMI306_GAIN_PARA_XZ 0x9C
#define AMI306_GAIN_PARA_XY 0x9D
#define AMI306_GAIN_PARA_YZ 0x9E
#define AMI306_GAIN_PARA_YX 0x9F
#define AMI306_GAIN_PARA_ZY 0xA0
#define AMI306_GAIN_PARA_ZX 0xA1
/* offset at ZERO magnetic field */
#define AMI306_OFFZERO_X 0xF8
#define AMI306_OFFZERO_Y 0xFA
#define AMI306_OFFZERO_Z 0xFC
#define AK8974_INT_X_HIGH BIT(7) /* Axis over +threshold */
#define AK8974_INT_Y_HIGH BIT(6)
#define AK8974_INT_Z_HIGH BIT(5)
@ -158,6 +189,26 @@ struct ak8974 {
static const char ak8974_reg_avdd[] = "avdd";
static const char ak8974_reg_dvdd[] = "dvdd";
static int ak8974_get_u16_val(struct ak8974 *ak8974, u8 reg, u16 *val)
{
int ret;
__le16 bulk;
ret = regmap_bulk_read(ak8974->map, reg, &bulk, 2);
if (ret)
return ret;
*val = le16_to_cpu(bulk);
return 0;
}
static int ak8974_set_u16_val(struct ak8974 *ak8974, u8 reg, u16 val)
{
__le16 bulk = cpu_to_le16(val);
return regmap_bulk_write(ak8974->map, reg, &bulk, 2);
}
static int ak8974_set_power(struct ak8974 *ak8974, bool mode)
{
int ret;
@ -209,6 +260,12 @@ static int ak8974_configure(struct ak8974 *ak8974)
ret = regmap_write(ak8974->map, AK8974_CTRL3, 0);
if (ret)
return ret;
if (ak8974->variant == AK8974_WHOAMI_VALUE_AMI306) {
/* magic from datasheet: set high-speed measurement mode */
ret = ak8974_set_u16_val(ak8974, AMI306_CTRL4, 0xA07E);
if (ret)
return ret;
}
ret = regmap_write(ak8974->map, AK8974_INT_CTRL, AK8974_INT_CTRL_POL);
if (ret)
return ret;
@ -388,17 +445,18 @@ static int ak8974_selftest(struct ak8974 *ak8974)
return 0;
}
static int ak8974_get_u16_val(struct ak8974 *ak8974, u8 reg, u16 *val)
static void ak8974_read_calib_data(struct ak8974 *ak8974, unsigned int reg,
__le16 *tab, size_t tab_size)
{
int ret;
__le16 bulk;
ret = regmap_bulk_read(ak8974->map, reg, &bulk, 2);
if (ret)
return ret;
*val = le16_to_cpu(bulk);
return 0;
int ret = regmap_bulk_read(ak8974->map, reg, tab, tab_size);
if (ret) {
memset(tab, 0xFF, tab_size);
dev_warn(&ak8974->i2c->dev,
"can't read calibration data (regs %u..%zu): %d\n",
reg, reg + tab_size - 1, ret);
} else {
add_device_randomness(tab, tab_size);
}
}
static int ak8974_detect(struct ak8974 *ak8974)
@ -413,9 +471,13 @@ static int ak8974_detect(struct ak8974 *ak8974)
if (ret)
return ret;
name = "ami305";
switch (whoami) {
case AK8974_WHOAMI_VALUE_AMI306:
name = "ami306";
/* fall-through */
case AK8974_WHOAMI_VALUE_AMI305:
name = "ami305";
ret = regmap_read(ak8974->map, AMI305_VER, &fw);
if (ret)
return ret;
@ -423,6 +485,7 @@ static int ak8974_detect(struct ak8974 *ak8974)
ret = ak8974_get_u16_val(ak8974, AMI305_SN, &sn);
if (ret)
return ret;
add_device_randomness(&sn, sizeof(sn));
dev_info(&ak8974->i2c->dev,
"detected %s, FW ver %02x, S/N: %04x\n",
name, fw, sn);
@ -440,6 +503,33 @@ static int ak8974_detect(struct ak8974 *ak8974)
ak8974->name = name;
ak8974->variant = whoami;
if (whoami == AK8974_WHOAMI_VALUE_AMI306) {
__le16 fab_data1[9], fab_data2[3];
int i;
ak8974_read_calib_data(ak8974, AMI306_FINEOUTPUT_X,
fab_data1, sizeof(fab_data1));
ak8974_read_calib_data(ak8974, AMI306_OFFZERO_X,
fab_data2, sizeof(fab_data2));
for (i = 0; i < 3; ++i) {
static const char axis[3] = "XYZ";
static const char pgaxis[6] = "ZYZXYX";
unsigned offz = le16_to_cpu(fab_data2[i]) & 0x7F;
unsigned fine = le16_to_cpu(fab_data1[i]);
unsigned sens = le16_to_cpu(fab_data1[i + 3]);
unsigned pgain1 = le16_to_cpu(fab_data1[i + 6]);
unsigned pgain2 = pgain1 >> 8;
pgain1 &= 0xFF;
dev_info(&ak8974->i2c->dev,
"factory calibration for axis %c: offz=%u sens=%u fine=%u pga%c=%u pga%c=%u\n",
axis[i], offz, sens, fine, pgaxis[i * 2],
pgain1, pgaxis[i * 2 + 1], pgain2);
}
}
return 0;
}
@ -602,19 +692,27 @@ static bool ak8974_writeable_reg(struct device *dev, unsigned int reg)
case AMI305_OFFSET_Y + 1:
case AMI305_OFFSET_Z:
case AMI305_OFFSET_Z + 1:
if (ak8974->variant == AK8974_WHOAMI_VALUE_AMI305)
return true;
return false;
return ak8974->variant == AK8974_WHOAMI_VALUE_AMI305 ||
ak8974->variant == AK8974_WHOAMI_VALUE_AMI306;
case AMI306_CTRL4:
case AMI306_CTRL4 + 1:
return ak8974->variant == AK8974_WHOAMI_VALUE_AMI306;
default:
return false;
}
}
static bool ak8974_precious_reg(struct device *dev, unsigned int reg)
{
return reg == AK8974_INT_CLEAR;
}
static const struct regmap_config ak8974_regmap_config = {
.reg_bits = 8,
.val_bits = 8,
.max_register = 0xff,
.writeable_reg = ak8974_writeable_reg,
.precious_reg = ak8974_precious_reg,
};
static int ak8974_probe(struct i2c_client *i2c,
@ -678,7 +776,7 @@ static int ak8974_probe(struct i2c_client *i2c,
ret = ak8974_detect(ak8974);
if (ret) {
dev_err(&i2c->dev, "neither AK8974 nor AMI305 found\n");
dev_err(&i2c->dev, "neither AK8974 nor AMI30x found\n");
goto power_off;
}
@ -827,6 +925,7 @@ static const struct dev_pm_ops ak8974_dev_pm_ops = {
static const struct i2c_device_id ak8974_id[] = {
{"ami305", 0 },
{"ami306", 0 },
{"ak8974", 0 },
{}
};
@ -850,7 +949,7 @@ static struct i2c_driver ak8974_driver = {
};
module_i2c_driver(ak8974_driver);
MODULE_DESCRIPTION("AK8974 and AMI305 3-axis magnetometer driver");
MODULE_DESCRIPTION("AK8974 and AMI30x 3-axis magnetometer driver");
MODULE_AUTHOR("Samu Onkalo");
MODULE_AUTHOR("Linus Walleij");
MODULE_LICENSE("GPL v2");

View File

@ -784,6 +784,7 @@ static const struct iio_info ak8975_info = {
.driver_module = THIS_MODULE,
};
#ifdef CONFIG_ACPI
static const struct acpi_device_id ak_acpi_match[] = {
{"AK8975", AK8975},
{"AK8963", AK8963},
@ -793,6 +794,7 @@ static const struct acpi_device_id ak_acpi_match[] = {
{ },
};
MODULE_DEVICE_TABLE(acpi, ak_acpi_match);
#endif
static const char *ak8975_match_acpi_device(struct device *dev,
enum asahi_compass_chipset *chipset)

View File

@ -19,6 +19,7 @@
#define LSM303DLM_MAGN_DEV_NAME "lsm303dlm_magn"
#define LIS3MDL_MAGN_DEV_NAME "lis3mdl"
#define LSM303AGR_MAGN_DEV_NAME "lsm303agr_magn"
#define LIS2MDL_MAGN_DEV_NAME "lis2mdl"
int st_magn_common_probe(struct iio_dev *indio_dev);
void st_magn_common_remove(struct iio_dev *indio_dev);

View File

@ -315,7 +315,7 @@ static const struct st_sensor_settings st_magn_sensors_settings[] = {
},
},
},
.multi_read_bit = false,
.multi_read_bit = true,
.bootime = 2,
},
{
@ -323,6 +323,7 @@ static const struct st_sensor_settings st_magn_sensors_settings[] = {
.wai_addr = 0x4f,
.sensors_supported = {
[0] = LSM303AGR_MAGN_DEV_NAME,
[1] = LIS2MDL_MAGN_DEV_NAME,
},
.ch = (struct iio_chan_spec *)st_magn_3_16bit_channels,
.odr = {

View File

@ -40,6 +40,10 @@ static const struct of_device_id st_magn_of_match[] = {
.compatible = "st,lsm303agr-magn",
.data = LSM303AGR_MAGN_DEV_NAME,
},
{
.compatible = "st,lis2mdl",
.data = LIS2MDL_MAGN_DEV_NAME,
},
{},
};
MODULE_DEVICE_TABLE(of, st_magn_of_match);
@ -59,7 +63,8 @@ static int st_magn_i2c_probe(struct i2c_client *client,
return -ENOMEM;
mdata = iio_priv(indio_dev);
st_sensors_of_i2c_probe(client, st_magn_of_match);
st_sensors_of_name_probe(&client->dev, st_magn_of_match,
client->name, sizeof(client->name));
st_sensors_i2c_configure(indio_dev, client, mdata);
@ -84,6 +89,7 @@ static const struct i2c_device_id st_magn_id_table[] = {
{ LSM303DLM_MAGN_DEV_NAME },
{ LIS3MDL_MAGN_DEV_NAME },
{ LSM303AGR_MAGN_DEV_NAME },
{ LIS2MDL_MAGN_DEV_NAME },
{},
};
MODULE_DEVICE_TABLE(i2c, st_magn_id_table);

View File

@ -18,6 +18,32 @@
#include <linux/iio/common/st_sensors_spi.h>
#include "st_magn.h"
#ifdef CONFIG_OF
/*
* For new single-chip sensors use <device_name> as compatible string.
* For old single-chip devices keep <device_name>-magn to maintain
* compatibility
*/
static const struct of_device_id st_magn_of_match[] = {
{
.compatible = "st,lis3mdl-magn",
.data = LIS3MDL_MAGN_DEV_NAME,
},
{
.compatible = "st,lsm303agr-magn",
.data = LSM303AGR_MAGN_DEV_NAME,
},
{
.compatible = "st,lis2mdl",
.data = LIS2MDL_MAGN_DEV_NAME,
},
{}
};
MODULE_DEVICE_TABLE(of, st_magn_of_match);
#else
#define st_magn_of_match NULL
#endif
static int st_magn_spi_probe(struct spi_device *spi)
{
struct iio_dev *indio_dev;
@ -30,6 +56,8 @@ static int st_magn_spi_probe(struct spi_device *spi)
mdata = iio_priv(indio_dev);
st_sensors_of_name_probe(&spi->dev, st_magn_of_match,
spi->modalias, sizeof(spi->modalias));
st_sensors_spi_configure(indio_dev, spi, mdata);
err = st_magn_common_probe(indio_dev);
@ -50,6 +78,7 @@ static int st_magn_spi_remove(struct spi_device *spi)
static const struct spi_device_id st_magn_id_table[] = {
{ LIS3MDL_MAGN_DEV_NAME },
{ LSM303AGR_MAGN_DEV_NAME },
{ LIS2MDL_MAGN_DEV_NAME },
{},
};
MODULE_DEVICE_TABLE(spi, st_magn_id_table);
@ -57,6 +86,7 @@ MODULE_DEVICE_TABLE(spi, st_magn_id_table);
static struct spi_driver st_magn_driver = {
.driver = {
.name = "st-magn-spi",
.of_match_table = of_match_ptr(st_magn_of_match),
},
.probe = st_magn_spi_probe,
.remove = st_magn_spi_remove,

View File

@ -238,7 +238,7 @@ static int dev_rot_parse_report(struct platform_device *pdev,
static int hid_dev_rot_probe(struct platform_device *pdev)
{
int ret;
static char *name;
char *name;
struct iio_dev *indio_dev;
struct dev_rot_state *rot_state;
struct hid_sensor_hub_device *hsdev = pdev->dev.platform_data;

View File

@ -181,11 +181,21 @@ static const struct i2c_device_id ms5637_id[] = {
};
MODULE_DEVICE_TABLE(i2c, ms5637_id);
static const struct of_device_id ms5637_of_match[] = {
{ .compatible = "meas,ms5637", },
{ .compatible = "meas,ms5805", },
{ .compatible = "meas,ms5837", },
{ .compatible = "meas,ms8607-temppressure", },
{ },
};
MODULE_DEVICE_TABLE(of, ms5637_of_match);
static struct i2c_driver ms5637_driver = {
.probe = ms5637_probe,
.id_table = ms5637_id,
.driver = {
.name = "ms5637"
.name = "ms5637",
.of_match_table = of_match_ptr(ms5637_of_match),
},
};
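
Several hunks in this pull repeat the pattern shown above for ms5637: add an of_device_id table, export it with MODULE_DEVICE_TABLE(of, ...), and reference it through of_match_ptr(). As a rough userspace sketch of what of_match_ptr() buys (the macro below is a simplified stand-in for the one in <linux/of.h>, and the struct is trimmed to the single field used here):

#include <stdio.h>
#include <stddef.h>

/* with CONFIG_OF the table is passed through untouched; without it the
 * expression collapses to NULL, so non-DT builds carry no match table */
#ifdef CONFIG_OF
#define of_match_ptr(ptr)	(ptr)
#else
#define of_match_ptr(ptr)	NULL
#endif

struct of_device_id {
	const char *compatible;
};

static const struct of_device_id ms5637_of_match[] = {
	{ .compatible = "meas,ms5637" },
	{ .compatible = "meas,ms5805" },
	{ .compatible = "meas,ms5837" },
	{ .compatible = "meas,ms8607-temppressure" },
	{ }
};

int main(void)
{
	const struct of_device_id *table = of_match_ptr(ms5637_of_match);

	printf("of_match_table: %s\n",
	       table ? table[0].compatible : "(none, non-DT build)");
	return 0;
}

Built with -DCONFIG_OF the sketch prints the first compatible string; built without it the driver would simply register no DT match table.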


@ -390,7 +390,7 @@ static const struct st_sensor_settings st_press_sensors_settings[] = {
.drdy_irq = {
.addr = 0x23,
.mask_int1 = 0x01,
.mask_int2 = 0x10,
.mask_int2 = 0x00,
.addr_ihl = 0x22,
.mask_ihl = 0x80,
.addr_od = 0x22,
@ -449,7 +449,7 @@ static const struct st_sensor_settings st_press_sensors_settings[] = {
.drdy_irq = {
.addr = 0x12,
.mask_int1 = 0x04,
.mask_int2 = 0x08,
.mask_int2 = 0x00,
.addr_ihl = 0x12,
.mask_ihl = 0x80,
.addr_od = 0x12,


@ -77,7 +77,8 @@ static int st_press_i2c_probe(struct i2c_client *client,
press_data = iio_priv(indio_dev);
if (client->dev.of_node) {
st_sensors_of_i2c_probe(client, st_press_of_match);
st_sensors_of_name_probe(&client->dev, st_press_of_match,
client->name, sizeof(client->name));
} else if (ACPI_HANDLE(&client->dev)) {
ret = st_sensors_match_acpi_device(&client->dev);
if ((ret < 0) || (ret >= ST_PRESS_MAX))


@ -18,6 +18,36 @@
#include <linux/iio/common/st_sensors_spi.h>
#include "st_pressure.h"
#ifdef CONFIG_OF
/*
* For new single-chip sensors use <device_name> as compatible string.
* For old single-chip devices keep <device_name>-press to maintain
* compatibility
*/
static const struct of_device_id st_press_of_match[] = {
{
.compatible = "st,lps001wp-press",
.data = LPS001WP_PRESS_DEV_NAME,
},
{
.compatible = "st,lps25h-press",
.data = LPS25H_PRESS_DEV_NAME,
},
{
.compatible = "st,lps331ap-press",
.data = LPS331AP_PRESS_DEV_NAME,
},
{
.compatible = "st,lps22hb-press",
.data = LPS22HB_PRESS_DEV_NAME,
},
{},
};
MODULE_DEVICE_TABLE(of, st_press_of_match);
#else
#define st_press_of_match NULL
#endif
static int st_press_spi_probe(struct spi_device *spi)
{
struct iio_dev *indio_dev;
@ -30,6 +60,8 @@ static int st_press_spi_probe(struct spi_device *spi)
press_data = iio_priv(indio_dev);
st_sensors_of_name_probe(&spi->dev, st_press_of_match,
spi->modalias, sizeof(spi->modalias));
st_sensors_spi_configure(indio_dev, spi, press_data);
err = st_press_common_probe(indio_dev);
@ -58,6 +90,7 @@ MODULE_DEVICE_TABLE(spi, st_press_id_table);
static struct spi_driver st_press_driver = {
.driver = {
.name = "st-press-spi",
.of_match_table = of_match_ptr(st_press_of_match),
},
.probe = st_press_spi_probe,
.remove = st_press_spi_remove,


@ -141,14 +141,14 @@ struct zpa2326_private {
struct regulator *vdd;
};
#define zpa2326_err(_idev, _format, _arg...) \
dev_err(_idev->dev.parent, _format, ##_arg)
#define zpa2326_err(idev, fmt, ...) \
dev_err(idev->dev.parent, fmt "\n", ##__VA_ARGS__)
#define zpa2326_warn(_idev, _format, _arg...) \
dev_warn(_idev->dev.parent, _format, ##_arg)
#define zpa2326_warn(idev, fmt, ...) \
dev_warn(idev->dev.parent, fmt "\n", ##__VA_ARGS__)
#define zpa2326_dbg(_idev, _format, _arg...) \
dev_dbg(_idev->dev.parent, _format, ##_arg)
#define zpa2326_dbg(idev, fmt, ...) \
dev_dbg(idev->dev.parent, fmt "\n", ##__VA_ARGS__)
bool zpa2326_isreg_writeable(struct device *dev, unsigned int reg)
{


@ -57,12 +57,12 @@ config SX9500
module will be called sx9500.
config SRF08
tristate "Devantech SRF08 ultrasonic ranger sensor"
tristate "Devantech SRF02/SRF08/SRF10 ultrasonic ranger sensor"
depends on I2C
help
Say Y here to build a driver for Devantech SRF08 ultrasonic
ranger sensor. This driver can be used to measure the distance
of objects.
Say Y here to build a driver for Devantech SRF02/SRF08/SRF10
ultrasonic ranger sensors with i2c interface.
This driver can be used to measure the distance of objects.
To compile this driver as a module, choose M here: the
module will be called srf08.


@ -1,14 +1,18 @@
/*
* srf08.c - Support for Devantech SRF08 ultrasonic ranger
* srf08.c - Support for Devantech SRFxx ultrasonic ranger
* with i2c interface
* actually supported are srf02, srf08, srf10
*
* Copyright (c) 2016 Andreas Klinger <ak@it-klinger.de>
* Copyright (c) 2016, 2017 Andreas Klinger <ak@it-klinger.de>
*
* This file is subject to the terms and conditions of version 2 of
* the GNU General Public License. See the file COPYING in the main
* the GNU General Public License. See the file COPYING in the main
* directory of this archive for more details.
*
* For details about the device see:
* http://www.robot-electronics.co.uk/htm/srf08tech.html
* http://www.robot-electronics.co.uk/htm/srf10tech.htm
* http://www.robot-electronics.co.uk/htm/srf02tech.htm
*/
#include <linux/err.h>
@ -18,6 +22,9 @@
#include <linux/bitops.h>
#include <linux/iio/iio.h>
#include <linux/iio/sysfs.h>
#include <linux/iio/buffer.h>
#include <linux/iio/trigger_consumer.h>
#include <linux/iio/triggered_buffer.h>
/* registers of SRF08 device */
#define SRF08_WRITE_COMMAND 0x00 /* Command Register */
@ -30,14 +37,46 @@
#define SRF08_CMD_RANGING_CM 0x51 /* Ranging Mode - Result in cm */
#define SRF08_DEFAULT_GAIN 1025 /* default analogue value of Gain */
#define SRF08_DEFAULT_RANGE 6020 /* default value of Range in mm */
enum srf08_sensor_type {
SRF02,
SRF08,
SRF10,
SRF_MAX_TYPE
};
struct srf08_chip_info {
const int *sensitivity_avail;
int num_sensitivity_avail;
int sensitivity_default;
/* default value of Range in mm */
int range_default;
};
struct srf08_data {
struct i2c_client *client;
int sensitivity; /* Gain */
int range_mm; /* max. Range in mm */
/*
* Gain in the datasheet is called sensitivity here to distinct it
* from the gain used with amplifiers of adc's
*/
int sensitivity;
/* max. Range in mm */
int range_mm;
struct mutex lock;
/*
* triggered buffer
* 1x16-bit channel + 3x16 padding + 4x16 timestamp
*/
s16 buffer[8];
/* Sensor-Type */
enum srf08_sensor_type sensor_type;
/* Chip-specific information */
const struct srf08_chip_info *chip_info;
};
/*
@ -47,11 +86,42 @@ struct srf08_data {
* But with ADC's this term is already used differently and that's why it
* is called "Sensitivity" here.
*/
static const int srf08_sensitivity[] = {
static const struct srf08_chip_info srf02_chip_info = {
.sensitivity_avail = NULL,
.num_sensitivity_avail = 0,
.sensitivity_default = 0,
.range_default = 0,
};
static const int srf08_sensitivity_avail[] = {
94, 97, 100, 103, 107, 110, 114, 118,
123, 128, 133, 139, 145, 152, 159, 168,
177, 187, 199, 212, 227, 245, 265, 288,
317, 352, 395, 450, 524, 626, 777, 1025 };
317, 352, 395, 450, 524, 626, 777, 1025
};
static const struct srf08_chip_info srf08_chip_info = {
.sensitivity_avail = srf08_sensitivity_avail,
.num_sensitivity_avail = ARRAY_SIZE(srf08_sensitivity_avail),
.sensitivity_default = 1025,
.range_default = 6020,
};
static const int srf10_sensitivity_avail[] = {
40, 40, 50, 60, 70, 80, 100, 120,
140, 200, 250, 300, 350, 400, 500, 600,
700,
};
static const struct srf08_chip_info srf10_chip_info = {
.sensitivity_avail = srf10_sensitivity_avail,
.num_sensitivity_avail = ARRAY_SIZE(srf10_sensitivity_avail),
.sensitivity_default = 700,
.range_default = 6020,
};
static int srf08_read_ranging(struct srf08_data *data)
{
@ -110,6 +180,29 @@ static int srf08_read_ranging(struct srf08_data *data)
return ret;
}
static irqreturn_t srf08_trigger_handler(int irq, void *p)
{
struct iio_poll_func *pf = p;
struct iio_dev *indio_dev = pf->indio_dev;
struct srf08_data *data = iio_priv(indio_dev);
s16 sensor_data;
sensor_data = srf08_read_ranging(data);
if (sensor_data < 0)
goto err;
mutex_lock(&data->lock);
data->buffer[0] = sensor_data;
iio_push_to_buffers_with_timestamp(indio_dev,
data->buffer, pf->timestamp);
mutex_unlock(&data->lock);
err:
iio_trigger_notify_done(indio_dev->trig);
return IRQ_HANDLED;
}
static int srf08_read_raw(struct iio_dev *indio_dev,
struct iio_chan_spec const *channel, int *val,
int *val2, long mask)
@ -225,9 +318,13 @@ static ssize_t srf08_show_sensitivity_available(struct device *dev,
struct device_attribute *attr, char *buf)
{
int i, len = 0;
struct iio_dev *indio_dev = dev_to_iio_dev(dev);
struct srf08_data *data = iio_priv(indio_dev);
for (i = 0; i < ARRAY_SIZE(srf08_sensitivity); i++)
len += sprintf(buf + len, "%d ", srf08_sensitivity[i]);
for (i = 0; i < data->chip_info->num_sensitivity_avail; i++)
if (data->chip_info->sensitivity_avail[i])
len += sprintf(buf + len, "%d ",
data->chip_info->sensitivity_avail[i]);
len += sprintf(buf + len, "\n");
@ -256,19 +353,21 @@ static ssize_t srf08_write_sensitivity(struct srf08_data *data,
int ret, i;
u8 regval;
for (i = 0; i < ARRAY_SIZE(srf08_sensitivity); i++)
if (val == srf08_sensitivity[i]) {
if (!val)
return -EINVAL;
for (i = 0; i < data->chip_info->num_sensitivity_avail; i++)
if (val && (val == data->chip_info->sensitivity_avail[i])) {
regval = i;
break;
}
if (i >= ARRAY_SIZE(srf08_sensitivity))
if (i >= data->chip_info->num_sensitivity_avail)
return -EINVAL;
mutex_lock(&data->lock);
ret = i2c_smbus_write_byte_data(client,
SRF08_WRITE_MAX_GAIN, regval);
ret = i2c_smbus_write_byte_data(client, SRF08_WRITE_MAX_GAIN, regval);
if (ret < 0) {
dev_err(&client->dev, "write_sensitivity - err: %d\n", ret);
mutex_unlock(&data->lock);
@ -323,7 +422,15 @@ static const struct iio_chan_spec srf08_channels[] = {
.info_mask_separate =
BIT(IIO_CHAN_INFO_RAW) |
BIT(IIO_CHAN_INFO_SCALE),
.scan_index = 0,
.scan_type = {
.sign = 's',
.realbits = 16,
.storagebits = 16,
.endianness = IIO_CPU,
},
},
IIO_CHAN_SOFT_TIMESTAMP(1),
};
static const struct iio_info srf08_info = {
@ -332,6 +439,15 @@ static const struct iio_info srf08_info = {
.driver_module = THIS_MODULE,
};
/*
* srf02 don't have an adjustable range or sensitivity,
* so we don't need attributes at all
*/
static const struct iio_info srf02_info = {
.read_raw = srf08_read_raw,
.driver_module = THIS_MODULE,
};
static int srf08_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
@ -352,34 +468,84 @@ static int srf08_probe(struct i2c_client *client,
data = iio_priv(indio_dev);
i2c_set_clientdata(client, indio_dev);
data->client = client;
data->sensor_type = (enum srf08_sensor_type)id->driver_data;
indio_dev->name = "srf08";
switch (data->sensor_type) {
case SRF02:
data->chip_info = &srf02_chip_info;
indio_dev->info = &srf02_info;
break;
case SRF08:
data->chip_info = &srf08_chip_info;
indio_dev->info = &srf08_info;
break;
case SRF10:
data->chip_info = &srf10_chip_info;
indio_dev->info = &srf08_info;
break;
default:
return -EINVAL;
}
indio_dev->name = id->name;
indio_dev->dev.parent = &client->dev;
indio_dev->modes = INDIO_DIRECT_MODE;
indio_dev->info = &srf08_info;
indio_dev->channels = srf08_channels;
indio_dev->num_channels = ARRAY_SIZE(srf08_channels);
mutex_init(&data->lock);
/*
* set default values of device here
* these register values cannot be read from the hardware
* therefore set driver specific default values
*/
ret = srf08_write_range_mm(data, SRF08_DEFAULT_RANGE);
if (ret < 0)
ret = devm_iio_triggered_buffer_setup(&client->dev, indio_dev,
iio_pollfunc_store_time, srf08_trigger_handler, NULL);
if (ret < 0) {
dev_err(&client->dev, "setup of iio triggered buffer failed\n");
return ret;
}
ret = srf08_write_sensitivity(data, SRF08_DEFAULT_GAIN);
if (ret < 0)
return ret;
if (data->chip_info->range_default) {
/*
* set default range of device in mm here
* these register values cannot be read from the hardware
* therefore set driver specific default values
*
* srf02 don't have a default value so it'll be omitted
*/
ret = srf08_write_range_mm(data,
data->chip_info->range_default);
if (ret < 0)
return ret;
}
if (data->chip_info->sensitivity_default) {
/*
* set default sensitivity of device here
* these register values cannot be read from the hardware
* therefore set driver specific default values
*
* srf02 don't have a default value so it'll be omitted
*/
ret = srf08_write_sensitivity(data,
data->chip_info->sensitivity_default);
if (ret < 0)
return ret;
}
return devm_iio_device_register(&client->dev, indio_dev);
}
static const struct of_device_id of_srf08_match[] = {
{ .compatible = "devantech,srf02", (void *)SRF02},
{ .compatible = "devantech,srf08", (void *)SRF08},
{ .compatible = "devantech,srf10", (void *)SRF10},
{},
};
MODULE_DEVICE_TABLE(of, of_srf08_match);
static const struct i2c_device_id srf08_id[] = {
{ "srf08", 0 },
{ "srf02", SRF02 },
{ "srf08", SRF08 },
{ "srf10", SRF10 },
{ }
};
MODULE_DEVICE_TABLE(i2c, srf08_id);
@ -387,6 +553,7 @@ MODULE_DEVICE_TABLE(i2c, srf08_id);
static struct i2c_driver srf08_driver = {
.driver = {
.name = "srf08",
.of_match_table = of_srf08_match,
},
.probe = srf08_probe,
.id_table = srf08_id,
@ -394,5 +561,5 @@ static struct i2c_driver srf08_driver = {
module_i2c_driver(srf08_driver);
MODULE_AUTHOR("Andreas Klinger <ak@it-klinger.de>");
MODULE_DESCRIPTION("Devantech SRF08 ultrasonic ranger driver");
MODULE_DESCRIPTION("Devantech SRF02/SRF08/SRF10 i2c ultrasonic ranger driver");
MODULE_LICENSE("GPL");
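
The srf08 rework above replaces the single gain array with per-chip srf08_chip_info tables and validates a requested sensitivity against whichever table matches the probed part. A standalone sketch of that lookup, reusing the srf10 values from the hunk above (this is only an illustration of the table-driven check, not the driver code itself):

#include <stdio.h>
#include <stddef.h>

#define ARRAY_SIZE(a)	(sizeof(a) / sizeof((a)[0]))

/* sensitivity values advertised for the srf10 in the patch */
static const int srf10_sensitivity_avail[] = {
	40, 40, 50, 60, 70, 80, 100, 120,
	140, 200, 250, 300, 350, 400, 500, 600,
	700,
};

/* map a requested sensitivity to the register value the driver would write;
 * anything not listed, including 0, is rejected (-EINVAL in the driver) */
static int sensitivity_to_regval(int val)
{
	size_t i;

	if (!val)
		return -1;

	for (i = 0; i < ARRAY_SIZE(srf10_sensitivity_avail); i++)
		if (val == srf10_sensitivity_avail[i])
			return (int)i;

	return -1;
}

int main(void)
{
	printf("700 -> reg %d\n", sensitivity_to_regval(700));	/* 16, the srf10 default */
	printf(" 40 -> reg %d\n", sensitivity_to_regval(40));	/* 0, first of the duplicate entries */
	printf("123 -> reg %d\n", sensitivity_to_regval(123));	/* -1: valid for srf08, not srf10 */
	return 0;
}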


@ -214,11 +214,18 @@ static const struct i2c_device_id tsys01_id[] = {
};
MODULE_DEVICE_TABLE(i2c, tsys01_id);
static const struct of_device_id tsys01_of_match[] = {
{ .compatible = "meas,tsys01", },
{ },
};
MODULE_DEVICE_TABLE(of, tsys01_of_match);
static struct i2c_driver tsys01_driver = {
.probe = tsys01_i2c_probe,
.id_table = tsys01_id,
.driver = {
.name = "tsys01",
.of_match_table = of_match_ptr(tsys01_of_match),
},
};


@ -13,6 +13,7 @@
#include <linux/mfd/stm32-timers.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/of_device.h>
#define MAX_TRIGGERS 7
#define MAX_VALIDS 5
@ -28,9 +29,14 @@ static const void *triggers_table[][MAX_TRIGGERS] = {
{ TIM7_TRGO,},
{ TIM8_TRGO, TIM8_TRGO2, TIM8_CH1, TIM8_CH2, TIM8_CH3, TIM8_CH4,},
{ TIM9_TRGO, TIM9_CH1, TIM9_CH2,},
{ }, /* timer 10 */
{ }, /* timer 11 */
{ TIM10_OC1,},
{ TIM11_OC1,},
{ TIM12_TRGO, TIM12_CH1, TIM12_CH2,},
{ TIM13_OC1,},
{ TIM14_OC1,},
{ TIM15_TRGO,},
{ TIM16_OC1,},
{ TIM17_OC1,},
};
/* List the triggers accepted by each timer */
@ -43,10 +49,30 @@ static const void *valids_table[][MAX_VALIDS] = {
{ }, /* timer 6 */
{ }, /* timer 7 */
{ TIM1_TRGO, TIM2_TRGO, TIM4_TRGO, TIM5_TRGO,},
{ TIM2_TRGO, TIM3_TRGO,},
{ TIM2_TRGO, TIM3_TRGO, TIM10_OC1, TIM11_OC1,},
{ }, /* timer 10 */
{ }, /* timer 11 */
{ TIM4_TRGO, TIM5_TRGO,},
{ TIM4_TRGO, TIM5_TRGO, TIM13_OC1, TIM14_OC1,},
};
static const void *stm32h7_valids_table[][MAX_VALIDS] = {
{ TIM15_TRGO, TIM2_TRGO, TIM3_TRGO, TIM4_TRGO,},
{ TIM1_TRGO, TIM8_TRGO, TIM3_TRGO, TIM4_TRGO,},
{ TIM1_TRGO, TIM2_TRGO, TIM15_TRGO, TIM4_TRGO,},
{ TIM1_TRGO, TIM2_TRGO, TIM3_TRGO, TIM8_TRGO,},
{ TIM1_TRGO, TIM8_TRGO, TIM3_TRGO, TIM4_TRGO,},
{ }, /* timer 6 */
{ }, /* timer 7 */
{ TIM1_TRGO, TIM2_TRGO, TIM4_TRGO, TIM5_TRGO,},
{ }, /* timer 9 */
{ }, /* timer 10 */
{ }, /* timer 11 */
{ TIM4_TRGO, TIM5_TRGO, TIM13_OC1, TIM14_OC1,},
{ }, /* timer 13 */
{ }, /* timer 14 */
{ TIM1_TRGO, TIM3_TRGO, TIM16_OC1, TIM17_OC1,},
{ }, /* timer 16 */
{ }, /* timer 17 */
};
struct stm32_timer_trigger {
@ -59,11 +85,21 @@ struct stm32_timer_trigger {
bool has_trgo2;
};
struct stm32_timer_trigger_cfg {
const void *(*valids_table)[MAX_VALIDS];
const unsigned int num_valids_table;
};
static bool stm32_timer_is_trgo2_name(const char *name)
{
return !!strstr(name, "trgo2");
}
static bool stm32_timer_is_trgo_name(const char *name)
{
return (!!strstr(name, "trgo") && !strstr(name, "trgo2"));
}
static int stm32_timer_start(struct stm32_timer_trigger *priv,
struct iio_trigger *trig,
unsigned int frequency)
@ -328,6 +364,7 @@ static int stm32_setup_iio_triggers(struct stm32_timer_trigger *priv)
while (cur && *cur) {
struct iio_trigger *trig;
bool cur_is_trgo = stm32_timer_is_trgo_name(*cur);
bool cur_is_trgo2 = stm32_timer_is_trgo2_name(*cur);
if (cur_is_trgo2 && !priv->has_trgo2) {
@ -344,10 +381,9 @@ static int stm32_setup_iio_triggers(struct stm32_timer_trigger *priv)
/*
* sampling frequency and master mode attributes
* should only be available on trgo trigger which
* is always the first in the list.
* should only be available on trgo/trgo2 triggers
*/
if (cur == priv->triggers || cur_is_trgo2)
if (cur_is_trgo || cur_is_trgo2)
trig->dev.groups = stm32_trigger_attr_groups;
iio_trigger_set_drvdata(trig, priv);
@ -770,18 +806,22 @@ static int stm32_timer_trigger_probe(struct platform_device *pdev)
struct device *dev = &pdev->dev;
struct stm32_timer_trigger *priv;
struct stm32_timers *ddata = dev_get_drvdata(pdev->dev.parent);
const struct stm32_timer_trigger_cfg *cfg;
unsigned int index;
int ret;
if (of_property_read_u32(dev->of_node, "reg", &index))
return -EINVAL;
cfg = (const struct stm32_timer_trigger_cfg *)
of_match_device(dev->driver->of_match_table, dev)->data;
if (index >= ARRAY_SIZE(triggers_table) ||
index >= ARRAY_SIZE(valids_table))
index >= cfg->num_valids_table)
return -EINVAL;
/* Create an IIO device only if we have triggers to be validated */
if (*valids_table[index])
if (*cfg->valids_table[index])
priv = stm32_setup_counter_device(dev);
else
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
@ -794,7 +834,7 @@ static int stm32_timer_trigger_probe(struct platform_device *pdev)
priv->clk = ddata->clk;
priv->max_arr = ddata->max_arr;
priv->triggers = triggers_table[index];
priv->valids = valids_table[index];
priv->valids = cfg->valids_table[index];
stm32_timer_detect_trgo2(priv);
ret = stm32_setup_iio_triggers(priv);
@ -806,8 +846,24 @@ static int stm32_timer_trigger_probe(struct platform_device *pdev)
return 0;
}
static const struct stm32_timer_trigger_cfg stm32_timer_trg_cfg = {
.valids_table = valids_table,
.num_valids_table = ARRAY_SIZE(valids_table),
};
static const struct stm32_timer_trigger_cfg stm32h7_timer_trg_cfg = {
.valids_table = stm32h7_valids_table,
.num_valids_table = ARRAY_SIZE(stm32h7_valids_table),
};
static const struct of_device_id stm32_trig_of_match[] = {
{ .compatible = "st,stm32-timer-trigger", },
{
.compatible = "st,stm32-timer-trigger",
.data = (void *)&stm32_timer_trg_cfg,
}, {
.compatible = "st,stm32h7-timer-trigger",
.data = (void *)&stm32h7_timer_trg_cfg,
},
{ /* end node */ },
};
MODULE_DEVICE_TABLE(of, stm32_trig_of_match);
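
With the stm32h7 variant added above, the set of triggers a timer accepts is no longer a single global table: probe picks a stm32_timer_trigger_cfg via of_match_device() and bounds-checks the timer's "reg" index against that variant's table. A compact userspace sketch of the same selection and check (the trigger names and table sizes below are placeholders for illustration, not the driver's real data):

#include <stdio.h>

#define MAX_VALIDS 5

/* placeholder trigger lists, one row per timer index */
static const char *f4_valids[][MAX_VALIDS] = {
	{ "tim1_trgo" },
	{ "tim2_trgo" },
	{ "tim3_trgo" },
};

static const char *h7_valids[][MAX_VALIDS] = {
	{ "tim15_trgo" },
};

struct trigger_cfg {
	const char *(*valids_table)[MAX_VALIDS];
	unsigned int num_valids_table;
};

static const struct trigger_cfg f4_cfg = {
	.valids_table = f4_valids,
	.num_valids_table = sizeof(f4_valids) / sizeof(f4_valids[0]),
};

static const struct trigger_cfg h7_cfg = {
	.valids_table = h7_valids,
	.num_valids_table = sizeof(h7_valids) / sizeof(h7_valids[0]),
};

/* cfg stands in for the of_match_device()->data result, index for "reg" */
static void check_timer(const struct trigger_cfg *cfg, unsigned int index)
{
	if (index >= cfg->num_valids_table) {
		printf("timer %u: no entry on this variant, probe would bail out\n", index);
		return;
	}
	printf("timer %u: first accepted trigger is %s\n",
	       index, cfg->valids_table[index][0]);
}

int main(void)
{
	check_timer(&f4_cfg, 2);	/* fine on the first table */
	check_timer(&h7_cfg, 2);	/* rejected: this table has one row */
	return 0;
}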


@ -40,6 +40,8 @@ source "drivers/staging/rtl8712/Kconfig"
source "drivers/staging/rtl8188eu/Kconfig"
source "drivers/staging/rtlwifi/Kconfig"
source "drivers/staging/rts5208/Kconfig"
source "drivers/staging/octeon/Kconfig"
@ -112,4 +114,6 @@ source "drivers/staging/typec/Kconfig"
source "drivers/staging/vboxvideo/Kconfig"
source "drivers/staging/pi433/Kconfig"
endif # STAGING


@ -10,6 +10,7 @@ obj-$(CONFIG_RTL8192E) += rtl8192e/
obj-$(CONFIG_RTL8723BS) += rtl8723bs/
obj-$(CONFIG_R8712U) += rtl8712/
obj-$(CONFIG_R8188EU) += rtl8188eu/
obj-$(CONFIG_R8822BE) += rtlwifi/
obj-$(CONFIG_RTS5208) += rts5208/
obj-$(CONFIG_NETLOGIC_XLR_NET) += netlogic/
obj-$(CONFIG_OCTEON_ETHERNET) += octeon/
@ -45,3 +46,4 @@ obj-$(CONFIG_GREYBUS) += greybus/
obj-$(CONFIG_BCM2835_VCHIQ) += vc04_services/
obj-$(CONFIG_CRYPTO_DEV_CCREE) += ccree/
obj-$(CONFIG_DRM_VBOXVIDEO) += vboxvideo/
obj-$(CONFIG_PI433) += pi433/


@ -135,7 +135,7 @@ struct ion_heap_ops {
/**
* heap flags - flags between the heaps and core ion code
*/
#define ION_HEAP_FLAG_DEFER_FREE (1 << 0)
#define ION_HEAP_FLAG_DEFER_FREE BIT(0)
/**
* private flags - flags internal to ion
@ -146,7 +146,7 @@ struct ion_heap_ops {
* any buffer storage that came from the system allocator will be
* returned to the system allocator.
*/
#define ION_PRIV_FLAG_SHRINKER_FREE (1 << 0)
#define ION_PRIV_FLAG_SHRINKER_FREE BIT(0)
/**
* struct ion_heap - represents a heap in the system
@ -226,8 +226,8 @@ int ion_heap_buffer_zero(struct ion_buffer *buffer);
int ion_heap_pages_zero(struct page *page, size_t size, pgprot_t pgprot);
int ion_alloc(size_t len,
unsigned int heap_id_mask,
unsigned int flags);
unsigned int heap_id_mask,
unsigned int flags);
/**
* ion_heap_init_shrinker
@ -291,7 +291,7 @@ size_t ion_heap_freelist_drain(struct ion_heap *heap, size_t size);
* flag.
*/
size_t ion_heap_freelist_shrink(struct ion_heap *heap,
size_t size);
size_t size);
/**
* ion_heap_freelist_size - returns the size of the freelist in bytes
@ -352,7 +352,7 @@ void ion_page_pool_free(struct ion_page_pool *pool, struct page *page);
* returns the number of items freed in pages
*/
int ion_page_pool_shrink(struct ion_page_pool *pool, gfp_t gfp_mask,
int nr_to_scan);
int nr_to_scan);
long ion_ioctl(struct file *filp, unsigned int cmd, unsigned long arg);
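
The two flag definitions above move from an open-coded (1 << 0) to the kernel's BIT() helper, which is the form checkpatch suggests. A small userspace sketch of what the helper expands to (modelled loosely on <linux/bits.h>, simplified):

#include <stdio.h>

/* simplified stand-in for the kernel's BIT() macro */
#define BIT(nr)	(1UL << (nr))

#define ION_HEAP_FLAG_DEFER_FREE	BIT(0)

int main(void)
{
	unsigned long flags = 0;

	flags |= ION_HEAP_FLAG_DEFER_FREE;

	/* same value as (1 << 0), but typed unsigned long, so shifts such as
	 * BIT(31) stay well defined instead of overflowing a signed int */
	printf("flags = %#lx, BIT(31) = %#lx\n", flags, BIT(31));
	return 0;
}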


@ -31,7 +31,6 @@ struct ion_cma_heap {
#define to_cma_heap(x) container_of(x, struct ion_cma_heap, heap)
/* ION CMA heap operations functions */
static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
unsigned long len,
@ -46,7 +45,7 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
if (!pages)
return -ENOMEM;
table = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
table = kmalloc(sizeof(*table), GFP_KERNEL);
if (!table)
goto err;
@ -106,7 +105,7 @@ static struct ion_heap *__ion_cma_heap_create(struct cma *cma)
return &cma_heap->heap;
}
int __ion_add_cma_heaps(struct cma *cma, void *data)
static int __ion_add_cma_heaps(struct cma *cma, void *data)
{
struct ion_heap *heap;


@ -98,7 +98,6 @@ static void free_buffer_page(struct ion_system_heap *heap,
ion_page_pool_free(pool, page);
}
static struct page *alloc_largest_available(struct ion_system_heap *heap,
struct ion_buffer *buffer,
unsigned long size,
@ -256,7 +255,6 @@ static struct ion_heap_ops system_heap_ops = {
static int ion_system_heap_debug_show(struct ion_heap *heap, struct seq_file *s,
void *unused)
{
struct ion_system_heap *sys_heap = container_of(heap,
struct ion_system_heap,
heap);


@ -23,12 +23,3 @@ config CRYPTO_DEV_CCREE
Choose this if you wish to use hardware acceleration of
cryptographic operations on the system REE.
If unsure say Y.
config CCREE_FIPS_SUPPORT
bool "Turn on CryptoCell 7XX REE FIPS mode support"
depends on CRYPTO_DEV_CCREE
default n
help
Say 'Y' to enable support for FIPS compliant mode by the
CCREE driver.
If unsure say N.


@ -1,3 +1,3 @@
obj-$(CONFIG_CRYPTO_DEV_CCREE) := ccree.o
ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_cipher.o ssi_hash.o ssi_aead.o ssi_ivgen.o ssi_sram_mgr.o ssi_pm.o
ccree-$(CCREE_FIPS_SUPPORT) += ssi_fips.o ssi_fips_ll.o ssi_fips_ext.o ssi_fips_local.o
ccree-$(CONFIG_CRYPTO_FIPS) += ssi_fips.o


@ -27,7 +27,8 @@
******************************************************************************/
#define HW_DESC_SIZE_WORDS 6
#define HW_QUEUE_SLOTS_MAX 15 /* Max. available slots in HW queue */
/* Define max. available slots in HW queue */
#define HW_QUEUE_SLOTS_MAX 15
#define CC_REG_NAME(word, name) DX_DSCRPTR_QUEUE_WORD ## word ## _ ## name


@ -36,7 +36,6 @@
#include "ssi_hash.h"
#include "ssi_sysfs.h"
#include "ssi_sram_mgr.h"
#include "ssi_fips_local.h"
#define template_aead template_u.aead
@ -57,22 +56,26 @@ struct ssi_aead_handle {
struct list_head aead_list;
};
struct cc_hmac_s {
u8 *padded_authkey;
u8 *ipad_opad; /* IPAD, OPAD*/
dma_addr_t padded_authkey_dma_addr;
dma_addr_t ipad_opad_dma_addr;
};
struct cc_xcbc_s {
u8 *xcbc_keys; /* K1,K2,K3 */
dma_addr_t xcbc_keys_dma_addr;
};
struct ssi_aead_ctx {
struct ssi_drvdata *drvdata;
u8 ctr_nonce[MAX_NONCE_SIZE]; /* used for ctr3686 iv and aes ccm */
u8 *enckey;
dma_addr_t enckey_dma_addr;
union {
struct {
u8 *padded_authkey;
u8 *ipad_opad; /* IPAD, OPAD*/
dma_addr_t padded_authkey_dma_addr;
dma_addr_t ipad_opad_dma_addr;
} hmac;
struct {
u8 *xcbc_keys; /* K1,K2,K3 */
dma_addr_t xcbc_keys_dma_addr;
} xcbc;
struct cc_hmac_s hmac;
struct cc_xcbc_s xcbc;
} auth_state;
unsigned int enc_keylen;
unsigned int auth_keylen;
@ -93,46 +96,50 @@ static void ssi_aead_exit(struct crypto_aead *tfm)
struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
SSI_LOG_DEBUG("Clearing context @%p for %s\n",
crypto_aead_ctx(tfm), crypto_tfm_alg_name(&(tfm->base)));
crypto_aead_ctx(tfm), crypto_tfm_alg_name(&tfm->base));
dev = &ctx->drvdata->plat_dev->dev;
/* Unmap enckey buffer */
if (ctx->enckey) {
dma_free_coherent(dev, AES_MAX_KEY_SIZE, ctx->enckey, ctx->enckey_dma_addr);
SSI_LOG_DEBUG("Freed enckey DMA buffer enckey_dma_addr=0x%llX\n",
(unsigned long long)ctx->enckey_dma_addr);
SSI_LOG_DEBUG("Freed enckey DMA buffer enckey_dma_addr=%pad\n",
ctx->enckey_dma_addr);
ctx->enckey_dma_addr = 0;
ctx->enckey = NULL;
}
if (ctx->auth_mode == DRV_HASH_XCBC_MAC) { /* XCBC authetication */
if (ctx->auth_state.xcbc.xcbc_keys) {
struct cc_xcbc_s *xcbc = &ctx->auth_state.xcbc;
if (xcbc->xcbc_keys) {
dma_free_coherent(dev, CC_AES_128_BIT_KEY_SIZE * 3,
ctx->auth_state.xcbc.xcbc_keys,
ctx->auth_state.xcbc.xcbc_keys_dma_addr);
xcbc->xcbc_keys,
xcbc->xcbc_keys_dma_addr);
}
SSI_LOG_DEBUG("Freed xcbc_keys DMA buffer xcbc_keys_dma_addr=0x%llX\n",
(unsigned long long)ctx->auth_state.xcbc.xcbc_keys_dma_addr);
ctx->auth_state.xcbc.xcbc_keys_dma_addr = 0;
ctx->auth_state.xcbc.xcbc_keys = NULL;
SSI_LOG_DEBUG("Freed xcbc_keys DMA buffer xcbc_keys_dma_addr=%pad\n",
xcbc->xcbc_keys_dma_addr);
xcbc->xcbc_keys_dma_addr = 0;
xcbc->xcbc_keys = NULL;
} else if (ctx->auth_mode != DRV_HASH_NULL) { /* HMAC auth. */
if (ctx->auth_state.hmac.ipad_opad) {
struct cc_hmac_s *hmac = &ctx->auth_state.hmac;
if (hmac->ipad_opad) {
dma_free_coherent(dev, 2 * MAX_HMAC_DIGEST_SIZE,
ctx->auth_state.hmac.ipad_opad,
ctx->auth_state.hmac.ipad_opad_dma_addr);
SSI_LOG_DEBUG("Freed ipad_opad DMA buffer ipad_opad_dma_addr=0x%llX\n",
(unsigned long long)ctx->auth_state.hmac.ipad_opad_dma_addr);
ctx->auth_state.hmac.ipad_opad_dma_addr = 0;
ctx->auth_state.hmac.ipad_opad = NULL;
hmac->ipad_opad,
hmac->ipad_opad_dma_addr);
SSI_LOG_DEBUG("Freed ipad_opad DMA buffer ipad_opad_dma_addr=%pad\n",
hmac->ipad_opad_dma_addr);
hmac->ipad_opad_dma_addr = 0;
hmac->ipad_opad = NULL;
}
if (ctx->auth_state.hmac.padded_authkey) {
if (hmac->padded_authkey) {
dma_free_coherent(dev, MAX_HMAC_BLOCK_SIZE,
ctx->auth_state.hmac.padded_authkey,
ctx->auth_state.hmac.padded_authkey_dma_addr);
SSI_LOG_DEBUG("Freed padded_authkey DMA buffer padded_authkey_dma_addr=0x%llX\n",
(unsigned long long)ctx->auth_state.hmac.padded_authkey_dma_addr);
ctx->auth_state.hmac.padded_authkey_dma_addr = 0;
ctx->auth_state.hmac.padded_authkey = NULL;
hmac->padded_authkey,
hmac->padded_authkey_dma_addr);
SSI_LOG_DEBUG("Freed padded_authkey DMA buffer padded_authkey_dma_addr=%pad\n",
hmac->padded_authkey_dma_addr);
hmac->padded_authkey_dma_addr = 0;
hmac->padded_authkey = NULL;
}
}
}
@ -144,9 +151,7 @@ static int ssi_aead_init(struct crypto_aead *tfm)
struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
struct ssi_crypto_alg *ssi_alg =
container_of(alg, struct ssi_crypto_alg, aead_alg);
SSI_LOG_DEBUG("Initializing context @%p for %s\n", ctx, crypto_tfm_alg_name(&(tfm->base)));
CHECK_AND_RETURN_UPON_FIPS_ERROR();
SSI_LOG_DEBUG("Initializing context @%p for %s\n", ctx, crypto_tfm_alg_name(&tfm->base));
/* Initialize modes in instance */
ctx->cipher_mode = ssi_alg->cipher_mode;
@ -158,7 +163,7 @@ static int ssi_aead_init(struct crypto_aead *tfm)
/* Allocate key buffer, cache line aligned */
ctx->enckey = dma_alloc_coherent(dev, AES_MAX_KEY_SIZE,
&ctx->enckey_dma_addr, GFP_KERNEL);
&ctx->enckey_dma_addr, GFP_KERNEL);
if (!ctx->enckey) {
SSI_LOG_ERR("Failed allocating key buffer\n");
goto init_failed;
@ -168,31 +173,42 @@ static int ssi_aead_init(struct crypto_aead *tfm)
/* Set default authlen value */
if (ctx->auth_mode == DRV_HASH_XCBC_MAC) { /* XCBC authetication */
struct cc_xcbc_s *xcbc = &ctx->auth_state.xcbc;
const unsigned int key_size = CC_AES_128_BIT_KEY_SIZE * 3;
/* Allocate dma-coherent buffer for XCBC's K1+K2+K3 */
/* (and temporary for user key - up to 256b) */
ctx->auth_state.xcbc.xcbc_keys = dma_alloc_coherent(dev,
CC_AES_128_BIT_KEY_SIZE * 3,
&ctx->auth_state.xcbc.xcbc_keys_dma_addr, GFP_KERNEL);
if (!ctx->auth_state.xcbc.xcbc_keys) {
xcbc->xcbc_keys = dma_alloc_coherent(dev, key_size,
&xcbc->xcbc_keys_dma_addr,
GFP_KERNEL);
if (!xcbc->xcbc_keys) {
SSI_LOG_ERR("Failed allocating buffer for XCBC keys\n");
goto init_failed;
}
} else if (ctx->auth_mode != DRV_HASH_NULL) { /* HMAC authentication */
struct cc_hmac_s *hmac = &ctx->auth_state.hmac;
const unsigned int digest_size = 2 * MAX_HMAC_DIGEST_SIZE;
dma_addr_t *pkey_dma = &hmac->padded_authkey_dma_addr;
/* Allocate dma-coherent buffer for IPAD + OPAD */
ctx->auth_state.hmac.ipad_opad = dma_alloc_coherent(dev,
2 * MAX_HMAC_DIGEST_SIZE,
&ctx->auth_state.hmac.ipad_opad_dma_addr, GFP_KERNEL);
if (!ctx->auth_state.hmac.ipad_opad) {
hmac->ipad_opad = dma_alloc_coherent(dev, digest_size,
&hmac->ipad_opad_dma_addr,
GFP_KERNEL);
if (!hmac->ipad_opad) {
SSI_LOG_ERR("Failed allocating IPAD/OPAD buffer\n");
goto init_failed;
}
SSI_LOG_DEBUG("Allocated authkey buffer in context ctx->authkey=@%p\n",
ctx->auth_state.hmac.ipad_opad);
ctx->auth_state.hmac.padded_authkey = dma_alloc_coherent(dev,
MAX_HMAC_BLOCK_SIZE,
&ctx->auth_state.hmac.padded_authkey_dma_addr, GFP_KERNEL);
if (!ctx->auth_state.hmac.padded_authkey) {
SSI_LOG_DEBUG("Allocated authkey buffer in context ctx->authkey=@%p\n",
hmac->ipad_opad);
hmac->padded_authkey = dma_alloc_coherent(dev,
MAX_HMAC_BLOCK_SIZE,
pkey_dma,
GFP_KERNEL);
if (!hmac->padded_authkey) {
SSI_LOG_ERR("failed to allocate padded_authkey\n");
goto init_failed;
}
@ -223,7 +239,7 @@ static void ssi_aead_complete(struct device *dev, void *ssi_req, void __iomem *c
if (areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) {
if (memcmp(areq_ctx->mac_buf, areq_ctx->icv_virt_addr,
ctx->authsize) != 0) {
ctx->authsize) != 0) {
SSI_LOG_DEBUG("Payload authentication failure, "
"(auth-size=%d, cipher=%d).\n",
ctx->authsize, ctx->cipher_mode);
@ -236,8 +252,8 @@ static void ssi_aead_complete(struct device *dev, void *ssi_req, void __iomem *c
} else { /*ENCRYPT*/
if (unlikely(areq_ctx->is_icv_fragmented))
ssi_buffer_mgr_copy_scatterlist_portion(
areq_ctx->mac_buf, areq_ctx->dstSgl, areq->cryptlen + areq_ctx->dstOffset,
areq->cryptlen + areq_ctx->dstOffset + ctx->authsize, SSI_SG_FROM_BUF);
areq_ctx->mac_buf, areq_ctx->dst_sgl, areq->cryptlen + areq_ctx->dst_offset,
areq->cryptlen + areq_ctx->dst_offset + ctx->authsize, SSI_SG_FROM_BUF);
/* If an IV was generated, copy it back to the user provided buffer. */
if (areq_ctx->backup_giv) {
@ -292,12 +308,13 @@ static int xcbc_setkey(struct cc_hw_desc *desc, struct ssi_aead_ctx *ctx)
static int hmac_setkey(struct cc_hw_desc *desc, struct ssi_aead_ctx *ctx)
{
unsigned int hmacPadConst[2] = { HMAC_IPAD_CONST, HMAC_OPAD_CONST };
unsigned int hmac_pad_const[2] = { HMAC_IPAD_CONST, HMAC_OPAD_CONST };
unsigned int digest_ofs = 0;
unsigned int hash_mode = (ctx->auth_mode == DRV_HASH_SHA1) ?
DRV_HASH_HW_SHA1 : DRV_HASH_HW_SHA256;
unsigned int digest_size = (ctx->auth_mode == DRV_HASH_SHA1) ?
CC_SHA1_DIGEST_SIZE : CC_SHA256_DIGEST_SIZE;
struct cc_hmac_s *hmac = &ctx->auth_state.hmac;
int idx = 0;
int i;
@ -325,7 +342,7 @@ static int hmac_setkey(struct cc_hw_desc *desc, struct ssi_aead_ctx *ctx)
/* Prepare ipad key */
hw_desc_init(&desc[idx]);
set_xor_val(&desc[idx], hmacPadConst[i]);
set_xor_val(&desc[idx], hmac_pad_const[i]);
set_cipher_mode(&desc[idx], hash_mode);
set_flow_mode(&desc[idx], S_DIN_to_HASH);
set_setup_mode(&desc[idx], SETUP_LOAD_STATE1);
@ -334,7 +351,7 @@ static int hmac_setkey(struct cc_hw_desc *desc, struct ssi_aead_ctx *ctx)
/* Perform HASH update */
hw_desc_init(&desc[idx]);
set_din_type(&desc[idx], DMA_DLLI,
ctx->auth_state.hmac.padded_authkey_dma_addr,
hmac->padded_authkey_dma_addr,
SHA256_BLOCK_SIZE, NS_BIT);
set_cipher_mode(&desc[idx], hash_mode);
set_xor_active(&desc[idx]);
@ -345,8 +362,8 @@ static int hmac_setkey(struct cc_hw_desc *desc, struct ssi_aead_ctx *ctx)
hw_desc_init(&desc[idx]);
set_cipher_mode(&desc[idx], hash_mode);
set_dout_dlli(&desc[idx],
(ctx->auth_state.hmac.ipad_opad_dma_addr +
digest_ofs), digest_size, NS_BIT, 0);
(hmac->ipad_opad_dma_addr + digest_ofs),
digest_size, NS_BIT, 0);
set_flow_mode(&desc[idx], S_HASH_to_DOUT);
set_setup_mode(&desc[idx], SETUP_WRITE_STATE0);
set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED);
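
For reference, the descriptor chain above is the hardware-offloaded form of the usual HMAC key schedule: the zero-padded key is XORed with a per-pass constant and hashed once for IPAD and once for OPAD, and the two intermediate digests land in ipad_opad. A plain-C sketch of just the pad derivation, per RFC 2104 (the 0x36/0x5c bytes are the conventional constants; keys longer than a block are assumed to have been hashed down already, which the driver's padded_authkey path takes care of):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define BLOCK_SIZE 64	/* SHA-1/SHA-256 block size */

/* derive the HMAC inner and outer pad blocks from a zero-padded key */
static void hmac_pads(const uint8_t *key, size_t keylen,
		      uint8_t ipad[BLOCK_SIZE], uint8_t opad[BLOCK_SIZE])
{
	uint8_t padded[BLOCK_SIZE] = { 0 };
	size_t i;

	memcpy(padded, key, keylen);	/* assumes keylen <= BLOCK_SIZE */
	for (i = 0; i < BLOCK_SIZE; i++) {
		ipad[i] = padded[i] ^ 0x36;	/* inner pad byte */
		opad[i] = padded[i] ^ 0x5c;	/* outer pad byte */
	}
}

int main(void)
{
	uint8_t ipad[BLOCK_SIZE], opad[BLOCK_SIZE];

	hmac_pads((const uint8_t *)"secret", 6, ipad, opad);
	printf("ipad[0]=%#04x opad[0]=%#04x\n", ipad[0], opad[0]);
	return 0;
}
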
@ -361,7 +378,7 @@ static int hmac_setkey(struct cc_hw_desc *desc, struct ssi_aead_ctx *ctx)
static int validate_keys_sizes(struct ssi_aead_ctx *ctx)
{
SSI_LOG_DEBUG("enc_keylen=%u authkeylen=%u\n",
ctx->enc_keylen, ctx->auth_keylen);
ctx->enc_keylen, ctx->auth_keylen);
switch (ctx->auth_mode) {
case DRV_HASH_SHA1:
@ -385,7 +402,7 @@ static int validate_keys_sizes(struct ssi_aead_ctx *ctx)
if (unlikely(ctx->flow_mode == S_DIN_to_DES)) {
if (ctx->enc_keylen != DES3_EDE_KEY_SIZE) {
SSI_LOG_ERR("Invalid cipher(3DES) key size: %u\n",
ctx->enc_keylen);
ctx->enc_keylen);
return -EINVAL;
}
} else { /* Default assumed to be AES ciphers */
@ -393,7 +410,7 @@ static int validate_keys_sizes(struct ssi_aead_ctx *ctx)
(ctx->enc_keylen != AES_KEYSIZE_192) &&
(ctx->enc_keylen != AES_KEYSIZE_256)) {
SSI_LOG_ERR("Invalid cipher(AES) key size: %u\n",
ctx->enc_keylen);
ctx->enc_keylen);
return -EINVAL;
}
}
@ -536,9 +553,9 @@ ssi_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
int seq_len = 0, rc = -EINVAL;
SSI_LOG_DEBUG("Setting key in context @%p for %s. key=%p keylen=%u\n",
ctx, crypto_tfm_alg_name(crypto_aead_tfm(tfm)), key, keylen);
ctx, crypto_tfm_alg_name(crypto_aead_tfm(tfm)),
key, keylen);
CHECK_AND_RETURN_UPON_FIPS_ERROR();
/* STAT_PHASE_0: Init and sanity checks */
if (ctx->auth_mode != DRV_HASH_NULL) { /* authenc() alg. */
@ -654,7 +671,6 @@ static int ssi_aead_setauthsize(
{
struct ssi_aead_ctx *ctx = crypto_aead_ctx(authenc);
CHECK_AND_RETURN_UPON_FIPS_ERROR();
/* Unsupported auth. sizes */
if ((authsize == 0) ||
(authsize > crypto_aead_maxauthsize(authenc))) {
@ -669,7 +685,7 @@ static int ssi_aead_setauthsize(
#if SSI_CC_HAS_AES_CCM
static int ssi_rfc4309_ccm_setauthsize(struct crypto_aead *authenc,
unsigned int authsize)
unsigned int authsize)
{
switch (authsize) {
case 8:
@ -684,7 +700,7 @@ static int ssi_rfc4309_ccm_setauthsize(struct crypto_aead *authenc,
}
static int ssi_ccm_setauthsize(struct crypto_aead *authenc,
unsigned int authsize)
unsigned int authsize)
{
switch (authsize) {
case 4:
@ -762,11 +778,11 @@ ssi_aead_process_authenc_data_desc(
{
struct scatterlist *cipher =
(direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ?
areq_ctx->dstSgl : areq_ctx->srcSgl;
areq_ctx->dst_sgl : areq_ctx->src_sgl;
unsigned int offset =
(direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ?
areq_ctx->dstOffset : areq_ctx->srcOffset;
areq_ctx->dst_offset : areq_ctx->src_offset;
SSI_LOG_DEBUG("AUTHENC: SRC/DST buffer type DLLI\n");
hw_desc_init(&desc[idx]);
set_din_type(&desc[idx], DMA_DLLI,
@ -828,11 +844,11 @@ ssi_aead_process_cipher_data_desc(
SSI_LOG_DEBUG("CIPHER: SRC/DST buffer type DLLI\n");
hw_desc_init(&desc[idx]);
set_din_type(&desc[idx], DMA_DLLI,
(sg_dma_address(areq_ctx->srcSgl) +
areq_ctx->srcOffset), areq_ctx->cryptlen, NS_BIT);
(sg_dma_address(areq_ctx->src_sgl) +
areq_ctx->src_offset), areq_ctx->cryptlen, NS_BIT);
set_dout_dlli(&desc[idx],
(sg_dma_address(areq_ctx->dstSgl) +
areq_ctx->dstOffset),
(sg_dma_address(areq_ctx->dst_sgl) +
areq_ctx->dst_offset),
areq_ctx->cryptlen, NS_BIT, 0);
set_flow_mode(&desc[idx], flow_mode);
break;
@ -1168,8 +1184,8 @@ static inline void ssi_aead_load_mlli_to_sram(
(req_ctx->data_buff_type == SSI_DMA_BUF_MLLI) ||
!req_ctx->is_single_pass)) {
SSI_LOG_DEBUG("Copy-to-sram: mlli_dma=%08x, mlli_size=%u\n",
(unsigned int)ctx->drvdata->mlli_sram_addr,
req_ctx->mlli_params.mlli_len);
(unsigned int)ctx->drvdata->mlli_sram_addr,
req_ctx->mlli_params.mlli_len);
/* Copy MLLI table host-to-sram */
hw_desc_init(&desc[*seq_size]);
set_din_type(&desc[*seq_size], DMA_DLLI,
@ -1313,7 +1329,8 @@ ssi_aead_xcbc_authenc(
}
static int validate_data_size(struct ssi_aead_ctx *ctx,
enum drv_crypto_direction direct, struct aead_request *req)
enum drv_crypto_direction direct,
struct aead_request *req)
{
struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
unsigned int assoclen = req->assoclen;
@ -1321,7 +1338,7 @@ static int validate_data_size(struct ssi_aead_ctx *ctx,
(req->cryptlen - ctx->authsize) : req->cryptlen;
if (unlikely((direct == DRV_CRYPTO_DIRECTION_DECRYPT) &&
(req->cryptlen < ctx->authsize)))
(req->cryptlen < ctx->authsize)))
goto data_size_err;
areq_ctx->is_single_pass = true; /*defaulted to fast flow*/
@ -1329,7 +1346,7 @@ static int validate_data_size(struct ssi_aead_ctx *ctx,
switch (ctx->flow_mode) {
case S_DIN_to_AES:
if (unlikely((ctx->cipher_mode == DRV_CIPHER_CBC) &&
!IS_ALIGNED(cipherlen, AES_BLOCK_SIZE)))
!IS_ALIGNED(cipherlen, AES_BLOCK_SIZE)))
goto data_size_err;
if (ctx->cipher_mode == DRV_CIPHER_CCM)
break;
@ -1365,27 +1382,27 @@ data_size_err:
}
#if SSI_CC_HAS_AES_CCM
static unsigned int format_ccm_a0(u8 *pA0Buff, u32 headerSize)
static unsigned int format_ccm_a0(u8 *pa0_buff, u32 header_size)
{
unsigned int len = 0;
if (headerSize == 0)
if (header_size == 0)
return 0;
if (headerSize < ((1UL << 16) - (1UL << 8))) {
if (header_size < ((1UL << 16) - (1UL << 8))) {
len = 2;
pA0Buff[0] = (headerSize >> 8) & 0xFF;
pA0Buff[1] = headerSize & 0xFF;
pa0_buff[0] = (header_size >> 8) & 0xFF;
pa0_buff[1] = header_size & 0xFF;
} else {
len = 6;
pA0Buff[0] = 0xFF;
pA0Buff[1] = 0xFE;
pA0Buff[2] = (headerSize >> 24) & 0xFF;
pA0Buff[3] = (headerSize >> 16) & 0xFF;
pA0Buff[4] = (headerSize >> 8) & 0xFF;
pA0Buff[5] = headerSize & 0xFF;
pa0_buff[0] = 0xFF;
pa0_buff[1] = 0xFE;
pa0_buff[2] = (header_size >> 24) & 0xFF;
pa0_buff[3] = (header_size >> 16) & 0xFF;
pa0_buff[4] = (header_size >> 8) & 0xFF;
pa0_buff[5] = header_size & 0xFF;
}
return len;
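
The renamed format_ccm_a0() above packs the associated-data length the way CCM (RFC 3610) prescribes: headers shorter than 2^16 - 2^8 bytes get a two-byte big-endian length, anything larger gets an 0xff 0xfe marker followed by four big-endian bytes. The same logic as a standalone, runnable C function:

#include <stdio.h>
#include <stdint.h>

/* pack the CCM associated-data length field as format_ccm_a0() does */
static unsigned int ccm_a0_len(uint8_t *buf, uint32_t header_size)
{
	if (header_size == 0)
		return 0;

	if (header_size < ((1UL << 16) - (1UL << 8))) {
		buf[0] = (header_size >> 8) & 0xFF;	/* short form: 2 bytes */
		buf[1] = header_size & 0xFF;
		return 2;
	}

	buf[0] = 0xFF;					/* long form: marker + 4 bytes */
	buf[1] = 0xFE;
	buf[2] = (header_size >> 24) & 0xFF;
	buf[3] = (header_size >> 16) & 0xFF;
	buf[4] = (header_size >> 8) & 0xFF;
	buf[5] = header_size & 0xFF;
	return 6;
}

int main(void)
{
	uint8_t a0[6];
	unsigned int n = ccm_a0_len(a0, 300);

	printf("300 bytes of AAD -> %u length bytes: %02x %02x\n",
	       n, a0[0], a0[1]);	/* prints: 2 length bytes, 01 2c */
	return 0;
}
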
@ -1557,7 +1574,7 @@ static int config_ccm_adata(struct aead_request *req)
/* taken from crypto/ccm.c */
/* 2 <= L <= 8, so 1 <= L' <= 7. */
if (2 > l || l > 8) {
if (l < 2 || l > 8) {
SSI_LOG_ERR("illegal iv value %X\n", req->iv[0]);
return -EINVAL;
}
@ -1848,8 +1865,9 @@ static inline void ssi_aead_dump_gcm(
SSI_LOG_DEBUG("%s\n", title);
}
SSI_LOG_DEBUG("cipher_mode %d, authsize %d, enc_keylen %d, assoclen %d, cryptlen %d\n", \
ctx->cipher_mode, ctx->authsize, ctx->enc_keylen, req->assoclen, req_ctx->cryptlen);
SSI_LOG_DEBUG("cipher_mode %d, authsize %d, enc_keylen %d, assoclen %d, cryptlen %d\n",
ctx->cipher_mode, ctx->authsize, ctx->enc_keylen,
req->assoclen, req_ctx->cryptlen);
if (ctx->enckey)
dump_byte_array("mac key", ctx->enckey, 16);
@ -1864,7 +1882,7 @@ static inline void ssi_aead_dump_gcm(
dump_byte_array("mac_buf", req_ctx->mac_buf, AES_BLOCK_SIZE);
dump_byte_array("gcm_len_block", req_ctx->gcm_len_block.lenA, AES_BLOCK_SIZE);
dump_byte_array("gcm_len_block", req_ctx->gcm_len_block.len_a, AES_BLOCK_SIZE);
if (req->src && req->cryptlen)
dump_byte_array("req->src", sg_virt(req->src), req->cryptlen + req->assoclen);
@ -1886,7 +1904,7 @@ static int config_gcm_context(struct aead_request *req)
(req->cryptlen - ctx->authsize);
__be32 counter = cpu_to_be32(2);
SSI_LOG_DEBUG("config_gcm_context() cryptlen = %d, req->assoclen = %d ctx->authsize = %d\n", cryptlen, req->assoclen, ctx->authsize);
SSI_LOG_DEBUG("%s() cryptlen = %d, req->assoclen = %d ctx->authsize = %d\n", __func__, cryptlen, req->assoclen, ctx->authsize);
memset(req_ctx->hkey, 0, AES_BLOCK_SIZE);
@ -1903,16 +1921,16 @@ static int config_gcm_context(struct aead_request *req)
__be64 temp64;
temp64 = cpu_to_be64(req->assoclen * 8);
memcpy(&req_ctx->gcm_len_block.lenA, &temp64, sizeof(temp64));
memcpy(&req_ctx->gcm_len_block.len_a, &temp64, sizeof(temp64));
temp64 = cpu_to_be64(cryptlen * 8);
memcpy(&req_ctx->gcm_len_block.lenC, &temp64, 8);
memcpy(&req_ctx->gcm_len_block.len_c, &temp64, 8);
} else { //rfc4543=> all data(AAD,IV,Plain) are considered additional data that is nothing is encrypted.
__be64 temp64;
temp64 = cpu_to_be64((req->assoclen + GCM_BLOCK_RFC4_IV_SIZE + cryptlen) * 8);
memcpy(&req_ctx->gcm_len_block.lenA, &temp64, sizeof(temp64));
memcpy(&req_ctx->gcm_len_block.len_a, &temp64, sizeof(temp64));
temp64 = 0;
memcpy(&req_ctx->gcm_len_block.lenC, &temp64, 8);
memcpy(&req_ctx->gcm_len_block.len_c, &temp64, 8);
}
return 0;
@ -1944,16 +1962,16 @@ static int ssi_aead_process(struct aead_request *req, enum drv_crypto_direction
struct ssi_crypto_req ssi_req = {};
SSI_LOG_DEBUG("%s context=%p req=%p iv=%p src=%p src_ofs=%d dst=%p dst_ofs=%d cryptolen=%d\n",
((direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ? "Encrypt" : "Decrypt"), ctx, req, req->iv,
sg_virt(req->src), req->src->offset, sg_virt(req->dst), req->dst->offset, req->cryptlen);
CHECK_AND_RETURN_UPON_FIPS_ERROR();
((direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ? "Encrypt" : "Decrypt"),
ctx, req, req->iv, sg_virt(req->src), req->src->offset,
sg_virt(req->dst), req->dst->offset, req->cryptlen);
/* STAT_PHASE_0: Init and sanity checks */
/* Check data length according to mode */
if (unlikely(validate_data_size(ctx, direct, req) != 0)) {
SSI_LOG_ERR("Unsupported crypt/assoc len %d/%d.\n",
req->cryptlen, req->assoclen);
req->cryptlen, req->assoclen);
crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_BLOCK_LEN);
return -EINVAL;
}
@ -1976,7 +1994,7 @@ static int ssi_aead_process(struct aead_request *req, enum drv_crypto_direction
memcpy(areq_ctx->ctr_iv, ctx->ctr_nonce, CTR_RFC3686_NONCE_SIZE);
if (!areq_ctx->backup_giv) /*User none-generated IV*/
memcpy(areq_ctx->ctr_iv + CTR_RFC3686_NONCE_SIZE,
req->iv, CTR_RFC3686_IV_SIZE);
req->iv, CTR_RFC3686_IV_SIZE);
/* Initialize counter portion of counter block */
*(__be32 *)(areq_ctx->ctr_iv + CTR_RFC3686_NONCE_SIZE +
CTR_RFC3686_IV_SIZE) = cpu_to_be32(1);
@ -2198,7 +2216,7 @@ static int ssi_rfc4106_gcm_setkey(struct crypto_aead *tfm, const u8 *key, unsign
struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
int rc = 0;
SSI_LOG_DEBUG("ssi_rfc4106_gcm_setkey() keylen %d, key %p\n", keylen, key);
SSI_LOG_DEBUG("%s() keylen %d, key %p\n", __func__, keylen, key);
if (keylen < 4)
return -EINVAL;
@ -2216,7 +2234,7 @@ static int ssi_rfc4543_gcm_setkey(struct crypto_aead *tfm, const u8 *key, unsign
struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
int rc = 0;
SSI_LOG_DEBUG("ssi_rfc4543_gcm_setkey() keylen %d, key %p\n", keylen, key);
SSI_LOG_DEBUG("%s() keylen %d, key %p\n", __func__, keylen, key);
if (keylen < 4)
return -EINVAL;
@ -2230,7 +2248,7 @@ static int ssi_rfc4543_gcm_setkey(struct crypto_aead *tfm, const u8 *key, unsign
}
static int ssi_gcm_setauthsize(struct crypto_aead *authenc,
unsigned int authsize)
unsigned int authsize)
{
switch (authsize) {
case 4:
@ -2249,9 +2267,9 @@ static int ssi_gcm_setauthsize(struct crypto_aead *authenc,
}
static int ssi_rfc4106_gcm_setauthsize(struct crypto_aead *authenc,
unsigned int authsize)
unsigned int authsize)
{
SSI_LOG_DEBUG("ssi_rfc4106_gcm_setauthsize() authsize %d\n", authsize);
SSI_LOG_DEBUG("authsize %d\n", authsize);
switch (authsize) {
case 8:
@ -2268,7 +2286,7 @@ static int ssi_rfc4106_gcm_setauthsize(struct crypto_aead *authenc,
static int ssi_rfc4543_gcm_setauthsize(struct crypto_aead *authenc,
unsigned int authsize)
{
SSI_LOG_DEBUG("ssi_rfc4543_gcm_setauthsize() authsize %d\n", authsize);
SSI_LOG_DEBUG("authsize %d\n", authsize);
if (authsize != 16)
return -EINVAL;
@ -2641,7 +2659,7 @@ static struct ssi_crypto_alg *ssi_aead_create_alg(struct ssi_alg_template *templ
struct ssi_crypto_alg *t_alg;
struct aead_alg *alg;
t_alg = kzalloc(sizeof(struct ssi_crypto_alg), GFP_KERNEL);
t_alg = kzalloc(sizeof(*t_alg), GFP_KERNEL);
if (!t_alg) {
SSI_LOG_ERR("failed to allocate t_alg\n");
return ERR_PTR(-ENOMEM);
@ -2696,7 +2714,7 @@ int ssi_aead_alloc(struct ssi_drvdata *drvdata)
int rc = -ENOMEM;
int alg;
aead_handle = kmalloc(sizeof(struct ssi_aead_handle), GFP_KERNEL);
aead_handle = kmalloc(sizeof(*aead_handle), GFP_KERNEL);
if (!aead_handle) {
rc = -ENOMEM;
goto fail0;
@ -2720,14 +2738,14 @@ int ssi_aead_alloc(struct ssi_drvdata *drvdata)
if (IS_ERR(t_alg)) {
rc = PTR_ERR(t_alg);
SSI_LOG_ERR("%s alg allocation failed\n",
aead_algs[alg].driver_name);
aead_algs[alg].driver_name);
goto fail1;
}
t_alg->drvdata = drvdata;
rc = crypto_register_aead(&t_alg->aead_alg);
if (unlikely(rc != 0)) {
SSI_LOG_ERR("%s alg registration failed\n",
t_alg->aead_alg.base.cra_driver_name);
t_alg->aead_alg.base.cra_driver_name);
goto fail2;
} else {
list_add_tail(&t_alg->entry, &aead_handle->aead_list);


@ -69,8 +69,8 @@ struct aead_req_ctx {
u8 gcm_iv_inc2[AES_BLOCK_SIZE] ____cacheline_aligned;
u8 hkey[AES_BLOCK_SIZE] ____cacheline_aligned;
struct {
u8 lenA[GCM_BLOCK_LEN_SIZE] ____cacheline_aligned;
u8 lenC[GCM_BLOCK_LEN_SIZE];
u8 len_a[GCM_BLOCK_LEN_SIZE] ____cacheline_aligned;
u8 len_c[GCM_BLOCK_LEN_SIZE];
} gcm_len_block;
u8 ccm_config[CCM_CONFIG_BUF_SIZE] ____cacheline_aligned;
@ -94,10 +94,10 @@ struct aead_req_ctx {
struct ssi_mlli assoc;
struct ssi_mlli src;
struct ssi_mlli dst;
struct scatterlist *srcSgl;
struct scatterlist *dstSgl;
unsigned int srcOffset;
unsigned int dstOffset;
struct scatterlist *src_sgl;
struct scatterlist *dst_sgl;
unsigned int src_offset;
unsigned int dst_offset;
enum ssi_req_dma_buf_type assoc_buff_type;
enum ssi_req_dma_buf_type data_buff_type;
struct mlli_params mlli_params;


@ -150,7 +150,7 @@ static inline int ssi_buffer_mgr_render_buff_to_mlli(
u32 **mlli_entry_pp)
{
u32 *mlli_entry_p = *mlli_entry_pp;
u32 new_nents;;
u32 new_nents;
/* Verify there is no memory overflow*/
new_nents = (*curr_nents + buff_size / CC_MAX_MLLI_ENTRY_SIZE + 1);
@ -162,8 +162,8 @@ static inline int ssi_buffer_mgr_render_buff_to_mlli(
cc_lli_set_addr(mlli_entry_p, buff_dma);
cc_lli_set_size(mlli_entry_p, CC_MAX_MLLI_ENTRY_SIZE);
SSI_LOG_DEBUG("entry[%d]: single_buff=0x%08X size=%08X\n", *curr_nents,
mlli_entry_p[LLI_WORD0_OFFSET],
mlli_entry_p[LLI_WORD1_OFFSET]);
mlli_entry_p[LLI_WORD0_OFFSET],
mlli_entry_p[LLI_WORD1_OFFSET]);
buff_dma += CC_MAX_MLLI_ENTRY_SIZE;
buff_size -= CC_MAX_MLLI_ENTRY_SIZE;
mlli_entry_p = mlli_entry_p + 2;
@ -173,8 +173,8 @@ static inline int ssi_buffer_mgr_render_buff_to_mlli(
cc_lli_set_addr(mlli_entry_p, buff_dma);
cc_lli_set_size(mlli_entry_p, buff_size);
SSI_LOG_DEBUG("entry[%d]: single_buff=0x%08X size=%08X\n", *curr_nents,
mlli_entry_p[LLI_WORD0_OFFSET],
mlli_entry_p[LLI_WORD1_OFFSET]);
mlli_entry_p[LLI_WORD0_OFFSET],
mlli_entry_p[LLI_WORD1_OFFSET]);
mlli_entry_p = mlli_entry_p + 2;
*mlli_entry_pp = mlli_entry_p;
(*curr_nents)++;
@ -182,8 +182,8 @@ static inline int ssi_buffer_mgr_render_buff_to_mlli(
}
static inline int ssi_buffer_mgr_render_scatterlist_to_mlli(
struct scatterlist *sgl, u32 sgl_data_len, u32 sglOffset, u32 *curr_nents,
u32 **mlli_entry_pp)
struct scatterlist *sgl, u32 sgl_data_len, u32 sgl_offset,
u32 *curr_nents, u32 **mlli_entry_pp)
{
struct scatterlist *curr_sgl = sgl;
u32 *mlli_entry_p = *mlli_entry_pp;
@ -192,16 +192,17 @@ static inline int ssi_buffer_mgr_render_scatterlist_to_mlli(
for ( ; (curr_sgl) && (sgl_data_len != 0);
curr_sgl = sg_next(curr_sgl)) {
u32 entry_data_len =
(sgl_data_len > sg_dma_len(curr_sgl) - sglOffset) ?
sg_dma_len(curr_sgl) - sglOffset : sgl_data_len;
(sgl_data_len > sg_dma_len(curr_sgl) - sgl_offset) ?
sg_dma_len(curr_sgl) - sgl_offset :
sgl_data_len;
sgl_data_len -= entry_data_len;
rc = ssi_buffer_mgr_render_buff_to_mlli(
sg_dma_address(curr_sgl) + sglOffset, entry_data_len, curr_nents,
&mlli_entry_p);
sg_dma_address(curr_sgl) + sgl_offset, entry_data_len,
curr_nents, &mlli_entry_p);
if (rc != 0)
return rc;
sglOffset = 0;
sgl_offset = 0;
}
*mlli_entry_pp = mlli_entry_p;
return 0;
@ -221,7 +222,7 @@ static int ssi_buffer_mgr_generate_mlli(
/* Allocate memory from the pointed pool */
mlli_params->mlli_virt_addr = dma_pool_alloc(
mlli_params->curr_pool, GFP_KERNEL,
&(mlli_params->mlli_dma_addr));
&mlli_params->mlli_dma_addr);
if (unlikely(!mlli_params->mlli_virt_addr)) {
SSI_LOG_ERR("dma_pool_alloc() failed\n");
rc = -ENOMEM;
@ -249,7 +250,7 @@ static int ssi_buffer_mgr_generate_mlli(
/*Calculate the current MLLI table length for the
*length field in the descriptor
*/
*(sg_data->mlli_nents[i]) +=
*sg_data->mlli_nents[i] +=
(total_nents - prev_total_nents);
prev_total_nents = total_nents;
}
@ -259,9 +260,9 @@ static int ssi_buffer_mgr_generate_mlli(
mlli_params->mlli_len = (total_nents * LLI_ENTRY_BYTE_SIZE);
SSI_LOG_DEBUG("MLLI params: "
"virt_addr=%pK dma_addr=0x%llX mlli_len=0x%X\n",
"virt_addr=%pK dma_addr=%pad mlli_len=0x%X\n",
mlli_params->mlli_virt_addr,
(unsigned long long)mlli_params->mlli_dma_addr,
mlli_params->mlli_dma_addr,
mlli_params->mlli_len);
build_mlli_exit:
@ -275,9 +276,9 @@ static inline void ssi_buffer_mgr_add_buffer_entry(
{
unsigned int index = sgl_data->num_of_buffers;
SSI_LOG_DEBUG("index=%u single_buff=0x%llX "
SSI_LOG_DEBUG("index=%u single_buff=%pad "
"buffer_len=0x%08X is_last=%d\n",
index, (unsigned long long)buffer_dma, buffer_len, is_last_entry);
index, buffer_dma, buffer_len, is_last_entry);
sgl_data->nents[index] = 1;
sgl_data->entry[index].buffer_dma = buffer_dma;
sgl_data->offset[index] = 0;
@ -302,7 +303,7 @@ static inline void ssi_buffer_mgr_add_scatterlist_entry(
unsigned int index = sgl_data->num_of_buffers;
SSI_LOG_DEBUG("index=%u nents=%u sgl=%pK data_len=0x%08X is_last=%d\n",
index, nents, sgl, data_len, is_last_table);
index, nents, sgl, data_len, is_last_table);
sgl_data->nents[index] = nents;
sgl_data->entry[index].sgl = sgl;
sgl_data->offset[index] = data_offset;
@ -317,7 +318,7 @@ static inline void ssi_buffer_mgr_add_scatterlist_entry(
static int
ssi_buffer_mgr_dma_map_sg(struct device *dev, struct scatterlist *sg, u32 nents,
enum dma_data_direction direction)
enum dma_data_direction direction)
{
u32 i, j;
struct scatterlist *l_sg = sg;
@ -358,10 +359,10 @@ static int ssi_buffer_mgr_map_scatterlist(
SSI_LOG_ERR("dma_map_sg() single buffer failed\n");
return -ENOMEM;
}
SSI_LOG_DEBUG("Mapped sg: dma_address=0x%llX "
SSI_LOG_DEBUG("Mapped sg: dma_address=%pad "
"page=%p addr=%pK offset=%u "
"length=%u\n",
(unsigned long long)sg_dma_address(sg),
sg_dma_address(sg),
sg_page(sg),
sg_virt(sg),
sg->offset, sg->length);
@ -370,11 +371,11 @@ static int ssi_buffer_mgr_map_scatterlist(
*mapped_nents = 1;
} else { /*sg_is_last*/
*nents = ssi_buffer_mgr_get_sgl_nents(sg, nbytes, lbytes,
&is_chained);
&is_chained);
if (*nents > max_sg_nents) {
*nents = 0;
SSI_LOG_ERR("Too many fragments. current %d max %d\n",
*nents, max_sg_nents);
*nents, max_sg_nents);
return -ENOMEM;
}
if (!is_chained) {
@ -392,9 +393,9 @@ static int ssi_buffer_mgr_map_scatterlist(
* must have the same nents before and after map
*/
*mapped_nents = ssi_buffer_mgr_dma_map_sg(dev,
sg,
*nents,
direction);
sg,
*nents,
direction);
if (unlikely(*mapped_nents != *nents)) {
*nents = *mapped_nents;
SSI_LOG_ERR("dma_map_sg() sg buffer failed\n");
@ -408,10 +409,10 @@ static int ssi_buffer_mgr_map_scatterlist(
static inline int
ssi_aead_handle_config_buf(struct device *dev,
struct aead_req_ctx *areq_ctx,
u8 *config_data,
struct buffer_array *sg_data,
unsigned int assoclen)
struct aead_req_ctx *areq_ctx,
u8 *config_data,
struct buffer_array *sg_data,
unsigned int assoclen)
{
SSI_LOG_DEBUG(" handle additional data config set to DLLI\n");
/* create sg for the current buffer */
@ -422,10 +423,10 @@ ssi_aead_handle_config_buf(struct device *dev,
"config buffer failed\n");
return -ENOMEM;
}
SSI_LOG_DEBUG("Mapped curr_buff: dma_address=0x%llX "
SSI_LOG_DEBUG("Mapped curr_buff: dma_address=%pad "
"page=%p addr=%pK "
"offset=%u length=%u\n",
(unsigned long long)sg_dma_address(&areq_ctx->ccm_adata_sg),
sg_dma_address(&areq_ctx->ccm_adata_sg),
sg_page(&areq_ctx->ccm_adata_sg),
sg_virt(&areq_ctx->ccm_adata_sg),
areq_ctx->ccm_adata_sg.offset,
@ -433,19 +434,18 @@ ssi_aead_handle_config_buf(struct device *dev,
/* prepare for case of MLLI */
if (assoclen > 0) {
ssi_buffer_mgr_add_scatterlist_entry(sg_data, 1,
&areq_ctx->ccm_adata_sg,
(AES_BLOCK_SIZE +
areq_ctx->ccm_hdr_size), 0,
false, NULL);
&areq_ctx->ccm_adata_sg,
(AES_BLOCK_SIZE + areq_ctx->ccm_hdr_size),
0, false, NULL);
}
return 0;
}
static inline int ssi_ahash_handle_curr_buf(struct device *dev,
struct ahash_req_ctx *areq_ctx,
u8 *curr_buff,
u32 curr_buff_cnt,
struct buffer_array *sg_data)
struct ahash_req_ctx *areq_ctx,
u8 *curr_buff,
u32 curr_buff_cnt,
struct buffer_array *sg_data)
{
SSI_LOG_DEBUG(" handle curr buff %x set to DLLI\n", curr_buff_cnt);
/* create sg for the current buffer */
@ -456,10 +456,10 @@ static inline int ssi_ahash_handle_curr_buf(struct device *dev,
"src buffer failed\n");
return -ENOMEM;
}
SSI_LOG_DEBUG("Mapped curr_buff: dma_address=0x%llX "
SSI_LOG_DEBUG("Mapped curr_buff: dma_address=%pad "
"page=%p addr=%pK "
"offset=%u length=%u\n",
(unsigned long long)sg_dma_address(areq_ctx->buff_sg),
sg_dma_address(areq_ctx->buff_sg),
sg_page(areq_ctx->buff_sg),
sg_virt(areq_ctx->buff_sg),
areq_ctx->buff_sg->offset,
@ -469,7 +469,7 @@ static inline int ssi_ahash_handle_curr_buf(struct device *dev,
areq_ctx->in_nents = 0;
/* prepare for case of MLLI */
ssi_buffer_mgr_add_scatterlist_entry(sg_data, 1, areq_ctx->buff_sg,
curr_buff_cnt, 0, false, NULL);
curr_buff_cnt, 0, false, NULL);
return 0;
}
@ -483,9 +483,9 @@ void ssi_buffer_mgr_unmap_blkcipher_request(
struct blkcipher_req_ctx *req_ctx = (struct blkcipher_req_ctx *)ctx;
if (likely(req_ctx->gen_ctx.iv_dma_addr != 0)) {
SSI_LOG_DEBUG("Unmapped iv: iv_dma_addr=0x%llX iv_size=%u\n",
(unsigned long long)req_ctx->gen_ctx.iv_dma_addr,
ivsize);
SSI_LOG_DEBUG("Unmapped iv: iv_dma_addr=%pad iv_size=%u\n",
req_ctx->gen_ctx.iv_dma_addr,
ivsize);
dma_unmap_single(dev, req_ctx->gen_ctx.iv_dma_addr,
ivsize,
req_ctx->is_giv ? DMA_BIDIRECTIONAL :
@ -498,16 +498,12 @@ void ssi_buffer_mgr_unmap_blkcipher_request(
req_ctx->mlli_params.mlli_dma_addr);
}
dma_unmap_sg(dev, src, req_ctx->in_nents,
DMA_BIDIRECTIONAL);
SSI_LOG_DEBUG("Unmapped req->src=%pK\n",
sg_virt(src));
dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_BIDIRECTIONAL);
SSI_LOG_DEBUG("Unmapped req->src=%pK\n", sg_virt(src));
if (src != dst) {
dma_unmap_sg(dev, dst, req_ctx->out_nents,
DMA_BIDIRECTIONAL);
SSI_LOG_DEBUG("Unmapped req->dst=%pK\n",
sg_virt(dst));
dma_unmap_sg(dev, dst, req_ctx->out_nents, DMA_BIDIRECTIONAL);
SSI_LOG_DEBUG("Unmapped req->dst=%pK\n", sg_virt(dst));
}
}
@ -542,22 +538,24 @@ int ssi_buffer_mgr_map_blkcipher_request(
req_ctx->is_giv ? DMA_BIDIRECTIONAL :
DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(dev,
req_ctx->gen_ctx.iv_dma_addr))) {
req_ctx->gen_ctx.iv_dma_addr))) {
SSI_LOG_ERR("Mapping iv %u B at va=%pK "
"for DMA failed\n", ivsize, info);
return -ENOMEM;
}
SSI_LOG_DEBUG("Mapped iv %u B at va=%pK to dma=0x%llX\n",
ivsize, info,
(unsigned long long)req_ctx->gen_ctx.iv_dma_addr);
SSI_LOG_DEBUG("Mapped iv %u B at va=%pK to dma=%pad\n",
ivsize, info,
req_ctx->gen_ctx.iv_dma_addr);
} else {
req_ctx->gen_ctx.iv_dma_addr = 0;
}
/* Map the src SGL */
rc = ssi_buffer_mgr_map_scatterlist(dev, src,
nbytes, DMA_BIDIRECTIONAL, &req_ctx->in_nents,
LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy, &mapped_nents);
nbytes, DMA_BIDIRECTIONAL,
&req_ctx->in_nents,
LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy,
&mapped_nents);
if (unlikely(rc != 0)) {
rc = -ENOMEM;
goto ablkcipher_exit;
@ -570,8 +568,10 @@ int ssi_buffer_mgr_map_blkcipher_request(
if (unlikely(req_ctx->dma_buf_type == SSI_DMA_BUF_MLLI)) {
req_ctx->out_nents = 0;
ssi_buffer_mgr_add_scatterlist_entry(&sg_data,
req_ctx->in_nents, src,
nbytes, 0, true, &req_ctx->in_mlli_nents);
req_ctx->in_nents,
src, nbytes, 0,
true,
&req_ctx->in_mlli_nents);
}
} else {
/* Map the dst sg */
@ -588,13 +588,15 @@ int ssi_buffer_mgr_map_blkcipher_request(
if (unlikely((req_ctx->dma_buf_type == SSI_DMA_BUF_MLLI))) {
ssi_buffer_mgr_add_scatterlist_entry(&sg_data,
req_ctx->in_nents, src,
nbytes, 0, true,
&req_ctx->in_mlli_nents);
req_ctx->in_nents,
src, nbytes, 0,
true,
&req_ctx->in_mlli_nents);
ssi_buffer_mgr_add_scatterlist_entry(&sg_data,
req_ctx->out_nents, dst,
nbytes, 0, true,
&req_ctx->out_mlli_nents);
req_ctx->out_nents,
dst, nbytes, 0,
true,
&req_ctx->out_mlli_nents);
}
}
@ -606,7 +608,7 @@ int ssi_buffer_mgr_map_blkcipher_request(
}
SSI_LOG_DEBUG("areq_ctx->dma_buf_type = %s\n",
GET_DMA_BUFFER_TYPE(req_ctx->dma_buf_type));
GET_DMA_BUFFER_TYPE(req_ctx->dma_buf_type));
return 0;
@ -628,7 +630,7 @@ void ssi_buffer_mgr_unmap_aead_request(
if (areq_ctx->mac_buf_dma_addr != 0) {
dma_unmap_single(dev, areq_ctx->mac_buf_dma_addr,
MAX_MAC_SIZE, DMA_BIDIRECTIONAL);
MAX_MAC_SIZE, DMA_BIDIRECTIONAL);
}
#if SSI_CC_HAS_AES_GCM
@ -645,12 +647,12 @@ void ssi_buffer_mgr_unmap_aead_request(
if (areq_ctx->gcm_iv_inc1_dma_addr != 0) {
dma_unmap_single(dev, areq_ctx->gcm_iv_inc1_dma_addr,
AES_BLOCK_SIZE, DMA_TO_DEVICE);
AES_BLOCK_SIZE, DMA_TO_DEVICE);
}
if (areq_ctx->gcm_iv_inc2_dma_addr != 0) {
dma_unmap_single(dev, areq_ctx->gcm_iv_inc2_dma_addr,
AES_BLOCK_SIZE, DMA_TO_DEVICE);
AES_BLOCK_SIZE, DMA_TO_DEVICE);
}
}
#endif
@ -658,7 +660,7 @@ void ssi_buffer_mgr_unmap_aead_request(
if (areq_ctx->ccm_hdr_size != ccm_header_size_null) {
if (areq_ctx->ccm_iv0_dma_addr != 0) {
dma_unmap_single(dev, areq_ctx->ccm_iv0_dma_addr,
AES_BLOCK_SIZE, DMA_TO_DEVICE);
AES_BLOCK_SIZE, DMA_TO_DEVICE);
}
dma_unmap_sg(dev, &areq_ctx->ccm_adata_sg, 1, DMA_TO_DEVICE);
@ -672,9 +674,9 @@ void ssi_buffer_mgr_unmap_aead_request(
*allocated and should be released
*/
if (areq_ctx->mlli_params.curr_pool) {
SSI_LOG_DEBUG("free MLLI buffer: dma=0x%08llX virt=%pK\n",
(unsigned long long)areq_ctx->mlli_params.mlli_dma_addr,
areq_ctx->mlli_params.mlli_virt_addr);
SSI_LOG_DEBUG("free MLLI buffer: dma=%pad virt=%pK\n",
areq_ctx->mlli_params.mlli_dma_addr,
areq_ctx->mlli_params.mlli_virt_addr);
dma_pool_free(areq_ctx->mlli_params.curr_pool,
areq_ctx->mlli_params.mlli_virt_addr,
areq_ctx->mlli_params.mlli_dma_addr);
@@ -690,14 +692,17 @@ void ssi_buffer_mgr_unmap_aead_request(
dma_unmap_sg(dev, req->src, ssi_buffer_mgr_get_sgl_nents(req->src, size_to_unmap, &dummy, &chained), DMA_BIDIRECTIONAL);
if (unlikely(req->src != req->dst)) {
SSI_LOG_DEBUG("Unmapping dst sgl: req->dst=%pK\n",
sg_virt(req->dst));
dma_unmap_sg(dev, req->dst, ssi_buffer_mgr_get_sgl_nents(req->dst, size_to_unmap, &dummy, &chained),
DMA_BIDIRECTIONAL);
sg_virt(req->dst));
dma_unmap_sg(dev, req->dst,
ssi_buffer_mgr_get_sgl_nents(req->dst,
size_to_unmap,
&dummy,
&chained),
DMA_BIDIRECTIONAL);
}
if (drvdata->coherent &&
(areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) &&
likely(req->src == req->dst))
{
likely(req->src == req->dst)) {
u32 size_to_skip = req->assoclen;
if (areq_ctx->is_gcm4543)
@@ -753,11 +758,11 @@ static inline int ssi_buffer_mgr_get_aead_icv_nents(
*is_icv_fragmented = true;
} else {
SSI_LOG_ERR("Unsupported num. of ICV fragments (> %d)\n",
MAX_ICV_NENTS_SUPPORTED);
MAX_ICV_NENTS_SUPPORTED);
nents = -1; /*unsupported*/
}
SSI_LOG_DEBUG("is_frag=%s icv_nents=%u\n",
(*is_icv_fragmented ? "true" : "false"), nents);
(*is_icv_fragmented ? "true" : "false"), nents);
return nents;
}
@@ -778,18 +783,18 @@ static inline int ssi_buffer_mgr_aead_chain_iv(
goto chain_iv_exit;
}
areq_ctx->gen_ctx.iv_dma_addr = dma_map_single(dev, req->iv,
hw_iv_size, DMA_BIDIRECTIONAL);
areq_ctx->gen_ctx.iv_dma_addr = dma_map_single(dev, req->iv, hw_iv_size,
DMA_BIDIRECTIONAL);
if (unlikely(dma_mapping_error(dev, areq_ctx->gen_ctx.iv_dma_addr))) {
SSI_LOG_ERR("Mapping iv %u B at va=%pK for DMA failed\n",
hw_iv_size, req->iv);
hw_iv_size, req->iv);
rc = -ENOMEM;
goto chain_iv_exit;
}
SSI_LOG_DEBUG("Mapped iv %u B at va=%pK to dma=0x%llX\n",
hw_iv_size, req->iv,
(unsigned long long)areq_ctx->gen_ctx.iv_dma_addr);
SSI_LOG_DEBUG("Mapped iv %u B at va=%pK to dma=%pad\n",
hw_iv_size, req->iv,
areq_ctx->gen_ctx.iv_dma_addr);
if (do_chain && areq_ctx->plaintext_authenticate_only) { // TODO: what about CTR?? ask Ron
struct crypto_aead *tfm = crypto_aead_reqtfm(req);
unsigned int iv_size_to_authenc = crypto_aead_ivsize(tfm);
@@ -833,8 +838,8 @@ static inline int ssi_buffer_mgr_aead_chain_assoc(
areq_ctx->assoc.nents = 0;
areq_ctx->assoc.mlli_nents = 0;
SSI_LOG_DEBUG("Chain assoc of length 0: buff_type=%s nents=%u\n",
GET_DMA_BUFFER_TYPE(areq_ctx->assoc_buff_type),
areq_ctx->assoc.nents);
GET_DMA_BUFFER_TYPE(areq_ctx->assoc_buff_type),
areq_ctx->assoc.nents);
goto chain_assoc_exit;
}
@@ -868,10 +873,9 @@ static inline int ssi_buffer_mgr_aead_chain_assoc(
if (areq_ctx->ccm_hdr_size != ccm_header_size_null) {
if (unlikely((mapped_nents + 1) >
LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES)) {
SSI_LOG_ERR("CCM case.Too many fragments. "
"Current %d max %d\n",
(areq_ctx->assoc.nents + 1),
LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES);
SSI_LOG_ERR("CCM case.Too many fragments. Current %d max %d\n",
(areq_ctx->assoc.nents + 1),
LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES);
rc = -ENOMEM;
goto chain_assoc_exit;
}
@@ -884,10 +888,10 @@ static inline int ssi_buffer_mgr_aead_chain_assoc(
areq_ctx->assoc_buff_type = SSI_DMA_BUF_MLLI;
if (unlikely((do_chain) ||
(areq_ctx->assoc_buff_type == SSI_DMA_BUF_MLLI))) {
(areq_ctx->assoc_buff_type == SSI_DMA_BUF_MLLI))) {
SSI_LOG_DEBUG("Chain assoc: buff_type=%s nents=%u\n",
GET_DMA_BUFFER_TYPE(areq_ctx->assoc_buff_type),
areq_ctx->assoc.nents);
GET_DMA_BUFFER_TYPE(areq_ctx->assoc_buff_type),
areq_ctx->assoc.nents);
ssi_buffer_mgr_add_scatterlist_entry(
sg_data, areq_ctx->assoc.nents,
req->src, req->assoclen, 0, is_last,
@@ -911,26 +915,26 @@ static inline void ssi_buffer_mgr_prepare_aead_data_dlli(
if (likely(req->src == req->dst)) {
/*INPLACE*/
areq_ctx->icv_dma_addr = sg_dma_address(
areq_ctx->srcSgl) +
areq_ctx->src_sgl) +
(*src_last_bytes - authsize);
areq_ctx->icv_virt_addr = sg_virt(
areq_ctx->srcSgl) +
areq_ctx->src_sgl) +
(*src_last_bytes - authsize);
} else if (direct == DRV_CRYPTO_DIRECTION_DECRYPT) {
/*NON-INPLACE and DECRYPT*/
areq_ctx->icv_dma_addr = sg_dma_address(
areq_ctx->srcSgl) +
areq_ctx->src_sgl) +
(*src_last_bytes - authsize);
areq_ctx->icv_virt_addr = sg_virt(
areq_ctx->srcSgl) +
areq_ctx->src_sgl) +
(*src_last_bytes - authsize);
} else {
/*NON-INPLACE and ENCRYPT*/
areq_ctx->icv_dma_addr = sg_dma_address(
areq_ctx->dstSgl) +
areq_ctx->dst_sgl) +
(*dst_last_bytes - authsize);
areq_ctx->icv_virt_addr = sg_virt(
areq_ctx->dstSgl) +
areq_ctx->dst_sgl) +
(*dst_last_bytes - authsize);
}
}
@@ -951,13 +955,18 @@ static inline int ssi_buffer_mgr_prepare_aead_data_mlli(
if (likely(req->src == req->dst)) {
/*INPLACE*/
ssi_buffer_mgr_add_scatterlist_entry(sg_data,
areq_ctx->src.nents, areq_ctx->srcSgl,
areq_ctx->cryptlen, areq_ctx->srcOffset, is_last_table,
&areq_ctx->src.mlli_nents);
areq_ctx->src.nents,
areq_ctx->src_sgl,
areq_ctx->cryptlen,
areq_ctx->src_offset,
is_last_table,
&areq_ctx->src.mlli_nents);
icv_nents = ssi_buffer_mgr_get_aead_icv_nents(areq_ctx->srcSgl,
areq_ctx->src.nents, authsize, *src_last_bytes,
&areq_ctx->is_icv_fragmented);
icv_nents = ssi_buffer_mgr_get_aead_icv_nents(areq_ctx->src_sgl,
areq_ctx->src.nents,
authsize,
*src_last_bytes,
&areq_ctx->is_icv_fragmented);
if (unlikely(icv_nents < 0)) {
rc = -ENOTSUPP;
goto prepare_data_mlli_exit;
@@ -995,27 +1004,35 @@ static inline int ssi_buffer_mgr_prepare_aead_data_mlli(
} else { /* Contig. ICV */
/*Should hanlde if the sg is not contig.*/
areq_ctx->icv_dma_addr = sg_dma_address(
&areq_ctx->srcSgl[areq_ctx->src.nents - 1]) +
&areq_ctx->src_sgl[areq_ctx->src.nents - 1]) +
(*src_last_bytes - authsize);
areq_ctx->icv_virt_addr = sg_virt(
&areq_ctx->srcSgl[areq_ctx->src.nents - 1]) +
&areq_ctx->src_sgl[areq_ctx->src.nents - 1]) +
(*src_last_bytes - authsize);
}
} else if (direct == DRV_CRYPTO_DIRECTION_DECRYPT) {
/*NON-INPLACE and DECRYPT*/
ssi_buffer_mgr_add_scatterlist_entry(sg_data,
areq_ctx->src.nents, areq_ctx->srcSgl,
areq_ctx->cryptlen, areq_ctx->srcOffset, is_last_table,
&areq_ctx->src.mlli_nents);
areq_ctx->src.nents,
areq_ctx->src_sgl,
areq_ctx->cryptlen,
areq_ctx->src_offset,
is_last_table,
&areq_ctx->src.mlli_nents);
ssi_buffer_mgr_add_scatterlist_entry(sg_data,
areq_ctx->dst.nents, areq_ctx->dstSgl,
areq_ctx->cryptlen, areq_ctx->dstOffset, is_last_table,
&areq_ctx->dst.mlli_nents);
areq_ctx->dst.nents,
areq_ctx->dst_sgl,
areq_ctx->cryptlen,
areq_ctx->dst_offset,
is_last_table,
&areq_ctx->dst.mlli_nents);
icv_nents = ssi_buffer_mgr_get_aead_icv_nents(areq_ctx->srcSgl,
areq_ctx->src.nents, authsize, *src_last_bytes,
&areq_ctx->is_icv_fragmented);
icv_nents = ssi_buffer_mgr_get_aead_icv_nents(areq_ctx->src_sgl,
areq_ctx->src.nents,
authsize,
*src_last_bytes,
&areq_ctx->is_icv_fragmented);
if (unlikely(icv_nents < 0)) {
rc = -ENOTSUPP;
goto prepare_data_mlli_exit;
@@ -1039,26 +1056,34 @@ static inline int ssi_buffer_mgr_prepare_aead_data_mlli(
} else { /* Contig. ICV */
/*Should hanlde if the sg is not contig.*/
areq_ctx->icv_dma_addr = sg_dma_address(
&areq_ctx->srcSgl[areq_ctx->src.nents - 1]) +
&areq_ctx->src_sgl[areq_ctx->src.nents - 1]) +
(*src_last_bytes - authsize);
areq_ctx->icv_virt_addr = sg_virt(
&areq_ctx->srcSgl[areq_ctx->src.nents - 1]) +
&areq_ctx->src_sgl[areq_ctx->src.nents - 1]) +
(*src_last_bytes - authsize);
}
} else {
/*NON-INPLACE and ENCRYPT*/
ssi_buffer_mgr_add_scatterlist_entry(sg_data,
areq_ctx->dst.nents, areq_ctx->dstSgl,
areq_ctx->cryptlen, areq_ctx->dstOffset, is_last_table,
&areq_ctx->dst.mlli_nents);
areq_ctx->dst.nents,
areq_ctx->dst_sgl,
areq_ctx->cryptlen,
areq_ctx->dst_offset,
is_last_table,
&areq_ctx->dst.mlli_nents);
ssi_buffer_mgr_add_scatterlist_entry(sg_data,
areq_ctx->src.nents, areq_ctx->srcSgl,
areq_ctx->cryptlen, areq_ctx->srcOffset, is_last_table,
&areq_ctx->src.mlli_nents);
areq_ctx->src.nents,
areq_ctx->src_sgl,
areq_ctx->cryptlen,
areq_ctx->src_offset,
is_last_table,
&areq_ctx->src.mlli_nents);
icv_nents = ssi_buffer_mgr_get_aead_icv_nents(areq_ctx->dstSgl,
areq_ctx->dst.nents, authsize, *dst_last_bytes,
icv_nents = ssi_buffer_mgr_get_aead_icv_nents(areq_ctx->dst_sgl,
areq_ctx->dst.nents,
authsize,
*dst_last_bytes,
&areq_ctx->is_icv_fragmented);
if (unlikely(icv_nents < 0)) {
rc = -ENOTSUPP;
@@ -1068,10 +1093,10 @@ static inline int ssi_buffer_mgr_prepare_aead_data_mlli(
if (likely(!areq_ctx->is_icv_fragmented)) {
/* Contig. ICV */
areq_ctx->icv_dma_addr = sg_dma_address(
&areq_ctx->dstSgl[areq_ctx->dst.nents - 1]) +
&areq_ctx->dst_sgl[areq_ctx->dst.nents - 1]) +
(*dst_last_bytes - authsize);
areq_ctx->icv_virt_addr = sg_virt(
&areq_ctx->dstSgl[areq_ctx->dst.nents - 1]) +
&areq_ctx->dst_sgl[areq_ctx->dst.nents - 1]) +
(*dst_last_bytes - authsize);
} else {
areq_ctx->icv_dma_addr = areq_ctx->mac_buf_dma_addr;
@@ -1113,37 +1138,36 @@ static inline int ssi_buffer_mgr_aead_chain_data(
rc = -EINVAL;
goto chain_data_exit;
}
areq_ctx->srcSgl = req->src;
areq_ctx->dstSgl = req->dst;
areq_ctx->src_sgl = req->src;
areq_ctx->dst_sgl = req->dst;
if (is_gcm4543)
size_for_map += crypto_aead_ivsize(tfm);
size_for_map += (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ? authsize : 0;
src_mapped_nents = ssi_buffer_mgr_get_sgl_nents(req->src, size_for_map, &src_last_bytes, &chained);
sg_index = areq_ctx->srcSgl->length;
sg_index = areq_ctx->src_sgl->length;
//check where the data starts
while (sg_index <= size_to_skip) {
offset -= areq_ctx->srcSgl->length;
areq_ctx->srcSgl = sg_next(areq_ctx->srcSgl);
offset -= areq_ctx->src_sgl->length;
areq_ctx->src_sgl = sg_next(areq_ctx->src_sgl);
//if have reached the end of the sgl, then this is unexpected
if (!areq_ctx->srcSgl) {
if (!areq_ctx->src_sgl) {
SSI_LOG_ERR("reached end of sg list. unexpected\n");
BUG();
}
sg_index += areq_ctx->srcSgl->length;
sg_index += areq_ctx->src_sgl->length;
src_mapped_nents--;
}
if (unlikely(src_mapped_nents > LLI_MAX_NUM_OF_DATA_ENTRIES))
{
if (unlikely(src_mapped_nents > LLI_MAX_NUM_OF_DATA_ENTRIES)) {
SSI_LOG_ERR("Too many fragments. current %d max %d\n",
src_mapped_nents, LLI_MAX_NUM_OF_DATA_ENTRIES);
src_mapped_nents, LLI_MAX_NUM_OF_DATA_ENTRIES);
return -ENOMEM;
}
areq_ctx->src.nents = src_mapped_nents;
areq_ctx->srcOffset = offset;
areq_ctx->src_offset = offset;
if (req->src != req->dst) {
size_for_map = req->assoclen + req->cryptlen;
@@ -1152,9 +1176,11 @@ static inline int ssi_buffer_mgr_aead_chain_data(
size_for_map += crypto_aead_ivsize(tfm);
rc = ssi_buffer_mgr_map_scatterlist(dev, req->dst, size_for_map,
DMA_BIDIRECTIONAL, &(areq_ctx->dst.nents),
LLI_MAX_NUM_OF_DATA_ENTRIES, &dst_last_bytes,
&dst_mapped_nents);
DMA_BIDIRECTIONAL,
&areq_ctx->dst.nents,
LLI_MAX_NUM_OF_DATA_ENTRIES,
&dst_last_bytes,
&dst_mapped_nents);
if (unlikely(rc != 0)) {
rc = -ENOMEM;
goto chain_data_exit;
@@ -1162,35 +1188,37 @@ static inline int ssi_buffer_mgr_aead_chain_data(
}
dst_mapped_nents = ssi_buffer_mgr_get_sgl_nents(req->dst, size_for_map, &dst_last_bytes, &chained);
sg_index = areq_ctx->dstSgl->length;
sg_index = areq_ctx->dst_sgl->length;
offset = size_to_skip;
//check where the data starts
while (sg_index <= size_to_skip) {
offset -= areq_ctx->dstSgl->length;
areq_ctx->dstSgl = sg_next(areq_ctx->dstSgl);
offset -= areq_ctx->dst_sgl->length;
areq_ctx->dst_sgl = sg_next(areq_ctx->dst_sgl);
//if have reached the end of the sgl, then this is unexpected
if (!areq_ctx->dstSgl) {
if (!areq_ctx->dst_sgl) {
SSI_LOG_ERR("reached end of sg list. unexpected\n");
BUG();
}
sg_index += areq_ctx->dstSgl->length;
sg_index += areq_ctx->dst_sgl->length;
dst_mapped_nents--;
}
if (unlikely(dst_mapped_nents > LLI_MAX_NUM_OF_DATA_ENTRIES))
{
if (unlikely(dst_mapped_nents > LLI_MAX_NUM_OF_DATA_ENTRIES)) {
SSI_LOG_ERR("Too many fragments. current %d max %d\n",
dst_mapped_nents, LLI_MAX_NUM_OF_DATA_ENTRIES);
return -ENOMEM;
}
areq_ctx->dst.nents = dst_mapped_nents;
areq_ctx->dstOffset = offset;
areq_ctx->dst_offset = offset;
if ((src_mapped_nents > 1) ||
(dst_mapped_nents > 1) ||
do_chain) {
areq_ctx->data_buff_type = SSI_DMA_BUF_MLLI;
rc = ssi_buffer_mgr_prepare_aead_data_mlli(drvdata, req, sg_data,
&src_last_bytes, &dst_last_bytes, is_last_table);
rc = ssi_buffer_mgr_prepare_aead_data_mlli(drvdata, req,
sg_data,
&src_last_bytes,
&dst_last_bytes,
is_last_table);
} else {
areq_ctx->data_buff_type = SSI_DMA_BUF_DLLI;
ssi_buffer_mgr_prepare_aead_data_dlli(
@@ -1202,7 +1230,7 @@ chain_data_exit:
}
static void ssi_buffer_mgr_update_aead_mlli_nents(struct ssi_drvdata *drvdata,
struct aead_request *req)
struct aead_request *req)
{
struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
u32 curr_mlli_size = 0;
@@ -1274,8 +1302,7 @@ int ssi_buffer_mgr_map_aead_request(
if (drvdata->coherent &&
(areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) &&
likely(req->src == req->dst))
{
likely(req->src == req->dst)) {
u32 size_to_skip = req->assoclen;
if (is_gcm4543)
@@ -1296,19 +1323,21 @@ int ssi_buffer_mgr_map_aead_request(
req->cryptlen :
(req->cryptlen - authsize);
areq_ctx->mac_buf_dma_addr = dma_map_single(dev,
areq_ctx->mac_buf, MAX_MAC_SIZE, DMA_BIDIRECTIONAL);
areq_ctx->mac_buf_dma_addr = dma_map_single(dev, areq_ctx->mac_buf,
MAX_MAC_SIZE,
DMA_BIDIRECTIONAL);
if (unlikely(dma_mapping_error(dev, areq_ctx->mac_buf_dma_addr))) {
SSI_LOG_ERR("Mapping mac_buf %u B at va=%pK for DMA failed\n",
MAX_MAC_SIZE, areq_ctx->mac_buf);
MAX_MAC_SIZE, areq_ctx->mac_buf);
rc = -ENOMEM;
goto aead_map_failure;
}
if (areq_ctx->ccm_hdr_size != ccm_header_size_null) {
areq_ctx->ccm_iv0_dma_addr = dma_map_single(dev,
(areq_ctx->ccm_config + CCM_CTR_COUNT_0_OFFSET),
AES_BLOCK_SIZE, DMA_TO_DEVICE);
(areq_ctx->ccm_config + CCM_CTR_COUNT_0_OFFSET),
AES_BLOCK_SIZE,
DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(dev, areq_ctx->ccm_iv0_dma_addr))) {
SSI_LOG_ERR("Mapping mac_buf %u B at va=%pK "
@@ -1319,7 +1348,8 @@ int ssi_buffer_mgr_map_aead_request(
goto aead_map_failure;
}
if (ssi_aead_handle_config_buf(dev, areq_ctx,
areq_ctx->ccm_config, &sg_data, req->assoclen) != 0) {
areq_ctx->ccm_config, &sg_data,
req->assoclen) != 0) {
rc = -ENOMEM;
goto aead_map_failure;
}
@@ -1328,26 +1358,31 @@ int ssi_buffer_mgr_map_aead_request(
#if SSI_CC_HAS_AES_GCM
if (areq_ctx->cipher_mode == DRV_CIPHER_GCTR) {
areq_ctx->hkey_dma_addr = dma_map_single(dev,
areq_ctx->hkey, AES_BLOCK_SIZE, DMA_BIDIRECTIONAL);
areq_ctx->hkey,
AES_BLOCK_SIZE,
DMA_BIDIRECTIONAL);
if (unlikely(dma_mapping_error(dev, areq_ctx->hkey_dma_addr))) {
SSI_LOG_ERR("Mapping hkey %u B at va=%pK for DMA failed\n",
AES_BLOCK_SIZE, areq_ctx->hkey);
AES_BLOCK_SIZE, areq_ctx->hkey);
rc = -ENOMEM;
goto aead_map_failure;
}
areq_ctx->gcm_block_len_dma_addr = dma_map_single(dev,
&areq_ctx->gcm_len_block, AES_BLOCK_SIZE, DMA_TO_DEVICE);
&areq_ctx->gcm_len_block,
AES_BLOCK_SIZE,
DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(dev, areq_ctx->gcm_block_len_dma_addr))) {
SSI_LOG_ERR("Mapping gcm_len_block %u B at va=%pK for DMA failed\n",
AES_BLOCK_SIZE, &areq_ctx->gcm_len_block);
AES_BLOCK_SIZE, &areq_ctx->gcm_len_block);
rc = -ENOMEM;
goto aead_map_failure;
}
areq_ctx->gcm_iv_inc1_dma_addr = dma_map_single(dev,
areq_ctx->gcm_iv_inc1,
AES_BLOCK_SIZE, DMA_TO_DEVICE);
areq_ctx->gcm_iv_inc1,
AES_BLOCK_SIZE,
DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(dev, areq_ctx->gcm_iv_inc1_dma_addr))) {
SSI_LOG_ERR("Mapping gcm_iv_inc1 %u B at va=%pK "
@@ -1359,8 +1394,9 @@ int ssi_buffer_mgr_map_aead_request(
}
areq_ctx->gcm_iv_inc2_dma_addr = dma_map_single(dev,
areq_ctx->gcm_iv_inc2,
AES_BLOCK_SIZE, DMA_TO_DEVICE);
areq_ctx->gcm_iv_inc2,
AES_BLOCK_SIZE,
DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(dev, areq_ctx->gcm_iv_inc2_dma_addr))) {
SSI_LOG_ERR("Mapping gcm_iv_inc2 %u B at va=%pK "
@@ -1380,7 +1416,7 @@ int ssi_buffer_mgr_map_aead_request(
if (is_gcm4543)
size_to_map += crypto_aead_ivsize(tfm);
rc = ssi_buffer_mgr_map_scatterlist(dev, req->src,
size_to_map, DMA_BIDIRECTIONAL, &(areq_ctx->src.nents),
size_to_map, DMA_BIDIRECTIONAL, &areq_ctx->src.nents,
LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES + LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy, &mapped_nents);
if (unlikely(rc != 0)) {
rc = -ENOMEM;
@@ -1491,18 +1527,18 @@ int ssi_buffer_mgr_map_hash_request_final(
/* map the previous buffer */
if (*curr_buff_cnt != 0) {
if (ssi_ahash_handle_curr_buf(dev, areq_ctx, curr_buff,
*curr_buff_cnt, &sg_data) != 0) {
*curr_buff_cnt, &sg_data) != 0) {
return -ENOMEM;
}
}
if (src && (nbytes > 0) && do_update) {
if (unlikely(ssi_buffer_mgr_map_scatterlist(dev, src,
nbytes,
DMA_TO_DEVICE,
&areq_ctx->in_nents,
LLI_MAX_NUM_OF_DATA_ENTRIES,
&dummy, &mapped_nents))){
if (unlikely(ssi_buffer_mgr_map_scatterlist(dev, src, nbytes,
DMA_TO_DEVICE,
&areq_ctx->in_nents,
LLI_MAX_NUM_OF_DATA_ENTRIES,
&dummy,
&mapped_nents))){
goto unmap_curr_buff;
}
if (src && (mapped_nents == 1)
@@ -1522,19 +1558,18 @@ int ssi_buffer_mgr_map_hash_request_final(
mlli_params->curr_pool = buff_mgr->mlli_buffs_pool;
/* add the src data to the sg_data */
ssi_buffer_mgr_add_scatterlist_entry(&sg_data,
areq_ctx->in_nents,
src,
nbytes, 0,
true, &areq_ctx->mlli_nents);
areq_ctx->in_nents,
src, nbytes, 0, true,
&areq_ctx->mlli_nents);
if (unlikely(ssi_buffer_mgr_generate_mlli(dev, &sg_data,
mlli_params) != 0)) {
mlli_params) != 0)) {
goto fail_unmap_din;
}
}
/* change the buffer index for the unmap function */
areq_ctx->buff_index = (areq_ctx->buff_index ^ 1);
SSI_LOG_DEBUG("areq_ctx->data_dma_buf_type = %s\n",
GET_DMA_BUFFER_TYPE(areq_ctx->data_dma_buf_type));
GET_DMA_BUFFER_TYPE(areq_ctx->data_dma_buf_type));
return 0;
fail_unmap_din:
@@ -1588,8 +1623,8 @@ int ssi_buffer_mgr_map_hash_request_update(
&curr_buff[*curr_buff_cnt]);
areq_ctx->in_nents =
ssi_buffer_mgr_get_sgl_nents(src,
nbytes,
&dummy, NULL);
nbytes,
&dummy, NULL);
sg_copy_to_buffer(src, areq_ctx->in_nents,
&curr_buff[*curr_buff_cnt], nbytes);
*curr_buff_cnt += nbytes;
@@ -1612,15 +1647,15 @@ int ssi_buffer_mgr_map_hash_request_update(
(update_data_len - *curr_buff_cnt),
*next_buff_cnt);
ssi_buffer_mgr_copy_scatterlist_portion(next_buff, src,
(update_data_len - *curr_buff_cnt),
nbytes, SSI_SG_TO_BUF);
(update_data_len - *curr_buff_cnt),
nbytes, SSI_SG_TO_BUF);
/* change the buffer index for next operation */
swap_index = 1;
}
if (*curr_buff_cnt != 0) {
if (ssi_ahash_handle_curr_buf(dev, areq_ctx, curr_buff,
*curr_buff_cnt, &sg_data) != 0) {
*curr_buff_cnt, &sg_data) != 0) {
return -ENOMEM;
}
/* change the buffer index for next operation */
@@ -1629,11 +1664,12 @@ int ssi_buffer_mgr_map_hash_request_update(
if (update_data_len > *curr_buff_cnt) {
if (unlikely(ssi_buffer_mgr_map_scatterlist(dev, src,
(update_data_len - *curr_buff_cnt),
DMA_TO_DEVICE,
&areq_ctx->in_nents,
LLI_MAX_NUM_OF_DATA_ENTRIES,
&dummy, &mapped_nents))){
(update_data_len - *curr_buff_cnt),
DMA_TO_DEVICE,
&areq_ctx->in_nents,
LLI_MAX_NUM_OF_DATA_ENTRIES,
&dummy,
&mapped_nents))){
goto unmap_curr_buff;
}
if ((mapped_nents == 1)
@@ -1653,12 +1689,14 @@ int ssi_buffer_mgr_map_hash_request_update(
mlli_params->curr_pool = buff_mgr->mlli_buffs_pool;
/* add the src data to the sg_data */
ssi_buffer_mgr_add_scatterlist_entry(&sg_data,
areq_ctx->in_nents,
src,
(update_data_len - *curr_buff_cnt), 0,
true, &areq_ctx->mlli_nents);
areq_ctx->in_nents,
src,
(update_data_len - *curr_buff_cnt),
0,
true,
&areq_ctx->mlli_nents);
if (unlikely(ssi_buffer_mgr_generate_mlli(dev, &sg_data,
mlli_params) != 0)) {
mlli_params) != 0)) {
goto fail_unmap_din;
}
}
@@ -1687,28 +1725,28 @@ void ssi_buffer_mgr_unmap_hash_request(
*allocated and should be released
*/
if (areq_ctx->mlli_params.curr_pool) {
SSI_LOG_DEBUG("free MLLI buffer: dma=0x%llX virt=%pK\n",
(unsigned long long)areq_ctx->mlli_params.mlli_dma_addr,
areq_ctx->mlli_params.mlli_virt_addr);
SSI_LOG_DEBUG("free MLLI buffer: dma=%pad virt=%pK\n",
areq_ctx->mlli_params.mlli_dma_addr,
areq_ctx->mlli_params.mlli_virt_addr);
dma_pool_free(areq_ctx->mlli_params.curr_pool,
areq_ctx->mlli_params.mlli_virt_addr,
areq_ctx->mlli_params.mlli_dma_addr);
}
if ((src) && likely(areq_ctx->in_nents != 0)) {
SSI_LOG_DEBUG("Unmapped sg src: virt=%pK dma=0x%llX len=0x%X\n",
sg_virt(src),
(unsigned long long)sg_dma_address(src),
sg_dma_len(src));
SSI_LOG_DEBUG("Unmapped sg src: virt=%pK dma=%pad len=0x%X\n",
sg_virt(src),
sg_dma_address(src),
sg_dma_len(src));
dma_unmap_sg(dev, src,
areq_ctx->in_nents, DMA_TO_DEVICE);
}
if (*prev_len != 0) {
SSI_LOG_DEBUG("Unmapped buffer: areq_ctx->buff_sg=%pK"
" dma=0x%llX len 0x%X\n",
" dma=%pad len 0x%X\n",
sg_virt(areq_ctx->buff_sg),
(unsigned long long)sg_dma_address(areq_ctx->buff_sg),
sg_dma_address(areq_ctx->buff_sg),
sg_dma_len(areq_ctx->buff_sg));
dma_unmap_sg(dev, areq_ctx->buff_sg, 1, DMA_TO_DEVICE);
if (!do_revert) {
@@ -1725,8 +1763,7 @@ int ssi_buffer_mgr_init(struct ssi_drvdata *drvdata)
struct buff_mgr_handle *buff_mgr_handle;
struct device *dev = &drvdata->plat_dev->dev;
buff_mgr_handle = (struct buff_mgr_handle *)
kmalloc(sizeof(struct buff_mgr_handle), GFP_KERNEL);
buff_mgr_handle = kmalloc(sizeof(*buff_mgr_handle), GFP_KERNEL);
if (!buff_mgr_handle)
return -ENOMEM;

@@ -23,6 +23,8 @@
#include <crypto/aes.h>
#include <crypto/ctr.h>
#include <crypto/des.h>
#include <crypto/xts.h>
#include <crypto/scatterwalk.h>
#include "ssi_config.h"
#include "ssi_driver.h"
@@ -31,7 +33,6 @@
#include "ssi_cipher.h"
#include "ssi_request_mgr.h"
#include "ssi_sysfs.h"
#include "ssi_fips_local.h"
#define MAX_ABLKCIPHER_SEQ_LEN 6
@@ -68,7 +69,8 @@ struct ssi_ablkcipher_ctx {
static void ssi_ablkcipher_complete(struct device *dev, void *ssi_req, void __iomem *cc_base);
static int validate_keys_sizes(struct ssi_ablkcipher_ctx *ctx_p, u32 size) {
static int validate_keys_sizes(struct ssi_ablkcipher_ctx *ctx_p, u32 size)
{
switch (ctx_p->flow_mode) {
case S_DIN_to_AES:
switch (size) {
@@ -92,8 +94,7 @@ static int validate_keys_sizes(struct ssi_ablkcipher_ctx *ctx_p, u32 size) {
break;
}
case S_DIN_to_DES:
if (likely(size == DES3_EDE_KEY_SIZE ||
size == DES_KEY_SIZE))
if (likely(size == DES3_EDE_KEY_SIZE || size == DES_KEY_SIZE))
return 0;
break;
#if SSI_CC_HAS_MULTI2
@@ -108,7 +109,8 @@ static int validate_keys_sizes(struct ssi_ablkcipher_ctx *ctx_p, u32 size) {
return -EINVAL;
}
static int validate_data_size(struct ssi_ablkcipher_ctx *ctx_p, unsigned int size) {
static int validate_data_size(struct ssi_ablkcipher_ctx *ctx_p, unsigned int size)
{
switch (ctx_p->flow_mode) {
case S_DIN_to_AES:
switch (ctx_p->cipher_mode) {
@@ -183,10 +185,9 @@ static int ssi_blkcipher_init(struct crypto_tfm *tfm)
int rc = 0;
unsigned int max_key_buf_size = get_max_keysize(tfm);
SSI_LOG_DEBUG("Initializing context @%p for %s\n", ctx_p,
crypto_tfm_alg_name(tfm));
SSI_LOG_DEBUG("Initializing context @%p for %s\n",
ctx_p, crypto_tfm_alg_name(tfm));
CHECK_AND_RETURN_UPON_FIPS_ERROR();
ctx_p->cipher_mode = ssi_alg->cipher_mode;
ctx_p->flow_mode = ssi_alg->flow_mode;
ctx_p->drvdata = ssi_alg->drvdata;
@@ -203,15 +204,16 @@ static int ssi_blkcipher_init(struct crypto_tfm *tfm)
/* Map key buffer */
ctx_p->user.key_dma_addr = dma_map_single(dev, (void *)ctx_p->user.key,
max_key_buf_size, DMA_TO_DEVICE);
max_key_buf_size,
DMA_TO_DEVICE);
if (dma_mapping_error(dev, ctx_p->user.key_dma_addr)) {
SSI_LOG_ERR("Mapping Key %u B at va=%pK for DMA failed\n",
max_key_buf_size, ctx_p->user.key);
max_key_buf_size, ctx_p->user.key);
return -ENOMEM;
}
SSI_LOG_DEBUG("Mapped key %u B at va=%pK to dma=0x%llX\n",
max_key_buf_size, ctx_p->user.key,
(unsigned long long)ctx_p->user.key_dma_addr);
SSI_LOG_DEBUG("Mapped key %u B at va=%pK to dma=%pad\n",
max_key_buf_size, ctx_p->user.key,
ctx_p->user.key_dma_addr);
if (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) {
/* Alloc hash tfm for essiv */
@@ -232,7 +234,7 @@ static void ssi_blkcipher_exit(struct crypto_tfm *tfm)
unsigned int max_key_buf_size = get_max_keysize(tfm);
SSI_LOG_DEBUG("Clearing context @%p for %s\n",
crypto_tfm_ctx(tfm), crypto_tfm_alg_name(tfm));
crypto_tfm_ctx(tfm), crypto_tfm_alg_name(tfm));
if (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) {
/* Free hash tfm for essiv */
@@ -242,9 +244,9 @@ static void ssi_blkcipher_exit(struct crypto_tfm *tfm)
/* Unmap key buffer */
dma_unmap_single(dev, ctx_p->user.key_dma_addr, max_key_buf_size,
DMA_TO_DEVICE);
SSI_LOG_DEBUG("Unmapped key buffer key_dma_addr=0x%llX\n",
(unsigned long long)ctx_p->user.key_dma_addr);
DMA_TO_DEVICE);
SSI_LOG_DEBUG("Unmapped key buffer key_dma_addr=%pad\n",
ctx_p->user.key_dma_addr);
/* Free key buffer in context */
kfree(ctx_p->user.key);
@@ -263,31 +265,15 @@ static const u8 zero_buff[] = { 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0};
/* The function verifies that tdes keys are not weak.*/
static int ssi_fips_verify_3des_keys(const u8 *key, unsigned int keylen)
static int ssi_verify_3des_keys(const u8 *key, unsigned int keylen)
{
#ifdef CCREE_FIPS_SUPPORT
struct tdes_keys *tdes_key = (struct tdes_keys *)key;
/* verify key1 != key2 and key3 != key2*/
if (unlikely((memcmp((u8 *)tdes_key->key1, (u8 *)tdes_key->key2, sizeof(tdes_key->key1)) == 0) ||
(memcmp((u8 *)tdes_key->key3, (u8 *)tdes_key->key2, sizeof(tdes_key->key3)) == 0))) {
(memcmp((u8 *)tdes_key->key3, (u8 *)tdes_key->key2, sizeof(tdes_key->key3)) == 0))) {
return -ENOEXEC;
}
#endif /* CCREE_FIPS_SUPPORT */
return 0;
}
/* The function verifies that xts keys are not weak.*/
static int ssi_fips_verify_xts_keys(const u8 *key, unsigned int keylen)
{
#ifdef CCREE_FIPS_SUPPORT
/* Weak key is define as key that its first half (128/256 lsb) equals its second half (128/256 msb) */
int singleKeySize = keylen >> 1;
if (unlikely(memcmp(key, &key[singleKeySize], singleKeySize) == 0))
return -ENOEXEC;
#endif /* CCREE_FIPS_SUPPORT */
return 0;
}
@@ -317,12 +303,10 @@ static int ssi_blkcipher_setkey(struct crypto_tfm *tfm,
unsigned int max_key_buf_size = get_max_keysize(tfm);
SSI_LOG_DEBUG("Setting key in context @%p for %s. keylen=%u\n",
ctx_p, crypto_tfm_alg_name(tfm), keylen);
ctx_p, crypto_tfm_alg_name(tfm), keylen);
dump_byte_array("key", (u8 *)key, keylen);
CHECK_AND_RETURN_UPON_FIPS_ERROR();
SSI_LOG_DEBUG("ssi_blkcipher_setkey: after FIPS check");
SSI_LOG_DEBUG("after FIPS check");
/* STAT_PHASE_0: Init and sanity checks */
@@ -368,7 +352,7 @@ static int ssi_blkcipher_setkey(struct crypto_tfm *tfm,
}
ctx_p->keylen = keylen;
SSI_LOG_DEBUG("ssi_blkcipher_setkey: ssi_is_hw_key ret 0");
SSI_LOG_DEBUG("ssi_is_hw_key ret 0");
return 0;
}
@@ -378,25 +362,25 @@ static int ssi_blkcipher_setkey(struct crypto_tfm *tfm,
if (unlikely(!des_ekey(tmp, key)) &&
(crypto_tfm_get_flags(tfm) & CRYPTO_TFM_REQ_WEAK_KEY)) {
tfm->crt_flags |= CRYPTO_TFM_RES_WEAK_KEY;
SSI_LOG_DEBUG("ssi_blkcipher_setkey: weak DES key");
SSI_LOG_DEBUG("weak DES key");
return -EINVAL;
}
}
if ((ctx_p->cipher_mode == DRV_CIPHER_XTS) &&
ssi_fips_verify_xts_keys(key, keylen) != 0) {
SSI_LOG_DEBUG("ssi_blkcipher_setkey: weak XTS key");
xts_check_key(tfm, key, keylen) != 0) {
SSI_LOG_DEBUG("weak XTS key");
return -EINVAL;
}
if ((ctx_p->flow_mode == S_DIN_to_DES) &&
(keylen == DES3_EDE_KEY_SIZE) &&
ssi_fips_verify_3des_keys(key, keylen) != 0) {
SSI_LOG_DEBUG("ssi_blkcipher_setkey: weak 3DES key");
ssi_verify_3des_keys(key, keylen) != 0) {
SSI_LOG_DEBUG("weak 3DES key");
return -EINVAL;
}
/* STAT_PHASE_1: Copy key to ctx */
dma_sync_single_for_cpu(dev, ctx_p->user.key_dma_addr,
max_key_buf_size, DMA_TO_DEVICE);
max_key_buf_size, DMA_TO_DEVICE);
if (ctx_p->flow_mode == S_DIN_to_MULTI2) {
#if SSI_CC_HAS_MULTI2
@@ -405,7 +389,7 @@ static int ssi_blkcipher_setkey(struct crypto_tfm *tfm,
if (ctx_p->key_round_number < CC_MULTI2_MIN_NUM_ROUNDS ||
ctx_p->key_round_number > CC_MULTI2_MAX_NUM_ROUNDS) {
crypto_tfm_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
SSI_LOG_DEBUG("ssi_blkcipher_setkey: SSI_CC_HAS_MULTI2 einval");
SSI_LOG_DEBUG("SSI_CC_HAS_MULTI2 einval");
return -EINVAL;
#endif /*SSI_CC_HAS_MULTI2*/
} else {
@@ -429,10 +413,10 @@ static int ssi_blkcipher_setkey(struct crypto_tfm *tfm,
}
}
dma_sync_single_for_device(dev, ctx_p->user.key_dma_addr,
max_key_buf_size, DMA_TO_DEVICE);
max_key_buf_size, DMA_TO_DEVICE);
ctx_p->keylen = keylen;
SSI_LOG_DEBUG("ssi_blkcipher_setkey: return safely");
SSI_LOG_DEBUG("return safely");
return 0;
}
@@ -632,17 +616,15 @@ ssi_blkcipher_create_data_desc(
break;
#endif /*SSI_CC_HAS_MULTI2*/
default:
SSI_LOG_ERR("invalid flow mode, flow_mode = %d \n", flow_mode);
SSI_LOG_ERR("invalid flow mode, flow_mode = %d\n", flow_mode);
return;
}
/* Process */
if (likely(req_ctx->dma_buf_type == SSI_DMA_BUF_DLLI)) {
SSI_LOG_DEBUG(" data params addr 0x%llX length 0x%X \n",
(unsigned long long)sg_dma_address(src),
nbytes);
SSI_LOG_DEBUG(" data params addr 0x%llX length 0x%X \n",
(unsigned long long)sg_dma_address(dst),
nbytes);
SSI_LOG_DEBUG(" data params addr %pad length 0x%X\n",
sg_dma_address(src), nbytes);
SSI_LOG_DEBUG(" data params addr %pad length 0x%X\n",
sg_dma_address(dst), nbytes);
hw_desc_init(&desc[*seq_size]);
set_din_type(&desc[*seq_size], DMA_DLLI, sg_dma_address(src),
nbytes, NS_BIT);
@@ -655,9 +637,9 @@ ssi_blkcipher_create_data_desc(
(*seq_size)++;
} else {
/* bypass */
SSI_LOG_DEBUG(" bypass params addr 0x%llX "
SSI_LOG_DEBUG(" bypass params addr %pad "
"length 0x%X addr 0x%08X\n",
(unsigned long long)req_ctx->mlli_params.mlli_dma_addr,
req_ctx->mlli_params.mlli_dma_addr,
req_ctx->mlli_params.mlli_len,
(unsigned int)ctx_p->drvdata->mlli_sram_addr);
hw_desc_init(&desc[*seq_size]);
@@ -706,16 +688,17 @@ ssi_blkcipher_create_data_desc(
}
static int ssi_blkcipher_complete(struct device *dev,
struct ssi_ablkcipher_ctx *ctx_p,
struct blkcipher_req_ctx *req_ctx,
struct scatterlist *dst,
struct scatterlist *src,
unsigned int ivsize,
void *areq,
void __iomem *cc_base)
struct ssi_ablkcipher_ctx *ctx_p,
struct blkcipher_req_ctx *req_ctx,
struct scatterlist *dst,
struct scatterlist *src,
unsigned int ivsize,
void *areq,
void __iomem *cc_base)
{
int completion_error = 0;
u32 inflight_counter;
struct ablkcipher_request *req = (struct ablkcipher_request *)areq;
ssi_buffer_mgr_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst);
@@ -726,6 +709,22 @@ static int ssi_blkcipher_complete(struct device *dev,
ctx_p->drvdata->inflight_counter--;
if (areq) {
/*
* The crypto API expects us to set the req->info to the last
* ciphertext block. For encrypt, simply copy from the result.
* For decrypt, we must copy from a saved buffer since this
* could be an in-place decryption operation and the src is
* lost by this point.
*/
if (req_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) {
memcpy(req->info, req_ctx->backup_info, ivsize);
kfree(req_ctx->backup_info);
} else {
scatterwalk_map_and_copy(req->info, req->dst,
(req->nbytes - ivsize),
ivsize, 0);
}
ablkcipher_request_complete(areq, completion_error);
return 0;
}
@@ -749,21 +748,22 @@ static int ssi_blkcipher_process(
int rc, seq_len = 0, cts_restore_flag = 0;
SSI_LOG_DEBUG("%s areq=%p info=%p nbytes=%d\n",
((direction == DRV_CRYPTO_DIRECTION_ENCRYPT) ? "Encrypt" : "Decrypt"),
areq, info, nbytes);
((direction == DRV_CRYPTO_DIRECTION_ENCRYPT) ? "Encrypt" : "Decrypt"),
areq, info, nbytes);
CHECK_AND_RETURN_UPON_FIPS_ERROR();
/* STAT_PHASE_0: Init and sanity checks */
/* TODO: check data length according to mode */
if (unlikely(validate_data_size(ctx_p, nbytes))) {
SSI_LOG_ERR("Unsupported data size %d.\n", nbytes);
crypto_tfm_set_flags(tfm, CRYPTO_TFM_RES_BAD_BLOCK_LEN);
return -EINVAL;
rc = -EINVAL;
goto exit_process;
}
if (nbytes == 0) {
/* No data to process is valid */
return 0;
rc = 0;
goto exit_process;
}
/*For CTS in case of data size aligned to 16 use CBC mode*/
if (((nbytes % AES_BLOCK_SIZE) == 0) && (ctx_p->cipher_mode == DRV_CIPHER_CBC_CTS)) {
@@ -804,12 +804,8 @@ static int ssi_blkcipher_process(
ssi_blkcipher_create_setup_desc(tfm, req_ctx, ivsize, nbytes,
desc, &seq_len);
/* Data processing */
ssi_blkcipher_create_data_desc(tfm,
req_ctx,
dst, src,
nbytes,
areq,
desc, &seq_len);
ssi_blkcipher_create_data_desc(tfm, req_ctx, dst, src, nbytes, areq,
desc, &seq_len);
/* do we need to generate IV? */
if (req_ctx->is_giv) {
@@ -842,6 +838,9 @@ exit_process:
if (cts_restore_flag != 0)
ctx_p->cipher_mode = DRV_CIPHER_CBC_CTS;
if (rc != -EINPROGRESS)
kfree(req_ctx->backup_info);
return rc;
}
@@ -853,8 +852,6 @@ static void ssi_ablkcipher_complete(struct device *dev, void *ssi_req, void __io
struct ssi_ablkcipher_ctx *ctx_p = crypto_ablkcipher_ctx(tfm);
unsigned int ivsize = crypto_ablkcipher_ivsize(tfm);
CHECK_AND_RETURN_VOID_UPON_FIPS_ERROR();
ssi_blkcipher_complete(dev, ctx_p, req_ctx, areq->dst, areq->src,
ivsize, areq, cc_base);
}
@@ -871,8 +868,8 @@ static int ssi_ablkcipher_init(struct crypto_tfm *tfm)
}
static int ssi_ablkcipher_setkey(struct crypto_ablkcipher *tfm,
const u8 *key,
unsigned int keylen)
const u8 *key,
unsigned int keylen)
{
return ssi_blkcipher_setkey(crypto_ablkcipher_tfm(tfm), key, keylen);
}
@@ -884,7 +881,6 @@ static int ssi_ablkcipher_encrypt(struct ablkcipher_request *req)
struct blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(req);
unsigned int ivsize = crypto_ablkcipher_ivsize(ablk_tfm);
req_ctx->backup_info = req->info;
req_ctx->is_giv = false;
return ssi_blkcipher_process(tfm, req_ctx, req->dst, req->src, req->nbytes, req->info, ivsize, (void *)req, DRV_CRYPTO_DIRECTION_ENCRYPT);
@@ -897,8 +893,18 @@ static int ssi_ablkcipher_decrypt(struct ablkcipher_request *req)
struct blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(req);
unsigned int ivsize = crypto_ablkcipher_ivsize(ablk_tfm);
req_ctx->backup_info = req->info;
/*
* Allocate and save the last IV sized bytes of the source, which will
* be lost in case of in-place decryption and might be needed for CTS.
*/
req_ctx->backup_info = kmalloc(ivsize, GFP_KERNEL);
if (!req_ctx->backup_info)
return -ENOMEM;
scatterwalk_map_and_copy(req_ctx->backup_info, req->src,
(req->nbytes - ivsize), ivsize, 0);
req_ctx->is_giv = false;
return ssi_blkcipher_process(tfm, req_ctx, req->dst, req->src, req->nbytes, req->info, ivsize, (void *)req, DRV_CRYPTO_DIRECTION_DECRYPT);
}
@@ -1244,7 +1250,7 @@ struct ssi_crypto_alg *ssi_ablkcipher_create_alg(struct ssi_alg_template *templa
struct ssi_crypto_alg *t_alg;
struct crypto_alg *alg;
t_alg = kzalloc(sizeof(struct ssi_crypto_alg), GFP_KERNEL);
t_alg = kzalloc(sizeof(*t_alg), GFP_KERNEL);
if (!t_alg) {
SSI_LOG_ERR("failed to allocate t_alg\n");
return ERR_PTR(-ENOMEM);
@@ -1286,7 +1292,7 @@ int ssi_ablkcipher_free(struct ssi_drvdata *drvdata)
if (blkcipher_handle) {
/* Remove registered algs */
list_for_each_entry_safe(t_alg, n,
&blkcipher_handle->blkcipher_alg_list,
&blkcipher_handle->blkcipher_alg_list,
entry) {
crypto_unregister_alg(&t_alg->crypto_alg);
list_del(&t_alg->entry);
@@ -1305,8 +1311,7 @@ int ssi_ablkcipher_alloc(struct ssi_drvdata *drvdata)
int rc = -ENOMEM;
int alg;
ablkcipher_handle = kmalloc(sizeof(struct ssi_blkcipher_handle),
GFP_KERNEL);
ablkcipher_handle = kmalloc(sizeof(*ablkcipher_handle), GFP_KERNEL);
if (!ablkcipher_handle)
return -ENOMEM;
@@ -1322,7 +1327,7 @@ int ssi_ablkcipher_alloc(struct ssi_drvdata *drvdata)
if (IS_ERR(t_alg)) {
rc = PTR_ERR(t_alg);
SSI_LOG_ERR("%s alg allocation failed\n",
blkcipher_algs[alg].driver_name);
blkcipher_algs[alg].driver_name);
goto fail0;
}
t_alg->drvdata = drvdata;
@@ -1330,17 +1335,17 @@ int ssi_ablkcipher_alloc(struct ssi_drvdata *drvdata)
SSI_LOG_DEBUG("registering %s\n", blkcipher_algs[alg].driver_name);
rc = crypto_register_alg(&t_alg->crypto_alg);
SSI_LOG_DEBUG("%s alg registration rc = %x\n",
t_alg->crypto_alg.cra_driver_name, rc);
t_alg->crypto_alg.cra_driver_name, rc);
if (unlikely(rc != 0)) {
SSI_LOG_ERR("%s alg registration failed\n",
t_alg->crypto_alg.cra_driver_name);
t_alg->crypto_alg.cra_driver_name);
kfree(t_alg);
goto fail0;
} else {
list_add_tail(&t_alg->entry,
&ablkcipher_handle->blkcipher_alg_list);
SSI_LOG_DEBUG("Registered %s\n",
t_alg->crypto_alg.cra_driver_name);
t_alg->crypto_alg.cra_driver_name);
}
}
return 0;

@@ -71,7 +71,7 @@
#include "ssi_ivgen.h"
#include "ssi_sram_mgr.h"
#include "ssi_pm.h"
#include "ssi_fips_local.h"
#include "ssi_fips.h"
#ifdef DX_DUMP_BYTES
void dump_byte_array(const char *name, const u8 *the_array, unsigned long size)
@@ -81,12 +81,11 @@ void dump_byte_array(const char *name, const u8 *the_array, unsigned long size)
char line_buf[80];
if (!the_array) {
SSI_LOG_ERR("cannot dump_byte_array - NULL pointer\n");
SSI_LOG_ERR("cannot dump array - NULL pointer\n");
return;
}
ret = snprintf(line_buf, sizeof(line_buf), "%s[%lu]: ",
name, size);
ret = snprintf(line_buf, sizeof(line_buf), "%s[%lu]: ", name, size);
if (ret < 0) {
SSI_LOG_ERR("snprintf returned %d . aborting buffer array dump\n", ret);
return;
@@ -95,8 +94,8 @@ void dump_byte_array(const char *name, const u8 *the_array, unsigned long size)
for (i = 0, cur_byte = the_array;
(i < size) && (line_offset < sizeof(line_buf)); i++, cur_byte++) {
ret = snprintf(line_buf + line_offset,
sizeof(line_buf) - line_offset,
"0x%02X ", *cur_byte);
sizeof(line_buf) - line_offset,
"0x%02X ", *cur_byte);
if (ret < 0) {
SSI_LOG_ERR("snprintf returned %d . aborting buffer array dump\n", ret);
return;
@@ -193,11 +192,11 @@ int init_cc_regs(struct ssi_drvdata *drvdata, bool is_probe)
#ifdef DX_IRQ_DELAY
/* Set CC IRQ delay */
CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IRQ_TIMER_INIT_VAL),
DX_IRQ_DELAY);
DX_IRQ_DELAY);
#endif
if (CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IRQ_TIMER_INIT_VAL)) > 0) {
SSI_LOG_DEBUG("irq_delay=%d CC cycles\n",
CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IRQ_TIMER_INIT_VAL)));
CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IRQ_TIMER_INIT_VAL)));
}
#endif
@@ -224,7 +223,8 @@ static int init_cc_resources(struct platform_device *plat_dev)
struct resource *req_mem_cc_regs = NULL;
void __iomem *cc_base = NULL;
bool irq_registered = false;
struct ssi_drvdata *new_drvdata = kzalloc(sizeof(struct ssi_drvdata), GFP_KERNEL);
struct ssi_drvdata *new_drvdata = kzalloc(sizeof(*new_drvdata),
GFP_KERNEL);
struct device *dev = &plat_dev->dev;
struct device_node *np = dev->of_node;
u32 signature_val;
@@ -251,10 +251,10 @@ static int init_cc_resources(struct platform_device *plat_dev)
rc = -ENODEV;
goto init_cc_res_err;
}
SSI_LOG_DEBUG("Got MEM resource (%s): start=0x%llX end=0x%llX\n",
new_drvdata->res_mem->name,
(unsigned long long)new_drvdata->res_mem->start,
(unsigned long long)new_drvdata->res_mem->end);
SSI_LOG_DEBUG("Got MEM resource (%s): start=%pad end=%pad\n",
new_drvdata->res_mem->name,
new_drvdata->res_mem->start,
new_drvdata->res_mem->end);
/* Map registers space */
req_mem_cc_regs = request_mem_region(new_drvdata->res_mem->start, resource_size(new_drvdata->res_mem), "arm_cc7x_regs");
if (unlikely(!req_mem_cc_regs)) {
@@ -266,7 +266,8 @@ static int init_cc_resources(struct platform_device *plat_dev)
cc_base = ioremap(new_drvdata->res_mem->start, resource_size(new_drvdata->res_mem));
if (unlikely(!cc_base)) {
SSI_LOG_ERR("ioremap[CC](0x%08X,0x%08X) failed\n",
(unsigned int)new_drvdata->res_mem->start, (unsigned int)resource_size(new_drvdata->res_mem));
(unsigned int)new_drvdata->res_mem->start,
(unsigned int)resource_size(new_drvdata->res_mem));
rc = -ENOMEM;
goto init_cc_res_err;
}
@@ -284,15 +285,15 @@ static int init_cc_resources(struct platform_device *plat_dev)
IRQF_SHARED, "arm_cc7x", new_drvdata);
if (unlikely(rc != 0)) {
SSI_LOG_ERR("Could not register to interrupt %llu\n",
(unsigned long long)new_drvdata->res_irq->start);
(unsigned long long)new_drvdata->res_irq->start);
goto init_cc_res_err;
}
init_completion(&new_drvdata->icache_setup_completion);
irq_registered = true;
SSI_LOG_DEBUG("Registered to IRQ (%s) %llu\n",
new_drvdata->res_irq->name,
(unsigned long long)new_drvdata->res_irq->start);
new_drvdata->res_irq->name,
(unsigned long long)new_drvdata->res_irq->start);
new_drvdata->plat_dev = plat_dev;
@@ -301,19 +302,16 @@ static int init_cc_resources(struct platform_device *plat_dev)
goto init_cc_res_err;
if (!new_drvdata->plat_dev->dev.dma_mask)
{
new_drvdata->plat_dev->dev.dma_mask = &new_drvdata->plat_dev->dev.coherent_dma_mask;
}
if (!new_drvdata->plat_dev->dev.coherent_dma_mask)
{
new_drvdata->plat_dev->dev.coherent_dma_mask = DMA_BIT_MASK(DMA_BIT_MASK_LEN);
}
/* Verify correct mapping */
signature_val = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_SIGNATURE));
if (signature_val != DX_DEV_SIGNATURE) {
SSI_LOG_ERR("Invalid CC signature: SIGNATURE=0x%08X != expected=0x%08X\n",
signature_val, (u32)DX_DEV_SIGNATURE);
signature_val, (u32)DX_DEV_SIGNATURE);
rc = -EINVAL;
goto init_cc_res_err;
}
@@ -330,7 +328,7 @@ static int init_cc_resources(struct platform_device *plat_dev)
}
#ifdef ENABLE_CC_SYSFS
rc = ssi_sysfs_init(&(plat_dev->dev.kobj), new_drvdata);
rc = ssi_sysfs_init(&plat_dev->dev.kobj, new_drvdata);
if (unlikely(rc != 0)) {
SSI_LOG_ERR("init_stat_db failed\n");
goto init_cc_res_err;
@@ -401,6 +399,12 @@ static int init_cc_resources(struct platform_device *plat_dev)
goto init_cc_res_err;
}
/* If we got here and FIPS mode is enabled
* it means all FIPS test passed, so let TEE
* know we're good.
*/
cc_set_ree_fips_status(new_drvdata, true);
return 0;
init_cc_res_err:
@@ -428,7 +432,7 @@ init_cc_res_err:
new_drvdata->cc_base = NULL;
}
release_mem_region(new_drvdata->res_mem->start,
resource_size(new_drvdata->res_mem));
resource_size(new_drvdata->res_mem));
new_drvdata->res_mem = NULL;
}
kfree(new_drvdata);
@@ -471,7 +475,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)
if (drvdata->cc_base) {
iounmap(drvdata->cc_base);
release_mem_region(drvdata->res_mem->start,
resource_size(drvdata->res_mem));
resource_size(drvdata->res_mem));
drvdata->cc_base = NULL;
drvdata->res_mem = NULL;
}
@@ -516,12 +520,12 @@ static int cc7x_probe(struct platform_device *plat_dev)
asm volatile("mrc p15, 0, %0, c0, c0, 1" : "=r" (ctr));
cacheline_size = 4 << ((ctr >> 16) & 0xf);
SSI_LOG_DEBUG("CP15(L1_CACHE_BYTES) = %u , Kconfig(L1_CACHE_BYTES) = %u\n",
cacheline_size, L1_CACHE_BYTES);
cacheline_size, L1_CACHE_BYTES);
asm volatile("mrc p15, 0, %0, c0, c0, 0" : "=r" (ctr));
SSI_LOG_DEBUG("Main ID register (MIDR): Implementer 0x%02X, Arch 0x%01X,"
" Part 0x%03X, Rev r%dp%d\n",
(ctr >> 24), (ctr >> 16) & 0xF, (ctr >> 4) & 0xFFF, (ctr >> 20) & 0xF, ctr & 0xF);
SSI_LOG_DEBUG("Main ID register (MIDR): Implementer 0x%02X, Arch 0x%01X, Part 0x%03X, Rev r%dp%d\n",
(ctr >> 24), (ctr >> 16) & 0xF, (ctr >> 4) & 0xFFF,
(ctr >> 20) & 0xF, ctr & 0xF);
#endif
/* Map registers space */
@@ -546,7 +550,7 @@ static int cc7x_remove(struct platform_device *plat_dev)
}
#if defined(CONFIG_PM_RUNTIME) || defined(CONFIG_PM_SLEEP)
static struct dev_pm_ops arm_cc7x_driver_pm = {
static const struct dev_pm_ops arm_cc7x_driver_pm = {
SET_RUNTIME_PM_OPS(ssi_power_mgr_runtime_suspend, ssi_power_mgr_runtime_resume, NULL)
};
#endif
Some files were not shown because too many files have changed in this diff.