Merge 5.15.51 into android14-5.15

Changes in 5.15.51
	random: schedule mix_interrupt_randomness() less often
	random: quiet urandom warning ratelimit suppression message
	ALSA: hda/via: Fix missing beep setup
	ALSA: hda/conexant: Fix missing beep setup
	ALSA: hda/realtek: Add mute LED quirk for HP Omen laptop
	ALSA: hda/realtek - ALC897 headset MIC no sound
	ALSA: hda/realtek: Apply fixup for Lenovo Yoga Duet 7 properly
	ALSA: hda/realtek: Add quirk for Clevo PD70PNT
	ALSA: hda/realtek: Add quirk for Clevo NS50PU
	net: openvswitch: fix parsing of nw_proto for IPv6 fragments
	9p: Fix refcounting during full path walks for fid lookups
	9p: fix fid refcount leak in v9fs_vfs_atomic_open_dotl
	9p: fix fid refcount leak in v9fs_vfs_get_link
	btrfs: fix hang during unmount when block group reclaim task is running
	btrfs: prevent remounting to v1 space cache for subpage mount
	btrfs: add error messages to all unrecognized mount options
	scsi: ibmvfc: Store vhost pointer during subcrq allocation
	scsi: ibmvfc: Allocate/free queue resource only during probe/remove
	mmc: sdhci-pci-o2micro: Fix card detect by dealing with debouncing
	mmc: mediatek: wait dma stop bit reset to 0
	xen/gntdev: Avoid blocking in unmap_grant_pages()
	MAINTAINERS: Add new IOMMU development mailing list
	mtd: rawnand: gpmi: Fix setting busy timeout setting
	ata: libata: add qc->flags in ata_qc_complete_template tracepoint
	dm era: commit metadata in postsuspend after worker stops
	dm mirror log: clear log bits up to BITS_PER_LONG boundary
	tracing/kprobes: Check whether get_kretprobe() returns NULL in kretprobe_dispatcher()
	drm/i915: Implement w/a 22010492432 for adl-s
	USB: serial: pl2303: add support for more HXN (G) types
	USB: serial: option: add Telit LE910Cx 0x1250 composition
	USB: serial: option: add Quectel EM05-G modem
	USB: serial: option: add Quectel RM500K module support
	drm/msm: Ensure mmap offset is initialized
	drm/msm: Fix double pm_runtime_disable() call
	netfilter: use get_random_u32 instead of prandom
	scsi: scsi_debug: Fix zone transition to full condition
	drm/msm: Switch ordering of runpm put vs devfreq_idle
	scsi: iscsi: Exclude zero from the endpoint ID range
	xsk: Fix generic transmit when completion queue reservation fails
	drm/msm: use for_each_sgtable_sg to iterate over scatterlist
	bpf: Fix request_sock leak in sk lookup helpers
	drm/sun4i: Fix crash during suspend after component bind failure
	bpf, x86: Fix tail call count offset calculation on bpf2bpf call
	scsi: storvsc: Correct reporting of Hyper-V I/O size limits
	phy: aquantia: Fix AN when higher speeds than 1G are not advertised
	KVM: arm64: Prevent kmemleak from accessing pKVM memory
	net: Write lock dev_base_lock without disabling bottom halves.
	net: fix data-race in dev_isalive()
	tipc: fix use-after-free Read in tipc_named_reinit
	igb: fix a use-after-free issue in igb_clean_tx_ring
	bonding: ARP monitor spams NETDEV_NOTIFY_PEERS notifiers
	ethtool: Fix get module eeprom fallback
	net/sched: sch_netem: Fix arithmetic in netem_dump() for 32-bit platforms
	drm/msm/mdp4: Fix refcount leak in mdp4_modeset_init_intf
	drm/msm/dp: check core_initialized before disable interrupts at dp_display_unbind()
	drm/msm/dp: Drop now unused hpd_high member
	drm/msm/dp: dp_link_parse_sink_count() return immediately if aux read failed
	drm/msm/dp: do not initialize phy until plugin interrupt received
	drm/msm/dp: force link training for display resolution change
	perf arm-spe: Don't set data source if it's not a memory operation
	erspan: do not assume transport header is always set
	net/tls: fix tls_sk_proto_close executed repeatedly
	udmabuf: add back sanity check
	selftests: netfilter: correct PKTGEN_SCRIPT_PATHS in nft_concat_range.sh
	xen-blkfront: Handle NULL gendisk
	x86/xen: Remove undefined behavior in setup_features()
	MIPS: Remove repetitive increase irq_err_count
	afs: Fix dynamic root getattr
	ice: ethtool: advertise 1000M speeds properly
	regmap-irq: Fix a bug in regmap_irq_enable() for type_in_mask chips
	regmap-irq: Fix offset/index mismatch in read_sub_irq_data()
	igb: Make DMA faster when CPU is active on the PCIe link
	virtio_net: fix xdp_rxq_info bug after suspend/resume
	Revert "net/tls: fix tls_sk_proto_close executed repeatedly"
	sock: redo the psock vs ULP protection check
	nvme-pci: add NO APST quirk for Kioxia device
	nvme: move the Samsung X5 quirk entry to the core quirks
	gpio: winbond: Fix error code in winbond_gpio_get()
	s390/cpumf: Handle events cycles and instructions identical
	iio: mma8452: fix probe fail when device tree compatible is used.
	iio: magnetometer: yas530: Fix memchr_inv() misuse
	iio: adc: vf610: fix conversion mode sysfs node name
	usb: typec: wcove: Drop wrong dependency to INTEL_SOC_PMIC
	xhci: turn off port power in shutdown
	xhci-pci: Allow host runtime PM as default for Intel Raptor Lake xHCI
	xhci-pci: Allow host runtime PM as default for Intel Meteor Lake xHCI
	usb: gadget: Fix non-unique driver names in raw-gadget driver
	USB: gadget: Fix double-free bug in raw_gadget driver
	usb: chipidea: udc: check request status before setting device address
	dt-bindings: usb: ohci: Increase the number of PHYs
	dt-bindings: usb: ehci: Increase the number of PHYs
	btrfs: don't set lock_owner when locking extent buffer for reading
	btrfs: fix deadlock with fsync+fiemap+transaction commit
	f2fs: attach inline_data after setting compression
	iio:humidity:hts221: rearrange iio trigger get and register
	iio:chemical:ccs811: rearrange iio trigger get and register
	iio:accel:kxcjk-1013: rearrange iio trigger get and register
	iio:accel:bma180: rearrange iio trigger get and register
	iio:accel:mxc4005: rearrange iio trigger get and register
	iio: accel: mma8452: ignore the return value of reset operation
	iio: gyro: mpu3050: Fix the error handling in mpu3050_power_up()
	iio: trigger: sysfs: fix use-after-free on remove
	iio: adc: stm32: fix maximum clock rate for stm32mp15x
	iio: imu: inv_icm42600: Fix broken icm42600 (chip id 0 value)
	iio: afe: rescale: Fix boolean logic bug
	iio: adc: stm32: Fix ADCs iteration in irq handler
	iio: adc: stm32: Fix IRQs on STM32F4 by removing custom spurious IRQs message
	iio: adc: axp288: Override TS pin bias current for some models
	iio: adc: rzg2l_adc: add missing fwnode_handle_put() in rzg2l_adc_parse_properties()
	iio: adc: adi-axi-adc: Fix refcount leak in adi_axi_adc_attach_client
	iio: adc: ti-ads131e08: add missing fwnode_handle_put() in ads131e08_alloc_channels()
	xtensa: xtfpga: Fix refcount leak bug in setup
	xtensa: Fix refcount leak bug in time.c
	parisc/stifb: Fix fb_is_primary_device() only available with CONFIG_FB_STI
	parisc: Enable ARCH_HAS_STRICT_MODULE_RWX
	powerpc/microwatt: wire up rng during setup_arch()
	powerpc: Enable execve syscall exit tracepoint
	powerpc/rtas: Allow ibm,platform-dump RTAS call with null buffer address
	powerpc/powernv: wire up rng during setup_arch
	drm/msm/dp: Always clear mask bits to disable interrupts at dp_ctrl_reset_irq_ctrl()
	ARM: dts: imx7: Move hsic_phy power domain to HSIC PHY node
	ARM: dts: imx6qdl: correct PU regulator ramp delay
	arm64: dts: ti: k3-am64-main: Remove support for HS400 speed mode
	ARM: exynos: Fix refcount leak in exynos_map_pmu
	soc: bcm: brcmstb: pm: pm-arm: Fix refcount leak in brcmstb_pm_probe
	ARM: Fix refcount leak in axxia_boot_secondary
	memory: samsung: exynos5422-dmc: Fix refcount leak in of_get_dram_timings
	ARM: cns3xxx: Fix refcount leak in cns3xxx_init
	modpost: fix section mismatch check for exported init/exit sections
	ARM: dts: bcm2711-rpi-400: Fix GPIO line names
	random: update comment from copy_to_user() -> copy_to_iter()
	perf build-id: Fix caching files with a wrong build ID
	dma-direct: use the correct size for dma_set_encrypted()
	kbuild: link vmlinux only once for CONFIG_TRIM_UNUSED_KSYMS (2nd attempt)
	powerpc/pseries: wire up rng during setup_arch()
	Linux 5.15.51

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Ic8819a78d2d84055e7a6d44bdfab6a6cd8296dac

Greg Kroah-Hartman, 2022-07-13 12:01:29 +02:00
142 changed files with 1204 additions and 580 deletions


@@ -1,4 +1,4 @@
What: /sys/bus/iio/devices/iio:deviceX/conversion_mode
What: /sys/bus/iio/devices/iio:deviceX/in_conversion_mode
KernelVersion: 4.2
Contact: linux-iio@vger.kernel.org
Description:


@@ -135,7 +135,8 @@ properties:
Phandle of a companion.
phys:
maxItems: 1
minItems: 1
maxItems: 3
phy-names:
const: usb


@@ -102,7 +102,8 @@ properties:
Overrides the detected port count
phys:
maxItems: 1
minItems: 1
maxItems: 3
phy-names:
const: usb


@@ -434,6 +434,7 @@ ACPI VIOT DRIVER
M: Jean-Philippe Brucker <jean-philippe@linaro.org>
L: linux-acpi@vger.kernel.org
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
S: Maintained
F: drivers/acpi/viot.c
F: include/linux/acpi_viot.h
@@ -941,6 +942,7 @@ AMD IOMMU (AMD-VI)
M: Joerg Roedel <joro@8bytes.org>
R: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
F: drivers/iommu/amd/
@@ -5602,6 +5604,7 @@ M: Christoph Hellwig <hch@lst.de>
M: Marek Szyprowski <m.szyprowski@samsung.com>
R: Robin Murphy <robin.murphy@arm.com>
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
S: Supported
W: http://git.infradead.org/users/hch/dma-mapping.git
T: git git://git.infradead.org/users/hch/dma-mapping.git
@@ -5614,6 +5617,7 @@ F: kernel/dma/
DMA MAPPING BENCHMARK
M: Barry Song <song.bao.hua@hisilicon.com>
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
F: kernel/dma/map_benchmark.c
F: tools/testing/selftests/dma/
@@ -7115,6 +7119,7 @@ F: drivers/gpu/drm/exynos/exynos_dp*
EXYNOS SYSMMU (IOMMU) driver
M: Marek Szyprowski <m.szyprowski@samsung.com>
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
S: Maintained
F: drivers/iommu/exynos-iommu.c
@@ -9464,6 +9469,7 @@ INTEL IOMMU (VT-d)
M: David Woodhouse <dwmw2@infradead.org>
M: Lu Baolu <baolu.lu@linux.intel.com>
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
F: drivers/iommu/intel/
@@ -9800,6 +9806,7 @@ IOMMU DRIVERS
M: Joerg Roedel <joro@8bytes.org>
M: Will Deacon <will@kernel.org>
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
F: Documentation/devicetree/bindings/iommu/
@@ -11802,6 +11809,7 @@ F: drivers/i2c/busses/i2c-mt65xx.c
MEDIATEK IOMMU DRIVER
M: Yong Wu <yong.wu@mediatek.com>
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
L: linux-mediatek@lists.infradead.org (moderated for non-subscribers)
S: Supported
F: Documentation/devicetree/bindings/iommu/mediatek*
@@ -15567,6 +15575,7 @@ F: drivers/i2c/busses/i2c-qcom-cci.c
QUALCOMM IOMMU
M: Rob Clark <robdclark@gmail.com>
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
L: linux-arm-msm@vger.kernel.org
S: Maintained
F: drivers/iommu/arm/arm-smmu/qcom_iommu.c
@@ -17996,6 +18005,7 @@ F: arch/x86/boot/video*
SWIOTLB SUBSYSTEM
M: Christoph Hellwig <hch@infradead.org>
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
S: Supported
W: http://git.infradead.org/users/hch/dma-mapping.git
T: git git://git.infradead.org/users/hch/dma-mapping.git
@@ -20577,6 +20587,7 @@ M: Juergen Gross <jgross@suse.com>
M: Stefano Stabellini <sstabellini@kernel.org>
L: xen-devel@lists.xenproject.org (moderated for non-subscribers)
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
S: Supported
F: arch/x86/xen/*swiotlb*
F: drivers/xen/*swiotlb*


@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 15
SUBLEVEL = 50
SUBLEVEL = 51
EXTRAVERSION =
NAME = Trick or Treat
@@ -1226,7 +1226,7 @@ KBUILD_MODULES := 1
autoksyms_recursive: descend modules.order
$(Q)$(CONFIG_SHELL) $(srctree)/scripts/adjust_autoksyms.sh \
"$(MAKE) -f $(srctree)/Makefile vmlinux"
"$(MAKE) -f $(srctree)/Makefile autoksyms_recursive"
endif
autoksyms_h := $(if $(CONFIG_TRIM_UNUSED_KSYMS), include/generated/autoksyms.h)


@@ -28,12 +28,12 @@
&expgpio {
gpio-line-names = "BT_ON",
"WL_ON",
"",
"PWR_LED_OFF",
"GLOBAL_RESET",
"VDD_SD_IO_SEL",
"CAM_GPIO",
"GLOBAL_SHUTDOWN",
"SD_PWR_ON",
"SD_OC_N";
"SHUTDOWN_REQUEST";
};
&genet_mdio {


@@ -763,7 +763,7 @@
regulator-name = "vddpu";
regulator-min-microvolt = <725000>;
regulator-max-microvolt = <1450000>;
regulator-enable-ramp-delay = <150>;
regulator-enable-ramp-delay = <380>;
anatop-reg-offset = <0x140>;
anatop-vol-bit-shift = <9>;
anatop-vol-bit-width = <5>;


@@ -104,6 +104,7 @@
compatible = "usb-nop-xceiv";
clocks = <&clks IMX7D_USB_HSIC_ROOT_CLK>;
clock-names = "main_clk";
power-domains = <&pgc_hsic_phy>;
#phy-cells = <0>;
};
@@ -1135,7 +1136,6 @@
compatible = "fsl,imx7d-usb", "fsl,imx27-usb";
reg = <0x30b30000 0x200>;
interrupts = <GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>;
power-domains = <&pgc_hsic_phy>;
clocks = <&clks IMX7D_USB_CTRL_CLK>;
fsl,usbphy = <&usbphynop3>;
fsl,usbmisc = <&usbmisc3 0>;


@@ -39,6 +39,7 @@ static int axxia_boot_secondary(unsigned int cpu, struct task_struct *idle)
return -ENOENT;
syscon = of_iomap(syscon_np, 0);
of_node_put(syscon_np);
if (!syscon)
return -ENOMEM;


@@ -372,6 +372,7 @@ static void __init cns3xxx_init(void)
/* De-Asscer SATA Reset */
cns3xxx_pwr_soft_rst(CNS3XXX_PWR_SOFTWARE_RST(SATA));
}
of_node_put(dn);
dn = of_find_compatible_node(NULL, NULL, "cavium,cns3420-sdhci");
if (of_device_is_available(dn)) {
@@ -385,6 +386,7 @@ static void __init cns3xxx_init(void)
cns3xxx_pwr_clk_en(CNS3XXX_PWR_CLK_EN(SDIO));
cns3xxx_pwr_soft_rst(CNS3XXX_PWR_SOFTWARE_RST(SDIO));
}
of_node_put(dn);
pm_power_off = cns3xxx_power_off;


@@ -149,6 +149,7 @@ static void exynos_map_pmu(void)
np = of_find_matching_node(NULL, exynos_dt_pmu_match);
if (np)
pmu_base_addr = of_iomap(np, 0);
of_node_put(np);
}
static void __init exynos_init_irq(void)


@@ -456,13 +456,11 @@
clock-names = "clk_ahb", "clk_xin";
mmc-ddr-1_8v;
mmc-hs200-1_8v;
mmc-hs400-1_8v;
ti,trm-icp = <0x2>;
ti,otap-del-sel-legacy = <0x0>;
ti,otap-del-sel-mmc-hs = <0x0>;
ti,otap-del-sel-ddr52 = <0x6>;
ti,otap-del-sel-hs200 = <0x7>;
ti,otap-del-sel-hs400 = <0x4>;
};
sdhci1: mmc@fa00000 {


@@ -2173,11 +2173,11 @@ static int finalize_hyp_mode(void)
return 0;
/*
* Exclude HYP BSS from kmemleak so that it doesn't get peeked
* at, which would end badly once the section is inaccessible.
* None of other sections should ever be introspected.
* Exclude HYP sections from kmemleak so that they don't get peeked
* at, which would end badly once inaccessible.
*/
kmemleak_free_part(__hyp_bss_start, __hyp_bss_end - __hyp_bss_start);
kmemleak_free_part(__va(hyp_mem_base), hyp_mem_size);
return pkvm_drop_host_privileges();
}


@@ -640,8 +640,6 @@ static int icu_get_irq(unsigned int irq)
printk(KERN_ERR "spurious ICU interrupt: %04x,%04x\n", pend1, pend2);
atomic_inc(&irq_err_count);
return -1;
}


@@ -9,6 +9,7 @@ config PARISC
select ARCH_WANT_FRAME_POINTERS
select ARCH_HAS_ELF_RANDOMIZE
select ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_HAS_STRICT_MODULE_RWX
select ARCH_HAS_UBSAN_SANITIZE_ALL
select ARCH_NO_SG_CHAIN
select ARCH_SUPPORTS_HUGETLBFS if PA20


@@ -12,7 +12,7 @@ static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma,
pgprot_val(vma->vm_page_prot) |= _PAGE_NO_CACHE;
}
#if defined(CONFIG_STI_CONSOLE) || defined(CONFIG_FB_STI)
#if defined(CONFIG_FB_STI)
int fb_is_primary_device(struct fb_info *info);
#else
static inline int fb_is_primary_device(struct fb_info *info)


@@ -1818,7 +1818,7 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
tm_reclaim_current(0);
#endif
memset(regs->gpr, 0, sizeof(regs->gpr));
memset(&regs->gpr[1], 0, sizeof(regs->gpr) - sizeof(regs->gpr[0]));
regs->ctr = 0;
regs->link = 0;
regs->xer = 0;


@@ -983,7 +983,7 @@ static struct rtas_filter rtas_filters[] __ro_after_init = {
{ "get-time-of-day", -1, -1, -1, -1, -1 },
{ "ibm,get-vpd", -1, 0, -1, 1, 2 },
{ "ibm,lpar-perftools", -1, 2, 3, -1, -1 },
{ "ibm,platform-dump", -1, 4, 5, -1, -1 },
{ "ibm,platform-dump", -1, 4, 5, -1, -1 }, /* Special cased */
{ "ibm,read-slot-reset-state", -1, -1, -1, -1, -1 },
{ "ibm,scan-log-dump", -1, 0, 1, -1, -1 },
{ "ibm,set-dynamic-indicator", -1, 2, -1, -1, -1 },
@@ -1032,6 +1032,15 @@ static bool block_rtas_call(int token, int nargs,
size = 1;
end = base + size - 1;
/*
* Special case for ibm,platform-dump - NULL buffer
* address is used to indicate end of dump processing
*/
if (!strcmp(f->name, "ibm,platform-dump") &&
base == 0)
return false;
if (!in_rmo_buf(base, end))
goto err;
}


@@ -0,0 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _MICROWATT_H
#define _MICROWATT_H
void microwatt_rng_init(void);
#endif /* _MICROWATT_H */


@@ -11,6 +11,7 @@
#include <asm/archrandom.h>
#include <asm/cputable.h>
#include <asm/machdep.h>
#include "microwatt.h"
#define DARN_ERR 0xFFFFFFFFFFFFFFFFul
@@ -29,7 +30,7 @@ int microwatt_get_random_darn(unsigned long *v)
return 1;
}
static __init int rng_init(void)
void __init microwatt_rng_init(void)
{
unsigned long val;
int i;
@@ -37,12 +38,7 @@ static __init int rng_init(void)
for (i = 0; i < 10; i++) {
if (microwatt_get_random_darn(&val)) {
ppc_md.get_random_seed = microwatt_get_random_darn;
return 0;
return;
}
}
pr_warn("Unable to use DARN for get_random_seed()\n");
return -EIO;
}
machine_subsys_initcall(, rng_init);


@@ -16,6 +16,8 @@
#include <asm/xics.h>
#include <asm/udbg.h>
#include "microwatt.h"
static void __init microwatt_init_IRQ(void)
{
xics_init();
@@ -32,10 +34,16 @@ static int __init microwatt_populate(void)
}
machine_arch_initcall(microwatt, microwatt_populate);
static void __init microwatt_setup_arch(void)
{
microwatt_rng_init();
}
define_machine(microwatt) {
.name = "microwatt",
.probe = microwatt_probe,
.init_IRQ = microwatt_init_IRQ,
.setup_arch = microwatt_setup_arch,
.progress = udbg_progress,
.calibrate_decr = generic_calibrate_decr,
};


@@ -42,4 +42,6 @@ ssize_t memcons_copy(struct memcons *mc, char *to, loff_t pos, size_t count);
u32 memcons_get_size(struct memcons *mc);
struct memcons *memcons_init(struct device_node *node, const char *mc_prop_name);
void pnv_rng_init(void);
#endif /* _POWERNV_H */


@@ -17,6 +17,7 @@
#include <asm/prom.h>
#include <asm/machdep.h>
#include <asm/smp.h>
#include "powernv.h"
#define DARN_ERR 0xFFFFFFFFFFFFFFFFul
@@ -28,7 +29,6 @@ struct powernv_rng {
static DEFINE_PER_CPU(struct powernv_rng *, powernv_rng);
int powernv_hwrng_present(void)
{
struct powernv_rng *rng;
@@ -98,9 +98,6 @@ static int initialise_darn(void)
return 0;
}
}
pr_warn("Unable to use DARN for get_random_seed()\n");
return -EIO;
}
@@ -163,32 +160,55 @@ static __init int rng_create(struct device_node *dn)
rng_init_per_cpu(rng, dn);
pr_info_once("Registering arch random hook.\n");
ppc_md.get_random_seed = powernv_get_random_long;
return 0;
}
static __init int rng_init(void)
static int __init pnv_get_random_long_early(unsigned long *v)
{
struct device_node *dn;
int rc;
if (!slab_is_available())
return 0;
if (cmpxchg(&ppc_md.get_random_seed, pnv_get_random_long_early,
NULL) != pnv_get_random_long_early)
return 0;
for_each_compatible_node(dn, NULL, "ibm,power-rng") {
rc = rng_create(dn);
if (rc) {
pr_err("Failed creating rng for %pOF (%d).\n",
dn, rc);
if (rng_create(dn))
continue;
}
/* Create devices for hwrng driver */
of_platform_device_create(dn, NULL, NULL);
}
initialise_darn();
if (!ppc_md.get_random_seed)
return 0;
return ppc_md.get_random_seed(v);
}
void __init pnv_rng_init(void)
{
struct device_node *dn;
/* Prefer darn over the rest. */
if (!initialise_darn())
return;
dn = of_find_compatible_node(NULL, NULL, "ibm,power-rng");
if (dn)
ppc_md.get_random_seed = pnv_get_random_long_early;
of_node_put(dn);
}
static int __init pnv_rng_late_init(void)
{
unsigned long v;
/* In case it wasn't called during init for some other reason. */
if (ppc_md.get_random_seed == pnv_get_random_long_early)
pnv_get_random_long_early(&v);
return 0;
}
machine_subsys_initcall(powernv, rng_init);
machine_subsys_initcall(powernv, pnv_rng_late_init);


@@ -190,6 +190,8 @@ static void __init pnv_setup_arch(void)
pnv_check_guarded_cores();
/* XXX PMCS */
pnv_rng_init();
}
static void __init pnv_init(void)


@@ -115,4 +115,6 @@ extern u32 pseries_security_flavor;
void pseries_setup_security_mitigations(void);
void pseries_lpar_read_hblkrm_characteristics(void);
void pseries_rng_init(void);
#endif /* _PSERIES_PSERIES_H */


@@ -10,6 +10,7 @@
#include <asm/archrandom.h>
#include <asm/machdep.h>
#include <asm/plpar_wrappers.h>
#include "pseries.h"
static int pseries_get_random_long(unsigned long *v)
@@ -24,19 +25,13 @@ static int pseries_get_random_long(unsigned long *v)
return 0;
}
static __init int rng_init(void)
void __init pseries_rng_init(void)
{
struct device_node *dn;
dn = of_find_compatible_node(NULL, NULL, "ibm,random");
if (!dn)
return -ENODEV;
pr_info("Registering arch random hook.\n");
return;
ppc_md.get_random_seed = pseries_get_random_long;
of_node_put(dn);
return 0;
}
machine_subsys_initcall(pseries, rng_init);


@@ -840,6 +840,8 @@ static void __init pSeries_setup_arch(void)
if (swiotlb_force == SWIOTLB_FORCE)
ppc_swiotlb_enable = 1;
pseries_rng_init();
}
static void pseries_panic(char *str)


@@ -516,6 +516,26 @@ static int __hw_perf_event_init(struct perf_event *event, unsigned int type)
return err;
}
/* Events CPU_CYLCES and INSTRUCTIONS can be submitted with two different
* attribute::type values:
* - PERF_TYPE_HARDWARE:
* - pmu->type:
* Handle both type of invocations identical. They address the same hardware.
* The result is different when event modifiers exclude_kernel and/or
* exclude_user are also set.
*/
static int cpumf_pmu_event_type(struct perf_event *event)
{
u64 ev = event->attr.config;
if (cpumf_generic_events_basic[PERF_COUNT_HW_CPU_CYCLES] == ev ||
cpumf_generic_events_basic[PERF_COUNT_HW_INSTRUCTIONS] == ev ||
cpumf_generic_events_user[PERF_COUNT_HW_CPU_CYCLES] == ev ||
cpumf_generic_events_user[PERF_COUNT_HW_INSTRUCTIONS] == ev)
return PERF_TYPE_HARDWARE;
return PERF_TYPE_RAW;
}
static int cpumf_pmu_event_init(struct perf_event *event)
{
unsigned int type = event->attr.type;
@@ -525,7 +545,7 @@ static int cpumf_pmu_event_init(struct perf_event *event)
err = __hw_perf_event_init(event, type);
else if (event->pmu->type == type)
/* Registered as unknown PMU */
err = __hw_perf_event_init(event, PERF_TYPE_RAW);
err = __hw_perf_event_init(event, cpumf_pmu_event_type(event));
else
return -ENOENT;


@@ -1440,8 +1440,9 @@ st: if (is_imm8(insn->off))
case BPF_JMP | BPF_CALL:
func = (u8 *) __bpf_call_base + imm32;
if (tail_call_reachable) {
/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
EMIT3_off32(0x48, 0x8B, 0x85,
-(bpf_prog->aux->stack_depth + 8));
-round_up(bpf_prog->aux->stack_depth, 8) - 8);
if (!imm32 || emit_call(&prog, func, image + addrs[i - 1] + 7))
return -EINVAL;
} else {


@@ -154,6 +154,7 @@ static void __init calibrate_ccount(void)
cpu = of_find_compatible_node(NULL, NULL, "cdns,xtensa-cpu");
if (cpu) {
clk = of_clk_get(cpu, 0);
of_node_put(cpu);
if (!IS_ERR(clk)) {
ccount_freq = clk_get_rate(clk);
return;


@@ -133,6 +133,7 @@ static int __init machine_setup(void)
if ((eth = of_find_compatible_node(eth, NULL, "opencores,ethoc")))
update_local_mac(eth);
of_node_put(eth);
return 0;
}
arch_initcall(machine_setup);


@@ -252,6 +252,7 @@ static void regmap_irq_enable(struct irq_data *data)
struct regmap_irq_chip_data *d = irq_data_get_irq_chip_data(data);
struct regmap *map = d->map;
const struct regmap_irq *irq_data = irq_to_regmap_irq(d, data->hwirq);
unsigned int reg = irq_data->reg_offset / map->reg_stride;
unsigned int mask, type;
type = irq_data->type.type_falling_val | irq_data->type.type_rising_val;
@@ -268,14 +269,14 @@ static void regmap_irq_enable(struct irq_data *data)
* at the corresponding offset in regmap_irq_set_type().
*/
if (d->chip->type_in_mask && type)
mask = d->type_buf[irq_data->reg_offset / map->reg_stride];
mask = d->type_buf[reg] & irq_data->mask;
else
mask = irq_data->mask;
if (d->chip->clear_on_unmask)
d->clear_status = true;
d->mask_buf[irq_data->reg_offset / map->reg_stride] &= ~mask;
d->mask_buf[reg] &= ~mask;
}
static void regmap_irq_disable(struct irq_data *data)
@@ -386,6 +387,7 @@ static inline int read_sub_irq_data(struct regmap_irq_chip_data *data,
subreg = &chip->sub_reg_offsets[b];
for (i = 0; i < subreg->num_regs; i++) {
unsigned int offset = subreg->offset[i];
unsigned int index = offset / map->reg_stride;
if (chip->not_fixed_stride)
ret = regmap_read(map,
@@ -394,7 +396,7 @@ static inline int read_sub_irq_data(struct regmap_irq_chip_data *data,
else
ret = regmap_read(map,
chip->status_base + offset,
&data->status_buf[offset]);
&data->status_buf[index]);
if (ret)
break;


@@ -2140,9 +2140,11 @@ static void blkfront_closing(struct blkfront_info *info)
return;
/* No more blkif_request(). */
blk_mq_stop_hw_queues(info->rq);
blk_mark_disk_dead(info->gd);
set_capacity(info->gd, 0);
if (info->rq && info->gd) {
blk_mq_stop_hw_queues(info->rq);
blk_mark_disk_dead(info->gd);
set_capacity(info->gd, 0);
}
for_each_rinfo(info, rinfo, i) {
/* No more gnttab callback work. */
@@ -2478,16 +2480,19 @@ static int blkfront_remove(struct xenbus_device *xbdev)
dev_dbg(&xbdev->dev, "%s removed", xbdev->nodename);
del_gendisk(info->gd);
if (info->gd)
del_gendisk(info->gd);
mutex_lock(&blkfront_mutex);
list_del(&info->info_list);
mutex_unlock(&blkfront_mutex);
blkif_free(info, 0);
xlbd_release_minors(info->gd->first_minor, info->gd->minors);
blk_cleanup_disk(info->gd);
blk_mq_free_tag_set(&info->tag_set);
if (info->gd) {
xlbd_release_minors(info->gd->first_minor, info->gd->minors);
blk_cleanup_disk(info->gd);
blk_mq_free_tag_set(&info->tag_set);
}
kfree(info);
return 0;


@@ -88,7 +88,7 @@ static RAW_NOTIFIER_HEAD(random_ready_chain);
/* Control how we warn userspace. */
static struct ratelimit_state urandom_warning =
RATELIMIT_STATE_INIT("warn_urandom_randomness", HZ, 3);
RATELIMIT_STATE_INIT_FLAGS("urandom_warning", HZ, 3, RATELIMIT_MSG_ON_RELEASE);
static int ratelimit_disable __read_mostly =
IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM);
module_param_named(ratelimit_disable, ratelimit_disable, int, 0644);
@@ -452,7 +452,7 @@ static ssize_t get_random_bytes_user(struct iov_iter *iter)
/*
* Immediately overwrite the ChaCha key at index 4 with random
* bytes, in case userspace causes copy_to_user() below to sleep
* bytes, in case userspace causes copy_to_iter() below to sleep
* forever, so that we still retain forward secrecy in that case.
*/
crng_make_state(chacha_state, (u8 *)&chacha_state[4], CHACHA_KEY_SIZE);
@@ -1000,7 +1000,7 @@ void add_interrupt_randomness(int irq)
if (new_count & MIX_INFLIGHT)
return;
if (new_count < 64 && !time_is_before_jiffies(fast_pool->last + HZ))
if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
return;
if (unlikely(!fast_pool->mix.func))


@@ -32,8 +32,11 @@ static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct udmabuf *ubuf = vma->vm_private_data;
pgoff_t pgoff = vmf->pgoff;
vmf->page = ubuf->pages[vmf->pgoff];
if (pgoff >= ubuf->pagecount)
return VM_FAULT_SIGBUS;
vmf->page = ubuf->pages[pgoff];
get_page(vmf->page);
return 0;
}


@@ -217,8 +217,6 @@ static int giu_get_irq(unsigned int irq)
printk(KERN_ERR "spurious GIU interrupt: %04x(%04x),%04x(%04x)\n",
maskl, pendl, maskh, pendh);
atomic_inc(&irq_err_count);
return -EINVAL;
}


@@ -385,12 +385,13 @@ static int winbond_gpio_get(struct gpio_chip *gc, unsigned int offset)
unsigned long *base = gpiochip_get_data(gc);
const struct winbond_gpio_info *info;
bool val;
int ret;
winbond_gpio_get_info(&offset, &info);
val = winbond_sio_enter(*base);
if (val)
return val;
ret = winbond_sio_enter(*base);
if (ret)
return ret;
winbond_sio_select_logical(*base, info->dev);


@@ -2434,7 +2434,7 @@ static void icl_wrpll_params_populate(struct skl_wrpll_params *params,
}
/*
* Display WA #22010492432: ehl, tgl, adl-p
* Display WA #22010492432: ehl, tgl, adl-s, adl-p
* Program half of the nominal DCO divider fraction value.
*/
static bool
@@ -2442,7 +2442,7 @@ ehl_combo_pll_div_frac_wa_needed(struct drm_i915_private *i915)
{
return ((IS_PLATFORM(i915, INTEL_ELKHARTLAKE) &&
IS_JSL_EHL_DISPLAY_STEP(i915, STEP_B0, STEP_FOREVER)) ||
IS_TIGERLAKE(i915) || IS_ALDERLAKE_P(i915)) &&
IS_TIGERLAKE(i915) || IS_ALDERLAKE_S(i915) || IS_ALDERLAKE_P(i915)) &&
i915->dpll.ref_clks.nssc == 38400;
}


@@ -958,7 +958,8 @@ void adreno_gpu_cleanup(struct adreno_gpu *adreno_gpu)
for (i = 0; i < ARRAY_SIZE(adreno_gpu->info->fw); i++)
release_firmware(adreno_gpu->fw[i]);
pm_runtime_disable(&priv->gpu_pdev->dev);
if (pm_runtime_enabled(&priv->gpu_pdev->dev))
pm_runtime_disable(&priv->gpu_pdev->dev);
msm_gpu_cleanup(&adreno_gpu->base);
}


@@ -223,6 +223,7 @@ static int mdp4_modeset_init_intf(struct mdp4_kms *mdp4_kms,
encoder = mdp4_lcdc_encoder_init(dev, panel_node);
if (IS_ERR(encoder)) {
DRM_DEV_ERROR(dev->dev, "failed to construct LCDC encoder\n");
of_node_put(panel_node);
return PTR_ERR(encoder);
}
@@ -232,6 +233,7 @@ static int mdp4_modeset_init_intf(struct mdp4_kms *mdp4_kms,
connector = mdp4_lvds_connector_init(dev, panel_node, encoder);
if (IS_ERR(connector)) {
DRM_DEV_ERROR(dev->dev, "failed to initialize LVDS connector\n");
of_node_put(panel_node);
return PTR_ERR(connector);
}


@@ -1348,60 +1348,49 @@ static int dp_ctrl_enable_stream_clocks(struct dp_ctrl_private *ctrl)
return ret;
}
int dp_ctrl_host_init(struct dp_ctrl *dp_ctrl, bool flip, bool reset)
void dp_ctrl_reset_irq_ctrl(struct dp_ctrl *dp_ctrl, bool enable)
{
struct dp_ctrl_private *ctrl;
struct dp_io *dp_io;
struct phy *phy;
if (!dp_ctrl) {
DRM_ERROR("Invalid input data\n");
return -EINVAL;
}
ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
dp_io = &ctrl->parser->io;
phy = dp_io->phy;
ctrl->dp_ctrl.orientation = flip;
dp_catalog_ctrl_reset(ctrl->catalog);
if (reset)
dp_catalog_ctrl_reset(ctrl->catalog);
DRM_DEBUG_DP("flip=%d\n", flip);
dp_catalog_ctrl_phy_reset(ctrl->catalog);
phy_init(phy);
dp_catalog_ctrl_enable_irq(ctrl->catalog, true);
return 0;
/*
* all dp controller programmable registers will not
* be reset to default value after DP_SW_RESET
* therefore interrupt mask bits have to be updated
* to enable/disable interrupts
*/
dp_catalog_ctrl_enable_irq(ctrl->catalog, enable);
}
/**
* dp_ctrl_host_deinit() - Uninitialize DP controller
* @dp_ctrl: Display Port Driver data
*
* Perform required steps to uninitialize DP controller
* and its resources.
*/
void dp_ctrl_host_deinit(struct dp_ctrl *dp_ctrl)
void dp_ctrl_phy_init(struct dp_ctrl *dp_ctrl)
{
struct dp_ctrl_private *ctrl;
struct dp_io *dp_io;
struct phy *phy;
if (!dp_ctrl) {
DRM_ERROR("Invalid input data\n");
return;
}
ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
dp_io = &ctrl->parser->io;
phy = dp_io->phy;
dp_catalog_ctrl_phy_reset(ctrl->catalog);
phy_init(phy);
}
void dp_ctrl_phy_exit(struct dp_ctrl *dp_ctrl)
{
struct dp_ctrl_private *ctrl;
struct dp_io *dp_io;
struct phy *phy;
ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
dp_io = &ctrl->parser->io;
phy = dp_io->phy;
dp_catalog_ctrl_enable_irq(ctrl->catalog, false);
dp_catalog_ctrl_phy_reset(ctrl->catalog);
phy_exit(phy);
DRM_DEBUG_DP("Host deinitialized successfully\n");
}
static bool dp_ctrl_use_fixed_nvid(struct dp_ctrl_private *ctrl)
@@ -1471,7 +1460,10 @@ static int dp_ctrl_deinitialize_mainlink(struct dp_ctrl_private *ctrl)
}
phy_power_off(phy);
/* aux channel down, reinit phy */
phy_exit(phy);
phy_init(phy);
return 0;
}
@@ -1501,6 +1493,8 @@ end:
return ret;
}
static int dp_ctrl_on_stream_phy_test_report(struct dp_ctrl *dp_ctrl);
static int dp_ctrl_process_phy_test_request(struct dp_ctrl_private *ctrl)
{
int ret = 0;
@@ -1523,7 +1517,7 @@ static int dp_ctrl_process_phy_test_request(struct dp_ctrl_private *ctrl)
ret = dp_ctrl_on_link(&ctrl->dp_ctrl);
if (!ret)
ret = dp_ctrl_on_stream(&ctrl->dp_ctrl);
ret = dp_ctrl_on_stream_phy_test_report(&ctrl->dp_ctrl);
else
DRM_ERROR("failed to enable DP link controller\n");
@@ -1778,7 +1772,27 @@ static int dp_ctrl_link_retrain(struct dp_ctrl_private *ctrl)
return dp_ctrl_setup_main_link(ctrl, &training_step);
}
int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl)
static int dp_ctrl_on_stream_phy_test_report(struct dp_ctrl *dp_ctrl)
{
int ret;
struct dp_ctrl_private *ctrl;
ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
ctrl->dp_ctrl.pixel_rate = ctrl->panel->dp_mode.drm_mode.clock;
ret = dp_ctrl_enable_stream_clocks(ctrl);
if (ret) {
DRM_ERROR("Failed to start pixel clocks. ret=%d\n", ret);
return ret;
}
dp_ctrl_send_phy_test_pattern(ctrl);
return 0;
}
int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl, bool force_link_train)
{
int ret = 0;
bool mainlink_ready = false;
@@ -1809,12 +1823,7 @@ int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl)
goto end;
}
if (ctrl->link->sink_request & DP_TEST_LINK_PHY_TEST_PATTERN) {
dp_ctrl_send_phy_test_pattern(ctrl);
return 0;
}
if (!dp_ctrl_channel_eq_ok(ctrl))
if (force_link_train || !dp_ctrl_channel_eq_ok(ctrl))
dp_ctrl_link_retrain(ctrl);
/* stop txing train pattern to end link training */
@@ -1877,8 +1886,14 @@ int dp_ctrl_off_link_stream(struct dp_ctrl *dp_ctrl)
return ret;
}
DRM_DEBUG_DP("Before, phy=%x init_count=%d power_on=%d\n",
(u32)(uintptr_t)phy, phy->init_count, phy->power_count);
phy_power_off(phy);
DRM_DEBUG_DP("After, phy=%x init_count=%d power_on=%d\n",
(u32)(uintptr_t)phy, phy->init_count, phy->power_count);
/* aux channel down, reinit phy */
phy_exit(phy);
phy_init(phy);
@@ -1887,23 +1902,6 @@ int dp_ctrl_off_link_stream(struct dp_ctrl *dp_ctrl)
return ret;
}
void dp_ctrl_off_phy(struct dp_ctrl *dp_ctrl)
{
struct dp_ctrl_private *ctrl;
struct dp_io *dp_io;
struct phy *phy;
ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
dp_io = &ctrl->parser->io;
phy = dp_io->phy;
dp_catalog_ctrl_reset(ctrl->catalog);
phy_exit(phy);
DRM_DEBUG_DP("DP off phy done\n");
}
int dp_ctrl_off(struct dp_ctrl *dp_ctrl)
{
struct dp_ctrl_private *ctrl;
@@ -1931,10 +1929,14 @@ int dp_ctrl_off(struct dp_ctrl *dp_ctrl)
DRM_ERROR("Failed to disable link clocks. ret=%d\n", ret);
}
phy_power_off(phy);
phy_exit(phy);
DRM_DEBUG_DP("Before, phy=%x init_count=%d power_on=%d\n",
(u32)(uintptr_t)phy, phy->init_count, phy->power_count);
phy_power_off(phy);
DRM_DEBUG_DP("After, phy=%x init_count=%d power_on=%d\n",
(u32)(uintptr_t)phy, phy->init_count, phy->power_count);
DRM_DEBUG_DP("DP off done\n");
return ret;
}


@@ -19,12 +19,9 @@ struct dp_ctrl {
u32 pixel_rate;
};
int dp_ctrl_host_init(struct dp_ctrl *dp_ctrl, bool flip, bool reset);
void dp_ctrl_host_deinit(struct dp_ctrl *dp_ctrl);
int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl);
int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl);
int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl, bool force_link_train);
int dp_ctrl_off_link_stream(struct dp_ctrl *dp_ctrl);
void dp_ctrl_off_phy(struct dp_ctrl *dp_ctrl);
int dp_ctrl_off(struct dp_ctrl *dp_ctrl);
void dp_ctrl_push_idle(struct dp_ctrl *dp_ctrl);
void dp_ctrl_isr(struct dp_ctrl *dp_ctrl);
@@ -34,4 +31,9 @@ struct dp_ctrl *dp_ctrl_get(struct device *dev, struct dp_link *link,
struct dp_power *power, struct dp_catalog *catalog,
struct dp_parser *parser);
void dp_ctrl_reset_irq_ctrl(struct dp_ctrl *dp_ctrl, bool enable);
void dp_ctrl_phy_init(struct dp_ctrl *dp_ctrl);
void dp_ctrl_phy_exit(struct dp_ctrl *dp_ctrl);
void dp_ctrl_irq_phy_exit(struct dp_ctrl *dp_ctrl);
#endif /* _DP_CTRL_H_ */


@@ -81,6 +81,7 @@ struct dp_display_private {
/* state variables */
bool core_initialized;
bool phy_initialized;
bool hpd_irq_on;
bool audio_supported;
@@ -260,7 +261,8 @@ static void dp_display_unbind(struct device *dev, struct device *master,
struct dp_display_private, dp_display);
/* disable all HPD interrupts */
dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, false);
if (dp->core_initialized)
dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, false);
kthread_stop(dp->ev_tsk);
@@ -361,36 +363,45 @@ end:
return rc;
}
static void dp_display_host_init(struct dp_display_private *dp, int reset)
static void dp_display_host_phy_init(struct dp_display_private *dp)
{
bool flip = false;
DRM_DEBUG_DP("core_init=%d phy_init=%d\n",
dp->core_initialized, dp->phy_initialized);
DRM_DEBUG_DP("core_initialized=%d\n", dp->core_initialized);
if (dp->core_initialized) {
DRM_DEBUG_DP("DP core already initialized\n");
return;
if (!dp->phy_initialized) {
dp_ctrl_phy_init(dp->ctrl);
dp->phy_initialized = true;
}
}
if (dp->usbpd->orientation == ORIENTATION_CC2)
flip = true;
static void dp_display_host_phy_exit(struct dp_display_private *dp)
{
DRM_DEBUG_DP("core_init=%d phy_init=%d\n",
dp->core_initialized, dp->phy_initialized);
dp_power_init(dp->power, flip);
dp_ctrl_host_init(dp->ctrl, flip, reset);
if (dp->phy_initialized) {
dp_ctrl_phy_exit(dp->ctrl);
dp->phy_initialized = false;
}
}
static void dp_display_host_init(struct dp_display_private *dp)
{
DRM_DEBUG_DP("core_initialized=%d\n", dp->core_initialized);
dp_power_init(dp->power, false);
dp_ctrl_reset_irq_ctrl(dp->ctrl, true);
dp_aux_init(dp->aux);
dp->core_initialized = true;
}
static void dp_display_host_deinit(struct dp_display_private *dp)
{
if (!dp->core_initialized) {
DRM_DEBUG_DP("DP core not initialized\n");
return;
}
DRM_DEBUG_DP("core_initialized=%d\n", dp->core_initialized);
dp_ctrl_host_deinit(dp->ctrl);
dp_ctrl_reset_irq_ctrl(dp->ctrl, false);
dp_aux_deinit(dp->aux);
dp_power_deinit(dp->power);
dp->core_initialized = false;
}
@@ -408,7 +419,7 @@ static int dp_display_usbpd_configure_cb(struct device *dev)
dp = container_of(g_dp_display,
struct dp_display_private, dp_display);
dp_display_host_init(dp, false);
dp_display_host_phy_init(dp);
rc = dp_display_process_hpd_high(dp);
end:
@@ -546,17 +557,9 @@ static int dp_hpd_plug_handle(struct dp_display_private *dp, u32 data)
dp->hpd_state = ST_CONNECT_PENDING;
hpd->hpd_high = 1;
ret = dp_display_usbpd_configure_cb(&dp->pdev->dev);
if (ret) { /* link train failed */
hpd->hpd_high = 0;
dp->hpd_state = ST_DISCONNECTED;
if (ret == -ECONNRESET) { /* cable unplugged */
dp->core_initialized = false;
}
} else {
/* start sentinel checking in case of missing uevent */
dp_add_event(dp, EV_CONNECT_PENDING_TIMEOUT, 0, tout);
@@ -626,9 +629,7 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
if (state == ST_DISCONNECTED) {
/* triggered by irq_hdp with sink_count = 0 */
if (dp->link->sink_count == 0) {
dp_ctrl_off_phy(dp->ctrl);
hpd->hpd_high = 0;
dp->core_initialized = false;
dp_display_host_phy_exit(dp);
}
mutex_unlock(&dp->event_mutex);
return 0;
@@ -651,8 +652,6 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
/* disable HPD plug interrupts */
dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK, false);
hpd->hpd_high = 0;
/*
* We don't need separate work for disconnect as
* connect/attention interrupts are disabled
@@ -692,7 +691,6 @@ static int dp_disconnect_pending_timeout(struct dp_display_private *dp, u32 data
static int dp_irq_hpd_handle(struct dp_display_private *dp, u32 data)
{
u32 state;
int ret;
mutex_lock(&dp->event_mutex);
@@ -717,10 +715,8 @@ static int dp_irq_hpd_handle(struct dp_display_private *dp, u32 data)
return 0;
}
ret = dp_display_usbpd_attention_cb(&dp->pdev->dev);
if (ret == -ECONNRESET) { /* cable unplugged */
dp->core_initialized = false;
}
dp_display_usbpd_attention_cb(&dp->pdev->dev);
DRM_DEBUG_DP("hpd_state=%d\n", state);
mutex_unlock(&dp->event_mutex);
@@ -869,7 +865,7 @@ static int dp_display_enable(struct dp_display_private *dp, u32 data)
return 0;
}
rc = dp_ctrl_on_stream(dp->ctrl);
rc = dp_ctrl_on_stream(dp->ctrl, data);
if (!rc)
dp_display->power_on = true;
@@ -915,12 +911,19 @@ static int dp_display_disable(struct dp_display_private *dp, u32 data)
dp_display->audio_enabled = false;
/* triggered by irq_hpd with sink_count = 0 */
if (dp->link->sink_count == 0) {
/*
* irq_hpd with sink_count = 0
* hdmi unplugged out of dongle
*/
dp_ctrl_off_link_stream(dp->ctrl);
} else {
/*
* unplugged interrupt
* dongle unplugged out of DUT
*/
dp_ctrl_off(dp->ctrl);
dp->core_initialized = false;
dp_display_host_phy_exit(dp);
}
dp_display->power_on = false;
@@ -1050,7 +1053,7 @@ void msm_dp_snapshot(struct msm_disp_state *disp_state, struct msm_dp *dp)
static void dp_display_config_hpd(struct dp_display_private *dp)
{
dp_display_host_init(dp, true);
dp_display_host_init(dp);
dp_catalog_ctrl_hpd_config(dp->catalog);
/* Enable interrupt first time
@@ -1317,20 +1320,23 @@ static int dp_pm_resume(struct device *dev)
dp->hpd_state = ST_DISCONNECTED;
/* turn on dp ctrl/phy */
dp_display_host_init(dp, true);
dp_display_host_init(dp);
dp_catalog_ctrl_hpd_config(dp->catalog);
/*
* set sink to normal operation mode -- D0
* before dpcd read
*/
dp_link_psm_config(dp->link, &dp->panel->link_info, false);
if (dp_catalog_link_is_connected(dp->catalog)) {
/*
* set sink to normal operation mode -- D0
* before dpcd read
*/
dp_display_host_phy_init(dp);
dp_link_psm_config(dp->link, &dp->panel->link_info, false);
sink_count = drm_dp_read_sink_count(dp->aux);
if (sink_count < 0)
sink_count = 0;
dp_display_host_phy_exit(dp);
}
dp->link->sink_count = sink_count;
@@ -1369,18 +1375,16 @@ static int dp_pm_suspend(struct device *dev)
DRM_DEBUG_DP("Before, core_inited=%d power_on=%d\n",
dp->core_initialized, dp_display->power_on);
if (dp->core_initialized == true) {
/* mainlink enabled */
if (dp_power_clk_status(dp->power, DP_CTRL_PM))
dp_ctrl_off_link_stream(dp->ctrl);
/* mainlink enabled */
if (dp_power_clk_status(dp->power, DP_CTRL_PM))
dp_ctrl_off_link_stream(dp->ctrl);
dp_display_host_deinit(dp);
}
dp->hpd_state = ST_SUSPENDED;
dp_display_host_phy_exit(dp);
/* host_init will be called at pm_resume */
dp->core_initialized = false;
dp_display_host_deinit(dp);
dp->hpd_state = ST_SUSPENDED;
DRM_DEBUG_DP("After, core_inited=%d power_on=%d\n",
dp->core_initialized, dp_display->power_on);
@@ -1508,6 +1512,7 @@ int msm_dp_display_enable(struct msm_dp *dp, struct drm_encoder *encoder)
int rc = 0;
struct dp_display_private *dp_display;
u32 state;
bool force_link_train = false;
dp_display = container_of(dp, struct dp_display_private, dp_display);
if (!dp_display->dp_mode.drm_mode.clock) {
@@ -1536,10 +1541,12 @@ int msm_dp_display_enable(struct msm_dp *dp, struct drm_encoder *encoder)
state = dp_display->hpd_state;
if (state == ST_DISPLAY_OFF)
dp_display_host_init(dp_display, true);
if (state == ST_DISPLAY_OFF) {
dp_display_host_phy_init(dp_display);
force_link_train = true;
}
dp_display_enable(dp_display, 0);
dp_display_enable(dp_display, force_link_train);
rc = dp_display_post_enable(dp);
if (rc) {
@@ -1548,10 +1555,6 @@ int msm_dp_display_enable(struct msm_dp *dp, struct drm_encoder *encoder)
dp_display_unprepare(dp);
}
/* manual kick off plug event to train link */
if (state == ST_DISPLAY_OFF)
dp_add_event(dp_display, EV_IRQ_HPD_INT, 0, 0);
/* completed connection */
dp_display->hpd_state = ST_CONNECTED;


@@ -32,8 +32,6 @@ int dp_hpd_connect(struct dp_usbpd *dp_usbpd, bool hpd)
hpd_priv = container_of(dp_usbpd, struct dp_hpd_private,
dp_usbpd);
dp_usbpd->hpd_high = hpd;
if (!hpd_priv->dp_cb || !hpd_priv->dp_cb->configure
|| !hpd_priv->dp_cb->disconnect) {
pr_err("hpd dp_cb not initialized\n");


@@ -26,7 +26,6 @@ enum plug_orientation {
* @multi_func: multi-function preferred
* @usb_config_req: request to switch to usb
* @exit_dp_mode: request exit from displayport mode
* @hpd_high: Hot Plug Detect signal is high.
* @hpd_irq: Change in the status since last message
* @alt_mode_cfg_done: bool to specify alt mode status
* @debug_en: bool to specify debug mode
@@ -39,7 +38,6 @@ struct dp_usbpd {
bool multi_func;
bool usb_config_req;
bool exit_dp_mode;
bool hpd_high;
bool hpd_irq;
bool alt_mode_cfg_done;
bool debug_en;


@@ -737,18 +737,25 @@ static int dp_link_parse_sink_count(struct dp_link *dp_link)
return 0;
}
static void dp_link_parse_sink_status_field(struct dp_link_private *link)
static int dp_link_parse_sink_status_field(struct dp_link_private *link)
{
int len = 0;
link->prev_sink_count = link->dp_link.sink_count;
dp_link_parse_sink_count(&link->dp_link);
len = dp_link_parse_sink_count(&link->dp_link);
if (len < 0) {
DRM_ERROR("DP parse sink count failed\n");
return len;
}
len = drm_dp_dpcd_read_link_status(link->aux,
link->link_status);
if (len < DP_LINK_STATUS_SIZE)
if (len < DP_LINK_STATUS_SIZE) {
DRM_ERROR("DP link status read failed\n");
dp_link_parse_request(link);
return len;
}
return dp_link_parse_request(link);
}
/**
@@ -1023,7 +1030,9 @@ int dp_link_process_request(struct dp_link *dp_link)
dp_link_reset_data(link);
dp_link_parse_sink_status_field(link);
ret = dp_link_parse_sink_status_field(link);
if (ret)
return ret;
if (link->request.test_requested == DP_TEST_LINK_EDID_READ) {
dp_link->sink_request |= DP_TEST_LINK_EDID_READ;


@@ -1102,7 +1102,7 @@ static const struct drm_driver msm_driver = {
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
.gem_prime_import_sg_table = msm_gem_prime_import_sg_table,
.gem_prime_mmap = drm_gem_prime_mmap,
.gem_prime_mmap = msm_gem_prime_mmap,
#ifdef CONFIG_DEBUG_FS
.debugfs_init = msm_debugfs_init,
#endif

View File

@@ -298,6 +298,7 @@ unsigned long msm_gem_shrinker_shrink(struct drm_device *dev, unsigned long nr_t
void msm_gem_shrinker_init(struct drm_device *dev);
void msm_gem_shrinker_cleanup(struct drm_device *dev);
int msm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj);
int msm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);


@@ -11,6 +11,21 @@
#include "msm_drv.h"
#include "msm_gem.h"
int msm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
{
int ret;
/* Ensure the mmap offset is initialized. We lazily initialize it,
* so if it has not been first mmap'd directly as a GEM object, the
* mmap offset will not be already initialized.
*/
ret = drm_gem_create_mmap_offset(obj);
if (ret)
return ret;
return drm_gem_prime_mmap(obj, vma);
}
struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)
{
struct msm_gem_object *msm_obj = to_msm_bo(obj);


@@ -658,7 +658,6 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
msm_submit_retire(submit);
pm_runtime_mark_last_busy(&gpu->pdev->dev);
pm_runtime_put_autosuspend(&gpu->pdev->dev);
spin_lock_irqsave(&ring->submit_lock, flags);
list_del(&submit->node);
@@ -672,6 +671,8 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
msm_devfreq_idle(gpu);
mutex_unlock(&gpu->active_lock);
pm_runtime_put_autosuspend(&gpu->pdev->dev);
msm_gem_submit_put(submit);
}


@@ -58,7 +58,7 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
u64 addr = iova;
unsigned int i;
for_each_sg(sgt->sgl, sg, sgt->nents, i) {
for_each_sgtable_sg(sgt, sg, i) {
size_t size = sg->length;
phys_addr_t phys = sg_phys(sg);


@@ -72,7 +72,6 @@ static int sun4i_drv_bind(struct device *dev)
goto free_drm;
}
dev_set_drvdata(dev, drm);
drm->dev_private = drv;
INIT_LIST_HEAD(&drv->frontend_list);
INIT_LIST_HEAD(&drv->engine_list);
@@ -113,6 +112,8 @@ static int sun4i_drv_bind(struct device *dev)
drm_fbdev_generic_setup(drm, 32);
dev_set_drvdata(dev, drm);
return 0;
finish_poll:
@@ -129,6 +130,7 @@ static void sun4i_drv_unbind(struct device *dev)
{
struct drm_device *drm = dev_get_drvdata(dev);
dev_set_drvdata(dev, NULL);
drm_dev_unregister(drm);
drm_kms_helper_poll_fini(drm);
drm_atomic_helper_shutdown(drm);


@@ -1006,11 +1006,12 @@ static int bma180_probe(struct i2c_client *client,
data->trig->ops = &bma180_trigger_ops;
iio_trigger_set_drvdata(data->trig, indio_dev);
indio_dev->trig = iio_trigger_get(data->trig);
ret = iio_trigger_register(data->trig);
if (ret)
goto err_trigger_free;
indio_dev->trig = iio_trigger_get(data->trig);
}
ret = iio_triggered_buffer_setup(indio_dev, NULL,


@@ -1553,12 +1553,12 @@ static int kxcjk1013_probe(struct i2c_client *client,
data->dready_trig->ops = &kxcjk1013_trigger_ops;
iio_trigger_set_drvdata(data->dready_trig, indio_dev);
indio_dev->trig = data->dready_trig;
iio_trigger_get(indio_dev->trig);
ret = iio_trigger_register(data->dready_trig);
if (ret)
goto err_poweroff;
indio_dev->trig = iio_trigger_get(data->dready_trig);
data->motion_trig->ops = &kxcjk1013_trigger_ops;
iio_trigger_set_drvdata(data->motion_trig, indio_dev);
ret = iio_trigger_register(data->motion_trig);


@@ -1493,10 +1493,14 @@ static int mma8452_reset(struct i2c_client *client)
int i;
int ret;
ret = i2c_smbus_write_byte_data(client, MMA8452_CTRL_REG2,
/*
* Find on fxls8471, after config reset bit, it reset immediately,
* and will not give ACK, so here do not check the return value.
* The following code will read the reset register, and check whether
* this reset works.
*/
i2c_smbus_write_byte_data(client, MMA8452_CTRL_REG2,
MMA8452_CTRL_REG2_RST);
if (ret < 0)
return ret;
for (i = 0; i < 10; i++) {
usleep_range(100, 200);
@@ -1539,11 +1543,13 @@ static int mma8452_probe(struct i2c_client *client,
mutex_init(&data->lock);
data->chip_info = device_get_match_data(&client->dev);
if (!data->chip_info && id) {
data->chip_info = &mma_chip_info_table[id->driver_data];
} else {
dev_err(&client->dev, "unknown device model\n");
return -ENODEV;
if (!data->chip_info) {
if (id) {
data->chip_info = &mma_chip_info_table[id->driver_data];
} else {
dev_err(&client->dev, "unknown device model\n");
return -ENODEV;
}
}
data->vdd_reg = devm_regulator_get(&client->dev, "vdd");


@@ -456,8 +456,6 @@ static int mxc4005_probe(struct i2c_client *client,
data->dready_trig->ops = &mxc4005_trigger_ops;
iio_trigger_set_drvdata(data->dready_trig, indio_dev);
indio_dev->trig = data->dready_trig;
iio_trigger_get(indio_dev->trig);
ret = devm_iio_trigger_register(&client->dev,
data->dready_trig);
if (ret) {
@@ -465,6 +463,8 @@ static int mxc4005_probe(struct i2c_client *client,
"failed to register trigger\n");
return ret;
}
indio_dev->trig = iio_trigger_get(data->dready_trig);
}
return devm_iio_device_register(&client->dev, indio_dev);


@@ -322,16 +322,19 @@ static struct adi_axi_adc_client *adi_axi_adc_attach_client(struct device *dev)
if (!try_module_get(cl->dev->driver->owner)) {
mutex_unlock(&registered_clients_lock);
of_node_put(cln);
return ERR_PTR(-ENODEV);
}
get_device(cl->dev);
cl->info = info;
mutex_unlock(&registered_clients_lock);
of_node_put(cln);
return cl;
}
mutex_unlock(&registered_clients_lock);
of_node_put(cln);
return ERR_PTR(-EPROBE_DEFER);
}


@@ -196,6 +196,14 @@ static const struct dmi_system_id axp288_adc_ts_bias_override[] = {
},
.driver_data = (void *)(uintptr_t)AXP288_ADC_TS_BIAS_80UA,
},
{
/* Nuvision Solo 10 Draw */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "TMAX"),
DMI_MATCH(DMI_PRODUCT_NAME, "TM101W610L"),
},
.driver_data = (void *)(uintptr_t)AXP288_ADC_TS_BIAS_80UA,
},
{}
};


@@ -334,11 +334,15 @@ static int rzg2l_adc_parse_properties(struct platform_device *pdev, struct rzg2l
i = 0;
device_for_each_child_node(&pdev->dev, fwnode) {
ret = fwnode_property_read_u32(fwnode, "reg", &channel);
if (ret)
if (ret) {
fwnode_handle_put(fwnode);
return ret;
}
if (channel >= RZG2L_ADC_MAX_CHANNELS)
if (channel >= RZG2L_ADC_MAX_CHANNELS) {
fwnode_handle_put(fwnode);
return -EINVAL;
}
chan_array[i].type = IIO_VOLTAGE;
chan_array[i].indexed = 1;


@@ -64,6 +64,7 @@ struct stm32_adc_priv;
* @max_clk_rate_hz: maximum analog clock rate (Hz, from datasheet)
* @has_syscfg: SYSCFG capability flags
* @num_irqs: number of interrupt lines
* @num_adcs: maximum number of ADC instances in the common registers
*/
struct stm32_adc_priv_cfg {
const struct stm32_adc_common_regs *regs;
@@ -71,6 +72,7 @@ struct stm32_adc_priv_cfg {
u32 max_clk_rate_hz;
unsigned int has_syscfg;
unsigned int num_irqs;
unsigned int num_adcs;
};
/**
@@ -352,7 +354,7 @@ static void stm32_adc_irq_handler(struct irq_desc *desc)
* before invoking the interrupt handler (e.g. call ISR only for
* IRQ-enabled ADCs).
*/
for (i = 0; i < priv->cfg->num_irqs; i++) {
for (i = 0; i < priv->cfg->num_adcs; i++) {
if ((status & priv->cfg->regs->eoc_msk[i] &&
stm32_adc_eoc_enabled(priv, i)) ||
(status & priv->cfg->regs->ovr_msk[i]))
@@ -796,6 +798,7 @@ static const struct stm32_adc_priv_cfg stm32f4_adc_priv_cfg = {
.clk_sel = stm32f4_adc_clk_sel,
.max_clk_rate_hz = 36000000,
.num_irqs = 1,
.num_adcs = 3,
};
static const struct stm32_adc_priv_cfg stm32h7_adc_priv_cfg = {
@@ -804,14 +807,16 @@ static const struct stm32_adc_priv_cfg stm32h7_adc_priv_cfg = {
.max_clk_rate_hz = 36000000,
.has_syscfg = HAS_VBOOSTER,
.num_irqs = 1,
.num_adcs = 2,
};
static const struct stm32_adc_priv_cfg stm32mp1_adc_priv_cfg = {
.regs = &stm32h7_adc_common_regs,
.clk_sel = stm32h7_adc_clk_sel,
.max_clk_rate_hz = 40000000,
.max_clk_rate_hz = 36000000,
.has_syscfg = HAS_VBOOSTER | HAS_ANASWVDD,
.num_irqs = 2,
.num_adcs = 2,
};
static const struct of_device_id stm32_adc_of_match[] = {


@@ -1259,7 +1259,6 @@ static irqreturn_t stm32_adc_threaded_isr(int irq, void *data)
struct stm32_adc *adc = iio_priv(indio_dev);
const struct stm32_adc_regspec *regs = adc->cfg->regs;
u32 status = stm32_adc_readl(adc, regs->isr_eoc.reg);
u32 mask = stm32_adc_readl(adc, regs->ier_eoc.reg);
/* Check ovr status right now, as ovr mask should be already disabled */
if (status & regs->isr_ovr.mask) {
@@ -1274,11 +1273,6 @@ static irqreturn_t stm32_adc_threaded_isr(int irq, void *data)
return IRQ_HANDLED;
}
if (!(status & mask))
dev_err_ratelimited(&indio_dev->dev,
"Unexpected IRQ: IER=0x%08x, ISR=0x%08x\n",
mask, status);
return IRQ_NONE;
}
@@ -1288,10 +1282,6 @@ static irqreturn_t stm32_adc_isr(int irq, void *data)
struct stm32_adc *adc = iio_priv(indio_dev);
const struct stm32_adc_regspec *regs = adc->cfg->regs;
u32 status = stm32_adc_readl(adc, regs->isr_eoc.reg);
u32 mask = stm32_adc_readl(adc, regs->ier_eoc.reg);
if (!(status & mask))
return IRQ_WAKE_THREAD;
if (status & regs->isr_ovr.mask) {
/*


@@ -739,7 +739,7 @@ static int ads131e08_alloc_channels(struct iio_dev *indio_dev)
device_for_each_child_node(dev, node) {
ret = fwnode_property_read_u32(node, "reg", &channel);
if (ret)
return ret;
goto err_child_out;
ret = fwnode_property_read_u32(node, "ti,gain", &tmp);
if (ret) {
@@ -747,7 +747,7 @@ static int ads131e08_alloc_channels(struct iio_dev *indio_dev)
} else {
ret = ads131e08_pga_gain_to_field_value(st, tmp);
if (ret < 0)
return ret;
goto err_child_out;
channel_config[i].pga_gain = tmp;
}
@@ -758,7 +758,7 @@ static int ads131e08_alloc_channels(struct iio_dev *indio_dev)
} else {
ret = ads131e08_validate_channel_mux(st, tmp);
if (ret)
return ret;
goto err_child_out;
channel_config[i].mux = tmp;
}
@@ -784,6 +784,10 @@ static int ads131e08_alloc_channels(struct iio_dev *indio_dev)
st->channel_config = channel_config;
return 0;
err_child_out:
fwnode_handle_put(node);
return ret;
}
static void ads131e08_regulator_disable(void *data)


@@ -148,7 +148,7 @@ static int rescale_configure_channel(struct device *dev,
chan->ext_info = rescale->ext_info;
chan->type = rescale->cfg->type;
if (iio_channel_has_info(schan, IIO_CHAN_INFO_RAW) ||
if (iio_channel_has_info(schan, IIO_CHAN_INFO_RAW) &&
iio_channel_has_info(schan, IIO_CHAN_INFO_SCALE)) {
dev_info(dev, "using raw+scale source channel\n");
} else if (iio_channel_has_info(schan, IIO_CHAN_INFO_PROCESSED)) {


@@ -499,11 +499,11 @@ static int ccs811_probe(struct i2c_client *client,
data->drdy_trig->ops = &ccs811_trigger_ops;
iio_trigger_set_drvdata(data->drdy_trig, indio_dev);
indio_dev->trig = data->drdy_trig;
iio_trigger_get(indio_dev->trig);
ret = iio_trigger_register(data->drdy_trig);
if (ret)
goto err_poweroff;
indio_dev->trig = iio_trigger_get(data->drdy_trig);
}
ret = iio_triggered_buffer_setup(indio_dev, NULL,


@@ -876,6 +876,7 @@ static int mpu3050_power_up(struct mpu3050 *mpu3050)
ret = regmap_update_bits(mpu3050->map, MPU3050_PWR_MGM,
MPU3050_PWR_MGM_SLEEP, 0);
if (ret) {
regulator_bulk_disable(ARRAY_SIZE(mpu3050->regs), mpu3050->regs);
dev_err(mpu3050->dev, "error setting power mode\n");
return ret;
}


@@ -135,9 +135,12 @@ int hts221_allocate_trigger(struct iio_dev *iio_dev)
iio_trigger_set_drvdata(hw->trig, iio_dev);
hw->trig->ops = &hts221_trigger_ops;
err = devm_iio_trigger_register(hw->dev, hw->trig);
iio_dev->trig = iio_trigger_get(hw->trig);
return devm_iio_trigger_register(hw->dev, hw->trig);
return err;
}
static int hts221_buffer_preenable(struct iio_dev *iio_dev)


@@ -17,6 +17,7 @@
#include "inv_icm42600_buffer.h"
enum inv_icm42600_chip {
INV_CHIP_INVALID,
INV_CHIP_ICM42600,
INV_CHIP_ICM42602,
INV_CHIP_ICM42605,


@@ -565,7 +565,7 @@ int inv_icm42600_core_probe(struct regmap *regmap, int chip, int irq,
bool open_drain;
int ret;
if (chip < 0 || chip >= INV_CHIP_NB) {
if (chip <= INV_CHIP_INVALID || chip >= INV_CHIP_NB) {
dev_err(dev, "invalid chip = %d\n", chip);
return -ENODEV;
}
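This check pairs with the INV_CHIP_INVALID entry added at the top of the enum in the previous hunk: a chip value of 0 now stands for "no match data was found", so the probe-time range check has to reject that sentinel explicitly instead of only testing chip < 0. A trimmed-down sketch of the enum and the check (names shortened, values illustrative):

#include <stdio.h>

/* Shortened version of the driver enum, with the new sentinel first. */
enum inv_chip {
        INV_CHIP_INVALID,       /* 0: what a missing match-data lookup yields */
        INV_CHIP_ICM42600,
        INV_CHIP_ICM42602,
        INV_CHIP_NB,            /* keep last */
};

static int probe_check(int chip)
{
        /* old test: chip < 0 || chip >= INV_CHIP_NB -- lets 0 through */
        if (chip <= INV_CHIP_INVALID || chip >= INV_CHIP_NB)
                return -19;     /* -ENODEV */
        return 0;
}

int main(void)
{
        printf("INV_CHIP_INVALID  -> %d\n", probe_check(INV_CHIP_INVALID));
        printf("INV_CHIP_ICM42602 -> %d\n", probe_check(INV_CHIP_ICM42602));
        return 0;
}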


@@ -639,7 +639,7 @@ static int yas532_get_calibration_data(struct yas5xx *yas5xx)
dev_dbg(yas5xx->dev, "calibration data: %*ph\n", 14, data);
/* Sanity check, is this all zeroes? */
if (memchr_inv(data, 0x00, 13)) {
if (memchr_inv(data, 0x00, 13) == NULL) {
if (!(data[13] & BIT(7)))
dev_warn(yas5xx->dev, "calibration is blank!\n");
}
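The yas530 fix above hinges on memchr_inv() semantics: it returns NULL when every byte of the buffer equals the given value, and a pointer to the first differing byte otherwise, so the "calibration is blank" warning has to fire on the NULL (all-zero) result rather than the non-NULL one. A user-space sketch with a local stand-in for memchr_inv():

#include <stdio.h>
#include <stddef.h>

/* Minimal stand-in for the kernel's memchr_inv(): NULL when all bytes
 * equal 'c', otherwise a pointer to the first byte that differs. */
static const void *memchr_inv_like(const void *p, int c, size_t len)
{
        const unsigned char *s = p;
        size_t i;

        for (i = 0; i < len; i++)
                if (s[i] != (unsigned char)c)
                        return s + i;
        return NULL;
}

int main(void)
{
        unsigned char blank[13] = { 0 };
        unsigned char real[13] = { 0x12, 0x34 };        /* made-up calibration bytes */

        printf("blank buffer -> %s\n",
               memchr_inv_like(blank, 0x00, sizeof(blank)) == NULL ?
               "warn: calibration is blank" : "looks programmed");
        printf("real buffer  -> %s\n",
               memchr_inv_like(real, 0x00, sizeof(real)) == NULL ?
               "warn: calibration is blank" : "looks programmed");
        return 0;
}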


@@ -195,6 +195,7 @@ static int iio_sysfs_trigger_remove(int id)
}
iio_trigger_unregister(t->trig);
irq_work_sync(&t->work);
iio_trigger_free(t->trig);
list_del(&t->l);


@@ -1400,7 +1400,7 @@ static void start_worker(struct era *era)
static void stop_worker(struct era *era)
{
atomic_set(&era->suspended, 1);
flush_workqueue(era->wq);
drain_workqueue(era->wq);
}
/*----------------------------------------------------------------
@@ -1570,6 +1570,12 @@ static void era_postsuspend(struct dm_target *ti)
}
stop_worker(era);
r = metadata_commit(era->md);
if (r) {
DMERR("%s: metadata_commit failed", __func__);
/* FIXME: fail mode */
}
}
static int era_preresume(struct dm_target *ti)


@@ -615,7 +615,7 @@ static int disk_resume(struct dm_dirty_log *log)
log_clear_bit(lc, lc->clean_bits, i);
/* clear any old bits -- device has shrunk */
for (i = lc->region_count; i % (sizeof(*lc->clean_bits) << BYTE_SHIFT); i++)
for (i = lc->region_count; i % BITS_PER_LONG; i++)
log_clear_bit(lc, lc->clean_bits, i);
/* copy clean across to sync */
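The dm-log hunk above widens the tail-clearing loop from a 32-bit boundary (sizeof(*lc->clean_bits) is 4 bytes) to a BITS_PER_LONG boundary: the bitmap words are sized and walked as unsigned longs, so on 64-bit machines the bits between those two boundaries could otherwise be left stale after a shrink. A stand-alone sketch, with a hypothetical region_count, of how far each rounding rule actually clears:

#include <stdio.h>

#define BITS_PER_LONG   (8 * (int)sizeof(unsigned long))

int main(void)
{
        unsigned int region_count = 32;         /* hypothetical region count */
        unsigned int i, old_end, new_end;

        /* Old condition: i % (sizeof(*lc->clean_bits) << BYTE_SHIFT),
         * i.e. stop at the next 32-bit boundary. */
        for (i = region_count; i % 32; i++)
                ;
        old_end = i;

        /* New condition: stop at the next BITS_PER_LONG boundary, matching
         * how the bitmap is actually allocated and read. */
        for (i = region_count; i % BITS_PER_LONG; i++)
                ;
        new_end = i;

        printf("regions=%u: old rule clears up to bit %u, new rule up to bit %u\n",
               region_count, old_end, new_end);
        return 0;
}

With region_count = 32 on a 64-bit build this prints 32 vs. 64, i.e. bits 32-63 of the last bitmap word were never cleared by the old loop.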


@@ -1187,33 +1187,39 @@ static int of_get_dram_timings(struct exynos5_dmc *dmc)
dmc->timing_row = devm_kmalloc_array(dmc->dev, TIMING_COUNT,
sizeof(u32), GFP_KERNEL);
if (!dmc->timing_row)
return -ENOMEM;
if (!dmc->timing_row) {
ret = -ENOMEM;
goto put_node;
}
dmc->timing_data = devm_kmalloc_array(dmc->dev, TIMING_COUNT,
sizeof(u32), GFP_KERNEL);
if (!dmc->timing_data)
return -ENOMEM;
if (!dmc->timing_data) {
ret = -ENOMEM;
goto put_node;
}
dmc->timing_power = devm_kmalloc_array(dmc->dev, TIMING_COUNT,
sizeof(u32), GFP_KERNEL);
if (!dmc->timing_power)
return -ENOMEM;
if (!dmc->timing_power) {
ret = -ENOMEM;
goto put_node;
}
dmc->timings = of_lpddr3_get_ddr_timings(np_ddr, dmc->dev,
DDR_TYPE_LPDDR3,
&dmc->timings_arr_size);
if (!dmc->timings) {
of_node_put(np_ddr);
dev_warn(dmc->dev, "could not get timings from DT\n");
return -EINVAL;
ret = -EINVAL;
goto put_node;
}
dmc->min_tck = of_lpddr3_get_min_tck(np_ddr, dmc->dev);
if (!dmc->min_tck) {
of_node_put(np_ddr);
dev_warn(dmc->dev, "could not get tck from DT\n");
return -EINVAL;
ret = -EINVAL;
goto put_node;
}
/* Sorted array of OPPs with frequency ascending */
@@ -1227,13 +1233,14 @@ static int of_get_dram_timings(struct exynos5_dmc *dmc)
clk_period_ps);
}
of_node_put(np_ddr);
/* Take the highest frequency's timings as 'bypass' */
dmc->bypass_timing_row = dmc->timing_row[idx - 1];
dmc->bypass_timing_data = dmc->timing_data[idx - 1];
dmc->bypass_timing_power = dmc->timing_power[idx - 1];
put_node:
of_node_put(np_ddr);
return ret;
}
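The of_get_dram_timings() rework above is the usual goto-cleanup idiom: the np_ddr reference is taken once, every exit path (including the -ENOMEM returns that previously skipped the of_node_put() entirely) funnels through one put_node label, and the reference is dropped exactly once. A generic sketch of the pattern, with a simple counter standing in for the OF node refcount:

#include <stdio.h>

static int refs;                        /* stands in for the of_node refcount */

static void node_get(void) { refs++; }
static void node_put(void) { refs--; }

/* Take the reference once, then let success and every error path fall
 * through a single cleanup label so it is released exactly once. */
static int setup(int fail_step)
{
        int ret = 0;

        node_get();

        if (fail_step == 1) {
                ret = -12;              /* -ENOMEM */
                goto put_node;
        }
        if (fail_step == 2) {
                ret = -22;              /* -EINVAL */
                goto put_node;
        }
        /* ... more work; success simply falls through ... */

put_node:
        node_put();
        return ret;
}

int main(void)
{
        printf("ok=%d fail=%d leaked refs=%d\n", setup(0), setup(1), refs);
        return 0;
}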


@@ -1355,7 +1355,7 @@ static void msdc_data_xfer_next(struct msdc_host *host, struct mmc_request *mrq)
msdc_request_done(host, mrq);
}
static bool msdc_data_xfer_done(struct msdc_host *host, u32 events,
static void msdc_data_xfer_done(struct msdc_host *host, u32 events,
struct mmc_request *mrq, struct mmc_data *data)
{
struct mmc_command *stop;
@@ -1375,7 +1375,7 @@ static bool msdc_data_xfer_done(struct msdc_host *host, u32 events,
spin_unlock_irqrestore(&host->lock, flags);
if (done)
return true;
return;
stop = data->stop;
if (check_data || (stop && stop->error)) {
@@ -1384,12 +1384,15 @@ static bool msdc_data_xfer_done(struct msdc_host *host, u32 events,
sdr_set_field(host->base + MSDC_DMA_CTRL, MSDC_DMA_CTRL_STOP,
1);
ret = readl_poll_timeout_atomic(host->base + MSDC_DMA_CTRL, val,
!(val & MSDC_DMA_CTRL_STOP), 1, 20000);
if (ret)
dev_dbg(host->dev, "DMA stop timed out\n");
ret = readl_poll_timeout_atomic(host->base + MSDC_DMA_CFG, val,
!(val & MSDC_DMA_CFG_STS), 1, 20000);
if (ret) {
dev_dbg(host->dev, "DMA stop timed out\n");
return false;
}
if (ret)
dev_dbg(host->dev, "DMA inactive timed out\n");
sdr_clr_bits(host->base + MSDC_INTEN, data_ints_mask);
dev_dbg(host->dev, "DMA stop\n");
@@ -1414,9 +1417,7 @@ static bool msdc_data_xfer_done(struct msdc_host *host, u32 events,
}
msdc_data_xfer_next(host, mrq);
done = true;
}
return done;
}
static void msdc_set_buswidth(struct msdc_host *host, u32 width)
@@ -2347,6 +2348,9 @@ static void msdc_cqe_disable(struct mmc_host *mmc, bool recovery)
if (recovery) {
sdr_set_field(host->base + MSDC_DMA_CTRL,
MSDC_DMA_CTRL_STOP, 1);
if (WARN_ON(readl_poll_timeout(host->base + MSDC_DMA_CTRL, val,
!(val & MSDC_DMA_CTRL_STOP), 1, 3000)))
return;
if (WARN_ON(readl_poll_timeout(host->base + MSDC_DMA_CFG, val,
!(val & MSDC_DMA_CFG_STS), 1, 3000)))
return;


@@ -147,6 +147,8 @@ static int sdhci_o2_get_cd(struct mmc_host *mmc)
if (!(sdhci_readw(host, O2_PLL_DLL_WDT_CONTROL1) & O2_PLL_LOCK_STATUS))
sdhci_o2_enable_internal_clock(host);
else
sdhci_o2_wait_card_detect_stable(host);
return !!(sdhci_readl(host, SDHCI_PRESENT_STATE) & SDHCI_CARD_PRESENT);
}


@@ -685,7 +685,7 @@ static void gpmi_nfc_compute_timings(struct gpmi_nand_data *this,
hw->timing0 = BF_GPMI_TIMING0_ADDRESS_SETUP(addr_setup_cycles) |
BF_GPMI_TIMING0_DATA_HOLD(data_hold_cycles) |
BF_GPMI_TIMING0_DATA_SETUP(data_setup_cycles);
hw->timing1 = BF_GPMI_TIMING1_BUSY_TIMEOUT(busy_timeout_cycles * 4096);
hw->timing1 = BF_GPMI_TIMING1_BUSY_TIMEOUT(DIV_ROUND_UP(busy_timeout_cycles, 4096));
/*
* Derive NFC ideal delay from {3}:

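A note on the gpmi-nand change above: the DEVICE_BUSY_TIMEOUT field counts in units of 4096 GPMI clock cycles, so the computed cycle count has to be divided by 4096 (rounded up, so the effective timeout is never shorter than requested), not multiplied by it. A minimal user-space sketch of the arithmetic, with the kernel's DIV_ROUND_UP macro reproduced locally and a made-up cycle count:

#include <stdio.h>

/* Same definition as the kernel macro, reproduced so this builds alone. */
#define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))

int main(void)
{
        unsigned int busy_timeout_cycles = 10000;       /* hypothetical */

        /* Old code: multiplied, even though the field is already in
         * units of 4096 cycles, giving a far larger value than intended. */
        printf("old field value: %u\n", busy_timeout_cycles * 4096);

        /* New code: number of 4096-cycle units, rounded up. */
        printf("new field value: %u\n", DIV_ROUND_UP(busy_timeout_cycles, 4096));
        return 0;
}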

@@ -3474,9 +3474,11 @@ re_arm:
if (!rtnl_trylock())
return;
if (should_notify_peers)
if (should_notify_peers) {
bond->send_peer_notif--;
call_netdevice_notifiers(NETDEV_NOTIFY_PEERS,
bond->dev);
}
if (should_notify_rtnl) {
bond_slave_state_notify(bond);
bond_slave_link_notify(bond);


@@ -2150,6 +2150,42 @@ ice_setup_autoneg(struct ice_port_info *p, struct ethtool_link_ksettings *ks,
return err;
}
/**
* ice_set_phy_type_from_speed - set phy_types based on speeds
* and advertised modes
* @ks: ethtool link ksettings struct
* @phy_type_low: pointer to the lower part of phy_type
* @phy_type_high: pointer to the higher part of phy_type
* @adv_link_speed: targeted link speeds bitmap
*/
static void
ice_set_phy_type_from_speed(const struct ethtool_link_ksettings *ks,
u64 *phy_type_low, u64 *phy_type_high,
u16 adv_link_speed)
{
/* Handle 1000M speed in a special way because ice_update_phy_type
* enables all link modes, but having mixed copper and optical
* standards is not supported.
*/
adv_link_speed &= ~ICE_AQ_LINK_SPEED_1000MB;
if (ethtool_link_ksettings_test_link_mode(ks, advertising,
1000baseT_Full))
*phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_T |
ICE_PHY_TYPE_LOW_1G_SGMII;
if (ethtool_link_ksettings_test_link_mode(ks, advertising,
1000baseKX_Full))
*phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_KX;
if (ethtool_link_ksettings_test_link_mode(ks, advertising,
1000baseX_Full))
*phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_SX |
ICE_PHY_TYPE_LOW_1000BASE_LX;
ice_update_phy_type(phy_type_low, phy_type_high, adv_link_speed);
}
/**
* ice_set_link_ksettings - Set Speed and Duplex
* @netdev: network interface device structure
@@ -2286,7 +2322,8 @@ ice_set_link_ksettings(struct net_device *netdev,
adv_link_speed = curr_link_speed;
/* Convert the advertise link speeds to their corresponded PHY_TYPE */
ice_update_phy_type(&phy_type_low, &phy_type_high, adv_link_speed);
ice_set_phy_type_from_speed(ks, &phy_type_low, &phy_type_high,
adv_link_speed);
if (!autoneg_changed && adv_link_speed == curr_link_speed) {
netdev_info(netdev, "Nothing changed, exiting without setting anything.\n");


@@ -4819,8 +4819,11 @@ static void igb_clean_tx_ring(struct igb_ring *tx_ring)
while (i != tx_ring->next_to_use) {
union e1000_adv_tx_desc *eop_desc, *tx_desc;
/* Free all the Tx ring sk_buffs */
dev_kfree_skb_any(tx_buffer->skb);
/* Free all the Tx ring sk_buffs or xdp frames */
if (tx_buffer->type == IGB_TYPE_SKB)
dev_kfree_skb_any(tx_buffer->skb);
else
xdp_return_frame(tx_buffer->xdpf);
/* unmap skb header data */
dma_unmap_single(tx_ring->dev,
@@ -9820,11 +9823,10 @@ static void igb_init_dmac(struct igb_adapter *adapter, u32 pba)
struct e1000_hw *hw = &adapter->hw;
u32 dmac_thr;
u16 hwm;
u32 reg;
if (hw->mac.type > e1000_82580) {
if (adapter->flags & IGB_FLAG_DMAC) {
u32 reg;
/* force threshold to 0. */
wr32(E1000_DMCTXTH, 0);
@@ -9857,7 +9859,6 @@ static void igb_init_dmac(struct igb_adapter *adapter, u32 pba)
/* Disable BMC-to-OS Watchdog Enable */
if (hw->mac.type != e1000_i354)
reg &= ~E1000_DMACR_DC_BMC2OSW_EN;
wr32(E1000_DMACR, reg);
/* no lower threshold to disable
@@ -9874,12 +9875,12 @@ static void igb_init_dmac(struct igb_adapter *adapter, u32 pba)
*/
wr32(E1000_DMCTXTH, (IGB_MIN_TXPBSIZE -
(IGB_TX_BUF_4096 + adapter->max_frame_size)) >> 6);
}
/* make low power state decision controlled
* by DMA coal
*/
if (hw->mac.type >= e1000_i210 ||
(adapter->flags & IGB_FLAG_DMAC)) {
reg = rd32(E1000_PCIEMISC);
reg &= ~E1000_PCIEMISC_LX_DECISION;
reg |= E1000_PCIEMISC_LX_DECISION;
wr32(E1000_PCIEMISC, reg);
} /* endif adapter->dmac is not disabled */
} else if (hw->mac.type == e1000_82580) {


@@ -34,6 +34,8 @@
#define MDIO_AN_VEND_PROV 0xc400
#define MDIO_AN_VEND_PROV_1000BASET_FULL BIT(15)
#define MDIO_AN_VEND_PROV_1000BASET_HALF BIT(14)
#define MDIO_AN_VEND_PROV_5000BASET_FULL BIT(11)
#define MDIO_AN_VEND_PROV_2500BASET_FULL BIT(10)
#define MDIO_AN_VEND_PROV_DOWNSHIFT_EN BIT(4)
#define MDIO_AN_VEND_PROV_DOWNSHIFT_MASK GENMASK(3, 0)
#define MDIO_AN_VEND_PROV_DOWNSHIFT_DFLT 4
@@ -231,9 +233,20 @@ static int aqr_config_aneg(struct phy_device *phydev)
phydev->advertising))
reg |= MDIO_AN_VEND_PROV_1000BASET_HALF;
/* Handle the case when the 2.5G and 5G speeds are not advertised */
if (linkmode_test_bit(ETHTOOL_LINK_MODE_2500baseT_Full_BIT,
phydev->advertising))
reg |= MDIO_AN_VEND_PROV_2500BASET_FULL;
if (linkmode_test_bit(ETHTOOL_LINK_MODE_5000baseT_Full_BIT,
phydev->advertising))
reg |= MDIO_AN_VEND_PROV_5000BASET_FULL;
ret = phy_modify_mmd_changed(phydev, MDIO_MMD_AN, MDIO_AN_VEND_PROV,
MDIO_AN_VEND_PROV_1000BASET_HALF |
MDIO_AN_VEND_PROV_1000BASET_FULL, reg);
MDIO_AN_VEND_PROV_1000BASET_FULL |
MDIO_AN_VEND_PROV_2500BASET_FULL |
MDIO_AN_VEND_PROV_5000BASET_FULL, reg);
if (ret < 0)
return ret;
if (ret > 0)

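The aquantia change above works because phy_modify_mmd_changed() is a read-modify-write that can only touch bits listed in its mask argument: before the fix the 2.5G/5G provisioning bits were never part of the mask, so once set they could not be cleared again when those speeds were dropped from the advertisement. A simplified sketch (hypothetical register value, local helper with the same mask/set semantics):

#include <stdio.h>
#include <stdint.h>

#define PROV_1000BASET_FULL     (1u << 15)
#define PROV_2500BASET_FULL     (1u << 10)

/* Same read-modify-write phy_modify_mmd_changed() performs: only bits
 * present in 'mask' can change. */
static uint16_t modify(uint16_t reg, uint16_t mask, uint16_t set)
{
        return (reg & ~mask) | set;
}

int main(void)
{
        uint16_t reg = PROV_1000BASET_FULL | PROV_2500BASET_FULL;       /* 2.5G on */
        uint16_t want = PROV_1000BASET_FULL;                            /* 1G only */

        /* Old mask: the 2.5G bit is not in it, so it can never be cleared. */
        printf("old mask -> 0x%04x\n", modify(reg, PROV_1000BASET_FULL, want));

        /* New mask: the 2.5G bit is included, so dropping it clears it. */
        printf("new mask -> 0x%04x\n",
               modify(reg, PROV_1000BASET_FULL | PROV_2500BASET_FULL, want));
        return 0;
}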

@@ -2431,7 +2431,6 @@ static const struct ethtool_ops virtnet_ethtool_ops = {
static void virtnet_freeze_down(struct virtio_device *vdev)
{
struct virtnet_info *vi = vdev->priv;
int i;
/* Make sure no work handler is accessing the device */
flush_work(&vi->config_work);
@@ -2439,14 +2438,8 @@ static void virtnet_freeze_down(struct virtio_device *vdev)
netif_tx_lock_bh(vi->dev);
netif_device_detach(vi->dev);
netif_tx_unlock_bh(vi->dev);
cancel_delayed_work_sync(&vi->refill);
if (netif_running(vi->dev)) {
for (i = 0; i < vi->max_queue_pairs; i++) {
napi_disable(&vi->rq[i].napi);
virtnet_napi_tx_disable(&vi->sq[i].napi);
}
}
if (netif_running(vi->dev))
virtnet_close(vi->dev);
}
static int init_vqs(struct virtnet_info *vi);
@@ -2454,7 +2447,7 @@ static int init_vqs(struct virtnet_info *vi);
static int virtnet_restore_up(struct virtio_device *vdev)
{
struct virtnet_info *vi = vdev->priv;
int err, i;
int err;
err = init_vqs(vi);
if (err)
@@ -2463,15 +2456,9 @@ static int virtnet_restore_up(struct virtio_device *vdev)
virtio_device_ready(vdev);
if (netif_running(vi->dev)) {
for (i = 0; i < vi->curr_queue_pairs; i++)
if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
schedule_delayed_work(&vi->refill, 0);
for (i = 0; i < vi->max_queue_pairs; i++) {
virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
virtnet_napi_tx_enable(vi, vi->sq[i].vq,
&vi->sq[i].napi);
}
err = virtnet_open(vi->dev);
if (err)
return err;
}
netif_tx_lock_bh(vi->dev);


@@ -2475,6 +2475,34 @@ static const struct nvme_core_quirk_entry core_quirks[] = {
.vid = 0x14a4,
.fr = "22301111",
.quirks = NVME_QUIRK_SIMPLE_SUSPEND,
},
{
/*
* This Kioxia CD6-V Series / HPE PE8030 device times out and
* aborts I/O during any load, but more easily reproducible
* with discards (fstrim).
*
* The device is left in a state where it is also not possible
* to use "nvme set-feature" to disable APST, but booting with
* nvme_core.default_ps_max_latency=0 works.
*/
.vid = 0x1e0f,
.mn = "KCD6XVUL6T40",
.quirks = NVME_QUIRK_NO_APST,
},
{
/*
* The external Samsung X5 SSD fails initialization without a
* delay before checking if it is ready and has a whole set of
* other problems. To make this even more interesting, it
* shares the PCI ID with internal Samsung 970 Evo Plus that
* does not need or want these quirks.
*/
.vid = 0x144d,
.mn = "Samsung Portable SSD X5",
.quirks = NVME_QUIRK_DELAY_BEFORE_CHK_RDY |
NVME_QUIRK_NO_DEEPEST_PS |
NVME_QUIRK_IGNORE_DEV_SUBNQN,
}
};


@@ -3380,10 +3380,6 @@ static const struct pci_device_id nvme_id_table[] = {
NVME_QUIRK_128_BYTES_SQES |
NVME_QUIRK_SHARED_TAGS |
NVME_QUIRK_SKIP_CID_GEN },
{ PCI_DEVICE(0x144d, 0xa808), /* Samsung X5 */
.driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY|
NVME_QUIRK_NO_DEEPEST_PS |
NVME_QUIRK_IGNORE_DEV_SUBNQN, },
{ PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_EXPRESS, 0xffffff) },
{ 0, }
};


@@ -160,8 +160,8 @@ static void ibmvfc_npiv_logout(struct ibmvfc_host *);
static void ibmvfc_tgt_implicit_logout_and_del(struct ibmvfc_target *);
static void ibmvfc_tgt_move_login(struct ibmvfc_target *);
static void ibmvfc_release_sub_crqs(struct ibmvfc_host *);
static void ibmvfc_init_sub_crqs(struct ibmvfc_host *);
static void ibmvfc_dereg_sub_crqs(struct ibmvfc_host *);
static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *);
static const char *unknown_error = "unknown error";
@@ -917,7 +917,7 @@ static int ibmvfc_reenable_crq_queue(struct ibmvfc_host *vhost)
struct vio_dev *vdev = to_vio_dev(vhost->dev);
unsigned long flags;
ibmvfc_release_sub_crqs(vhost);
ibmvfc_dereg_sub_crqs(vhost);
/* Re-enable the CRQ */
do {
@@ -936,7 +936,7 @@ static int ibmvfc_reenable_crq_queue(struct ibmvfc_host *vhost)
spin_unlock(vhost->crq.q_lock);
spin_unlock_irqrestore(vhost->host->host_lock, flags);
ibmvfc_init_sub_crqs(vhost);
ibmvfc_reg_sub_crqs(vhost);
return rc;
}
@@ -955,7 +955,7 @@ static int ibmvfc_reset_crq(struct ibmvfc_host *vhost)
struct vio_dev *vdev = to_vio_dev(vhost->dev);
struct ibmvfc_queue *crq = &vhost->crq;
ibmvfc_release_sub_crqs(vhost);
ibmvfc_dereg_sub_crqs(vhost);
/* Close the CRQ */
do {
@@ -988,7 +988,7 @@ static int ibmvfc_reset_crq(struct ibmvfc_host *vhost)
spin_unlock(vhost->crq.q_lock);
spin_unlock_irqrestore(vhost->host->host_lock, flags);
ibmvfc_init_sub_crqs(vhost);
ibmvfc_reg_sub_crqs(vhost);
return rc;
}
@@ -5680,6 +5680,8 @@ static int ibmvfc_alloc_queue(struct ibmvfc_host *vhost,
queue->cur = 0;
queue->fmt = fmt;
queue->size = PAGE_SIZE / fmt_size;
queue->vhost = vhost;
return 0;
}
@@ -5755,9 +5757,6 @@ static int ibmvfc_register_scsi_channel(struct ibmvfc_host *vhost,
ENTER;
if (ibmvfc_alloc_queue(vhost, scrq, IBMVFC_SUB_CRQ_FMT))
return -ENOMEM;
rc = h_reg_sub_crq(vdev->unit_address, scrq->msg_token, PAGE_SIZE,
&scrq->cookie, &scrq->hw_irq);
@@ -5788,7 +5787,6 @@ static int ibmvfc_register_scsi_channel(struct ibmvfc_host *vhost,
}
scrq->hwq_id = index;
scrq->vhost = vhost;
LEAVE;
return 0;
@@ -5798,7 +5796,6 @@ irq_failed:
rc = plpar_hcall_norets(H_FREE_SUB_CRQ, vdev->unit_address, scrq->cookie);
} while (rtas_busy_delay(rc));
reg_failed:
ibmvfc_free_queue(vhost, scrq);
LEAVE;
return rc;
}
@@ -5824,12 +5821,50 @@ static void ibmvfc_deregister_scsi_channel(struct ibmvfc_host *vhost, int index)
if (rc)
dev_err(dev, "Failed to free sub-crq[%d]: rc=%ld\n", index, rc);
ibmvfc_free_queue(vhost, scrq);
/* Clean out the queue */
memset(scrq->msgs.crq, 0, PAGE_SIZE);
scrq->cur = 0;
LEAVE;
}
static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *vhost)
{
int i, j;
ENTER;
if (!vhost->mq_enabled || !vhost->scsi_scrqs.scrqs)
return;
for (i = 0; i < nr_scsi_hw_queues; i++) {
if (ibmvfc_register_scsi_channel(vhost, i)) {
for (j = i; j > 0; j--)
ibmvfc_deregister_scsi_channel(vhost, j - 1);
vhost->do_enquiry = 0;
return;
}
}
LEAVE;
}
static void ibmvfc_dereg_sub_crqs(struct ibmvfc_host *vhost)
{
int i;
ENTER;
if (!vhost->mq_enabled || !vhost->scsi_scrqs.scrqs)
return;
for (i = 0; i < nr_scsi_hw_queues; i++)
ibmvfc_deregister_scsi_channel(vhost, i);
LEAVE;
}
static void ibmvfc_init_sub_crqs(struct ibmvfc_host *vhost)
{
struct ibmvfc_queue *scrq;
int i, j;
ENTER;
@@ -5845,30 +5880,41 @@ static void ibmvfc_init_sub_crqs(struct ibmvfc_host *vhost)
}
for (i = 0; i < nr_scsi_hw_queues; i++) {
if (ibmvfc_register_scsi_channel(vhost, i)) {
for (j = i; j > 0; j--)
ibmvfc_deregister_scsi_channel(vhost, j - 1);
scrq = &vhost->scsi_scrqs.scrqs[i];
if (ibmvfc_alloc_queue(vhost, scrq, IBMVFC_SUB_CRQ_FMT)) {
for (j = i; j > 0; j--) {
scrq = &vhost->scsi_scrqs.scrqs[j - 1];
ibmvfc_free_queue(vhost, scrq);
}
kfree(vhost->scsi_scrqs.scrqs);
vhost->scsi_scrqs.scrqs = NULL;
vhost->scsi_scrqs.active_queues = 0;
vhost->do_enquiry = 0;
break;
vhost->mq_enabled = 0;
return;
}
}
ibmvfc_reg_sub_crqs(vhost);
LEAVE;
}
static void ibmvfc_release_sub_crqs(struct ibmvfc_host *vhost)
{
struct ibmvfc_queue *scrq;
int i;
ENTER;
if (!vhost->scsi_scrqs.scrqs)
return;
for (i = 0; i < nr_scsi_hw_queues; i++)
ibmvfc_deregister_scsi_channel(vhost, i);
ibmvfc_dereg_sub_crqs(vhost);
for (i = 0; i < nr_scsi_hw_queues; i++) {
scrq = &vhost->scsi_scrqs.scrqs[i];
ibmvfc_free_queue(vhost, scrq);
}
kfree(vhost->scsi_scrqs.scrqs);
vhost->scsi_scrqs.scrqs = NULL;


@@ -789,6 +789,7 @@ struct ibmvfc_queue {
spinlock_t _lock;
spinlock_t *q_lock;
struct ibmvfc_host *vhost;
struct ibmvfc_event_pool evt_pool;
struct list_head sent;
struct list_head free;
@@ -797,7 +798,6 @@ struct ibmvfc_queue {
union ibmvfc_iu cancel_rsp;
/* Sub-CRQ fields */
struct ibmvfc_host *vhost;
unsigned long cookie;
unsigned long vios_cookie;
unsigned long hw_irq;


@@ -2785,6 +2785,24 @@ static void zbc_open_zone(struct sdebug_dev_info *devip,
}
}
static inline void zbc_set_zone_full(struct sdebug_dev_info *devip,
struct sdeb_zone_state *zsp)
{
switch (zsp->z_cond) {
case ZC2_IMPLICIT_OPEN:
devip->nr_imp_open--;
break;
case ZC3_EXPLICIT_OPEN:
devip->nr_exp_open--;
break;
default:
WARN_ONCE(true, "Invalid zone %llu condition %x\n",
zsp->z_start, zsp->z_cond);
break;
}
zsp->z_cond = ZC5_FULL;
}
static void zbc_inc_wp(struct sdebug_dev_info *devip,
unsigned long long lba, unsigned int num)
{
@@ -2797,7 +2815,7 @@ static void zbc_inc_wp(struct sdebug_dev_info *devip,
if (zsp->z_type == ZBC_ZTYPE_SWR) {
zsp->z_wp += num;
if (zsp->z_wp >= zend)
zsp->z_cond = ZC5_FULL;
zbc_set_zone_full(devip, zsp);
return;
}
@@ -2816,7 +2834,7 @@ static void zbc_inc_wp(struct sdebug_dev_info *devip,
n = num;
}
if (zsp->z_wp >= zend)
zsp->z_cond = ZC5_FULL;
zbc_set_zone_full(devip, zsp);
num -= n;
lba += n;

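In the scsi_debug hunks above, a zone that fills up leaves the implicitly or explicitly open state, so the matching open-zone counter has to be decremented before the condition is set to full; otherwise the emulated open-zone accounting drifts a little further with every zone that fills. A reduced sketch of that transition (enum values are illustrative, not the ZBC encodings):

#include <stdio.h>

enum zone_cond { ZC1_EMPTY, ZC2_IMPLICIT_OPEN, ZC3_EXPLICIT_OPEN, ZC5_FULL };

struct dev_info {
        int nr_imp_open;
        int nr_exp_open;
};

/* Mirrors the new zbc_set_zone_full(): leaving an open state must also
 * release the corresponding open-zone budget. */
static void set_zone_full(struct dev_info *devip, enum zone_cond *cond)
{
        switch (*cond) {
        case ZC2_IMPLICIT_OPEN:
                devip->nr_imp_open--;
                break;
        case ZC3_EXPLICIT_OPEN:
                devip->nr_exp_open--;
                break;
        default:
                break;
        }
        *cond = ZC5_FULL;
}

int main(void)
{
        struct dev_info d = { .nr_imp_open = 1, .nr_exp_open = 0 };
        enum zone_cond c = ZC2_IMPLICIT_OPEN;

        set_zone_full(&d, &c);
        printf("cond=%d nr_imp_open=%d nr_exp_open=%d\n", c, d.nr_imp_open, d.nr_exp_open);
        return 0;
}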

@@ -213,7 +213,12 @@ iscsi_create_endpoint(int dd_size)
return NULL;
mutex_lock(&iscsi_ep_idr_mutex);
id = idr_alloc(&iscsi_ep_idr, ep, 0, -1, GFP_NOIO);
/*
* First endpoint id should be 1 to comply with user space
* applications (iscsid).
*/
id = idr_alloc(&iscsi_ep_idr, ep, 1, -1, GFP_NOIO);
if (id < 0) {
mutex_unlock(&iscsi_ep_idr_mutex);
printk(KERN_ERR "Could not allocate endpoint ID. Error %d.\n",


@@ -1907,7 +1907,7 @@ static struct scsi_host_template scsi_driver = {
.cmd_per_lun = 2048,
.this_id = -1,
/* Ensure there are no gaps in presented sgls */
.virt_boundary_mask = PAGE_SIZE-1,
.virt_boundary_mask = HV_HYP_PAGE_SIZE - 1,
.no_write_same = 1,
.track_queue_depth = 1,
.change_queue_depth = storvsc_change_queue_depth,
@@ -1961,6 +1961,7 @@ static int storvsc_probe(struct hv_device *device,
int max_targets;
int max_channels;
int max_sub_channels = 0;
u32 max_xfer_bytes;
/*
* Based on the windows host we are running on,
@@ -2049,12 +2050,28 @@ static int storvsc_probe(struct hv_device *device,
}
/* max cmd length */
host->max_cmd_len = STORVSC_MAX_CMD_LEN;
/*
* set the table size based on the info we got
* from the host.
* Any reasonable Hyper-V configuration should provide
* max_transfer_bytes value aligning to HV_HYP_PAGE_SIZE,
* protecting it from any weird value.
*/
host->sg_tablesize = (stor_device->max_transfer_bytes >> PAGE_SHIFT);
max_xfer_bytes = round_down(stor_device->max_transfer_bytes, HV_HYP_PAGE_SIZE);
/* max_hw_sectors_kb */
host->max_sectors = max_xfer_bytes >> 9;
/*
* There are 2 requirements for Hyper-V storvsc sgl segments,
* based on which the below calculation for max segments is
* done:
*
* 1. Except for the first and last sgl segment, all sgl segments
* should be align to HV_HYP_PAGE_SIZE, that also means the
* maximum number of segments in a sgl can be calculated by
* dividing the total max transfer length by HV_HYP_PAGE_SIZE.
*
* 2. Except for the first and last, each entry in the SGL must
* have an offset that is a multiple of HV_HYP_PAGE_SIZE.
*/
host->sg_tablesize = (max_xfer_bytes >> HV_HYP_PAGE_SHIFT) + 1;
/*
* For non-IDE disks, the host supports multiple channels.
* Set the number of HW queues we are supporting.

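The storvsc hunk above derives the SCSI host limits from the host-reported max_transfer_bytes using the fixed 4 KiB Hyper-V page size (HV_HYP_PAGE_SIZE) rather than the guest PAGE_SIZE, which can be larger on arm64: the reported value is rounded down to a Hyper-V page multiple, max_sectors is that byte count in 512-byte sectors, and sg_tablesize is the Hyper-V page count plus one extra entry. A user-space sketch of the same arithmetic with a made-up max_transfer_bytes:

#include <stdio.h>

#define HV_HYP_PAGE_SHIFT       12                      /* Hyper-V pages are 4 KiB */
#define HV_HYP_PAGE_SIZE        (1UL << HV_HYP_PAGE_SHIFT)

/* What the kernel's round_down() does for power-of-two alignments. */
static unsigned long round_down_pow2(unsigned long x, unsigned long align)
{
        return x & ~(align - 1);
}

int main(void)
{
        unsigned long max_transfer_bytes = 0x40800;     /* hypothetical host value */
        unsigned long max_xfer_bytes =
                round_down_pow2(max_transfer_bytes, HV_HYP_PAGE_SIZE);

        printf("max_xfer_bytes     = %lu\n", max_xfer_bytes);
        printf("max_sectors        = %lu (512-byte sectors)\n", max_xfer_bytes >> 9);
        printf("sg_tablesize       = %lu\n", (max_xfer_bytes >> HV_HYP_PAGE_SHIFT) + 1);
        printf("virt_boundary_mask = 0x%lx\n", HV_HYP_PAGE_SIZE - 1);
        return 0;
}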

@@ -783,6 +783,7 @@ static int brcmstb_pm_probe(struct platform_device *pdev)
}
ret = brcmstb_init_sram(dn);
of_node_put(dn);
if (ret) {
pr_err("error setting up SRAM for PM\n");
return ret;


@@ -1040,6 +1040,9 @@ isr_setup_status_complete(struct usb_ep *ep, struct usb_request *req)
struct ci_hdrc *ci = req->context;
unsigned long flags;
if (req->status < 0)
return;
if (ci->setaddr) {
hw_usb_set_address(ci, ci->address);
ci->setaddr = false;


@@ -11,6 +11,7 @@
#include <linux/ctype.h>
#include <linux/debugfs.h>
#include <linux/delay.h>
#include <linux/idr.h>
#include <linux/kref.h>
#include <linux/miscdevice.h>
#include <linux/module.h>
@@ -36,6 +37,9 @@ MODULE_LICENSE("GPL");
/*----------------------------------------------------------------------*/
static DEFINE_IDA(driver_id_numbers);
#define DRIVER_DRIVER_NAME_LENGTH_MAX 32
#define RAW_EVENT_QUEUE_SIZE 16
struct raw_event_queue {
@@ -161,6 +165,9 @@ struct raw_dev {
/* Reference to misc device: */
struct device *dev;
/* Make driver names unique */
int driver_id_number;
/* Protected by lock: */
enum dev_state state;
bool gadget_registered;
@@ -189,6 +196,7 @@ static struct raw_dev *dev_new(void)
spin_lock_init(&dev->lock);
init_completion(&dev->ep0_done);
raw_event_queue_init(&dev->queue);
dev->driver_id_number = -1;
return dev;
}
@@ -199,6 +207,9 @@ static void dev_free(struct kref *kref)
kfree(dev->udc_name);
kfree(dev->driver.udc_name);
kfree(dev->driver.driver.name);
if (dev->driver_id_number >= 0)
ida_free(&driver_id_numbers, dev->driver_id_number);
if (dev->req) {
if (dev->ep0_urb_queued)
usb_ep_dequeue(dev->gadget->ep0, dev->req);
@@ -419,9 +430,11 @@ out_put:
static int raw_ioctl_init(struct raw_dev *dev, unsigned long value)
{
int ret = 0;
int driver_id_number;
struct usb_raw_init arg;
char *udc_driver_name;
char *udc_device_name;
char *driver_driver_name;
unsigned long flags;
if (copy_from_user(&arg, (void __user *)value, sizeof(arg)))
@@ -440,36 +453,43 @@ static int raw_ioctl_init(struct raw_dev *dev, unsigned long value)
return -EINVAL;
}
driver_id_number = ida_alloc(&driver_id_numbers, GFP_KERNEL);
if (driver_id_number < 0)
return driver_id_number;
driver_driver_name = kmalloc(DRIVER_DRIVER_NAME_LENGTH_MAX, GFP_KERNEL);
if (!driver_driver_name) {
ret = -ENOMEM;
goto out_free_driver_id_number;
}
snprintf(driver_driver_name, DRIVER_DRIVER_NAME_LENGTH_MAX,
DRIVER_NAME ".%d", driver_id_number);
udc_driver_name = kmalloc(UDC_NAME_LENGTH_MAX, GFP_KERNEL);
if (!udc_driver_name)
return -ENOMEM;
if (!udc_driver_name) {
ret = -ENOMEM;
goto out_free_driver_driver_name;
}
ret = strscpy(udc_driver_name, &arg.driver_name[0],
UDC_NAME_LENGTH_MAX);
if (ret < 0) {
kfree(udc_driver_name);
return ret;
}
if (ret < 0)
goto out_free_udc_driver_name;
ret = 0;
udc_device_name = kmalloc(UDC_NAME_LENGTH_MAX, GFP_KERNEL);
if (!udc_device_name) {
kfree(udc_driver_name);
return -ENOMEM;
ret = -ENOMEM;
goto out_free_udc_driver_name;
}
ret = strscpy(udc_device_name, &arg.device_name[0],
UDC_NAME_LENGTH_MAX);
if (ret < 0) {
kfree(udc_driver_name);
kfree(udc_device_name);
return ret;
}
if (ret < 0)
goto out_free_udc_device_name;
ret = 0;
spin_lock_irqsave(&dev->lock, flags);
if (dev->state != STATE_DEV_OPENED) {
dev_dbg(dev->dev, "fail, device is not opened\n");
kfree(udc_driver_name);
kfree(udc_device_name);
ret = -EINVAL;
goto out_unlock;
}
@@ -484,14 +504,25 @@ static int raw_ioctl_init(struct raw_dev *dev, unsigned long value)
dev->driver.suspend = gadget_suspend;
dev->driver.resume = gadget_resume;
dev->driver.reset = gadget_reset;
dev->driver.driver.name = DRIVER_NAME;
dev->driver.driver.name = driver_driver_name;
dev->driver.udc_name = udc_device_name;
dev->driver.match_existing_only = 1;
dev->driver_id_number = driver_id_number;
dev->state = STATE_DEV_INITIALIZED;
spin_unlock_irqrestore(&dev->lock, flags);
return ret;
out_unlock:
spin_unlock_irqrestore(&dev->lock, flags);
out_free_udc_device_name:
kfree(udc_device_name);
out_free_udc_driver_name:
kfree(udc_driver_name);
out_free_driver_driver_name:
kfree(driver_driver_name);
out_free_driver_id_number:
ida_free(&driver_id_numbers, driver_id_number);
return ret;
}
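The raw_gadget change above gives each bound instance a unique gadget driver name by combining an IDA-allocated number with the fixed DRIVER_NAME prefix, and frees the number again on the error paths and in dev_free(). A rough sketch of just the naming step, with a plain counter standing in for the IDA:

#include <stdio.h>
#include <stdlib.h>

#define DRIVER_NAME                     "raw-gadget"
#define DRIVER_DRIVER_NAME_LENGTH_MAX   32

static int next_id;     /* stand-in for ida_alloc(&driver_id_numbers, ...) */

static char *make_driver_name(void)
{
        char *name = malloc(DRIVER_DRIVER_NAME_LENGTH_MAX);

        if (!name)
                return NULL;
        snprintf(name, DRIVER_DRIVER_NAME_LENGTH_MAX, DRIVER_NAME ".%d", next_id++);
        return name;
}

int main(void)
{
        char *a = make_driver_name();
        char *b = make_driver_name();

        if (a && b)
                printf("%s %s\n", a, b);        /* two instances, two names */
        free(a);
        free(b);
        return 0;
}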


@@ -652,7 +652,7 @@ struct xhci_hub *xhci_get_rhub(struct usb_hcd *hcd)
* It will release and re-aquire the lock while calling ACPI
* method.
*/
static void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd,
void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd,
u16 index, bool on, unsigned long *flags)
__must_hold(&xhci->lock)
{


@@ -61,6 +61,8 @@
#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI 0x461e
#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI 0x464e
#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI 0x51ed
#define PCI_DEVICE_ID_INTEL_RAPTOR_LAKE_XHCI 0xa71e
#define PCI_DEVICE_ID_INTEL_METEOR_LAKE_XHCI 0x7ec0
#define PCI_DEVICE_ID_AMD_RENOIR_XHCI 0x1639
#define PCI_DEVICE_ID_AMD_PROMONTORYA_4 0x43b9
@@ -270,7 +272,9 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI))
pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_RAPTOR_LAKE_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_METEOR_LAKE_XHCI))
xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
if (pdev->vendor == PCI_VENDOR_ID_ETRON &&


@@ -774,6 +774,8 @@ static void xhci_stop(struct usb_hcd *hcd)
void xhci_shutdown(struct usb_hcd *hcd)
{
struct xhci_hcd *xhci = hcd_to_xhci(hcd);
unsigned long flags;
int i;
if (xhci->quirks & XHCI_SPURIOUS_REBOOT)
usb_disable_xhci_ports(to_pci_dev(hcd->self.sysdev));
@@ -789,12 +791,21 @@ void xhci_shutdown(struct usb_hcd *hcd)
del_timer_sync(&xhci->shared_hcd->rh_timer);
}
spin_lock_irq(&xhci->lock);
spin_lock_irqsave(&xhci->lock, flags);
xhci_halt(xhci);
/* Power off USB2 ports*/
for (i = 0; i < xhci->usb2_rhub.num_ports; i++)
xhci_set_port_power(xhci, xhci->main_hcd, i, false, &flags);
/* Power off USB3 ports*/
for (i = 0; i < xhci->usb3_rhub.num_ports; i++)
xhci_set_port_power(xhci, xhci->shared_hcd, i, false, &flags);
/* Workaround for spurious wakeups at shutdown with HSW */
if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
xhci_reset(xhci, XHCI_RESET_SHORT_USEC);
spin_unlock_irq(&xhci->lock);
spin_unlock_irqrestore(&xhci->lock, flags);
xhci_cleanup_msix(xhci);


@@ -2174,6 +2174,8 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, u16 wIndex,
int xhci_hub_status_data(struct usb_hcd *hcd, char *buf);
int xhci_find_raw_port_number(struct usb_hcd *hcd, int port1);
struct xhci_hub *xhci_get_rhub(struct usb_hcd *hcd);
void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd, u16 index,
bool on, unsigned long *flags);
void xhci_hc_died(struct xhci_hcd *xhci);


@@ -252,10 +252,12 @@ static void option_instat_callback(struct urb *urb);
#define QUECTEL_PRODUCT_EG95 0x0195
#define QUECTEL_PRODUCT_BG96 0x0296
#define QUECTEL_PRODUCT_EP06 0x0306
#define QUECTEL_PRODUCT_EM05G 0x030a
#define QUECTEL_PRODUCT_EM12 0x0512
#define QUECTEL_PRODUCT_RM500Q 0x0800
#define QUECTEL_PRODUCT_EC200S_CN 0x6002
#define QUECTEL_PRODUCT_EC200T 0x6026
#define QUECTEL_PRODUCT_RM500K 0x7001
#define CMOTECH_VENDOR_ID 0x16d8
#define CMOTECH_PRODUCT_6001 0x6001
@@ -1134,6 +1136,8 @@ static const struct usb_device_id option_ids[] = {
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff),
.driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0, 0) },
{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G, 0xff),
.driver_info = RSVD(6) | ZLP },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0xff, 0xff),
.driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) },
@@ -1147,6 +1151,7 @@ static const struct usb_device_id option_ids[] = {
.driver_info = ZLP },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500K, 0xff, 0x00, 0x00) },
{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) },
{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_300) },
@@ -1279,6 +1284,7 @@ static const struct usb_device_id option_ids[] = {
.driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1231, 0xff), /* Telit LE910Cx (RNDIS) */
.driver_info = NCTRL(2) | RSVD(3) },
{ USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x1250, 0xff, 0x00, 0x00) }, /* Telit LE910Cx (rmnet) */
{ USB_DEVICE(TELIT_VENDOR_ID, 0x1260),
.driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
{ USB_DEVICE(TELIT_VENDOR_ID, 0x1261),


@@ -436,22 +436,27 @@ static int pl2303_detect_type(struct usb_serial *serial)
break;
case 0x200:
switch (bcdDevice) {
case 0x100:
case 0x100: /* GC */
case 0x105:
return TYPE_HXN;
case 0x300: /* GT / TA */
if (pl2303_supports_hx_status(serial))
return TYPE_TA;
fallthrough;
case 0x305:
case 0x400: /* GL */
case 0x405:
return TYPE_HXN;
case 0x500: /* GE / TB */
if (pl2303_supports_hx_status(serial))
return TYPE_TB;
fallthrough;
case 0x505:
case 0x600: /* GS */
case 0x605:
/*
* Assume it's an HXN-type if the device doesn't
* support the old read request value.
*/
if (!pl2303_supports_hx_status(serial))
return TYPE_HXN;
break;
case 0x300:
return TYPE_TA;
case 0x500:
return TYPE_TB;
case 0x700: /* GR */
case 0x705:
return TYPE_HXN;
}
break;
}


@@ -56,7 +56,6 @@ config TYPEC_WCOVE
tristate "Intel WhiskeyCove PMIC USB Type-C PHY driver"
depends on ACPI
depends on MFD_INTEL_PMC_BXT
depends on INTEL_SOC_PMIC
depends on BXT_WC_PMIC_OPREGION
help
This driver adds support for USB Type-C on Intel Broxton platforms


@@ -1127,6 +1127,7 @@ int sti_call(const struct sti_struct *sti, unsigned long func,
return ret;
}
#if defined(CONFIG_FB_STI)
/* check if given fb_info is the primary device */
int fb_is_primary_device(struct fb_info *info)
{
@@ -1142,6 +1143,7 @@ int fb_is_primary_device(struct fb_info *info)
return (sti->info == info);
}
EXPORT_SYMBOL(fb_is_primary_device);
#endif
MODULE_AUTHOR("Philipp Rumpf, Helge Deller, Thomas Bogendoerfer");
MODULE_DESCRIPTION("Core STI driver for HP's NGLE series graphics cards in HP PARISC machines");


@@ -42,7 +42,7 @@ void xen_setup_features(void)
if (HYPERVISOR_xen_version(XENVER_get_features, &fi) < 0)
break;
for (j = 0; j < 32; j++)
xen_features[i * 32 + j] = !!(fi.submap & 1<<j);
xen_features[i * 32 + j] = !!(fi.submap & 1U << j);
}
if (xen_pv_domain()) {

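The xen/features.c change replaces 1<<j with 1U << j: with a signed int constant, shifting a 1 into bit 31 is undefined behaviour in C, while the unsigned constant keeps the test well defined for all 32 submap bits. A stand-alone illustration of the well-defined form:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint32_t submap = 0x80000000u;  /* hypothetical feature map, bit 31 set */
        int j = 31;

        /*
         * With a plain int constant, 1 << 31 shifts into the sign bit of a
         * 32-bit int, which UBSan reports as "left shift of 1 by 31 places
         * cannot be represented in type 'int'". The unsigned constant makes
         * the whole expression well defined.
         */
        printf("feature bit %d set: %d\n", j, !!(submap & (1U << j)));
        return 0;
}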