Merge 5.15.78 into android14-5.15

Changes in 5.15.78
	scsi: lpfc: Adjust bytes received vales during cmf timer interval
	scsi: lpfc: Adjust CMF total bytes and rxmonitor
	scsi: lpfc: Rework MIB Rx Monitor debug info logic
	serial: ar933x: Deassert Transmit Enable on ->rs485_config()
	KVM: x86: Trace re-injected exceptions
	KVM: x86: Treat #DBs from the emulator as fault-like (code and DR7.GD=1)
	drm/amd/display: explicitly disable psr_feature_enable appropriately
	mm/hugetlb: fix races when looking up a CONT-PTE/PMD size hugetlb page
	HID: playstation: add initial DualSense Edge controller support
	KVM: x86: Protect the unused bits in MSR exiting flags
	KVM: x86: Copy filter arg outside kvm_vm_ioctl_set_msr_filter()
	KVM: x86: Add compat handler for KVM_X86_SET_MSR_FILTER
	RDMA/cma: Use output interface for net_dev check
	IB/hfi1: Correctly move list in sc_disable()
	RDMA/hns: Remove magic number
	RDMA/hns: Use hr_reg_xxx() instead of remaining roce_set_xxx()
	RDMA/hns: Disable local invalidate operation
	NFSv4: Fix a potential state reclaim deadlock
	NFSv4.1: Handle RECLAIM_COMPLETE trunking errors
	NFSv4.1: We must always send RECLAIM_COMPLETE after a reboot
	SUNRPC: Fix null-ptr-deref when xps sysfs alloc failed
	NFSv4.2: Fixup CLONE dest file size for zero-length count
	nfs4: Fix kmemleak when allocate slot failed
	net: dsa: Fix possible memory leaks in dsa_loop_init()
	RDMA/core: Fix null-ptr-deref in ib_core_cleanup()
	RDMA/qedr: clean up work queue on failure in qedr_alloc_resources()
	net: dsa: fall back to default tagger if we can't load the one from DT
	nfc: fdp: Fix potential memory leak in fdp_nci_send()
	nfc: nxp-nci: Fix potential memory leak in nxp_nci_send()
	nfc: s3fwrn5: Fix potential memory leak in s3fwrn5_nci_send()
	nfc: nfcmrvl: Fix potential memory leak in nfcmrvl_i2c_nci_send()
	net: fec: fix improper use of NETDEV_TX_BUSY
	ata: pata_legacy: fix pdc20230_set_piomode()
	net: sched: Fix use after free in red_enqueue()
	net: tun: fix bugs for oversize packet when napi frags enabled
	netfilter: nf_tables: netlink notifier might race to release objects
	netfilter: nf_tables: release flow rule object from commit path
	ipvs: use explicitly signed chars
	ipvs: fix WARNING in __ip_vs_cleanup_batch()
	ipvs: fix WARNING in ip_vs_app_net_cleanup()
	rose: Fix NULL pointer dereference in rose_send_frame()
	mISDN: fix possible memory leak in mISDN_register_device()
	isdn: mISDN: netjet: fix wrong check of device registration
	btrfs: fix inode list leak during backref walking at resolve_indirect_refs()
	btrfs: fix inode list leak during backref walking at find_parent_nodes()
	btrfs: fix ulist leaks in error paths of qgroup self tests
	netfilter: ipset: enforce documented limit to prevent allocating huge memory
	Bluetooth: L2CAP: Fix use-after-free caused by l2cap_reassemble_sdu
	Bluetooth: virtio_bt: Use skb_put to set length
	Bluetooth: L2CAP: fix use-after-free in l2cap_conn_del()
	Bluetooth: L2CAP: Fix memory leak in vhci_write
	net: mdio: fix undefined behavior in bit shift for __mdiobus_register
	ibmvnic: Free rwi on reset success
	stmmac: dwmac-loongson: fix invalid mdio_node
	net/smc: Fix possible leaked pernet namespace in smc_init()
	net, neigh: Fix null-ptr-deref in neigh_table_clear()
	ipv6: fix WARNING in ip6_route_net_exit_late()
	vsock: fix possible infinite sleep in vsock_connectible_wait_data()
	drm/msm/hdmi: Remove spurious IRQF_ONESHOT flag
	drm/msm/hdmi: fix IRQ lifetime
	video/fbdev/stifb: Implement the stifb_fillrect() function
	fbdev: stifb: Fall back to cfb_fillrect() on 32-bit HCRX cards
	mtd: parsers: bcm47xxpart: print correct offset on read error
	mtd: parsers: bcm47xxpart: Fix halfblock reads
	s390/uaccess: add missing EX_TABLE entries to __clear_user()
	s390/boot: add secure boot trailer
	s390/cio: derive cdev information only for IO-subchannels
	s390/cio: fix out-of-bounds access on cio_ignore free
	media: rkisp1: Don't pass the quantization to rkisp1_csm_config()
	media: rkisp1: Initialize color space on resizer sink and source pads
	media: rkisp1: Use correct macro for gradient registers
	media: rkisp1: Zero v4l2_subdev_format fields in when validating links
	media: s5p_cec: limit msg.len to CEC_MAX_MSG_SIZE
	media: cros-ec-cec: limit msg.len to CEC_MAX_MSG_SIZE
	media: dvb-frontends/drxk: initialize err to 0
	media: meson: vdec: fix possible refcount leak in vdec_probe()
	media: v4l: subdev: Fail graciously when getting try data for NULL state
	ACPI: APEI: Fix integer overflow in ghes_estatus_pool_init()
	scsi: core: Restrict legal sdev_state transitions via sysfs
	HID: saitek: add madcatz variant of MMO7 mouse device ID
	drm/amdgpu: set vm_update_mode=0 as default for Sienna Cichlid in SRIOV case
	i2c: xiic: Add platform module alias
	efi/tpm: Pass correct address to memblock_reserve
	clk: qcom: Update the force mem core bit for GPU clocks
	ARM: dts: imx6qdl-gw59{10,13}: fix user pushbutton GPIO offset
	arm64: dts: imx8: correct clock order
	arm64: dts: lx2160a: specify clock frequencies for the MDIO controllers
	arm64: dts: ls1088a: specify clock frequencies for the MDIO controllers
	arm64: dts: ls208xa: specify clock frequencies for the MDIO controllers
	block: Fix possible memory leak for rq_wb on add_disk failure
	firmware: arm_scmi: Suppress the driver's bind attributes
	firmware: arm_scmi: Make Rx chan_setup fail on memory errors
	firmware: arm_scmi: Fix devres allocation device in virtio transport
	arm64: dts: juno: Add thermal critical trip points
	i2c: piix4: Fix adapter not be removed in piix4_remove()
	Bluetooth: L2CAP: Fix accepting connection request for invalid SPSM
	Bluetooth: L2CAP: Fix attempting to access uninitialized memory
	block, bfq: protect 'bfqd->queued' by 'bfqd->lock'
	af_unix: Fix memory leaks of the whole sk due to OOB skb.
	fscrypt: stop using keyrings subsystem for fscrypt_master_key
	fscrypt: fix keyring memory leak on mount failure
	btrfs: fix lost file sync on direct IO write with nowait and dsync iocb
	btrfs: fix tree mod log mishandling of reallocated nodes
	btrfs: fix type of parameter generation in btrfs_get_dentry
	ftrace: Fix use-after-free for dynamic ftrace_ops
	tcp/udp: Make early_demux back namespacified.
	tracing: kprobe: Fix memory leak in test_gen_kprobe/kretprobe_cmd()
	kprobe: reverse kp->flags when arm_kprobe failed
	ring-buffer: Check for NULL cpu_buffer in ring_buffer_wake_waiters()
	tools/nolibc/string: Fix memcmp() implementation
	tracing/histogram: Update document for KEYS_MAX size
	capabilities: fix potential memleak on error path from vfs_getxattr_alloc()
	fuse: add file_modified() to fallocate
	efi: random: reduce seed size to 32 bytes
	efi: random: Use 'ACPI reclaim' memory for random seed
	arm64: entry: avoid kprobe recursion
	perf/x86/intel: Fix pebs event constraints for ICL
	perf/x86/intel: Add Cooper Lake stepping to isolation_ucodes[]
	perf/x86/intel: Fix pebs event constraints for SPR
	parisc: Make 8250_gsc driver dependend on CONFIG_PARISC
	parisc: Export iosapic_serial_irq() symbol for serial port driver
	parisc: Avoid printing the hardware path twice
	ext4: fix warning in 'ext4_da_release_space'
	ext4: fix BUG_ON() when directory entry has invalid rec_len
	x86/syscall: Include asm/ptrace.h in syscall_wrapper header
	KVM: x86: Mask off reserved bits in CPUID.80000006H
	KVM: x86: Mask off reserved bits in CPUID.8000001AH
	KVM: x86: Mask off reserved bits in CPUID.80000008H
	KVM: x86: Mask off reserved bits in CPUID.80000001H
	KVM: x86: Mask off reserved bits in CPUID.8000001FH
	KVM: VMX: fully disable SGX if SECONDARY_EXEC_ENCLS_EXITING unavailable
	KVM: arm64: Fix bad dereference on MTE-enabled systems
	KVM: x86: emulator: em_sysexit should update ctxt->mode
	KVM: x86: emulator: introduce emulator_recalc_and_set_mode
	KVM: x86: emulator: update the emulation mode after rsm
	KVM: x86: emulator: update the emulation mode after CR0 write
	tee: Fix tee_shm_register() for kernel TEE drivers
	ext4,f2fs: fix readahead of verity data
	cifs: fix regression in very old smb1 mounts
	drm/rockchip: dsi: Clean up 'usage_mode' when failing to attach
	drm/rockchip: dsi: Force synchronous probe
	drm/i915/sdvo: Filter out invalid outputs more sensibly
	drm/i915/sdvo: Setup DDC fully before output init
	wifi: brcmfmac: Fix potential buffer overflow in brcmf_fweh_event_worker()
	Linux 5.15.78

Change-Id: I807682f70b9d2817cef65b5ccb76567b20ca6ccd
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Greg Kroah-Hartman, 2022-11-15 16:38:36 +00:00
143 changed files with 1413 additions and 934 deletions


@@ -39,7 +39,7 @@ Documentation written by Tom Zanussi
will use the event's kernel stacktrace as the key. The keywords
'keys' or 'key' can be used to specify keys, and the keywords
'values', 'vals', or 'val' can be used to specify values. Compound
keys consisting of up to two fields can be specified by the 'keys'
keys consisting of up to three fields can be specified by the 'keys'
keyword. Hashing a compound key produces a unique entry in the
table for each unique combination of component keys, and can be
useful for providing more fine-grained summaries of event data.
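(For example, with the limit raised to three, a trigger such as 'hist:keys=prev_pid,prev_prio,prev_comm' on the sched_switch event forms a valid three-field compound key; the field names here are illustrative, taken from the stock sched_switch event.)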


@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 15
SUBLEVEL = 77
SUBLEVEL = 78
EXTRAVERSION =
NAME = Trick or Treat


@@ -29,7 +29,7 @@
user-pb {
label = "user_pb";
gpios = <&gsc_gpio 0 GPIO_ACTIVE_LOW>;
gpios = <&gsc_gpio 2 GPIO_ACTIVE_LOW>;
linux,code = <BTN_0>;
};


@@ -26,7 +26,7 @@
user-pb {
label = "user_pb";
gpios = <&gsc_gpio 0 GPIO_ACTIVE_LOW>;
gpios = <&gsc_gpio 2 GPIO_ACTIVE_LOW>;
linux,code = <BTN_0>;
};


@@ -597,12 +597,26 @@
polling-delay = <1000>;
polling-delay-passive = <100>;
thermal-sensors = <&scpi_sensors0 0>;
trips {
pmic_crit0: trip0 {
temperature = <90000>;
hysteresis = <2000>;
type = "critical";
};
};
};
soc {
polling-delay = <1000>;
polling-delay-passive = <100>;
thermal-sensors = <&scpi_sensors0 3>;
trips {
soc_crit0: trip0 {
temperature = <80000>;
hysteresis = <2000>;
type = "critical";
};
};
};
big_cluster_thermal_zone: big-cluster {


@@ -758,6 +758,9 @@
little-endian;
#address-cells = <1>;
#size-cells = <0>;
clock-frequency = <2500000>;
clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
QORIQ_CLK_PLL_DIV(1)>;
status = "disabled";
};
@@ -767,6 +770,9 @@
little-endian;
#address-cells = <1>;
#size-cells = <0>;
clock-frequency = <2500000>;
clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
QORIQ_CLK_PLL_DIV(1)>;
status = "disabled";
};


@@ -525,6 +525,9 @@
little-endian;
#address-cells = <1>;
#size-cells = <0>;
clock-frequency = <2500000>;
clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
QORIQ_CLK_PLL_DIV(2)>;
status = "disabled";
};
@@ -534,6 +537,9 @@
little-endian;
#address-cells = <1>;
#size-cells = <0>;
clock-frequency = <2500000>;
clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
QORIQ_CLK_PLL_DIV(2)>;
status = "disabled";
};


@@ -1369,6 +1369,9 @@
#address-cells = <1>;
#size-cells = <0>;
little-endian;
clock-frequency = <2500000>;
clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
QORIQ_CLK_PLL_DIV(2)>;
status = "disabled";
};
@@ -1379,6 +1382,9 @@
little-endian;
#address-cells = <1>;
#size-cells = <0>;
clock-frequency = <2500000>;
clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
QORIQ_CLK_PLL_DIV(2)>;
status = "disabled";
};


@@ -38,9 +38,9 @@ conn_subsys: bus@5b000000 {
interrupts = <GIC_SPI 232 IRQ_TYPE_LEVEL_HIGH>;
reg = <0x5b010000 0x10000>;
clocks = <&sdhc0_lpcg IMX_LPCG_CLK_4>,
<&sdhc0_lpcg IMX_LPCG_CLK_5>,
<&sdhc0_lpcg IMX_LPCG_CLK_0>;
clock-names = "ipg", "per", "ahb";
<&sdhc0_lpcg IMX_LPCG_CLK_0>,
<&sdhc0_lpcg IMX_LPCG_CLK_5>;
clock-names = "ipg", "ahb", "per";
power-domains = <&pd IMX_SC_R_SDHC_0>;
status = "disabled";
};
@@ -49,9 +49,9 @@ conn_subsys: bus@5b000000 {
interrupts = <GIC_SPI 233 IRQ_TYPE_LEVEL_HIGH>;
reg = <0x5b020000 0x10000>;
clocks = <&sdhc1_lpcg IMX_LPCG_CLK_4>,
<&sdhc1_lpcg IMX_LPCG_CLK_5>,
<&sdhc1_lpcg IMX_LPCG_CLK_0>;
clock-names = "ipg", "per", "ahb";
<&sdhc1_lpcg IMX_LPCG_CLK_0>,
<&sdhc1_lpcg IMX_LPCG_CLK_5>;
clock-names = "ipg", "ahb", "per";
power-domains = <&pd IMX_SC_R_SDHC_1>;
fsl,tuning-start-tap = <20>;
fsl,tuning-step= <2>;
@@ -62,9 +62,9 @@ conn_subsys: bus@5b000000 {
interrupts = <GIC_SPI 234 IRQ_TYPE_LEVEL_HIGH>;
reg = <0x5b030000 0x10000>;
clocks = <&sdhc2_lpcg IMX_LPCG_CLK_4>,
<&sdhc2_lpcg IMX_LPCG_CLK_5>,
<&sdhc2_lpcg IMX_LPCG_CLK_0>;
clock-names = "ipg", "per", "ahb";
<&sdhc2_lpcg IMX_LPCG_CLK_0>,
<&sdhc2_lpcg IMX_LPCG_CLK_5>;
clock-names = "ipg", "ahb", "per";
power-domains = <&pd IMX_SC_R_SDHC_2>;
status = "disabled";
};


@@ -323,7 +323,8 @@ static void cortex_a76_erratum_1463225_svc_handler(void)
__this_cpu_write(__in_cortex_a76_erratum_1463225_wa, 0);
}
static bool cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
static __always_inline bool
cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
{
if (!__this_cpu_read(__in_cortex_a76_erratum_1463225_wa))
return false;


@@ -10,12 +10,12 @@
#define SVERSION_ANY_ID PA_SVERSION_ANY_ID
struct hp_hardware {
unsigned short hw_type:5; /* HPHW_xxx */
unsigned short hversion;
unsigned long sversion:28;
unsigned short opt;
const char name[80]; /* The hardware description */
};
unsigned int hw_type:8; /* HPHW_xxx */
unsigned int hversion:12;
unsigned int sversion:12;
unsigned char opt;
unsigned char name[59]; /* The hardware description */
} __packed;
struct parisc_device;


@@ -882,15 +882,13 @@ void __init walk_central_bus(void)
&root);
}
static void print_parisc_device(struct parisc_device *dev)
static __init void print_parisc_device(struct parisc_device *dev)
{
char hw_path[64];
static int count;
static int count __initdata;
print_pa_hwpath(dev, hw_path);
pr_info("%d. %s at %pap [%s] { %d, 0x%x, 0x%.3x, 0x%.5x }",
++count, dev->name, &(dev->hpa.start), hw_path, dev->id.hw_type,
dev->id.hversion_rev, dev->id.hversion, dev->id.sversion);
pr_info("%d. %s at %pap { type:%d, hv:%#x, sv:%#x, rev:%#x }",
++count, dev->name, &(dev->hpa.start), dev->id.hw_type,
dev->id.hversion, dev->id.sversion, dev->id.hversion_rev);
if (dev->num_addrs) {
int k;
@@ -1079,7 +1077,7 @@ static __init int qemu_print_iodc_data(struct device *lin_dev, void *data)
static int print_one_device(struct device * dev, void * data)
static __init int print_one_device(struct device * dev, void * data)
{
struct parisc_device * pdev = to_parisc_device(dev);


@@ -93,8 +93,17 @@ SECTIONS
_compressed_start = .;
*(.vmlinux.bin.compressed)
_compressed_end = .;
FILL(0xff);
. = ALIGN(4096);
}
#define SB_TRAILER_SIZE 32
/* Trailer needed for Secure Boot */
. += SB_TRAILER_SIZE; /* make sure .sb.trailer does not overwrite the previous section */
. = ALIGN(4096) - SB_TRAILER_SIZE;
.sb.trailer : {
QUAD(0)
QUAD(0)
QUAD(0)
QUAD(0x000000207a49504c)
}
_end = .;
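For reference, the four QUAD() directives above emit the 32-byte trailer (SB_TRAILER_SIZE), and its final doubleword 0x000000207a49504c is, in s390's big-endian byte order, 00 00 00 20 7a 49 50 4c: NUL padding followed by the ASCII tag " zIPL". A minimal standalone sketch (an editor's illustration, not kernel code; reading the constant as an ASCII tag is an inference from the byte values) that prints it:

    #include <stdio.h>

    int main(void)
    {
        /* Final QUAD() value from the .sb.trailer section above. */
        unsigned long long magic = 0x000000207a49504cULL;

        /* QUAD() emits target-endian data and s390 is big-endian,
         * so walk the value from its most significant byte down. */
        for (int shift = 56; shift >= 0; shift -= 8) {
            unsigned char c = (unsigned char)(magic >> shift);
            putchar(c ? c : '.');   /* render NUL padding as '.' */
        }
        putchar('\n');              /* prints "... zIPL" */
        return 0;
    }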


@@ -212,7 +212,7 @@ static inline unsigned long clear_user_mvcos(void __user *to, unsigned long size
asm volatile(
" llilh 0,%[spec]\n"
"0: .insn ss,0xc80000000000,0(%0,%1),0(%4),0\n"
" jz 4f\n"
"6: jz 4f\n"
"1: algr %0,%2\n"
" slgr %1,%2\n"
" j 0b\n"
@@ -222,11 +222,11 @@ static inline unsigned long clear_user_mvcos(void __user *to, unsigned long size
" clgr %0,%3\n" /* copy crosses next page boundary? */
" jnh 5f\n"
"3: .insn ss,0xc80000000000,0(%3,%1),0(%4),0\n"
" slgr %0,%3\n"
"7: slgr %0,%3\n"
" j 5f\n"
"4: slgr %0,%0\n"
"5:\n"
EX_TABLE(0b,2b) EX_TABLE(3b,5b)
EX_TABLE(0b,2b) EX_TABLE(6b,2b) EX_TABLE(3b,5b) EX_TABLE(7b,5b)
: "+a" (size), "+a" (to), "+a" (tmp1), "=a" (tmp2)
: "a" (empty_zero_page), [spec] "K" (0x81UL)
: "cc", "memory", "0");


@@ -4707,6 +4707,7 @@ static const struct x86_cpu_desc isolation_ucodes[] = {
INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X, 5, 0x00000000),
INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X, 6, 0x00000000),
INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X, 7, 0x00000000),
INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X, 11, 0x00000000),
INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_L, 3, 0x0000007c),
INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE, 3, 0x0000007c),
INTEL_CPU_DESC(INTEL_FAM6_KABYLAKE, 9, 0x0000004e),


@@ -936,8 +936,13 @@ struct event_constraint intel_icl_pebs_event_constraints[] = {
INTEL_FLAGS_UEVENT_CONSTRAINT(0x0400, 0x800000000ULL), /* SLOTS */
INTEL_PLD_CONSTRAINT(0x1cd, 0xff), /* MEM_TRANS_RETIRED.LOAD_LATENCY */
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x1d0, 0xf), /* MEM_INST_RETIRED.LOAD */
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x2d0, 0xf), /* MEM_INST_RETIRED.STORE */
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x11d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_LOADS */
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x12d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_STORES */
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x21d0, 0xf), /* MEM_INST_RETIRED.LOCK_LOADS */
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x41d0, 0xf), /* MEM_INST_RETIRED.SPLIT_LOADS */
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x42d0, 0xf), /* MEM_INST_RETIRED.SPLIT_STORES */
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x81d0, 0xf), /* MEM_INST_RETIRED.ALL_LOADS */
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x82d0, 0xf), /* MEM_INST_RETIRED.ALL_STORES */
INTEL_FLAGS_EVENT_CONSTRAINT_DATALA_LD_RANGE(0xd1, 0xd4, 0xf), /* MEM_LOAD_*_RETIRED.* */
@@ -958,8 +963,13 @@ struct event_constraint intel_spr_pebs_event_constraints[] = {
INTEL_FLAGS_EVENT_CONSTRAINT(0xc0, 0xfe),
INTEL_PLD_CONSTRAINT(0x1cd, 0xfe),
INTEL_PSD_CONSTRAINT(0x2cd, 0x1),
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x1d0, 0xf),
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x2d0, 0xf),
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x11d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_LOADS */
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x12d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_STORES */
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x21d0, 0xf), /* MEM_INST_RETIRED.LOCK_LOADS */
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x41d0, 0xf), /* MEM_INST_RETIRED.SPLIT_LOADS */
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x42d0, 0xf), /* MEM_INST_RETIRED.SPLIT_STORES */
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x81d0, 0xf), /* MEM_INST_RETIRED.ALL_LOADS */
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x82d0, 0xf), /* MEM_INST_RETIRED.ALL_STORES */
INTEL_FLAGS_EVENT_CONSTRAINT_DATALA_LD_RANGE(0xd1, 0xd4, 0xf),


@@ -6,7 +6,7 @@
#ifndef _ASM_X86_SYSCALL_WRAPPER_H
#define _ASM_X86_SYSCALL_WRAPPER_H
struct pt_regs;
#include <asm/ptrace.h>
extern long __x64_sys_ni_syscall(const struct pt_regs *regs);
extern long __ia32_sys_ni_syscall(const struct pt_regs *regs);


@@ -902,11 +902,13 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
entry->eax = min(entry->eax, 0x8000001f);
break;
case 0x80000001:
entry->ebx &= ~GENMASK(27, 16);
cpuid_entry_override(entry, CPUID_8000_0001_EDX);
cpuid_entry_override(entry, CPUID_8000_0001_ECX);
break;
case 0x80000006:
/* L2 cache and TLB: pass through host info. */
/* Drop reserved bits, pass host L2 cache and TLB info. */
entry->edx &= ~GENMASK(17, 16);
break;
case 0x80000007: /* Advanced power management */
/* invariant TSC is CPUID.80000007H:EDX[8] */
@@ -936,6 +938,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
g_phys_as = phys_as;
entry->eax = g_phys_as | (virt_as << 8);
entry->ecx &= ~(GENMASK(31, 16) | GENMASK(11, 8));
entry->edx = 0;
cpuid_entry_override(entry, CPUID_8000_0008_EBX);
break;
@@ -955,6 +958,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
entry->ecx = entry->edx = 0;
break;
case 0x8000001a:
entry->eax &= GENMASK(2, 0);
entry->ebx = entry->ecx = entry->edx = 0;
break;
case 0x8000001e:
break;
case 0x8000001F:
@@ -962,7 +968,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
} else {
cpuid_entry_override(entry, CPUID_8000_001F_EAX);
/* Clear NumVMPL since KVM does not support VMPL. */
entry->ebx &= ~GENMASK(31, 12);
/*
* Enumerate '0' for "PA bits reduction", the adjusted
* MAXPHYADDR is enumerated directly (see 0x80000008).


@@ -795,8 +795,7 @@ static int linearize(struct x86_emulate_ctxt *ctxt,
ctxt->mode, linear);
}
static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst,
enum x86emul_mode mode)
static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst)
{
ulong linear;
int rc;
@@ -806,41 +805,71 @@ static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst,
if (ctxt->op_bytes != sizeof(unsigned long))
addr.ea = dst & ((1UL << (ctxt->op_bytes << 3)) - 1);
rc = __linearize(ctxt, addr, &max_size, 1, false, true, mode, &linear);
rc = __linearize(ctxt, addr, &max_size, 1, false, true, ctxt->mode, &linear);
if (rc == X86EMUL_CONTINUE)
ctxt->_eip = addr.ea;
return rc;
}
static inline int assign_eip_near(struct x86_emulate_ctxt *ctxt, ulong dst)
static inline int emulator_recalc_and_set_mode(struct x86_emulate_ctxt *ctxt)
{
return assign_eip(ctxt, dst, ctxt->mode);
}
static int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst,
const struct desc_struct *cs_desc)
{
enum x86emul_mode mode = ctxt->mode;
int rc;
#ifdef CONFIG_X86_64
if (ctxt->mode >= X86EMUL_MODE_PROT16) {
if (cs_desc->l) {
u64 efer = 0;
u64 efer;
struct desc_struct cs;
u16 selector;
u32 base3;
ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
if (!(ctxt->ops->get_cr(ctxt, 0) & X86_CR0_PE)) {
/* Real mode. cpu must not have long mode active */
if (efer & EFER_LMA)
mode = X86EMUL_MODE_PROT64;
} else
mode = X86EMUL_MODE_PROT32; /* temporary value */
return X86EMUL_UNHANDLEABLE;
ctxt->mode = X86EMUL_MODE_REAL;
return X86EMUL_CONTINUE;
}
#endif
if (mode == X86EMUL_MODE_PROT16 || mode == X86EMUL_MODE_PROT32)
mode = cs_desc->d ? X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16;
rc = assign_eip(ctxt, dst, mode);
if (rc == X86EMUL_CONTINUE)
ctxt->mode = mode;
if (ctxt->eflags & X86_EFLAGS_VM) {
/* Protected/VM86 mode. cpu must not have long mode active */
if (efer & EFER_LMA)
return X86EMUL_UNHANDLEABLE;
ctxt->mode = X86EMUL_MODE_VM86;
return X86EMUL_CONTINUE;
}
if (!ctxt->ops->get_segment(ctxt, &selector, &cs, &base3, VCPU_SREG_CS))
return X86EMUL_UNHANDLEABLE;
if (efer & EFER_LMA) {
if (cs.l) {
/* Proper long mode */
ctxt->mode = X86EMUL_MODE_PROT64;
} else if (cs.d) {
/* 32 bit compatibility mode*/
ctxt->mode = X86EMUL_MODE_PROT32;
} else {
ctxt->mode = X86EMUL_MODE_PROT16;
}
} else {
/* Legacy 32 bit / 16 bit mode */
ctxt->mode = cs.d ? X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16;
}
return X86EMUL_CONTINUE;
}
static inline int assign_eip_near(struct x86_emulate_ctxt *ctxt, ulong dst)
{
return assign_eip(ctxt, dst);
}
static int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst)
{
int rc = emulator_recalc_and_set_mode(ctxt);
if (rc != X86EMUL_CONTINUE)
return rc;
return assign_eip(ctxt, dst);
}
static inline int jmp_rel(struct x86_emulate_ctxt *ctxt, int rel)
@@ -2153,7 +2182,7 @@ static int em_jmp_far(struct x86_emulate_ctxt *ctxt)
if (rc != X86EMUL_CONTINUE)
return rc;
rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc);
rc = assign_eip_far(ctxt, ctxt->src.val);
/* Error handling is not implemented. */
if (rc != X86EMUL_CONTINUE)
return X86EMUL_UNHANDLEABLE;
@@ -2234,7 +2263,7 @@ static int em_ret_far(struct x86_emulate_ctxt *ctxt)
&new_desc);
if (rc != X86EMUL_CONTINUE)
return rc;
rc = assign_eip_far(ctxt, eip, &new_desc);
rc = assign_eip_far(ctxt, eip);
/* Error handling is not implemented. */
if (rc != X86EMUL_CONTINUE)
return X86EMUL_UNHANDLEABLE;
@@ -2617,7 +2646,7 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt)
* those side effects need to be explicitly handled for both success
* and shutdown.
*/
return X86EMUL_CONTINUE;
return emulator_recalc_and_set_mode(ctxt);
emulate_shutdown:
ctxt->ops->triple_fault(ctxt);
@@ -2861,6 +2890,7 @@ static int em_sysexit(struct x86_emulate_ctxt *ctxt)
ops->set_segment(ctxt, ss_sel, &ss, 0, VCPU_SREG_SS);
ctxt->_eip = rdx;
ctxt->mode = usermode;
*reg_write(ctxt, VCPU_REGS_RSP) = rcx;
return X86EMUL_CONTINUE;
@@ -3457,7 +3487,7 @@ static int em_call_far(struct x86_emulate_ctxt *ctxt)
if (rc != X86EMUL_CONTINUE)
return rc;
rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc);
rc = assign_eip_far(ctxt, ctxt->src.val);
if (rc != X86EMUL_CONTINUE)
goto fail;
@@ -3599,11 +3629,25 @@ static int em_movbe(struct x86_emulate_ctxt *ctxt)
static int em_cr_write(struct x86_emulate_ctxt *ctxt)
{
if (ctxt->ops->set_cr(ctxt, ctxt->modrm_reg, ctxt->src.val))
int cr_num = ctxt->modrm_reg;
int r;
if (ctxt->ops->set_cr(ctxt, cr_num, ctxt->src.val))
return emulate_gp(ctxt, 0);
/* Disable writeback. */
ctxt->dst.type = OP_NONE;
if (cr_num == 0) {
/*
* CR0 write might have updated CR0.PE and/or CR0.PG
* which can affect the cpu's execution mode.
*/
r = emulator_recalc_and_set_mode(ctxt);
if (r != X86EMUL_CONTINUE)
return r;
}
return X86EMUL_CONTINUE;
}


@@ -355,25 +355,29 @@ TRACE_EVENT(kvm_inj_virq,
* Tracepoint for kvm interrupt injection:
*/
TRACE_EVENT(kvm_inj_exception,
TP_PROTO(unsigned exception, bool has_error, unsigned error_code),
TP_ARGS(exception, has_error, error_code),
TP_PROTO(unsigned exception, bool has_error, unsigned error_code,
bool reinjected),
TP_ARGS(exception, has_error, error_code, reinjected),
TP_STRUCT__entry(
__field( u8, exception )
__field( u8, has_error )
__field( u32, error_code )
__field( bool, reinjected )
),
TP_fast_assign(
__entry->exception = exception;
__entry->has_error = has_error;
__entry->error_code = error_code;
__entry->reinjected = reinjected;
),
TP_printk("%s (0x%x)",
TP_printk("%s (0x%x)%s",
__print_symbolic(__entry->exception, kvm_trace_sym_exc),
/* FIXME: don't print error_code if not present */
__entry->has_error ? __entry->error_code : 0)
__entry->has_error ? __entry->error_code : 0,
__entry->reinjected ? " [reinjected]" : "")
);
/*


@@ -7920,6 +7920,11 @@ static __init int hardware_setup(void)
if (!cpu_has_virtual_nmis())
enable_vnmi = 0;
#ifdef CONFIG_X86_SGX_KVM
if (!cpu_has_vmx_encls_vmexit())
enable_sgx = false;
#endif
/*
* set_apic_access_page_addr() is used to reload apic access
* page upon invalidation. No need to do anything if not


@@ -525,6 +525,7 @@ static int exception_class(int vector)
#define EXCPT_TRAP 1
#define EXCPT_ABORT 2
#define EXCPT_INTERRUPT 3
#define EXCPT_DB 4
static int exception_type(int vector)
{
@@ -535,8 +536,14 @@ static int exception_type(int vector)
mask = 1 << vector;
/* #DB is trap, as instruction watchpoints are handled elsewhere */
if (mask & ((1 << DB_VECTOR) | (1 << BP_VECTOR) | (1 << OF_VECTOR)))
/*
* #DBs can be trap-like or fault-like, the caller must check other CPU
* state, e.g. DR6, to determine whether a #DB is a trap or fault.
*/
if (mask & (1 << DB_VECTOR))
return EXCPT_DB;
if (mask & ((1 << BP_VECTOR) | (1 << OF_VECTOR)))
return EXCPT_TRAP;
if (mask & ((1 << DF_VECTOR) | (1 << MC_VECTOR)))
@@ -5762,6 +5769,11 @@ split_irqchip_unlock:
r = 0;
break;
case KVM_CAP_X86_USER_SPACE_MSR:
r = -EINVAL;
if (cap->args[0] & ~(KVM_MSR_EXIT_REASON_INVAL |
KVM_MSR_EXIT_REASON_UNKNOWN |
KVM_MSR_EXIT_REASON_FILTER))
break;
kvm->arch.user_space_msr_mask = cap->args[0];
r = 0;
break;
@@ -5882,23 +5894,22 @@ static int kvm_add_msr_filter(struct kvm_x86_msr_filter *msr_filter,
return 0;
}
static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, void __user *argp)
static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm,
struct kvm_msr_filter *filter)
{
struct kvm_msr_filter __user *user_msr_filter = argp;
struct kvm_x86_msr_filter *new_filter, *old_filter;
struct kvm_msr_filter filter;
bool default_allow;
bool empty = true;
int r = 0;
u32 i;
if (copy_from_user(&filter, user_msr_filter, sizeof(filter)))
return -EFAULT;
if (filter->flags & ~KVM_MSR_FILTER_DEFAULT_DENY)
return -EINVAL;
for (i = 0; i < ARRAY_SIZE(filter.ranges); i++)
empty &= !filter.ranges[i].nmsrs;
for (i = 0; i < ARRAY_SIZE(filter->ranges); i++)
empty &= !filter->ranges[i].nmsrs;
default_allow = !(filter.flags & KVM_MSR_FILTER_DEFAULT_DENY);
default_allow = !(filter->flags & KVM_MSR_FILTER_DEFAULT_DENY);
if (empty && !default_allow)
return -EINVAL;
@@ -5906,8 +5917,8 @@ static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, void __user *argp)
if (!new_filter)
return -ENOMEM;
for (i = 0; i < ARRAY_SIZE(filter.ranges); i++) {
r = kvm_add_msr_filter(new_filter, &filter.ranges[i]);
for (i = 0; i < ARRAY_SIZE(filter->ranges); i++) {
r = kvm_add_msr_filter(new_filter, &filter->ranges[i]);
if (r) {
kvm_free_msr_filter(new_filter);
return r;
@@ -5930,6 +5941,62 @@ static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, void __user *argp)
return 0;
}
#ifdef CONFIG_KVM_COMPAT
/* for KVM_X86_SET_MSR_FILTER */
struct kvm_msr_filter_range_compat {
__u32 flags;
__u32 nmsrs;
__u32 base;
__u32 bitmap;
};
struct kvm_msr_filter_compat {
__u32 flags;
struct kvm_msr_filter_range_compat ranges[KVM_MSR_FILTER_MAX_RANGES];
};
#define KVM_X86_SET_MSR_FILTER_COMPAT _IOW(KVMIO, 0xc6, struct kvm_msr_filter_compat)
long kvm_arch_vm_compat_ioctl(struct file *filp, unsigned int ioctl,
unsigned long arg)
{
void __user *argp = (void __user *)arg;
struct kvm *kvm = filp->private_data;
long r = -ENOTTY;
switch (ioctl) {
case KVM_X86_SET_MSR_FILTER_COMPAT: {
struct kvm_msr_filter __user *user_msr_filter = argp;
struct kvm_msr_filter_compat filter_compat;
struct kvm_msr_filter filter;
int i;
if (copy_from_user(&filter_compat, user_msr_filter,
sizeof(filter_compat)))
return -EFAULT;
filter.flags = filter_compat.flags;
for (i = 0; i < ARRAY_SIZE(filter.ranges); i++) {
struct kvm_msr_filter_range_compat *cr;
cr = &filter_compat.ranges[i];
filter.ranges[i] = (struct kvm_msr_filter_range) {
.flags = cr->flags,
.nmsrs = cr->nmsrs,
.base = cr->base,
.bitmap = (__u8 *)(ulong)cr->bitmap,
};
}
r = kvm_vm_ioctl_set_msr_filter(kvm, &filter);
break;
}
}
return r;
}
#endif
#ifdef CONFIG_HAVE_KVM_PM_NOTIFIER
static int kvm_arch_suspend_notifier(struct kvm *kvm)
{
@@ -6305,9 +6372,16 @@ set_pit2_out:
case KVM_SET_PMU_EVENT_FILTER:
r = kvm_vm_ioctl_set_pmu_event_filter(kvm, argp);
break;
case KVM_X86_SET_MSR_FILTER:
r = kvm_vm_ioctl_set_msr_filter(kvm, argp);
case KVM_X86_SET_MSR_FILTER: {
struct kvm_msr_filter __user *user_msr_filter = argp;
struct kvm_msr_filter filter;
if (copy_from_user(&filter, user_msr_filter, sizeof(filter)))
return -EFAULT;
r = kvm_vm_ioctl_set_msr_filter(kvm, &filter);
break;
}
default:
r = -ENOTTY;
}
@@ -8134,6 +8208,12 @@ restart:
unsigned long rflags = static_call(kvm_x86_get_rflags)(vcpu);
toggle_interruptibility(vcpu, ctxt->interruptibility);
vcpu->arch.emulate_regs_need_sync_to_vcpu = false;
/*
* Note, EXCPT_DB is assumed to be fault-like as the emulator
* only supports code breakpoints and general detect #DB, both
* of which are fault-like.
*/
if (!ctxt->have_exception ||
exception_type(ctxt->exception.vector) == EXCPT_TRAP) {
kvm_rip_write(vcpu, ctxt->eip);
@@ -8965,6 +9045,11 @@ int kvm_check_nested_events(struct kvm_vcpu *vcpu)
static void kvm_inject_exception(struct kvm_vcpu *vcpu)
{
trace_kvm_inj_exception(vcpu->arch.exception.nr,
vcpu->arch.exception.has_error_code,
vcpu->arch.exception.error_code,
vcpu->arch.exception.injected);
if (vcpu->arch.exception.error_code && !is_protmode(vcpu))
vcpu->arch.exception.error_code = false;
static_call(kvm_x86_queue_exception)(vcpu);
@@ -9022,13 +9107,16 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
/* try to inject new event if pending */
if (vcpu->arch.exception.pending) {
trace_kvm_inj_exception(vcpu->arch.exception.nr,
vcpu->arch.exception.has_error_code,
vcpu->arch.exception.error_code);
vcpu->arch.exception.pending = false;
vcpu->arch.exception.injected = true;
/*
* Fault-class exceptions, except #DBs, set RF=1 in the RFLAGS
* value pushed on the stack. Trap-like exception and all #DBs
* leave RF as-is (KVM follows Intel's behavior in this regard;
* AMD states that code breakpoint #DBs excplitly clear RF=0).
*
* Note, most versions of Intel's SDM and AMD's APM incorrectly
* describe the behavior of General Detect #DBs, which are
* fault-like. They do _not_ set RF, a la code breakpoints.
*/
if (exception_type(vcpu->arch.exception.nr) == EXCPT_FAULT)
__kvm_set_rflags(vcpu, kvm_get_rflags(vcpu) |
X86_EFLAGS_RF);
@@ -9042,6 +9130,10 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
}
kvm_inject_exception(vcpu);
vcpu->arch.exception.pending = false;
vcpu->arch.exception.injected = true;
can_inject = false;
}


@@ -462,6 +462,8 @@ static struct bfq_io_cq *bfq_bic_lookup(struct bfq_data *bfqd,
*/
void bfq_schedule_dispatch(struct bfq_data *bfqd)
{
lockdep_assert_held(&bfqd->lock);
if (bfqd->queued != 0) {
bfq_log(bfqd, "schedule dispatch");
blk_mq_run_hw_queues(bfqd->queue, true);
@@ -6746,8 +6748,8 @@ bfq_idle_slice_timer_body(struct bfq_data *bfqd, struct bfq_queue *bfqq)
bfq_bfqq_expire(bfqd, bfqq, true, reason);
schedule_dispatch:
spin_unlock_irqrestore(&bfqd->lock, flags);
bfq_schedule_dispatch(bfqd);
spin_unlock_irqrestore(&bfqd->lock, flags);
}
/*


@@ -527,6 +527,7 @@ out_unregister_bdi:
bdi_unregister(disk->bdi);
out_unregister_queue:
blk_unregister_queue(disk);
rq_qos_exit(disk->queue);
out_put_slave_dir:
kobject_put(disk->slave_dir);
out_put_holder_dir:


@@ -163,7 +163,7 @@ static void ghes_unmap(void __iomem *vaddr, enum fixed_addresses fixmap_idx)
clear_fixmap(fixmap_idx);
}
int ghes_estatus_pool_init(int num_ghes)
int ghes_estatus_pool_init(unsigned int num_ghes)
{
unsigned long addr, len;
int rc;


@@ -315,9 +315,10 @@ static void pdc20230_set_piomode(struct ata_port *ap, struct ata_device *adev)
outb(inb(0x1F4) & 0x07, 0x1F4);
rt = inb(0x1F3);
rt &= 0x07 << (3 * adev->devno);
rt &= ~(0x07 << (3 * !adev->devno));
if (pio)
rt |= (1 + 3 * pio) << (3 * adev->devno);
rt |= (1 + 3 * pio) << (3 * !adev->devno);
outb(rt, 0x1F3);
udelay(100);
outb(inb(0x1F2) | 0x01, 0x1F2);


@@ -219,7 +219,7 @@ static void virtbt_rx_work(struct work_struct *work)
if (!skb)
return;
skb->len = len;
skb_put(skb, len);
virtbt_rx_handle(vbt, skb);
if (virtbt_add_inbuf(vbt) < 0)


@@ -3571,6 +3571,7 @@ static int gcc_sc7280_probe(struct platform_device *pdev)
regmap_update_bits(regmap, 0x28004, BIT(0), BIT(0));
regmap_update_bits(regmap, 0x28014, BIT(0), BIT(0));
regmap_update_bits(regmap, 0x71004, BIT(0), BIT(0));
regmap_update_bits(regmap, 0x7100C, BIT(13), BIT(13));
ret = qcom_cc_register_rcg_dfs(regmap, gcc_dfs_clocks,
ARRAY_SIZE(gcc_dfs_clocks));


@@ -463,6 +463,7 @@ static int gpu_cc_sc7280_probe(struct platform_device *pdev)
*/
regmap_update_bits(regmap, 0x1170, BIT(0), BIT(0));
regmap_update_bits(regmap, 0x1098, BIT(0), BIT(0));
regmap_update_bits(regmap, 0x1098, BIT(13), BIT(13));
return qcom_cc_really_probe(pdev, &gpu_cc_sc7280_desc, regmap);
}


@@ -1516,8 +1516,12 @@ scmi_txrx_setup(struct scmi_info *info, struct device *dev, int prot_id)
{
int ret = scmi_chan_setup(info, dev, prot_id, true);
if (!ret) /* Rx is optional, hence no error check */
scmi_chan_setup(info, dev, prot_id, false);
if (!ret) {
/* Rx is optional, report only memory errors */
ret = scmi_chan_setup(info, dev, prot_id, false);
if (ret && ret != -ENOMEM)
ret = 0;
}
return ret;
}
@@ -2009,6 +2013,7 @@ MODULE_DEVICE_TABLE(of, scmi_of_match);
static struct platform_driver scmi_driver = {
.driver = {
.name = "arm-scmi",
.suppress_bind_attrs = true,
.of_match_table = scmi_of_match,
.dev_groups = versions_groups,
},


@@ -247,19 +247,19 @@ static int virtio_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
for (i = 0; i < vioch->max_msg; i++) {
struct scmi_vio_msg *msg;
msg = devm_kzalloc(cinfo->dev, sizeof(*msg), GFP_KERNEL);
msg = devm_kzalloc(dev, sizeof(*msg), GFP_KERNEL);
if (!msg)
return -ENOMEM;
if (tx) {
msg->request = devm_kzalloc(cinfo->dev,
msg->request = devm_kzalloc(dev,
VIRTIO_SCMI_MAX_PDU_SIZE,
GFP_KERNEL);
if (!msg->request)
return -ENOMEM;
}
msg->input = devm_kzalloc(cinfo->dev, VIRTIO_SCMI_MAX_PDU_SIZE,
msg->input = devm_kzalloc(dev, VIRTIO_SCMI_MAX_PDU_SIZE,
GFP_KERNEL);
if (!msg->input)
return -ENOMEM;


@@ -590,7 +590,7 @@ int __init efi_config_parse_tables(const efi_config_table_t *config_tables,
seed = early_memremap(efi_rng_seed, sizeof(*seed));
if (seed != NULL) {
size = READ_ONCE(seed->size);
size = min(seed->size, EFI_RANDOM_SEED_SIZE);
early_memunmap(seed, sizeof(*seed));
} else {
pr_err("Could not map UEFI random seed!\n");


@@ -75,7 +75,12 @@ efi_status_t efi_random_get_seed(void)
if (status != EFI_SUCCESS)
return status;
status = efi_bs_call(allocate_pool, EFI_RUNTIME_SERVICES_DATA,
/*
* Use EFI_ACPI_RECLAIM_MEMORY here so that it is guaranteed that the
* allocation will survive a kexec reboot (although we refresh the seed
* beforehand)
*/
status = efi_bs_call(allocate_pool, EFI_ACPI_RECLAIM_MEMORY,
sizeof(*seed) + EFI_RANDOM_SEED_SIZE,
(void **)&seed);
if (status != EFI_SUCCESS)


@@ -97,7 +97,7 @@ int __init efi_tpm_eventlog_init(void)
goto out_calc;
}
memblock_reserve((unsigned long)final_tbl,
memblock_reserve(efi.tpm_final_log,
tbl_size + sizeof(*final_tbl));
efi_tpm_final_log_size = tbl_size;


@@ -706,6 +706,12 @@ void amdgpu_detect_virtualization(struct amdgpu_device *adev)
adev->virt.caps |= AMDGPU_PASSTHROUGH_MODE;
}
if (amdgpu_sriov_vf(adev) && adev->asic_type == CHIP_SIENNA_CICHLID)
/* VF MMIO access (except mailbox range) from CPU
* will be blocked during sriov runtime
*/
adev->virt.caps |= AMDGPU_VF_MMIO_ACCESS_PROTECT;
/* we have the ability to check now */
if (amdgpu_sriov_vf(adev)) {
switch (adev->asic_type) {


@@ -31,6 +31,7 @@
#define AMDGPU_SRIOV_CAPS_IS_VF (1 << 2) /* this GPU is a virtual function */
#define AMDGPU_PASSTHROUGH_MODE (1 << 3) /* thw whole GPU is pass through for VM */
#define AMDGPU_SRIOV_CAPS_RUNTIME (1 << 4) /* is out of full access mode */
#define AMDGPU_VF_MMIO_ACCESS_PROTECT (1 << 5) /* MMIO write access is not allowed in sriov runtime */
/* all asic after AI use this offset */
#define mmRCC_IOV_FUNC_IDENTIFIER 0xDE5
@@ -278,6 +279,9 @@ struct amdgpu_video_codec_info;
#define amdgpu_passthrough(adev) \
((adev)->virt.caps & AMDGPU_PASSTHROUGH_MODE)
#define amdgpu_sriov_vf_mmio_access_protection(adev) \
((adev)->virt.caps & AMDGPU_VF_MMIO_ACCESS_PROTECT)
static inline bool is_virtual_machine(void)
{
#ifdef CONFIG_X86


@@ -3224,7 +3224,11 @@ void amdgpu_vm_manager_init(struct amdgpu_device *adev)
*/
#ifdef CONFIG_X86_64
if (amdgpu_vm_update_mode == -1) {
if (amdgpu_gmc_vram_full_visible(&adev->gmc))
/* For asic with VF MMIO access protection
* avoid using CPU for VM table updates
*/
if (amdgpu_gmc_vram_full_visible(&adev->gmc) &&
!amdgpu_sriov_vf_mmio_access_protection(adev))
adev->vm_manager.vm_update_mode =
AMDGPU_VM_USE_CPU_FOR_COMPUTE;
else


@@ -36,10 +36,14 @@ void amdgpu_dm_set_psr_caps(struct dc_link *link)
{
uint8_t dpcd_data[EDP_PSR_RECEIVER_CAP_SIZE];
if (!(link->connector_signal & SIGNAL_TYPE_EDP))
if (!(link->connector_signal & SIGNAL_TYPE_EDP)) {
link->psr_settings.psr_feature_enabled = false;
return;
if (link->type == dc_connection_none)
}
if (link->type == dc_connection_none) {
link->psr_settings.psr_feature_enabled = false;
return;
}
if (dm_helpers_dp_read_dpcd(NULL, link, DP_PSR_SUPPORT,
dpcd_data, sizeof(dpcd_data))) {
link->dpcd_caps.psr_caps.psr_version = dpcd_data[0];


@@ -2762,13 +2762,10 @@ intel_sdvo_dvi_init(struct intel_sdvo *intel_sdvo, int device)
if (!intel_sdvo_connector)
return false;
if (device == 0) {
intel_sdvo->controlled_output |= SDVO_OUTPUT_TMDS0;
if (device == 0)
intel_sdvo_connector->output_flag = SDVO_OUTPUT_TMDS0;
} else if (device == 1) {
intel_sdvo->controlled_output |= SDVO_OUTPUT_TMDS1;
else if (device == 1)
intel_sdvo_connector->output_flag = SDVO_OUTPUT_TMDS1;
}
intel_connector = &intel_sdvo_connector->base;
connector = &intel_connector->base;
@@ -2823,7 +2820,6 @@ intel_sdvo_tv_init(struct intel_sdvo *intel_sdvo, int type)
encoder->encoder_type = DRM_MODE_ENCODER_TVDAC;
connector->connector_type = DRM_MODE_CONNECTOR_SVIDEO;
intel_sdvo->controlled_output |= type;
intel_sdvo_connector->output_flag = type;
if (intel_sdvo_connector_init(intel_sdvo_connector, intel_sdvo) < 0) {
@@ -2864,13 +2860,10 @@ intel_sdvo_analog_init(struct intel_sdvo *intel_sdvo, int device)
encoder->encoder_type = DRM_MODE_ENCODER_DAC;
connector->connector_type = DRM_MODE_CONNECTOR_VGA;
if (device == 0) {
intel_sdvo->controlled_output |= SDVO_OUTPUT_RGB0;
if (device == 0)
intel_sdvo_connector->output_flag = SDVO_OUTPUT_RGB0;
} else if (device == 1) {
intel_sdvo->controlled_output |= SDVO_OUTPUT_RGB1;
else if (device == 1)
intel_sdvo_connector->output_flag = SDVO_OUTPUT_RGB1;
}
if (intel_sdvo_connector_init(intel_sdvo_connector, intel_sdvo) < 0) {
kfree(intel_sdvo_connector);
@@ -2900,13 +2893,10 @@ intel_sdvo_lvds_init(struct intel_sdvo *intel_sdvo, int device)
encoder->encoder_type = DRM_MODE_ENCODER_LVDS;
connector->connector_type = DRM_MODE_CONNECTOR_LVDS;
if (device == 0) {
intel_sdvo->controlled_output |= SDVO_OUTPUT_LVDS0;
if (device == 0)
intel_sdvo_connector->output_flag = SDVO_OUTPUT_LVDS0;
} else if (device == 1) {
intel_sdvo->controlled_output |= SDVO_OUTPUT_LVDS1;
else if (device == 1)
intel_sdvo_connector->output_flag = SDVO_OUTPUT_LVDS1;
}
if (intel_sdvo_connector_init(intel_sdvo_connector, intel_sdvo) < 0) {
kfree(intel_sdvo_connector);
@@ -2939,16 +2929,39 @@ err:
return false;
}
static u16 intel_sdvo_filter_output_flags(u16 flags)
{
flags &= SDVO_OUTPUT_MASK;
/* SDVO requires XXX1 function may not exist unless it has XXX0 function.*/
if (!(flags & SDVO_OUTPUT_TMDS0))
flags &= ~SDVO_OUTPUT_TMDS1;
if (!(flags & SDVO_OUTPUT_RGB0))
flags &= ~SDVO_OUTPUT_RGB1;
if (!(flags & SDVO_OUTPUT_LVDS0))
flags &= ~SDVO_OUTPUT_LVDS1;
return flags;
}
static bool
intel_sdvo_output_setup(struct intel_sdvo *intel_sdvo, u16 flags)
{
/* SDVO requires XXX1 function may not exist unless it has XXX0 function.*/
struct drm_i915_private *i915 = to_i915(intel_sdvo->base.base.dev);
flags = intel_sdvo_filter_output_flags(flags);
intel_sdvo->controlled_output = flags;
intel_sdvo_select_ddc_bus(i915, intel_sdvo);
if (flags & SDVO_OUTPUT_TMDS0)
if (!intel_sdvo_dvi_init(intel_sdvo, 0))
return false;
if ((flags & SDVO_TMDS_MASK) == SDVO_TMDS_MASK)
if (flags & SDVO_OUTPUT_TMDS1)
if (!intel_sdvo_dvi_init(intel_sdvo, 1))
return false;
@@ -2969,7 +2982,7 @@ intel_sdvo_output_setup(struct intel_sdvo *intel_sdvo, u16 flags)
if (!intel_sdvo_analog_init(intel_sdvo, 0))
return false;
if ((flags & SDVO_RGB_MASK) == SDVO_RGB_MASK)
if (flags & SDVO_OUTPUT_RGB1)
if (!intel_sdvo_analog_init(intel_sdvo, 1))
return false;
@@ -2977,14 +2990,13 @@ intel_sdvo_output_setup(struct intel_sdvo *intel_sdvo, u16 flags)
if (!intel_sdvo_lvds_init(intel_sdvo, 0))
return false;
if ((flags & SDVO_LVDS_MASK) == SDVO_LVDS_MASK)
if (flags & SDVO_OUTPUT_LVDS1)
if (!intel_sdvo_lvds_init(intel_sdvo, 1))
return false;
if ((flags & SDVO_OUTPUT_MASK) == 0) {
if (flags == 0) {
unsigned char bytes[2];
intel_sdvo->controlled_output = 0;
memcpy(bytes, &intel_sdvo->caps.output_flags, 2);
DRM_DEBUG_KMS("%s: Unknown SDVO output type (0x%02x%02x)\n",
SDVO_NAME(intel_sdvo),
@@ -3396,8 +3408,6 @@ bool intel_sdvo_init(struct drm_i915_private *dev_priv,
*/
intel_sdvo->base.cloneable = 0;
intel_sdvo_select_ddc_bus(dev_priv, intel_sdvo);
/* Set the input timing to the screen. Assume always input 0. */
if (!intel_sdvo_set_target_input(intel_sdvo))
goto err_output;


@@ -330,8 +330,8 @@ int msm_hdmi_modeset_init(struct hdmi *hdmi,
goto fail;
}
ret = devm_request_irq(&pdev->dev, hdmi->irq,
msm_hdmi_irq, IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
ret = devm_request_irq(dev->dev, hdmi->irq,
msm_hdmi_irq, IRQF_TRIGGER_HIGH,
"hdmi_isr", hdmi);
if (ret < 0) {
DRM_DEV_ERROR(dev->dev, "failed to request IRQ%u: %d\n",


@@ -1027,23 +1027,31 @@ static int dw_mipi_dsi_rockchip_host_attach(void *priv_data,
if (ret) {
DRM_DEV_ERROR(dsi->dev, "Failed to register component: %d\n",
ret);
return ret;
goto out;
}
second = dw_mipi_dsi_rockchip_find_second(dsi);
if (IS_ERR(second))
return PTR_ERR(second);
if (IS_ERR(second)) {
ret = PTR_ERR(second);
goto out;
}
if (second) {
ret = component_add(second, &dw_mipi_dsi_rockchip_ops);
if (ret) {
DRM_DEV_ERROR(second,
"Failed to register component: %d\n",
ret);
return ret;
goto out;
}
}
return 0;
out:
mutex_lock(&dsi->usage_mutex);
dsi->usage_mode = DW_DSI_USAGE_IDLE;
mutex_unlock(&dsi->usage_mutex);
return ret;
}
static int dw_mipi_dsi_rockchip_host_detach(void *priv_data,
@@ -1630,5 +1638,11 @@ struct platform_driver dw_mipi_dsi_rockchip_driver = {
.of_match_table = dw_mipi_dsi_rockchip_dt_ids,
.pm = &dw_mipi_dsi_rockchip_pm_ops,
.name = "dw-mipi-dsi-rockchip",
/*
* For dual-DSI display, one DSI pokes at the other DSI's
* drvdata in dw_mipi_dsi_rockchip_find_second(). This is not
* safe for asynchronous probe.
*/
.probe_type = PROBE_FORCE_SYNCHRONOUS,
},
};


@@ -845,6 +845,7 @@
#define USB_DEVICE_ID_MADCATZ_BEATPAD 0x4540
#define USB_DEVICE_ID_MADCATZ_RAT5 0x1705
#define USB_DEVICE_ID_MADCATZ_RAT9 0x1709
#define USB_DEVICE_ID_MADCATZ_MMO7 0x1713
#define USB_VENDOR_ID_MCC 0x09db
#define USB_DEVICE_ID_MCC_PMD1024LS 0x0076
@@ -1112,6 +1113,7 @@
#define USB_DEVICE_ID_SONY_PS4_CONTROLLER_2 0x09cc
#define USB_DEVICE_ID_SONY_PS4_CONTROLLER_DONGLE 0x0ba0
#define USB_DEVICE_ID_SONY_PS5_CONTROLLER 0x0ce6
#define USB_DEVICE_ID_SONY_PS5_CONTROLLER_2 0x0df2
#define USB_DEVICE_ID_SONY_MOTION_CONTROLLER 0x03d5
#define USB_DEVICE_ID_SONY_NAVIGATION_CONTROLLER 0x042f
#define USB_DEVICE_ID_SONY_BUZZ_CONTROLLER 0x0002


@@ -1282,7 +1282,8 @@ static int ps_probe(struct hid_device *hdev, const struct hid_device_id *id)
goto err_stop;
}
if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER) {
if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER ||
hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) {
dev = dualsense_create(hdev);
if (IS_ERR(dev)) {
hid_err(hdev, "Failed to create dualsense.\n");
@@ -1320,6 +1321,8 @@ static void ps_remove(struct hid_device *hdev)
static const struct hid_device_id ps_devices[] = {
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER) },
{ HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER) },
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) },
{ HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) },
{ }
};
MODULE_DEVICE_TABLE(hid, ps_devices);


@@ -608,6 +608,7 @@ static const struct hid_device_id hid_have_special_driver[] = {
{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_MMO7) },
{ HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_RAT5) },
{ HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_RAT9) },
{ HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_MMO7) },
#endif
#if IS_ENABLED(CONFIG_HID_SAMSUNG)
{ HID_USB_DEVICE(USB_VENDOR_ID_SAMSUNG, USB_DEVICE_ID_SAMSUNG_IR_REMOTE) },


@@ -187,6 +187,8 @@ static const struct hid_device_id saitek_devices[] = {
.driver_data = SAITEK_RELEASE_MODE_RAT7 },
{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_MMO7),
.driver_data = SAITEK_RELEASE_MODE_MMO7 },
{ HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_MMO7),
.driver_data = SAITEK_RELEASE_MODE_MMO7 },
{ }
};


@@ -1080,6 +1080,7 @@ static int piix4_probe(struct pci_dev *dev, const struct pci_device_id *id)
"", &piix4_main_adapters[0]);
if (retval < 0)
return retval;
piix4_adapter_count = 1;
}
/* Check for auxiliary SMBus on some AMD chipsets */


@@ -934,6 +934,7 @@ static struct platform_driver xiic_i2c_driver = {
module_platform_driver(xiic_i2c_driver);
MODULE_ALIAS("platform:" DRIVER_NAME);
MODULE_AUTHOR("info@mocean-labs.com");
MODULE_DESCRIPTION("Xilinx I2C bus driver");
MODULE_LICENSE("GPL v2");


@@ -1433,7 +1433,7 @@ static bool validate_ipv4_net_dev(struct net_device *net_dev,
return false;
memset(&fl4, 0, sizeof(fl4));
fl4.flowi4_iif = net_dev->ifindex;
fl4.flowi4_oif = net_dev->ifindex;
fl4.daddr = daddr;
fl4.saddr = saddr;


@@ -2814,10 +2814,18 @@ static int __init ib_core_init(void)
nldev_init();
rdma_nl_register(RDMA_NL_LS, ibnl_ls_cb_table);
roce_gid_mgmt_init();
ret = roce_gid_mgmt_init();
if (ret) {
pr_warn("Couldn't init RoCE GID management\n");
goto err_parent;
}
return 0;
err_parent:
rdma_nl_unregister(RDMA_NL_LS);
nldev_exit();
unregister_pernet_device(&rdma_dev_net_ops);
err_compat:
unregister_blocking_lsm_notifier(&ibdev_lsm_nb);
err_sa:


@@ -2349,7 +2349,7 @@ void __init nldev_init(void)
rdma_nl_register(RDMA_NL_NLDEV, nldev_cb_table);
}
void __exit nldev_exit(void)
void nldev_exit(void)
{
rdma_nl_unregister(RDMA_NL_NLDEV);
}


@@ -913,8 +913,7 @@ void sc_disable(struct send_context *sc)
spin_unlock(&sc->release_lock);
write_seqlock(&sc->waitlock);
if (!list_empty(&sc->piowait))
list_move(&sc->piowait, &wake_list);
list_splice_init(&sc->piowait, &wake_list);
write_sequnlock(&sc->waitlock);
while (!list_empty(&wake_list)) {
struct iowait *wait;


@@ -82,7 +82,6 @@ static const u32 hns_roce_op_code[] = {
HR_OPC_MAP(ATOMIC_CMP_AND_SWP, ATOM_CMP_AND_SWAP),
HR_OPC_MAP(ATOMIC_FETCH_AND_ADD, ATOM_FETCH_AND_ADD),
HR_OPC_MAP(SEND_WITH_INV, SEND_WITH_INV),
HR_OPC_MAP(LOCAL_INV, LOCAL_INV),
HR_OPC_MAP(MASKED_ATOMIC_CMP_AND_SWP, ATOM_MSK_CMP_AND_SWAP),
HR_OPC_MAP(MASKED_ATOMIC_FETCH_AND_ADD, ATOM_MSK_FETCH_AND_ADD),
HR_OPC_MAP(REG_MR, FAST_REG_PMR),
@@ -149,8 +148,7 @@ static void set_atomic_seg(const struct ib_send_wr *wr,
aseg->cmp_data = 0;
}
roce_set_field(rc_sq_wqe->byte_16, V2_RC_SEND_WQE_BYTE_16_SGE_NUM_M,
V2_RC_SEND_WQE_BYTE_16_SGE_NUM_S, valid_num_sge);
hr_reg_write(rc_sq_wqe, RC_SEND_WQE_SGE_NUM, valid_num_sge);
}
static int fill_ext_sge_inl_data(struct hns_roce_qp *qp,
@@ -271,8 +269,7 @@ static int set_rc_inl(struct hns_roce_qp *qp, const struct ib_send_wr *wr,
dseg += sizeof(struct hns_roce_v2_rc_send_wqe);
if (msg_len <= HNS_ROCE_V2_MAX_RC_INL_INN_SZ) {
roce_set_bit(rc_sq_wqe->byte_20,
V2_RC_SEND_WQE_BYTE_20_INL_TYPE_S, 0);
hr_reg_clear(rc_sq_wqe, RC_SEND_WQE_INL_TYPE);
for (i = 0; i < wr->num_sge; i++) {
memcpy(dseg, ((void *)wr->sg_list[i].addr),
@@ -280,17 +277,13 @@ static int set_rc_inl(struct hns_roce_qp *qp, const struct ib_send_wr *wr,
dseg += wr->sg_list[i].length;
}
} else {
roce_set_bit(rc_sq_wqe->byte_20,
V2_RC_SEND_WQE_BYTE_20_INL_TYPE_S, 1);
hr_reg_enable(rc_sq_wqe, RC_SEND_WQE_INL_TYPE);
ret = fill_ext_sge_inl_data(qp, wr, &curr_idx, msg_len);
if (ret)
return ret;
roce_set_field(rc_sq_wqe->byte_16,
V2_RC_SEND_WQE_BYTE_16_SGE_NUM_M,
V2_RC_SEND_WQE_BYTE_16_SGE_NUM_S,
curr_idx - *sge_idx);
hr_reg_write(rc_sq_wqe, RC_SEND_WQE_SGE_NUM, curr_idx - *sge_idx);
}
*sge_idx = curr_idx;
@@ -309,12 +302,10 @@ static int set_rwqe_data_seg(struct ib_qp *ibqp, const struct ib_send_wr *wr,
int j = 0;
int i;
roce_set_field(rc_sq_wqe->byte_20,
V2_RC_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_M,
V2_RC_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_S,
hr_reg_write(rc_sq_wqe, RC_SEND_WQE_MSG_START_SGE_IDX,
(*sge_ind) & (qp->sge.sge_cnt - 1));
roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_INLINE_S,
hr_reg_write(rc_sq_wqe, RC_SEND_WQE_INLINE,
!!(wr->send_flags & IB_SEND_INLINE));
if (wr->send_flags & IB_SEND_INLINE)
return set_rc_inl(qp, wr, rc_sq_wqe, sge_ind);
@@ -339,9 +330,7 @@ static int set_rwqe_data_seg(struct ib_qp *ibqp, const struct ib_send_wr *wr,
valid_num_sge - HNS_ROCE_SGE_IN_WQE);
}
roce_set_field(rc_sq_wqe->byte_16,
V2_RC_SEND_WQE_BYTE_16_SGE_NUM_M,
V2_RC_SEND_WQE_BYTE_16_SGE_NUM_S, valid_num_sge);
hr_reg_write(rc_sq_wqe, RC_SEND_WQE_SGE_NUM, valid_num_sge);
return 0;
}
@@ -412,8 +401,7 @@ static int set_ud_opcode(struct hns_roce_v2_ud_send_wqe *ud_sq_wqe,
ud_sq_wqe->immtdata = get_immtdata(wr);
roce_set_field(ud_sq_wqe->byte_4, V2_UD_SEND_WQE_BYTE_4_OPCODE_M,
V2_UD_SEND_WQE_BYTE_4_OPCODE_S, to_hr_opcode(ib_op));
hr_reg_write(ud_sq_wqe, UD_SEND_WQE_OPCODE, to_hr_opcode(ib_op));
return 0;
}
@@ -424,21 +412,15 @@ static int fill_ud_av(struct hns_roce_v2_ud_send_wqe *ud_sq_wqe,
struct ib_device *ib_dev = ah->ibah.device;
struct hns_roce_dev *hr_dev = to_hr_dev(ib_dev);
roce_set_field(ud_sq_wqe->byte_24, V2_UD_SEND_WQE_BYTE_24_UDPSPN_M,
V2_UD_SEND_WQE_BYTE_24_UDPSPN_S, ah->av.udp_sport);
roce_set_field(ud_sq_wqe->byte_36, V2_UD_SEND_WQE_BYTE_36_HOPLIMIT_M,
V2_UD_SEND_WQE_BYTE_36_HOPLIMIT_S, ah->av.hop_limit);
roce_set_field(ud_sq_wqe->byte_36, V2_UD_SEND_WQE_BYTE_36_TCLASS_M,
V2_UD_SEND_WQE_BYTE_36_TCLASS_S, ah->av.tclass);
roce_set_field(ud_sq_wqe->byte_40, V2_UD_SEND_WQE_BYTE_40_FLOW_LABEL_M,
V2_UD_SEND_WQE_BYTE_40_FLOW_LABEL_S, ah->av.flowlabel);
hr_reg_write(ud_sq_wqe, UD_SEND_WQE_UDPSPN, ah->av.udp_sport);
hr_reg_write(ud_sq_wqe, UD_SEND_WQE_HOPLIMIT, ah->av.hop_limit);
hr_reg_write(ud_sq_wqe, UD_SEND_WQE_TCLASS, ah->av.tclass);
hr_reg_write(ud_sq_wqe, UD_SEND_WQE_FLOW_LABEL, ah->av.flowlabel);
if (WARN_ON(ah->av.sl > MAX_SERVICE_LEVEL))
return -EINVAL;
roce_set_field(ud_sq_wqe->byte_40, V2_UD_SEND_WQE_BYTE_40_SL_M,
V2_UD_SEND_WQE_BYTE_40_SL_S, ah->av.sl);
hr_reg_write(ud_sq_wqe, UD_SEND_WQE_SL, ah->av.sl);
ud_sq_wqe->sgid_index = ah->av.gid_index;
@@ -448,10 +430,8 @@ static int fill_ud_av(struct hns_roce_v2_ud_send_wqe *ud_sq_wqe,
if (hr_dev->pci_dev->revision >= PCI_REVISION_ID_HIP09)
return 0;
roce_set_bit(ud_sq_wqe->byte_40, V2_UD_SEND_WQE_BYTE_40_UD_VLAN_EN_S,
ah->av.vlan_en);
roce_set_field(ud_sq_wqe->byte_36, V2_UD_SEND_WQE_BYTE_36_VLAN_M,
V2_UD_SEND_WQE_BYTE_36_VLAN_S, ah->av.vlan_id);
hr_reg_write(ud_sq_wqe, UD_SEND_WQE_VLAN_EN, ah->av.vlan_en);
hr_reg_write(ud_sq_wqe, UD_SEND_WQE_VLAN, ah->av.vlan_id);
return 0;
}
@@ -476,27 +456,19 @@ static inline int set_ud_wqe(struct hns_roce_qp *qp,
ud_sq_wqe->msg_len = cpu_to_le32(msg_len);
roce_set_bit(ud_sq_wqe->byte_4, V2_UD_SEND_WQE_BYTE_4_CQE_S,
hr_reg_write(ud_sq_wqe, UD_SEND_WQE_CQE,
!!(wr->send_flags & IB_SEND_SIGNALED));
roce_set_bit(ud_sq_wqe->byte_4, V2_UD_SEND_WQE_BYTE_4_SE_S,
hr_reg_write(ud_sq_wqe, UD_SEND_WQE_SE,
!!(wr->send_flags & IB_SEND_SOLICITED));
roce_set_field(ud_sq_wqe->byte_16, V2_UD_SEND_WQE_BYTE_16_PD_M,
V2_UD_SEND_WQE_BYTE_16_PD_S, to_hr_pd(qp->ibqp.pd)->pdn);
roce_set_field(ud_sq_wqe->byte_16, V2_UD_SEND_WQE_BYTE_16_SGE_NUM_M,
V2_UD_SEND_WQE_BYTE_16_SGE_NUM_S, valid_num_sge);
roce_set_field(ud_sq_wqe->byte_20,
V2_UD_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_M,
V2_UD_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_S,
hr_reg_write(ud_sq_wqe, UD_SEND_WQE_PD, to_hr_pd(qp->ibqp.pd)->pdn);
hr_reg_write(ud_sq_wqe, UD_SEND_WQE_SGE_NUM, valid_num_sge);
hr_reg_write(ud_sq_wqe, UD_SEND_WQE_MSG_START_SGE_IDX,
curr_idx & (qp->sge.sge_cnt - 1));
ud_sq_wqe->qkey = cpu_to_le32(ud_wr(wr)->remote_qkey & 0x80000000 ?
qp->qkey : ud_wr(wr)->remote_qkey);
roce_set_field(ud_sq_wqe->byte_32, V2_UD_SEND_WQE_BYTE_32_DQPN_M,
V2_UD_SEND_WQE_BYTE_32_DQPN_S, ud_wr(wr)->remote_qpn);
hr_reg_write(ud_sq_wqe, UD_SEND_WQE_DQPN, ud_wr(wr)->remote_qpn);
ret = fill_ud_av(ud_sq_wqe, ah);
if (ret)
@@ -516,8 +488,7 @@ static inline int set_ud_wqe(struct hns_roce_qp *qp,
dma_wmb();
*sge_idx = curr_idx;
roce_set_bit(ud_sq_wqe->byte_4, V2_UD_SEND_WQE_BYTE_4_OWNER_S,
owner_bit);
hr_reg_write(ud_sq_wqe, UD_SEND_WQE_OWNER, owner_bit);
return 0;
}
@@ -552,9 +523,6 @@ static int set_rc_opcode(struct hns_roce_dev *hr_dev,
else
ret = -EOPNOTSUPP;
break;
case IB_WR_LOCAL_INV:
roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_SO_S, 1);
fallthrough;
case IB_WR_SEND_WITH_INV:
rc_sq_wqe->inv_key = cpu_to_le32(wr->ex.invalidate_rkey);
break;
@@ -565,11 +533,11 @@ static int set_rc_opcode(struct hns_roce_dev *hr_dev,
if (unlikely(ret))
return ret;
roce_set_field(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
V2_RC_SEND_WQE_BYTE_4_OPCODE_S, to_hr_opcode(ib_op));
hr_reg_write(rc_sq_wqe, RC_SEND_WQE_OPCODE, to_hr_opcode(ib_op));
return ret;
}
static inline int set_rc_wqe(struct hns_roce_qp *qp,
const struct ib_send_wr *wr,
void *wqe, unsigned int *sge_idx,
@@ -590,13 +558,13 @@ static inline int set_rc_wqe(struct hns_roce_qp *qp,
if (WARN_ON(ret))
return ret;
roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_FENCE_S,
hr_reg_write(rc_sq_wqe, RC_SEND_WQE_FENCE,
(wr->send_flags & IB_SEND_FENCE) ? 1 : 0);
roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_SE_S,
hr_reg_write(rc_sq_wqe, RC_SEND_WQE_SE,
(wr->send_flags & IB_SEND_SOLICITED) ? 1 : 0);
roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_CQE_S,
hr_reg_write(rc_sq_wqe, RC_SEND_WQE_CQE,
(wr->send_flags & IB_SEND_SIGNALED) ? 1 : 0);
if (wr->opcode == IB_WR_ATOMIC_CMP_AND_SWP ||
@@ -616,8 +584,7 @@ static inline int set_rc_wqe(struct hns_roce_qp *qp,
dma_wmb();
*sge_idx = curr_idx;
roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_OWNER_S,
owner_bit);
hr_reg_write(rc_sq_wqe, RC_SEND_WQE_OWNER, owner_bit);
return ret;
}
@@ -678,16 +645,15 @@ static void hns_roce_write512(struct hns_roce_dev *hr_dev, u64 *val,
static void write_dwqe(struct hns_roce_dev *hr_dev, struct hns_roce_qp *qp,
void *wqe)
{
#define HNS_ROCE_SL_SHIFT 2
struct hns_roce_v2_rc_send_wqe *rc_sq_wqe = wqe;
/* All kinds of DirectWQE have the same header field layout */
roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_FLAG_S, 1);
roce_set_field(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_DB_SL_L_M,
V2_RC_SEND_WQE_BYTE_4_DB_SL_L_S, qp->sl);
roce_set_field(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_DB_SL_H_M,
V2_RC_SEND_WQE_BYTE_4_DB_SL_H_S, qp->sl >> 2);
roce_set_field(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_WQE_INDEX_M,
V2_RC_SEND_WQE_BYTE_4_WQE_INDEX_S, qp->sq.head);
hr_reg_enable(rc_sq_wqe, RC_SEND_WQE_FLAG);
hr_reg_write(rc_sq_wqe, RC_SEND_WQE_DB_SL_L, qp->sl);
hr_reg_write(rc_sq_wqe, RC_SEND_WQE_DB_SL_H,
qp->sl >> HNS_ROCE_SL_SHIFT);
hr_reg_write(rc_sq_wqe, RC_SEND_WQE_WQE_INDEX, qp->sq.head);
hns_roce_write512(hr_dev, wqe, qp->sq.db_reg);
}
@@ -1783,17 +1749,16 @@ static int __hns_roce_set_vf_switch_param(struct hns_roce_dev *hr_dev,
swt = (struct hns_roce_vf_switch *)desc.data;
hns_roce_cmq_setup_basic_desc(&desc, HNS_SWITCH_PARAMETER_CFG, true);
swt->rocee_sel |= cpu_to_le32(HNS_ICL_SWITCH_CMD_ROCEE_SEL);
roce_set_field(swt->fun_id, VF_SWITCH_DATA_FUN_ID_VF_ID_M,
VF_SWITCH_DATA_FUN_ID_VF_ID_S, vf_id);
hr_reg_write(swt, VF_SWITCH_VF_ID, vf_id);
ret = hns_roce_cmq_send(hr_dev, &desc, 1);
if (ret)
return ret;
desc.flag = cpu_to_le16(HNS_ROCE_CMD_FLAG_IN);
desc.flag &= cpu_to_le16(~HNS_ROCE_CMD_FLAG_WR);
roce_set_bit(swt->cfg, VF_SWITCH_DATA_CFG_ALW_LPBK_S, 1);
roce_set_bit(swt->cfg, VF_SWITCH_DATA_CFG_ALW_LCL_LPBK_S, 0);
roce_set_bit(swt->cfg, VF_SWITCH_DATA_CFG_ALW_DST_OVRD_S, 1);
hr_reg_enable(swt, VF_SWITCH_ALW_LPBK);
hr_reg_clear(swt, VF_SWITCH_ALW_LCL_LPBK);
hr_reg_enable(swt, VF_SWITCH_ALW_DST_OVRD);
return hns_roce_cmq_send(hr_dev, &desc, 1);
}
@@ -2942,10 +2907,8 @@ static int config_sgid_table(struct hns_roce_dev *hr_dev,
hns_roce_cmq_setup_basic_desc(&desc, HNS_ROCE_OPC_CFG_SGID_TB, false);
roce_set_field(sgid_tb->table_idx_rsv, CFG_SGID_TB_TABLE_IDX_M,
CFG_SGID_TB_TABLE_IDX_S, gid_index);
roce_set_field(sgid_tb->vf_sgid_type_rsv, CFG_SGID_TB_VF_SGID_TYPE_M,
CFG_SGID_TB_VF_SGID_TYPE_S, sgid_type);
hr_reg_write(sgid_tb, CFG_SGID_TB_TABLE_IDX, gid_index);
hr_reg_write(sgid_tb, CFG_SGID_TB_VF_SGID_TYPE, sgid_type);
copy_gid(&sgid_tb->vf_sgid_l, gid);
@@ -2980,19 +2943,14 @@ static int config_gmv_table(struct hns_roce_dev *hr_dev,
copy_gid(&tb_a->vf_sgid_l, gid);
roce_set_field(tb_a->vf_sgid_type_vlan, CFG_GMV_TB_VF_SGID_TYPE_M,
CFG_GMV_TB_VF_SGID_TYPE_S, sgid_type);
roce_set_bit(tb_a->vf_sgid_type_vlan, CFG_GMV_TB_VF_VLAN_EN_S,
vlan_id < VLAN_CFI_MASK);
roce_set_field(tb_a->vf_sgid_type_vlan, CFG_GMV_TB_VF_VLAN_ID_M,
CFG_GMV_TB_VF_VLAN_ID_S, vlan_id);
hr_reg_write(tb_a, GMV_TB_A_VF_SGID_TYPE, sgid_type);
hr_reg_write(tb_a, GMV_TB_A_VF_VLAN_EN, vlan_id < VLAN_CFI_MASK);
hr_reg_write(tb_a, GMV_TB_A_VF_VLAN_ID, vlan_id);
tb_b->vf_smac_l = cpu_to_le32(*(u32 *)mac);
roce_set_field(tb_b->vf_smac_h, CFG_GMV_TB_SMAC_H_M,
CFG_GMV_TB_SMAC_H_S, *(u16 *)&mac[4]);
roce_set_field(tb_b->table_idx_rsv, CFG_GMV_TB_SGID_IDX_M,
CFG_GMV_TB_SGID_IDX_S, gid_index);
hr_reg_write(tb_b, GMV_TB_B_SMAC_H, *(u16 *)&mac[4]);
hr_reg_write(tb_b, GMV_TB_B_SGID_IDX, gid_index);
return hns_roce_cmq_send(hr_dev, desc, 2);
}
@@ -3041,10 +2999,8 @@ static int hns_roce_v2_set_mac(struct hns_roce_dev *hr_dev, u8 phy_port,
reg_smac_l = *(u32 *)(&addr[0]);
reg_smac_h = *(u16 *)(&addr[4]);
roce_set_field(smac_tb->tb_idx_rsv, CFG_SMAC_TB_IDX_M,
CFG_SMAC_TB_IDX_S, phy_port);
roce_set_field(smac_tb->vf_smac_h_rsv, CFG_SMAC_TB_VF_SMAC_H_M,
CFG_SMAC_TB_VF_SMAC_H_S, reg_smac_h);
hr_reg_write(smac_tb, CFG_SMAC_TB_IDX, phy_port);
hr_reg_write(smac_tb, CFG_SMAC_TB_VF_SMAC_H, reg_smac_h);
smac_tb->vf_smac_l = cpu_to_le32(reg_smac_l);
return hns_roce_cmq_send(hr_dev, &desc, 1);
@@ -3073,20 +3029,14 @@ static int set_mtpt_pbl(struct hns_roce_dev *hr_dev,
mpt_entry->pbl_size = cpu_to_le32(mr->npages);
mpt_entry->pbl_ba_l = cpu_to_le32(pbl_ba >> 3);
roce_set_field(mpt_entry->byte_48_mode_ba,
V2_MPT_BYTE_48_PBL_BA_H_M, V2_MPT_BYTE_48_PBL_BA_H_S,
upper_32_bits(pbl_ba >> 3));
hr_reg_write(mpt_entry, MPT_PBL_BA_H, upper_32_bits(pbl_ba >> 3));
mpt_entry->pa0_l = cpu_to_le32(lower_32_bits(pages[0]));
roce_set_field(mpt_entry->byte_56_pa0_h, V2_MPT_BYTE_56_PA0_H_M,
V2_MPT_BYTE_56_PA0_H_S, upper_32_bits(pages[0]));
hr_reg_write(mpt_entry, MPT_PA0_H, upper_32_bits(pages[0]));
mpt_entry->pa1_l = cpu_to_le32(lower_32_bits(pages[1]));
roce_set_field(mpt_entry->byte_64_buf_pa1, V2_MPT_BYTE_64_PA1_H_M,
V2_MPT_BYTE_64_PA1_H_S, upper_32_bits(pages[1]));
roce_set_field(mpt_entry->byte_64_buf_pa1,
V2_MPT_BYTE_64_PBL_BUF_PG_SZ_M,
V2_MPT_BYTE_64_PBL_BUF_PG_SZ_S,
hr_reg_write(mpt_entry, MPT_PA1_H, upper_32_bits(pages[1]));
hr_reg_write(mpt_entry, MPT_PBL_BUF_PG_SZ,
to_hr_hw_page_shift(mr->pbl_mtr.hem_cfg.buf_pg_shift));
return 0;
@@ -3104,7 +3054,6 @@ static int hns_roce_v2_write_mtpt(struct hns_roce_dev *hr_dev,
hr_reg_write(mpt_entry, MPT_ST, V2_MPT_ST_VALID);
hr_reg_write(mpt_entry, MPT_PD, mr->pd);
hr_reg_enable(mpt_entry, MPT_L_INV_EN);
hr_reg_write_bool(mpt_entry, MPT_BIND_EN,
mr->access & IB_ACCESS_MW_BIND);
@@ -3149,24 +3098,19 @@ static int hns_roce_v2_rereg_write_mtpt(struct hns_roce_dev *hr_dev,
u32 mr_access_flags = mr->access;
int ret = 0;
roce_set_field(mpt_entry->byte_4_pd_hop_st, V2_MPT_BYTE_4_MPT_ST_M,
V2_MPT_BYTE_4_MPT_ST_S, V2_MPT_ST_VALID);
roce_set_field(mpt_entry->byte_4_pd_hop_st, V2_MPT_BYTE_4_PD_M,
V2_MPT_BYTE_4_PD_S, mr->pd);
hr_reg_write(mpt_entry, MPT_ST, V2_MPT_ST_VALID);
hr_reg_write(mpt_entry, MPT_PD, mr->pd);
if (flags & IB_MR_REREG_ACCESS) {
roce_set_bit(mpt_entry->byte_8_mw_cnt_en,
V2_MPT_BYTE_8_BIND_EN_S,
hr_reg_write(mpt_entry, MPT_BIND_EN,
(mr_access_flags & IB_ACCESS_MW_BIND ? 1 : 0));
roce_set_bit(mpt_entry->byte_8_mw_cnt_en,
V2_MPT_BYTE_8_ATOMIC_EN_S,
hr_reg_write(mpt_entry, MPT_ATOMIC_EN,
mr_access_flags & IB_ACCESS_REMOTE_ATOMIC ? 1 : 0);
roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_RR_EN_S,
hr_reg_write(mpt_entry, MPT_RR_EN,
mr_access_flags & IB_ACCESS_REMOTE_READ ? 1 : 0);
roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_RW_EN_S,
hr_reg_write(mpt_entry, MPT_RW_EN,
mr_access_flags & IB_ACCESS_REMOTE_WRITE ? 1 : 0);
roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_LW_EN_S,
hr_reg_write(mpt_entry, MPT_LW_EN,
mr_access_flags & IB_ACCESS_LOCAL_WRITE ? 1 : 0);
}
@@ -3197,37 +3141,27 @@ static int hns_roce_v2_frmr_write_mtpt(struct hns_roce_dev *hr_dev,
return -ENOBUFS;
}
roce_set_field(mpt_entry->byte_4_pd_hop_st, V2_MPT_BYTE_4_MPT_ST_M,
V2_MPT_BYTE_4_MPT_ST_S, V2_MPT_ST_FREE);
roce_set_field(mpt_entry->byte_4_pd_hop_st, V2_MPT_BYTE_4_PBL_HOP_NUM_M,
V2_MPT_BYTE_4_PBL_HOP_NUM_S, 1);
roce_set_field(mpt_entry->byte_4_pd_hop_st,
V2_MPT_BYTE_4_PBL_BA_PG_SZ_M,
V2_MPT_BYTE_4_PBL_BA_PG_SZ_S,
hr_reg_write(mpt_entry, MPT_ST, V2_MPT_ST_FREE);
hr_reg_write(mpt_entry, MPT_PD, mr->pd);
hr_reg_enable(mpt_entry, MPT_RA_EN);
hr_reg_enable(mpt_entry, MPT_R_INV_EN);
hr_reg_enable(mpt_entry, MPT_FRE);
hr_reg_clear(mpt_entry, MPT_MR_MW);
hr_reg_enable(mpt_entry, MPT_BPD);
hr_reg_clear(mpt_entry, MPT_PA);
hr_reg_write(mpt_entry, MPT_PBL_HOP_NUM, 1);
hr_reg_write(mpt_entry, MPT_PBL_BA_PG_SZ,
to_hr_hw_page_shift(mr->pbl_mtr.hem_cfg.ba_pg_shift));
roce_set_field(mpt_entry->byte_4_pd_hop_st, V2_MPT_BYTE_4_PD_M,
V2_MPT_BYTE_4_PD_S, mr->pd);
roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_RA_EN_S, 1);
roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_R_INV_EN_S, 1);
roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_L_INV_EN_S, 1);
roce_set_bit(mpt_entry->byte_12_mw_pa, V2_MPT_BYTE_12_FRE_S, 1);
roce_set_bit(mpt_entry->byte_12_mw_pa, V2_MPT_BYTE_12_PA_S, 0);
roce_set_bit(mpt_entry->byte_12_mw_pa, V2_MPT_BYTE_12_MR_MW_S, 0);
roce_set_bit(mpt_entry->byte_12_mw_pa, V2_MPT_BYTE_12_BPD_S, 1);
hr_reg_write(mpt_entry, MPT_PBL_BUF_PG_SZ,
to_hr_hw_page_shift(mr->pbl_mtr.hem_cfg.buf_pg_shift));
mpt_entry->pbl_size = cpu_to_le32(mr->npages);
mpt_entry->pbl_ba_l = cpu_to_le32(lower_32_bits(pbl_ba >> 3));
roce_set_field(mpt_entry->byte_48_mode_ba, V2_MPT_BYTE_48_PBL_BA_H_M,
V2_MPT_BYTE_48_PBL_BA_H_S,
upper_32_bits(pbl_ba >> 3));
roce_set_field(mpt_entry->byte_64_buf_pa1,
V2_MPT_BYTE_64_PBL_BUF_PG_SZ_M,
V2_MPT_BYTE_64_PBL_BUF_PG_SZ_S,
to_hr_hw_page_shift(mr->pbl_mtr.hem_cfg.buf_pg_shift));
hr_reg_write(mpt_entry, MPT_PBL_BA_H, upper_32_bits(pbl_ba >> 3));
return 0;
}
@@ -3239,36 +3173,28 @@ static int hns_roce_v2_mw_write_mtpt(void *mb_buf, struct hns_roce_mw *mw)
mpt_entry = mb_buf;
memset(mpt_entry, 0, sizeof(*mpt_entry));
roce_set_field(mpt_entry->byte_4_pd_hop_st, V2_MPT_BYTE_4_MPT_ST_M,
V2_MPT_BYTE_4_MPT_ST_S, V2_MPT_ST_FREE);
roce_set_field(mpt_entry->byte_4_pd_hop_st, V2_MPT_BYTE_4_PD_M,
V2_MPT_BYTE_4_PD_S, mw->pdn);
roce_set_field(mpt_entry->byte_4_pd_hop_st, V2_MPT_BYTE_4_PBL_HOP_NUM_M,
V2_MPT_BYTE_4_PBL_HOP_NUM_S,
mw->pbl_hop_num == HNS_ROCE_HOP_NUM_0 ? 0 :
mw->pbl_hop_num);
roce_set_field(mpt_entry->byte_4_pd_hop_st,
V2_MPT_BYTE_4_PBL_BA_PG_SZ_M,
V2_MPT_BYTE_4_PBL_BA_PG_SZ_S,
mw->pbl_ba_pg_sz + PG_SHIFT_OFFSET);
hr_reg_write(mpt_entry, MPT_ST, V2_MPT_ST_FREE);
hr_reg_write(mpt_entry, MPT_PD, mw->pdn);
roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_R_INV_EN_S, 1);
roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_L_INV_EN_S, 1);
roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_LW_EN_S, 1);
hr_reg_enable(mpt_entry, MPT_R_INV_EN);
hr_reg_enable(mpt_entry, MPT_LW_EN);
roce_set_bit(mpt_entry->byte_12_mw_pa, V2_MPT_BYTE_12_PA_S, 0);
roce_set_bit(mpt_entry->byte_12_mw_pa, V2_MPT_BYTE_12_MR_MW_S, 1);
roce_set_bit(mpt_entry->byte_12_mw_pa, V2_MPT_BYTE_12_BPD_S, 1);
roce_set_bit(mpt_entry->byte_12_mw_pa, V2_MPT_BYTE_12_BQP_S,
hr_reg_enable(mpt_entry, MPT_MR_MW);
hr_reg_enable(mpt_entry, MPT_BPD);
hr_reg_clear(mpt_entry, MPT_PA);
hr_reg_write(mpt_entry, MPT_BQP,
mw->ibmw.type == IB_MW_TYPE_1 ? 0 : 1);
roce_set_field(mpt_entry->byte_64_buf_pa1,
V2_MPT_BYTE_64_PBL_BUF_PG_SZ_M,
V2_MPT_BYTE_64_PBL_BUF_PG_SZ_S,
mw->pbl_buf_pg_sz + PG_SHIFT_OFFSET);
mpt_entry->lkey = cpu_to_le32(mw->rkey);
hr_reg_write(mpt_entry, MPT_PBL_HOP_NUM,
mw->pbl_hop_num == HNS_ROCE_HOP_NUM_0 ? 0 :
mw->pbl_hop_num);
hr_reg_write(mpt_entry, MPT_PBL_BA_PG_SZ,
mw->pbl_ba_pg_sz + PG_SHIFT_OFFSET);
hr_reg_write(mpt_entry, MPT_PBL_BUF_PG_SZ,
mw->pbl_buf_pg_sz + PG_SHIFT_OFFSET);
return 0;
}
@@ -3607,7 +3533,6 @@ static const u32 wc_send_op_map[] = {
HR_WC_OP_MAP(RDMA_READ, RDMA_READ),
HR_WC_OP_MAP(RDMA_WRITE, RDMA_WRITE),
HR_WC_OP_MAP(RDMA_WRITE_WITH_IMM, RDMA_WRITE),
HR_WC_OP_MAP(LOCAL_INV, LOCAL_INV),
HR_WC_OP_MAP(ATOM_CMP_AND_SWAP, COMP_SWAP),
HR_WC_OP_MAP(ATOM_FETCH_AND_ADD, FETCH_ADD),
HR_WC_OP_MAP(ATOM_MSK_CMP_AND_SWAP, MASKED_COMP_SWAP),
@@ -3657,9 +3582,6 @@ static void fill_send_wc(struct ib_wc *wc, struct hns_roce_v2_cqe *cqe)
case HNS_ROCE_V2_WQE_OP_RDMA_WRITE_WITH_IMM:
wc->wc_flags |= IB_WC_WITH_IMM;
break;
case HNS_ROCE_V2_WQE_OP_LOCAL_INV:
wc->wc_flags |= IB_WC_WITH_INVALIDATE;
break;
case HNS_ROCE_V2_WQE_OP_ATOM_CMP_AND_SWAP:
case HNS_ROCE_V2_WQE_OP_ATOM_FETCH_AND_ADD:
case HNS_ROCE_V2_WQE_OP_ATOM_MSK_CMP_AND_SWAP:


@@ -184,7 +184,6 @@ enum {
HNS_ROCE_V2_WQE_OP_ATOM_MSK_CMP_AND_SWAP = 0x8,
HNS_ROCE_V2_WQE_OP_ATOM_MSK_FETCH_AND_ADD = 0x9,
HNS_ROCE_V2_WQE_OP_FAST_REG_PMR = 0xa,
HNS_ROCE_V2_WQE_OP_LOCAL_INV = 0xb,
HNS_ROCE_V2_WQE_OP_BIND_MW = 0xc,
HNS_ROCE_V2_WQE_OP_MASK = 0x1f,
};
@@ -790,12 +789,15 @@ struct hns_roce_v2_mpt_entry {
#define MPT_LKEY MPT_FIELD_LOC(223, 192)
#define MPT_VA MPT_FIELD_LOC(287, 224)
#define MPT_PBL_SIZE MPT_FIELD_LOC(319, 288)
#define MPT_PBL_BA MPT_FIELD_LOC(380, 320)
#define MPT_PBL_BA_L MPT_FIELD_LOC(351, 320)
#define MPT_PBL_BA_H MPT_FIELD_LOC(380, 352)
#define MPT_BLK_MODE MPT_FIELD_LOC(381, 381)
#define MPT_RSV0 MPT_FIELD_LOC(383, 382)
#define MPT_PA0 MPT_FIELD_LOC(441, 384)
#define MPT_PA0_L MPT_FIELD_LOC(415, 384)
#define MPT_PA0_H MPT_FIELD_LOC(441, 416)
#define MPT_BOUND_VA MPT_FIELD_LOC(447, 442)
#define MPT_PA1 MPT_FIELD_LOC(505, 448)
#define MPT_PA1_L MPT_FIELD_LOC(479, 448)
#define MPT_PA1_H MPT_FIELD_LOC(505, 480)
#define MPT_PERSIST_EN MPT_FIELD_LOC(506, 506)
#define MPT_RSV2 MPT_FIELD_LOC(507, 507)
#define MPT_PBL_BUF_PG_SZ MPT_FIELD_LOC(511, 508)
@@ -901,48 +903,24 @@ struct hns_roce_v2_ud_send_wqe {
u8 dgid[GID_LEN_V2];
};
#define V2_UD_SEND_WQE_BYTE_4_OPCODE_S 0
#define V2_UD_SEND_WQE_BYTE_4_OPCODE_M GENMASK(4, 0)
#define UD_SEND_WQE_FIELD_LOC(h, l) FIELD_LOC(struct hns_roce_v2_ud_send_wqe, h, l)
#define V2_UD_SEND_WQE_BYTE_4_OWNER_S 7
#define V2_UD_SEND_WQE_BYTE_4_CQE_S 8
#define V2_UD_SEND_WQE_BYTE_4_SE_S 11
#define V2_UD_SEND_WQE_BYTE_16_PD_S 0
#define V2_UD_SEND_WQE_BYTE_16_PD_M GENMASK(23, 0)
#define V2_UD_SEND_WQE_BYTE_16_SGE_NUM_S 24
#define V2_UD_SEND_WQE_BYTE_16_SGE_NUM_M GENMASK(31, 24)
#define V2_UD_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_S 0
#define V2_UD_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_M GENMASK(23, 0)
#define V2_UD_SEND_WQE_BYTE_24_UDPSPN_S 16
#define V2_UD_SEND_WQE_BYTE_24_UDPSPN_M GENMASK(31, 16)
#define V2_UD_SEND_WQE_BYTE_32_DQPN_S 0
#define V2_UD_SEND_WQE_BYTE_32_DQPN_M GENMASK(23, 0)
#define V2_UD_SEND_WQE_BYTE_36_VLAN_S 0
#define V2_UD_SEND_WQE_BYTE_36_VLAN_M GENMASK(15, 0)
#define V2_UD_SEND_WQE_BYTE_36_HOPLIMIT_S 16
#define V2_UD_SEND_WQE_BYTE_36_HOPLIMIT_M GENMASK(23, 16)
#define V2_UD_SEND_WQE_BYTE_36_TCLASS_S 24
#define V2_UD_SEND_WQE_BYTE_36_TCLASS_M GENMASK(31, 24)
#define V2_UD_SEND_WQE_BYTE_40_FLOW_LABEL_S 0
#define V2_UD_SEND_WQE_BYTE_40_FLOW_LABEL_M GENMASK(19, 0)
#define V2_UD_SEND_WQE_BYTE_40_SL_S 20
#define V2_UD_SEND_WQE_BYTE_40_SL_M GENMASK(23, 20)
#define V2_UD_SEND_WQE_BYTE_40_UD_VLAN_EN_S 30
#define V2_UD_SEND_WQE_BYTE_40_LBI_S 31
#define UD_SEND_WQE_OPCODE UD_SEND_WQE_FIELD_LOC(4, 0)
#define UD_SEND_WQE_OWNER UD_SEND_WQE_FIELD_LOC(7, 7)
#define UD_SEND_WQE_CQE UD_SEND_WQE_FIELD_LOC(8, 8)
#define UD_SEND_WQE_SE UD_SEND_WQE_FIELD_LOC(11, 11)
#define UD_SEND_WQE_PD UD_SEND_WQE_FIELD_LOC(119, 96)
#define UD_SEND_WQE_SGE_NUM UD_SEND_WQE_FIELD_LOC(127, 120)
#define UD_SEND_WQE_MSG_START_SGE_IDX UD_SEND_WQE_FIELD_LOC(151, 128)
#define UD_SEND_WQE_UDPSPN UD_SEND_WQE_FIELD_LOC(191, 176)
#define UD_SEND_WQE_DQPN UD_SEND_WQE_FIELD_LOC(247, 224)
#define UD_SEND_WQE_VLAN UD_SEND_WQE_FIELD_LOC(271, 256)
#define UD_SEND_WQE_HOPLIMIT UD_SEND_WQE_FIELD_LOC(279, 272)
#define UD_SEND_WQE_TCLASS UD_SEND_WQE_FIELD_LOC(287, 280)
#define UD_SEND_WQE_FLOW_LABEL UD_SEND_WQE_FIELD_LOC(307, 288)
#define UD_SEND_WQE_SL UD_SEND_WQE_FIELD_LOC(311, 308)
#define UD_SEND_WQE_VLAN_EN UD_SEND_WQE_FIELD_LOC(318, 318)
#define UD_SEND_WQE_LBI UD_SEND_WQE_FIELD_LOC(319, 319)
struct hns_roce_v2_rc_send_wqe {
__le32 byte_4;
@@ -957,42 +935,22 @@ struct hns_roce_v2_rc_send_wqe {
__le64 va;
};
#define V2_RC_SEND_WQE_BYTE_4_OPCODE_S 0
#define V2_RC_SEND_WQE_BYTE_4_OPCODE_M GENMASK(4, 0)
#define RC_SEND_WQE_FIELD_LOC(h, l) FIELD_LOC(struct hns_roce_v2_rc_send_wqe, h, l)
#define V2_RC_SEND_WQE_BYTE_4_DB_SL_L_S 5
#define V2_RC_SEND_WQE_BYTE_4_DB_SL_L_M GENMASK(6, 5)
#define V2_RC_SEND_WQE_BYTE_4_DB_SL_H_S 13
#define V2_RC_SEND_WQE_BYTE_4_DB_SL_H_M GENMASK(14, 13)
#define V2_RC_SEND_WQE_BYTE_4_WQE_INDEX_S 15
#define V2_RC_SEND_WQE_BYTE_4_WQE_INDEX_M GENMASK(30, 15)
#define V2_RC_SEND_WQE_BYTE_4_OWNER_S 7
#define V2_RC_SEND_WQE_BYTE_4_CQE_S 8
#define V2_RC_SEND_WQE_BYTE_4_FENCE_S 9
#define V2_RC_SEND_WQE_BYTE_4_SO_S 10
#define V2_RC_SEND_WQE_BYTE_4_SE_S 11
#define V2_RC_SEND_WQE_BYTE_4_INLINE_S 12
#define V2_RC_SEND_WQE_BYTE_4_FLAG_S 31
#define V2_RC_SEND_WQE_BYTE_16_XRC_SRQN_S 0
#define V2_RC_SEND_WQE_BYTE_16_XRC_SRQN_M GENMASK(23, 0)
#define V2_RC_SEND_WQE_BYTE_16_SGE_NUM_S 24
#define V2_RC_SEND_WQE_BYTE_16_SGE_NUM_M GENMASK(31, 24)
#define V2_RC_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_S 0
#define V2_RC_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_M GENMASK(23, 0)
#define V2_RC_SEND_WQE_BYTE_20_INL_TYPE_S 31
#define RC_SEND_WQE_OPCODE RC_SEND_WQE_FIELD_LOC(4, 0)
#define RC_SEND_WQE_DB_SL_L RC_SEND_WQE_FIELD_LOC(6, 5)
#define RC_SEND_WQE_DB_SL_H RC_SEND_WQE_FIELD_LOC(14, 13)
#define RC_SEND_WQE_OWNER RC_SEND_WQE_FIELD_LOC(7, 7)
#define RC_SEND_WQE_CQE RC_SEND_WQE_FIELD_LOC(8, 8)
#define RC_SEND_WQE_FENCE RC_SEND_WQE_FIELD_LOC(9, 9)
#define RC_SEND_WQE_SE RC_SEND_WQE_FIELD_LOC(11, 11)
#define RC_SEND_WQE_INLINE RC_SEND_WQE_FIELD_LOC(12, 12)
#define RC_SEND_WQE_WQE_INDEX RC_SEND_WQE_FIELD_LOC(30, 15)
#define RC_SEND_WQE_FLAG RC_SEND_WQE_FIELD_LOC(31, 31)
#define RC_SEND_WQE_XRC_SRQN RC_SEND_WQE_FIELD_LOC(119, 96)
#define RC_SEND_WQE_SGE_NUM RC_SEND_WQE_FIELD_LOC(127, 120)
#define RC_SEND_WQE_MSG_START_SGE_IDX RC_SEND_WQE_FIELD_LOC(151, 128)
#define RC_SEND_WQE_INL_TYPE RC_SEND_WQE_FIELD_LOC(159, 159)
struct hns_roce_wqe_frmr_seg {
__le32 pbl_size;
@@ -1114,12 +1072,12 @@ struct hns_roce_vf_switch {
__le32 resv3;
};
#define VF_SWITCH_DATA_FUN_ID_VF_ID_S 3
#define VF_SWITCH_DATA_FUN_ID_VF_ID_M GENMASK(10, 3)
#define VF_SWITCH_FIELD_LOC(h, l) FIELD_LOC(struct hns_roce_vf_switch, h, l)
#define VF_SWITCH_DATA_CFG_ALW_LPBK_S 1
#define VF_SWITCH_DATA_CFG_ALW_LCL_LPBK_S 2
#define VF_SWITCH_DATA_CFG_ALW_DST_OVRD_S 3
#define VF_SWITCH_VF_ID VF_SWITCH_FIELD_LOC(42, 35)
#define VF_SWITCH_ALW_LPBK VF_SWITCH_FIELD_LOC(65, 65)
#define VF_SWITCH_ALW_LCL_LPBK VF_SWITCH_FIELD_LOC(66, 66)
#define VF_SWITCH_ALW_DST_OVRD VF_SWITCH_FIELD_LOC(67, 67)
struct hns_roce_post_mbox {
__le32 in_param_l;
@@ -1182,11 +1140,10 @@ struct hns_roce_cfg_sgid_tb {
__le32 vf_sgid_type_rsv;
};
#define CFG_SGID_TB_TABLE_IDX_S 0
#define CFG_SGID_TB_TABLE_IDX_M GENMASK(7, 0)
#define SGID_TB_FIELD_LOC(h, l) FIELD_LOC(struct hns_roce_cfg_sgid_tb, h, l)
#define CFG_SGID_TB_VF_SGID_TYPE_S 0
#define CFG_SGID_TB_VF_SGID_TYPE_M GENMASK(1, 0)
#define CFG_SGID_TB_TABLE_IDX SGID_TB_FIELD_LOC(7, 0)
#define CFG_SGID_TB_VF_SGID_TYPE SGID_TB_FIELD_LOC(161, 160)
struct hns_roce_cfg_smac_tb {
__le32 tb_idx_rsv;
@@ -1194,11 +1151,11 @@ struct hns_roce_cfg_smac_tb {
__le32 vf_smac_h_rsv;
__le32 rsv[3];
};
#define CFG_SMAC_TB_IDX_S 0
#define CFG_SMAC_TB_IDX_M GENMASK(7, 0)
#define CFG_SMAC_TB_VF_SMAC_H_S 0
#define CFG_SMAC_TB_VF_SMAC_H_M GENMASK(15, 0)
#define SMAC_TB_FIELD_LOC(h, l) FIELD_LOC(struct hns_roce_cfg_smac_tb, h, l)
#define CFG_SMAC_TB_IDX SMAC_TB_FIELD_LOC(7, 0)
#define CFG_SMAC_TB_VF_SMAC_H SMAC_TB_FIELD_LOC(79, 64)
struct hns_roce_cfg_gmv_tb_a {
__le32 vf_sgid_l;
@@ -1209,16 +1166,11 @@ struct hns_roce_cfg_gmv_tb_a {
__le32 resv;
};
#define CFG_GMV_TB_SGID_IDX_S 0
#define CFG_GMV_TB_SGID_IDX_M GENMASK(7, 0)
#define GMV_TB_A_FIELD_LOC(h, l) FIELD_LOC(struct hns_roce_cfg_gmv_tb_a, h, l)
#define CFG_GMV_TB_VF_SGID_TYPE_S 0
#define CFG_GMV_TB_VF_SGID_TYPE_M GENMASK(1, 0)
#define CFG_GMV_TB_VF_VLAN_EN_S 2
#define CFG_GMV_TB_VF_VLAN_ID_S 16
#define CFG_GMV_TB_VF_VLAN_ID_M GENMASK(27, 16)
#define GMV_TB_A_VF_SGID_TYPE GMV_TB_A_FIELD_LOC(129, 128)
#define GMV_TB_A_VF_VLAN_EN GMV_TB_A_FIELD_LOC(130, 130)
#define GMV_TB_A_VF_VLAN_ID GMV_TB_A_FIELD_LOC(155, 144)
struct hns_roce_cfg_gmv_tb_b {
__le32 vf_smac_l;
@@ -1227,8 +1179,10 @@ struct hns_roce_cfg_gmv_tb_b {
__le32 resv[3];
};
#define CFG_GMV_TB_SMAC_H_S 0
#define CFG_GMV_TB_SMAC_H_M GENMASK(15, 0)
#define GMV_TB_B_FIELD_LOC(h, l) FIELD_LOC(struct hns_roce_cfg_gmv_tb_b, h, l)
#define GMV_TB_B_SMAC_H GMV_TB_B_FIELD_LOC(47, 32)
#define GMV_TB_B_SGID_IDX GMV_TB_B_FIELD_LOC(71, 64)
#define HNS_ROCE_QUERY_PF_CAPS_CMD_NUM 5
struct hns_roce_query_pf_caps_a {
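The conversions above replace paired _S/_M shift-and-mask defines with single FIELD_LOC descriptors that name each field by its absolute high and low bit positions within the structure, so one generic accessor family (hr_reg_write()/hr_reg_enable()/hr_reg_clear()) can do the masking. Note in the MPT hunk that a field spanning a 32-bit word boundary, such as MPT_PBL_BA, has to be split into _L/_H halves. Below is a minimal user-space sketch of the convention, assuming a simple 64-bit packing for the descriptor; it is an illustration, not the kernel's actual hr_reg implementation.

#include <stdint.h>
#include <stdio.h>

#define FIELD_LOC(h, l)	((((uint64_t)(h)) << 32) | (l))
#define FIELD_HI(loc)	((uint32_t)((loc) >> 32))
#define FIELD_LO(loc)	((uint32_t)(loc))

/* Mirrors CFG_SMAC_TB_VF_SMAC_H above: bits 79..64, i.e. word 2, bits 15..0 */
#define SMAC_TB_VF_SMAC_H	FIELD_LOC(79, 64)

static void reg_write(uint32_t *regs, uint64_t loc, uint32_t val)
{
	uint32_t lo = FIELD_LO(loc), hi = FIELD_HI(loc);
	uint32_t word = lo / 32, shift = lo % 32;
	uint32_t width = hi - lo + 1;	/* fields must not cross a word */
	uint32_t mask = width == 32 ? 0xffffffffu : (1u << width) - 1;

	regs[word] = (regs[word] & ~(mask << shift)) | ((val & mask) << shift);
}

int main(void)
{
	uint32_t smac_tb[6] = { 0 };

	reg_write(smac_tb, SMAC_TB_VF_SMAC_H, 0xbeef);
	printf("word2 = 0x%08x\n", smac_tb[2]);	/* prints 0x0000beef */
	return 0;
}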


@@ -345,6 +345,10 @@ static int qedr_alloc_resources(struct qedr_dev *dev)
if (IS_IWARP(dev)) {
xa_init(&dev->qps);
dev->iwarp_wq = create_singlethread_workqueue("qedr_iwarpq");
if (!dev->iwarp_wq) {
rc = -ENOMEM;
goto err1;
}
}
/* Allocate Status blocks for CNQ */
@@ -352,7 +356,7 @@ static int qedr_alloc_resources(struct qedr_dev *dev)
GFP_KERNEL);
if (!dev->sb_array) {
rc = -ENOMEM;
goto err1;
goto err_destroy_wq;
}
dev->cnq_array = kcalloc(dev->num_cnq,
@@ -403,6 +407,9 @@ err3:
kfree(dev->cnq_array);
err2:
kfree(dev->sb_array);
err_destroy_wq:
if (IS_IWARP(dev))
destroy_workqueue(dev->iwarp_wq);
err1:
kfree(dev->sgid_tbl);
return rc;
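The fix above threads a new err_destroy_wq label into the function's unwind chain so the workqueue created for iWARP devices is destroyed when a later allocation fails, keeping teardown in strict reverse order of setup. A stand-alone sketch of that goto-unwind pattern, with illustrative steps:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	void *a = NULL, *b = NULL;
	int rc = 0;

	a = malloc(16);			/* step 1 */
	if (!a) {
		rc = -1;
		goto err;
	}

	b = malloc(16);			/* step 2 */
	if (!b) {
		rc = -1;
		goto err_free_a;	/* undo only what succeeded */
	}

	puts("all steps succeeded");
	free(b);
	free(a);
	return 0;

err_free_a:				/* reverse order of setup */
	free(a);
err:
	return rc;
}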


@@ -956,7 +956,7 @@ nj_release(struct tiger_hw *card)
}
if (card->irq > 0)
free_irq(card->irq, card);
if (card->isac.dch.dev.dev.class)
if (device_is_registered(&card->isac.dch.dev.dev))
mISDN_unregister_device(&card->isac.dch.dev);
for (i = 0; i < 2; i++) {


@@ -233,11 +233,12 @@ mISDN_register_device(struct mISDNdevice *dev,
if (debug & DEBUG_CORE)
printk(KERN_DEBUG "mISDN_register %s %d\n",
dev_name(&dev->dev), dev->id);
dev->dev.class = &mISDN_class;
err = create_stack(dev);
if (err)
goto error1;
dev->dev.class = &mISDN_class;
dev->dev.platform_data = dev;
dev->dev.parent = parent;
dev_set_drvdata(&dev->dev, dev);
@@ -249,8 +250,8 @@ mISDN_register_device(struct mISDNdevice *dev,
error3:
delete_stack(dev);
return err;
error1:
put_device(&dev->dev);
return err;
}


@@ -44,6 +44,8 @@ static void handle_cec_message(struct cros_ec_cec *cros_ec_cec)
uint8_t *cec_message = cros_ec->event_data.data.cec_message;
unsigned int len = cros_ec->event_size;
if (len > CEC_MAX_MSG_SIZE)
len = CEC_MAX_MSG_SIZE;
cros_ec_cec->rx_msg.len = len;
memcpy(cros_ec_cec->rx_msg.msg, cec_message, len);


@@ -115,6 +115,8 @@ static irqreturn_t s5p_cec_irq_handler(int irq, void *priv)
dev_dbg(cec->dev, "Buffer overrun (worker did not process previous message)\n");
cec->rx = STATE_BUSY;
cec->msg.len = status >> 24;
if (cec->msg.len > CEC_MAX_MSG_SIZE)
cec->msg.len = CEC_MAX_MSG_SIZE;
cec->msg.rx_status = CEC_RX_STATUS_OK;
s5p_cec_get_rx_buf(cec, cec->msg.len,
cec->msg.msg);


@@ -6673,7 +6673,7 @@ static int drxk_read_snr(struct dvb_frontend *fe, u16 *snr)
static int drxk_read_ucblocks(struct dvb_frontend *fe, u32 *ucblocks)
{
struct drxk_state *state = fe->demodulator_priv;
u16 err;
u16 err = 0;
dprintk(1, "\n");


@@ -1270,11 +1270,12 @@ static int rkisp1_capture_link_validate(struct media_link *link)
struct rkisp1_capture *cap = video_get_drvdata(vdev);
const struct rkisp1_capture_fmt_cfg *fmt =
rkisp1_find_fmt_cfg(cap, cap->pix.fmt.pixelformat);
struct v4l2_subdev_format sd_fmt;
struct v4l2_subdev_format sd_fmt = {
.which = V4L2_SUBDEV_FORMAT_ACTIVE,
.pad = link->source->index,
};
int ret;
sd_fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
sd_fmt.pad = link->source->index;
ret = v4l2_subdev_call(sd, pad, get_fmt, NULL, &sd_fmt);
if (ret)
return ret;
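The rkisp1 fix above replaces an uninitialized stack struct plus two manual field assignments with a designated initializer, which guarantees that every field not named is zero-initialized before the structure is handed to get_fmt(). A stand-alone sketch of that guarantee; the struct here is an illustrative stand-in, not the real v4l2_subdev_format:

#include <stdio.h>

struct sd_format {
	int which;
	int pad;
	int reserved[4];	/* stand-in for the remaining fields */
};

int main(void)
{
	/* Named fields get their values; everything else is zeroed. */
	struct sd_format fmt = {
		.which = 1,
		.pad   = 3,
	};

	/* prints "1 3 0" -- no stack garbage in the unnamed fields */
	printf("%d %d %d\n", fmt.which, fmt.pad, fmt.reserved[0]);
	return 0;
}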


@@ -275,7 +275,7 @@ static void rkisp1_lsc_config(struct rkisp1_params *params,
RKISP1_CIF_ISP_LSC_XSIZE_01 + i * 4);
/* program x grad tables */
data = RKISP1_CIF_ISP_LSC_SECT_SIZE(arg->x_grad_tbl[i * 2],
data = RKISP1_CIF_ISP_LSC_SECT_GRAD(arg->x_grad_tbl[i * 2],
arg->x_grad_tbl[i * 2 + 1]);
rkisp1_write(params->rkisp1, data,
RKISP1_CIF_ISP_LSC_XGRAD_01 + i * 4);
@@ -287,7 +287,7 @@ static void rkisp1_lsc_config(struct rkisp1_params *params,
RKISP1_CIF_ISP_LSC_YSIZE_01 + i * 4);
/* program y grad tables */
data = RKISP1_CIF_ISP_LSC_SECT_SIZE(arg->y_grad_tbl[i * 2],
data = RKISP1_CIF_ISP_LSC_SECT_GRAD(arg->y_grad_tbl[i * 2],
arg->y_grad_tbl[i * 2 + 1]);
rkisp1_write(params->rkisp1, data,
RKISP1_CIF_ISP_LSC_YGRAD_01 + i * 4);
@@ -751,7 +751,7 @@ static void rkisp1_ie_enable(struct rkisp1_params *params, bool en)
}
}
static void rkisp1_csm_config(struct rkisp1_params *params, bool full_range)
static void rkisp1_csm_config(struct rkisp1_params *params)
{
static const u16 full_range_coeff[] = {
0x0026, 0x004b, 0x000f,
@@ -765,7 +765,7 @@ static void rkisp1_csm_config(struct rkisp1_params *params, bool full_range)
};
unsigned int i;
if (full_range) {
if (params->quantization == V4L2_QUANTIZATION_FULL_RANGE) {
for (i = 0; i < ARRAY_SIZE(full_range_coeff); i++)
rkisp1_write(params->rkisp1, full_range_coeff[i],
RKISP1_CIF_ISP_CC_COEFF_0 + i * 4);
@@ -1235,11 +1235,7 @@ static void rkisp1_params_config_parameter(struct rkisp1_params *params)
rkisp1_param_set_bits(params, RKISP1_CIF_ISP_HIST_PROP,
rkisp1_hst_params_default_config.mode);
/* set the range */
if (params->quantization == V4L2_QUANTIZATION_FULL_RANGE)
rkisp1_csm_config(params, true);
else
rkisp1_csm_config(params, false);
rkisp1_csm_config(params);
spin_lock_irq(&params->config_lock);


@@ -480,7 +480,7 @@
(((v0) & 0xFFF) | (((v1) & 0xFFF) << 12))
#define RKISP1_CIF_ISP_LSC_SECT_SIZE(v0, v1) \
(((v0) & 0xFFF) | (((v1) & 0xFFF) << 16))
#define RKISP1_CIF_ISP_LSC_GRAD_SIZE(v0, v1) \
#define RKISP1_CIF_ISP_LSC_SECT_GRAD(v0, v1) \
(((v0) & 0xFFF) | (((v1) & 0xFFF) << 16))
/* LSC: ISP_LSC_TABLE_SEL */


@@ -510,6 +510,10 @@ static int rkisp1_rsz_init_config(struct v4l2_subdev *sd,
sink_fmt->height = RKISP1_DEFAULT_HEIGHT;
sink_fmt->field = V4L2_FIELD_NONE;
sink_fmt->code = RKISP1_DEF_FMT;
sink_fmt->colorspace = V4L2_COLORSPACE_SRGB;
sink_fmt->xfer_func = V4L2_XFER_FUNC_SRGB;
sink_fmt->ycbcr_enc = V4L2_YCBCR_ENC_601;
sink_fmt->quantization = V4L2_QUANTIZATION_LIM_RANGE;
sink_crop = v4l2_subdev_get_try_crop(sd, sd_state,
RKISP1_RSZ_PAD_SINK);


@@ -233,11 +233,11 @@ static int bcm47xxpart_parse(struct mtd_info *master,
}
/* Read middle of the block */
err = mtd_read(master, offset + 0x8000, 0x4, &bytes_read,
err = mtd_read(master, offset + (blocksize / 2), 0x4, &bytes_read,
(uint8_t *)buf);
if (err && !mtd_is_bitflip(err)) {
pr_err("mtd_read error while parsing (offset: 0x%X): %d\n",
offset, err);
offset + (blocksize / 2), err);
continue;
}


@@ -376,6 +376,17 @@ static struct mdio_driver dsa_loop_drv = {
#define NUM_FIXED_PHYS (DSA_LOOP_NUM_PORTS - 2)
static void dsa_loop_phydevs_unregister(void)
{
unsigned int i;
for (i = 0; i < NUM_FIXED_PHYS; i++)
if (!IS_ERR(phydevs[i])) {
fixed_phy_unregister(phydevs[i]);
phy_device_free(phydevs[i]);
}
}
static int __init dsa_loop_init(void)
{
struct fixed_phy_status status = {
@@ -383,23 +394,23 @@ static int __init dsa_loop_init(void)
.speed = SPEED_100,
.duplex = DUPLEX_FULL,
};
unsigned int i;
unsigned int i, ret;
for (i = 0; i < NUM_FIXED_PHYS; i++)
phydevs[i] = fixed_phy_register(PHY_POLL, &status, NULL);
return mdio_driver_register(&dsa_loop_drv);
ret = mdio_driver_register(&dsa_loop_drv);
if (ret)
dsa_loop_phydevs_unregister();
return ret;
}
module_init(dsa_loop_init);
static void __exit dsa_loop_exit(void)
{
unsigned int i;
mdio_driver_unregister(&dsa_loop_drv);
for (i = 0; i < NUM_FIXED_PHYS; i++)
if (!IS_ERR(phydevs[i]))
fixed_phy_unregister(phydevs[i]);
dsa_loop_phydevs_unregister();
}
module_exit(dsa_loop_exit);


@@ -656,7 +656,7 @@ fec_enet_txq_put_data_tso(struct fec_enet_priv_tx_q *txq, struct sk_buff *skb,
dev_kfree_skb_any(skb);
if (net_ratelimit())
netdev_err(ndev, "Tx DMA memory map failed\n");
return NETDEV_TX_BUSY;
return NETDEV_TX_OK;
}
bdp->cbd_datlen = cpu_to_fec16(size);
@@ -718,7 +718,7 @@ fec_enet_txq_put_hdr_tso(struct fec_enet_priv_tx_q *txq,
dev_kfree_skb_any(skb);
if (net_ratelimit())
netdev_err(ndev, "Tx DMA memory map failed\n");
return NETDEV_TX_BUSY;
return NETDEV_TX_OK;
}
}


@@ -2621,19 +2621,19 @@ static void __ibmvnic_reset(struct work_struct *work)
rwi = get_next_rwi(adapter);
/*
* If there is another reset queued, free the previous rwi
* and process the new reset even if previous reset failed
* (the previous reset could have failed because of a fail
* over for instance, so process the fail over).
*
* If there are no resets queued and the previous reset failed,
* the adapter would be in an undefined state. So retry the
* previous reset as a hard reset.
*
* Else, free the previous rwi and, if there is another reset
* queued, process the new reset even if previous reset failed
* (the previous reset could have failed because of a fail
* over for instance, so process the fail over).
*/
if (rwi)
kfree(tmprwi);
else if (rc)
if (!rwi && rc)
rwi = tmprwi;
else
kfree(tmprwi);
if (rwi && (rwi->reset_reason == VNIC_RESET_FAILOVER ||
rwi->reset_reason == VNIC_RESET_MOBILITY || rc))


@@ -51,7 +51,6 @@ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id
struct stmmac_resources res;
struct device_node *np;
int ret, i, phy_mode;
bool mdio = false;
np = dev_of_node(&pdev->dev);
@@ -69,12 +68,10 @@ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id
if (!plat)
return -ENOMEM;
plat->mdio_node = of_get_child_by_name(np, "mdio");
if (plat->mdio_node) {
dev_err(&pdev->dev, "Found MDIO subnode\n");
mdio = true;
}
dev_info(&pdev->dev, "Found MDIO subnode\n");
if (mdio) {
plat->mdio_bus_data = devm_kzalloc(&pdev->dev,
sizeof(*plat->mdio_bus_data),
GFP_KERNEL);


@@ -577,7 +577,7 @@ int __mdiobus_register(struct mii_bus *bus, struct module *owner)
}
for (i = 0; i < PHY_MAX_ADDR; i++) {
if ((bus->phy_mask & (1 << i)) == 0) {
if ((bus->phy_mask & BIT(i)) == 0) {
struct phy_device *phydev;
phydev = mdiobus_scan(bus, i);
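The one-line change above swaps 1 << i for BIT(i). PHY_MAX_ADDR is 32, so this loop reaches i == 31, and shifting a signed 1 into the sign bit is undefined behavior in C; BIT() expands to an unsigned-long shift, which is well defined. A minimal stand-alone illustration:

#include <stdio.h>

#define BIT(nr) (1UL << (nr))

int main(void)
{
	int i = 31;

	/* int bad = 1 << i;  -- UB: the literal 1 is a signed int, and
	 * shifting a one into the sign bit overflows it. */
	unsigned long ok = BIT(i);	/* well defined: 0x80000000 */

	printf("BIT(%d) = 0x%lx\n", i, ok);
	return 0;
}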


@@ -1445,7 +1445,8 @@ static struct sk_buff *tun_napi_alloc_frags(struct tun_file *tfile,
int err;
int i;
if (it->nr_segs > MAX_SKB_FRAGS + 1)
if (it->nr_segs > MAX_SKB_FRAGS + 1 ||
len > (ETH_MAX_MTU - NET_SKB_PAD - NET_IP_ALIGN))
return ERR_PTR(-EMSGSIZE);
local_bh_disable();


@@ -228,6 +228,10 @@ static void brcmf_fweh_event_worker(struct work_struct *work)
brcmf_fweh_event_name(event->code), event->code,
event->emsg.ifidx, event->emsg.bsscfgidx,
event->emsg.addr);
if (event->emsg.bsscfgidx >= BRCMF_MAX_IFS) {
bphy_err(drvr, "invalid bsscfg index: %u\n", event->emsg.bsscfgidx);
goto event_free;
}
/* convert event message */
emsg_be = &event->emsg;


@@ -249,11 +249,19 @@ static int fdp_nci_close(struct nci_dev *ndev)
static int fdp_nci_send(struct nci_dev *ndev, struct sk_buff *skb)
{
struct fdp_nci_info *info = nci_get_drvdata(ndev);
int ret;
if (atomic_dec_and_test(&info->data_pkt_counter))
info->data_pkt_counter_cb(ndev);
return info->phy_ops->write(info->phy, skb);
ret = info->phy_ops->write(info->phy, skb);
if (ret < 0) {
kfree_skb(skb);
return ret;
}
consume_skb(skb);
return 0;
}
static int fdp_nci_request_firmware(struct nci_dev *ndev)


@@ -132,12 +132,17 @@ static int nfcmrvl_i2c_nci_send(struct nfcmrvl_private *priv,
ret = -EREMOTEIO;
} else
ret = 0;
kfree_skb(skb);
}
if (ret) {
kfree_skb(skb);
return ret;
}
consume_skb(skb);
return 0;
}
static void nfcmrvl_i2c_nci_update_config(struct nfcmrvl_private *priv,
const void *param)
{


@@ -77,12 +77,15 @@ static int nxp_nci_send(struct nci_dev *ndev, struct sk_buff *skb)
return -EINVAL;
r = info->phy_ops->write(info->phy_id, skb);
if (r < 0)
if (r < 0) {
kfree_skb(skb);
return r;
}
consume_skb(skb);
return 0;
}
static const struct nci_ops nxp_nci_ops = {
.open = nxp_nci_open,
.close = nxp_nci_close,


@@ -110,13 +110,17 @@ static int s3fwrn5_nci_send(struct nci_dev *ndev, struct sk_buff *skb)
}
ret = s3fwrn5_write(info, skb);
if (ret < 0)
if (ret < 0) {
kfree_skb(skb);
mutex_unlock(&info->mutex);
return ret;
}
consume_skb(skb);
mutex_unlock(&info->mutex);
return 0;
}
static int s3fwrn5_nci_post_setup(struct nci_dev *ndev)
{
struct s3fwrn5_info *info = nci_get_drvdata(ndev);


@@ -875,6 +875,7 @@ int iosapic_serial_irq(struct parisc_device *dev)
return vi->txn_irq;
}
EXPORT_SYMBOL(iosapic_serial_irq);
#endif


@@ -792,9 +792,8 @@ static int __unset_online(struct device *dev, void *data)
{
struct idset *set = data;
struct subchannel *sch = to_subchannel(dev);
struct ccw_device *cdev = sch_get_cdev(sch);
if (cdev && cdev->online)
if (sch->st == SUBCHANNEL_TYPE_IO && sch->config.ena)
idset_sch_del(set, sch->schid);
return 0;


@@ -1558,10 +1558,7 @@ struct lpfc_hba {
u32 cgn_acqe_cnt;
/* RX monitor handling for CMF */
struct rxtable_entry *rxtable; /* RX_monitor information */
atomic_t rxtable_idx_head;
#define LPFC_RXMONITOR_TABLE_IN_USE (LPFC_MAX_RXMONITOR_ENTRY + 73)
atomic_t rxtable_idx_tail;
struct lpfc_rx_info_monitor *rx_monitor;
atomic_t rx_max_read_cnt; /* Maximum read bytes */
uint64_t rx_block_cnt;
@@ -1610,7 +1607,8 @@ struct lpfc_hba {
#define LPFC_MAX_RXMONITOR_ENTRY 800
#define LPFC_MAX_RXMONITOR_DUMP 32
struct rxtable_entry {
struct rx_info_entry {
uint64_t cmf_bytes; /* Total no of read bytes for CMF_SYNC_WQE */
uint64_t total_bytes; /* Total no of read bytes requested */
uint64_t rcv_bytes; /* Total no of read bytes completed */
uint64_t avg_io_size;
@@ -1624,6 +1622,13 @@ struct rxtable_entry {
uint32_t timer_interval;
};
struct lpfc_rx_info_monitor {
struct rx_info_entry *ring; /* info organized in a circular buffer */
u32 head_idx, tail_idx; /* index to head/tail of ring */
spinlock_t lock; /* spinlock for ring */
u32 entries; /* number of entries / size of the ring */
};
static inline struct Scsi_Host *
lpfc_shost_from_vport(struct lpfc_vport *vport)
{


@@ -90,6 +90,14 @@ void lpfc_cgn_dump_rxmonitor(struct lpfc_hba *phba);
void lpfc_cgn_update_stat(struct lpfc_hba *phba, uint32_t dtag);
void lpfc_unblock_requests(struct lpfc_hba *phba);
void lpfc_block_requests(struct lpfc_hba *phba);
int lpfc_rx_monitor_create_ring(struct lpfc_rx_info_monitor *rx_monitor,
u32 entries);
void lpfc_rx_monitor_destroy_ring(struct lpfc_rx_info_monitor *rx_monitor);
void lpfc_rx_monitor_record(struct lpfc_rx_info_monitor *rx_monitor,
struct rx_info_entry *entry);
u32 lpfc_rx_monitor_report(struct lpfc_hba *phba,
struct lpfc_rx_info_monitor *rx_monitor, char *buf,
u32 buf_len, u32 max_read_entries);
void lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_mbx_cmpl_reg_login(struct lpfc_hba *, LPFC_MBOXQ_t *);


@@ -5520,7 +5520,7 @@ lpfc_rx_monitor_open(struct inode *inode, struct file *file)
if (!debug)
goto out;
debug->buffer = vmalloc(MAX_DEBUGFS_RX_TABLE_SIZE);
debug->buffer = vmalloc(MAX_DEBUGFS_RX_INFO_SIZE);
if (!debug->buffer) {
kfree(debug);
goto out;
@@ -5541,55 +5541,18 @@ lpfc_rx_monitor_read(struct file *file, char __user *buf, size_t nbytes,
struct lpfc_rx_monitor_debug *debug = file->private_data;
struct lpfc_hba *phba = (struct lpfc_hba *)debug->i_private;
char *buffer = debug->buffer;
struct rxtable_entry *entry;
int i, len = 0, head, tail, last, start;
head = atomic_read(&phba->rxtable_idx_head);
while (head == LPFC_RXMONITOR_TABLE_IN_USE) {
/* Table is getting updated */
msleep(20);
head = atomic_read(&phba->rxtable_idx_head);
if (!phba->rx_monitor) {
scnprintf(buffer, MAX_DEBUGFS_RX_INFO_SIZE,
"Rx Monitor Info is empty.\n");
} else {
lpfc_rx_monitor_report(phba, phba->rx_monitor, buffer,
MAX_DEBUGFS_RX_INFO_SIZE,
LPFC_MAX_RXMONITOR_ENTRY);
}
tail = atomic_xchg(&phba->rxtable_idx_tail, head);
if (!phba->rxtable || head == tail) {
len += scnprintf(buffer + len, MAX_DEBUGFS_RX_TABLE_SIZE - len,
"Rxtable is empty\n");
goto out;
}
last = (head > tail) ? head : LPFC_MAX_RXMONITOR_ENTRY;
start = tail;
len += scnprintf(buffer + len, MAX_DEBUGFS_RX_TABLE_SIZE - len,
" MaxBPI\t Total Data Cmd Total Data Cmpl "
" Latency(us) Avg IO Size\tMax IO Size IO cnt "
"Info BWutil(ms)\n");
get_table:
for (i = start; i < last; i++) {
entry = &phba->rxtable[i];
len += scnprintf(buffer + len, MAX_DEBUGFS_RX_TABLE_SIZE - len,
"%3d:%12lld %12lld\t%12lld\t"
"%8lldus\t%8lld\t%10lld "
"%8d %2d %2d(%2d)\n",
i, entry->max_bytes_per_interval,
entry->total_bytes,
entry->rcv_bytes,
entry->avg_io_latency,
entry->avg_io_size,
entry->max_read_cnt,
entry->io_cnt,
entry->cmf_info,
entry->timer_utilization,
entry->timer_interval);
}
if (head != last) {
start = 0;
last = head;
goto get_table;
}
out:
return simple_read_from_buffer(buf, nbytes, ppos, buffer, len);
return simple_read_from_buffer(buf, nbytes, ppos, buffer,
strlen(buffer));
}
static int


@@ -282,7 +282,7 @@ struct lpfc_idiag {
void *ptr_private;
};
#define MAX_DEBUGFS_RX_TABLE_SIZE (100 * LPFC_MAX_RXMONITOR_ENTRY)
#define MAX_DEBUGFS_RX_INFO_SIZE (128 * LPFC_MAX_RXMONITOR_ENTRY)
struct lpfc_rx_monitor_debug {
char *i_private;
char *buffer;


@@ -5428,38 +5428,12 @@ lpfc_async_link_speed_to_read_top(struct lpfc_hba *phba, uint8_t speed_code)
void
lpfc_cgn_dump_rxmonitor(struct lpfc_hba *phba)
{
struct rxtable_entry *entry;
int cnt = 0, head, tail, last, start;
head = atomic_read(&phba->rxtable_idx_head);
tail = atomic_read(&phba->rxtable_idx_tail);
if (!phba->rxtable || head == tail) {
lpfc_printf_log(phba, KERN_ERR, LOG_CGN_MGMT,
"4411 Rxtable is empty\n");
return;
}
last = tail;
start = head;
/* Display the last LPFC_MAX_RXMONITOR_DUMP entries from the rxtable */
while (start != last) {
if (start)
start--;
else
start = LPFC_MAX_RXMONITOR_ENTRY - 1;
entry = &phba->rxtable[start];
lpfc_printf_log(phba, KERN_INFO, LOG_CGN_MGMT,
"4410 %02d: MBPI %lld Xmit %lld Cmpl %lld "
"Lat %lld ASz %lld Info %02d BWUtil %d "
"Int %d slot %d\n",
cnt, entry->max_bytes_per_interval,
entry->total_bytes, entry->rcv_bytes,
entry->avg_io_latency, entry->avg_io_size,
entry->cmf_info, entry->timer_utilization,
entry->timer_interval, start);
cnt++;
if (cnt >= LPFC_MAX_RXMONITOR_DUMP)
return;
}
if (!phba->rx_monitor) {
lpfc_printf_log(phba, KERN_ERR, LOG_CGN_MGMT,
"4411 Rx Monitor Info is empty.\n");
} else {
lpfc_rx_monitor_report(phba, phba->rx_monitor, NULL, 0,
LPFC_MAX_RXMONITOR_DUMP);
}
}
@@ -5866,11 +5840,10 @@ lpfc_cmf_timer(struct hrtimer *timer)
{
struct lpfc_hba *phba = container_of(timer, struct lpfc_hba,
cmf_timer);
struct rxtable_entry *entry;
struct rx_info_entry entry;
uint32_t io_cnt;
uint32_t head, tail;
uint32_t busy, max_read;
uint64_t total, rcv, lat, mbpi;
uint64_t total, rcv, lat, mbpi, extra, cnt;
int timer_interval = LPFC_CMF_INTERVAL;
uint32_t ms;
struct lpfc_cgn_stat *cgs;
@@ -5937,12 +5910,27 @@ lpfc_cmf_timer(struct hrtimer *timer)
phba->hba_flag & HBA_SETUP) {
mbpi = phba->cmf_last_sync_bw;
phba->cmf_last_sync_bw = 0;
lpfc_issue_cmf_sync_wqe(phba, LPFC_CMF_INTERVAL, total);
extra = 0;
/* Calculate any extra bytes needed to account for the
* timer accuracy. If we are less than LPFC_CMF_INTERVAL
* calculate the adjustment needed for total to reflect
* a full LPFC_CMF_INTERVAL.
*/
if (ms && ms < LPFC_CMF_INTERVAL) {
cnt = div_u64(total, ms); /* bytes per ms */
cnt *= LPFC_CMF_INTERVAL; /* what total should be */
if (cnt > mbpi)
cnt = mbpi;
extra = cnt - total;
}
lpfc_issue_cmf_sync_wqe(phba, LPFC_CMF_INTERVAL, total + extra);
} else {
/* For Monitor mode or link down we want mbpi
* to be the full link speed
*/
mbpi = phba->cmf_link_byte_count;
extra = 0;
}
phba->cmf_timer_cnt++;
@@ -5968,39 +5956,30 @@ lpfc_cmf_timer(struct hrtimer *timer)
}
/* Save rxmonitor information for debug */
if (phba->rxtable) {
head = atomic_xchg(&phba->rxtable_idx_head,
LPFC_RXMONITOR_TABLE_IN_USE);
entry = &phba->rxtable[head];
entry->total_bytes = total;
entry->rcv_bytes = rcv;
entry->cmf_busy = busy;
entry->cmf_info = phba->cmf_active_info;
if (phba->rx_monitor) {
entry.total_bytes = total;
entry.cmf_bytes = total + extra;
entry.rcv_bytes = rcv;
entry.cmf_busy = busy;
entry.cmf_info = phba->cmf_active_info;
if (io_cnt) {
entry->avg_io_latency = div_u64(lat, io_cnt);
entry->avg_io_size = div_u64(rcv, io_cnt);
entry.avg_io_latency = div_u64(lat, io_cnt);
entry.avg_io_size = div_u64(rcv, io_cnt);
} else {
entry->avg_io_latency = 0;
entry->avg_io_size = 0;
entry.avg_io_latency = 0;
entry.avg_io_size = 0;
}
entry->max_read_cnt = max_read;
entry->io_cnt = io_cnt;
entry->max_bytes_per_interval = mbpi;
entry.max_read_cnt = max_read;
entry.io_cnt = io_cnt;
entry.max_bytes_per_interval = mbpi;
if (phba->cmf_active_mode == LPFC_CFG_MANAGED)
entry->timer_utilization = phba->cmf_last_ts;
entry.timer_utilization = phba->cmf_last_ts;
else
entry->timer_utilization = ms;
entry->timer_interval = ms;
entry.timer_utilization = ms;
entry.timer_interval = ms;
phba->cmf_last_ts = 0;
/* Increment rxtable index */
head = (head + 1) % LPFC_MAX_RXMONITOR_ENTRY;
tail = atomic_read(&phba->rxtable_idx_tail);
if (head == tail) {
tail = (tail + 1) % LPFC_MAX_RXMONITOR_ENTRY;
atomic_set(&phba->rxtable_idx_tail, tail);
}
atomic_set(&phba->rxtable_idx_head, head);
lpfc_rx_monitor_record(phba->rx_monitor, &entry);
}
if (phba->cmf_active_mode == LPFC_CFG_MONITOR) {
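The extra computation added above extrapolates the bytes observed in a short timer interval to what a full LPFC_CMF_INTERVAL would have seen, capped at mbpi. A worked example with illustrative numbers (the real interval length and cap are not assumed here) that mirrors the div_u64 logic in the hunk:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint64_t interval = 100;	/* hypothetical timer interval (ms) */
	uint64_t ms = 90;		/* time the interval actually took  */
	uint64_t total = 900000;	/* bytes seen during those 90 ms    */
	uint64_t mbpi = 1200000;	/* cap: max bytes per interval      */
	uint64_t extra = 0;

	if (ms && ms < interval) {
		uint64_t cnt = total / ms;	/* bytes per ms: 10000 */

		cnt *= interval;	/* full-interval estimate: 1000000 */
		if (cnt > mbpi)
			cnt = mbpi;	/* never extrapolate past the cap  */
		extra = cnt - total;	/* 100000                          */
	}
	printf("total+extra = %llu\n",
	       (unsigned long long)(total + extra));	/* 1000000 */
	return 0;
}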


@@ -344,9 +344,12 @@ lpfc_mem_free_all(struct lpfc_hba *phba)
phba->cgn_i = NULL;
}
/* Free RX table */
kfree(phba->rxtable);
phba->rxtable = NULL;
/* Free RX Monitor */
if (phba->rx_monitor) {
lpfc_rx_monitor_destroy_ring(phba->rx_monitor);
kfree(phba->rx_monitor);
phba->rx_monitor = NULL;
}
/* Free the iocb lookup array */
kfree(psli->iocbq_lookup);


@@ -7878,6 +7878,172 @@ static void lpfc_sli4_dip(struct lpfc_hba *phba)
}
}
/**
* lpfc_rx_monitor_create_ring - Initialize ring buffer for rx_monitor
* @rx_monitor: Pointer to lpfc_rx_info_monitor object
* @entries: Number of rx_info_entry objects to allocate in ring
*
* Return:
* 0 - Success
* -ENOMEM - Failure to kmalloc
**/
int lpfc_rx_monitor_create_ring(struct lpfc_rx_info_monitor *rx_monitor,
u32 entries)
{
rx_monitor->ring = kmalloc_array(entries, sizeof(struct rx_info_entry),
GFP_KERNEL);
if (!rx_monitor->ring)
return -ENOMEM;
rx_monitor->head_idx = 0;
rx_monitor->tail_idx = 0;
spin_lock_init(&rx_monitor->lock);
rx_monitor->entries = entries;
return 0;
}
/**
* lpfc_rx_monitor_destroy_ring - Free ring buffer for rx_monitor
* @rx_monitor: Pointer to lpfc_rx_info_monitor object
**/
void lpfc_rx_monitor_destroy_ring(struct lpfc_rx_info_monitor *rx_monitor)
{
spin_lock(&rx_monitor->lock);
kfree(rx_monitor->ring);
rx_monitor->ring = NULL;
rx_monitor->entries = 0;
rx_monitor->head_idx = 0;
rx_monitor->tail_idx = 0;
spin_unlock(&rx_monitor->lock);
}
/**
* lpfc_rx_monitor_record - Insert an entry into rx_monitor's ring
* @rx_monitor: Pointer to lpfc_rx_info_monitor object
* @entry: Pointer to rx_info_entry
*
* Used to insert an rx_info_entry into rx_monitor's ring. Note that this is a
* deep copy of the rx_info_entry, not a shallow copy of the rx_info_entry ptr.
*
* This is called from lpfc_cmf_timer, which is in timer/softirq context.
*
* In cases of old data overflow, we do a best effort of FIFO order.
**/
void lpfc_rx_monitor_record(struct lpfc_rx_info_monitor *rx_monitor,
struct rx_info_entry *entry)
{
struct rx_info_entry *ring = rx_monitor->ring;
u32 *head_idx = &rx_monitor->head_idx;
u32 *tail_idx = &rx_monitor->tail_idx;
spinlock_t *ring_lock = &rx_monitor->lock;
u32 ring_size = rx_monitor->entries;
spin_lock(ring_lock);
memcpy(&ring[*tail_idx], entry, sizeof(*entry));
*tail_idx = (*tail_idx + 1) % ring_size;
/* Best effort of FIFO saved data */
if (*tail_idx == *head_idx)
*head_idx = (*head_idx + 1) % ring_size;
spin_unlock(ring_lock);
}
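lpfc_rx_monitor_record() implements a writer-always-wins ring: inserts never fail, and when the tail catches the head the oldest entry is overwritten. A stand-alone demo of the same index arithmetic (the ring size is illustrative; one slot is kept empty so full can be told apart from empty):

#include <stdio.h>

#define RING 4

int main(void)
{
	int ring[RING], head = 0, tail = 0;

	for (int v = 1; v <= 6; v++) {		/* insert six values    */
		ring[tail] = v;
		tail = (tail + 1) % RING;
		if (tail == head)		/* full: drop the oldest */
			head = (head + 1) % RING;
	}
	for (int i = head; i != tail; i = (i + 1) % RING)
		printf("%d ", ring[i]);		/* prints: 4 5 6 */
	printf("\n");
	return 0;
}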
/**
* lpfc_rx_monitor_report - Read out rx_monitor's ring
* @phba: Pointer to lpfc_hba object
* @rx_monitor: Pointer to lpfc_rx_info_monitor object
* @buf: Pointer to char buffer that will contain rx monitor info data
* @buf_len: Length of buf, including the null char
* @max_read_entries: Maximum number of entries to read out of ring
*
* Used to dump/read what's in rx_monitor's ring buffer.
*
* If buf is NULL || buf_len == 0, then it is implied that we want to log the
* information to kmsg instead of filling out buf.
*
* Return:
* Number of entries read out of the ring
**/
u32 lpfc_rx_monitor_report(struct lpfc_hba *phba,
struct lpfc_rx_info_monitor *rx_monitor, char *buf,
u32 buf_len, u32 max_read_entries)
{
struct rx_info_entry *ring = rx_monitor->ring;
struct rx_info_entry *entry;
u32 *head_idx = &rx_monitor->head_idx;
u32 *tail_idx = &rx_monitor->tail_idx;
spinlock_t *ring_lock = &rx_monitor->lock;
u32 ring_size = rx_monitor->entries;
u32 cnt = 0;
char tmp[DBG_LOG_STR_SZ] = {0};
bool log_to_kmsg = (!buf || !buf_len) ? true : false;
if (!log_to_kmsg) {
/* clear the buffer to be sure */
memset(buf, 0, buf_len);
scnprintf(buf, buf_len, "\t%-16s%-16s%-16s%-16s%-8s%-8s%-8s"
"%-8s%-8s%-8s%-16s\n",
"MaxBPI", "Tot_Data_CMF",
"Tot_Data_Cmd", "Tot_Data_Cmpl",
"Lat(us)", "Avg_IO", "Max_IO", "Bsy",
"IO_cnt", "Info", "BWutil(ms)");
}
/* Needs to be _bh because record is called from timer interrupt
* context
*/
spin_lock_bh(ring_lock);
while (*head_idx != *tail_idx) {
entry = &ring[*head_idx];
/* Read out this entry's data. */
if (!log_to_kmsg) {
/* If !log_to_kmsg, then store to buf. */
scnprintf(tmp, sizeof(tmp),
"%03d:\t%-16llu%-16llu%-16llu%-16llu%-8llu"
"%-8llu%-8llu%-8u%-8u%-8u%u(%u)\n",
*head_idx, entry->max_bytes_per_interval,
entry->cmf_bytes, entry->total_bytes,
entry->rcv_bytes, entry->avg_io_latency,
entry->avg_io_size, entry->max_read_cnt,
entry->cmf_busy, entry->io_cnt,
entry->cmf_info, entry->timer_utilization,
entry->timer_interval);
/* Check for buffer overflow */
if ((strlen(buf) + strlen(tmp)) >= buf_len)
break;
/* Append entry's data to buffer */
strlcat(buf, tmp, buf_len);
} else {
lpfc_printf_log(phba, KERN_INFO, LOG_CGN_MGMT,
"4410 %02u: MBPI %llu Xmit %llu "
"Cmpl %llu Lat %llu ASz %llu Info %02u "
"BWUtil %u Int %u slot %u\n",
cnt, entry->max_bytes_per_interval,
entry->total_bytes, entry->rcv_bytes,
entry->avg_io_latency,
entry->avg_io_size, entry->cmf_info,
entry->timer_utilization,
entry->timer_interval, *head_idx);
}
*head_idx = (*head_idx + 1) % ring_size;
/* Don't feed more than max_read_entries */
cnt++;
if (cnt >= max_read_entries)
break;
}
spin_unlock_bh(ring_lock);
return cnt;
}
/**
* lpfc_cmf_setup - Initialize idle_stat tracking
* @phba: Pointer to HBA context object.
@@ -8069,19 +8235,29 @@ no_cmf:
phba->cmf_interval_rate = LPFC_CMF_INTERVAL;
/* Allocate RX Monitor Buffer */
if (!phba->rxtable) {
phba->rxtable = kmalloc_array(LPFC_MAX_RXMONITOR_ENTRY,
sizeof(struct rxtable_entry),
if (!phba->rx_monitor) {
phba->rx_monitor = kzalloc(sizeof(*phba->rx_monitor),
GFP_KERNEL);
if (!phba->rxtable) {
if (!phba->rx_monitor) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"2644 Failed to alloc memory "
"for RX Monitor Buffer\n");
return -ENOMEM;
}
/* Instruct the rx_monitor object to instantiate its ring */
if (lpfc_rx_monitor_create_ring(phba->rx_monitor,
LPFC_MAX_RXMONITOR_ENTRY)) {
kfree(phba->rx_monitor);
phba->rx_monitor = NULL;
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"2645 Failed to alloc memory "
"for RX Monitor's Ring\n");
return -ENOMEM;
}
atomic_set(&phba->rxtable_idx_head, 0);
atomic_set(&phba->rxtable_idx_tail, 0);
}
return 0;
}


@@ -816,6 +816,14 @@ store_state_field(struct device *dev, struct device_attribute *attr,
}
mutex_lock(&sdev->state_mutex);
switch (sdev->sdev_state) {
case SDEV_RUNNING:
case SDEV_OFFLINE:
break;
default:
mutex_unlock(&sdev->state_mutex);
return -EINVAL;
}
if (sdev->sdev_state == SDEV_RUNNING && state == SDEV_RUNNING) {
ret = 0;
} else {


@@ -1105,6 +1105,7 @@ static int vdec_probe(struct platform_device *pdev)
err_vdev_release:
video_device_release(vdev);
v4l2_device_unregister(&core->v4l2_dev);
return ret;
}
@@ -1113,6 +1114,7 @@ static int vdec_remove(struct platform_device *pdev)
struct amvdec_core *core = platform_get_drvdata(pdev);
video_unregister_device(core->vdev_dec);
v4l2_device_unregister(&core->v4l2_dev);
return 0;
}


@@ -334,6 +334,9 @@ tee_ioctl_shm_register(struct tee_context *ctx,
if (data.flags)
return -EINVAL;
if (!access_ok((void __user *)(unsigned long)data.addr, data.length))
return -EFAULT;
shm = tee_shm_register(ctx, data.addr, data.length,
TEE_SHM_DMA_BUF | TEE_SHM_USER_MAPPED);
if (IS_ERR(shm))


@@ -223,9 +223,6 @@ struct tee_shm *tee_shm_register(struct tee_context *ctx, unsigned long addr,
goto err;
}
if (!access_ok((void __user *)addr, length))
return ERR_PTR(-EFAULT);
mutex_lock(&teedev->mutex);
shm->id = idr_alloc(&teedev->idr, shm, 1, 0, GFP_KERNEL);
mutex_unlock(&teedev->mutex);


@@ -118,7 +118,7 @@ config SERIAL_8250_CONSOLE
config SERIAL_8250_GSC
tristate
depends on SERIAL_8250 && GSC
depends on SERIAL_8250 && PARISC
default SERIAL_8250
config SERIAL_8250_DMA


@@ -591,6 +591,11 @@ static int ar933x_config_rs485(struct uart_port *port,
dev_err(port->dev, "RS485 needs rts-gpio\n");
return 1;
}
if (rs485conf->flags & SER_RS485_ENABLED)
gpiod_set_value(up->rts_gpiod,
!!(rs485conf->flags & SER_RS485_RTS_AFTER_SEND));
port->rs485 = *rs485conf;
return 0;
}


@@ -1041,6 +1041,48 @@ stifb_copyarea(struct fb_info *info, const struct fb_copyarea *area)
SETUP_FB(fb);
}
#define ARTIST_VRAM_SIZE 0x000804
#define ARTIST_VRAM_SRC 0x000808
#define ARTIST_VRAM_SIZE_TRIGGER_WINFILL 0x000a04
#define ARTIST_VRAM_DEST_TRIGGER_BLOCKMOVE 0x000b00
#define ARTIST_SRC_BM_ACCESS 0x018008
#define ARTIST_FGCOLOR 0x018010
#define ARTIST_BGCOLOR 0x018014
#define ARTIST_BITMAP_OP 0x01801c
static void
stifb_fillrect(struct fb_info *info, const struct fb_fillrect *rect)
{
struct stifb_info *fb = container_of(info, struct stifb_info, info);
if (rect->rop != ROP_COPY ||
(fb->id == S9000_ID_HCRX && fb->info.var.bits_per_pixel == 32))
return cfb_fillrect(info, rect);
SETUP_HW(fb);
if (fb->info.var.bits_per_pixel == 32) {
WRITE_WORD(0xBBA0A000, fb, REG_10);
NGLE_REALLY_SET_IMAGE_PLANEMASK(fb, 0xffffffff);
} else {
WRITE_WORD(fb->id == S9000_ID_HCRX ? 0x13a02000 : 0x13a01000, fb, REG_10);
NGLE_REALLY_SET_IMAGE_PLANEMASK(fb, 0xff);
}
WRITE_WORD(0x03000300, fb, ARTIST_BITMAP_OP);
WRITE_WORD(0x2ea01000, fb, ARTIST_SRC_BM_ACCESS);
NGLE_QUICK_SET_DST_BM_ACCESS(fb, 0x2ea01000);
NGLE_REALLY_SET_IMAGE_FG_COLOR(fb, rect->color);
WRITE_WORD(0, fb, ARTIST_BGCOLOR);
NGLE_SET_DSTXY(fb, (rect->dx << 16) | (rect->dy));
SET_LENXY_START_RECFILL(fb, (rect->width << 16) | (rect->height));
SETUP_FB(fb);
}
static void __init
stifb_init_display(struct stifb_info *fb)
{
@@ -1105,7 +1147,7 @@ static const struct fb_ops stifb_ops = {
.owner = THIS_MODULE,
.fb_setcolreg = stifb_setcolreg,
.fb_blank = stifb_blank,
.fb_fillrect = cfb_fillrect,
.fb_fillrect = stifb_fillrect,
.fb_copyarea = stifb_copyarea,
.fb_imageblit = cfb_imageblit,
};
@@ -1297,7 +1339,7 @@ static int __init stifb_init_fb(struct sti_struct *sti, int bpp_pref)
goto out_err0;
}
info->screen_size = fix->smem_len;
info->flags = FBINFO_DEFAULT | FBINFO_HWACCEL_COPYAREA;
info->flags = FBINFO_HWACCEL_COPYAREA | FBINFO_HWACCEL_FILLRECT;
info->pseudo_palette = &fb->pseudo_palette;
/* This has to be done !!! */


@@ -289,8 +289,10 @@ static void prelim_release(struct preftree *preftree)
struct prelim_ref *ref, *next_ref;
rbtree_postorder_for_each_entry_safe(ref, next_ref,
&preftree->root.rb_root, rbnode)
&preftree->root.rb_root, rbnode) {
free_inode_elem_list(ref->inode_list);
free_pref(ref);
}
preftree->root = RB_ROOT_CACHED;
preftree->count = 0;
@@ -648,6 +650,18 @@ unode_aux_to_inode_list(struct ulist_node *node)
return (struct extent_inode_elem *)(uintptr_t)node->aux;
}
static void free_leaf_list(struct ulist *ulist)
{
struct ulist_node *node;
struct ulist_iterator uiter;
ULIST_ITER_INIT(&uiter);
while ((node = ulist_next(ulist, &uiter)))
free_inode_elem_list(unode_aux_to_inode_list(node));
ulist_free(ulist);
}
/*
* We maintain three separate rbtrees: one for direct refs, one for
* indirect refs which have a key, and one for indirect refs which do not
@@ -762,7 +776,11 @@ static int resolve_indirect_refs(struct btrfs_fs_info *fs_info,
cond_resched();
}
out:
ulist_free(parents);
/*
* We may have inode lists attached to refs in the parents ulist, so we
* must free them before freeing the ulist and its refs.
*/
free_leaf_list(parents);
return ret;
}
@@ -1371,6 +1389,12 @@ again:
if (ret < 0)
goto out;
ref->inode_list = eie;
/*
* We transferred the list ownership to the ref,
* so set to NULL to avoid a double free in case
* an error happens after this.
*/
eie = NULL;
}
ret = ulist_add_merge_ptr(refs, ref->parent,
ref->inode_list,
@@ -1396,6 +1420,14 @@ again:
eie->next = ref->inode_list;
}
eie = NULL;
/*
* We have transferred the inode list ownership from
* this ref to the ref we added to the 'refs' ulist.
* So set this ref's inode list to NULL to avoid
* use-after-free when our caller uses it or double
* frees in case an error happens before we return.
*/
ref->inode_list = NULL;
}
cond_resched();
}
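Both comments added above describe the same discipline: once the inode list's ownership has been transferred into another structure, the transferring pointer is set to NULL so no later error path can free it a second time. A stand-alone sketch of the idiom (types and names are illustrative):

#include <stdlib.h>
#include <stdio.h>

struct node {
	void *payload;
};

/* Takes ownership of *payload and clears the caller's copy. */
static void attach(struct node *n, void **payload)
{
	n->payload = *payload;
	*payload = NULL;	/* the caller no longer owns it */
}

int main(void)
{
	void *list = malloc(16);
	struct node n = { 0 };

	attach(&n, &list);

	free(list);		/* no-op: the caller's pointer was cleared */
	free(n.payload);	/* the single real free */
	printf("no double free\n");
	return 0;
}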
@@ -1412,24 +1444,6 @@ out:
return ret;
}
static void free_leaf_list(struct ulist *blocks)
{
struct ulist_node *node = NULL;
struct extent_inode_elem *eie;
struct ulist_iterator uiter;
ULIST_ITER_INIT(&uiter);
while ((node = ulist_next(blocks, &uiter))) {
if (!node->aux)
continue;
eie = unode_aux_to_inode_list(node);
free_inode_elem_list(eie);
node->aux = 0;
}
ulist_free(blocks);
}
/*
* Finds all leafs with a reference to the specified combination of bytenr and
* offset. key_list_head will point to a list of corresponding keys (caller must


@@ -58,7 +58,7 @@ static int btrfs_encode_fh(struct inode *inode, u32 *fh, int *max_len,
}
struct dentry *btrfs_get_dentry(struct super_block *sb, u64 objectid,
u64 root_objectid, u32 generation,
u64 root_objectid, u64 generation,
int check_generation)
{
struct btrfs_fs_info *fs_info = btrfs_sb(sb);


@@ -19,7 +19,7 @@ struct btrfs_fid {
} __attribute__ ((packed));
struct dentry *btrfs_get_dentry(struct super_block *sb, u64 objectid,
u64 root_objectid, u32 generation,
u64 root_objectid, u64 generation,
int check_generation);
struct dentry *btrfs_get_parent(struct dentry *child);


@@ -3307,21 +3307,22 @@ void btrfs_free_tree_block(struct btrfs_trans_handle *trans,
}
/*
* If this is a leaf and there are tree mod log users, we may
* have recorded mod log operations that point to this leaf.
* So we must make sure no one reuses this leaf's extent before
* mod log operations are applied to a node, otherwise after
* rewinding a node using the mod log operations we get an
* inconsistent btree, as the leaf's extent may now be used as
* a node or leaf for another different btree.
* If there are tree mod log users we may have recorded mod log
* operations for this node. If we re-allocate this node we
* could replay operations on this node that happened when it
* existed in a completely different root. For example if it
* was part of root A, then was reallocated to root B, and we
* are doing a btrfs_old_search_slot(root b), we could replay
* operations that happened when the block was part of root A,
* giving us an inconsistent view of the btree.
*
* We are safe from races here because at this point no other
* node or root points to this extent buffer, so if after this
* check a new tree mod log user joins, it will not be able to
* find a node pointing to this leaf and record operations that
* point to this leaf.
* check a new tree mod log user joins we will not have an
* existing log of operations on this node that we have to
* contend with.
*/
if (btrfs_header_level(buf) == 0 &&
test_bit(BTRFS_FS_TREE_MOD_LOG_USERS, &fs_info->flags))
if (test_bit(BTRFS_FS_TREE_MOD_LOG_USERS, &fs_info->flags))
must_pin = true;
if (must_pin || btrfs_is_zoned(fs_info)) {


@@ -1906,7 +1906,6 @@ static ssize_t check_direct_IO(struct btrfs_fs_info *fs_info,
static ssize_t btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from)
{
const bool is_sync_write = (iocb->ki_flags & IOCB_DSYNC);
struct file *file = iocb->ki_filp;
struct inode *inode = file_inode(file);
struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
@@ -1917,6 +1916,7 @@ static ssize_t btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from)
loff_t endbyte;
ssize_t err;
unsigned int ilock_flags = 0;
struct iomap_dio *dio;
if (iocb->ki_flags & IOCB_NOWAIT)
ilock_flags |= BTRFS_ILOCK_TRY;
@@ -1959,15 +1959,6 @@ relock:
goto buffered;
}
/*
* We remove IOCB_DSYNC so that we don't deadlock when iomap_dio_rw()
* calls generic_write_sync() (through iomap_dio_complete()), because
* that results in calling fsync (btrfs_sync_file()) which will try to
* lock the inode in exclusive/write mode.
*/
if (is_sync_write)
iocb->ki_flags &= ~IOCB_DSYNC;
/*
* The iov_iter can be mapped to the same file range we are writing to.
* If that's the case, then we will deadlock in the iomap code, because
@@ -1986,12 +1977,23 @@ relock:
* So here we disable page faults in the iov_iter and then retry if we
* got -EFAULT, faulting in the pages before the retry.
*/
again:
from->nofault = true;
err = iomap_dio_rw(iocb, from, &btrfs_dio_iomap_ops, &btrfs_dio_ops,
dio = __iomap_dio_rw(iocb, from, &btrfs_dio_iomap_ops, &btrfs_dio_ops,
IOMAP_DIO_PARTIAL, written);
from->nofault = false;
/*
* iomap_dio_complete() will call btrfs_sync_file() if we have a dsync
* iocb, and that needs to lock the inode. So unlock it before calling
* iomap_dio_complete() to avoid a deadlock.
*/
btrfs_inode_unlock(inode, ilock_flags);
if (IS_ERR_OR_NULL(dio))
err = PTR_ERR_OR_ZERO(dio);
else
err = iomap_dio_complete(dio);
/* No increment (+=) because iomap returns a cumulative value. */
if (err > 0)
written = err;
@@ -2017,19 +2019,10 @@ again:
} else {
fault_in_iov_iter_readable(from, left);
prev_left = left;
goto again;
goto relock;
}
}
btrfs_inode_unlock(inode, ilock_flags);
/*
* Add back IOCB_DSYNC. Our caller, btrfs_file_write_iter(), will do
* the fsync (call generic_write_sync()).
*/
if (is_sync_write)
iocb->ki_flags |= IOCB_DSYNC;
/* If 'err' is -ENOTBLK then it means we must fallback to buffered IO. */
if ((err < 0 && err != -ENOTBLK) || !iov_iter_count(from))
goto out;


@@ -232,8 +232,10 @@ static int test_no_shared_qgroup(struct btrfs_root *root,
ret = insert_normal_tree_ref(root, nodesize, nodesize, 0,
BTRFS_FS_TREE_OBJECTID);
if (ret)
if (ret) {
ulist_free(old_roots);
return ret;
}
ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots, false);
if (ret) {
@@ -266,8 +268,10 @@ static int test_no_shared_qgroup(struct btrfs_root *root,
}
ret = remove_extent_item(root, nodesize, nodesize);
if (ret)
if (ret) {
ulist_free(old_roots);
return -EINVAL;
}
ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots, false);
if (ret) {
@@ -329,8 +333,10 @@ static int test_multiple_refs(struct btrfs_root *root,
ret = insert_normal_tree_ref(root, nodesize, nodesize, 0,
BTRFS_FS_TREE_OBJECTID);
if (ret)
if (ret) {
ulist_free(old_roots);
return ret;
}
ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots, false);
if (ret) {
@@ -362,8 +368,10 @@ static int test_multiple_refs(struct btrfs_root *root,
ret = add_tree_ref(root, nodesize, nodesize, 0,
BTRFS_FIRST_FREE_OBJECTID);
if (ret)
if (ret) {
ulist_free(old_roots);
return ret;
}
ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots, false);
if (ret) {
@@ -401,8 +409,10 @@ static int test_multiple_refs(struct btrfs_root *root,
ret = remove_extent_ref(root, nodesize, nodesize, 0,
BTRFS_FIRST_FREE_OBJECTID);
if (ret)
if (ret) {
ulist_free(old_roots);
return ret;
}
ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots, false);
if (ret) {


@@ -3711,12 +3711,11 @@ CIFSTCon(const unsigned int xid, struct cifs_ses *ses,
pSMB->AndXCommand = 0xFF;
pSMB->Flags = cpu_to_le16(TCON_EXTENDED_SECINFO);
bcc_ptr = &pSMB->Password[0];
if (tcon->pipe || (ses->server->sec_mode & SECMODE_USER)) {
pSMB->PasswordLength = cpu_to_le16(1); /* minimum */
*bcc_ptr = 0; /* password is null byte */
bcc_ptr++; /* skip password */
/* already aligned so no need to do it below */
}
if (ses->server->sign)
smb_buffer->Flags2 |= SMBFLG2_SECURITY_SIGNATURE;


@@ -205,14 +205,19 @@ static int allocate_filesystem_keyring(struct super_block *sb)
}
/*
* This is called at unmount time to release all encryption keys that have been
* added to the filesystem, along with the keyring that contains them.
* Release all encryption keys that have been added to the filesystem, along
* with the keyring that contains them.
*
* Note that besides clearing and freeing memory, this might need to evict keys
* from the keyslots of an inline crypto engine. Therefore, this must be called
* while the filesystem's underlying block device(s) are still available.
* This is called at unmount time. The filesystem's underlying block device(s)
* are still available at this time; this is important because after user file
* accesses have been allowed, this function may need to evict keys from the
* keyslots of an inline crypto engine, which requires the block device(s).
*
* This is also called when the super_block is being freed. This is needed to
* avoid a memory leak if mounting fails after the "test_dummy_encryption"
* option was processed, as in that case the unmount-time call isn't made.
*/
void fscrypt_sb_delete(struct super_block *sb)
void fscrypt_destroy_keyring(struct super_block *sb)
{
struct fscrypt_keyring *keyring = sb->s_master_keys;
size_t i;


@@ -425,7 +425,8 @@ int ext4_ext_migrate(struct inode *inode)
* already is extent-based, error out.
*/
if (!ext4_has_feature_extents(inode->i_sb) ||
(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS) ||
ext4_has_inline_data(inode))
return -EINVAL;
if (S_ISLNK(inode->i_mode) && inode->i_blocks == 0)

Some files were not shown because too many files have changed in this diff.