Merge 5.15.29 into android-5.15
Changes in 5.15.29
arm64: dts: qcom: sm8350: Describe GCC dependency clocks
arm64: dts: qcom: sm8350: Correct UFS symbol clocks
HID: elo: Revert USB reference counting
HID: hid-thrustmaster: fix OOB read in thrustmaster_interrupts
ARM: boot: dts: bcm2711: Fix HVS register range
clk: qcom: gdsc: Add support to update GDSC transition delay
clk: qcom: dispcc: Update the transition delay for MDSS GDSC
HID: vivaldi: fix sysfs attributes leak
arm64: dts: armada-3720-turris-mox: Add missing ethernet0 alias
tipc: fix kernel panic when enabling bearer
vdpa/mlx5: add validation for VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET command
vduse: Fix returning wrong type in vduse_domain_alloc_iova()
net: phy: meson-gxl: fix interrupt handling in forced mode
mISDN: Fix memory leak in dsp_pipeline_build()
vhost: fix hung thread due to erroneous iotlb entries
virtio-blk: Don't use MAX_DISCARD_SEGMENTS if max_discard_seg is zero
vdpa: fix use-after-free on vp_vdpa_remove
isdn: hfcpci: check the return value of dma_set_mask() in setup_hw()
net: qlogic: check the return value of dma_alloc_coherent() in qed_vf_hw_prepare()
esp: Fix possible buffer overflow in ESP transformation
esp: Fix BEET mode inter address family tunneling on GSO
qed: return status of qed_iov_get_link
smsc95xx: Ignore -ENODEV errors when device is unplugged
gpiolib: acpi: Convert ACPI value of debounce to microseconds
drm/sun4i: mixer: Fix P010 and P210 format numbers
net: dsa: mt7530: fix incorrect test in mt753x_phylink_validate()
ARM: dts: aspeed: Fix AST2600 quad spi group
iavf: Fix handling of vlan strip virtual channel messages
i40e: stop disabling VFs due to PF error responses
ice: stop disabling VFs due to PF error responses
ice: Fix error with handling of bonding MTU
ice: Don't use GFP_KERNEL in atomic context
ice: Fix curr_link_speed advertised speed
ethernet: Fix error handling in xemaclite_of_probe
tipc: fix incorrect order of state message data sanity check
net: ethernet: ti: cpts: Handle error for clk_enable
net: ethernet: lpc_eth: Handle error for clk_enable
net: marvell: prestera: Add missing of_node_put() in prestera_switch_set_base_mac_addr
ax25: Fix NULL pointer dereference in ax25_kill_by_device
net/mlx5: Fix size field in bufferx_reg struct
net/mlx5: Fix a race on command flush flow
net/mlx5e: Lag, Only handle events from highest priority multipath entry
NFC: port100: fix use-after-free in port100_send_complete
selftests: pmtu.sh: Kill tcpdump processes launched by subshell.
selftests: pmtu.sh: Kill nettest processes launched in subshell.
gpio: ts4900: Do not set DAT and OE together
gianfar: ethtool: Fix refcount leak in gfar_get_ts_info
net: phy: DP83822: clear MISR2 register to disable interrupts
sctp: fix kernel-infoleak for SCTP sockets
net: bcmgenet: Don't claim WOL when its not available
net: phy: meson-gxl: improve link-up behavior
selftests/bpf: Add test for bpf_timer overwriting crash
swiotlb: fix info leak with DMA_FROM_DEVICE
usb: dwc3: pci: add support for the Intel Raptor Lake-S
pinctrl: tigerlake: Revert "Add Alder Lake-M ACPI ID"
KVM: Fix lockdep false negative during host resume
kvm: x86: Disable KVM_HC_CLOCK_PAIRING if tsc is in always catchup mode
spi: rockchip: Fix error in getting num-cs property
spi: rockchip: terminate dma transmission when slave abort
drm/vc4: hdmi: Unregister codec device on unbind
x86/kvm: Don't use pv tlb/ipi/sched_yield if on 1 vCPU
net-sysfs: add check for netdevice being present to speed_show
hwmon: (pmbus) Clear pmbus fault/warning bits after read
PCI: Mark all AMD Navi10 and Navi14 GPU ATS as broken
gpio: Return EPROBE_DEFER if gc->to_irq is NULL
drm/amdgpu: bypass tiling flag check in virtual display case (v2)
Revert "xen-netback: remove 'hotplug-status' once it has served its purpose"
Revert "xen-netback: Check for hotplug-status existence before watching"
ipv6: prevent a possible race condition with lifetimes
tracing: Ensure trace buffer is at least 4096 bytes large
tracing/osnoise: Make osnoise_main to sleep for microseconds
selftest/vm: fix map_fixed_noreplace test failure
selftests/memfd: clean up mapping in mfd_fail_write
ARM: Spectre-BHB: provide empty stub for non-config
fuse: fix fileattr op failure
fuse: fix pipe buffer lifetime for direct_io
staging: rtl8723bs: Fix access-point mode deadlock
staging: gdm724x: fix use after free in gdm_lte_rx()
net: macb: Fix lost RX packet wakeup race in NAPI receive
riscv: alternative only works on !XIP_KERNEL
mmc: meson: Fix usage of meson_mmc_post_req()
riscv: Fix auipc+jalr relocation range checks
tracing/osnoise: Force quiescent states while tracing
arm64: dts: marvell: armada-37xx: Remap IO space to bus address 0x0
arm64: Ensure execute-only permissions are not allowed without EPAN
arm64: kasan: fix include error in MTE functions
swiotlb: rework "fix info leak with DMA_FROM_DEVICE"
KVM: x86/mmu: kvm_faultin_pfn has to return false if pfh is returned
virtio: unexport virtio_finalize_features
virtio: acknowledge all features before access
net/mlx5: Fix offloading with ESWITCH_IPV4_TTL_MODIFY_ENABLE
ARM: fix Thumb2 regression with Spectre BHB
watch_queue: Fix filter limit check
watch_queue, pipe: Free watchqueue state after clearing pipe ring
watch_queue: Fix to release page in ->release()
watch_queue: Fix to always request a pow-of-2 pipe ring size
watch_queue: Fix the alloc bitmap size to reflect notes allocated
watch_queue: Free the alloc bitmap when the watch_queue is torn down
watch_queue: Fix lack of barrier/sync/lock between post and read
watch_queue: Make comment about setting ->defunct more accurate
x86/boot: Fix memremap of setup_indirect structures
x86/boot: Add setup_indirect support in early_memremap_is_setup_data()
x86/sgx: Free backing memory after faulting the enclave page
x86/traps: Mark do_int3() NOKPROBE_SYMBOL
drm/panel: Select DRM_DP_HELPER for DRM_PANEL_EDP
btrfs: make send work with concurrent block group relocation
drm/i915: Workaround broken BIOS DBUF configuration on TGL/RKL
riscv: dts: k210: fix broken IRQs on hart1
block: drop unused includes in <linux/genhd.h>
Revert "net: dsa: mv88e6xxx: flush switchdev FDB workqueue before removing VLAN"
vhost: allow batching hint without size
Linux 5.15.29
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I5c9c6006b90a8283a81fd5f7c79776e1a0cfb6b1

Makefile:
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 15
SUBLEVEL = 28
SUBLEVEL = 29
EXTRAVERSION =
NAME = Trick or Treat
@@ -118,7 +118,7 @@
};

pinctrl_fwqspid_default: fwqspid_default {
function = "FWQSPID";
function = "FWSPID";
groups = "FWQSPID";
};
@@ -290,6 +290,7 @@

hvs: hvs@7e400000 {
compatible = "brcm,bcm2711-hvs";
reg = <0x7e400000 0x8000>;
interrupts = <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>;
};
@@ -25,7 +25,13 @@ enum {
SPECTRE_V2_METHOD_LOOP8 = BIT(__SPECTRE_V2_METHOD_LOOP8),
};

#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
void spectre_v2_update_state(unsigned int state, unsigned int methods);
#else
static inline void spectre_v2_update_state(unsigned int state,
unsigned int methods)
{}
#endif

int spectre_bhb_update_vectors(unsigned int method);
@@ -1038,9 +1038,9 @@ vector_bhb_loop8_\name:

@ bhb workaround
mov r0, #8
1: b . + 4
3: b . + 4
subs r0, r0, #1
bne 1b
bne 3b
dsb
isb
b 2b
@@ -1168,9 +1168,6 @@ config HW_PERF_EVENTS
def_bool y
depends on ARM_PMU

config ARCH_HAS_FILTER_PGPROT
def_bool y

# Supported by clang >= 7.0
config CC_HAVE_SHADOW_CALL_STACK
def_bool $(cc-option, -fsanitize=shadow-call-stack -ffixed-x18)
@@ -18,6 +18,7 @@

aliases {
spi0 = &spi0;
ethernet0 = &eth0;
ethernet1 = &eth1;
mmc0 = &sdhci0;
mmc1 = &sdhci1;

@@ -138,7 +139,9 @@
/*
* U-Boot port for Turris Mox has a bug which always expects that "ranges" DT property
* contains exactly 2 ranges with 3 (child) address cells, 2 (parent) address cells and
* 2 size cells and also expects that the second range starts at 16 MB offset. If these
* 2 size cells and also expects that the second range starts at 16 MB offset. Also it
* expects that first range uses same address for PCI (child) and CPU (parent) cells (so
* no remapping) and that this address is the lowest from all specified ranges. If these
* conditions are not met then U-Boot crashes during loading kernel DTB file. PCIe address
* space is 128 MB long, so the best split between MEM and IO is to use fixed 16 MB window
* for IO and the rest 112 MB (64+32+16) for MEM, despite that maximal IO size is just 64 kB.

@@ -147,6 +150,9 @@
* https://source.denx.de/u-boot/u-boot/-/commit/cb2ddb291ee6fcbddd6d8f4ff49089dfe580f5d7
* https://source.denx.de/u-boot/u-boot/-/commit/c64ac3b3185aeb3846297ad7391fc6df8ecd73bf
* https://source.denx.de/u-boot/u-boot/-/commit/4a82fca8e330157081fc132a591ebd99ba02ee33
* Bug related to requirement of same child and parent addresses for first range is fixed
* in U-Boot version 2022.04 by following commit:
* https://source.denx.de/u-boot/u-boot/-/commit/1fd54253bca7d43d046bba4853fe5fafd034bc17
*/
#address-cells = <3>;
#size-cells = <2>;
@@ -497,7 +497,7 @@
* (totaling 127 MiB) for MEM.
*/
ranges = <0x82000000 0 0xe8000000 0 0xe8000000 0 0x07f00000 /* Port 0 MEM */
0x81000000 0 0xefff0000 0 0xefff0000 0 0x00010000>; /* Port 0 IO */
0x81000000 0 0x00000000 0 0xefff0000 0 0x00010000>; /* Port 0 IO */
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc 0>,
<0 0 0 2 &pcie_intc 1>,
@@ -35,6 +35,24 @@
clock-frequency = <32000>;
#clock-cells = <0>;
};

ufs_phy_rx_symbol_0_clk: ufs-phy-rx-symbol-0 {
compatible = "fixed-clock";
clock-frequency = <1000>;
#clock-cells = <0>;
};

ufs_phy_rx_symbol_1_clk: ufs-phy-rx-symbol-1 {
compatible = "fixed-clock";
clock-frequency = <1000>;
#clock-cells = <0>;
};

ufs_phy_tx_symbol_0_clk: ufs-phy-tx-symbol-0 {
compatible = "fixed-clock";
clock-frequency = <1000>;
#clock-cells = <0>;
};
};

cpus {

@@ -443,8 +461,30 @@
#clock-cells = <1>;
#reset-cells = <1>;
#power-domain-cells = <1>;
clock-names = "bi_tcxo", "sleep_clk";
clocks = <&rpmhcc RPMH_CXO_CLK>, <&sleep_clk>;
clock-names = "bi_tcxo",
"sleep_clk",
"pcie_0_pipe_clk",
"pcie_1_pipe_clk",
"ufs_card_rx_symbol_0_clk",
"ufs_card_rx_symbol_1_clk",
"ufs_card_tx_symbol_0_clk",
"ufs_phy_rx_symbol_0_clk",
"ufs_phy_rx_symbol_1_clk",
"ufs_phy_tx_symbol_0_clk",
"usb3_phy_wrapper_gcc_usb30_pipe_clk",
"usb3_uni_phy_sec_gcc_usb30_pipe_clk";
clocks = <&rpmhcc RPMH_CXO_CLK>,
<&sleep_clk>,
<0>,
<0>,
<0>,
<0>,
<0>,
<&ufs_phy_rx_symbol_0_clk>,
<&ufs_phy_rx_symbol_1_clk>,
<&ufs_phy_tx_symbol_0_clk>,
<0>,
<0>;
};

ipcc: mailbox@408000 {

@@ -1060,8 +1100,8 @@
<75000000 300000000>,
<0 0>,
<0 0>,
<75000000 300000000>,
<75000000 300000000>;
<0 0>,
<0 0>;
status = "disabled";
};
@@ -5,6 +5,7 @@
#ifndef __ASM_MTE_KASAN_H
#define __ASM_MTE_KASAN_H

#include <asm/compiler.h>
#include <asm/mte-def.h>

#ifndef __ASSEMBLY__
@@ -92,7 +92,7 @@ extern bool arm64_use_ng_mappings;
#define __P001 PAGE_READONLY
#define __P010 PAGE_READONLY
#define __P011 PAGE_READONLY
#define __P100 PAGE_EXECONLY
#define __P100 PAGE_READONLY_EXEC /* PAGE_EXECONLY if Enhanced PAN */
#define __P101 PAGE_READONLY_EXEC
#define __P110 PAGE_READONLY_EXEC
#define __P111 PAGE_READONLY_EXEC

@@ -101,7 +101,7 @@ extern bool arm64_use_ng_mappings;
#define __S001 PAGE_READONLY
#define __S010 PAGE_SHARED
#define __S011 PAGE_SHARED
#define __S100 PAGE_EXECONLY
#define __S100 PAGE_READONLY_EXEC /* PAGE_EXECONLY if Enhanced PAN */
#define __S101 PAGE_READONLY_EXEC
#define __S110 PAGE_SHARED_EXEC
#define __S111 PAGE_SHARED_EXEC
@@ -1026,18 +1026,6 @@ static inline bool arch_wants_old_prefaulted_pte(void)
}
#define arch_wants_old_prefaulted_pte arch_wants_old_prefaulted_pte

static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
{
if (cpus_have_const_cap(ARM64_HAS_EPAN))
return prot;

if (pgprot_val(prot) != pgprot_val(PAGE_EXECONLY))
return prot;

return PAGE_READONLY_EXEC;
}

#endif /* !__ASSEMBLY__ */

#endif /* __ASM_PGTABLE_H */
@@ -7,8 +7,10 @@

#include <linux/io.h>
#include <linux/memblock.h>
#include <linux/mm.h>
#include <linux/types.h>

#include <asm/cpufeature.h>
#include <asm/page.h>

/*

@@ -38,3 +40,18 @@ int valid_mmap_phys_addr_range(unsigned long pfn, size_t size)
{
return !(((pfn << PAGE_SHIFT) + size) & ~PHYS_MASK);
}

static int __init adjust_protection_map(void)
{
/*
* With Enhanced PAN we can honour the execute-only permissions as
* there is no PAN override with such mappings.
*/
if (cpus_have_const_cap(ARM64_HAS_EPAN)) {
protection_map[VM_EXEC] = PAGE_EXECONLY;
protection_map[VM_EXEC | VM_SHARED] = PAGE_EXECONLY;
}

return 0;
}
arch_initcall(adjust_protection_map);
@@ -2,6 +2,7 @@ menu "CPU errata selection"

config RISCV_ERRATA_ALTERNATIVE
bool "RISC-V alternative scheme"
depends on !XIP_KERNEL
default y
help
This Kconfig allows the kernel to automatically patch the
@@ -14,8 +14,8 @@ config SOC_SIFIVE
select CLK_SIFIVE
select CLK_SIFIVE_PRCI
select SIFIVE_PLIC
select RISCV_ERRATA_ALTERNATIVE
select ERRATA_SIFIVE
select RISCV_ERRATA_ALTERNATIVE if !XIP_KERNEL
select ERRATA_SIFIVE if !XIP_KERNEL
help
This enables support for SiFive SoC platform hardware.
@@ -113,7 +113,8 @@
compatible = "canaan,k210-plic", "sifive,plic-1.0.0";
reg = <0xC000000 0x4000000>;
interrupt-controller;
interrupts-extended = <&cpu0_intc 11 &cpu1_intc 11>;
interrupts-extended = <&cpu0_intc 11>, <&cpu0_intc 9>,
<&cpu1_intc 11>, <&cpu1_intc 9>;
riscv,ndev = <65>;
};
@@ -13,6 +13,19 @@
|
||||
#include <linux/pgtable.h>
|
||||
#include <asm/sections.h>
|
||||
|
||||
/*
|
||||
* The auipc+jalr instruction pair can reach any PC-relative offset
|
||||
* in the range [-2^31 - 2^11, 2^31 - 2^11)
|
||||
*/
|
||||
static bool riscv_insn_valid_32bit_offset(ptrdiff_t val)
|
||||
{
|
||||
#ifdef CONFIG_32BIT
|
||||
return true;
|
||||
#else
|
||||
return (-(1L << 31) - (1L << 11)) <= val && val < ((1L << 31) - (1L << 11));
|
||||
#endif
|
||||
}
|
||||
|
||||
static int apply_r_riscv_32_rela(struct module *me, u32 *location, Elf_Addr v)
|
||||
{
|
||||
if (v != (u32)v) {
|
||||
@@ -95,7 +108,7 @@ static int apply_r_riscv_pcrel_hi20_rela(struct module *me, u32 *location,
|
||||
ptrdiff_t offset = (void *)v - (void *)location;
|
||||
s32 hi20;
|
||||
|
||||
if (offset != (s32)offset) {
|
||||
if (!riscv_insn_valid_32bit_offset(offset)) {
|
||||
pr_err(
|
||||
"%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
|
||||
me->name, (long long)v, location);
|
||||
@@ -197,10 +210,9 @@ static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location,
|
||||
Elf_Addr v)
|
||||
{
|
||||
ptrdiff_t offset = (void *)v - (void *)location;
|
||||
s32 fill_v = offset;
|
||||
u32 hi20, lo12;
|
||||
|
||||
if (offset != fill_v) {
|
||||
if (!riscv_insn_valid_32bit_offset(offset)) {
|
||||
/* Only emit the plt entry if offset over 32-bit range */
|
||||
if (IS_ENABLED(CONFIG_MODULE_SECTIONS)) {
|
||||
offset = module_emit_plt_entry(me, v);
|
||||
@@ -224,10 +236,9 @@ static int apply_r_riscv_call_rela(struct module *me, u32 *location,
|
||||
Elf_Addr v)
|
||||
{
|
||||
ptrdiff_t offset = (void *)v - (void *)location;
|
||||
s32 fill_v = offset;
|
||||
u32 hi20, lo12;
|
||||
|
||||
if (offset != fill_v) {
|
||||
if (!riscv_insn_valid_32bit_offset(offset)) {
|
||||
pr_err(
|
||||
"%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
|
||||
me->name, (long long)v, location);
|
||||
|
||||
@@ -27,6 +27,7 @@
|
||||
#include <linux/blk-mq.h>
|
||||
#include <linux/ata.h>
|
||||
#include <linux/hdreg.h>
|
||||
#include <linux/major.h>
|
||||
#include <linux/cdrom.h>
|
||||
#include <linux/proc_fs.h>
|
||||
#include <linux/seq_file.h>
|
||||
|
||||
@@ -12,6 +12,30 @@
|
||||
#include "encls.h"
|
||||
#include "sgx.h"
|
||||
|
||||
/*
|
||||
* Calculate byte offset of a PCMD struct associated with an enclave page. PCMD's
|
||||
* follow right after the EPC data in the backing storage. In addition to the
|
||||
* visible enclave pages, there's one extra page slot for SECS, before PCMD
|
||||
* structs.
|
||||
*/
|
||||
static inline pgoff_t sgx_encl_get_backing_page_pcmd_offset(struct sgx_encl *encl,
|
||||
unsigned long page_index)
|
||||
{
|
||||
pgoff_t epc_end_off = encl->size + sizeof(struct sgx_secs);
|
||||
|
||||
return epc_end_off + page_index * sizeof(struct sgx_pcmd);
|
||||
}
|
||||
|
||||
/*
|
||||
* Free a page from the backing storage in the given page index.
|
||||
*/
|
||||
static inline void sgx_encl_truncate_backing_page(struct sgx_encl *encl, unsigned long page_index)
|
||||
{
|
||||
struct inode *inode = file_inode(encl->backing);
|
||||
|
||||
shmem_truncate_range(inode, PFN_PHYS(page_index), PFN_PHYS(page_index) + PAGE_SIZE - 1);
|
||||
}
|
||||
|
||||
/*
|
||||
* ELDU: Load an EPC page as unblocked. For more info, see "OS Management of EPC
|
||||
* Pages" in the SDM.
|
||||
@@ -22,9 +46,11 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
|
||||
{
|
||||
unsigned long va_offset = encl_page->desc & SGX_ENCL_PAGE_VA_OFFSET_MASK;
|
||||
struct sgx_encl *encl = encl_page->encl;
|
||||
pgoff_t page_index, page_pcmd_off;
|
||||
struct sgx_pageinfo pginfo;
|
||||
struct sgx_backing b;
|
||||
pgoff_t page_index;
|
||||
bool pcmd_page_empty;
|
||||
u8 *pcmd_page;
|
||||
int ret;
|
||||
|
||||
if (secs_page)
|
||||
@@ -32,14 +58,16 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
|
||||
else
|
||||
page_index = PFN_DOWN(encl->size);
|
||||
|
||||
page_pcmd_off = sgx_encl_get_backing_page_pcmd_offset(encl, page_index);
|
||||
|
||||
ret = sgx_encl_get_backing(encl, page_index, &b);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
pginfo.addr = encl_page->desc & PAGE_MASK;
|
||||
pginfo.contents = (unsigned long)kmap_atomic(b.contents);
|
||||
pginfo.metadata = (unsigned long)kmap_atomic(b.pcmd) +
|
||||
b.pcmd_offset;
|
||||
pcmd_page = kmap_atomic(b.pcmd);
|
||||
pginfo.metadata = (unsigned long)pcmd_page + b.pcmd_offset;
|
||||
|
||||
if (secs_page)
|
||||
pginfo.secs = (u64)sgx_get_epc_virt_addr(secs_page);
|
||||
@@ -55,11 +83,24 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
|
||||
ret = -EFAULT;
|
||||
}
|
||||
|
||||
kunmap_atomic((void *)(unsigned long)(pginfo.metadata - b.pcmd_offset));
|
||||
memset(pcmd_page + b.pcmd_offset, 0, sizeof(struct sgx_pcmd));
|
||||
|
||||
/*
|
||||
* The area for the PCMD in the page was zeroed above. Check if the
|
||||
* whole page is now empty meaning that all PCMD's have been zeroed:
|
||||
*/
|
||||
pcmd_page_empty = !memchr_inv(pcmd_page, 0, PAGE_SIZE);
|
||||
|
||||
kunmap_atomic(pcmd_page);
|
||||
kunmap_atomic((void *)(unsigned long)pginfo.contents);
|
||||
|
||||
sgx_encl_put_backing(&b, false);
|
||||
|
||||
sgx_encl_truncate_backing_page(encl, page_index);
|
||||
|
||||
if (pcmd_page_empty)
|
||||
sgx_encl_truncate_backing_page(encl, PFN_DOWN(page_pcmd_off));
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
@@ -579,7 +620,7 @@ static struct page *sgx_encl_get_backing_page(struct sgx_encl *encl,
|
||||
int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
|
||||
struct sgx_backing *backing)
|
||||
{
|
||||
pgoff_t pcmd_index = PFN_DOWN(encl->size) + 1 + (page_index >> 5);
|
||||
pgoff_t page_pcmd_off = sgx_encl_get_backing_page_pcmd_offset(encl, page_index);
|
||||
struct page *contents;
|
||||
struct page *pcmd;
|
||||
|
||||
@@ -587,7 +628,7 @@ int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
|
||||
if (IS_ERR(contents))
|
||||
return PTR_ERR(contents);
|
||||
|
||||
pcmd = sgx_encl_get_backing_page(encl, pcmd_index);
|
||||
pcmd = sgx_encl_get_backing_page(encl, PFN_DOWN(page_pcmd_off));
|
||||
if (IS_ERR(pcmd)) {
|
||||
put_page(contents);
|
||||
return PTR_ERR(pcmd);
|
||||
@@ -596,9 +637,7 @@ int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
|
||||
backing->page_index = page_index;
|
||||
backing->contents = contents;
|
||||
backing->pcmd = pcmd;
|
||||
backing->pcmd_offset =
|
||||
(page_index & (PAGE_SIZE / sizeof(struct sgx_pcmd) - 1)) *
|
||||
sizeof(struct sgx_pcmd);
|
||||
backing->pcmd_offset = page_pcmd_off & (PAGE_SIZE - 1);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -995,8 +995,10 @@ early_param("memmap", parse_memmap_opt);
|
||||
*/
|
||||
void __init e820__reserve_setup_data(void)
|
||||
{
|
||||
struct setup_indirect *indirect;
|
||||
struct setup_data *data;
|
||||
u64 pa_data;
|
||||
u64 pa_data, pa_next;
|
||||
u32 len;
|
||||
|
||||
pa_data = boot_params.hdr.setup_data;
|
||||
if (!pa_data)
|
||||
@@ -1004,6 +1006,14 @@ void __init e820__reserve_setup_data(void)
|
||||
|
||||
while (pa_data) {
|
||||
data = early_memremap(pa_data, sizeof(*data));
|
||||
if (!data) {
|
||||
pr_warn("e820: failed to memremap setup_data entry\n");
|
||||
return;
|
||||
}
|
||||
|
||||
len = sizeof(*data);
|
||||
pa_next = data->next;
|
||||
|
||||
e820__range_update(pa_data, sizeof(*data)+data->len, E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
|
||||
|
||||
/*
|
||||
@@ -1015,18 +1025,27 @@ void __init e820__reserve_setup_data(void)
|
||||
sizeof(*data) + data->len,
|
||||
E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
|
||||
|
||||
if (data->type == SETUP_INDIRECT &&
|
||||
((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) {
|
||||
e820__range_update(((struct setup_indirect *)data->data)->addr,
|
||||
((struct setup_indirect *)data->data)->len,
|
||||
E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
|
||||
e820__range_update_kexec(((struct setup_indirect *)data->data)->addr,
|
||||
((struct setup_indirect *)data->data)->len,
|
||||
E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
|
||||
if (data->type == SETUP_INDIRECT) {
|
||||
len += data->len;
|
||||
early_memunmap(data, sizeof(*data));
|
||||
data = early_memremap(pa_data, len);
|
||||
if (!data) {
|
||||
pr_warn("e820: failed to memremap indirect setup_data\n");
|
||||
return;
|
||||
}
|
||||
|
||||
pa_data = data->next;
|
||||
early_memunmap(data, sizeof(*data));
|
||||
indirect = (struct setup_indirect *)data->data;
|
||||
|
||||
if (indirect->type != SETUP_INDIRECT) {
|
||||
e820__range_update(indirect->addr, indirect->len,
|
||||
E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
|
||||
e820__range_update_kexec(indirect->addr, indirect->len,
|
||||
E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
|
||||
}
|
||||
}
|
||||
|
||||
pa_data = pa_next;
|
||||
early_memunmap(data, len);
|
||||
}
|
||||
|
||||
e820__update_table(e820_table);
|
||||
|
||||
@@ -88,11 +88,13 @@ create_setup_data_node(struct dentry *parent, int no,
|
||||
|
||||
static int __init create_setup_data_nodes(struct dentry *parent)
|
||||
{
|
||||
struct setup_indirect *indirect;
|
||||
struct setup_data_node *node;
|
||||
struct setup_data *data;
|
||||
int error;
|
||||
u64 pa_data, pa_next;
|
||||
struct dentry *d;
|
||||
u64 pa_data;
|
||||
int error;
|
||||
u32 len;
|
||||
int no = 0;
|
||||
|
||||
d = debugfs_create_dir("setup_data", parent);
|
||||
@@ -112,12 +114,29 @@ static int __init create_setup_data_nodes(struct dentry *parent)
|
||||
error = -ENOMEM;
|
||||
goto err_dir;
|
||||
}
|
||||
pa_next = data->next;
|
||||
|
||||
if (data->type == SETUP_INDIRECT &&
|
||||
((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) {
|
||||
node->paddr = ((struct setup_indirect *)data->data)->addr;
|
||||
node->type = ((struct setup_indirect *)data->data)->type;
|
||||
node->len = ((struct setup_indirect *)data->data)->len;
|
||||
if (data->type == SETUP_INDIRECT) {
|
||||
len = sizeof(*data) + data->len;
|
||||
memunmap(data);
|
||||
data = memremap(pa_data, len, MEMREMAP_WB);
|
||||
if (!data) {
|
||||
kfree(node);
|
||||
error = -ENOMEM;
|
||||
goto err_dir;
|
||||
}
|
||||
|
||||
indirect = (struct setup_indirect *)data->data;
|
||||
|
||||
if (indirect->type != SETUP_INDIRECT) {
|
||||
node->paddr = indirect->addr;
|
||||
node->type = indirect->type;
|
||||
node->len = indirect->len;
|
||||
} else {
|
||||
node->paddr = pa_data;
|
||||
node->type = data->type;
|
||||
node->len = data->len;
|
||||
}
|
||||
} else {
|
||||
node->paddr = pa_data;
|
||||
node->type = data->type;
|
||||
@@ -125,7 +144,7 @@ static int __init create_setup_data_nodes(struct dentry *parent)
|
||||
}
|
||||
|
||||
create_setup_data_node(d, no, node);
|
||||
pa_data = data->next;
|
||||
pa_data = pa_next;
|
||||
|
||||
memunmap(data);
|
||||
no++;
|
||||
|
||||
@@ -91,26 +91,41 @@ static int get_setup_data_paddr(int nr, u64 *paddr)
|
||||
|
||||
static int __init get_setup_data_size(int nr, size_t *size)
|
||||
{
|
||||
int i = 0;
|
||||
u64 pa_data = boot_params.hdr.setup_data, pa_next;
|
||||
struct setup_indirect *indirect;
|
||||
struct setup_data *data;
|
||||
u64 pa_data = boot_params.hdr.setup_data;
|
||||
int i = 0;
|
||||
u32 len;
|
||||
|
||||
while (pa_data) {
|
||||
data = memremap(pa_data, sizeof(*data), MEMREMAP_WB);
|
||||
if (!data)
|
||||
return -ENOMEM;
|
||||
pa_next = data->next;
|
||||
|
||||
if (nr == i) {
|
||||
if (data->type == SETUP_INDIRECT &&
|
||||
((struct setup_indirect *)data->data)->type != SETUP_INDIRECT)
|
||||
*size = ((struct setup_indirect *)data->data)->len;
|
||||
if (data->type == SETUP_INDIRECT) {
|
||||
len = sizeof(*data) + data->len;
|
||||
memunmap(data);
|
||||
data = memremap(pa_data, len, MEMREMAP_WB);
|
||||
if (!data)
|
||||
return -ENOMEM;
|
||||
|
||||
indirect = (struct setup_indirect *)data->data;
|
||||
|
||||
if (indirect->type != SETUP_INDIRECT)
|
||||
*size = indirect->len;
|
||||
else
|
||||
*size = data->len;
|
||||
} else {
|
||||
*size = data->len;
|
||||
}
|
||||
|
||||
memunmap(data);
|
||||
return 0;
|
||||
}
|
||||
|
||||
pa_data = data->next;
|
||||
pa_data = pa_next;
|
||||
memunmap(data);
|
||||
i++;
|
||||
}
|
||||
@@ -120,9 +135,11 @@ static int __init get_setup_data_size(int nr, size_t *size)
|
||||
static ssize_t type_show(struct kobject *kobj,
|
||||
struct kobj_attribute *attr, char *buf)
|
||||
{
|
||||
struct setup_indirect *indirect;
|
||||
struct setup_data *data;
|
||||
int nr, ret;
|
||||
u64 paddr;
|
||||
struct setup_data *data;
|
||||
u32 len;
|
||||
|
||||
ret = kobj_to_setup_data_nr(kobj, &nr);
|
||||
if (ret)
|
||||
@@ -135,10 +152,20 @@ static ssize_t type_show(struct kobject *kobj,
|
||||
if (!data)
|
||||
return -ENOMEM;
|
||||
|
||||
if (data->type == SETUP_INDIRECT)
|
||||
ret = sprintf(buf, "0x%x\n", ((struct setup_indirect *)data->data)->type);
|
||||
else
|
||||
if (data->type == SETUP_INDIRECT) {
|
||||
len = sizeof(*data) + data->len;
|
||||
memunmap(data);
|
||||
data = memremap(paddr, len, MEMREMAP_WB);
|
||||
if (!data)
|
||||
return -ENOMEM;
|
||||
|
||||
indirect = (struct setup_indirect *)data->data;
|
||||
|
||||
ret = sprintf(buf, "0x%x\n", indirect->type);
|
||||
} else {
|
||||
ret = sprintf(buf, "0x%x\n", data->type);
|
||||
}
|
||||
|
||||
memunmap(data);
|
||||
return ret;
|
||||
}
|
||||
@@ -149,9 +176,10 @@ static ssize_t setup_data_data_read(struct file *fp,
|
||||
char *buf,
|
||||
loff_t off, size_t count)
|
||||
{
|
||||
struct setup_indirect *indirect;
|
||||
struct setup_data *data;
|
||||
int nr, ret = 0;
|
||||
u64 paddr, len;
|
||||
struct setup_data *data;
|
||||
void *p;
|
||||
|
||||
ret = kobj_to_setup_data_nr(kobj, &nr);
|
||||
@@ -165,10 +193,27 @@ static ssize_t setup_data_data_read(struct file *fp,
|
||||
if (!data)
|
||||
return -ENOMEM;
|
||||
|
||||
if (data->type == SETUP_INDIRECT &&
|
||||
((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) {
|
||||
paddr = ((struct setup_indirect *)data->data)->addr;
|
||||
len = ((struct setup_indirect *)data->data)->len;
|
||||
if (data->type == SETUP_INDIRECT) {
|
||||
len = sizeof(*data) + data->len;
|
||||
memunmap(data);
|
||||
data = memremap(paddr, len, MEMREMAP_WB);
|
||||
if (!data)
|
||||
return -ENOMEM;
|
||||
|
||||
indirect = (struct setup_indirect *)data->data;
|
||||
|
||||
if (indirect->type != SETUP_INDIRECT) {
|
||||
paddr = indirect->addr;
|
||||
len = indirect->len;
|
||||
} else {
|
||||
/*
|
||||
* Even though this is technically undefined, return
|
||||
* the data as though it is a normal setup_data struct.
|
||||
* This will at least allow it to be inspected.
|
||||
*/
|
||||
paddr += sizeof(*data);
|
||||
len = data->len;
|
||||
}
|
||||
} else {
|
||||
paddr += sizeof(*data);
|
||||
len = data->len;
|
||||
|
||||
@@ -457,19 +457,22 @@ static bool pv_tlb_flush_supported(void)
|
||||
{
|
||||
return (kvm_para_has_feature(KVM_FEATURE_PV_TLB_FLUSH) &&
|
||||
!kvm_para_has_hint(KVM_HINTS_REALTIME) &&
|
||||
kvm_para_has_feature(KVM_FEATURE_STEAL_TIME));
|
||||
kvm_para_has_feature(KVM_FEATURE_STEAL_TIME) &&
|
||||
(num_possible_cpus() != 1));
|
||||
}
|
||||
|
||||
static bool pv_ipi_supported(void)
|
||||
{
|
||||
return kvm_para_has_feature(KVM_FEATURE_PV_SEND_IPI);
|
||||
return (kvm_para_has_feature(KVM_FEATURE_PV_SEND_IPI) &&
|
||||
(num_possible_cpus() != 1));
|
||||
}
|
||||
|
||||
static bool pv_sched_yield_supported(void)
|
||||
{
|
||||
return (kvm_para_has_feature(KVM_FEATURE_PV_SCHED_YIELD) &&
|
||||
!kvm_para_has_hint(KVM_HINTS_REALTIME) &&
|
||||
kvm_para_has_feature(KVM_FEATURE_STEAL_TIME));
|
||||
kvm_para_has_feature(KVM_FEATURE_STEAL_TIME) &&
|
||||
(num_possible_cpus() != 1));
|
||||
}
|
||||
|
||||
#define KVM_IPI_CLUSTER_SIZE (2 * BITS_PER_LONG)
|
||||
|
||||
@@ -368,21 +368,41 @@ static void __init parse_setup_data(void)
|
||||
|
||||
static void __init memblock_x86_reserve_range_setup_data(void)
|
||||
{
|
||||
struct setup_indirect *indirect;
|
||||
struct setup_data *data;
|
||||
u64 pa_data;
|
||||
u64 pa_data, pa_next;
|
||||
u32 len;
|
||||
|
||||
pa_data = boot_params.hdr.setup_data;
|
||||
while (pa_data) {
|
||||
data = early_memremap(pa_data, sizeof(*data));
|
||||
if (!data) {
|
||||
pr_warn("setup: failed to memremap setup_data entry\n");
|
||||
return;
|
||||
}
|
||||
|
||||
len = sizeof(*data);
|
||||
pa_next = data->next;
|
||||
|
||||
memblock_reserve(pa_data, sizeof(*data) + data->len);
|
||||
|
||||
if (data->type == SETUP_INDIRECT &&
|
||||
((struct setup_indirect *)data->data)->type != SETUP_INDIRECT)
|
||||
memblock_reserve(((struct setup_indirect *)data->data)->addr,
|
||||
((struct setup_indirect *)data->data)->len);
|
||||
|
||||
pa_data = data->next;
|
||||
if (data->type == SETUP_INDIRECT) {
|
||||
len += data->len;
|
||||
early_memunmap(data, sizeof(*data));
|
||||
data = early_memremap(pa_data, len);
|
||||
if (!data) {
|
||||
pr_warn("setup: failed to memremap indirect setup_data\n");
|
||||
return;
|
||||
}
|
||||
|
||||
indirect = (struct setup_indirect *)data->data;
|
||||
|
||||
if (indirect->type != SETUP_INDIRECT)
|
||||
memblock_reserve(indirect->addr, indirect->len);
|
||||
}
|
||||
|
||||
pa_data = pa_next;
|
||||
early_memunmap(data, len);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -659,6 +659,7 @@ static bool do_int3(struct pt_regs *regs)
|
||||
|
||||
return res == NOTIFY_STOP;
|
||||
}
|
||||
NOKPROBE_SYMBOL(do_int3);
|
||||
|
||||
static void do_int3_user(struct pt_regs *regs)
|
||||
{
|
||||
|
||||
@@ -3967,6 +3967,7 @@ static bool kvm_faultin_pfn(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
|
||||
|
||||
*pfn = __gfn_to_pfn_memslot(slot, gfn, false, NULL,
|
||||
write, writable, hva);
|
||||
return false;
|
||||
|
||||
out_retry:
|
||||
*r = RET_PF_RETRY;
|
||||
|
||||
@@ -8666,6 +8666,13 @@ static int kvm_pv_clock_pairing(struct kvm_vcpu *vcpu, gpa_t paddr,
|
||||
if (clock_type != KVM_CLOCK_PAIRING_WALLCLOCK)
|
||||
return -KVM_EOPNOTSUPP;
|
||||
|
||||
/*
|
||||
* When tsc is in permanent catchup mode guests won't be able to use
|
||||
* pvclock_read_retry loop to get consistent view of pvclock
|
||||
*/
|
||||
if (vcpu->arch.tsc_always_catchup)
|
||||
return -KVM_EOPNOTSUPP;
|
||||
|
||||
if (!kvm_get_walltime_and_clockread(&ts, &cycle))
|
||||
return -KVM_EOPNOTSUPP;
|
||||
|
||||
|
||||
@@ -614,6 +614,7 @@ static bool memremap_is_efi_data(resource_size_t phys_addr,
|
||||
static bool memremap_is_setup_data(resource_size_t phys_addr,
|
||||
unsigned long size)
|
||||
{
|
||||
struct setup_indirect *indirect;
|
||||
struct setup_data *data;
|
||||
u64 paddr, paddr_next;
|
||||
|
||||
@@ -626,6 +627,10 @@ static bool memremap_is_setup_data(resource_size_t phys_addr,
|
||||
|
||||
data = memremap(paddr, sizeof(*data),
|
||||
MEMREMAP_WB | MEMREMAP_DEC);
|
||||
if (!data) {
|
||||
pr_warn("failed to memremap setup_data entry\n");
|
||||
return false;
|
||||
}
|
||||
|
||||
paddr_next = data->next;
|
||||
len = data->len;
|
||||
@@ -635,10 +640,21 @@ static bool memremap_is_setup_data(resource_size_t phys_addr,
|
||||
return true;
|
||||
}
|
||||
|
||||
if (data->type == SETUP_INDIRECT &&
|
||||
((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) {
|
||||
paddr = ((struct setup_indirect *)data->data)->addr;
|
||||
len = ((struct setup_indirect *)data->data)->len;
|
||||
if (data->type == SETUP_INDIRECT) {
|
||||
memunmap(data);
|
||||
data = memremap(paddr, sizeof(*data) + len,
|
||||
MEMREMAP_WB | MEMREMAP_DEC);
|
||||
if (!data) {
|
||||
pr_warn("failed to memremap indirect setup_data\n");
|
||||
return false;
|
||||
}
|
||||
|
||||
indirect = (struct setup_indirect *)data->data;
|
||||
|
||||
if (indirect->type != SETUP_INDIRECT) {
|
||||
paddr = indirect->addr;
|
||||
len = indirect->len;
|
||||
}
|
||||
}
|
||||
|
||||
memunmap(data);
|
||||
@@ -659,22 +675,51 @@ static bool memremap_is_setup_data(resource_size_t phys_addr,
|
||||
static bool __init early_memremap_is_setup_data(resource_size_t phys_addr,
|
||||
unsigned long size)
|
||||
{
|
||||
struct setup_indirect *indirect;
|
||||
struct setup_data *data;
|
||||
u64 paddr, paddr_next;
|
||||
|
||||
paddr = boot_params.hdr.setup_data;
|
||||
while (paddr) {
|
||||
unsigned int len;
|
||||
unsigned int len, size;
|
||||
|
||||
if (phys_addr == paddr)
|
||||
return true;
|
||||
|
||||
data = early_memremap_decrypted(paddr, sizeof(*data));
|
||||
if (!data) {
|
||||
pr_warn("failed to early memremap setup_data entry\n");
|
||||
return false;
|
||||
}
|
||||
|
||||
size = sizeof(*data);
|
||||
|
||||
paddr_next = data->next;
|
||||
len = data->len;
|
||||
|
||||
if ((phys_addr > paddr) && (phys_addr < (paddr + len))) {
|
||||
early_memunmap(data, sizeof(*data));
|
||||
return true;
|
||||
}
|
||||
|
||||
if (data->type == SETUP_INDIRECT) {
|
||||
size += len;
|
||||
early_memunmap(data, sizeof(*data));
|
||||
data = early_memremap_decrypted(paddr, size);
|
||||
if (!data) {
|
||||
pr_warn("failed to early memremap indirect setup_data\n");
|
||||
return false;
|
||||
}
|
||||
|
||||
indirect = (struct setup_indirect *)data->data;
|
||||
|
||||
if (indirect->type != SETUP_INDIRECT) {
|
||||
paddr = indirect->addr;
|
||||
len = indirect->len;
|
||||
}
|
||||
}
|
||||
|
||||
early_memunmap(data, size);
|
||||
|
||||
if ((phys_addr > paddr) && (phys_addr < (paddr + len)))
|
||||
return true;
|
||||
|
||||
@@ -19,6 +19,7 @@
|
||||
#include <linux/seq_file.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/kmod.h>
|
||||
#include <linux/major.h>
|
||||
#include <linux/mutex.h>
|
||||
#include <linux/idr.h>
|
||||
#include <linux/log2.h>
|
||||
|
||||
@@ -1,5 +1,6 @@
|
||||
// SPDX-License-Identifier: GPL-2.0-only
|
||||
#include <linux/genhd.h>
|
||||
#include <linux/slab.h>
|
||||
|
||||
struct bd_holder_disk {
|
||||
struct list_head list;
|
||||
|
||||
@@ -5,6 +5,7 @@
|
||||
* Copyright (C) 2020 Christoph Hellwig
|
||||
*/
|
||||
#include <linux/fs.h>
|
||||
#include <linux/major.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/ctype.h>
|
||||
#include <linux/genhd.h>
|
||||
|
||||
@@ -61,6 +61,7 @@
|
||||
#include <linux/hdreg.h>
|
||||
#include <linux/delay.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/major.h>
|
||||
#include <linux/mutex.h>
|
||||
#include <linux/fs.h>
|
||||
#include <linux/blk-mq.h>
|
||||
|
||||
@@ -68,6 +68,7 @@
|
||||
#include <linux/delay.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/blk-mq.h>
|
||||
#include <linux/major.h>
|
||||
#include <linux/mutex.h>
|
||||
#include <linux/completion.h>
|
||||
#include <linux/wait.h>
|
||||
|
||||
@@ -184,6 +184,7 @@ static int print_unex = 1;
|
||||
#include <linux/ioport.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/major.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/mod_devicetable.h>
|
||||
#include <linux/mutex.h>
|
||||
|
||||
@@ -16,6 +16,7 @@
|
||||
#include <linux/fd.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/blk-mq.h>
|
||||
#include <linux/major.h>
|
||||
#include <linux/mutex.h>
|
||||
#include <linux/hdreg.h>
|
||||
#include <linux/kernel.h>
|
||||
|
||||
@@ -865,9 +865,15 @@ static int virtblk_probe(struct virtio_device *vdev)
|
||||
|
||||
virtio_cread(vdev, struct virtio_blk_config, max_discard_seg,
|
||||
&v);
|
||||
|
||||
/*
|
||||
* max_discard_seg == 0 is out of spec but we always
|
||||
* handled it.
|
||||
*/
|
||||
if (!v)
|
||||
v = sg_elems - 2;
|
||||
blk_queue_max_discard_segments(q,
|
||||
min_not_zero(v,
|
||||
MAX_DISCARD_SEGMENTS));
|
||||
min(v, MAX_DISCARD_SEGMENTS));
|
||||
|
||||
blk_queue_flag_set(QUEUE_FLAG_DISCARD, q);
|
||||
}
|
||||
|
||||
@@ -42,6 +42,7 @@
|
||||
#include <linux/cdrom.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/major.h>
|
||||
#include <linux/mutex.h>
|
||||
#include <linux/scatterlist.h>
|
||||
#include <linux/bitmap.h>
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
// SPDX-License-Identifier: GPL-2.0-only
|
||||
/*
|
||||
* Copyright (c) 2019, The Linux Foundation. All rights reserved.
|
||||
* Copyright (c) 2019, 2022, The Linux Foundation. All rights reserved.
|
||||
*/
|
||||
|
||||
#include <linux/clk-provider.h>
|
||||
@@ -625,6 +625,9 @@ static struct clk_branch disp_cc_mdss_vsync_clk = {
|
||||
|
||||
static struct gdsc mdss_gdsc = {
|
||||
.gdscr = 0x3000,
|
||||
.en_rest_wait_val = 0x2,
|
||||
.en_few_wait_val = 0x2,
|
||||
.clk_dis_wait_val = 0xf,
|
||||
.pd = {
|
||||
.name = "mdss_gdsc",
|
||||
},
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
// SPDX-License-Identifier: GPL-2.0-only
|
||||
/*
|
||||
* Copyright (c) 2021, The Linux Foundation. All rights reserved.
|
||||
* Copyright (c) 2021-2022, The Linux Foundation. All rights reserved.
|
||||
*/
|
||||
|
||||
#include <linux/clk-provider.h>
|
||||
@@ -787,6 +787,9 @@ static struct clk_branch disp_cc_sleep_clk = {
|
||||
|
||||
static struct gdsc disp_cc_mdss_core_gdsc = {
|
||||
.gdscr = 0x1004,
|
||||
.en_rest_wait_val = 0x2,
|
||||
.en_few_wait_val = 0x2,
|
||||
.clk_dis_wait_val = 0xf,
|
||||
.pd = {
|
||||
.name = "disp_cc_mdss_core_gdsc",
|
||||
},
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
// SPDX-License-Identifier: GPL-2.0
|
||||
/*
|
||||
* Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
|
||||
* Copyright (c) 2018-2020, 2022, The Linux Foundation. All rights reserved.
|
||||
*/
|
||||
|
||||
#include <linux/clk-provider.h>
|
||||
@@ -1125,6 +1125,9 @@ static struct clk_branch disp_cc_mdss_vsync_clk = {
|
||||
|
||||
static struct gdsc mdss_gdsc = {
|
||||
.gdscr = 0x3000,
|
||||
.en_rest_wait_val = 0x2,
|
||||
.en_few_wait_val = 0x2,
|
||||
.clk_dis_wait_val = 0xf,
|
||||
.pd = {
|
||||
.name = "mdss_gdsc",
|
||||
},
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
// SPDX-License-Identifier: GPL-2.0-only
|
||||
/*
|
||||
* Copyright (c) 2015, 2017-2018, The Linux Foundation. All rights reserved.
|
||||
* Copyright (c) 2015, 2017-2018, 2022, The Linux Foundation. All rights reserved.
|
||||
*/
|
||||
|
||||
#include <linux/bitops.h>
|
||||
@@ -34,9 +34,14 @@
|
||||
#define CFG_GDSCR_OFFSET 0x4
|
||||
|
||||
/* Wait 2^n CXO cycles between all states. Here, n=2 (4 cycles). */
|
||||
#define EN_REST_WAIT_VAL (0x2 << 20)
|
||||
#define EN_FEW_WAIT_VAL (0x8 << 16)
|
||||
#define CLK_DIS_WAIT_VAL (0x2 << 12)
|
||||
#define EN_REST_WAIT_VAL 0x2
|
||||
#define EN_FEW_WAIT_VAL 0x8
|
||||
#define CLK_DIS_WAIT_VAL 0x2
|
||||
|
||||
/* Transition delay shifts */
|
||||
#define EN_REST_WAIT_SHIFT 20
|
||||
#define EN_FEW_WAIT_SHIFT 16
|
||||
#define CLK_DIS_WAIT_SHIFT 12
|
||||
|
||||
#define RETAIN_MEM BIT(14)
|
||||
#define RETAIN_PERIPH BIT(13)
|
||||
@@ -341,7 +346,18 @@ static int gdsc_init(struct gdsc *sc)
|
||||
*/
|
||||
mask = HW_CONTROL_MASK | SW_OVERRIDE_MASK |
|
||||
EN_REST_WAIT_MASK | EN_FEW_WAIT_MASK | CLK_DIS_WAIT_MASK;
|
||||
val = EN_REST_WAIT_VAL | EN_FEW_WAIT_VAL | CLK_DIS_WAIT_VAL;
|
||||
|
||||
if (!sc->en_rest_wait_val)
|
||||
sc->en_rest_wait_val = EN_REST_WAIT_VAL;
|
||||
if (!sc->en_few_wait_val)
|
||||
sc->en_few_wait_val = EN_FEW_WAIT_VAL;
|
||||
if (!sc->clk_dis_wait_val)
|
||||
sc->clk_dis_wait_val = CLK_DIS_WAIT_VAL;
|
||||
|
||||
val = sc->en_rest_wait_val << EN_REST_WAIT_SHIFT |
|
||||
sc->en_few_wait_val << EN_FEW_WAIT_SHIFT |
|
||||
sc->clk_dis_wait_val << CLK_DIS_WAIT_SHIFT;
|
||||
|
||||
ret = regmap_update_bits(sc->regmap, sc->gdscr, mask, val);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
/* SPDX-License-Identifier: GPL-2.0-only */
|
||||
/*
|
||||
* Copyright (c) 2015, 2017-2018, The Linux Foundation. All rights reserved.
|
||||
* Copyright (c) 2015, 2017-2018, 2022, The Linux Foundation. All rights reserved.
|
||||
*/
|
||||
|
||||
#ifndef __QCOM_GDSC_H__
|
||||
@@ -22,6 +22,9 @@ struct reset_controller_dev;
|
||||
* @cxcs: offsets of branch registers to toggle mem/periph bits in
|
||||
* @cxc_count: number of @cxcs
|
||||
* @pwrsts: Possible powerdomain power states
|
||||
* @en_rest_wait_val: transition delay value for receiving enr ack signal
|
||||
* @en_few_wait_val: transition delay value for receiving enf ack signal
|
||||
* @clk_dis_wait_val: transition delay value for halting clock
|
||||
* @resets: ids of resets associated with this gdsc
|
||||
* @reset_count: number of @resets
|
||||
* @rcdev: reset controller
|
||||
@@ -35,6 +38,9 @@ struct gdsc {
|
||||
unsigned int clamp_io_ctrl;
|
||||
unsigned int *cxcs;
|
||||
unsigned int cxc_count;
|
||||
unsigned int en_rest_wait_val;
|
||||
unsigned int en_few_wait_val;
|
||||
unsigned int clk_dis_wait_val;
|
||||
const u8 pwrsts;
|
||||
/* Powerdomain allowable state bitfields */
|
||||
#define PWRSTS_OFF BIT(0)
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
/*
|
||||
* Digital I/O driver for Technologic Systems I2C FPGA Core
|
||||
*
|
||||
* Copyright (C) 2015 Technologic Systems
|
||||
* Copyright (C) 2015, 2018 Technologic Systems
|
||||
* Copyright (C) 2016 Savoir-Faire Linux
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or
|
||||
@@ -55,19 +55,33 @@ static int ts4900_gpio_direction_input(struct gpio_chip *chip,
|
||||
{
|
||||
struct ts4900_gpio_priv *priv = gpiochip_get_data(chip);
|
||||
|
||||
/*
|
||||
* This will clear the output enable bit, the other bits are
|
||||
* dontcare when this is cleared
|
||||
/* Only clear the OE bit here, requires a RMW. Prevents potential issue
|
||||
* with OE and data getting to the physical pin at different times.
|
||||
*/
|
||||
return regmap_write(priv->regmap, offset, 0);
|
||||
return regmap_update_bits(priv->regmap, offset, TS4900_GPIO_OE, 0);
|
||||
}
|
||||
|
||||
static int ts4900_gpio_direction_output(struct gpio_chip *chip,
|
||||
unsigned int offset, int value)
|
||||
{
|
||||
struct ts4900_gpio_priv *priv = gpiochip_get_data(chip);
|
||||
unsigned int reg;
|
||||
int ret;
|
||||
|
||||
/* If changing from an input to an output, we need to first set the
|
||||
* proper data bit to what is requested and then set OE bit. This
|
||||
* prevents a glitch that can occur on the IO line
|
||||
*/
|
||||
regmap_read(priv->regmap, offset, &reg);
|
||||
if (!(reg & TS4900_GPIO_OE)) {
|
||||
if (value)
|
||||
reg = TS4900_GPIO_OUT;
|
||||
else
|
||||
reg &= ~TS4900_GPIO_OUT;
|
||||
|
||||
regmap_write(priv->regmap, offset, reg);
|
||||
}
|
||||
|
||||
if (value)
|
||||
ret = regmap_write(priv->regmap, offset, TS4900_GPIO_OE |
|
||||
TS4900_GPIO_OUT);
|
||||
|
||||
@@ -311,7 +311,8 @@ static struct gpio_desc *acpi_request_own_gpiod(struct gpio_chip *chip,
|
||||
if (IS_ERR(desc))
|
||||
return desc;
|
||||
|
||||
ret = gpio_set_debounce_timeout(desc, agpio->debounce_timeout);
|
||||
/* ACPI uses hundredths of milliseconds units */
|
||||
ret = gpio_set_debounce_timeout(desc, agpio->debounce_timeout * 10);
|
||||
if (ret)
|
||||
dev_warn(chip->parent,
|
||||
"Failed to set debounce-timeout for pin 0x%04X, err %d\n",
|
||||
@@ -1052,7 +1053,8 @@ int acpi_dev_gpio_irq_get_by(struct acpi_device *adev, const char *name, int ind
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
ret = gpio_set_debounce_timeout(desc, info.debounce);
|
||||
/* ACPI uses hundredths of milliseconds units */
|
||||
ret = gpio_set_debounce_timeout(desc, info.debounce * 10);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
|
||||
@@ -2186,6 +2186,16 @@ static int gpio_set_bias(struct gpio_desc *desc)
|
||||
return gpio_set_config_with_argument_optional(desc, bias, arg);
|
||||
}
|
||||
|
||||
/**
|
||||
* gpio_set_debounce_timeout() - Set debounce timeout
|
||||
* @desc: GPIO descriptor to set the debounce timeout
|
||||
* @debounce: Debounce timeout in microseconds
|
||||
*
|
||||
* The function calls the certain GPIO driver to set debounce timeout
|
||||
* in the hardware.
|
||||
*
|
||||
* Returns 0 on success, or negative error code otherwise.
|
||||
*/
|
||||
int gpio_set_debounce_timeout(struct gpio_desc *desc, unsigned int debounce)
|
||||
{
|
||||
return gpio_set_config_with_argument_optional(desc,
|
||||
@@ -3106,6 +3116,16 @@ int gpiod_to_irq(const struct gpio_desc *desc)
|
||||
|
||||
return retirq;
|
||||
}
|
||||
#ifdef CONFIG_GPIOLIB_IRQCHIP
|
||||
if (gc->irq.chip) {
|
||||
/*
|
||||
* Avoid race condition with other code, which tries to lookup
|
||||
* an IRQ before the irqchip has been properly registered,
|
||||
* i.e. while gpiochip is still being brought up.
|
||||
*/
|
||||
return -EPROBE_DEFER;
|
||||
}
|
||||
#endif
|
||||
return -ENXIO;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(gpiod_to_irq);
|
||||
|
||||
@@ -1145,7 +1145,7 @@ int amdgpu_display_framebuffer_init(struct drm_device *dev,
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (!dev->mode_config.allow_fb_modifiers) {
|
||||
if (!dev->mode_config.allow_fb_modifiers && !adev->enable_virtual_display) {
|
||||
drm_WARN_ONCE(dev, adev->family >= AMDGPU_FAMILY_AI,
|
||||
"GFX9+ requires FB check based on format modifier\n");
|
||||
ret = check_tiling_flags_gfx6(rfb);
|
||||
|
||||
@@ -1658,7 +1658,7 @@ static void fixup_plane_bitmasks(struct intel_crtc_state *crtc_state)
|
||||
}
|
||||
}
|
||||
|
||||
static void intel_plane_disable_noatomic(struct intel_crtc *crtc,
|
||||
void intel_plane_disable_noatomic(struct intel_crtc *crtc,
|
||||
struct intel_plane *plane)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
|
||||
@@ -13217,6 +13217,7 @@ intel_modeset_setup_hw_state(struct drm_device *dev,
|
||||
vlv_wm_sanitize(dev_priv);
|
||||
} else if (DISPLAY_VER(dev_priv) >= 9) {
|
||||
skl_wm_get_hw_state(dev_priv);
|
||||
skl_wm_sanitize(dev_priv);
|
||||
} else if (HAS_PCH_SPLIT(dev_priv)) {
|
||||
ilk_wm_get_hw_state(dev_priv);
|
||||
}
|
||||
|
||||
@@ -629,6 +629,8 @@ void intel_plane_unpin_fb(struct intel_plane_state *old_plane_state);
|
||||
struct intel_encoder *
|
||||
intel_get_crtc_new_encoder(const struct intel_atomic_state *state,
|
||||
const struct intel_crtc_state *crtc_state);
|
||||
void intel_plane_disable_noatomic(struct intel_crtc *crtc,
|
||||
struct intel_plane *plane);
|
||||
|
||||
unsigned int intel_surf_alignment(const struct drm_framebuffer *fb,
|
||||
int color_plane);
|
||||
|
||||
@@ -6681,6 +6681,74 @@ void skl_wm_get_hw_state(struct drm_i915_private *dev_priv)
|
||||
dbuf_state->enabled_slices = dev_priv->dbuf.enabled_slices;
|
||||
}
|
||||
|
||||
static bool skl_dbuf_is_misconfigured(struct drm_i915_private *i915)
|
||||
{
|
||||
const struct intel_dbuf_state *dbuf_state =
|
||||
to_intel_dbuf_state(i915->dbuf.obj.state);
|
||||
struct skl_ddb_entry entries[I915_MAX_PIPES] = {};
|
||||
struct intel_crtc *crtc;
|
||||
|
||||
for_each_intel_crtc(&i915->drm, crtc) {
|
||||
const struct intel_crtc_state *crtc_state =
|
||||
to_intel_crtc_state(crtc->base.state);
|
||||
|
||||
entries[crtc->pipe] = crtc_state->wm.skl.ddb;
|
||||
}
|
||||
|
||||
for_each_intel_crtc(&i915->drm, crtc) {
|
||||
const struct intel_crtc_state *crtc_state =
|
||||
to_intel_crtc_state(crtc->base.state);
|
||||
u8 slices;
|
||||
|
||||
slices = skl_compute_dbuf_slices(crtc, dbuf_state->active_pipes,
|
||||
dbuf_state->joined_mbus);
|
||||
if (dbuf_state->slices[crtc->pipe] & ~slices)
|
||||
return true;
|
||||
|
||||
if (skl_ddb_allocation_overlaps(&crtc_state->wm.skl.ddb, entries,
|
||||
I915_MAX_PIPES, crtc->pipe))
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
void skl_wm_sanitize(struct drm_i915_private *i915)
|
||||
{
|
||||
struct intel_crtc *crtc;
|
||||
|
||||
/*
|
||||
* On TGL/RKL (at least) the BIOS likes to assign the planes
|
||||
* to the wrong DBUF slices. This will cause an infinite loop
|
||||
* in skl_commit_modeset_enables() as it can't find a way to
|
||||
* transition between the old bogus DBUF layout to the new
|
||||
* proper DBUF layout without DBUF allocation overlaps between
|
||||
* the planes (which cannot be allowed or else the hardware
|
||||
* may hang). If we detect a bogus DBUF layout just turn off
|
||||
* all the planes so that skl_commit_modeset_enables() can
|
||||
* simply ignore them.
|
||||
*/
|
||||
if (!skl_dbuf_is_misconfigured(i915))
|
||||
return;
|
||||
|
||||
drm_dbg_kms(&i915->drm, "BIOS has misprogrammed the DBUF, disabling all planes\n");
|
||||
|
||||
for_each_intel_crtc(&i915->drm, crtc) {
|
||||
struct intel_plane *plane = to_intel_plane(crtc->base.primary);
|
||||
const struct intel_plane_state *plane_state =
|
||||
to_intel_plane_state(plane->base.state);
|
||||
struct intel_crtc_state *crtc_state =
|
||||
to_intel_crtc_state(crtc->base.state);
|
||||
|
||||
if (plane_state->uapi.visible)
|
||||
intel_plane_disable_noatomic(crtc, plane);
|
||||
|
||||
drm_WARN_ON(&i915->drm, crtc_state->active_planes != 0);
|
||||
|
||||
memset(&crtc_state->wm.skl.ddb, 0, sizeof(crtc_state->wm.skl.ddb));
|
||||
}
|
||||
}
|
||||
|
||||
static void ilk_pipe_wm_get_hw_state(struct intel_crtc *crtc)
|
||||
{
|
||||
struct drm_device *dev = crtc->base.dev;
|
||||
|
||||
@@ -48,6 +48,7 @@ void skl_pipe_wm_get_hw_state(struct intel_crtc *crtc,
|
||||
struct skl_pipe_wm *out);
|
||||
void g4x_wm_sanitize(struct drm_i915_private *dev_priv);
|
||||
void vlv_wm_sanitize(struct drm_i915_private *dev_priv);
|
||||
void skl_wm_sanitize(struct drm_i915_private *dev_priv);
|
||||
bool intel_can_enable_sagv(struct drm_i915_private *dev_priv,
|
||||
const struct intel_bw_state *bw_state);
|
||||
void intel_sagv_pre_plane_update(struct intel_atomic_state *state);
|
||||
|
||||
@@ -83,6 +83,7 @@ config DRM_PANEL_SIMPLE
|
||||
depends on PM
|
||||
select VIDEOMODE_HELPERS
|
||||
select DRM_DP_AUX_BUS
|
||||
select DRM_DP_HELPER
|
||||
help
|
||||
DRM panel driver for dumb panels that need at most a regulator and
|
||||
a GPIO to be powered up. Optionally a backlight can be attached so
|
||||
|
||||
@@ -111,10 +111,10 @@
|
||||
/* format 13 is semi-planar YUV411 VUVU */
|
||||
#define SUN8I_MIXER_FBFMT_YUV411 14
|
||||
/* format 15 doesn't exist */
|
||||
/* format 16 is P010 YVU */
|
||||
#define SUN8I_MIXER_FBFMT_P010_YUV 17
|
||||
/* format 18 is P210 YVU */
|
||||
#define SUN8I_MIXER_FBFMT_P210_YUV 19
|
||||
#define SUN8I_MIXER_FBFMT_P010_YUV 16
|
||||
/* format 17 is P010 YVU */
|
||||
#define SUN8I_MIXER_FBFMT_P210_YUV 18
|
||||
/* format 19 is P210 YVU */
|
||||
/* format 20 is packed YVU444 10-bit */
|
||||
/* format 21 is packed YUV444 10-bit */
|
||||
|
||||
|
||||
@@ -1522,6 +1522,7 @@ static int vc4_hdmi_audio_init(struct vc4_hdmi *vc4_hdmi)
|
||||
dev_err(dev, "Couldn't register the HDMI codec: %ld\n", PTR_ERR(codec_pdev));
|
||||
return PTR_ERR(codec_pdev);
|
||||
}
|
||||
vc4_hdmi->audio.codec_pdev = codec_pdev;
|
||||
|
||||
dai_link->cpus = &vc4_hdmi->audio.cpu;
|
||||
dai_link->codecs = &vc4_hdmi->audio.codec;
|
||||
@@ -1561,6 +1562,12 @@ static int vc4_hdmi_audio_init(struct vc4_hdmi *vc4_hdmi)
|
||||
|
||||
}
|
||||
|
||||
static void vc4_hdmi_audio_exit(struct vc4_hdmi *vc4_hdmi)
|
||||
{
|
||||
platform_device_unregister(vc4_hdmi->audio.codec_pdev);
|
||||
vc4_hdmi->audio.codec_pdev = NULL;
|
||||
}
|
||||
|
||||
static irqreturn_t vc4_hdmi_hpd_irq_thread(int irq, void *priv)
|
||||
{
|
||||
struct vc4_hdmi *vc4_hdmi = priv;
|
||||
@@ -2298,6 +2305,7 @@ static void vc4_hdmi_unbind(struct device *dev, struct device *master,
|
||||
kfree(vc4_hdmi->hdmi_regset.regs);
|
||||
kfree(vc4_hdmi->hd_regset.regs);
|
||||
|
||||
vc4_hdmi_audio_exit(vc4_hdmi);
|
||||
vc4_hdmi_cec_exit(vc4_hdmi);
|
||||
vc4_hdmi_hotplug_exit(vc4_hdmi);
|
||||
vc4_hdmi_connector_destroy(&vc4_hdmi->connector);
|
||||
|
||||
@@ -113,6 +113,7 @@ struct vc4_hdmi_audio {
|
||||
struct snd_soc_dai_link_component platform;
|
||||
struct snd_dmaengine_dai_dma_data dma_data;
|
||||
struct hdmi_audio_infoframe infoframe;
|
||||
struct platform_device *codec_pdev;
|
||||
bool streaming;
|
||||
};
|
||||
|
||||
|
||||
@@ -228,7 +228,6 @@ static int elo_probe(struct hid_device *hdev, const struct hid_device_id *id)
|
||||
{
|
||||
struct elo_priv *priv;
|
||||
int ret;
|
||||
struct usb_device *udev;
|
||||
|
||||
if (!hid_is_usb(hdev))
|
||||
return -EINVAL;
|
||||
@@ -238,8 +237,7 @@ static int elo_probe(struct hid_device *hdev, const struct hid_device_id *id)
|
||||
return -ENOMEM;
|
||||
|
||||
INIT_DELAYED_WORK(&priv->work, elo_work);
|
||||
udev = interface_to_usbdev(to_usb_interface(hdev->dev.parent));
|
||||
priv->usbdev = usb_get_dev(udev);
|
||||
priv->usbdev = interface_to_usbdev(to_usb_interface(hdev->dev.parent));
|
||||
|
||||
hid_set_drvdata(hdev, priv);
|
||||
|
||||
@@ -262,7 +260,6 @@ static int elo_probe(struct hid_device *hdev, const struct hid_device_id *id)
|
||||
|
||||
return 0;
|
||||
err_free:
|
||||
usb_put_dev(udev);
|
||||
kfree(priv);
|
||||
return ret;
|
||||
}
|
||||
@@ -271,8 +268,6 @@ static void elo_remove(struct hid_device *hdev)
|
||||
{
|
||||
struct elo_priv *priv = hid_get_drvdata(hdev);
|
||||
|
||||
usb_put_dev(priv->usbdev);
|
||||
|
||||
hid_hw_stop(hdev);
|
||||
cancel_delayed_work_sync(&priv->work);
|
||||
kfree(priv);
|
||||
|
||||
@@ -158,6 +158,12 @@ static void thrustmaster_interrupts(struct hid_device *hdev)
|
||||
return;
|
||||
}
|
||||
|
||||
if (usbif->cur_altsetting->desc.bNumEndpoints < 2) {
|
||||
kfree(send_buf);
|
||||
hid_err(hdev, "Wrong number of endpoints?\n");
|
||||
return;
|
||||
}
|
||||
|
||||
ep = &usbif->cur_altsetting->endpoint[1];
|
||||
b_ep = ep->desc.bEndpointAddress;
|
||||
|
||||
|
||||
@@ -143,7 +143,7 @@ out:
static int vivaldi_input_configured(struct hid_device *hdev,
struct hid_input *hidinput)
{
return sysfs_create_group(&hdev->dev.kobj, &input_attribute_group);
return devm_device_add_group(&hdev->dev, &input_attribute_group);
}

static const struct hid_device_id vivaldi_table[] = {

@@ -911,6 +911,11 @@ static int pmbus_get_boolean(struct i2c_client *client, struct pmbus_boolean *b,
pmbus_update_sensor_data(client, s2);

regval = status & mask;
if (regval) {
ret = pmbus_write_byte_data(client, page, reg, regval);
if (ret)
goto unlock;
}
if (s1 && s2) {
s64 v1, v2;

@@ -2005,7 +2005,11 @@ setup_hw(struct hfc_pci *hc)
}
/* Allocate memory for FIFOS */
/* the memory needs to be on a 32k boundary within the first 4G */
dma_set_mask(&hc->pdev->dev, 0xFFFF8000);
if (dma_set_mask(&hc->pdev->dev, 0xFFFF8000)) {
printk(KERN_WARNING
"HFC-PCI: No usable DMA configuration!\n");
return -EIO;
}
buffer = dma_alloc_coherent(&hc->pdev->dev, 0x8000, &hc->hw.dmahandle,
GFP_KERNEL);
/* We silently assume the address is okay if nonzero */

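The hunk above illustrates a general rule: dma_set_mask() can fail and its return value must be checked before any coherent allocation is attempted. A minimal, generic sketch of that pattern (not part of the patch; the function name, mask and size are placeholders):

    #include <linux/dma-mapping.h>
    #include <linux/pci.h>

    static int example_alloc_fifo(struct pci_dev *pdev, void **buf, dma_addr_t *handle)
    {
    	/* Constrain addressing first; dma_set_mask() returns nonzero on failure. */
    	if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)))
    		return -EIO;

    	/* Allocate only after the mask has been accepted by the platform. */
    	*buf = dma_alloc_coherent(&pdev->dev, 0x8000, handle, GFP_KERNEL);
    	if (!*buf)
    		return -ENOMEM;

    	return 0;
    }
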
@@ -192,7 +192,7 @@ void dsp_pipeline_destroy(struct dsp_pipeline *pipeline)
int dsp_pipeline_build(struct dsp_pipeline *pipeline, const char *cfg)
{
int found = 0;
char *dup, *tok, *name, *args;
char *dup, *next, *tok, *name, *args;
struct dsp_element_entry *entry, *n;
struct dsp_pipeline_entry *pipeline_entry;
struct mISDN_dsp_element *elem;
@@ -203,10 +203,10 @@ int dsp_pipeline_build(struct dsp_pipeline *pipeline, const char *cfg)
if (!list_empty(&pipeline->list))
_dsp_pipeline_destroy(pipeline);

dup = kstrdup(cfg, GFP_ATOMIC);
dup = next = kstrdup(cfg, GFP_ATOMIC);
if (!dup)
return 0;
while ((tok = strsep(&dup, "|"))) {
while ((tok = strsep(&next, "|"))) {
if (!strlen(tok))
continue;
name = strsep(&tok, "(");

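The point of the change above is that strsep() advances the pointer it is handed, so the address returned by kstrdup() must be kept aside or the later kfree() in the function (not visible in this hunk) no longer frees the allocation. A generic sketch of the same kstrdup()/strsep() pattern, with placeholder names:

    #include <linux/printk.h>
    #include <linux/slab.h>
    #include <linux/string.h>

    static void example_parse(const char *cfg)
    {
    	char *dup, *next, *tok;

    	dup = next = kstrdup(cfg, GFP_KERNEL);
    	if (!dup)
    		return;

    	/* strsep() advances 'next'; 'dup' still points at the allocation. */
    	while ((tok = strsep(&next, "|")))
    		pr_debug("token: %s\n", tok);

    	kfree(dup);	/* safe: the original pointer was preserved */
    }
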
@@ -51,6 +51,7 @@
#include <linux/hdreg.h>
#include <linux/proc_fs.h>
#include <linux/random.h>
#include <linux/major.h>
#include <linux/module.h>
#include <linux/reboot.h>
#include <linux/file.h>

@@ -173,6 +173,8 @@ struct meson_host {
int irq;

bool vqmmc_enabled;
bool needs_pre_post_req;

};

#define CMD_CFG_LENGTH_MASK GENMASK(8, 0)
@@ -663,6 +665,8 @@ static void meson_mmc_request_done(struct mmc_host *mmc,
struct meson_host *host = mmc_priv(mmc);

host->cmd = NULL;
if (host->needs_pre_post_req)
meson_mmc_post_req(mmc, mrq, 0);
mmc_request_done(host->mmc, mrq);
}

@@ -880,7 +884,7 @@ static int meson_mmc_validate_dram_access(struct mmc_host *mmc, struct mmc_data
static void meson_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
{
struct meson_host *host = mmc_priv(mmc);
bool needs_pre_post_req = mrq->data &&
host->needs_pre_post_req = mrq->data &&
!(mrq->data->host_cookie & SD_EMMC_PRE_REQ_DONE);

/*
@@ -896,22 +900,19 @@ static void meson_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
}
}

if (needs_pre_post_req) {
if (host->needs_pre_post_req) {
meson_mmc_get_transfer_mode(mmc, mrq);
if (!meson_mmc_desc_chain_mode(mrq->data))
needs_pre_post_req = false;
host->needs_pre_post_req = false;
}

if (needs_pre_post_req)
if (host->needs_pre_post_req)
meson_mmc_pre_req(mmc, mrq);

/* Stop execution */
writel(0, host->regs + SD_EMMC_START);

meson_mmc_start_cmd(mmc, mrq->sbc ?: mrq->cmd);

if (needs_pre_post_req)
meson_mmc_post_req(mmc, mrq, 0);
}

static void meson_mmc_read_resp(struct mmc_host *mmc, struct mmc_command *cmd)

@@ -2928,7 +2928,7 @@ mt753x_phylink_validate(struct dsa_switch *ds, int port,

phylink_set_port_modes(mask);

if (state->interface != PHY_INTERFACE_MODE_TRGMII ||
if (state->interface != PHY_INTERFACE_MODE_TRGMII &&
!phy_interface_mode_is_8023z(state->interface)) {
phylink_set(mask, 10baseT_Half);
phylink_set(mask, 10baseT_Full);

@@ -2291,13 +2291,6 @@ static int mv88e6xxx_port_vlan_del(struct dsa_switch *ds, int port,
if (!mv88e6xxx_max_vid(chip))
return -EOPNOTSUPP;

/* The ATU removal procedure needs the FID to be mapped in the VTU,
* but FDB deletion runs concurrently with VLAN deletion. Flush the DSA
* switchdev workqueue to ensure that all FDB entries are deleted
* before we remove the VLAN.
*/
dsa_flush_workqueue();

mv88e6xxx_reg_lock(chip);

err = mv88e6xxx_port_get_pvid(chip, port, &pvid);

@@ -40,6 +40,13 @@
void bcmgenet_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
struct device *kdev = &priv->pdev->dev;

if (!device_can_wakeup(kdev)) {
wol->supported = 0;
wol->wolopts = 0;
return;
}

wol->supported = WAKE_MAGIC | WAKE_MAGICSECURE | WAKE_FILTER;
wol->wolopts = priv->wolopts;

@@ -1606,7 +1606,14 @@ static int macb_poll(struct napi_struct *napi, int budget)
if (work_done < budget) {
napi_complete_done(napi, work_done);

/* Packets received while interrupts were disabled */
/* RSR bits only seem to propagate to raise interrupts when
* interrupts are enabled at the time, so if bits are already
* set due to packets received while interrupts were disabled,
* they will not cause another interrupt to be generated when
* interrupts are re-enabled.
* Check for this case here. This has been seen to happen
* around 30% of the time under heavy network load.
*/
status = macb_readl(bp, RSR);
if (status) {
if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
@@ -1614,6 +1621,22 @@ static int macb_poll(struct napi_struct *napi, int budget)
napi_reschedule(napi);
} else {
queue_writel(queue, IER, bp->rx_intr_mask);

/* In rare cases, packets could have been received in
* the window between the check above and re-enabling
* interrupts. Therefore, a double-check is required
* to avoid losing a wakeup. This can potentially race
* with the interrupt handler doing the same actions
* if an interrupt is raised just after enabling them,
* but this should be harmless.
*/
status = macb_readl(bp, RSR);
if (unlikely(status)) {
queue_writel(queue, IDR, bp->rx_intr_mask);
if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
queue_writel(queue, ISR, MACB_BIT(RCOMP));
napi_schedule(napi);
}
}
}

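The macb hunks above follow the common NAPI "double-check after re-enabling interrupts" idiom: after napi_complete_done() and unmasking the RX interrupt, the status register is read once more, and if anything arrived in that window the interrupt is masked again and the poll is rescheduled. A generic sketch of the shape of that idiom (placeholder driver names and helpers, not the macb code):

    #include <linux/netdevice.h>

    /* struct example_priv, example_rx(), example_rx_pending() and the
     * enable/disable helpers are assumed to exist in the driver. */
    static int example_poll(struct napi_struct *napi, int budget)
    {
    	struct example_priv *p = container_of(napi, struct example_priv, napi);
    	int work_done = example_rx(p, budget);

    	if (work_done < budget) {
    		napi_complete_done(napi, work_done);
    		example_enable_rx_irq(p);

    		/* Re-check: an event that landed before the unmask took
    		 * effect would otherwise never raise another interrupt. */
    		if (example_rx_pending(p)) {
    			example_disable_rx_irq(p);
    			napi_schedule(napi);
    		}
    	}
    	return work_done;
    }
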
@@ -1460,6 +1460,7 @@ static int gfar_get_ts_info(struct net_device *dev,
ptp_node = of_find_compatible_node(NULL, NULL, "fsl,etsec-ptp");
if (ptp_node) {
ptp_dev = of_find_device_by_node(ptp_node);
of_node_put(ptp_node);
if (ptp_dev)
ptp = platform_get_drvdata(ptp_dev);
}

@@ -742,10 +742,8 @@ static void i40e_dbg_dump_vf(struct i40e_pf *pf, int vf_id)
vsi = pf->vsi[vf->lan_vsi_idx];
dev_info(&pf->pdev->dev, "vf %2d: VSI id=%d, seid=%d, qps=%d\n",
vf_id, vf->lan_vsi_id, vsi->seid, vf->num_queue_pairs);
dev_info(&pf->pdev->dev, " num MDD=%lld, invalid msg=%lld, valid msg=%lld\n",
vf->num_mdd_events,
vf->num_invalid_msgs,
vf->num_valid_msgs);
dev_info(&pf->pdev->dev, " num MDD=%lld\n",
vf->num_mdd_events);
} else {
dev_info(&pf->pdev->dev, "invalid VF id %d\n", vf_id);
}

@@ -1917,19 +1917,17 @@ sriov_configure_out:
/***********************virtual channel routines******************/

/**
* i40e_vc_send_msg_to_vf_ex
* i40e_vc_send_msg_to_vf
* @vf: pointer to the VF info
* @v_opcode: virtual channel opcode
* @v_retval: virtual channel return value
* @msg: pointer to the msg buffer
* @msglen: msg length
* @is_quiet: true for not printing unsuccessful return values, false otherwise
*
* send msg to VF
**/
static int i40e_vc_send_msg_to_vf_ex(struct i40e_vf *vf, u32 v_opcode,
u32 v_retval, u8 *msg, u16 msglen,
bool is_quiet)
static int i40e_vc_send_msg_to_vf(struct i40e_vf *vf, u32 v_opcode,
u32 v_retval, u8 *msg, u16 msglen)
{
struct i40e_pf *pf;
struct i40e_hw *hw;
@@ -1944,25 +1942,6 @@ static int i40e_vc_send_msg_to_vf_ex(struct i40e_vf *vf, u32 v_opcode,
hw = &pf->hw;
abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;

/* single place to detect unsuccessful return values */
if (v_retval && !is_quiet) {
vf->num_invalid_msgs++;
dev_info(&pf->pdev->dev, "VF %d failed opcode %d, retval: %d\n",
vf->vf_id, v_opcode, v_retval);
if (vf->num_invalid_msgs >
I40E_DEFAULT_NUM_INVALID_MSGS_ALLOWED) {
dev_err(&pf->pdev->dev,
"Number of invalid messages exceeded for VF %d\n",
vf->vf_id);
dev_err(&pf->pdev->dev, "Use PF Control I/F to enable the VF\n");
set_bit(I40E_VF_STATE_DISABLED, &vf->vf_states);
}
} else {
vf->num_valid_msgs++;
/* reset the invalid counter, if a valid message is received. */
vf->num_invalid_msgs = 0;
}

aq_ret = i40e_aq_send_msg_to_vf(hw, abs_vf_id, v_opcode, v_retval,
msg, msglen, NULL);
if (aq_ret) {
@@ -1975,23 +1954,6 @@ static int i40e_vc_send_msg_to_vf_ex(struct i40e_vf *vf, u32 v_opcode,
return 0;
}

/**
* i40e_vc_send_msg_to_vf
* @vf: pointer to the VF info
* @v_opcode: virtual channel opcode
* @v_retval: virtual channel return value
* @msg: pointer to the msg buffer
* @msglen: msg length
*
* send msg to VF
**/
static int i40e_vc_send_msg_to_vf(struct i40e_vf *vf, u32 v_opcode,
u32 v_retval, u8 *msg, u16 msglen)
{
return i40e_vc_send_msg_to_vf_ex(vf, v_opcode, v_retval,
msg, msglen, false);
}

/**
* i40e_vc_send_resp_to_vf
* @vf: pointer to the VF info
@@ -2813,7 +2775,6 @@ error_param:
* i40e_check_vf_permission
* @vf: pointer to the VF info
* @al: MAC address list from virtchnl
* @is_quiet: set true for printing msg without opcode info, false otherwise
*
* Check that the given list of MAC addresses is allowed. Will return -EPERM
* if any address in the list is not valid. Checks the following conditions:
@@ -2828,15 +2789,13 @@ error_param:
* addresses might not be accurate.
**/
static inline int i40e_check_vf_permission(struct i40e_vf *vf,
struct virtchnl_ether_addr_list *al,
bool *is_quiet)
struct virtchnl_ether_addr_list *al)
{
struct i40e_pf *pf = vf->pf;
struct i40e_vsi *vsi = pf->vsi[vf->lan_vsi_idx];
int mac2add_cnt = 0;
int i;

*is_quiet = false;
for (i = 0; i < al->num_elements; i++) {
struct i40e_mac_filter *f;
u8 *addr = al->list[i].addr;
@@ -2860,7 +2819,6 @@ static inline int i40e_check_vf_permission(struct i40e_vf *vf,
!ether_addr_equal(addr, vf->default_lan_addr.addr)) {
dev_err(&pf->pdev->dev,
"VF attempting to override administratively set MAC address, bring down and up the VF interface to resume normal operation\n");
*is_quiet = true;
return -EPERM;
}

@@ -2897,7 +2855,6 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
(struct virtchnl_ether_addr_list *)msg;
struct i40e_pf *pf = vf->pf;
struct i40e_vsi *vsi = NULL;
bool is_quiet = false;
i40e_status ret = 0;
int i;

@@ -2914,7 +2871,7 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
*/
spin_lock_bh(&vsi->mac_filter_hash_lock);

ret = i40e_check_vf_permission(vf, al, &is_quiet);
ret = i40e_check_vf_permission(vf, al);
if (ret) {
spin_unlock_bh(&vsi->mac_filter_hash_lock);
goto error_param;
@@ -2952,8 +2909,8 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg)

error_param:
/* send the response to the VF */
return i40e_vc_send_msg_to_vf_ex(vf, VIRTCHNL_OP_ADD_ETH_ADDR,
ret, NULL, 0, is_quiet);
return i40e_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ADD_ETH_ADDR,
ret, NULL, 0);
}

/**

@@ -10,8 +10,6 @@

#define I40E_VIRTCHNL_SUPPORTED_QTYPES 2

#define I40E_DEFAULT_NUM_INVALID_MSGS_ALLOWED 10

#define I40E_VLAN_PRIORITY_SHIFT 13
#define I40E_VLAN_MASK 0xFFF
#define I40E_PRIORITY_MASK 0xE000
@@ -92,9 +90,6 @@ struct i40e_vf {
u8 num_queue_pairs; /* num of qps assigned to VF vsis */
u8 num_req_queues; /* num of requested qps */
u64 num_mdd_events; /* num of mdd events detected */
/* num of continuous malformed or invalid msgs detected */
u64 num_invalid_msgs;
u64 num_valid_msgs; /* num of valid msgs detected */

unsigned long vf_caps; /* vf's adv. capabilities */
unsigned long vf_states; /* vf's runtime states */

@@ -1460,6 +1460,22 @@ void iavf_request_reset(struct iavf_adapter *adapter)
adapter->current_op = VIRTCHNL_OP_UNKNOWN;
}

/**
* iavf_netdev_features_vlan_strip_set - update vlan strip status
* @netdev: ptr to netdev being adjusted
* @enable: enable or disable vlan strip
*
* Helper function to change vlan strip status in netdev->features.
*/
static void iavf_netdev_features_vlan_strip_set(struct net_device *netdev,
const bool enable)
{
if (enable)
netdev->features |= NETIF_F_HW_VLAN_CTAG_RX;
else
netdev->features &= ~NETIF_F_HW_VLAN_CTAG_RX;
}

/**
* iavf_virtchnl_completion
* @adapter: adapter structure
@@ -1683,8 +1699,18 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
}
break;
case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
dev_warn(&adapter->pdev->dev, "Changing VLAN Stripping is not allowed when Port VLAN is configured\n");
/* Vlan stripping could not be enabled by ethtool.
* Disable it in netdev->features.
*/
iavf_netdev_features_vlan_strip_set(netdev, false);
break;
case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
dev_warn(&adapter->pdev->dev, "Changing VLAN Stripping is not allowed when Port VLAN is configured\n");
/* Vlan stripping could not be disabled by ethtool.
* Enable it in netdev->features.
*/
iavf_netdev_features_vlan_strip_set(netdev, true);
break;
default:
dev_err(&adapter->pdev->dev, "PF returned error %d (%s) to our request %d\n",
@@ -1918,6 +1944,20 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
spin_unlock_bh(&adapter->adv_rss_lock);
}
break;
case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
/* PF enabled vlan strip on this VF.
* Update netdev->features if needed to be in sync with ethtool.
*/
if (!v_retval)
iavf_netdev_features_vlan_strip_set(netdev, true);
break;
case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
/* PF disabled vlan strip on this VF.
* Update netdev->features if needed to be in sync with ethtool.
*/
if (!v_retval)
iavf_netdev_features_vlan_strip_set(netdev, false);
break;
default:
if (adapter->current_op && (v_opcode != adapter->current_op))
dev_warn(&adapter->pdev->dev, "Expected response %d from PF, received %d\n",

@@ -398,6 +398,7 @@ enum ice_pf_flags {
ICE_FLAG_MDD_AUTO_RESET_VF,
ICE_FLAG_LINK_LENIENT_MODE_ENA,
ICE_FLAG_PLUG_AUX_DEV,
ICE_FLAG_MTU_CHANGED,
ICE_PF_FLAGS_NBITS /* must be last */
};

@@ -2275,7 +2275,7 @@ ice_set_link_ksettings(struct net_device *netdev,
goto done;
}

curr_link_speed = pi->phy.link_info.link_speed;
curr_link_speed = pi->phy.curr_user_speed_req;
adv_link_speed = ice_ksettings_find_adv_link_speed(ks);

/* If speed didn't get set, set it to what it currently is.

@@ -2146,6 +2146,17 @@ static void ice_service_task(struct work_struct *work)
if (test_and_clear_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags))
ice_plug_aux_dev(pf);

if (test_and_clear_bit(ICE_FLAG_MTU_CHANGED, pf->flags)) {
struct iidc_event *event;

event = kzalloc(sizeof(*event), GFP_KERNEL);
if (event) {
set_bit(IIDC_EVENT_AFTER_MTU_CHANGE, event->type);
ice_send_event_to_aux(pf, event);
kfree(event);
}
}

ice_clean_adminq_subtask(pf);
ice_check_media_subtask(pf);
ice_check_for_hang_subtask(pf);
@@ -2863,7 +2874,7 @@ static irqreturn_t ice_misc_intr(int __always_unused irq, void *data)
struct iidc_event *event;

ena_mask &= ~ICE_AUX_CRIT_ERR;
event = kzalloc(sizeof(*event), GFP_KERNEL);
event = kzalloc(sizeof(*event), GFP_ATOMIC);
if (event) {
set_bit(IIDC_EVENT_CRIT_ERR, event->type);
/* report the entire OICR value to AUX driver */
@@ -6532,7 +6543,6 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu)
struct ice_netdev_priv *np = netdev_priv(netdev);
struct ice_vsi *vsi = np->vsi;
struct ice_pf *pf = vsi->back;
struct iidc_event *event;
u8 count = 0;
int err = 0;

@@ -6567,14 +6577,6 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu)
return -EBUSY;
}

event = kzalloc(sizeof(*event), GFP_KERNEL);
if (!event)
return -ENOMEM;

set_bit(IIDC_EVENT_BEFORE_MTU_CHANGE, event->type);
ice_send_event_to_aux(pf, event);
clear_bit(IIDC_EVENT_BEFORE_MTU_CHANGE, event->type);

netdev->mtu = (unsigned int)new_mtu;

/* if VSI is up, bring it down and then back up */
@@ -6582,21 +6584,18 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu)
err = ice_down(vsi);
if (err) {
netdev_err(netdev, "change MTU if_down err %d\n", err);
goto event_after;
return err;
}

err = ice_up(vsi);
if (err) {
netdev_err(netdev, "change MTU if_up err %d\n", err);
goto event_after;
return err;
}
}

netdev_dbg(netdev, "changed MTU to %d\n", new_mtu);
event_after:
set_bit(IIDC_EVENT_AFTER_MTU_CHANGE, event->type);
ice_send_event_to_aux(pf, event);
kfree(event);
set_bit(ICE_FLAG_MTU_CHANGED, pf->flags);

return err;
}

@@ -2234,24 +2234,6 @@ ice_vc_send_msg_to_vf(struct ice_vf *vf, u32 v_opcode,

dev = ice_pf_to_dev(pf);

/* single place to detect unsuccessful return values */
if (v_retval) {
vf->num_inval_msgs++;
dev_info(dev, "VF %d failed opcode %d, retval: %d\n", vf->vf_id,
v_opcode, v_retval);
if (vf->num_inval_msgs > ICE_DFLT_NUM_INVAL_MSGS_ALLOWED) {
dev_err(dev, "Number of invalid messages exceeded for VF %d\n",
vf->vf_id);
dev_err(dev, "Use PF Control I/F to enable the VF\n");
set_bit(ICE_VF_STATE_DIS, vf->vf_states);
return -EIO;
}
} else {
vf->num_valid_msgs++;
/* reset the invalid counter, if a valid message is received. */
vf->num_inval_msgs = 0;
}

aq_ret = ice_aq_send_msg_to_vf(&pf->hw, vf->vf_id, v_opcode, v_retval,
msg, msglen, NULL);
if (aq_ret && pf->hw.mailboxq.sq_last_status != ICE_AQ_RC_ENOSYS) {

@@ -14,7 +14,6 @@
#define ICE_MAX_MACADDR_PER_VF 18

/* Malicious Driver Detection */
#define ICE_DFLT_NUM_INVAL_MSGS_ALLOWED 10
#define ICE_MDD_EVENTS_THRESHOLD 30

/* Static VF transaction/status register def */
@@ -107,8 +106,6 @@ struct ice_vf {
unsigned int tx_rate; /* Tx bandwidth limit in Mbps */
DECLARE_BITMAP(vf_states, ICE_VF_STATES_NBITS); /* VF runtime states */

u64 num_inval_msgs; /* number of continuous invalid msgs */
u64 num_valid_msgs; /* number of valid msgs detected */
unsigned long vf_caps; /* VF's adv. capabilities */
u8 num_req_qs; /* num of queue pairs requested by VF */
u16 num_mac;

@@ -492,6 +492,7 @@ static int prestera_switch_set_base_mac_addr(struct prestera_switch *sw)
dev_info(prestera_dev(sw), "using random base mac address\n");
}
of_node_put(base_mac_np);
of_node_put(np);

return prestera_hw_switch_mac_set(sw, sw->base_mac);
}

@@ -130,11 +130,8 @@ static int cmd_alloc_index(struct mlx5_cmd *cmd)

static void cmd_free_index(struct mlx5_cmd *cmd, int idx)
{
unsigned long flags;

spin_lock_irqsave(&cmd->alloc_lock, flags);
lockdep_assert_held(&cmd->alloc_lock);
set_bit(idx, &cmd->bitmask);
spin_unlock_irqrestore(&cmd->alloc_lock, flags);
}

static void cmd_ent_get(struct mlx5_cmd_work_ent *ent)
@@ -144,17 +141,21 @@ static void cmd_ent_get(struct mlx5_cmd_work_ent *ent)

static void cmd_ent_put(struct mlx5_cmd_work_ent *ent)
{
struct mlx5_cmd *cmd = ent->cmd;
unsigned long flags;

spin_lock_irqsave(&cmd->alloc_lock, flags);
if (!refcount_dec_and_test(&ent->refcnt))
return;
goto out;

if (ent->idx >= 0) {
struct mlx5_cmd *cmd = ent->cmd;

cmd_free_index(cmd, ent->idx);
up(ent->page_queue ? &cmd->pages_sem : &cmd->sem);
}

cmd_free_ent(ent);
out:
spin_unlock_irqrestore(&cmd->alloc_lock, flags);
}

static struct mlx5_cmd_layout *get_inst(struct mlx5_cmd *cmd, int idx)

@@ -126,6 +126,10 @@ static void mlx5_lag_fib_route_event(struct mlx5_lag *ldev,
return;
}

/* Handle multipath entry with lower priority value */
if (mp->mfi && mp->mfi != fi && fi->fib_priority >= mp->mfi->fib_priority)
return;

/* Handle add/replace event */
nhs = fib_info_num_path(fi);
if (nhs == 1) {
@@ -135,12 +139,13 @@ static void mlx5_lag_fib_route_event(struct mlx5_lag *ldev,
int i = mlx5_lag_dev_get_netdev_idx(ldev, nh_dev);

if (i < 0)
i = MLX5_LAG_NORMAL_AFFINITY;
else
++i;
return;

i++;
mlx5_lag_set_port_affinity(ldev, i);
}

mp->mfi = fi;
return;
}

@@ -121,9 +121,6 @@ u32 mlx5_chains_get_nf_ft_chain(struct mlx5_fs_chains *chains)

u32 mlx5_chains_get_prio_range(struct mlx5_fs_chains *chains)
{
if (!mlx5_chains_prios_supported(chains))
return 1;

if (mlx5_chains_ignore_flow_level_supported(chains))
return UINT_MAX;

@@ -1469,6 +1469,7 @@ static int lpc_eth_drv_resume(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct netdata_local *pldat;
int ret;

if (device_may_wakeup(&pdev->dev))
disable_irq_wake(ndev->irq);
@@ -1478,7 +1479,9 @@ static int lpc_eth_drv_resume(struct platform_device *pdev)
pldat = netdev_priv(ndev);

/* Enable interface clock */
clk_enable(pldat->clk);
ret = clk_enable(pldat->clk);
if (ret)
return ret;

/* Reset and initialize */
__lpc_eth_reset(pldat);

@@ -3778,7 +3778,7 @@ bool qed_iov_mark_vf_flr(struct qed_hwfn *p_hwfn, u32 *p_disabled_vfs)
return found;
}

static void qed_iov_get_link(struct qed_hwfn *p_hwfn,
static int qed_iov_get_link(struct qed_hwfn *p_hwfn,
u16 vfid,
struct qed_mcp_link_params *p_params,
struct qed_mcp_link_state *p_link,
@@ -3790,7 +3790,7 @@ static void qed_iov_get_link(struct qed_hwfn *p_hwfn,
struct qed_bulletin_content *p_bulletin;

if (!p_vf)
return;
return -EINVAL;

p_bulletin = p_vf->bulletin.p_virt;

@@ -3800,6 +3800,7 @@ static void qed_iov_get_link(struct qed_hwfn *p_hwfn,
__qed_vf_get_link_state(p_hwfn, p_link, p_bulletin);
if (p_caps)
__qed_vf_get_link_caps(p_hwfn, p_caps, p_bulletin);
return 0;
}

static int
@@ -4658,6 +4659,7 @@ static int qed_get_vf_config(struct qed_dev *cdev,
struct qed_public_vf_info *vf_info;
struct qed_mcp_link_state link;
u32 tx_rate;
int ret;

/* Sanitize request */
if (IS_VF(cdev))
@@ -4671,7 +4673,9 @@ static int qed_get_vf_config(struct qed_dev *cdev,

vf_info = qed_iov_get_public_vf_info(hwfn, vf_id, true);

qed_iov_get_link(hwfn, vf_id, NULL, &link, NULL);
ret = qed_iov_get_link(hwfn, vf_id, NULL, &link, NULL);
if (ret)
return ret;

/* Fill information about VF */
ivi->vf = vf_id;

@@ -513,6 +513,9 @@ int qed_vf_hw_prepare(struct qed_hwfn *p_hwfn)
p_iov->bulletin.size,
&p_iov->bulletin.phys,
GFP_KERNEL);
if (!p_iov->bulletin.p_virt)
goto free_pf2vf_reply;

DP_VERBOSE(p_hwfn, QED_MSG_IOV,
"VF's bulletin Board [%p virt 0x%llx phys 0x%08x bytes]\n",
p_iov->bulletin.p_virt,
@@ -552,6 +555,10 @@ int qed_vf_hw_prepare(struct qed_hwfn *p_hwfn)

return rc;

free_pf2vf_reply:
dma_free_coherent(&p_hwfn->cdev->pdev->dev,
sizeof(union pfvf_tlvs),
p_iov->pf2vf_reply, p_iov->pf2vf_reply_phys);
free_vf2pf_request:
dma_free_coherent(&p_hwfn->cdev->pdev->dev,
sizeof(union vfpf_tlvs),

@@ -568,7 +568,9 @@ int cpts_register(struct cpts *cpts)
for (i = 0; i < CPTS_MAX_EVENTS; i++)
list_add(&cpts->pool_data[i].list, &cpts->pool);

clk_enable(cpts->refclk);
err = clk_enable(cpts->refclk);
if (err)
return err;

cpts_write32(cpts, CPTS_EN, control);
cpts_write32(cpts, TS_PEND_EN, int_enable);

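Both the lpc_eth and cpts hunks fix the same class of problem: clk_enable() returns an error code that must be checked before the hardware behind the clock is touched. A minimal, generic sketch of the pattern (placeholder function name, not from either driver):

    #include <linux/clk.h>

    static int example_start(struct clk *refclk)
    {
    	int err;

    	/* clk_enable() may fail depending on the clock provider. */
    	err = clk_enable(refclk);
    	if (err)
    		return err;

    	/* ... program registers only once the clock is running ... */
    	return 0;
    }
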
@@ -1185,7 +1185,7 @@ static int xemaclite_of_probe(struct platform_device *ofdev)
if (rc) {
dev_err(dev,
"Cannot register network device, aborting\n");
goto error;
goto put_node;
}

dev_info(dev,
@@ -1193,6 +1193,8 @@ static int xemaclite_of_probe(struct platform_device *ofdev)
(unsigned long __force)ndev->mem_start, lp->base_addr, ndev->irq);
return 0;

put_node:
of_node_put(lp->phy_node);
error:
free_netdev(ndev);
return rc;

@@ -274,7 +274,7 @@ static int dp83822_config_intr(struct phy_device *phydev)
if (err < 0)
return err;

err = phy_write(phydev, MII_DP83822_MISR1, 0);
err = phy_write(phydev, MII_DP83822_MISR2, 0);
if (err < 0)
return err;

@@ -30,8 +30,12 @@
#define INTSRC_LINK_DOWN BIT(4)
#define INTSRC_REMOTE_FAULT BIT(5)
#define INTSRC_ANEG_COMPLETE BIT(6)
#define INTSRC_ENERGY_DETECT BIT(7)
#define INTSRC_MASK 30

#define INT_SOURCES (INTSRC_LINK_DOWN | INTSRC_ANEG_COMPLETE | \
INTSRC_ENERGY_DETECT)

#define BANK_ANALOG_DSP 0
#define BANK_WOL 1
#define BANK_BIST 3
@@ -200,7 +204,6 @@ static int meson_gxl_ack_interrupt(struct phy_device *phydev)

static int meson_gxl_config_intr(struct phy_device *phydev)
{
u16 val;
int ret;

if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {
@@ -209,16 +212,9 @@ static int meson_gxl_config_intr(struct phy_device *phydev)
if (ret)
return ret;

val = INTSRC_ANEG_PR
| INTSRC_PARALLEL_FAULT
| INTSRC_ANEG_LP_ACK
| INTSRC_LINK_DOWN
| INTSRC_REMOTE_FAULT
| INTSRC_ANEG_COMPLETE;
ret = phy_write(phydev, INTSRC_MASK, val);
ret = phy_write(phydev, INTSRC_MASK, INT_SOURCES);
} else {
val = 0;
ret = phy_write(phydev, INTSRC_MASK, val);
ret = phy_write(phydev, INTSRC_MASK, 0);

/* Ack any pending IRQ */
ret = meson_gxl_ack_interrupt(phydev);
@@ -237,9 +233,22 @@ static irqreturn_t meson_gxl_handle_interrupt(struct phy_device *phydev)
return IRQ_NONE;
}

irq_status &= INT_SOURCES;

if (irq_status == 0)
return IRQ_NONE;

/* Aneg-complete interrupt is used for link-up detection */
if (phydev->autoneg == AUTONEG_ENABLE &&
irq_status == INTSRC_ENERGY_DETECT)
return IRQ_HANDLED;

/* Give PHY some time before MAC starts sending data. This works
* around an issue where network doesn't come up properly.
*/
if (!(irq_status & INTSRC_LINK_DOWN))
phy_queue_state_machine(phydev, msecs_to_jiffies(100));
else
phy_trigger_machine(phydev);

return IRQ_HANDLED;

@@ -84,7 +84,8 @@ static int __must_check __smsc95xx_read_reg(struct usbnet *dev, u32 index,
ret = fn(dev, USB_VENDOR_REQUEST_READ_REGISTER, USB_DIR_IN
| USB_TYPE_VENDOR | USB_RECIP_DEVICE,
0, index, &buf, 4);
if (unlikely(ret < 0)) {
if (ret < 0) {
if (ret != -ENODEV)
netdev_warn(dev->net, "Failed to read reg index 0x%08x: %d\n",
index, ret);
return ret;
@@ -116,7 +117,7 @@ static int __must_check __smsc95xx_write_reg(struct usbnet *dev, u32 index,
ret = fn(dev, USB_VENDOR_REQUEST_WRITE_REGISTER, USB_DIR_OUT
| USB_TYPE_VENDOR | USB_RECIP_DEVICE,
0, index, &buf, 4);
if (unlikely(ret < 0))
if (ret < 0 && ret != -ENODEV)
netdev_warn(dev->net, "Failed to write reg index 0x%08x: %d\n",
index, ret);

@@ -159,6 +160,9 @@ static int __must_check __smsc95xx_phy_wait_not_busy(struct usbnet *dev,
do {
ret = __smsc95xx_read_reg(dev, MII_ADDR, &val, in_pm);
if (ret < 0) {
/* Ignore -ENODEV error during disconnect() */
if (ret == -ENODEV)
return 0;
netdev_warn(dev->net, "Error reading MII_ACCESS\n");
return ret;
}
@@ -194,6 +198,7 @@ static int __smsc95xx_mdio_read(struct usbnet *dev, int phy_id, int idx,
addr = mii_address_cmd(phy_id, idx, MII_READ_ | MII_BUSY_);
ret = __smsc95xx_write_reg(dev, MII_ADDR, addr, in_pm);
if (ret < 0) {
if (ret != -ENODEV)
netdev_warn(dev->net, "Error writing MII_ADDR\n");
goto done;
}
@@ -206,6 +211,7 @@ static int __smsc95xx_mdio_read(struct usbnet *dev, int phy_id, int idx,

ret = __smsc95xx_read_reg(dev, MII_DATA, &val, in_pm);
if (ret < 0) {
if (ret != -ENODEV)
netdev_warn(dev->net, "Error reading MII_DATA\n");
goto done;
}
@@ -214,6 +220,10 @@ static int __smsc95xx_mdio_read(struct usbnet *dev, int phy_id, int idx,

done:
mutex_unlock(&dev->phy_mutex);

/* Ignore -ENODEV error during disconnect() */
if (ret == -ENODEV)
return 0;
return ret;
}

@@ -235,6 +245,7 @@ static void __smsc95xx_mdio_write(struct usbnet *dev, int phy_id,
val = regval;
ret = __smsc95xx_write_reg(dev, MII_DATA, val, in_pm);
if (ret < 0) {
if (ret != -ENODEV)
netdev_warn(dev->net, "Error writing MII_DATA\n");
goto done;
}
@@ -243,6 +254,7 @@ static void __smsc95xx_mdio_write(struct usbnet *dev, int phy_id,
addr = mii_address_cmd(phy_id, idx, MII_WRITE_ | MII_BUSY_);
ret = __smsc95xx_write_reg(dev, MII_ADDR, addr, in_pm);
if (ret < 0) {
if (ret != -ENODEV)
netdev_warn(dev->net, "Error writing MII_ADDR\n");
goto done;
}

@@ -256,6 +256,7 @@ static void backend_disconnect(struct backend_info *be)
unsigned int queue_index;

xen_unregister_watchers(vif);
xenbus_rm(XBT_NIL, be->dev->nodename, "hotplug-status");
#ifdef CONFIG_DEBUG_FS
xenvif_debugfs_delif(vif);
#endif /* CONFIG_DEBUG_FS */
@@ -675,7 +676,6 @@ static void hotplug_status_changed(struct xenbus_watch *watch,

/* Not interested in this watch anymore. */
unregister_hotplug_status_watch(be);
xenbus_rm(XBT_NIL, be->dev->nodename, "hotplug-status");
}
kfree(str);
}
@@ -824,15 +824,11 @@ static void connect(struct backend_info *be)
xenvif_carrier_on(be->vif);

unregister_hotplug_status_watch(be);
if (xenbus_exists(XBT_NIL, dev->nodename, "hotplug-status")) {
err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
NULL, hotplug_status_changed,
"%s/%s", dev->nodename,
"hotplug-status");
if (err)
goto err;
err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch, NULL,
hotplug_status_changed,
"%s/%s", dev->nodename, "hotplug-status");
if (!err)
be->have_hotplug_status_watch = 1;
}

netif_tx_wake_all_queues(be->vif->dev);

@@ -1612,7 +1612,9 @@ free_nfc_dev:
nfc_digital_free_device(dev->nfc_digital_dev);

error:
usb_kill_urb(dev->in_urb);
usb_free_urb(dev->in_urb);
usb_kill_urb(dev->out_urb);
usb_free_urb(dev->out_urb);
usb_put_dev(dev->udev);

@@ -5344,11 +5344,6 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0422, quirk_no_ext_tags);
*/
static void quirk_amd_harvest_no_ats(struct pci_dev *pdev)
{
if ((pdev->device == 0x7312 && pdev->revision != 0x00) ||
(pdev->device == 0x7340 && pdev->revision != 0xc5) ||
(pdev->device == 0x7341 && pdev->revision != 0x00))
return;

if (pdev->device == 0x15d8) {
if (pdev->revision == 0xcf &&
pdev->subsystem_vendor == 0xea50 &&
@@ -5370,10 +5365,19 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x98e4, quirk_amd_harvest_no_ats);
/* AMD Iceland dGPU */
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x6900, quirk_amd_harvest_no_ats);
/* AMD Navi10 dGPU */
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7310, quirk_amd_harvest_no_ats);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7312, quirk_amd_harvest_no_ats);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7318, quirk_amd_harvest_no_ats);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7319, quirk_amd_harvest_no_ats);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x731a, quirk_amd_harvest_no_ats);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x731b, quirk_amd_harvest_no_ats);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x731e, quirk_amd_harvest_no_ats);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x731f, quirk_amd_harvest_no_ats);
/* AMD Navi14 dGPU */
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7340, quirk_amd_harvest_no_ats);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7341, quirk_amd_harvest_no_ats);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7347, quirk_amd_harvest_no_ats);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x734f, quirk_amd_harvest_no_ats);
/* AMD Raven platform iGPU */
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x15d8, quirk_amd_harvest_no_ats);
#endif /* CONFIG_PCI_ATS */

@@ -749,7 +749,6 @@ static const struct acpi_device_id tgl_pinctrl_acpi_match[] = {
{ "INT34C5", (kernel_ulong_t)&tgllp_soc_data },
{ "INT34C6", (kernel_ulong_t)&tglh_soc_data },
{ "INTC1055", (kernel_ulong_t)&tgllp_soc_data },
{ "INTC1057", (kernel_ulong_t)&tgllp_soc_data },
{ }
};
MODULE_DEVICE_TABLE(acpi, tgl_pinctrl_acpi_match);

@@ -14,6 +14,7 @@
#define KMSG_COMPONENT "dasd"

#include <linux/interrupt.h>
#include <linux/major.h>
#include <linux/fs.h>
#include <linux/blkpg.h>

@@ -48,6 +48,7 @@
#include <linux/blkpg.h>
#include <linux/blk-pm.h>
#include <linux/delay.h>
#include <linux/major.h>
#include <linux/mutex.h>
#include <linux/string_helpers.h>
#include <linux/async.h>

@@ -31,6 +31,7 @@ static int sg_version_num = 30536; /* 2 digits for each component */
#include <linux/errno.h>
#include <linux/mtio.h>
#include <linux/ioctl.h>
#include <linux/major.h>
#include <linux/slab.h>
#include <linux/fcntl.h>
#include <linux/init.h>

@@ -44,6 +44,7 @@
#include <linux/cdrom.h>
#include <linux/interrupt.h>
#include <linux/init.h>
#include <linux/major.h>
#include <linux/blkdev.h>
#include <linux/blk-pm.h>
#include <linux/mutex.h>

@@ -32,6 +32,7 @@ static const char *verstr = "20160209";
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/mtio.h>
#include <linux/major.h>
#include <linux/cdrom.h>
#include <linux/ioctl.h>
#include <linux/fcntl.h>

@@ -585,6 +585,12 @@ static int rockchip_spi_slave_abort(struct spi_controller *ctlr)
{
struct rockchip_spi *rs = spi_controller_get_devdata(ctlr);

if (atomic_read(&rs->state) & RXDMA)
dmaengine_terminate_sync(ctlr->dma_rx);
if (atomic_read(&rs->state) & TXDMA)
dmaengine_terminate_sync(ctlr->dma_tx);
atomic_set(&rs->state, 0);
spi_enable_chip(rs, false);
rs->slave_abort = true;
spi_finalize_current_transfer(ctlr);

@@ -654,7 +660,7 @@ static int rockchip_spi_probe(struct platform_device *pdev)
struct spi_controller *ctlr;
struct resource *mem;
struct device_node *np = pdev->dev.of_node;
u32 rsd_nsecs;
u32 rsd_nsecs, num_cs;
bool slave_mode;

slave_mode = of_property_read_bool(np, "spi-slave");
@@ -764,8 +770,9 @@ static int rockchip_spi_probe(struct platform_device *pdev)
* rk spi0 has two native cs, spi1..5 one cs only
* if num-cs is missing in the dts, default to 1
*/
if (of_property_read_u16(np, "num-cs", &ctlr->num_chipselect))
ctlr->num_chipselect = 1;
if (of_property_read_u32(np, "num-cs", &num_cs))
num_cs = 1;
ctlr->num_chipselect = num_cs;
ctlr->use_gpio_descriptors = true;
}
ctlr->dev.of_node = pdev->dev.of_node;

@@ -76,14 +76,15 @@ static void tx_complete(void *arg)

static int gdm_lte_rx(struct sk_buff *skb, struct nic *nic, int nic_type)
{
int ret;
int ret, len;

len = skb->len + ETH_HLEN;
ret = netif_rx_ni(skb);
if (ret == NET_RX_DROP) {
nic->stats.rx_dropped++;
} else {
nic->stats.rx_packets++;
nic->stats.rx_bytes += skb->len + ETH_HLEN;
nic->stats.rx_bytes += len;
}

return 0;

@@ -5915,6 +5915,7 @@ u8 chk_bmc_sleepq_hdl(struct adapter *padapter, unsigned char *pbuf)
struct sta_info *psta_bmc;
struct list_head *xmitframe_plist, *xmitframe_phead, *tmp;
struct xmit_frame *pxmitframe = NULL;
struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
struct sta_priv *pstapriv = &padapter->stapriv;

/* for BC/MC Frames */
@@ -5925,7 +5926,8 @@ u8 chk_bmc_sleepq_hdl(struct adapter *padapter, unsigned char *pbuf)
if ((pstapriv->tim_bitmap&BIT(0)) && (psta_bmc->sleepq_len > 0)) {
msleep(10);/* 10ms, ATIM(HIQ) Windows */

spin_lock_bh(&psta_bmc->sleep_q.lock);
/* spin_lock_bh(&psta_bmc->sleep_q.lock); */
spin_lock_bh(&pxmitpriv->lock);

xmitframe_phead = get_list_head(&psta_bmc->sleep_q);
list_for_each_safe(xmitframe_plist, tmp, xmitframe_phead) {
@@ -5948,7 +5950,8 @@ u8 chk_bmc_sleepq_hdl(struct adapter *padapter, unsigned char *pbuf)
rtw_hal_xmitframe_enqueue(padapter, pxmitframe);
}

spin_unlock_bh(&psta_bmc->sleep_q.lock);
/* spin_unlock_bh(&psta_bmc->sleep_q.lock); */
spin_unlock_bh(&pxmitpriv->lock);

/* check hi queue and bmc_sleepq */
rtw_chk_hi_queue_cmd(padapter);