Merge 5.15.19 into android13-5.15
Changes in 5.15.19
can: m_can: m_can_fifo_{read,write}: don't read or write from/to FIFO if length is 0
net: sfp: ignore disabled SFP node
net: stmmac: configure PTP clock source prior to PTP initialization
net: stmmac: skip only stmmac_ptp_register when resume from suspend
ARM: 9179/1: uaccess: avoid alignment faults in copy_[from|to]_kernel_nofault
ARM: 9180/1: Thumb2: align ALT_UP() sections in modules sufficiently
KVM: arm64: Use shadow SPSR_EL1 when injecting exceptions on !VHE
s390/module: fix loading modules with a lot of relocations
s390/hypfs: include z/VM guests with access control group set
s390/nmi: handle guarded storage validity failures for KVM guests
s390/nmi: handle vector validity failures for KVM guests
bpf: Guard against accessing NULL pt_regs in bpf_get_task_stack()
powerpc32/bpf: Fix codegen for bpf-to-bpf calls
powerpc/bpf: Update ldimm64 instructions during extra pass
ucount: Make get_ucount a safe get_user replacement
scsi: zfcp: Fix failed recovery on gone remote port with non-NPIV FCP devices
udf: Restore i_lenAlloc when inode expansion fails
udf: Fix NULL ptr deref when converting from inline format
efi: runtime: avoid EFIv2 runtime services on Apple x86 machines
PM: wakeup: simplify the output logic of pm_show_wakelocks()
tracing/histogram: Fix a potential memory leak for kstrdup()
tracing: Don't inc err_log entry count if entry allocation fails
ceph: properly put ceph_string reference after async create attempt
ceph: set pool_ns in new inode layout for async creates
fsnotify: fix fsnotify hooks in pseudo filesystems
Revert "KVM: SVM: avoid infinite loop on NPF from bad address"
psi: Fix uaf issue when psi trigger is destroyed while being polled
powerpc/audit: Fix syscall_get_arch()
perf/x86/intel/uncore: Fix CAS_COUNT_WRITE issue for ICX
perf/x86/intel: Add a quirk for the calculation of the number of counters on Alder Lake
drm/etnaviv: relax submit size limits
drm/atomic: Add the crtc to affected crtc only if uapi.enable = true
drm/amd/display: Fix FP start/end for dcn30_internal_validate_bw.
KVM: LAPIC: Also cancel preemption timer during SET_LAPIC
KVM: SVM: Never reject emulation due to SMAP errata for !SEV guests
KVM: SVM: Don't intercept #GP for SEV guests
KVM: x86: nSVM: skip eax alignment check for non-SVM instructions
KVM: x86: Forcibly leave nested virt when SMM state is toggled
KVM: x86: Keep MSR_IA32_XSS unchanged for INIT
KVM: x86: Update vCPU's runtime CPUID on write to MSR_IA32_XSS
KVM: x86: Sync the states size with the XCR0/IA32_XSS at, any time
KVM: PPC: Book3S HV Nested: Fix nested HFSCR being clobbered with multiple vCPUs
dm: revert partial fix for redundant bio-based IO accounting
block: add bio_start_io_acct_time() to control start_time
dm: properly fix redundant bio-based IO accounting
serial: pl011: Fix incorrect rs485 RTS polarity on set_mctrl
serial: 8250: of: Fix mapped region size when using reg-offset property
serial: stm32: fix software flow control transfer
tty: n_gsm: fix SW flow control encoding/handling
tty: Partially revert the removal of the Cyclades public API
tty: Add support for Brainboxes UC cards.
kbuild: remove include/linux/cyclades.h from header file check
usb-storage: Add unusual-devs entry for VL817 USB-SATA bridge
usb: xhci-plat: fix crash when suspend if remote wake enable
usb: common: ulpi: Fix crash in ulpi_match()
usb: gadget: f_sourcesink: Fix isoc transfer for USB_SPEED_SUPER_PLUS
usb: cdnsp: Fix segmentation fault in cdns_lost_power function
usb: dwc3: xilinx: Skip resets and USB3 register settings for USB2.0 mode
usb: dwc3: xilinx: Fix error handling when getting USB3 PHY
USB: core: Fix hang in usb_kill_urb by adding memory barriers
usb: typec: tcpci: don't touch CC line if it's Vconn source
usb: typec: tcpm: Do not disconnect while receiving VBUS off
usb: typec: tcpm: Do not disconnect when receiving VSAFE0V
ucsi_ccg: Check DEV_INT bit only when starting CCG4
mm, kasan: use compare-exchange operation to set KASAN page tag
jbd2: export jbd2_journal_[grab|put]_journal_head
ocfs2: fix a deadlock when commit trans
sched/membarrier: Fix membarrier-rseq fence command missing from query bitmask
PCI/sysfs: Find shadow ROM before static attribute initialization
x86/MCE/AMD: Allow thresholding interface updates after init
x86/cpu: Add Xeon Icelake-D to list of CPUs that support PPIN
powerpc/32s: Allocate one 256k IBAT instead of two consecutives 128k IBATs
powerpc/32s: Fix kasan_init_region() for KASAN
powerpc/32: Fix boot failure with GCC latent entropy plugin
i40e: Increase delay to 1 s after global EMP reset
i40e: Fix issue when maximum queues is exceeded
i40e: Fix queues reservation for XDP
i40e: Fix for failed to init adminq while VF reset
i40e: fix unsigned stat widths
usb: roles: fix include/linux/usb/role.h compile issue
rpmsg: char: Fix race between the release of rpmsg_ctrldev and cdev
rpmsg: char: Fix race between the release of rpmsg_eptdev and cdev
scsi: elx: efct: Don't use GFP_KERNEL under spin lock
scsi: bnx2fc: Flush destroy_work queue before calling bnx2fc_interface_put()
ipv6_tunnel: Rate limit warning messages
ARM: 9170/1: fix panic when kasan and kprobe are enabled
net: fix information leakage in /proc/net/ptype
hwmon: (lm90) Mark alert as broken for MAX6646/6647/6649
hwmon: (lm90) Mark alert as broken for MAX6680
ping: fix the sk_bound_dev_if match in ping_lookup
ipv4: avoid using shared IP generator for connected sockets
hwmon: (lm90) Reduce maximum conversion rate for G781
NFSv4: Handle case where the lookup of a directory fails
NFSv4: nfs_atomic_open() can race when looking up a non-regular file
net-procfs: show net devices bound packet types
drm/msm: Fix wrong size calculation
drm/msm/dsi: Fix missing put_device() call in dsi_get_phy
drm/msm/dsi: invalid parameter check in msm_dsi_phy_enable
ipv6: annotate accesses to fn->fn_sernum
NFS: Ensure the server has an up to date ctime before hardlinking
NFS: Ensure the server has an up to date ctime before renaming
KVM: arm64: pkvm: Use the mm_ops indirection for cache maintenance
SUNRPC: Use BIT() macro in rpc_show_xprt_state()
SUNRPC: Don't dereference xprt->snd_task if it's a cookie
powerpc64/bpf: Limit 'ldbrx' to processors compliant with ISA v2.06
netfilter: conntrack: don't increment invalid counter on NF_REPEAT
powerpc/64s: Mask SRR0 before checking against the masked NIP
perf: Fix perf_event_read_local() time
sched/pelt: Relax the sync of util_sum with util_avg
net: phy: broadcom: hook up soft_reset for BCM54616S
net: stmmac: dwmac-visconti: Fix bit definitions for ETHER_CLK_SEL
net: stmmac: dwmac-visconti: Fix clock configuration for RMII mode
phylib: fix potential use-after-free
octeontx2-af: Do not fixup all VF action entries
octeontx2-af: Fix LBK backpressure id count
octeontx2-af: Retry until RVU block reset complete
octeontx2-pf: cn10k: Ensure valid pointers are freed to aura
octeontx2-af: verify CQ context updates
octeontx2-af: Increase link credit restore polling timeout
octeontx2-af: cn10k: Do not enable RPM loopback for LPC interfaces
octeontx2-pf: Forward error codes to VF
rxrpc: Adjust retransmission backoff
efi/libstub: arm64: Fix image check alignment at entry
io_uring: fix bug in slow unregistering of nodes
Drivers: hv: balloon: account for vmbus packet header in max_pkt_size
hwmon: (lm90) Re-enable interrupts after alert clears
hwmon: (lm90) Mark alert as broken for MAX6654
hwmon: (lm90) Fix sysfs and udev notifications
hwmon: (adt7470) Prevent divide by zero in adt7470_fan_write()
powerpc/perf: Fix power_pmu_disable to call clear_pmi_irq_pending only if PMI is pending
ipv4: fix ip option filtering for locally generated fragments
ibmvnic: Allow extra failures before disabling
ibmvnic: init ->running_cap_crqs early
ibmvnic: don't spin in tasklet
net/smc: Transitional solution for clcsock race issue
video: hyperv_fb: Fix validation of screen resolution
can: tcan4x5x: regmap: fix max register value
drm/msm/hdmi: Fix missing put_device() call in msm_hdmi_get_phy
drm/msm/dpu: invalid parameter check in dpu_setup_dspp_pcc
drm/msm/a6xx: Add missing suspend_count increment
yam: fix a memory leak in yam_siocdevprivate()
net: cpsw: Properly initialise struct page_pool_params
net: hns3: handle empty unknown interrupt for VF
sch_htb: Fail on unsupported parameters when offload is requested
Revert "drm/ast: Support 1600x900 with 108MHz PCLK"
KVM: selftests: Don't skip L2's VMCALL in SMM test for SVM guest
ceph: put the requests/sessions when it fails to alloc memory
gve: Fix GFP flags when allocing pages
Revert "ipv6: Honor all IPv6 PIO Valid Lifetime values"
net: bridge: vlan: fix single net device option dumping
ipv4: raw: lock the socket in raw_bind()
ipv4: tcp: send zero IPID in SYNACK messages
ipv4: remove sparse error in ip_neigh_gw4()
net: bridge: vlan: fix memory leak in __allowed_ingress
Bluetooth: refactor malicious adv data check
irqchip/realtek-rtl: Map control data to virq
irqchip/realtek-rtl: Fix off-by-one in routing
dt-bindings: can: tcan4x5x: fix mram-cfg RX FIFO config
perf/core: Fix cgroup event list management
psi: fix "no previous prototype" warnings when CONFIG_CGROUPS=n
psi: fix "defined but not used" warnings when CONFIG_PROC_FS=n
usb: dwc3: xilinx: fix uninitialized return value
usr/include/Makefile: add linux/nfc.h to the compile-test coverage
fsnotify: invalidate dcache before IN_DELETE event
block: Fix wrong offset in bio_truncate()
mtd: rawnand: mpc5121: Remove unused variable in ads5121_select_chip()
Linux 5.15.19
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I66399d45af362fa8e1672ba38c0d672e21afc716
@@ -92,7 +92,8 @@ Triggers can be set on more than one psi metric and more than one trigger
for the same psi metric can be specified. However for each trigger a separate
file descriptor is required to be able to poll it separately from others,
therefore for each trigger a separate open() syscall should be made even
when opening the same psi interface file.
when opening the same psi interface file. Write operations to a file descriptor
with an already existing psi trigger will fail with EBUSY.

Monitors activate only when system enters stall state for the monitored
psi metric and deactivates upon exit from the stall state. While system is

@@ -31,7 +31,7 @@ tcan4x5x: tcan4x5x@0 {
#address-cells = <1>;
#size-cells = <1>;
spi-max-frequency = <10000000>;
bosch,mram-cfg = <0x0 0 0 32 0 0 1 1>;
bosch,mram-cfg = <0x0 0 0 16 0 0 1 1>;
interrupt-parent = <&gpio1>;
interrupts = <14 IRQ_TYPE_LEVEL_LOW>;
device-state-gpios = <&gpio3 21 GPIO_ACTIVE_HIGH>;
Makefile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 15
SUBLEVEL = 18
SUBLEVEL = 19
EXTRAVERSION =
NAME = Trick or Treat
@@ -259,6 +259,7 @@
*/
#define ALT_UP(instr...) \
.pushsection ".alt.smp.init", "a" ;\
.align 2 ;\
.long 9998b - . ;\
9997: instr ;\
.if . - 9997b == 2 ;\
@@ -270,6 +271,7 @@
.popsection
#define ALT_UP_B(label) \
.pushsection ".alt.smp.init", "a" ;\
.align 2 ;\
.long 9998b - . ;\
W(b) . + (label - 9998b) ;\
.popsection

@@ -96,6 +96,7 @@ unsigned long get_wchan(struct task_struct *p);
#define __ALT_SMP_ASM(smp, up) \
"9998: " smp "\n" \
" .pushsection \".alt.smp.init\", \"a\"\n" \
" .align 2\n" \
" .long 9998b - .\n" \
" " up "\n" \
" .popsection\n"
@@ -11,6 +11,7 @@
#include <linux/string.h>
#include <asm/memory.h>
#include <asm/domain.h>
#include <asm/unaligned.h>
#include <asm/unified.h>
#include <asm/compiler.h>

@@ -497,7 +498,10 @@ do { \
} \
default: __err = __get_user_bad(); break; \
} \
*(type *)(dst) = __val; \
if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) \
put_unaligned(__val, (type *)(dst)); \
else \
*(type *)(dst) = __val; /* aligned by caller */ \
if (__err) \
goto err_label; \
} while (0)
@@ -507,7 +511,9 @@ do { \
const type *__pk_ptr = (dst); \
unsigned long __dst = (unsigned long)__pk_ptr; \
int __err = 0; \
type __val = *(type *)src; \
type __val = IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) \
? get_unaligned((type *)(src)) \
: *(type *)(src); /* aligned by caller */ \
switch (sizeof(type)) { \
case 1: __put_user_asm_byte(__val, __dst, __err, ""); break; \
case 2: __put_user_asm_half(__val, __dst, __err, ""); break; \
@@ -1,4 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
KASAN_SANITIZE_actions-common.o := n
KASAN_SANITIZE_actions-arm.o := n
KASAN_SANITIZE_actions-thumb.o := n
obj-$(CONFIG_KPROBES) += core.o actions-common.o checkers-common.o
obj-$(CONFIG_ARM_KPROBES_TEST) += test-kprobes.o
test-kprobes-objs := test-core.o

@@ -76,5 +76,5 @@ static void pci_fixup_video(struct pci_dev *pdev)
}
}
}
DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_ANY_ID, PCI_ANY_ID,
PCI_CLASS_DISPLAY_VGA, 8, pci_fixup_video);
DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_ANY_ID, PCI_ANY_ID,
PCI_CLASS_DISPLAY_VGA, 8, pci_fixup_video);
@@ -3,7 +3,7 @@
#include <linux/pci.h>
#include <loongson.h>

static void pci_fixup_radeon(struct pci_dev *pdev)
static void pci_fixup_video(struct pci_dev *pdev)
{
struct resource *res = &pdev->resource[PCI_ROM_RESOURCE];

@@ -22,8 +22,7 @@ static void pci_fixup_radeon(struct pci_dev *pdev)
res->flags = IORESOURCE_MEM | IORESOURCE_ROM_SHADOW |
IORESOURCE_PCI_FIXED;

dev_info(&pdev->dev, "BAR %d: assigned %pR for Radeon ROM\n",
PCI_ROM_RESOURCE, res);
dev_info(&pdev->dev, "Video device with shadowed ROM at %pR\n", res);
}
DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_ATI, 0x9615,
PCI_CLASS_DISPLAY_VGA, 8, pci_fixup_radeon);
DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_ATI, 0x9615,
PCI_CLASS_DISPLAY_VGA, 8, pci_fixup_video);
@@ -143,6 +143,8 @@ static __always_inline void update_user_segments(u32 val)
update_user_segment(15, val);
}

int __init find_free_bat(void);
unsigned int bat_block_size(unsigned long base, unsigned long top);
#endif /* !__ASSEMBLY__ */

/* We happily ignore the smaller BATs on 601, we don't actually use

@@ -39,7 +39,6 @@ struct kvm_nested_guest {
pgd_t *shadow_pgtable; /* our page table for this guest */
u64 l1_gr_to_hr; /* L1's addr of part'n-scoped table */
u64 process_table; /* process table entry for this guest */
u64 hfscr; /* HFSCR that the L1 requested for this nested guest */
long refcnt; /* number of pointers to this struct */
struct mutex tlb_lock; /* serialize page faults and tlbies */
struct kvm_nested_guest *next;

@@ -814,6 +814,7 @@ struct kvm_vcpu_arch {

/* For support of nested guests */
struct kvm_nested_guest *nested;
u64 nested_hfscr; /* HFSCR that the L1 requested for the nested guest */
u32 nested_vcpu_id;
gpa_t nested_io_gpr;
#endif
@@ -498,6 +498,7 @@
#define PPC_RAW_LDX(r, base, b) (0x7c00002a | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b))
#define PPC_RAW_LHZ(r, base, i) (0xa0000000 | ___PPC_RT(r) | ___PPC_RA(base) | IMM_L(i))
#define PPC_RAW_LHBRX(r, base, b) (0x7c00062c | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b))
#define PPC_RAW_LWBRX(r, base, b) (0x7c00042c | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b))
#define PPC_RAW_LDBRX(r, base, b) (0x7c000428 | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b))
#define PPC_RAW_STWCX(s, a, b) (0x7c00012d | ___PPC_RS(s) | ___PPC_RA(a) | ___PPC_RB(b))
#define PPC_RAW_CMPWI(a, i) (0x2c000000 | ___PPC_RA(a) | IMM_L(i))

@@ -90,7 +90,7 @@ static inline void syscall_get_arguments(struct task_struct *task,
unsigned long val, mask = -1UL;
unsigned int n = 6;

if (is_32bit_task())
if (is_tsk_32bit_task(task))
mask = 0xffffffff;

while (n--) {
@@ -115,7 +115,7 @@ static inline void syscall_set_arguments(struct task_struct *task,

static inline int syscall_get_arch(struct task_struct *task)
{
if (is_32bit_task())
if (is_tsk_32bit_task(task))
return AUDIT_ARCH_PPC;
else if (IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN))
return AUDIT_ARCH_PPC64LE;
@@ -165,8 +165,10 @@ static inline bool test_thread_local_flags(unsigned int flags)

#ifdef CONFIG_COMPAT
#define is_32bit_task() (test_thread_flag(TIF_32BIT))
#define is_tsk_32bit_task(tsk) (test_tsk_thread_flag(tsk, TIF_32BIT))
#else
#define is_32bit_task() (IS_ENABLED(CONFIG_PPC32))
#define is_tsk_32bit_task(tsk) (IS_ENABLED(CONFIG_PPC32))
#endif

#if defined(CONFIG_PPC64)

@@ -11,6 +11,7 @@ CFLAGS_prom_init.o += -fPIC
CFLAGS_btext.o += -fPIC
endif

CFLAGS_early_32.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
CFLAGS_cputable.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
CFLAGS_prom_init.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
CFLAGS_btext.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
@@ -30,6 +30,7 @@ COMPAT_SYS_CALL_TABLE:
.ifc \srr,srr
mfspr r11,SPRN_SRR0
ld r12,_NIP(r1)
clrrdi r11,r11,2
clrrdi r12,r12,2
100: tdne r11,r12
EMIT_WARN_ENTRY 100b,__FILE__,__LINE__,(BUGFLAG_WARNING | BUGFLAG_ONCE)
@@ -40,6 +41,7 @@ COMPAT_SYS_CALL_TABLE:
.else
mfspr r11,SPRN_HSRR0
ld r12,_NIP(r1)
clrrdi r11,r11,2
clrrdi r12,r12,2
100: tdne r11,r12
EMIT_WARN_ENTRY 100b,__FILE__,__LINE__,(BUGFLAG_WARNING | BUGFLAG_ONCE)
@@ -1731,7 +1731,6 @@ static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu,

static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu)
{
struct kvm_nested_guest *nested = vcpu->arch.nested;
int r;
int srcu_idx;

@@ -1831,7 +1830,7 @@ static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu)
* it into a HEAI.
*/
if (!(vcpu->arch.hfscr_permitted & (1UL << cause)) ||
(nested->hfscr & (1UL << cause))) {
(vcpu->arch.nested_hfscr & (1UL << cause))) {
vcpu->arch.trap = BOOK3S_INTERRUPT_H_EMUL_ASSIST;

/*

@@ -362,7 +362,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
/* set L1 state to L2 state */
vcpu->arch.nested = l2;
vcpu->arch.nested_vcpu_id = l2_hv.vcpu_token;
l2->hfscr = l2_hv.hfscr;
vcpu->arch.nested_hfscr = l2_hv.hfscr;
vcpu->arch.regs = l2_regs;

/* Guest must always run with ME enabled, HV disabled. */
@@ -19,6 +19,9 @@ CFLAGS_code-patching.o += -DDISABLE_BRANCH_PROFILING
CFLAGS_feature-fixups.o += -DDISABLE_BRANCH_PROFILING
endif

CFLAGS_code-patching.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
CFLAGS_feature-fixups.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)

obj-y += alloc.o code-patching.o feature-fixups.o pmem.o test_code-patching.o

ifndef CONFIG_KASAN
@@ -76,7 +76,7 @@ unsigned long p_block_mapped(phys_addr_t pa)
return 0;
}

static int find_free_bat(void)
int __init find_free_bat(void)
{
int b;
int n = mmu_has_feature(MMU_FTR_USE_HIGH_BATS) ? 8 : 4;
@@ -100,7 +100,7 @@ static int find_free_bat(void)
* - block size has to be a power of two. This is calculated by finding the
* highest bit set to 1.
*/
static unsigned int block_size(unsigned long base, unsigned long top)
unsigned int bat_block_size(unsigned long base, unsigned long top)
{
unsigned int max_size = SZ_256M;
unsigned int base_shift = (ffs(base) - 1) & 31;
@@ -145,7 +145,7 @@ static unsigned long __init __mmu_mapin_ram(unsigned long base, unsigned long to
int idx;

while ((idx = find_free_bat()) != -1 && base != top) {
unsigned int size = block_size(base, top);
unsigned int size = bat_block_size(base, top);

if (size < 128 << 10)
break;
@@ -196,18 +196,17 @@ void mmu_mark_initmem_nx(void)
int nb = mmu_has_feature(MMU_FTR_USE_HIGH_BATS) ? 8 : 4;
int i;
unsigned long base = (unsigned long)_stext - PAGE_OFFSET;
unsigned long top = (unsigned long)_etext - PAGE_OFFSET;
unsigned long top = ALIGN((unsigned long)_etext - PAGE_OFFSET, SZ_128K);
unsigned long border = (unsigned long)__init_begin - PAGE_OFFSET;
unsigned long size;

for (i = 0; i < nb - 1 && base < top && top - base > (128 << 10);) {
size = block_size(base, top);
for (i = 0; i < nb - 1 && base < top;) {
size = bat_block_size(base, top);
setibat(i++, PAGE_OFFSET + base, base, size, PAGE_KERNEL_TEXT);
base += size;
}
if (base < top) {
size = block_size(base, top);
size = max(size, 128UL << 10);
size = bat_block_size(base, top);
if ((top - base) > size) {
size <<= 1;
if (strict_kernel_rwx_enabled() && base + size > border)
@@ -10,48 +10,51 @@ int __init kasan_init_region(void *start, size_t size)
{
unsigned long k_start = (unsigned long)kasan_mem_to_shadow(start);
unsigned long k_end = (unsigned long)kasan_mem_to_shadow(start + size);
unsigned long k_cur = k_start;
int k_size = k_end - k_start;
int k_size_base = 1 << (ffs(k_size) - 1);
unsigned long k_nobat = k_start;
unsigned long k_cur;
phys_addr_t phys;
int ret;
void *block;

block = memblock_alloc(k_size, k_size_base);
while (k_nobat < k_end) {
unsigned int k_size = bat_block_size(k_nobat, k_end);
int idx = find_free_bat();

if (block && k_size_base >= SZ_128K && k_start == ALIGN(k_start, k_size_base)) {
int shift = ffs(k_size - k_size_base);
int k_size_more = shift ? 1 << (shift - 1) : 0;
if (idx == -1)
break;
if (k_size < SZ_128K)
break;
phys = memblock_phys_alloc_range(k_size, k_size, 0,
MEMBLOCK_ALLOC_ANYWHERE);
if (!phys)
break;

setbat(-1, k_start, __pa(block), k_size_base, PAGE_KERNEL);
if (k_size_more >= SZ_128K)
setbat(-1, k_start + k_size_base, __pa(block) + k_size_base,
k_size_more, PAGE_KERNEL);
if (v_block_mapped(k_start))
k_cur = k_start + k_size_base;
if (v_block_mapped(k_start + k_size_base))
k_cur = k_start + k_size_base + k_size_more;

update_bats();
setbat(idx, k_nobat, phys, k_size, PAGE_KERNEL);
k_nobat += k_size;
}
if (k_nobat != k_start)
update_bats();

if (!block)
block = memblock_alloc(k_size, PAGE_SIZE);
if (!block)
return -ENOMEM;
if (k_nobat < k_end) {
phys = memblock_phys_alloc_range(k_end - k_nobat, PAGE_SIZE, 0,
MEMBLOCK_ALLOC_ANYWHERE);
if (!phys)
return -ENOMEM;
}

ret = kasan_init_shadow_page_tables(k_start, k_end);
if (ret)
return ret;

kasan_update_early_region(k_start, k_cur, __pte(0));
kasan_update_early_region(k_start, k_nobat, __pte(0));

for (; k_cur < k_end; k_cur += PAGE_SIZE) {
for (k_cur = k_nobat; k_cur < k_end; k_cur += PAGE_SIZE) {
pmd_t *pmd = pmd_off_k(k_cur);
void *va = block + k_cur - k_start;
pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
pte_t pte = pfn_pte(PHYS_PFN(phys + k_cur - k_nobat), PAGE_KERNEL);

__set_pte_at(&init_mm, k_cur, pte_offset_kernel(pmd, k_cur), pte, 0);
}
flush_tlb_kernel_range(k_start, k_end);
memset(kasan_mem_to_shadow(start), 0, k_end - k_start);

return 0;
}
@@ -23,15 +23,15 @@ static void bpf_jit_fill_ill_insns(void *area, unsigned int size)
memset32(area, BREAKPOINT_INSTRUCTION, size / 4);
}

/* Fix the branch target addresses for subprog calls */
static int bpf_jit_fixup_subprog_calls(struct bpf_prog *fp, u32 *image,
struct codegen_context *ctx, u32 *addrs)
/* Fix updated addresses (for subprog calls, ldimm64, et al) during extra pass */
static int bpf_jit_fixup_addresses(struct bpf_prog *fp, u32 *image,
struct codegen_context *ctx, u32 *addrs)
{
const struct bpf_insn *insn = fp->insnsi;
bool func_addr_fixed;
u64 func_addr;
u32 tmp_idx;
int i, ret;
int i, j, ret;

for (i = 0; i < fp->len; i++) {
/*
@@ -66,6 +66,23 @@ static int bpf_jit_fixup_subprog_calls(struct bpf_prog *fp, u32 *image,
* of the JITed sequence remains unchanged.
*/
ctx->idx = tmp_idx;
} else if (insn[i].code == (BPF_LD | BPF_IMM | BPF_DW)) {
tmp_idx = ctx->idx;
ctx->idx = addrs[i] / 4;
#ifdef CONFIG_PPC32
PPC_LI32(ctx->b2p[insn[i].dst_reg] - 1, (u32)insn[i + 1].imm);
PPC_LI32(ctx->b2p[insn[i].dst_reg], (u32)insn[i].imm);
for (j = ctx->idx - addrs[i] / 4; j < 4; j++)
EMIT(PPC_RAW_NOP());
#else
func_addr = ((u64)(u32)insn[i].imm) | (((u64)(u32)insn[i + 1].imm) << 32);
PPC_LI64(b2p[insn[i].dst_reg], func_addr);
/* overwrite rest with nops */
for (j = ctx->idx - addrs[i] / 4; j < 5; j++)
EMIT(PPC_RAW_NOP());
#endif
ctx->idx = tmp_idx;
i++;
}
}

@@ -193,13 +210,13 @@ skip_init_ctx:
/*
* Do not touch the prologue and epilogue as they will remain
* unchanged. Only fix the branch target address for subprog
* calls in the body.
* calls in the body, and ldimm64 instructions.
*
* This does not change the offsets and lengths of the subprog
* call instruction sequences and hence, the size of the JITed
* image as well.
*/
bpf_jit_fixup_subprog_calls(fp, code_base, &cgctx, addrs);
bpf_jit_fixup_addresses(fp, code_base, &cgctx, addrs);

/* There is no need to perform the usual passes. */
goto skip_codegen_passes;
@@ -191,6 +191,9 @@ void bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx, u64 fun

if (image && rel < 0x2000000 && rel >= -0x2000000) {
PPC_BL_ABS(func);
EMIT(PPC_RAW_NOP());
EMIT(PPC_RAW_NOP());
EMIT(PPC_RAW_NOP());
} else {
/* Load function address into r0 */
EMIT(PPC_RAW_LIS(_R0, IMM_H(func)));
@@ -289,6 +292,8 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
bool func_addr_fixed;
u64 func_addr;
u32 true_cond;
u32 tmp_idx;
int j;

/*
* addrs[] maps a BPF bytecode address into a real offset from
@@ -836,8 +841,12 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
* 16 byte instruction that uses two 'struct bpf_insn'
*/
case BPF_LD | BPF_IMM | BPF_DW: /* dst = (u64) imm */
tmp_idx = ctx->idx;
PPC_LI32(dst_reg_h, (u32)insn[i + 1].imm);
PPC_LI32(dst_reg, (u32)insn[i].imm);
/* padding to allow full 4 instructions for later patching */
for (j = ctx->idx - tmp_idx; j < 4; j++)
EMIT(PPC_RAW_NOP());
/* Adjust for two bpf instructions */
addrs[++i] = ctx->idx * 4;
break;
||||
@@ -318,6 +318,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
u64 imm64;
u32 true_cond;
u32 tmp_idx;
int j;

/*
* addrs[] maps a BPF bytecode address into a real offset from
@@ -632,17 +633,21 @@ bpf_alu32_trunc:
EMIT(PPC_RAW_MR(dst_reg, b2p[TMP_REG_1]));
break;
case 64:
/*
* Way easier and faster(?) to store the value
* into stack and then use ldbrx
*
* ctx->seen will be reliable in pass2, but
* the instructions generated will remain the
* same across all passes
*/
/* Store the value to stack and then use byte-reverse loads */
PPC_BPF_STL(dst_reg, 1, bpf_jit_stack_local(ctx));
EMIT(PPC_RAW_ADDI(b2p[TMP_REG_1], 1, bpf_jit_stack_local(ctx)));
EMIT(PPC_RAW_LDBRX(dst_reg, 0, b2p[TMP_REG_1]));
if (cpu_has_feature(CPU_FTR_ARCH_206)) {
EMIT(PPC_RAW_LDBRX(dst_reg, 0, b2p[TMP_REG_1]));
} else {
EMIT(PPC_RAW_LWBRX(dst_reg, 0, b2p[TMP_REG_1]));
if (IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN))
EMIT(PPC_RAW_SLDI(dst_reg, dst_reg, 32));
EMIT(PPC_RAW_LI(b2p[TMP_REG_2], 4));
EMIT(PPC_RAW_LWBRX(b2p[TMP_REG_2], b2p[TMP_REG_2], b2p[TMP_REG_1]));
if (IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
EMIT(PPC_RAW_SLDI(b2p[TMP_REG_2], b2p[TMP_REG_2], 32));
EMIT(PPC_RAW_OR(dst_reg, dst_reg, b2p[TMP_REG_2]));
}
break;
}
break;
@@ -806,9 +811,13 @@ emit_clear:
case BPF_LD | BPF_IMM | BPF_DW: /* dst = (u64) imm */
imm64 = ((u64)(u32) insn[i].imm) |
(((u64)(u32) insn[i+1].imm) << 32);
tmp_idx = ctx->idx;
PPC_LI64(dst_reg, imm64);
/* padding to allow full 5 instructions for later patching */
for (j = ctx->idx - tmp_idx; j < 5; j++)
EMIT(PPC_RAW_NOP());
/* Adjust for two bpf instructions */
addrs[++i] = ctx->idx * 4;
PPC_LI64(dst_reg, imm64);
break;

/*
@@ -1326,9 +1326,20 @@ static void power_pmu_disable(struct pmu *pmu)
* Otherwise provide a warning if there is PMI pending, but
* no counter is found overflown.
*/
if (any_pmc_overflown(cpuhw))
clear_pmi_irq_pending();
else
if (any_pmc_overflown(cpuhw)) {
/*
* Since power_pmu_disable runs under local_irq_save, it
* could happen that code hits a PMC overflow without PMI
* pending in paca. Hence only clear PMI pending if it was
* set.
*
* If a PMI is pending, then MSR[EE] must be disabled (because
* the masked PMI handler disabling EE). So it is safe to
* call clear_pmi_irq_pending().
*/
if (pmi_irq_pending())
clear_pmi_irq_pending();
} else
WARN_ON(pmi_irq_pending());

val = mmcra = cpuhw->mmcr.mmcra;
@@ -20,6 +20,7 @@

static char local_guest[] = " ";
static char all_guests[] = "* ";
static char *all_groups = all_guests;
static char *guest_query;

struct diag2fc_data {
@@ -62,10 +63,11 @@ static int diag2fc(int size, char* query, void *addr)

memcpy(parm_list.userid, query, NAME_LEN);
ASCEBC(parm_list.userid, NAME_LEN);
parm_list.addr = (unsigned long) addr ;
memcpy(parm_list.aci_grp, all_groups, NAME_LEN);
ASCEBC(parm_list.aci_grp, NAME_LEN);
parm_list.addr = (unsigned long)addr;
parm_list.size = size;
parm_list.fmt = 0x02;
memset(parm_list.aci_grp, 0x40, NAME_LEN);
rc = -1;

diag_stat_inc(DIAG_STAT_X2FC);
@@ -33,7 +33,7 @@
#define DEBUGP(fmt , ...)
#endif

#define PLT_ENTRY_SIZE 20
#define PLT_ENTRY_SIZE 22

void *module_alloc(unsigned long size)
{
@@ -340,27 +340,26 @@ static int apply_rela(Elf_Rela *rela, Elf_Addr base, Elf_Sym *symtab,
case R_390_PLTOFF32: /* 32 bit offset from GOT to PLT. */
case R_390_PLTOFF64: /* 16 bit offset from GOT to PLT. */
if (info->plt_initialized == 0) {
unsigned int insn[5];
unsigned int *ip = me->core_layout.base +
me->arch.plt_offset +
info->plt_offset;
unsigned char insn[PLT_ENTRY_SIZE];
char *plt_base;
char *ip;

insn[0] = 0x0d10e310; /* basr 1,0 */
insn[1] = 0x100a0004; /* lg 1,10(1) */
plt_base = me->core_layout.base + me->arch.plt_offset;
ip = plt_base + info->plt_offset;
*(int *)insn = 0x0d10e310; /* basr 1,0 */
*(int *)&insn[4] = 0x100c0004; /* lg 1,12(1) */
if (IS_ENABLED(CONFIG_EXPOLINE) && !nospec_disable) {
unsigned int *ij;
ij = me->core_layout.base +
me->arch.plt_offset +
me->arch.plt_size - PLT_ENTRY_SIZE;
insn[2] = 0xa7f40000 + /* j __jump_r1 */
(unsigned int)(u16)
(((unsigned long) ij - 8 -
(unsigned long) ip) / 2);
char *jump_r1;

jump_r1 = plt_base + me->arch.plt_size -
PLT_ENTRY_SIZE;
/* brcl 0xf,__jump_r1 */
*(short *)&insn[8] = 0xc0f4;
*(int *)&insn[10] = (jump_r1 - (ip + 8)) / 2;
} else {
insn[2] = 0x07f10000; /* br %r1 */
*(int *)&insn[8] = 0x07f10000; /* br %r1 */
}
insn[3] = (unsigned int) (val >> 32);
insn[4] = (unsigned int) val;
*(long *)&insn[14] = val;

write(ip, insn, sizeof(insn));
info->plt_initialized = 1;
@@ -273,7 +273,14 @@ static int notrace s390_validate_registers(union mci mci, int umode)
/* Validate vector registers */
union ctlreg0 cr0;

if (!mci.vr) {
/*
* The vector validity must only be checked if not running a
* KVM guest. For KVM guests the machine check is forwarded by
* KVM and it is the responsibility of the guest to take
* appropriate actions. The host vector or FPU values have been
* saved by KVM and will be restored by KVM.
*/
if (!mci.vr && !test_cpu_flag(CIF_MCCK_GUEST)) {
/*
* Vector registers can't be restored. If the kernel
* currently uses vector registers the system is
@@ -316,11 +323,21 @@ static int notrace s390_validate_registers(union mci mci, int umode)
if (cr2.gse) {
if (!mci.gs) {
/*
* Guarded storage register can't be restored and
* the current processes uses guarded storage.
* It has to be terminated.
* 2 cases:
* - machine check in kernel or userspace
* - machine check while running SIE (KVM guest)
* For kernel or userspace the userspace values of
* guarded storage control can not be recreated, the
* process must be terminated.
* For SIE the guest values of guarded storage can not
* be recreated. This is either due to a bug or due to
* GS being disabled in the guest. The guest will be
* notified by KVM code and the guests machine check
* handling must take care of this. The host values
* are saved by KVM and are not affected.
*/
kill_task = 1;
if (!test_cpu_flag(CIF_MCCK_GUEST))
kill_task = 1;
} else {
load_gs_cb((struct gs_cb *)mcesa->guarded_storage_save_area);
}
@@ -6187,6 +6187,19 @@ __init int intel_pmu_init(void)
pmu->num_counters = x86_pmu.num_counters;
pmu->num_counters_fixed = x86_pmu.num_counters_fixed;
}

/*
* Quirk: For some Alder Lake machine, when all E-cores are disabled in
* a BIOS, the leaf 0xA will enumerate all counters of P-cores. However,
* the X86_FEATURE_HYBRID_CPU is still set. The above codes will
* mistakenly add extra counters for P-cores. Correct the number of
* counters here.
*/
if ((pmu->num_counters > 8) || (pmu->num_counters_fixed > 4)) {
pmu->num_counters = x86_pmu.num_counters;
pmu->num_counters_fixed = x86_pmu.num_counters_fixed;
}

pmu->max_pebs_events = min_t(unsigned, MAX_PEBS_EVENTS, pmu->num_counters);
pmu->unconstrained = (struct event_constraint)
__EVENT_CONSTRAINT(0, (1ULL << pmu->num_counters) - 1,

@@ -5482,7 +5482,7 @@ static struct intel_uncore_type icx_uncore_imc = {
.fixed_ctr_bits = 48,
.fixed_ctr = SNR_IMC_MMIO_PMON_FIXED_CTR,
.fixed_ctl = SNR_IMC_MMIO_PMON_FIXED_CTL,
.event_descs = hswep_uncore_imc_events,
.event_descs = snr_uncore_imc_events,
.perf_ctr = SNR_IMC_MMIO_PMON_CTR0,
.event_ctl = SNR_IMC_MMIO_PMON_CTL0,
.event_mask = SNBEP_PMON_RAW_EVENT_MASK,
@@ -1487,6 +1487,7 @@ struct kvm_x86_ops {
};

struct kvm_x86_nested_ops {
void (*leave_nested)(struct kvm_vcpu *vcpu);
int (*check_events)(struct kvm_vcpu *vcpu);
bool (*hv_timer_pending)(struct kvm_vcpu *vcpu);
void (*triple_fault)(struct kvm_vcpu *vcpu);

@@ -400,7 +400,7 @@ static void threshold_restart_bank(void *_tr)
u32 hi, lo;

/* sysfs write might race against an offline operation */
if (this_cpu_read(threshold_banks))
if (!this_cpu_read(threshold_banks) && !tr->set_lvt_off)
return;

rdmsr(tr->b->address, lo, hi);

@@ -486,6 +486,7 @@ static void intel_ppin_init(struct cpuinfo_x86 *c)
case INTEL_FAM6_BROADWELL_X:
case INTEL_FAM6_SKYLAKE_X:
case INTEL_FAM6_ICELAKE_X:
case INTEL_FAM6_ICELAKE_D:
case INTEL_FAM6_SAPPHIRERAPIDS_X:
case INTEL_FAM6_XEON_PHI_KNL:
case INTEL_FAM6_XEON_PHI_KNM:

@@ -2623,7 +2623,7 @@ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s)
kvm_apic_set_version(vcpu);

apic_update_ppr(apic);
hrtimer_cancel(&apic->lapic_timer.timer);
cancel_apic_timer(apic);
apic->lapic_timer.expired_tscdeadline = 0;
apic_update_lvtt(apic);
apic_manage_nmi_watchdog(apic, kvm_lapic_get_reg(apic, APIC_LVT0));
@@ -942,9 +942,9 @@ void svm_free_nested(struct vcpu_svm *svm)
/*
* Forcibly leave nested mode in order to be able to reset the VCPU later on.
*/
void svm_leave_nested(struct vcpu_svm *svm)
void svm_leave_nested(struct kvm_vcpu *vcpu)
{
struct kvm_vcpu *vcpu = &svm->vcpu;
struct vcpu_svm *svm = to_svm(vcpu);

if (is_guest_mode(vcpu)) {
svm->nested.nested_run_pending = 0;
@@ -1313,7 +1313,7 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
return -EINVAL;

if (!(kvm_state->flags & KVM_STATE_NESTED_GUEST_MODE)) {
svm_leave_nested(svm);
svm_leave_nested(vcpu);
svm_set_gif(svm, !!(kvm_state->flags & KVM_STATE_NESTED_GIF_SET));
return 0;
}
@@ -1378,7 +1378,7 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
*/

if (is_guest_mode(vcpu))
svm_leave_nested(svm);
svm_leave_nested(vcpu);
else
svm->nested.vmcb02.ptr->save = svm->vmcb01.ptr->save;

@@ -1432,6 +1432,7 @@ static bool svm_get_nested_state_pages(struct kvm_vcpu *vcpu)
}

struct kvm_x86_nested_ops svm_nested_ops = {
.leave_nested = svm_leave_nested,
.check_events = svm_check_nested_events,
.triple_fault = nested_svm_triple_fault,
.get_nested_state_pages = svm_get_nested_state_pages,
@@ -281,7 +281,7 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)

if ((old_efer & EFER_SVME) != (efer & EFER_SVME)) {
if (!(efer & EFER_SVME)) {
svm_leave_nested(svm);
svm_leave_nested(vcpu);
svm_set_gif(svm, true);
/* #GP intercept is still needed for vmware backdoor */
if (!enable_vmware_backdoor)
@@ -303,7 +303,11 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
return ret;
}

if (svm_gp_erratum_intercept)
/*
* Never intercept #GP for SEV guests, KVM can't
* decrypt guest memory to workaround the erratum.
*/
if (svm_gp_erratum_intercept && !sev_guest(vcpu->kvm))
set_exception_intercept(svm, GP_VECTOR);
}
}
@@ -1176,9 +1180,10 @@ static void init_vmcb(struct kvm_vcpu *vcpu)
* Guest access to VMware backdoor ports could legitimately
* trigger #GP because of TSS I/O permission bitmap.
* We intercept those #GP and allow access to them anyway
* as VMware does.
* as VMware does. Don't intercept #GP for SEV guests as KVM can't
* decrypt guest memory to decode the faulting instruction.
*/
if (enable_vmware_backdoor)
if (enable_vmware_backdoor && !sev_guest(vcpu->kvm))
set_exception_intercept(svm, GP_VECTOR);

svm_set_intercept(svm, INTERCEPT_INTR);
@@ -2233,10 +2238,6 @@ static int gp_interception(struct kvm_vcpu *vcpu)
if (error_code)
goto reinject;

/* All SVM instructions expect page aligned RAX */
if (svm->vmcb->save.rax & ~PAGE_MASK)
goto reinject;

/* Decode the instruction for usage later */
if (x86_decode_emulated_instruction(vcpu, 0, NULL, 0) != EMULATION_OK)
goto reinject;
@@ -2254,8 +2255,13 @@ static int gp_interception(struct kvm_vcpu *vcpu)
if (!is_guest_mode(vcpu))
return kvm_emulate_instruction(vcpu,
EMULTYPE_VMWARE_GP | EMULTYPE_NO_DECODE);
} else
} else {
/* All SVM instructions expect page aligned RAX */
if (svm->vmcb->save.rax & ~PAGE_MASK)
goto reinject;

return emulate_svm_instr(vcpu, opcode);
}

reinject:
kvm_queue_exception_e(vcpu, GP_VECTOR, error_code);
@@ -4407,8 +4413,13 @@ static bool svm_can_emulate_instruction(struct kvm_vcpu *vcpu, void *insn, int i
bool smep, smap, is_user;
unsigned long cr4;

/* Emulation is always possible when KVM has access to all guest state. */
if (!sev_guest(vcpu->kvm))
return true;

/*
* When the guest is an SEV-ES guest, emulation is not possible.
* Emulation is impossible for SEV-ES guests as KVM doesn't have access
* to guest register state.
*/
if (sev_es_guest(vcpu->kvm))
return false;
@@ -4456,21 +4467,11 @@ static bool svm_can_emulate_instruction(struct kvm_vcpu *vcpu, void *insn, int i
if (likely(!insn || insn_len))
return true;

/*
* If RIP is invalid, go ahead with emulation which will cause an
* internal error exit.
*/
if (!kvm_vcpu_gfn_to_memslot(vcpu, kvm_rip_read(vcpu) >> PAGE_SHIFT))
return true;

cr4 = kvm_read_cr4(vcpu);
smep = cr4 & X86_CR4_SMEP;
smap = cr4 & X86_CR4_SMAP;
is_user = svm_get_cpl(vcpu) == 3;
if (smap && (!smep || is_user)) {
if (!sev_guest(vcpu->kvm))
return true;

pr_err_ratelimited("KVM: SEV Guest triggered AMD Erratum 1096\n");
kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
}
@@ -461,7 +461,7 @@ static inline bool nested_exit_on_nmi(struct vcpu_svm *svm)

int enter_svm_guest_mode(struct kvm_vcpu *vcpu,
u64 vmcb_gpa, struct vmcb *vmcb12, bool from_vmrun);
void svm_leave_nested(struct vcpu_svm *svm);
void svm_leave_nested(struct kvm_vcpu *vcpu);
void svm_free_nested(struct vcpu_svm *svm);
int svm_allocate_nested(struct vcpu_svm *svm);
int nested_svm_vmrun(struct kvm_vcpu *vcpu);

@@ -6748,6 +6748,7 @@ __init int nested_vmx_hardware_setup(int (*exit_handlers[])(struct kvm_vcpu *))
}

struct kvm_x86_nested_ops vmx_nested_ops = {
.leave_nested = vmx_leave_nested,
.check_events = vmx_check_nested_events,
.hv_timer_pending = nested_vmx_preemption_timer_pending,
.triple_fault = nested_vmx_triple_fault,
@@ -3453,6 +3453,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
if (data & ~supported_xss)
return 1;
vcpu->arch.ia32_xss = data;
kvm_update_cpuid_runtime(vcpu);
break;
case MSR_SMI_COUNT:
if (!msr_info->host_initiated)
@@ -4727,8 +4728,10 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
vcpu->arch.apic->sipi_vector = events->sipi_vector;

if (events->flags & KVM_VCPUEVENT_VALID_SMM) {
if (!!(vcpu->arch.hflags & HF_SMM_MASK) != events->smi.smm)
if (!!(vcpu->arch.hflags & HF_SMM_MASK) != events->smi.smm) {
kvm_x86_ops.nested_ops->leave_nested(vcpu);
kvm_smm_changed(vcpu, events->smi.smm);
}

vcpu->arch.smi_pending = events->smi.pending;

@@ -10987,7 +10990,8 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)

vcpu->arch.msr_misc_features_enables = 0;

vcpu->arch.xcr0 = XFEATURE_MASK_FP;
__kvm_set_xcr(vcpu, 0, XFEATURE_MASK_FP);
__kvm_set_msr(vcpu, MSR_IA32_XSS, 0, true);
}

memset(vcpu->arch.regs, 0, sizeof(vcpu->arch.regs));
@@ -11006,8 +11010,6 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
eax = 0x600;
kvm_rdx_write(vcpu, eax);

vcpu->arch.ia32_xss = 0;

static_call(kvm_x86_vcpu_reset)(vcpu, init_event);

kvm_set_rflags(vcpu, X86_EFLAGS_FIXED);

@@ -353,8 +353,8 @@ static void pci_fixup_video(struct pci_dev *pdev)
}
}
}
DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_ANY_ID, PCI_ANY_ID,
PCI_CLASS_DISPLAY_VGA, 8, pci_fixup_video);
DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_ANY_ID, PCI_ANY_ID,
PCI_CLASS_DISPLAY_VGA, 8, pci_fixup_video);

static const struct dmi_system_id msi_k8t_dmi_table[] = {
@@ -570,7 +570,8 @@ void bio_truncate(struct bio *bio, unsigned new_size)
offset = new_size - done;
else
offset = 0;
zero_user(bv.bv_page, offset, bv.bv_len - offset);
zero_user(bv.bv_page, bv.bv_offset + offset,
bv.bv_len - offset);
truncated = true;
}
done += bv.bv_len;

@@ -1293,21 +1293,33 @@ void blk_account_io_start(struct request *rq)
}

static unsigned long __part_start_io_acct(struct block_device *part,
unsigned int sectors, unsigned int op)
unsigned int sectors, unsigned int op,
unsigned long start_time)
{
const int sgrp = op_stat_group(op);
unsigned long now = READ_ONCE(jiffies);

part_stat_lock();
update_io_ticks(part, now, false);
update_io_ticks(part, start_time, false);
part_stat_inc(part, ios[sgrp]);
part_stat_add(part, sectors[sgrp], sectors);
part_stat_local_inc(part, in_flight[op_is_write(op)]);
part_stat_unlock();

return now;
return start_time;
}

/**
* bio_start_io_acct_time - start I/O accounting for bio based drivers
* @bio: bio to start account for
* @start_time: start time that should be passed back to bio_end_io_acct().
*/
void bio_start_io_acct_time(struct bio *bio, unsigned long start_time)
{
__part_start_io_acct(bio->bi_bdev, bio_sectors(bio),
bio_op(bio), start_time);
}
EXPORT_SYMBOL_GPL(bio_start_io_acct_time);

/**
* bio_start_io_acct - start I/O accounting for bio based drivers
* @bio: bio to start account for
@@ -1316,14 +1328,15 @@ static unsigned long __part_start_io_acct(struct block_device *part,
*/
unsigned long bio_start_io_acct(struct bio *bio)
{
return __part_start_io_acct(bio->bi_bdev, bio_sectors(bio), bio_op(bio));
return __part_start_io_acct(bio->bi_bdev, bio_sectors(bio),
bio_op(bio), jiffies);
}
EXPORT_SYMBOL_GPL(bio_start_io_acct);

unsigned long disk_start_io_acct(struct gendisk *disk, unsigned int sectors,
unsigned int op)
{
return __part_start_io_acct(disk->part0, sectors, op);
return __part_start_io_acct(disk->part0, sectors, op, jiffies);
}
EXPORT_SYMBOL(disk_start_io_acct);
@@ -719,6 +719,13 @@ void __init efi_systab_report_header(const efi_table_hdr_t *systab_hdr,
systab_hdr->revision >> 16,
systab_hdr->revision & 0xffff,
vendor);

if (IS_ENABLED(CONFIG_X86_64) &&
systab_hdr->revision > EFI_1_10_SYSTEM_TABLE_REVISION &&
!strcmp(vendor, "Apple")) {
pr_info("Apple Mac detected, using EFI v1.10 runtime services only\n");
efi.runtime_version = EFI_1_10_SYSTEM_TABLE_REVISION;
}
}

static __initdata char memory_type_name[][13] = {

@@ -119,9 +119,9 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
if (image->image_base != _text)
efi_err("FIRMWARE BUG: efi_loaded_image_t::image_base has bogus value\n");

if (!IS_ALIGNED((u64)_text, EFI_KIMG_ALIGN))
efi_err("FIRMWARE BUG: kernel image not aligned on %ldk boundary\n",
EFI_KIMG_ALIGN >> 10);
if (!IS_ALIGNED((u64)_text, SEGMENT_ALIGN))
efi_err("FIRMWARE BUG: kernel image not aligned on %dk boundary\n",
SEGMENT_ALIGN >> 10);

kernel_size = _edata - _text;
kernel_memsize = kernel_size + (_end - _edata);
@@ -1879,7 +1879,6 @@ static noinline bool dcn30_internal_validate_bw(
dc->res_pool->funcs->update_soc_for_wm_a(dc, context);
pipe_cnt = dc->res_pool->funcs->populate_dml_pipes(dc, context, pipes, fast_validate);

DC_FP_START();
if (!pipe_cnt) {
out = true;
goto validate_out;
@@ -2103,7 +2102,6 @@ validate_fail:
out = false;

validate_out:
DC_FP_END();
return out;
}

@@ -2304,7 +2302,9 @@ bool dcn30_validate_bandwidth(struct dc *dc,

BW_VAL_TRACE_COUNT();

DC_FP_START();
out = dcn30_internal_validate_bw(dc, context, pipes, &pipe_cnt, &vlevel, fast_validate);
DC_FP_END();

if (pipe_cnt == 0)
goto validate_out;

@@ -282,8 +282,6 @@ static const struct ast_vbios_enhtable res_1360x768[] = {
};

static const struct ast_vbios_enhtable res_1600x900[] = {
{1800, 1600, 24, 80, 1000, 900, 1, 3, VCLK108, /* 60Hz */
(SyncPP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 3, 0x3A },
{1760, 1600, 48, 32, 926, 900, 3, 5, VCLK97_75, /* 60Hz CVT RB */
(SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo |
AST2500PreCatchCRT), 60, 1, 0x3A },
@@ -1310,8 +1310,10 @@ int drm_atomic_check_only(struct drm_atomic_state *state)

DRM_DEBUG_ATOMIC("checking %p\n", state);

for_each_new_crtc_in_state(state, crtc, new_crtc_state, i)
requested_crtc |= drm_crtc_mask(crtc);
for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) {
if (new_crtc_state->enable)
requested_crtc |= drm_crtc_mask(crtc);
}

for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, i) {
ret = drm_atomic_plane_check(old_plane_state, new_plane_state);
@@ -1360,8 +1362,10 @@ int drm_atomic_check_only(struct drm_atomic_state *state)
}
}

for_each_new_crtc_in_state(state, crtc, new_crtc_state, i)
affected_crtc |= drm_crtc_mask(crtc);
for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) {
if (new_crtc_state->enable)
affected_crtc |= drm_crtc_mask(crtc);
}

/*
* For commits that allow modesets drivers can add other CRTCs to the

@@ -469,8 +469,8 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
return -EINVAL;
}

if (args->stream_size > SZ_64K || args->nr_relocs > SZ_64K ||
args->nr_bos > SZ_64K || args->nr_pmrs > 128) {
if (args->stream_size > SZ_128K || args->nr_relocs > SZ_128K ||
args->nr_bos > SZ_128K || args->nr_pmrs > 128) {
DRM_ERROR("submit arguments out of size limits\n");
return -EINVAL;
}
@@ -1557,6 +1557,8 @@ static int a6xx_pm_suspend(struct msm_gpu *gpu)
for (i = 0; i < gpu->nr_rings; i++)
a6xx_gpu->shadow[i] = 0;

gpu->suspend_count++;

return 0;
}
@@ -26,9 +26,16 @@ static void dpu_setup_dspp_pcc(struct dpu_hw_dspp *ctx,
struct dpu_hw_pcc_cfg *cfg)
{

u32 base = ctx->cap->sblk->pcc.base;
u32 base;

if (!ctx || !base) {
if (!ctx) {
DRM_ERROR("invalid ctx %pK\n", ctx);
return;
}

base = ctx->cap->sblk->pcc.base;

if (!base) {
DRM_ERROR("invalid ctx %pK pcc base 0x%x\n", ctx, base);
return;
}
@@ -40,7 +40,12 @@ static int dsi_get_phy(struct msm_dsi *msm_dsi)

of_node_put(phy_node);

if (!phy_pdev || !msm_dsi->phy) {
if (!phy_pdev) {
DRM_DEV_ERROR(&pdev->dev, "%s: phy driver is not ready\n", __func__);
return -EPROBE_DEFER;
}
if (!msm_dsi->phy) {
put_device(&phy_pdev->dev);
DRM_DEV_ERROR(&pdev->dev, "%s: phy driver is not ready\n", __func__);
return -EPROBE_DEFER;
}
@@ -806,12 +806,14 @@ int msm_dsi_phy_enable(struct msm_dsi_phy *phy,
struct msm_dsi_phy_clk_request *clk_req,
struct msm_dsi_phy_shared_timings *shared_timings)
{
struct device *dev = &phy->pdev->dev;
struct device *dev;
int ret;

if (!phy || !phy->cfg->ops.enable)
return -EINVAL;

dev = &phy->pdev->dev;

ret = dsi_phy_enable_resource(phy);
if (ret) {
DRM_DEV_ERROR(dev, "%s: resource enable failed, %d\n",
@@ -97,10 +97,15 @@ static int msm_hdmi_get_phy(struct hdmi *hdmi)

of_node_put(phy_node);

if (!phy_pdev || !hdmi->phy) {
if (!phy_pdev) {
DRM_DEV_ERROR(&pdev->dev, "phy driver is not ready\n");
return -EPROBE_DEFER;
}
if (!hdmi->phy) {
DRM_DEV_ERROR(&pdev->dev, "phy driver is not ready\n");
put_device(&phy_pdev->dev);
return -EPROBE_DEFER;
}

hdmi->phy_dev = get_device(&phy_pdev->dev);
@@ -437,7 +437,7 @@ static int msm_init_vram(struct drm_device *dev)
of_node_put(node);
if (ret)
return ret;
size = r.end - r.start;
size = r.end - r.start + 1;
DRM_INFO("using VRAM carveout: %lx@%pa\n", size, &r.start);

/* if we have no IOMMU, then we need to use carveout allocator.
@@ -1660,6 +1660,13 @@ static int balloon_connect_vsp(struct hv_device *dev)
unsigned long t;
int ret;

/*
* max_pkt_size should be large enough for one vmbus packet header plus
* our receive buffer size. Hyper-V sends messages up to
* HV_HYP_PAGE_SIZE bytes long on balloon channel.
*/
dev->channel->max_pkt_size = HV_HYP_PAGE_SIZE * 2;

ret = vmbus_open(dev->channel, dm_ring_size, dm_ring_size, NULL, 0,
balloon_onchannelcallback, dev);
if (ret)
@@ -662,6 +662,9 @@ static int adt7470_fan_write(struct device *dev, u32 attr, int channel, long val
struct adt7470_data *data = dev_get_drvdata(dev);
int err;

if (val <= 0)
return -EINVAL;

val = FAN_RPM_TO_PERIOD(val);
val = clamp_val(val, 1, 65534);
@@ -373,7 +373,7 @@ static const struct lm90_params lm90_params[] = {
.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
| LM90_HAVE_BROKEN_ALERT | LM90_HAVE_CRIT,
.alert_alarms = 0x7c,
.max_convrate = 8,
.max_convrate = 7,
},
[lm86] = {
.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
@@ -394,12 +394,13 @@ static const struct lm90_params lm90_params[] = {
.max_convrate = 9,
},
[max6646] = {
.flags = LM90_HAVE_CRIT,
.flags = LM90_HAVE_CRIT | LM90_HAVE_BROKEN_ALERT,
.alert_alarms = 0x7c,
.max_convrate = 6,
.reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL,
},
[max6654] = {
.flags = LM90_HAVE_BROKEN_ALERT,
.alert_alarms = 0x7c,
.max_convrate = 7,
.reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL,
@@ -418,7 +419,7 @@ static const struct lm90_params lm90_params[] = {
},
[max6680] = {
.flags = LM90_HAVE_OFFSET | LM90_HAVE_CRIT
| LM90_HAVE_CRIT_ALRM_SWP,
| LM90_HAVE_CRIT_ALRM_SWP | LM90_HAVE_BROKEN_ALERT,
.alert_alarms = 0x7c,
.max_convrate = 7,
},
@@ -848,7 +849,7 @@ static int lm90_update_device(struct device *dev)
* Re-enable ALERT# output if it was originally enabled and
* relevant alarms are all clear
*/
if (!(data->config_orig & 0x80) &&
if ((client->irq || !(data->config_orig & 0x80)) &&
!(data->alarms & data->alert_alarms)) {
if (data->config & 0x80) {
dev_dbg(&client->dev, "Re-enabling ALERT#\n");
@@ -1807,22 +1808,22 @@ static bool lm90_is_tripped(struct i2c_client *client, u16 *status)
|
||||
|
||||
if (st & LM90_STATUS_LLOW)
|
||||
hwmon_notify_event(data->hwmon_dev, hwmon_temp,
|
||||
hwmon_temp_min, 0);
|
||||
hwmon_temp_min_alarm, 0);
|
||||
if (st & LM90_STATUS_RLOW)
|
||||
hwmon_notify_event(data->hwmon_dev, hwmon_temp,
|
||||
hwmon_temp_min, 1);
|
||||
hwmon_temp_min_alarm, 1);
|
||||
if (st2 & MAX6696_STATUS2_R2LOW)
|
||||
hwmon_notify_event(data->hwmon_dev, hwmon_temp,
|
||||
hwmon_temp_min, 2);
|
||||
hwmon_temp_min_alarm, 2);
|
||||
if (st & LM90_STATUS_LHIGH)
|
||||
hwmon_notify_event(data->hwmon_dev, hwmon_temp,
|
||||
hwmon_temp_max, 0);
|
||||
hwmon_temp_max_alarm, 0);
|
||||
if (st & LM90_STATUS_RHIGH)
|
||||
hwmon_notify_event(data->hwmon_dev, hwmon_temp,
|
||||
hwmon_temp_max, 1);
|
||||
hwmon_temp_max_alarm, 1);
|
||||
if (st2 & MAX6696_STATUS2_R2HIGH)
|
||||
hwmon_notify_event(data->hwmon_dev, hwmon_temp,
|
||||
hwmon_temp_max, 2);
|
||||
hwmon_temp_max_alarm, 2);
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
@@ -62,7 +62,7 @@ static struct irq_chip realtek_ictl_irq = {
|
||||
|
||||
static int intc_map(struct irq_domain *d, unsigned int irq, irq_hw_number_t hw)
|
||||
{
|
||||
irq_set_chip_and_handler(hw, &realtek_ictl_irq, handle_level_irq);
|
||||
irq_set_chip_and_handler(irq, &realtek_ictl_irq, handle_level_irq);
|
||||
|
||||
return 0;
|
||||
}
|
||||
@@ -95,7 +95,8 @@ out:
|
||||
* SoC interrupts are cascaded to MIPS CPU interrupts according to the
|
||||
* interrupt-map in the device tree. Each SoC interrupt gets 4 bits for
|
||||
* the CPU interrupt in an Interrupt Routing Register. Max 32 SoC interrupts
|
||||
* thus go into 4 IRRs.
|
||||
* thus go into 4 IRRs. A routing value of '0' means the interrupt is left
|
||||
* disconnected. Routing values {1..15} connect to output lines {0..14}.
|
||||
*/
|
||||
static int __init map_interrupts(struct device_node *node, struct irq_domain *domain)
|
||||
{
|
||||
@@ -134,7 +135,7 @@ static int __init map_interrupts(struct device_node *node, struct irq_domain *do
|
||||
of_node_put(cpu_ictl);
|
||||
|
||||
cpu_int = be32_to_cpup(imap + 2);
|
||||
if (cpu_int > 7)
|
||||
if (cpu_int > 7 || cpu_int < 2)
|
||||
return -EINVAL;
|
||||
|
||||
if (!(mips_irqs_set & BIT(cpu_int))) {
|
||||
@@ -143,7 +144,8 @@ static int __init map_interrupts(struct device_node *node, struct irq_domain *do
|
||||
mips_irqs_set |= BIT(cpu_int);
|
||||
}
|
||||
|
||||
regs[(soc_int * 4) / 32] |= cpu_int << (soc_int * 4) % 32;
|
||||
/* Use routing values (1..6) for CPU interrupts (2..7) */
|
||||
regs[(soc_int * 4) / 32] |= (cpu_int - 1) << (soc_int * 4) % 32;
|
||||
imap += 3;
|
||||
}
|
||||
|
||||
|
||||
@@ -489,7 +489,7 @@ static void start_io_acct(struct dm_io *io)
|
||||
struct mapped_device *md = io->md;
|
||||
struct bio *bio = io->orig_bio;
|
||||
|
||||
io->start_time = bio_start_io_acct(bio);
|
||||
bio_start_io_acct_time(bio, io->start_time);
|
||||
if (unlikely(dm_stats_used(&md->stats)))
|
||||
dm_stats_account_io(&md->stats, bio_data_dir(bio),
|
||||
bio->bi_iter.bi_sector, bio_sectors(bio),
|
||||
@@ -535,7 +535,7 @@ static struct dm_io *alloc_io(struct mapped_device *md, struct bio *bio)
|
||||
io->md = md;
|
||||
spin_lock_init(&io->endio_lock);
|
||||
|
||||
start_io_acct(io);
|
||||
io->start_time = jiffies;
|
||||
|
||||
return io;
|
||||
}
|
||||
@@ -1514,9 +1514,6 @@ static void init_clone_info(struct clone_info *ci, struct mapped_device *md,
|
||||
ci->sector = bio->bi_iter.bi_sector;
|
||||
}
|
||||
|
||||
#define __dm_part_stat_sub(part, field, subnd) \
|
||||
(part_stat_get(part, field) -= (subnd))
|
||||
|
||||
/*
|
||||
* Entry point to split a bio into clones and submit them to the targets.
|
||||
*/
|
||||
@@ -1553,23 +1550,12 @@ static blk_qc_t __split_and_process_bio(struct mapped_device *md,
|
||||
GFP_NOIO, &md->queue->bio_split);
|
||||
ci.io->orig_bio = b;
|
||||
|
||||
/*
|
||||
* Adjust IO stats for each split, otherwise upon queue
|
||||
* reentry there will be redundant IO accounting.
|
||||
* NOTE: this is a stop-gap fix, a proper fix involves
|
||||
* significant refactoring of DM core's bio splitting
|
||||
* (by eliminating DM's splitting and just using bio_split)
|
||||
*/
|
||||
part_stat_lock();
|
||||
__dm_part_stat_sub(dm_disk(md)->part0,
|
||||
sectors[op_stat_group(bio_op(bio))], ci.sector_count);
|
||||
part_stat_unlock();
|
||||
|
||||
bio_chain(b, bio);
|
||||
trace_block_split(b, bio->bi_iter.bi_sector);
|
||||
ret = submit_bio_noacct(bio);
|
||||
}
|
||||
}
|
||||
start_io_acct(ci.io);
|
||||
|
||||
/* drop the extra reference count */
|
||||
dm_io_dec_pending(ci.io, errno_to_blk_status(error));
|
||||
|
||||
@@ -291,7 +291,6 @@ static int ads5121_chipselect_init(struct mtd_info *mtd)
|
||||
/* Control chips select signal on ADS5121 board */
|
||||
static void ads5121_select_chip(struct nand_chip *nand, int chip)
|
||||
{
|
||||
struct mtd_info *mtd = nand_to_mtd(nand);
|
||||
struct mpc5121_nfc_prv *prv = nand_get_controller_data(nand);
|
||||
u8 v;
|
||||
|
||||
|
||||
@@ -336,6 +336,9 @@ m_can_fifo_read(struct m_can_classdev *cdev,
|
||||
u32 addr_offset = cdev->mcfg[MRAM_RXF0].off + fgi * RXF0_ELEMENT_SIZE +
|
||||
offset;
|
||||
|
||||
if (val_count == 0)
|
||||
return 0;
|
||||
|
||||
return cdev->ops->read_fifo(cdev, addr_offset, val, val_count);
|
||||
}
|
||||
|
||||
@@ -346,6 +349,9 @@ m_can_fifo_write(struct m_can_classdev *cdev,
|
||||
u32 addr_offset = cdev->mcfg[MRAM_TXB].off + fpi * TXB_ELEMENT_SIZE +
|
||||
offset;
|
||||
|
||||
if (val_count == 0)
|
||||
return 0;
|
||||
|
||||
return cdev->ops->write_fifo(cdev, addr_offset, val, val_count);
|
||||
}
|
||||
|
||||
|
||||
@@ -12,7 +12,7 @@
|
||||
#define TCAN4X5X_SPI_INSTRUCTION_WRITE (0x61 << 24)
|
||||
#define TCAN4X5X_SPI_INSTRUCTION_READ (0x41 << 24)
|
||||
|
||||
#define TCAN4X5X_MAX_REGISTER 0x8ffc
|
||||
#define TCAN4X5X_MAX_REGISTER 0x87fc
|
||||
|
||||
static int tcan4x5x_regmap_gather_write(void *context,
|
||||
const void *reg, size_t reg_len,
|
||||
|
||||
@@ -815,7 +815,7 @@ static inline bool gve_is_gqi(struct gve_priv *priv)
|
||||
/* buffers */
|
||||
int gve_alloc_page(struct gve_priv *priv, struct device *dev,
|
||||
struct page **page, dma_addr_t *dma,
|
||||
enum dma_data_direction);
|
||||
enum dma_data_direction, gfp_t gfp_flags);
|
||||
void gve_free_page(struct device *dev, struct page *page, dma_addr_t dma,
|
||||
enum dma_data_direction);
|
||||
/* tx handling */
|
||||
|
||||
@@ -746,9 +746,9 @@ static void gve_free_rings(struct gve_priv *priv)
|
||||
|
||||
int gve_alloc_page(struct gve_priv *priv, struct device *dev,
|
||||
struct page **page, dma_addr_t *dma,
|
||||
enum dma_data_direction dir)
|
||||
enum dma_data_direction dir, gfp_t gfp_flags)
|
||||
{
|
||||
*page = alloc_page(GFP_KERNEL);
|
||||
*page = alloc_page(gfp_flags);
|
||||
if (!*page) {
|
||||
priv->page_alloc_fail++;
|
||||
return -ENOMEM;
|
||||
@@ -792,7 +792,7 @@ static int gve_alloc_queue_page_list(struct gve_priv *priv, u32 id,
|
||||
for (i = 0; i < pages; i++) {
|
||||
err = gve_alloc_page(priv, &priv->pdev->dev, &qpl->pages[i],
|
||||
&qpl->page_buses[i],
|
||||
gve_qpl_dma_dir(priv, id));
|
||||
gve_qpl_dma_dir(priv, id), GFP_KERNEL);
|
||||
/* caller handles clean up */
|
||||
if (err)
|
||||
return -ENOMEM;
|
||||
|
||||
@@ -79,7 +79,8 @@ static int gve_rx_alloc_buffer(struct gve_priv *priv, struct device *dev,
|
||||
dma_addr_t dma;
|
||||
int err;
|
||||
|
||||
err = gve_alloc_page(priv, dev, &page, &dma, DMA_FROM_DEVICE);
|
||||
err = gve_alloc_page(priv, dev, &page, &dma, DMA_FROM_DEVICE,
|
||||
GFP_ATOMIC);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
|
||||
@@ -157,7 +157,7 @@ static int gve_alloc_page_dqo(struct gve_priv *priv,
|
||||
int err;
|
||||
|
||||
err = gve_alloc_page(priv, &priv->pdev->dev, &buf_state->page_info.page,
|
||||
&buf_state->addr, DMA_FROM_DEVICE);
|
||||
&buf_state->addr, DMA_FROM_DEVICE, GFP_KERNEL);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
|
||||
@@ -2496,8 +2496,7 @@ static irqreturn_t hclgevf_misc_irq_handle(int irq, void *data)
|
||||
break;
|
||||
}
|
||||
|
||||
if (event_cause != HCLGEVF_VECTOR0_EVENT_OTHER)
|
||||
hclgevf_enable_vector(&hdev->misc_vector, true);
|
||||
hclgevf_enable_vector(&hdev->misc_vector, true);
|
||||
|
||||
return IRQ_HANDLED;
|
||||
}
|
||||
|
||||
@@ -2424,6 +2424,7 @@ static void __ibmvnic_reset(struct work_struct *work)
|
||||
struct ibmvnic_rwi *rwi;
|
||||
unsigned long flags;
|
||||
u32 reset_state;
|
||||
int num_fails = 0;
|
||||
int rc = 0;
|
||||
|
||||
adapter = container_of(work, struct ibmvnic_adapter, ibmvnic_reset);
|
||||
@@ -2477,11 +2478,23 @@ static void __ibmvnic_reset(struct work_struct *work)
|
||||
rc = do_hard_reset(adapter, rwi, reset_state);
|
||||
rtnl_unlock();
|
||||
}
|
||||
if (rc) {
|
||||
/* give backing device time to settle down */
|
||||
if (rc)
|
||||
num_fails++;
|
||||
else
|
||||
num_fails = 0;
|
||||
|
||||
/* If auto-priority-failover is enabled we can get
|
||||
* back to back failovers during resets, resulting
|
||||
* in at least two failed resets (from high-priority
|
||||
* backing device to low-priority one and then back)
|
||||
* If resets continue to fail beyond that, give the
|
||||
* adapter some time to settle down before retrying.
|
||||
*/
|
||||
if (num_fails >= 3) {
|
||||
netdev_dbg(adapter->netdev,
|
||||
"[S:%s] Hard reset failed, waiting 60 secs\n",
|
||||
adapter_state_to_string(adapter->state));
|
||||
"[S:%s] Hard reset failed %d times, waiting 60 secs\n",
|
||||
adapter_state_to_string(adapter->state),
|
||||
num_fails);
|
||||
set_current_state(TASK_UNINTERRUPTIBLE);
|
||||
schedule_timeout(60 * HZ);
|
||||
}
|
||||
@@ -3662,11 +3675,25 @@ static void send_request_cap(struct ibmvnic_adapter *adapter, int retry)
|
||||
struct device *dev = &adapter->vdev->dev;
|
||||
union ibmvnic_crq crq;
|
||||
int max_entries;
|
||||
int cap_reqs;
|
||||
|
||||
/* We send out 6 or 7 REQUEST_CAPABILITY CRQs below (depending on
|
||||
* the PROMISC flag). Initialize this count upfront. When the tasklet
|
||||
* receives a response to all of these, it will send the next protocol
|
||||
* message (QUERY_IP_OFFLOAD).
|
||||
*/
|
||||
if (!(adapter->netdev->flags & IFF_PROMISC) ||
|
||||
adapter->promisc_supported)
|
||||
cap_reqs = 7;
|
||||
else
|
||||
cap_reqs = 6;
|
||||
|
||||
if (!retry) {
|
||||
/* Sub-CRQ entries are 32 byte long */
|
||||
int entries_page = 4 * PAGE_SIZE / (sizeof(u64) * 4);
|
||||
|
||||
atomic_set(&adapter->running_cap_crqs, cap_reqs);
|
||||
|
||||
if (adapter->min_tx_entries_per_subcrq > entries_page ||
|
||||
adapter->min_rx_add_entries_per_subcrq > entries_page) {
|
||||
dev_err(dev, "Fatal, invalid entries per sub-crq\n");
|
||||
@@ -3727,44 +3754,45 @@ static void send_request_cap(struct ibmvnic_adapter *adapter, int retry)
|
||||
adapter->opt_rx_comp_queues;
|
||||
|
||||
adapter->req_rx_add_queues = adapter->max_rx_add_queues;
|
||||
} else {
|
||||
atomic_add(cap_reqs, &adapter->running_cap_crqs);
|
||||
}
|
||||
|
||||
memset(&crq, 0, sizeof(crq));
|
||||
crq.request_capability.first = IBMVNIC_CRQ_CMD;
|
||||
crq.request_capability.cmd = REQUEST_CAPABILITY;
|
||||
|
||||
crq.request_capability.capability = cpu_to_be16(REQ_TX_QUEUES);
|
||||
crq.request_capability.number = cpu_to_be64(adapter->req_tx_queues);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
cap_reqs--;
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
|
||||
crq.request_capability.capability = cpu_to_be16(REQ_RX_QUEUES);
|
||||
crq.request_capability.number = cpu_to_be64(adapter->req_rx_queues);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
cap_reqs--;
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
|
||||
crq.request_capability.capability = cpu_to_be16(REQ_RX_ADD_QUEUES);
|
||||
crq.request_capability.number = cpu_to_be64(adapter->req_rx_add_queues);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
cap_reqs--;
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
|
||||
crq.request_capability.capability =
|
||||
cpu_to_be16(REQ_TX_ENTRIES_PER_SUBCRQ);
|
||||
crq.request_capability.number =
|
||||
cpu_to_be64(adapter->req_tx_entries_per_subcrq);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
cap_reqs--;
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
|
||||
crq.request_capability.capability =
|
||||
cpu_to_be16(REQ_RX_ADD_ENTRIES_PER_SUBCRQ);
|
||||
crq.request_capability.number =
|
||||
cpu_to_be64(adapter->req_rx_add_entries_per_subcrq);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
cap_reqs--;
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
|
||||
crq.request_capability.capability = cpu_to_be16(REQ_MTU);
|
||||
crq.request_capability.number = cpu_to_be64(adapter->req_mtu);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
cap_reqs--;
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
|
||||
if (adapter->netdev->flags & IFF_PROMISC) {
|
||||
@@ -3772,16 +3800,21 @@ static void send_request_cap(struct ibmvnic_adapter *adapter, int retry)
|
||||
crq.request_capability.capability =
|
||||
cpu_to_be16(PROMISC_REQUESTED);
|
||||
crq.request_capability.number = cpu_to_be64(1);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
cap_reqs--;
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
}
|
||||
} else {
|
||||
crq.request_capability.capability =
|
||||
cpu_to_be16(PROMISC_REQUESTED);
|
||||
crq.request_capability.number = cpu_to_be64(0);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
cap_reqs--;
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
}
|
||||
|
||||
/* Keep at end to catch any discrepancy between expected and actual
|
||||
* CRQs sent.
|
||||
*/
|
||||
WARN_ON(cap_reqs != 0);
|
||||
}
|
||||
|
||||
static int pending_scrq(struct ibmvnic_adapter *adapter,
|
||||
@@ -4175,118 +4208,132 @@ static void send_query_map(struct ibmvnic_adapter *adapter)
|
||||
static void send_query_cap(struct ibmvnic_adapter *adapter)
|
||||
{
|
||||
union ibmvnic_crq crq;
|
||||
int cap_reqs;
|
||||
|
||||
/* We send out 25 QUERY_CAPABILITY CRQs below. Initialize this count
|
||||
* upfront. When the tasklet receives a response to all of these, it
|
||||
* can send out the next protocol messaage (REQUEST_CAPABILITY).
|
||||
*/
|
||||
cap_reqs = 25;
|
||||
|
||||
atomic_set(&adapter->running_cap_crqs, cap_reqs);
|
||||
|
||||
atomic_set(&adapter->running_cap_crqs, 0);
|
||||
memset(&crq, 0, sizeof(crq));
|
||||
crq.query_capability.first = IBMVNIC_CRQ_CMD;
|
||||
crq.query_capability.cmd = QUERY_CAPABILITY;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(MIN_TX_QUEUES);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(MIN_RX_QUEUES);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(MIN_RX_ADD_QUEUES);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(MAX_TX_QUEUES);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(MAX_RX_QUEUES);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(MAX_RX_ADD_QUEUES);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability =
|
||||
cpu_to_be16(MIN_TX_ENTRIES_PER_SUBCRQ);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability =
|
||||
cpu_to_be16(MIN_RX_ADD_ENTRIES_PER_SUBCRQ);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability =
|
||||
cpu_to_be16(MAX_TX_ENTRIES_PER_SUBCRQ);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability =
|
||||
cpu_to_be16(MAX_RX_ADD_ENTRIES_PER_SUBCRQ);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(TCP_IP_OFFLOAD);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(PROMISC_SUPPORTED);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(MIN_MTU);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(MAX_MTU);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(MAX_MULTICAST_FILTERS);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(VLAN_HEADER_INSERTION);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(RX_VLAN_HEADER_INSERTION);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(MAX_TX_SG_ENTRIES);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(RX_SG_SUPPORTED);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(OPT_TX_COMP_SUB_QUEUES);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(OPT_RX_COMP_QUEUES);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability =
|
||||
cpu_to_be16(OPT_RX_BUFADD_Q_PER_RX_COMP_Q);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability =
|
||||
cpu_to_be16(OPT_TX_ENTRIES_PER_SUBCRQ);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability =
|
||||
cpu_to_be16(OPT_RXBA_ENTRIES_PER_SUBCRQ);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(TX_RX_DESC_REQ);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
/* Keep at end to catch any discrepancy between expected and actual
|
||||
* CRQs sent.
|
||||
*/
|
||||
WARN_ON(cap_reqs != 0);
|
||||
}
|
||||
|
||||
static void send_query_ip_offload(struct ibmvnic_adapter *adapter)
|
||||
@@ -4591,6 +4638,8 @@ static void handle_request_cap_rsp(union ibmvnic_crq *crq,
|
||||
char *name;
|
||||
|
||||
atomic_dec(&adapter->running_cap_crqs);
|
||||
netdev_dbg(adapter->netdev, "Outstanding request-caps: %d\n",
|
||||
atomic_read(&adapter->running_cap_crqs));
|
||||
switch (be16_to_cpu(crq->request_capability_rsp.capability)) {
|
||||
case REQ_TX_QUEUES:
|
||||
req_value = &adapter->req_tx_queues;
|
||||
@@ -5268,12 +5317,6 @@ static void ibmvnic_tasklet(struct tasklet_struct *t)
|
||||
ibmvnic_handle_crq(crq, adapter);
|
||||
crq->generic.first = 0;
|
||||
}
|
||||
|
||||
/* remain in tasklet until all
|
||||
* capabilities responses are received
|
||||
*/
|
||||
if (!adapter->wait_capability)
|
||||
done = true;
|
||||
}
|
||||
/* if capabilities CRQ's were sent in this tasklet, the following
|
||||
* tasklet must wait until all responses are received
|
||||
|
||||
@@ -174,7 +174,6 @@ enum i40e_interrupt_policy {
|
||||
|
||||
struct i40e_lump_tracking {
|
||||
u16 num_entries;
|
||||
u16 search_hint;
|
||||
u16 list[0];
|
||||
#define I40E_PILE_VALID_BIT 0x8000
|
||||
#define I40E_IWARP_IRQ_PILE_ID (I40E_PILE_VALID_BIT - 2)
|
||||
@@ -848,12 +847,12 @@ struct i40e_vsi {
|
||||
struct rtnl_link_stats64 net_stats_offsets;
|
||||
struct i40e_eth_stats eth_stats;
|
||||
struct i40e_eth_stats eth_stats_offsets;
|
||||
u32 tx_restart;
|
||||
u32 tx_busy;
|
||||
u64 tx_restart;
|
||||
u64 tx_busy;
|
||||
u64 tx_linearize;
|
||||
u64 tx_force_wb;
|
||||
u32 rx_buf_failed;
|
||||
u32 rx_page_failed;
|
||||
u64 rx_buf_failed;
|
||||
u64 rx_page_failed;
|
||||
|
||||
/* These are containers of ring pointers, allocated at run-time */
|
||||
struct i40e_ring **rx_rings;
|
||||
|
||||
@@ -240,7 +240,7 @@ static void i40e_dbg_dump_vsi_seid(struct i40e_pf *pf, int seid)
|
||||
(unsigned long int)vsi->net_stats_offsets.rx_compressed,
|
||||
(unsigned long int)vsi->net_stats_offsets.tx_compressed);
|
||||
dev_info(&pf->pdev->dev,
|
||||
" tx_restart = %d, tx_busy = %d, rx_buf_failed = %d, rx_page_failed = %d\n",
|
||||
" tx_restart = %llu, tx_busy = %llu, rx_buf_failed = %llu, rx_page_failed = %llu\n",
|
||||
vsi->tx_restart, vsi->tx_busy,
|
||||
vsi->rx_buf_failed, vsi->rx_page_failed);
|
||||
rcu_read_lock();
|
||||
|
||||
@@ -196,10 +196,6 @@ int i40e_free_virt_mem_d(struct i40e_hw *hw, struct i40e_virt_mem *mem)
|
||||
* @id: an owner id to stick on the items assigned
|
||||
*
|
||||
* Returns the base item index of the lump, or negative for error
|
||||
*
|
||||
* The search_hint trick and lack of advanced fit-finding only work
|
||||
* because we're highly likely to have all the same size lump requests.
|
||||
* Linear search time and any fragmentation should be minimal.
|
||||
**/
|
||||
static int i40e_get_lump(struct i40e_pf *pf, struct i40e_lump_tracking *pile,
|
||||
u16 needed, u16 id)
|
||||
@@ -214,8 +210,21 @@ static int i40e_get_lump(struct i40e_pf *pf, struct i40e_lump_tracking *pile,
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* start the linear search with an imperfect hint */
|
||||
i = pile->search_hint;
|
||||
/* Allocate last queue in the pile for FDIR VSI queue
|
||||
* so it doesn't fragment the qp_pile
|
||||
*/
|
||||
if (pile == pf->qp_pile && pf->vsi[id]->type == I40E_VSI_FDIR) {
|
||||
if (pile->list[pile->num_entries - 1] & I40E_PILE_VALID_BIT) {
|
||||
dev_err(&pf->pdev->dev,
|
||||
"Cannot allocate queue %d for I40E_VSI_FDIR\n",
|
||||
pile->num_entries - 1);
|
||||
return -ENOMEM;
|
||||
}
|
||||
pile->list[pile->num_entries - 1] = id | I40E_PILE_VALID_BIT;
|
||||
return pile->num_entries - 1;
|
||||
}
|
||||
|
||||
i = 0;
|
||||
while (i < pile->num_entries) {
|
||||
/* skip already allocated entries */
|
||||
if (pile->list[i] & I40E_PILE_VALID_BIT) {
|
||||
@@ -234,7 +243,6 @@ static int i40e_get_lump(struct i40e_pf *pf, struct i40e_lump_tracking *pile,
|
||||
for (j = 0; j < needed; j++)
|
||||
pile->list[i+j] = id | I40E_PILE_VALID_BIT;
|
||||
ret = i;
|
||||
pile->search_hint = i + j;
|
||||
break;
|
||||
}
|
||||
|
||||
@@ -257,7 +265,7 @@ static int i40e_put_lump(struct i40e_lump_tracking *pile, u16 index, u16 id)
|
||||
{
|
||||
int valid_id = (id | I40E_PILE_VALID_BIT);
|
||||
int count = 0;
|
||||
int i;
|
||||
u16 i;
|
||||
|
||||
if (!pile || index >= pile->num_entries)
|
||||
return -EINVAL;
|
||||
@@ -269,8 +277,6 @@ static int i40e_put_lump(struct i40e_lump_tracking *pile, u16 index, u16 id)
|
||||
count++;
|
||||
}
|
||||
|
||||
if (count && index < pile->search_hint)
|
||||
pile->search_hint = index;
|
||||
|
||||
return count;
|
||||
}
|
||||
@@ -772,9 +778,9 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
|
||||
struct rtnl_link_stats64 *ns; /* netdev stats */
|
||||
struct i40e_eth_stats *oes;
|
||||
struct i40e_eth_stats *es; /* device's eth stats */
|
||||
u32 tx_restart, tx_busy;
|
||||
u64 tx_restart, tx_busy;
|
||||
struct i40e_ring *p;
|
||||
u32 rx_page, rx_buf;
|
||||
u64 rx_page, rx_buf;
|
||||
u64 bytes, packets;
|
||||
unsigned int start;
|
||||
u64 tx_linearize;
|
||||
@@ -10574,15 +10580,9 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
|
||||
}
|
||||
i40e_get_oem_version(&pf->hw);
|
||||
|
||||
if (test_bit(__I40E_EMP_RESET_INTR_RECEIVED, pf->state) &&
|
||||
((hw->aq.fw_maj_ver == 4 && hw->aq.fw_min_ver <= 33) ||
|
||||
hw->aq.fw_maj_ver < 4) && hw->mac.type == I40E_MAC_XL710) {
|
||||
/* The following delay is necessary for 4.33 firmware and older
|
||||
* to recover after EMP reset. 200 ms should suffice but we
|
||||
* put here 300 ms to be sure that FW is ready to operate
|
||||
* after reset.
|
||||
*/
|
||||
mdelay(300);
|
||||
if (test_and_clear_bit(__I40E_EMP_RESET_INTR_RECEIVED, pf->state)) {
|
||||
/* The following delay is necessary for firmware update. */
|
||||
mdelay(1000);
|
||||
}
|
||||
|
||||
/* re-verify the eeprom if we just had an EMP reset */
|
||||
@@ -11792,7 +11792,6 @@ static int i40e_init_interrupt_scheme(struct i40e_pf *pf)
|
||||
return -ENOMEM;
|
||||
|
||||
pf->irq_pile->num_entries = vectors;
|
||||
pf->irq_pile->search_hint = 0;
|
||||
|
||||
/* track first vector for misc interrupts, ignore return */
|
||||
(void)i40e_get_lump(pf, pf->irq_pile, 1, I40E_PILE_VALID_BIT - 1);
|
||||
@@ -12595,7 +12594,6 @@ static int i40e_sw_init(struct i40e_pf *pf)
|
||||
goto sw_init_done;
|
||||
}
|
||||
pf->qp_pile->num_entries = pf->hw.func_caps.num_tx_qp;
|
||||
pf->qp_pile->search_hint = 0;
|
||||
|
||||
pf->tx_timeout_recovery_level = 1;
|
||||
|
||||
|
||||
@@ -413,6 +413,9 @@
|
||||
#define I40E_VFINT_DYN_CTLN(_INTVF) (0x00024800 + ((_INTVF) * 4)) /* _i=0...511 */ /* Reset: VFR */
|
||||
#define I40E_VFINT_DYN_CTLN_CLEARPBA_SHIFT 1
|
||||
#define I40E_VFINT_DYN_CTLN_CLEARPBA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN_CLEARPBA_SHIFT)
|
||||
#define I40E_VFINT_ICR0_ADMINQ_SHIFT 30
|
||||
#define I40E_VFINT_ICR0_ADMINQ_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ADMINQ_SHIFT)
|
||||
#define I40E_VFINT_ICR0_ENA(_VF) (0x0002C000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */
|
||||
#define I40E_VPINT_AEQCTL(_VF) (0x0002B800 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */
|
||||
#define I40E_VPINT_AEQCTL_MSIX_INDX_SHIFT 0
|
||||
#define I40E_VPINT_AEQCTL_ITR_INDX_SHIFT 11
|
||||
|
||||
@@ -1376,6 +1376,32 @@ static i40e_status i40e_config_vf_promiscuous_mode(struct i40e_vf *vf,
|
||||
return aq_ret;
|
||||
}
|
||||
|
||||
/**
|
||||
* i40e_sync_vfr_reset
|
||||
* @hw: pointer to hw struct
|
||||
* @vf_id: VF identifier
|
||||
*
|
||||
* Before trigger hardware reset, we need to know if no other process has
|
||||
* reserved the hardware for any reset operations. This check is done by
|
||||
* examining the status of the RSTAT1 register used to signal the reset.
|
||||
**/
|
||||
static int i40e_sync_vfr_reset(struct i40e_hw *hw, int vf_id)
|
||||
{
|
||||
u32 reg;
|
||||
int i;
|
||||
|
||||
for (i = 0; i < I40E_VFR_WAIT_COUNT; i++) {
|
||||
reg = rd32(hw, I40E_VFINT_ICR0_ENA(vf_id)) &
|
||||
I40E_VFINT_ICR0_ADMINQ_MASK;
|
||||
if (reg)
|
||||
return 0;
|
||||
|
||||
usleep_range(100, 200);
|
||||
}
|
||||
|
||||
return -EAGAIN;
|
||||
}
|
||||
|
||||
/**
|
||||
* i40e_trigger_vf_reset
|
||||
* @vf: pointer to the VF structure
|
||||
@@ -1390,9 +1416,11 @@ static void i40e_trigger_vf_reset(struct i40e_vf *vf, bool flr)
|
||||
struct i40e_pf *pf = vf->pf;
|
||||
struct i40e_hw *hw = &pf->hw;
|
||||
u32 reg, reg_idx, bit_idx;
|
||||
bool vf_active;
|
||||
u32 radq;
|
||||
|
||||
/* warn the VF */
|
||||
clear_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states);
|
||||
vf_active = test_and_clear_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states);
|
||||
|
||||
/* Disable VF's configuration API during reset. The flag is re-enabled
|
||||
* in i40e_alloc_vf_res(), when it's safe again to access VF's VSI.
|
||||
@@ -1406,7 +1434,19 @@ static void i40e_trigger_vf_reset(struct i40e_vf *vf, bool flr)
|
||||
* just need to clean up, so don't hit the VFRTRIG register.
|
||||
*/
|
||||
if (!flr) {
|
||||
/* reset VF using VPGEN_VFRTRIG reg */
|
||||
/* Sync VFR reset before trigger next one */
|
||||
radq = rd32(hw, I40E_VFINT_ICR0_ENA(vf->vf_id)) &
|
||||
I40E_VFINT_ICR0_ADMINQ_MASK;
|
||||
if (vf_active && !radq)
|
||||
/* waiting for finish reset by virtual driver */
|
||||
if (i40e_sync_vfr_reset(hw, vf->vf_id))
|
||||
dev_info(&pf->pdev->dev,
|
||||
"Reset VF %d never finished\n",
|
||||
vf->vf_id);
|
||||
|
||||
/* Reset VF using VPGEN_VFRTRIG reg. It is also setting
|
||||
* in progress state in rstat1 register.
|
||||
*/
|
||||
reg = rd32(hw, I40E_VPGEN_VFRTRIG(vf->vf_id));
|
||||
reg |= I40E_VPGEN_VFRTRIG_VFSWR_MASK;
|
||||
wr32(hw, I40E_VPGEN_VFRTRIG(vf->vf_id), reg);
|
||||
@@ -2617,6 +2657,59 @@ error_param:
|
||||
aq_ret);
|
||||
}
|
||||
|
||||
/**
|
||||
* i40e_check_enough_queue - find big enough queue number
|
||||
* @vf: pointer to the VF info
|
||||
* @needed: the number of items needed
|
||||
*
|
||||
* Returns the base item index of the queue, or negative for error
|
||||
**/
|
||||
static int i40e_check_enough_queue(struct i40e_vf *vf, u16 needed)
|
||||
{
|
||||
unsigned int i, cur_queues, more, pool_size;
|
||||
struct i40e_lump_tracking *pile;
|
||||
struct i40e_pf *pf = vf->pf;
|
||||
struct i40e_vsi *vsi;
|
||||
|
||||
vsi = pf->vsi[vf->lan_vsi_idx];
|
||||
cur_queues = vsi->alloc_queue_pairs;
|
||||
|
||||
/* if current allocated queues are enough for need */
|
||||
if (cur_queues >= needed)
|
||||
return vsi->base_queue;
|
||||
|
||||
pile = pf->qp_pile;
|
||||
if (cur_queues > 0) {
|
||||
/* if the allocated queues are not zero
|
||||
* just check if there are enough queues for more
|
||||
* behind the allocated queues.
|
||||
*/
|
||||
more = needed - cur_queues;
|
||||
for (i = vsi->base_queue + cur_queues;
|
||||
i < pile->num_entries; i++) {
|
||||
if (pile->list[i] & I40E_PILE_VALID_BIT)
|
||||
break;
|
||||
|
||||
if (more-- == 1)
|
||||
/* there is enough */
|
||||
return vsi->base_queue;
|
||||
}
|
||||
}
|
||||
|
||||
pool_size = 0;
|
||||
for (i = 0; i < pile->num_entries; i++) {
|
||||
if (pile->list[i] & I40E_PILE_VALID_BIT) {
|
||||
pool_size = 0;
|
||||
continue;
|
||||
}
|
||||
if (needed <= ++pool_size)
|
||||
/* there is enough */
|
||||
return i;
|
||||
}
|
||||
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
/**
|
||||
* i40e_vc_request_queues_msg
|
||||
* @vf: pointer to the VF info
|
||||
@@ -2651,6 +2744,12 @@ static int i40e_vc_request_queues_msg(struct i40e_vf *vf, u8 *msg)
|
||||
req_pairs - cur_pairs,
|
||||
pf->queues_left);
|
||||
vfres->num_queue_pairs = pf->queues_left + cur_pairs;
|
||||
} else if (i40e_check_enough_queue(vf, req_pairs) < 0) {
|
||||
dev_warn(&pf->pdev->dev,
|
||||
"VF %d requested %d more queues, but there is not enough for it.\n",
|
||||
vf->vf_id,
|
||||
req_pairs - cur_pairs);
|
||||
vfres->num_queue_pairs = cur_pairs;
|
||||
} else {
|
||||
/* successful request */
|
||||
vf->num_req_queues = req_pairs;
|
||||
|
||||
@@ -19,6 +19,7 @@
|
||||
#define I40E_MAX_VF_PROMISC_FLAGS 3
|
||||
|
||||
#define I40E_VF_STATE_WAIT_COUNT 20
|
||||
#define I40E_VFR_WAIT_COUNT 100
|
||||
|
||||
/* Various queue ctrls */
|
||||
enum i40e_queue_ctrl {
|
||||
|
||||
@@ -698,6 +698,9 @@ enum nix_af_status {
|
||||
NIX_AF_ERR_INVALID_BANDPROF = -426,
|
||||
NIX_AF_ERR_IPOLICER_NOTSUPP = -427,
|
||||
NIX_AF_ERR_BANDPROF_INVAL_REQ = -428,
|
||||
NIX_AF_ERR_CQ_CTX_WRITE_ERR = -429,
|
||||
NIX_AF_ERR_AQ_CTX_RETRY_WRITE = -430,
|
||||
NIX_AF_ERR_LINK_CREDITS = -431,
|
||||
};
|
||||
|
||||
/* For NIX RX vtag action */
|
||||
|
||||
@@ -251,22 +251,19 @@ int rpm_lmac_internal_loopback(void *rpmd, int lmac_id, bool enable)
|
||||
if (!rpm || lmac_id >= rpm->lmac_count)
|
||||
return -ENODEV;
|
||||
lmac_type = rpm->mac_ops->get_lmac_type(rpm, lmac_id);
|
||||
if (lmac_type == LMAC_MODE_100G_R) {
|
||||
cfg = rpm_read(rpm, lmac_id, RPMX_MTI_PCS100X_CONTROL1);
|
||||
|
||||
if (enable)
|
||||
cfg |= RPMX_MTI_PCS_LBK;
|
||||
else
|
||||
cfg &= ~RPMX_MTI_PCS_LBK;
|
||||
rpm_write(rpm, lmac_id, RPMX_MTI_PCS100X_CONTROL1, cfg);
|
||||
} else {
|
||||
cfg = rpm_read(rpm, lmac_id, RPMX_MTI_LPCSX_CONTROL1);
|
||||
if (enable)
|
||||
cfg |= RPMX_MTI_PCS_LBK;
|
||||
else
|
||||
cfg &= ~RPMX_MTI_PCS_LBK;
|
||||
rpm_write(rpm, lmac_id, RPMX_MTI_LPCSX_CONTROL1, cfg);
|
||||
if (lmac_type == LMAC_MODE_QSGMII || lmac_type == LMAC_MODE_SGMII) {
|
||||
dev_err(&rpm->pdev->dev, "loopback not supported for LPC mode\n");
|
||||
return 0;
|
||||
}
|
||||
|
||||
cfg = rpm_read(rpm, lmac_id, RPMX_MTI_PCS100X_CONTROL1);
|
||||
|
||||
if (enable)
|
||||
cfg |= RPMX_MTI_PCS_LBK;
|
||||
else
|
||||
cfg &= ~RPMX_MTI_PCS_LBK;
|
||||
rpm_write(rpm, lmac_id, RPMX_MTI_PCS100X_CONTROL1, cfg);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -520,8 +520,11 @@ static void rvu_block_reset(struct rvu *rvu, int blkaddr, u64 rst_reg)
|
||||
|
||||
rvu_write64(rvu, blkaddr, rst_reg, BIT_ULL(0));
|
||||
err = rvu_poll_reg(rvu, blkaddr, rst_reg, BIT_ULL(63), true);
|
||||
if (err)
|
||||
dev_err(rvu->dev, "HW block:%d reset failed\n", blkaddr);
|
||||
if (err) {
|
||||
dev_err(rvu->dev, "HW block:%d reset timeout retrying again\n", blkaddr);
|
||||
while (rvu_poll_reg(rvu, blkaddr, rst_reg, BIT_ULL(63), true) == -EBUSY)
|
||||
;
|
||||
}
|
||||
}
|
||||
|
||||
static void rvu_reset_all_blocks(struct rvu *rvu)
|
||||
|
||||
@@ -1131,6 +1131,8 @@ static void print_nix_cn10k_sq_ctx(struct seq_file *m,
|
||||
seq_printf(m, "W3: head_offset\t\t\t%d\nW3: smenq_next_sqb_vld\t\t%d\n\n",
|
||||
sq_ctx->head_offset, sq_ctx->smenq_next_sqb_vld);
|
||||
|
||||
seq_printf(m, "W3: smq_next_sq_vld\t\t%d\nW3: smq_pend\t\t\t%d\n",
|
||||
sq_ctx->smq_next_sq_vld, sq_ctx->smq_pend);
|
||||
seq_printf(m, "W4: next_sqb \t\t\t%llx\n\n", sq_ctx->next_sqb);
|
||||
seq_printf(m, "W5: tail_sqb \t\t\t%llx\n\n", sq_ctx->tail_sqb);
|
||||
seq_printf(m, "W6: smenq_sqb \t\t\t%llx\n\n", sq_ctx->smenq_sqb);
|
||||
|
||||
@@ -28,6 +28,7 @@ static int nix_verify_bandprof(struct nix_cn10k_aq_enq_req *req,
|
||||
static int nix_free_all_bandprof(struct rvu *rvu, u16 pcifunc);
|
||||
static void nix_clear_ratelimit_aggr(struct rvu *rvu, struct nix_hw *nix_hw,
|
||||
u32 leaf_prof);
|
||||
static const char *nix_get_ctx_name(int ctype);
|
||||
|
||||
enum mc_tbl_sz {
|
||||
MC_TBL_SZ_256,
|
||||
@@ -511,11 +512,11 @@ static int rvu_nix_get_bpid(struct rvu *rvu, struct nix_bp_cfg_req *req,
|
||||
cfg = rvu_read64(rvu, blkaddr, NIX_AF_CONST);
|
||||
lmac_chan_cnt = cfg & 0xFF;
|
||||
|
||||
cfg = rvu_read64(rvu, blkaddr, NIX_AF_CONST1);
|
||||
sdp_chan_cnt = cfg & 0xFFF;
|
||||
|
||||
cgx_bpid_cnt = hw->cgx_links * lmac_chan_cnt;
|
||||
lbk_bpid_cnt = hw->lbk_links * ((cfg >> 16) & 0xFF);
|
||||
|
||||
cfg = rvu_read64(rvu, blkaddr, NIX_AF_CONST1);
|
||||
sdp_chan_cnt = cfg & 0xFFF;
|
||||
sdp_bpid_cnt = hw->sdp_links * sdp_chan_cnt;
|
||||
|
||||
pfvf = rvu_get_pfvf(rvu, req->hdr.pcifunc);
|
||||
@@ -1061,10 +1062,68 @@ static int rvu_nix_blk_aq_enq_inst(struct rvu *rvu, struct nix_hw *nix_hw,
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int rvu_nix_verify_aq_ctx(struct rvu *rvu, struct nix_hw *nix_hw,
|
||||
struct nix_aq_enq_req *req, u8 ctype)
|
||||
{
|
||||
struct nix_cn10k_aq_enq_req aq_req;
|
||||
struct nix_cn10k_aq_enq_rsp aq_rsp;
|
||||
int rc, word;
|
||||
|
||||
if (req->ctype != NIX_AQ_CTYPE_CQ)
|
||||
return 0;
|
||||
|
||||
rc = nix_aq_context_read(rvu, nix_hw, &aq_req, &aq_rsp,
|
||||
req->hdr.pcifunc, ctype, req->qidx);
|
||||
if (rc) {
|
||||
dev_err(rvu->dev,
|
||||
"%s: Failed to fetch %s%d context of PFFUNC 0x%x\n",
|
||||
__func__, nix_get_ctx_name(ctype), req->qidx,
|
||||
req->hdr.pcifunc);
|
||||
return rc;
|
||||
}
|
||||
|
||||
/* Make copy of original context & mask which are required
|
||||
* for resubmission
|
||||
*/
|
||||
memcpy(&aq_req.cq_mask, &req->cq_mask, sizeof(struct nix_cq_ctx_s));
|
||||
memcpy(&aq_req.cq, &req->cq, sizeof(struct nix_cq_ctx_s));
|
||||
|
||||
/* exclude fields which HW can update */
|
||||
aq_req.cq_mask.cq_err = 0;
|
||||
aq_req.cq_mask.wrptr = 0;
|
||||
aq_req.cq_mask.tail = 0;
|
||||
aq_req.cq_mask.head = 0;
|
||||
aq_req.cq_mask.avg_level = 0;
|
||||
aq_req.cq_mask.update_time = 0;
|
||||
aq_req.cq_mask.substream = 0;
|
||||
|
||||
/* Context mask (cq_mask) holds mask value of fields which
|
||||
* are changed in AQ WRITE operation.
|
||||
* for example cq.drop = 0xa;
|
||||
* cq_mask.drop = 0xff;
|
||||
* Below logic performs '&' between cq and cq_mask so that non
|
||||
* updated fields are masked out for request and response
|
||||
* comparison
|
||||
*/
|
||||
for (word = 0; word < sizeof(struct nix_cq_ctx_s) / sizeof(u64);
|
||||
word++) {
|
||||
*(u64 *)((u8 *)&aq_rsp.cq + word * 8) &=
|
||||
(*(u64 *)((u8 *)&aq_req.cq_mask + word * 8));
|
||||
*(u64 *)((u8 *)&aq_req.cq + word * 8) &=
|
||||
(*(u64 *)((u8 *)&aq_req.cq_mask + word * 8));
|
||||
}
|
||||
|
||||
if (memcmp(&aq_req.cq, &aq_rsp.cq, sizeof(struct nix_cq_ctx_s)))
|
||||
return NIX_AF_ERR_AQ_CTX_RETRY_WRITE;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int rvu_nix_aq_enq_inst(struct rvu *rvu, struct nix_aq_enq_req *req,
|
||||
struct nix_aq_enq_rsp *rsp)
|
||||
{
|
||||
struct nix_hw *nix_hw;
|
||||
int err, retries = 5;
|
||||
int blkaddr;
|
||||
|
||||
blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, req->hdr.pcifunc);
|
||||
@@ -1075,7 +1134,24 @@ static int rvu_nix_aq_enq_inst(struct rvu *rvu, struct nix_aq_enq_req *req,
|
||||
if (!nix_hw)
|
||||
return NIX_AF_ERR_INVALID_NIXBLK;
|
||||
|
||||
return rvu_nix_blk_aq_enq_inst(rvu, nix_hw, req, rsp);
|
||||
retry:
|
||||
err = rvu_nix_blk_aq_enq_inst(rvu, nix_hw, req, rsp);
|
||||
|
||||
/* HW errata 'AQ Modification to CQ could be discarded on heavy traffic'
|
||||
* As a work around perfrom CQ context read after each AQ write. If AQ
|
||||
* read shows AQ write is not updated perform AQ write again.
|
||||
*/
|
||||
if (!err && req->op == NIX_AQ_INSTOP_WRITE) {
|
||||
err = rvu_nix_verify_aq_ctx(rvu, nix_hw, req, NIX_AQ_CTYPE_CQ);
|
||||
if (err == NIX_AF_ERR_AQ_CTX_RETRY_WRITE) {
|
||||
if (retries--)
|
||||
goto retry;
|
||||
else
|
||||
return NIX_AF_ERR_CQ_CTX_WRITE_ERR;
|
||||
}
|
||||
}
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
static const char *nix_get_ctx_name(int ctype)
|
||||
@@ -3815,8 +3891,8 @@ nix_config_link_credits(struct rvu *rvu, int blkaddr, int link,
|
||||
NIX_AF_TL1X_SW_XOFF(schq), BIT_ULL(0));
|
||||
}
|
||||
|
||||
rc = -EBUSY;
|
||||
poll_tmo = jiffies + usecs_to_jiffies(10000);
|
||||
rc = NIX_AF_ERR_LINK_CREDITS;
|
||||
poll_tmo = jiffies + usecs_to_jiffies(200000);
|
||||
/* Wait for credits to return */
|
||||
do {
|
||||
if (time_after(jiffies, poll_tmo))
|
||||
|
||||
@@ -402,6 +402,7 @@ static void npc_fixup_vf_rule(struct rvu *rvu, struct npc_mcam *mcam,
|
||||
int blkaddr, int index, struct mcam_entry *entry,
|
||||
bool *enable)
|
||||
{
|
||||
struct rvu_npc_mcam_rule *rule;
|
||||
u16 owner, target_func;
|
||||
struct rvu_pfvf *pfvf;
|
||||
u64 rx_action;
|
||||
@@ -423,6 +424,12 @@ static void npc_fixup_vf_rule(struct rvu *rvu, struct npc_mcam *mcam,
|
||||
test_bit(NIXLF_INITIALIZED, &pfvf->flags)))
|
||||
*enable = false;
|
||||
|
||||
/* fix up not needed for the rules added by user(ntuple filters) */
|
||||
list_for_each_entry(rule, &mcam->mcam_rules, list) {
|
||||
if (rule->entry == index)
|
||||
return;
|
||||
}
|
||||
|
||||
/* copy VF default entry action to the VF mcam entry */
|
||||
rx_action = npc_get_default_entry_action(rvu, mcam, blkaddr,
|
||||
target_func);
|
||||
@@ -489,8 +496,8 @@ static void npc_config_mcam_entry(struct rvu *rvu, struct npc_mcam *mcam,
|
||||
}
|
||||
|
||||
/* PF installing VF rule */
|
||||
if (intf == NIX_INTF_RX && actindex < mcam->bmap_entries)
|
||||
npc_fixup_vf_rule(rvu, mcam, blkaddr, index, entry, &enable);
|
||||
if (is_npc_intf_rx(intf) && actindex < mcam->bmap_entries)
|
||||
npc_fixup_vf_rule(rvu, mcam, blkaddr, actindex, entry, &enable);
|
||||
|
||||
/* Set 'action' */
|
||||
rvu_write64(rvu, blkaddr,
|
||||
@@ -916,7 +923,8 @@ static void npc_update_vf_flow_entry(struct rvu *rvu, struct npc_mcam *mcam,
|
||||
int blkaddr, u16 pcifunc, u64 rx_action)
|
||||
{
|
||||
int actindex, index, bank, entry;
|
||||
bool enable;
|
||||
struct rvu_npc_mcam_rule *rule;
|
||||
bool enable, update;
|
||||
|
||||
if (!(pcifunc & RVU_PFVF_FUNC_MASK))
|
||||
return;
|
||||
@@ -924,6 +932,14 @@ static void npc_update_vf_flow_entry(struct rvu *rvu, struct npc_mcam *mcam,
|
||||
mutex_lock(&mcam->lock);
|
||||
for (index = 0; index < mcam->bmap_entries; index++) {
|
||||
if (mcam->entry2target_pffunc[index] == pcifunc) {
|
||||
update = true;
|
||||
/* update not needed for the rules added via ntuple filters */
|
||||
list_for_each_entry(rule, &mcam->mcam_rules, list) {
|
||||
if (rule->entry == index)
|
||||
update = false;
|
||||
}
|
||||
if (!update)
|
||||
continue;
|
||||
bank = npc_get_bank(mcam, index);
|
||||
actindex = index;
|
||||
entry = index & (mcam->banksize - 1);
|
||||
|
||||
@@ -1098,14 +1098,6 @@ find_rule:
|
||||
write_req.cntr = rule->cntr;
|
||||
}
|
||||
|
||||
err = rvu_mbox_handler_npc_mcam_write_entry(rvu, &write_req,
|
||||
&write_rsp);
|
||||
if (err) {
|
||||
rvu_mcam_remove_counter_from_rule(rvu, owner, rule);
|
||||
if (new)
|
||||
kfree(rule);
|
||||
return err;
|
||||
}
|
||||
/* update rule */
|
||||
memcpy(&rule->packet, &dummy.packet, sizeof(rule->packet));
|
||||
memcpy(&rule->mask, &dummy.mask, sizeof(rule->mask));
|
||||
@@ -1129,6 +1121,18 @@ find_rule:
|
||||
if (req->default_rule)
|
||||
pfvf->def_ucast_rule = rule;
|
||||
|
||||
/* write to mcam entry registers */
|
||||
err = rvu_mbox_handler_npc_mcam_write_entry(rvu, &write_req,
|
||||
&write_rsp);
|
||||
if (err) {
|
||||
rvu_mcam_remove_counter_from_rule(rvu, owner, rule);
|
||||
if (new) {
|
||||
list_del(&rule->list);
|
||||
kfree(rule);
|
||||
}
|
||||
return err;
|
||||
}
|
||||
|
||||
/* VF's MAC address is being changed via PF */
|
||||
if (pf_set_vfs_mac) {
|
||||
ether_addr_copy(pfvf->default_mac, req->packet.dmac);
|
||||
|
||||
@@ -591,6 +591,7 @@ static inline void __cn10k_aura_freeptr(struct otx2_nic *pfvf, u64 aura,
|
||||
size++;
|
||||
tar_addr |= ((size - 1) & 0x7) << 4;
|
||||
}
|
||||
dma_wmb();
|
||||
memcpy((u64 *)lmt_info->lmt_addr, ptrs, sizeof(u64) * num_ptrs);
|
||||
/* Perform LMTST flush */
|
||||
cn10k_lmt_flush(val, tar_addr);
|
||||
|
||||
@@ -386,7 +386,12 @@ static int otx2_forward_vf_mbox_msgs(struct otx2_nic *pf,
|
||||
dst_mdev->msg_size = mbox_hdr->msg_size;
|
||||
dst_mdev->num_msgs = num_msgs;
|
||||
err = otx2_sync_mbox_msg(dst_mbox);
|
||||
if (err) {
|
||||
/* Error code -EIO indicate there is a communication failure
|
||||
* to the AF. Rest of the error codes indicate that AF processed
|
||||
* VF messages and set the error codes in response messages
|
||||
* (if any) so simply forward responses to VF.
|
||||
*/
|
||||
if (err == -EIO) {
|
||||
dev_warn(pf->dev,
|
||||
"AF not responding to VF%d messages\n", vf);
|
||||
/* restore PF mbase and exit */
|
||||
|
||||
@@ -22,21 +22,21 @@
|
||||
#define ETHER_CLK_SEL_RMII_CLK_EN BIT(2)
|
||||
#define ETHER_CLK_SEL_RMII_CLK_RST BIT(3)
|
||||
#define ETHER_CLK_SEL_DIV_SEL_2 BIT(4)
|
||||
#define ETHER_CLK_SEL_DIV_SEL_20 BIT(0)
|
||||
#define ETHER_CLK_SEL_DIV_SEL_20 0
|
||||
#define ETHER_CLK_SEL_FREQ_SEL_125M (BIT(9) | BIT(8))
|
||||
#define ETHER_CLK_SEL_FREQ_SEL_50M BIT(9)
|
||||
#define ETHER_CLK_SEL_FREQ_SEL_25M BIT(8)
|
||||
#define ETHER_CLK_SEL_FREQ_SEL_2P5M 0
|
||||
#define ETHER_CLK_SEL_TX_CLK_EXT_SEL_IN BIT(0)
|
||||
#define ETHER_CLK_SEL_TX_CLK_EXT_SEL_IN 0
|
||||
#define ETHER_CLK_SEL_TX_CLK_EXT_SEL_TXC BIT(10)
|
||||
#define ETHER_CLK_SEL_TX_CLK_EXT_SEL_DIV BIT(11)
|
||||
#define ETHER_CLK_SEL_RX_CLK_EXT_SEL_IN BIT(0)
|
||||
#define ETHER_CLK_SEL_RX_CLK_EXT_SEL_IN 0
|
||||
#define ETHER_CLK_SEL_RX_CLK_EXT_SEL_RXC BIT(12)
|
||||
#define ETHER_CLK_SEL_RX_CLK_EXT_SEL_DIV BIT(13)
|
||||
#define ETHER_CLK_SEL_TX_CLK_O_TX_I BIT(0)
|
||||
#define ETHER_CLK_SEL_TX_CLK_O_TX_I 0
|
||||
#define ETHER_CLK_SEL_TX_CLK_O_RMII_I BIT(14)
|
||||
#define ETHER_CLK_SEL_TX_O_E_N_IN BIT(15)
|
||||
#define ETHER_CLK_SEL_RMII_CLK_SEL_IN BIT(0)
|
||||
#define ETHER_CLK_SEL_RMII_CLK_SEL_IN 0
|
||||
#define ETHER_CLK_SEL_RMII_CLK_SEL_RX_C BIT(16)
|
||||
|
||||
#define ETHER_CLK_SEL_RX_TX_CLK_EN (ETHER_CLK_SEL_RX_CLK_EN | ETHER_CLK_SEL_TX_CLK_EN)
|
||||
@@ -96,31 +96,41 @@ static void visconti_eth_fix_mac_speed(void *priv, unsigned int speed)
|
||||
val |= ETHER_CLK_SEL_TX_O_E_N_IN;
|
||||
writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
|
||||
|
||||
/* Set Clock-Mux, Start clock, Set TX_O direction */
|
||||
switch (dwmac->phy_intf_sel) {
|
||||
case ETHER_CONFIG_INTF_RGMII:
|
||||
val = clk_sel_val | ETHER_CLK_SEL_RX_CLK_EXT_SEL_RXC;
|
||||
writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
|
||||
|
||||
val |= ETHER_CLK_SEL_RX_TX_CLK_EN;
|
||||
writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
|
||||
|
||||
val &= ~ETHER_CLK_SEL_TX_O_E_N_IN;
|
||||
writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
|
||||
break;
|
||||
case ETHER_CONFIG_INTF_RMII:
|
||||
val = clk_sel_val | ETHER_CLK_SEL_RX_CLK_EXT_SEL_DIV |
|
||||
ETHER_CLK_SEL_TX_CLK_EXT_SEL_TXC | ETHER_CLK_SEL_TX_O_E_N_IN |
|
||||
ETHER_CLK_SEL_TX_CLK_EXT_SEL_DIV | ETHER_CLK_SEL_TX_O_E_N_IN |
|
||||
ETHER_CLK_SEL_RMII_CLK_SEL_RX_C;
|
||||
writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
|
||||
|
||||
val |= ETHER_CLK_SEL_RMII_CLK_RST;
|
||||
writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
|
||||
|
||||
val |= ETHER_CLK_SEL_RMII_CLK_EN | ETHER_CLK_SEL_RX_TX_CLK_EN;
|
||||
writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
|
||||
break;
|
||||
case ETHER_CONFIG_INTF_MII:
|
||||
default:
|
||||
val = clk_sel_val | ETHER_CLK_SEL_RX_CLK_EXT_SEL_RXC |
|
||||
ETHER_CLK_SEL_TX_CLK_EXT_SEL_DIV | ETHER_CLK_SEL_TX_O_E_N_IN |
|
||||
ETHER_CLK_SEL_RMII_CLK_EN;
|
||||
ETHER_CLK_SEL_TX_CLK_EXT_SEL_TXC | ETHER_CLK_SEL_TX_O_E_N_IN;
|
||||
writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
|
||||
|
||||
val |= ETHER_CLK_SEL_RX_TX_CLK_EN;
|
||||
writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
|
||||
break;
|
||||
}
|
||||
|
||||
/* Start clock */
|
||||
writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
|
||||
val |= ETHER_CLK_SEL_RX_TX_CLK_EN;
|
||||
writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
|
||||
|
||||
val &= ~ETHER_CLK_SEL_TX_O_E_N_IN;
|
||||
writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
|
||||
|
||||
spin_unlock_irqrestore(&dwmac->lock, flags);
|
||||
}
|
||||
|
||||
|
||||
@@ -899,6 +899,9 @@ static int stmmac_init_ptp(struct stmmac_priv *priv)
|
||||
bool xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac;
|
||||
int ret;
|
||||
|
||||
if (priv->plat->ptp_clk_freq_config)
|
||||
priv->plat->ptp_clk_freq_config(priv);
|
||||
|
||||
ret = stmmac_init_tstamp_counter(priv, STMMAC_HWTS_ACTIVE);
|
||||
if (ret)
|
||||
return ret;
|
||||
@@ -921,8 +924,6 @@ static int stmmac_init_ptp(struct stmmac_priv *priv)
|
||||
priv->hwts_tx_en = 0;
|
||||
priv->hwts_rx_en = 0;
|
||||
|
||||
stmmac_ptp_register(priv);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -3237,7 +3238,7 @@ static int stmmac_fpe_start_wq(struct stmmac_priv *priv)
|
||||
/**
|
||||
* stmmac_hw_setup - setup mac in a usable state.
|
||||
* @dev : pointer to the device structure.
|
||||
* @init_ptp: initialize PTP if set
|
||||
* @ptp_register: register PTP if set
|
||||
* Description:
|
||||
* this is the main function to setup the HW in a usable state because the
|
||||
* dma engine is reset, the core registers are configured (e.g. AXI,
|
||||
@@ -3247,7 +3248,7 @@ static int stmmac_fpe_start_wq(struct stmmac_priv *priv)
|
||||
* 0 on success and an appropriate (-)ve integer as defined in errno.h
|
||||
* file on failure.
|
||||
*/
|
||||
static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
|
||||
static int stmmac_hw_setup(struct net_device *dev, bool ptp_register)
|
||||
{
|
||||
struct stmmac_priv *priv = netdev_priv(dev);
|
||||
u32 rx_cnt = priv->plat->rx_queues_to_use;
|
||||
@@ -3304,13 +3305,13 @@ static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
|
||||
|
||||
stmmac_mmc_setup(priv);
|
||||
|
||||
if (init_ptp) {
|
||||
ret = stmmac_init_ptp(priv);
|
||||
if (ret == -EOPNOTSUPP)
|
||||
netdev_warn(priv->dev, "PTP not supported by HW\n");
|
||||
else if (ret)
|
||||
netdev_warn(priv->dev, "PTP init failed\n");
|
||||
}
|
||||
ret = stmmac_init_ptp(priv);
|
||||
if (ret == -EOPNOTSUPP)
|
||||
netdev_warn(priv->dev, "PTP not supported by HW\n");
|
||||
else if (ret)
|
||||
netdev_warn(priv->dev, "PTP init failed\n");
|
||||
else if (ptp_register)
|
||||
stmmac_ptp_register(priv);
|
||||
|
||||
priv->eee_tw_timer = STMMAC_DEFAULT_TWT_LS;
|
||||
|
||||
|
||||
@@ -297,9 +297,6 @@ void stmmac_ptp_register(struct stmmac_priv *priv)
|
||||
{
|
||||
int i;
|
||||
|
||||
if (priv->plat->ptp_clk_freq_config)
|
||||
priv->plat->ptp_clk_freq_config(priv);
|
||||
|
||||
for (i = 0; i < priv->dma_cap.pps_out_num; i++) {
|
||||
if (i >= STMMAC_PPS_MAX)
|
||||
break;
|
||||
|
||||
@@ -1144,7 +1144,7 @@ int cpsw_fill_rx_channels(struct cpsw_priv *priv)
|
||||
static struct page_pool *cpsw_create_page_pool(struct cpsw_common *cpsw,
|
||||
int size)
|
||||
{
|
||||
struct page_pool_params pp_params;
|
||||
struct page_pool_params pp_params = {};
|
||||
struct page_pool *pool;
|
||||
|
||||
pp_params.order = 0;
|
||||
|
||||
@@ -950,9 +950,7 @@ static int yam_siocdevprivate(struct net_device *dev, struct ifreq *ifr, void __
|
||||
ym = memdup_user(data, sizeof(struct yamdrv_ioctl_mcs));
|
||||
if (IS_ERR(ym))
|
||||
return PTR_ERR(ym);
|
||||
if (ym->cmd != SIOCYAMSMCS)
|
||||
return -EINVAL;
|
||||
if (ym->bitrate > YAM_MAXBITRATE) {
|
||||
if (ym->cmd != SIOCYAMSMCS || ym->bitrate > YAM_MAXBITRATE) {
|
||||
kfree(ym);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
@@ -768,6 +768,7 @@ static struct phy_driver broadcom_drivers[] = {
|
||||
.phy_id_mask = 0xfffffff0,
|
||||
.name = "Broadcom BCM54616S",
|
||||
/* PHY_GBIT_FEATURES */
|
||||
.soft_reset = genphy_soft_reset,
|
||||
.config_init = bcm54xx_config_init,
|
||||
.config_aneg = bcm54616s_config_aneg,
|
||||
.config_intr = bcm_phy_config_intr,
|
||||
|
||||
@@ -1746,6 +1746,9 @@ void phy_detach(struct phy_device *phydev)
|
||||
phy_driver_is_genphy_10g(phydev))
|
||||
device_release_driver(&phydev->mdio.dev);
|
||||
|
||||
/* Assert the reset signal */
|
||||
phy_device_reset(phydev, 1);
|
||||
|
||||
/*
|
||||
* The phydev might go away on the put_device() below, so avoid
|
||||
* a use-after-free bug by reading the underlying bus first.
|
||||
@@ -1757,9 +1760,6 @@ void phy_detach(struct phy_device *phydev)
|
||||
ndev_owner = dev->dev.parent->driver->owner;
|
||||
if (ndev_owner != bus->owner)
|
||||
module_put(bus->owner);
|
||||
|
||||
/* Assert the reset signal */
|
||||
phy_device_reset(phydev, 1);
|
||||
}
|
||||
EXPORT_SYMBOL(phy_detach);
|
||||
|
||||
|
||||
@@ -651,6 +651,11 @@ struct sfp_bus *sfp_bus_find_fwnode(struct fwnode_handle *fwnode)
|
||||
else if (ret < 0)
|
||||
return ERR_PTR(ret);
|
||||
|
||||
if (!fwnode_device_is_available(ref.fwnode)) {
|
||||
fwnode_handle_put(ref.fwnode);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
bus = sfp_bus_get(ref.fwnode);
|
||||
fwnode_handle_put(ref.fwnode);
|
||||
if (!bus)
|
||||
|
||||
@@ -92,7 +92,7 @@ static int rpmsg_eptdev_destroy(struct device *dev, void *data)
|
||||
/* wake up any blocked readers */
|
||||
wake_up_interruptible(&eptdev->readq);
|
||||
|
||||
device_del(&eptdev->dev);
|
||||
cdev_device_del(&eptdev->cdev, &eptdev->dev);
|
||||
put_device(&eptdev->dev);
|
||||
|
||||
return 0;
|
||||
@@ -335,7 +335,6 @@ static void rpmsg_eptdev_release_device(struct device *dev)
|
||||
|
||||
ida_simple_remove(&rpmsg_ept_ida, dev->id);
|
||||
ida_simple_remove(&rpmsg_minor_ida, MINOR(eptdev->dev.devt));
|
||||
cdev_del(&eptdev->cdev);
|
||||
kfree(eptdev);
|
||||
}
|
||||
|
||||
@@ -380,19 +379,13 @@ static int rpmsg_eptdev_create(struct rpmsg_ctrldev *ctrldev,
|
||||
dev->id = ret;
|
||||
dev_set_name(dev, "rpmsg%d", ret);
|
||||
|
||||
ret = cdev_add(&eptdev->cdev, dev->devt, 1);
|
||||
ret = cdev_device_add(&eptdev->cdev, &eptdev->dev);
|
||||
if (ret)
|
||||
goto free_ept_ida;
|
||||
|
||||
/* We can now rely on the release function for cleanup */
|
||||
dev->release = rpmsg_eptdev_release_device;
|
||||
|
||||
ret = device_add(dev);
|
||||
if (ret) {
|
||||
dev_err(dev, "device_add failed: %d\n", ret);
|
||||
put_device(dev);
|
||||
}
|
||||
|
||||
return ret;
|
||||
|
||||
free_ept_ida:
|
||||
@@ -461,7 +454,6 @@ static void rpmsg_ctrldev_release_device(struct device *dev)
|
||||
|
||||
ida_simple_remove(&rpmsg_ctrl_ida, dev->id);
|
||||
ida_simple_remove(&rpmsg_minor_ida, MINOR(dev->devt));
|
||||
cdev_del(&ctrldev->cdev);
|
||||
kfree(ctrldev);
|
||||
}
|
||||
|
||||
@@ -496,19 +488,13 @@ static int rpmsg_chrdev_probe(struct rpmsg_device *rpdev)
|
||||
dev->id = ret;
|
||||
dev_set_name(&ctrldev->dev, "rpmsg_ctrl%d", ret);
|
||||
|
||||
ret = cdev_add(&ctrldev->cdev, dev->devt, 1);
|
||||
ret = cdev_device_add(&ctrldev->cdev, &ctrldev->dev);
|
||||
if (ret)
|
||||
goto free_ctrl_ida;
|
||||
|
||||
/* We can now rely on the release function for cleanup */
|
||||
dev->release = rpmsg_ctrldev_release_device;
|
||||
|
||||
ret = device_add(dev);
|
||||
if (ret) {
|
||||
dev_err(&rpdev->dev, "device_add failed: %d\n", ret);
|
||||
put_device(dev);
|
||||
}
|
||||
|
||||
dev_set_drvdata(&rpdev->dev, ctrldev);
|
||||
|
||||
return ret;
|
||||
@@ -534,7 +520,7 @@ static void rpmsg_chrdev_remove(struct rpmsg_device *rpdev)
|
||||
if (ret)
|
||||
dev_warn(&rpdev->dev, "failed to nuke endpoints: %d\n", ret);
|
||||
|
||||
device_del(&ctrldev->dev);
|
||||
cdev_device_del(&ctrldev->cdev, &ctrldev->dev);
|
||||
put_device(&ctrldev->dev);
|
||||
}
|
||||
|
||||
|
||||
@@ -521,6 +521,8 @@ static void zfcp_fc_adisc_handler(void *data)
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* re-init to undo drop from zfcp_fc_adisc() */
|
||||
port->d_id = ntoh24(adisc_resp->adisc_port_id);
|
||||
/* port is good, unblock rport without going through erp */
|
||||
zfcp_scsi_schedule_rport_register(port);
|
||||
out:
|
||||
@@ -534,6 +536,7 @@ static int zfcp_fc_adisc(struct zfcp_port *port)
|
||||
struct zfcp_fc_req *fc_req;
|
||||
struct zfcp_adapter *adapter = port->adapter;
|
||||
struct Scsi_Host *shost = adapter->scsi_host;
|
||||
u32 d_id;
|
||||
int ret;
|
||||
|
||||
fc_req = kmem_cache_zalloc(zfcp_fc_req_cache, GFP_ATOMIC);
|
||||
@@ -558,7 +561,15 @@ static int zfcp_fc_adisc(struct zfcp_port *port)
|
||||
fc_req->u.adisc.req.adisc_cmd = ELS_ADISC;
|
||||
hton24(fc_req->u.adisc.req.adisc_port_id, fc_host_port_id(shost));
|
||||
|
||||
ret = zfcp_fsf_send_els(adapter, port->d_id, &fc_req->ct_els,
|
||||
d_id = port->d_id; /* remember as destination for send els below */
|
||||
/*
|
||||
* Force fresh GID_PN lookup on next port recovery.
|
||||
* Must happen after request setup and before sending request,
|
||||
* to prevent race with port->d_id re-init in zfcp_fc_adisc_handler().
|
||||
*/
|
||||
port->d_id = 0;
|
||||
|
||||
ret = zfcp_fsf_send_els(adapter, d_id, &fc_req->ct_els,
|
||||
ZFCP_FC_CTELS_TMO);
|
||||
if (ret)
|
||||
kmem_cache_free(zfcp_fc_req_cache, fc_req);
|
||||
|
||||
@@ -82,7 +82,7 @@ static int bnx2fc_bind_pcidev(struct bnx2fc_hba *hba);
 static void bnx2fc_unbind_pcidev(struct bnx2fc_hba *hba);
 static struct fc_lport *bnx2fc_if_create(struct bnx2fc_interface *interface,
 				  struct device *parent, int npiv);
-static void bnx2fc_destroy_work(struct work_struct *work);
+static void bnx2fc_port_destroy(struct fcoe_port *port);
 
 static struct bnx2fc_hba *bnx2fc_hba_lookup(struct net_device *phys_dev);
 static struct bnx2fc_interface *bnx2fc_interface_lookup(struct net_device
@@ -907,9 +907,6 @@ static void bnx2fc_indicate_netevent(void *context, unsigned long event,
 				__bnx2fc_destroy(interface);
 		}
 		mutex_unlock(&bnx2fc_dev_lock);
-
-		/* Ensure ALL destroy work has been completed before return */
-		flush_workqueue(bnx2fc_wq);
 		return;
 
 	default:
@@ -1215,8 +1212,8 @@ static int bnx2fc_vport_destroy(struct fc_vport *vport)
 	mutex_unlock(&n_port->lp_mutex);
 	bnx2fc_free_vport(interface->hba, port->lport);
 	bnx2fc_port_shutdown(port->lport);
+	bnx2fc_port_destroy(port);
 	bnx2fc_interface_put(interface);
-	queue_work(bnx2fc_wq, &port->destroy_work);
 	return 0;
 }
 
@@ -1525,7 +1522,6 @@ static struct fc_lport *bnx2fc_if_create(struct bnx2fc_interface *interface,
 	port->lport = lport;
 	port->priv = interface;
 	port->get_netdev = bnx2fc_netdev;
-	INIT_WORK(&port->destroy_work, bnx2fc_destroy_work);
 
 	/* Configure fcoe_port */
 	rc = bnx2fc_lport_config(lport);
@@ -1653,8 +1649,8 @@ static void __bnx2fc_destroy(struct bnx2fc_interface *interface)
 	bnx2fc_interface_cleanup(interface);
 	bnx2fc_stop(interface);
 	list_del(&interface->list);
+	bnx2fc_port_destroy(port);
 	bnx2fc_interface_put(interface);
-	queue_work(bnx2fc_wq, &port->destroy_work);
 }
 
 /**
@@ -1694,15 +1690,12 @@ netdev_err:
 	return rc;
 }
 
-static void bnx2fc_destroy_work(struct work_struct *work)
+static void bnx2fc_port_destroy(struct fcoe_port *port)
 {
-	struct fcoe_port *port;
 	struct fc_lport *lport;
 
-	port = container_of(work, struct fcoe_port, destroy_work);
 	lport = port->lport;
-
-	BNX2FC_HBA_DBG(lport, "Entered bnx2fc_destroy_work\n");
+	BNX2FC_HBA_DBG(lport, "Entered %s, destroying lport %p\n", __func__, lport);
 
 	bnx2fc_if_destroy(lport);
 }
@@ -2556,9 +2549,6 @@ static void bnx2fc_ulp_exit(struct cnic_dev *dev)
 		__bnx2fc_destroy(interface);
 	mutex_unlock(&bnx2fc_dev_lock);
 
-	/* Ensure ALL destroy work has been completed before return */
-	flush_workqueue(bnx2fc_wq);
-
 	bnx2fc_ulp_stop(hba);
 	/* unregister cnic device */
 	if (test_and_clear_bit(BNX2FC_CNIC_REGISTERED, &hba->reg_with_cnic))
 
@@ -46,18 +46,14 @@ efc_els_io_alloc_size(struct efc_node *node, u32 reqlen, u32 rsplen)
 
 	efc = node->efc;
 
-	spin_lock_irqsave(&node->els_ios_lock, flags);
-
 	if (!node->els_io_enabled) {
 		efc_log_err(efc, "els io alloc disabled\n");
-		spin_unlock_irqrestore(&node->els_ios_lock, flags);
 		return NULL;
 	}
 
 	els = mempool_alloc(efc->els_io_pool, GFP_ATOMIC);
 	if (!els) {
 		atomic_add_return(1, &efc->els_io_alloc_failed_count);
-		spin_unlock_irqrestore(&node->els_ios_lock, flags);
 		return NULL;
 	}
 
@@ -74,7 +70,6 @@ efc_els_io_alloc_size(struct efc_node *node, u32 reqlen, u32 rsplen)
 			       &els->io.req.phys, GFP_DMA);
 	if (!els->io.req.virt) {
 		mempool_free(els, efc->els_io_pool);
-		spin_unlock_irqrestore(&node->els_ios_lock, flags);
 		return NULL;
 	}
 
@@ -94,10 +89,11 @@ efc_els_io_alloc_size(struct efc_node *node, u32 reqlen, u32 rsplen)
 
 		/* add els structure to ELS IO list */
 		INIT_LIST_HEAD(&els->list_entry);
+		spin_lock_irqsave(&node->els_ios_lock, flags);
 		list_add_tail(&els->list_entry, &node->els_ios_list);
+		spin_unlock_irqrestore(&node->els_ios_lock, flags);
 	}
 
-	spin_unlock_irqrestore(&node->els_ios_lock, flags);
 	return els;
 }
 
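Note: in the efc_els_io_alloc_size() hunks above, els_ios_lock is no longer held across the mempool and DMA allocations; it is taken only around the list insertion at the end, which also removes the need for an unlock in every error path. A sketch of that shape with generic names (item, item_list and item_lock are not part of the efct driver):

	/* A sketch only: allocate outside the spinlock, lock just for the insert. */
	#include <linux/list.h>
	#include <linux/mempool.h>
	#include <linux/spinlock.h>

	struct item {
		struct list_head entry;
	};

	static LIST_HEAD(item_list);
	static DEFINE_SPINLOCK(item_lock);

	static struct item *item_alloc(mempool_t *pool)
	{
		struct item *it;
		unsigned long flags;

		it = mempool_alloc(pool, GFP_ATOMIC);	/* no lock held across alloc */
		if (!it)
			return NULL;

		INIT_LIST_HEAD(&it->entry);		/* prepare it before locking */

		spin_lock_irqsave(&item_lock, flags);	/* lock only for the insert  */
		list_add_tail(&it->entry, &item_list);
		spin_unlock_irqrestore(&item_lock, flags);

		return it;
	}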
@@ -318,6 +318,7 @@ static struct tty_driver *gsm_tty_driver;
 #define GSM1_ESCAPE_BITS	0x20
 #define XON			0x11
 #define XOFF			0x13
+#define ISO_IEC_646_MASK	0x7F
 
 static const struct tty_port_operations gsm_port_ops;
 
@@ -527,7 +528,8 @@ static int gsm_stuff_frame(const u8 *input, u8 *output, int len)
 	int olen = 0;
 	while (len--) {
 		if (*input == GSM1_SOF || *input == GSM1_ESCAPE
-		    || *input == XON || *input == XOFF) {
+		    || (*input & ISO_IEC_646_MASK) == XON
+		    || (*input & ISO_IEC_646_MASK) == XOFF) {
 			*output++ = GSM1_ESCAPE;
 			*output++ = *input++ ^ GSM1_ESCAPE_BITS;
 			olen++;
 
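Note: the n_gsm hunk above masks each byte with ISO_IEC_646_MASK (0x7F) before comparing against XON/XOFF, so characters that differ from the flow-control codes only in bit 7 are escaped as well. A small illustration (the helper name is made up):

	/* A sketch only: a byte counts as flow control if its low 7 bits match. */
	#define XON			0x11
	#define XOFF			0x13
	#define ISO_IEC_646_MASK	0x7F

	static int gsm_is_flow_ctrl_char(unsigned char c)
	{
		return (c & ISO_IEC_646_MASK) == XON ||
		       (c & ISO_IEC_646_MASK) == XOFF;
	}
	/* e.g. both 0x11 and 0x91 report true, so both get escaped in the frame */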
@@ -83,8 +83,17 @@ static int of_platform_serial_setup(struct platform_device *ofdev,
 		port->mapsize = resource_size(&resource);
 
 		/* Check for shifted address mapping */
-		if (of_property_read_u32(np, "reg-offset", &prop) == 0)
+		if (of_property_read_u32(np, "reg-offset", &prop) == 0) {
+			if (prop >= port->mapsize) {
+				dev_warn(&ofdev->dev, "reg-offset %u exceeds region size %pa\n",
+					 prop, &port->mapsize);
+				ret = -EINVAL;
+				goto err_unprepare;
+			}
+
 			port->mapbase += prop;
+			port->mapsize -= prop;
+		}
 
 		port->iotype = UPIO_MEM;
 		if (of_property_read_u32(np, "reg-io-width", &prop) == 0) {
 
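Note: the 8250 OF hunk above rejects a reg-offset that falls outside the mapped region and shrinks mapsize by the applied offset, so the remembered base and length stay consistent. The arithmetic, as a plain C sketch rather than driver code:

	/* A sketch only: apply an offset to a mapped region without overrunning it. */
	#include <errno.h>

	static int apply_reg_offset(unsigned long *mapbase, unsigned long *mapsize,
				    unsigned int offset)
	{
		if (offset >= *mapsize)
			return -EINVAL;		/* offset points past the region   */
		*mapbase += offset;		/* shift the base ...              */
		*mapsize -= offset;		/* ... and keep base+size in bounds */
		return 0;
	}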
@@ -5203,8 +5203,30 @@ static const struct pci_device_id serial_pci_tbl[] = {
 	{	PCI_VENDOR_ID_INTASHIELD, PCI_DEVICE_ID_INTASHIELD_IS400,
 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,	/* 135a.0dc0 */
 		pbn_b2_4_115200 },
-	/* Brainboxes Devices */
 	/*
-	* BrainBoxes UC-260
+	 * Brainboxes UC-101
+	 */
+	{	PCI_VENDOR_ID_INTASHIELD, 0x0BA1,
+		PCI_ANY_ID, PCI_ANY_ID,
+		0, 0,
+		pbn_b2_2_115200 },
+	/*
+	 * Brainboxes UC-235/246
+	 */
+	{	PCI_VENDOR_ID_INTASHIELD, 0x0AA1,
+		PCI_ANY_ID, PCI_ANY_ID,
+		0, 0,
+		pbn_b2_1_115200 },
+	/*
+	 * Brainboxes UC-257
+	 */
+	{	PCI_VENDOR_ID_INTASHIELD, 0x0861,
+		PCI_ANY_ID, PCI_ANY_ID,
+		0, 0,
+		pbn_b2_2_115200 },
+	/*
+	 * Brainboxes UC-260/271/701/756
 	 */
 	{	PCI_VENDOR_ID_INTASHIELD, 0x0D21,
 		PCI_ANY_ID, PCI_ANY_ID,
@@ -5212,7 +5234,81 @@ static const struct pci_device_id serial_pci_tbl[] = {
 		pbn_b2_4_115200 },
 	{	PCI_VENDOR_ID_INTASHIELD, 0x0E34,
 		PCI_ANY_ID, PCI_ANY_ID,
-		 PCI_CLASS_COMMUNICATION_MULTISERIAL << 8, 0xffff00,
+		PCI_CLASS_COMMUNICATION_MULTISERIAL << 8, 0xffff00,
 		pbn_b2_4_115200 },
+	/*
+	 * Brainboxes UC-268
+	 */
+	{	PCI_VENDOR_ID_INTASHIELD, 0x0841,
+		PCI_ANY_ID, PCI_ANY_ID,
+		0, 0,
+		pbn_b2_4_115200 },
+	/*
+	 * Brainboxes UC-275/279
+	 */
+	{	PCI_VENDOR_ID_INTASHIELD, 0x0881,
+		PCI_ANY_ID, PCI_ANY_ID,
+		0, 0,
+		pbn_b2_8_115200 },
+	/*
+	 * Brainboxes UC-302
+	 */
+	{	PCI_VENDOR_ID_INTASHIELD, 0x08E1,
+		PCI_ANY_ID, PCI_ANY_ID,
+		0, 0,
+		pbn_b2_2_115200 },
+	/*
+	 * Brainboxes UC-310
+	 */
+	{	PCI_VENDOR_ID_INTASHIELD, 0x08C1,
+		PCI_ANY_ID, PCI_ANY_ID,
+		0, 0,
+		pbn_b2_2_115200 },
+	/*
+	 * Brainboxes UC-313
+	 */
+	{	PCI_VENDOR_ID_INTASHIELD, 0x08A3,
+		PCI_ANY_ID, PCI_ANY_ID,
+		0, 0,
+		pbn_b2_2_115200 },
+	/*
+	 * Brainboxes UC-320/324
+	 */
+	{	PCI_VENDOR_ID_INTASHIELD, 0x0A61,
+		PCI_ANY_ID, PCI_ANY_ID,
+		0, 0,
+		pbn_b2_1_115200 },
+	/*
+	 * Brainboxes UC-346
+	 */
+	{	PCI_VENDOR_ID_INTASHIELD, 0x0B02,
+		PCI_ANY_ID, PCI_ANY_ID,
+		0, 0,
+		pbn_b2_4_115200 },
+	/*
+	 * Brainboxes UC-357
+	 */
+	{	PCI_VENDOR_ID_INTASHIELD, 0x0A81,
+		PCI_ANY_ID, PCI_ANY_ID,
+		0, 0,
+		pbn_b2_2_115200 },
+	{	PCI_VENDOR_ID_INTASHIELD, 0x0A83,
+		PCI_ANY_ID, PCI_ANY_ID,
+		0, 0,
+		pbn_b2_2_115200 },
+	/*
+	 * Brainboxes UC-368
+	 */
+	{	PCI_VENDOR_ID_INTASHIELD, 0x0C41,
+		PCI_ANY_ID, PCI_ANY_ID,
+		0, 0,
+		pbn_b2_4_115200 },
+	/*
+	 * Brainboxes UC-420/431
+	 */
+	{	PCI_VENDOR_ID_INTASHIELD, 0x0921,
+		PCI_ANY_ID, PCI_ANY_ID,
+		0, 0,
+		pbn_b2_4_115200 },
 	/*
 	 * Perle PCI-RAS cards
 
@@ -1615,8 +1615,12 @@ static void pl011_set_mctrl(struct uart_port *port, unsigned int mctrl)
 	    container_of(port, struct uart_amba_port, port);
 	unsigned int cr;
 
-	if (port->rs485.flags & SER_RS485_ENABLED)
-		mctrl &= ~TIOCM_RTS;
+	if (port->rs485.flags & SER_RS485_ENABLED) {
+		if (port->rs485.flags & SER_RS485_RTS_AFTER_SEND)
+			mctrl &= ~TIOCM_RTS;
+		else
+			mctrl |= TIOCM_RTS;
+	}
 
 	cr = pl011_read(uap, REG_CR);
 
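Note: the pl011 hunk above stops unconditionally clearing RTS whenever RS-485 mode is on; the idle RTS state now follows the SER_RS485_RTS_AFTER_SEND setting passed down by userspace. For context, a user-space sketch of how that flag reaches the driver (the device path and flag choice are illustrative, and error handling is omitted):

	/* A sketch only: configuring RS-485 from user space via TIOCSRS485. */
	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <linux/serial.h>

	int main(void)
	{
		struct serial_rs485 rs485 = { 0 };
		int fd = open("/dev/ttyAMA0", O_RDWR | O_NOCTTY); /* typical pl011 node */

		rs485.flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND;
		/* with the fix, pl011_set_mctrl() keeps the idle RTS state consistent
		 * with the RTS_AFTER_SEND choice instead of always clearing RTS */
		ioctl(fd, TIOCSRS485, &rs485);
		return 0;
	}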
Some files were not shown because too many files have changed in this diff.