Merge 5.15.82 into android14-5.15
Changes in 5.15.82
arm64: mte: Avoid setting PG_mte_tagged if no tags cleared or restored
drm/i915: Create a dummy object for gen6 ppgtt
drm/i915/gt: Use i915_vm_put on ppgtt_create error paths
erofs: fix order >= MAX_ORDER warning due to crafted negative i_size
btrfs: sink iterator parameter to btrfs_ioctl_logical_to_ino
btrfs: free btrfs_path before copying inodes to userspace
spi: spi-imx: Fix spi_bus_clk if requested clock is higher than input clock
btrfs: move QUOTA_ENABLED check to rescan_should_stop from btrfs_qgroup_rescan_worker
btrfs: qgroup: fix sleep from invalid context bug in btrfs_qgroup_inherit()
drm/display/dp_mst: Fix drm_dp_mst_add_affected_dsc_crtcs() return code
drm/amdgpu: update drm_display_info correctly when the edid is read
drm/amdgpu: Partially revert "drm/amdgpu: update drm_display_info correctly when the edid is read"
iio: health: afe4403: Fix oob read in afe4403_read_raw
iio: health: afe4404: Fix oob read in afe4404_[read|write]_raw
iio: light: rpr0521: add missing Kconfig dependencies
bpf, perf: Use subprog name when reporting subprog ksymbol
scripts/faddr2line: Fix regression in name resolution on ppc64le
ARM: at91: rm9200: fix usb device clock id
libbpf: Handle size overflow for ringbuf mmap
hwmon: (ltc2947) fix temperature scaling
hwmon: (ina3221) Fix shunt sum critical calculation
hwmon: (i5500_temp) fix missing pci_disable_device()
hwmon: (ibmpex) Fix possible UAF when ibmpex_register_bmc() fails
bpf: Do not copy spin lock field from user in bpf_selem_alloc
nvmem: rmem: Fix return value check in rmem_read()
of: property: decrement node refcount in of_fwnode_get_reference_args()
ixgbevf: Fix resource leak in ixgbevf_init_module()
i40e: Fix error handling in i40e_init_module()
fm10k: Fix error handling in fm10k_init_module()
iavf: remove redundant ret variable
iavf: Fix error handling in iavf_init_module()
e100: Fix possible use after free in e100_xmit_prepare
net/mlx5: DR, Rename list field in matcher struct to list_node
net/mlx5: DR, Fix uninitialized var warning
net/mlx5: Fix uninitialized variable bug in outlen_write()
net/mlx5e: Fix use-after-free when reverting termination table
can: sja1000_isa: sja1000_isa_probe(): add missing free_sja1000dev()
can: cc770: cc770_isa_probe(): add missing free_cc770dev()
can: etas_es58x: es58x_init_netdev(): free netdev when register_candev()
can: m_can: pci: add missing m_can_class_free_dev() in probe/remove methods
can: m_can: Add check for devm_clk_get
qlcnic: fix sleep-in-atomic-context bugs caused by msleep
aquantia: Do not purge addresses when setting the number of rings
wifi: cfg80211: fix buffer overflow in elem comparison
wifi: cfg80211: don't allow multi-BSSID in S1G
wifi: mac8021: fix possible oob access in ieee80211_get_rate_duration
net: phy: fix null-ptr-deref while probe() failed
net: ethernet: ti: am65-cpsw: fix error handling in am65_cpsw_nuss_probe()
net: net_netdev: Fix error handling in ntb_netdev_init_module()
net/9p: Fix a potential socket leak in p9_socket_open
net: ethernet: nixge: fix NULL dereference
net: wwan: iosm: fix kernel test robot reported error
net: wwan: iosm: fix dma_alloc_coherent incompatible pointer type
dsa: lan9303: Correct stat name
tipc: re-fetch skb cb after tipc_msg_validate
net: hsr: Fix potential use-after-free
net: mdiobus: fix unbalanced node reference count
afs: Fix fileserver probe RTT handling
net: tun: Fix use-after-free in tun_detach()
packet: do not set TP_STATUS_CSUM_VALID on CHECKSUM_COMPLETE
sctp: fix memory leak in sctp_stream_outq_migrate()
net: ethernet: renesas: ravb: Fix promiscuous mode after system resumed
hwmon: (coretemp) Check for null before removing sysfs attrs
hwmon: (coretemp) fix pci device refcount leak in nv1a_ram_new()
riscv: vdso: fix section overlapping under some conditions
riscv: mm: Proper page permissions after initmem free
ALSA: dice: fix regression for Lexicon I-ONIX FW810S
error-injection: Add prompt for function error injection
tools/vm/slabinfo-gnuplot: use "grep -E" instead of "egrep"
nilfs2: fix NULL pointer dereference in nilfs_palloc_commit_free_entry()
x86/bugs: Make sure MSR_SPEC_CTRL is updated properly upon resume from S3
pinctrl: intel: Save and restore pins in "direct IRQ" mode
v4l2: don't fall back to follow_pfn() if pin_user_pages_fast() fails
net: stmmac: Set MAC's flow control register to reflect current settings
mmc: mmc_test: Fix removal of debugfs file
mmc: core: Fix ambiguous TRIM and DISCARD arg
mmc: sdhci-esdhc-imx: correct CQHCI exit halt state check
mmc: sdhci-sprd: Fix no reset data and command after voltage switch
mmc: sdhci: Fix voltage switch delay
drm/amdgpu: temporarily disable broken Clang builds due to blown stack-frame
drm/amdgpu: enable Vangogh VCN indirect sram mode
drm/i915: Fix negative value passed as remaining time
drm/i915: Never return 0 if not all requests retired
tracing/osnoise: Fix duration type
tracing: Fix race where histograms can be called before the event
tracing: Free buffers when a used dynamic event is removed
io_uring: update res mask in io_poll_check_events
io_uring: fix tw losing poll events
io_uring: cmpxchg for poll arm refs release
io_uring: make poll refs more robust
io_uring/poll: fix poll_refs race with cancelation
KVM: x86/mmu: Fix race condition in direct_page_fault
ASoC: ops: Fix bounds check for _sx controls
pinctrl: single: Fix potential division by zero
riscv: Sync efi page table's kernel mappings before switching
riscv: fix race when vmap stack overflow
riscv: kexec: Fixup irq controller broken in kexec crash path
nvme: fix SRCU protection of nvme_ns_head list
iommu/vt-d: Fix PCI device refcount leak in has_external_pci()
iommu/vt-d: Fix PCI device refcount leak in dmar_dev_scope_init()
mm: __isolate_lru_page_prepare() in isolate_migratepages_block()
mm: migrate: fix THP's mapcount on isolation
parisc: Increase FRAME_WARN to 2048 bytes on parisc
Kconfig.debug: provide a little extra FRAME_WARN leeway when KASAN is enabled
selftests: net: add delete nexthop route warning test
selftests: net: fix nexthop warning cleanup double ip typo
ipv4: Handle attempt to delete multipath route when fib_info contains an nh reference
ipv4: Fix route deletion when nexthop info is not specified
serial: stm32: Factor out GPIO RTS toggling into separate function
serial: stm32: Use TC interrupt to deassert GPIO RTS in RS485 mode
serial: stm32: Deassert Transmit Enable on ->rs485_config()
i2c: npcm7xx: Fix error handling in npcm_i2c_init()
i2c: imx: Only DMA messages with I2C_M_DMA_SAFE flag set
ACPI: HMAT: remove unnecessary variable initialization
ACPI: HMAT: Fix initiator registration for single-initiator systems
Revert "clocksource/drivers/riscv: Events are stopped during CPU suspend"
char: tpm: Protect tpm_pm_suspend with locks
Input: raydium_ts_i2c - fix memory leak in raydium_i2c_send()
ipc/sem: Fix dangling sem_array access in semtimedop race
proc: avoid integer type confusion in get_proc_long
proc: proc_skip_spaces() shouldn't think it is working on C strings
Linux 5.15.82
Change-Id: I1631261fcfd321674c546c41628bb3283e02f1a5
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
---
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 15
-SUBLEVEL = 81
+SUBLEVEL = 82
 EXTRAVERSION =
 NAME = Trick or Treat
 
--- a/arch/arm/boot/dts/at91rm9200.dtsi
+++ b/arch/arm/boot/dts/at91rm9200.dtsi
@@ -660,7 +660,7 @@
     compatible = "atmel,at91rm9200-udc";
     reg = <0xfffb0000 0x4000>;
     interrupts = <11 IRQ_TYPE_LEVEL_HIGH 2>;
-    clocks = <&pmc PMC_TYPE_PERIPHERAL 11>, <&pmc PMC_TYPE_SYSTEM 2>;
+    clocks = <&pmc PMC_TYPE_PERIPHERAL 11>, <&pmc PMC_TYPE_SYSTEM 1>;
     clock-names = "pclk", "hclk";
     status = "disabled";
 };
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -56,6 +56,11 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
      * the new page->flags are visible before the tags were updated.
      */
     smp_wmb();
-    mte_clear_page_tags(page_address(page));
+    /*
+     * Test PG_mte_tagged again in case it was racing with another
+     * set_pte_at().
+     */
+    if (!test_and_set_bit(PG_mte_tagged, &page->flags))
+        mte_clear_page_tags(page_address(page));
 }
 
@@ -72,7 +77,7 @@ void mte_sync_tags(pte_t old_pte, pte_t pte)
 
     /* if PG_mte_tagged is set, tags have already been initialised */
     for (i = 0; i < nr_pages; i++, page++) {
-        if (!test_and_set_bit(PG_mte_tagged, &page->flags))
+        if (!test_bit(PG_mte_tagged, &page->flags))
             mte_sync_page_tags(page, old_pte, check_swap,
                        pte_is_tagged);
     }
--- a/arch/arm64/mm/mteswap.c
+++ b/arch/arm64/mm/mteswap.c
@@ -62,6 +62,11 @@ bool mte_restore_tags(swp_entry_t entry, struct page *page)
      * the new page->flags are visible before the tags were updated.
      */
     smp_wmb();
-    mte_restore_page_tags(page_address(page), tags);
+    /*
+     * Test PG_mte_tagged again in case it was racing with another
+     * set_pte_at().
+     */
+    if (!test_and_set_bit(PG_mte_tagged, &page->flags))
+        mte_restore_page_tags(page_address(page), tags);
 
     return true;
--- a/arch/riscv/include/asm/asm.h
+++ b/arch/riscv/include/asm/asm.h
@@ -23,6 +23,7 @@
 #define REG_L       __REG_SEL(ld, lw)
 #define REG_S       __REG_SEL(sd, sw)
 #define REG_SC      __REG_SEL(sc.d, sc.w)
+#define REG_AMOSWAP_AQ  __REG_SEL(amoswap.d.aq, amoswap.w.aq)
 #define REG_ASM     __REG_SEL(.dword, .word)
 #define SZREG       __REG_SEL(8, 4)
 #define LGREG       __REG_SEL(3, 2)
--- a/arch/riscv/include/asm/efi.h
+++ b/arch/riscv/include/asm/efi.h
@@ -10,6 +10,7 @@
 #include <asm/mmu_context.h>
 #include <asm/ptrace.h>
 #include <asm/tlbflush.h>
+#include <asm/pgalloc.h>
 
 #ifdef CONFIG_EFI
 extern void efi_init(void);
@@ -20,7 +21,10 @@ extern void efi_init(void);
 int efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md);
 int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);
 
-#define arch_efi_call_virt_setup()      efi_virtmap_load()
+#define arch_efi_call_virt_setup()      ({      \
+        sync_kernel_mappings(efi_mm.pgd);       \
+        efi_virtmap_load();                     \
+    })
 #define arch_efi_call_virt_teardown()   efi_virtmap_unload()
 
 #define arch_efi_call_virt(p, f, args...) p->f(args)
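A note on the two RISC-V EFI hunks above: efi_mm's page table is populated once at boot, so kernel mappings created later (such as a VMAP_STACK kernel stack) can be missing from it, and touching them during an EFI runtime call faults. The fix copies the kernel half of init_mm's PGD into the EFI page table before every call. A minimal stand-alone sketch of that copy, using the constants from the patch (the pgd_t typedef and table sizes here are illustrative assumptions, not the kernel definitions):

    #include <string.h>

    typedef unsigned long pgd_t;            /* assumption: opaque PGD entry   */
    #define PTRS_PER_PGD      512           /* assumption: Sv39-style layout  */
    #define USER_PTRS_PER_PGD 256           /* assumption: lower half is user */

    /* Mirror of sync_kernel_mappings() above: copy the kernel-space half of
     * the reference page table into @pgd so that kernel mappings created
     * after @pgd was allocated become visible through it as well. */
    static void sync_kernel_mappings(pgd_t *pgd, const pgd_t *init_pgd)
    {
        memcpy(pgd + USER_PTRS_PER_PGD,
               init_pgd + USER_PTRS_PER_PGD,
               (PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
    }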
--- a/arch/riscv/include/asm/pgalloc.h
+++ b/arch/riscv/include/asm/pgalloc.h
@@ -38,6 +38,13 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
 }
 #endif /* __PAGETABLE_PMD_FOLDED */
 
+static inline void sync_kernel_mappings(pgd_t *pgd)
+{
+    memcpy(pgd + USER_PTRS_PER_PGD,
+           init_mm.pgd + USER_PTRS_PER_PGD,
+           (PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
+}
+
 static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 {
     pgd_t *pgd;
@@ -46,9 +53,7 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
     if (likely(pgd != NULL)) {
         memset(pgd, 0, USER_PTRS_PER_PGD * sizeof(pgd_t));
         /* Copy kernel mappings */
-        memcpy(pgd + USER_PTRS_PER_PGD,
-               init_mm.pgd + USER_PTRS_PER_PGD,
-               (PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
+        sync_kernel_mappings(pgd);
     }
     return pgd;
 }
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -387,6 +387,19 @@ handle_syscall_trace_exit:
 
 #ifdef CONFIG_VMAP_STACK
 handle_kernel_stack_overflow:
+    /*
+     * Takes the psuedo-spinlock for the shadow stack, in case multiple
+     * harts are concurrently overflowing their kernel stacks.  We could
+     * store any value here, but since we're overflowing the kernel stack
+     * already we only have SP to use as a scratch register.  So we just
+     * swap in the address of the spinlock, as that's definately non-zero.
+     *
+     * Pairs with a store_release in handle_bad_stack().
+     */
+1:  la sp, spin_shadow_stack
+    REG_AMOSWAP_AQ sp, sp, (sp)
+    bnez sp, 1b
+
     la sp, shadow_stack
     addi sp, sp, SHADOW_OVERFLOW_STACK_SIZE
--- a/arch/riscv/kernel/machine_kexec.c
+++ b/arch/riscv/kernel/machine_kexec.c
@@ -15,6 +15,8 @@
 #include <linux/compiler.h> /* For unreachable() */
 #include <linux/cpu.h>      /* For cpu_down() */
 #include <linux/reboot.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
 
 /*
  * kexec_image_info - Print received image details
@@ -154,6 +156,37 @@ void crash_smp_send_stop(void)
     cpus_stopped = 1;
 }
 
+static void machine_kexec_mask_interrupts(void)
+{
+    unsigned int i;
+    struct irq_desc *desc;
+
+    for_each_irq_desc(i, desc) {
+        struct irq_chip *chip;
+        int ret;
+
+        chip = irq_desc_get_chip(desc);
+        if (!chip)
+            continue;
+
+        /*
+         * First try to remove the active state. If this
+         * fails, try to EOI the interrupt.
+         */
+        ret = irq_set_irqchip_state(i, IRQCHIP_STATE_ACTIVE, false);
+
+        if (ret && irqd_irq_inprogress(&desc->irq_data) &&
+            chip->irq_eoi)
+            chip->irq_eoi(&desc->irq_data);
+
+        if (chip->irq_mask)
+            chip->irq_mask(&desc->irq_data);
+
+        if (chip->irq_disable && !irqd_irq_disabled(&desc->irq_data))
+            chip->irq_disable(&desc->irq_data);
+    }
+}
+
 /*
  * machine_crash_shutdown - Prepare to kexec after a kernel crash
  *
@@ -169,6 +202,8 @@ machine_crash_shutdown(struct pt_regs *regs)
     crash_smp_send_stop();
 
     crash_save_cpu(regs, smp_processor_id());
+    machine_kexec_mask_interrupts();
+
     pr_info("Starting crashdump kernel...\n");
 }
--- a/arch/riscv/kernel/setup.c
+++ b/arch/riscv/kernel/setup.c
@@ -331,10 +331,11 @@ subsys_initcall(topology_init);
 
 void free_initmem(void)
 {
-    if (IS_ENABLED(CONFIG_STRICT_KERNEL_RWX))
-        set_kernel_memory(lm_alias(__init_begin), lm_alias(__init_end),
-                  IS_ENABLED(CONFIG_64BIT) ?
-                    set_memory_rw : set_memory_rw_nx);
+    if (IS_ENABLED(CONFIG_STRICT_KERNEL_RWX)) {
+        set_kernel_memory(lm_alias(__init_begin), lm_alias(__init_end), set_memory_rw_nx);
+        if (IS_ENABLED(CONFIG_64BIT))
+            set_kernel_memory(__init_begin, __init_end, set_memory_nx);
+    }
 
     free_initmem_default(POISON_FREE_INITMEM);
 }
--- a/arch/riscv/kernel/traps.c
+++ b/arch/riscv/kernel/traps.c
@@ -218,11 +218,29 @@ asmlinkage unsigned long get_overflow_stack(void)
         OVERFLOW_STACK_SIZE;
 }
 
+/*
+ * A pseudo spinlock to protect the shadow stack from being used by multiple
+ * harts concurrently.  This isn't a real spinlock because the lock side must
+ * be taken without a valid stack and only a single register, it's only taken
+ * while in the process of panicing anyway so the performance and error
+ * checking a proper spinlock gives us doesn't matter.
+ */
+unsigned long spin_shadow_stack;
+
 asmlinkage void handle_bad_stack(struct pt_regs *regs)
 {
     unsigned long tsk_stk = (unsigned long)current->stack;
     unsigned long ovf_stk = (unsigned long)this_cpu_ptr(overflow_stack);
 
+    /*
+     * We're done with the shadow stack by this point, as we're on the
+     * overflow stack.  Tell any other concurrent overflowing harts that
+     * they can proceed with panicing by releasing the pseudo-spinlock.
+     *
+     * This pairs with an amoswap.aq in handle_kernel_stack_overflow.
+     */
+    smp_store_release(&spin_shadow_stack, 0);
+
     console_verbose();
 
     pr_emerg("Insufficient stack space to handle exception!\n");
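The entry.S and traps.c hunks above form an acquire/release hand-off: the overflow path spins with amoswap.aq until it reads zero, and handle_bad_stack() releases with smp_store_release() once it has moved off the shadow stack. A rough C11 analogue of the pattern (illustrative only; the kernel side must take the lock in assembly because no stack or spare register is available at that point):

    #include <stdatomic.h>

    static atomic_ulong spin_shadow_stack;      /* 0 = free, non-zero = held */

    static void shadow_stack_lock(void)
    {
        /* Swap in any non-zero token; proceed once we observe 0 ("free").
         * The acquire ordering pairs with the release in unlock(). */
        while (atomic_exchange_explicit(&spin_shadow_stack, 1UL,
                                        memory_order_acquire) != 0)
            ;
    }

    static void shadow_stack_unlock(void)
    {
        /* Like smp_store_release(&spin_shadow_stack, 0) in the patch. */
        atomic_store_explicit(&spin_shadow_stack, 0UL, memory_order_release);
    }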
--- a/arch/riscv/kernel/vdso/Makefile
+++ b/arch/riscv/kernel/vdso/Makefile
@@ -17,6 +17,7 @@ vdso-syms += flush_icache
 obj-vdso = $(patsubst %, %.o, $(vdso-syms)) note.o
 
 ccflags-y := -fno-stack-protector
+ccflags-y += -DDISABLE_BRANCH_PROFILING
 
 ifneq ($(c-gettimeofday-y),)
 CFLAGS_vgettimeofday.o += -fPIC -include $(c-gettimeofday-y)
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -310,7 +310,7 @@ static inline void indirect_branch_prediction_barrier(void)
 /* The Intel SPEC CTRL MSR base value cache */
 extern u64 x86_spec_ctrl_base;
 DECLARE_PER_CPU(u64, x86_spec_ctrl_current);
-extern void write_spec_ctrl_current(u64 val, bool force);
+extern void update_spec_ctrl_cond(u64 val);
 extern u64 spec_ctrl_current(void);
 
 /*
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -60,11 +60,18 @@ EXPORT_SYMBOL_GPL(x86_spec_ctrl_current);
 
 static DEFINE_MUTEX(spec_ctrl_mutex);
 
+/* Update SPEC_CTRL MSR and its cached copy unconditionally */
+static void update_spec_ctrl(u64 val)
+{
+    this_cpu_write(x86_spec_ctrl_current, val);
+    wrmsrl(MSR_IA32_SPEC_CTRL, val);
+}
+
 /*
  * Keep track of the SPEC_CTRL MSR value for the current task, which may differ
  * from x86_spec_ctrl_base due to STIBP/SSB in __speculation_ctrl_update().
  */
-void write_spec_ctrl_current(u64 val, bool force)
+void update_spec_ctrl_cond(u64 val)
 {
     if (this_cpu_read(x86_spec_ctrl_current) == val)
         return;
@@ -75,7 +82,7 @@ void update_spec_ctrl_cond(u64 val)
      * When KERNEL_IBRS this MSR is written on return-to-user, unless
      * forced the update can be delayed until that time.
      */
-    if (force || !cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS))
+    if (!cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS))
         wrmsrl(MSR_IA32_SPEC_CTRL, val);
 }
 
@@ -1328,7 +1335,7 @@ static void __init spec_ctrl_disable_kernel_rrsba(void)
 
     if (ia32_cap & ARCH_CAP_RRSBA) {
         x86_spec_ctrl_base |= SPEC_CTRL_RRSBA_DIS_S;
-        write_spec_ctrl_current(x86_spec_ctrl_base, true);
+        update_spec_ctrl(x86_spec_ctrl_base);
     }
 }
 
@@ -1450,7 +1457,7 @@ static void __init spectre_v2_select_mitigation(void)
 
     if (spectre_v2_in_ibrs_mode(mode)) {
         x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
-        write_spec_ctrl_current(x86_spec_ctrl_base, true);
+        update_spec_ctrl(x86_spec_ctrl_base);
     }
 
     switch (mode) {
@@ -1564,7 +1571,7 @@ static void __init spectre_v2_select_mitigation(void)
 static void update_stibp_msr(void * __unused)
 {
     u64 val = spec_ctrl_current() | (x86_spec_ctrl_base & SPEC_CTRL_STIBP);
-    write_spec_ctrl_current(val, true);
+    update_spec_ctrl(val);
 }
 
 /* Update x86_spec_ctrl_base in case SMT state changed. */
@@ -1797,7 +1804,7 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
         x86_amd_ssb_disable();
     } else {
         x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
-        write_spec_ctrl_current(x86_spec_ctrl_base, true);
+        update_spec_ctrl(x86_spec_ctrl_base);
     }
 }
 
@@ -2048,7 +2055,7 @@ int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
 void x86_spec_ctrl_setup_ap(void)
 {
     if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
-        write_spec_ctrl_current(x86_spec_ctrl_base, true);
+        update_spec_ctrl(x86_spec_ctrl_base);
 
     if (ssb_mode == SPEC_STORE_BYPASS_DISABLE)
         x86_amd_ssb_disable();
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -584,7 +584,7 @@ static __always_inline void __speculation_ctrl_update(unsigned long tifp,
     }
 
     if (updmsr)
-        write_spec_ctrl_current(msr, false);
+        update_spec_ctrl_cond(msr);
 }
 
 static unsigned long speculation_ctrl_update_tif(struct task_struct *tsk)
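Taken together, the bugs.c and process.c hunks split one helper into two: update_spec_ctrl() always writes the MSR (boot, AP bring-up, mitigation selection), while update_spec_ctrl_cond() may defer the write when KERNEL_IBRS means it happens on return to user space anyway. The S3 resume bug was that the cached-value short-circuit ran before the force check, so an MSR reset by suspend was never rewritten. A compact sketch of the intended behaviour (stand-in names and globals; not kernel code):

    #include <stdbool.h>
    #include <stdint.h>

    static uint64_t cached_spec_ctrl;   /* stands in for x86_spec_ctrl_current */
    static bool kernel_ibrs;            /* stands in for the KERNEL_IBRS check */

    static void wrmsr_spec_ctrl(uint64_t val) { (void)val; /* hardware write */ }

    /* Unconditional: refresh the cache and write the MSR immediately. */
    static void update_spec_ctrl(uint64_t val)
    {
        cached_spec_ctrl = val;
        wrmsr_spec_ctrl(val);
    }

    /* Conditional: skip the write if it happens on exit-to-user anyway. */
    static void update_spec_ctrl_cond(uint64_t val)
    {
        if (cached_spec_ctrl == val)
            return;
        cached_spec_ctrl = val;
        if (!kernel_ibrs)
            wrmsr_spec_ctrl(val);
    }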
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2369,6 +2369,7 @@ static bool __kvm_mmu_prepare_zap_page(struct kvm *kvm,
 {
     bool list_unstable;
 
+    lockdep_assert_held_write(&kvm->mmu_lock);
     trace_kvm_mmu_prepare_zap_page(sp);
     ++kvm->stat.mmu_shadow_zapped;
     *nr_zapped = mmu_zap_unsync_children(kvm, sp, invalid_list);
@@ -4019,16 +4020,17 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 
     if (!is_noslot_pfn(pfn) && mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, hva))
         goto out_unlock;
-    r = make_mmu_pages_available(vcpu);
-    if (r)
-        goto out_unlock;
 
-    if (is_tdp_mmu_fault)
+    if (is_tdp_mmu_fault) {
         r = kvm_tdp_mmu_map(vcpu, gpa, error_code, map_writable, max_level,
                     pfn, prefault);
-    else
+    } else {
+        r = make_mmu_pages_available(vcpu);
+        if (r)
+            goto out_unlock;
         r = __direct_map(vcpu, gpa, error_code, map_writable, max_level, pfn,
                  prefault, is_tdp);
+    }
 
 out_unlock:
     if (is_tdp_mmu_fault)
--- a/drivers/acpi/numa/hmat.c
+++ b/drivers/acpi/numa/hmat.c
@@ -563,17 +563,26 @@ static int initiator_cmp(void *priv, const struct list_head *a,
 {
     struct memory_initiator *ia;
     struct memory_initiator *ib;
-    unsigned long *p_nodes = priv;
 
     ia = list_entry(a, struct memory_initiator, node);
     ib = list_entry(b, struct memory_initiator, node);
 
-    set_bit(ia->processor_pxm, p_nodes);
-    set_bit(ib->processor_pxm, p_nodes);
-
     return ia->processor_pxm - ib->processor_pxm;
 }
 
+static int initiators_to_nodemask(unsigned long *p_nodes)
+{
+    struct memory_initiator *initiator;
+
+    if (list_empty(&initiators))
+        return -ENXIO;
+
+    list_for_each_entry(initiator, &initiators, node)
+        set_bit(initiator->processor_pxm, p_nodes);
+
+    return 0;
+}
+
 static void hmat_register_target_initiators(struct memory_target *target)
 {
     static DECLARE_BITMAP(p_nodes, MAX_NUMNODES);
@@ -610,7 +619,10 @@ static void hmat_register_target_initiators(struct memory_target *target)
      * initiators.
      */
     bitmap_zero(p_nodes, MAX_NUMNODES);
-    list_sort(p_nodes, &initiators, initiator_cmp);
+    list_sort(NULL, &initiators, initiator_cmp);
+    if (initiators_to_nodemask(p_nodes) < 0)
+        return;
+
     if (!access0done) {
         for (i = WRITE_LATENCY; i <= READ_BANDWIDTH; i++) {
             loc = localities_types[i];
@@ -644,8 +656,9 @@ static void hmat_register_target_initiators(struct memory_target *target)
 
     /* Access 1 ignores Generic Initiators */
     bitmap_zero(p_nodes, MAX_NUMNODES);
-    list_sort(p_nodes, &initiators, initiator_cmp);
-    best = 0;
+    if (initiators_to_nodemask(p_nodes) < 0)
+        return;
+
     for (i = WRITE_LATENCY; i <= READ_BANDWIDTH; i++) {
         loc = localities_types[i];
         if (!loc)
--- a/drivers/char/tpm/tpm-interface.c
+++ b/drivers/char/tpm/tpm-interface.c
@@ -401,13 +401,14 @@ int tpm_pm_suspend(struct device *dev)
         !pm_suspend_via_firmware())
         goto suspended;
 
-    if (!tpm_chip_start(chip)) {
+    rc = tpm_try_get_ops(chip);
+    if (!rc) {
         if (chip->flags & TPM_CHIP_FLAG_TPM2)
             tpm2_shutdown(chip, TPM2_SU_STATE);
         else
             rc = tpm1_pm_suspend(chip, tpm_suspend_pcr);
 
-        tpm_chip_stop(chip);
+        tpm_put_ops(chip);
     }
 
 suspended:
--- a/drivers/clk/at91/at91rm9200.c
+++ b/drivers/clk/at91/at91rm9200.c
@@ -40,7 +40,7 @@ static const struct clk_pll_characteristics rm9200_pll_characteristics = {
 };
 
 static const struct sck at91rm9200_systemck[] = {
-    { .n = "udpck", .p = "usbck",  .id = 2 },
+    { .n = "udpck", .p = "usbck",  .id = 1 },
     { .n = "uhpck", .p = "usbck",  .id = 4 },
     { .n = "pck0",  .p = "prog0",  .id = 8 },
    { .n = "pck1",  .p = "prog1",  .id = 9 },
--- a/drivers/clocksource/timer-riscv.c
+++ b/drivers/clocksource/timer-riscv.c
@@ -32,7 +32,7 @@ static int riscv_clock_next_event(unsigned long delta,
 static unsigned int riscv_clock_event_irq;
 static DEFINE_PER_CPU(struct clock_event_device, riscv_clock_event) = {
     .name           = "riscv_timer_clockevent",
-    .features       = CLOCK_EVT_FEAT_ONESHOT | CLOCK_EVT_FEAT_C3STOP,
+    .features       = CLOCK_EVT_FEAT_ONESHOT,
     .rating         = 100,
     .set_next_event = riscv_clock_next_event,
 };
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
@@ -315,8 +315,10 @@ static void amdgpu_connector_get_edid(struct drm_connector *connector)
     if (!amdgpu_connector->edid) {
         /* some laptops provide a hardcoded edid in rom for LCDs */
         if (((connector->connector_type == DRM_MODE_CONNECTOR_LVDS) ||
-             (connector->connector_type == DRM_MODE_CONNECTOR_eDP)))
+             (connector->connector_type == DRM_MODE_CONNECTOR_eDP))) {
             amdgpu_connector->edid = amdgpu_connector_get_hardcoded_edid(adev);
+            drm_connector_update_edid_property(connector, amdgpu_connector->edid);
+        }
     }
 }
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
@@ -149,6 +149,9 @@ int amdgpu_vcn_sw_init(struct amdgpu_device *adev)
         break;
     case CHIP_VANGOGH:
         fw_name = FIRMWARE_VANGOGH;
+        if ((adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) &&
+            (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG))
+            adev->vcn.indirect_sram = true;
         break;
     case CHIP_DIMGREY_CAVEFISH:
         fw_name = FIRMWARE_DIMGREY_CAVEFISH;
--- a/drivers/gpu/drm/amd/display/Kconfig
+++ b/drivers/gpu/drm/amd/display/Kconfig
@@ -5,6 +5,7 @@ menu "Display Engine Configuration"
 config DRM_AMD_DC
     bool "AMD DC - Enable new display engine"
     default y
+    depends on BROKEN || !CC_IS_CLANG || X86_64 || SPARC64 || ARM64
     select SND_HDA_COMPONENT if SND_HDA_CORE
     select DRM_AMD_DC_DCN if (X86 || PPC64) && !(KCOV_INSTRUMENT_ALL && KCOV_ENABLE_COMPARISONS)
     help
@@ -12,6 +13,12 @@ config DRM_AMD_DC
       support for AMDGPU. This adds required support for Vega and
       Raven ASICs.
 
+      calculate_bandwidth() is presently broken on all !(X86_64 || SPARC64 || ARM64)
+      architectures built with Clang (all released versions), whereby the stack
+      frame gets blown up to well over 5k.  This would cause an immediate kernel
+      panic on most architectures.  We'll revert this when the following bug report
+      has been resolved: https://github.com/llvm/llvm-project/issues/41896.
+
 config DRM_AMD_DC_DCN
     def_bool n
     help
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -2997,13 +2997,12 @@ void amdgpu_dm_update_connector_after_detect(
         aconnector->edid =
             (struct edid *)sink->dc_edid.raw_edid;
 
-        drm_connector_update_edid_property(connector,
-                           aconnector->edid);
         if (aconnector->dc_link->aux_mode)
             drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux,
                         aconnector->edid);
     }
 
+    drm_connector_update_edid_property(connector, aconnector->edid);
     amdgpu_dm_update_freesync_caps(connector, aconnector->edid);
     update_connector_ext_caps(aconnector);
 } else {
--- a/drivers/gpu/drm/drm_dp_mst_topology.c
+++ b/drivers/gpu/drm/drm_dp_mst_topology.c
@@ -5285,7 +5285,7 @@ int drm_dp_mst_add_affected_dsc_crtcs(struct drm_atomic_state *state, struct drm
     mst_state = drm_atomic_get_mst_topology_state(state, mgr);
 
     if (IS_ERR(mst_state))
-        return -EINVAL;
+        return PTR_ERR(mst_state);
 
     list_for_each_entry(pos, &mst_state->vcpis, next) {
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -143,23 +143,9 @@ static const struct drm_i915_gem_object_ops i915_gem_object_internal_ops = {
     .put_pages = i915_gem_object_put_pages_internal,
 };
 
-/**
- * i915_gem_object_create_internal: create an object with volatile pages
- * @i915: the i915 device
- * @size: the size in bytes of backing storage to allocate for the object
- *
- * Creates a new object that wraps some internal memory for private use.
- * This object is not backed by swappable storage, and as such its contents
- * are volatile and only valid whilst pinned. If the object is reaped by the
- * shrinker, its pages and data will be discarded. Equally, it is not a full
- * GEM object and so not valid for access from userspace. This makes it useful
- * for hardware interfaces like ringbuffers (which are pinned from the time
- * the request is written to the time the hardware stops accessing it), but
- * not for contexts (which need to be preserved when not active for later
- * reuse). Note that it is not cleared upon allocation.
- */
 struct drm_i915_gem_object *
-i915_gem_object_create_internal(struct drm_i915_private *i915,
-                phys_addr_t size)
+__i915_gem_object_create_internal(struct drm_i915_private *i915,
+                  const struct drm_i915_gem_object_ops *ops,
+                  phys_addr_t size)
 {
     static struct lock_class_key lock_class;
@@ -177,7 +163,7 @@ __i915_gem_object_create_internal(struct drm_i915_private *i915,
         return ERR_PTR(-ENOMEM);
 
     drm_gem_private_object_init(&i915->drm, &obj->base, size);
-    i915_gem_object_init(obj, &i915_gem_object_internal_ops, &lock_class, 0);
+    i915_gem_object_init(obj, ops, &lock_class, 0);
     obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
 
     /*
@@ -197,3 +183,25 @@ __i915_gem_object_create_internal(struct drm_i915_private *i915,
 
     return obj;
 }
+
+/**
+ * i915_gem_object_create_internal: create an object with volatile pages
+ * @i915: the i915 device
+ * @size: the size in bytes of backing storage to allocate for the object
+ *
+ * Creates a new object that wraps some internal memory for private use.
+ * This object is not backed by swappable storage, and as such its contents
+ * are volatile and only valid whilst pinned. If the object is reaped by the
+ * shrinker, its pages and data will be discarded. Equally, it is not a full
+ * GEM object and so not valid for access from userspace. This makes it useful
+ * for hardware interfaces like ringbuffers (which are pinned from the time
+ * the request is written to the time the hardware stops accessing it), but
+ * not for contexts (which need to be preserved when not active for later
+ * reuse). Note that it is not cleared upon allocation.
+ */
+struct drm_i915_gem_object *
+i915_gem_object_create_internal(struct drm_i915_private *i915,
+                phys_addr_t size)
+{
+    return __i915_gem_object_create_internal(i915, &i915_gem_object_internal_ops, size);
+}
--- a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
+++ b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
@@ -244,6 +244,7 @@ err_scratch1:
     i915_gem_object_put(vm->scratch[1]);
 err_scratch0:
     i915_gem_object_put(vm->scratch[0]);
+    vm->scratch[0] = NULL;
     return ret;
 }
 
@@ -262,15 +263,13 @@ static void gen6_ppgtt_cleanup(struct i915_address_space *vm)
 {
     struct gen6_ppgtt *ppgtt = to_gen6_ppgtt(i915_vm_to_ppgtt(vm));
 
-    __i915_vma_put(ppgtt->vma);
-
     gen6_ppgtt_free_pd(ppgtt);
     free_scratch(vm);
 
-    mutex_destroy(&ppgtt->flush);
-    mutex_destroy(&ppgtt->pin_mutex);
+    if (ppgtt->base.pd)
+        free_pd(&ppgtt->base.vm, ppgtt->base.pd);
 
-    free_pd(&ppgtt->base.vm, ppgtt->base.pd);
+    mutex_destroy(&ppgtt->flush);
 }
 
 static int pd_vma_set_pages(struct i915_vma *vma)
@@ -331,37 +330,6 @@ static const struct i915_vma_ops pd_vma_ops = {
     .unbind_vma = pd_vma_unbind,
 };
 
-static struct i915_vma *pd_vma_create(struct gen6_ppgtt *ppgtt, int size)
-{
-    struct i915_ggtt *ggtt = ppgtt->base.vm.gt->ggtt;
-    struct i915_vma *vma;
-
-    GEM_BUG_ON(!IS_ALIGNED(size, I915_GTT_PAGE_SIZE));
-    GEM_BUG_ON(size > ggtt->vm.total);
-
-    vma = i915_vma_alloc();
-    if (!vma)
-        return ERR_PTR(-ENOMEM);
-
-    i915_active_init(&vma->active, NULL, NULL, 0);
-
-    kref_init(&vma->ref);
-    mutex_init(&vma->pages_mutex);
-    vma->vm = i915_vm_get(&ggtt->vm);
-    vma->ops = &pd_vma_ops;
-    vma->private = ppgtt;
-
-    vma->size = size;
-    vma->fence_size = size;
-    atomic_set(&vma->flags, I915_VMA_GGTT);
-    vma->ggtt_view.type = I915_GGTT_VIEW_ROTATED; /* prevent fencing */
-
-    INIT_LIST_HEAD(&vma->obj_link);
-    INIT_LIST_HEAD(&vma->closed_link);
-
-    return vma;
-}
-
 int gen6_ppgtt_pin(struct i915_ppgtt *base, struct i915_gem_ww_ctx *ww)
 {
     struct gen6_ppgtt *ppgtt = to_gen6_ppgtt(base);
@@ -378,24 +346,85 @@ int gen6_ppgtt_pin(struct i915_ppgtt *base, struct i915_gem_ww_ctx *ww)
     if (atomic_add_unless(&ppgtt->pin_count, 1, 0))
         return 0;
 
-    if (mutex_lock_interruptible(&ppgtt->pin_mutex))
-        return -EINTR;
+    /* grab the ppgtt resv to pin the object */
+    err = i915_vm_lock_objects(&ppgtt->base.vm, ww);
+    if (err)
+        return err;
 
     /*
      * PPGTT PDEs reside in the GGTT and consists of 512 entries. The
      * allocator works in address space sizes, so it's multiplied by page
      * size. We allocate at the top of the GTT to avoid fragmentation.
      */
-    err = 0;
-    if (!atomic_read(&ppgtt->pin_count))
+    if (!atomic_read(&ppgtt->pin_count)) {
         err = i915_ggtt_pin(ppgtt->vma, ww, GEN6_PD_ALIGN, PIN_HIGH);
+
+        GEM_BUG_ON(ppgtt->vma->fence);
+        clear_bit(I915_VMA_CAN_FENCE_BIT, __i915_vma_flags(ppgtt->vma));
+    }
     if (!err)
         atomic_inc(&ppgtt->pin_count);
-    mutex_unlock(&ppgtt->pin_mutex);
 
     return err;
 }
+
+static int pd_dummy_obj_get_pages(struct drm_i915_gem_object *obj)
+{
+    obj->mm.pages = ZERO_SIZE_PTR;
+    return 0;
+}
+
+static void pd_dummy_obj_put_pages(struct drm_i915_gem_object *obj,
+                   struct sg_table *pages)
+{
+}
+
+static const struct drm_i915_gem_object_ops pd_dummy_obj_ops = {
+    .name = "pd_dummy_obj",
+    .get_pages = pd_dummy_obj_get_pages,
+    .put_pages = pd_dummy_obj_put_pages,
+};
+
+static struct i915_page_directory *
+gen6_alloc_top_pd(struct gen6_ppgtt *ppgtt)
+{
+    struct i915_ggtt * const ggtt = ppgtt->base.vm.gt->ggtt;
+    struct i915_page_directory *pd;
+    int err;
+
+    pd = __alloc_pd(I915_PDES);
+    if (unlikely(!pd))
+        return ERR_PTR(-ENOMEM);
+
+    pd->pt.base = __i915_gem_object_create_internal(ppgtt->base.vm.gt->i915,
+                            &pd_dummy_obj_ops,
+                            I915_PDES * SZ_4K);
+    if (IS_ERR(pd->pt.base)) {
+        err = PTR_ERR(pd->pt.base);
+        pd->pt.base = NULL;
+        goto err_pd;
+    }
+
+    pd->pt.base->base.resv = i915_vm_resv_get(&ppgtt->base.vm);
+    pd->pt.base->shares_resv_from = &ppgtt->base.vm;
+
+    ppgtt->vma = i915_vma_instance(pd->pt.base, &ggtt->vm, NULL);
+    if (IS_ERR(ppgtt->vma)) {
+        err = PTR_ERR(ppgtt->vma);
+        ppgtt->vma = NULL;
+        goto err_pd;
+    }
+
+    /* The dummy object we create is special, override ops.. */
+    ppgtt->vma->ops = &pd_vma_ops;
+    ppgtt->vma->private = ppgtt;
+    return pd;
+
+err_pd:
+    free_pd(&ppgtt->base.vm, pd);
+    return ERR_PTR(err);
+}
 
 void gen6_ppgtt_unpin(struct i915_ppgtt *base)
 {
     struct gen6_ppgtt *ppgtt = to_gen6_ppgtt(base);
@@ -427,7 +456,6 @@ struct i915_ppgtt *gen6_ppgtt_create(struct intel_gt *gt)
         return ERR_PTR(-ENOMEM);
 
     mutex_init(&ppgtt->flush);
-    mutex_init(&ppgtt->pin_mutex);
 
     ppgtt_init(&ppgtt->base, gt);
     ppgtt->base.vm.pd_shift = ilog2(SZ_4K * SZ_4K / sizeof(gen6_pte_t));
@@ -442,30 +470,19 @@ struct i915_ppgtt *gen6_ppgtt_create(struct intel_gt *gt)
     ppgtt->base.vm.alloc_pt_dma = alloc_pt_dma;
     ppgtt->base.vm.pte_encode = ggtt->vm.pte_encode;
 
-    ppgtt->base.pd = __alloc_pd(I915_PDES);
-    if (!ppgtt->base.pd) {
-        err = -ENOMEM;
-        goto err_free;
-    }
-
     err = gen6_ppgtt_init_scratch(ppgtt);
     if (err)
-        goto err_pd;
+        goto err_put;
 
-    ppgtt->vma = pd_vma_create(ppgtt, GEN6_PD_SIZE);
-    if (IS_ERR(ppgtt->vma)) {
-        err = PTR_ERR(ppgtt->vma);
-        goto err_scratch;
+    ppgtt->base.pd = gen6_alloc_top_pd(ppgtt);
+    if (IS_ERR(ppgtt->base.pd)) {
+        err = PTR_ERR(ppgtt->base.pd);
+        goto err_put;
     }
 
     return &ppgtt->base;
 
-err_scratch:
-    free_scratch(&ppgtt->base.vm);
-err_pd:
-    free_pd(&ppgtt->base.vm, ppgtt->base.pd);
-err_free:
-    mutex_destroy(&ppgtt->pin_mutex);
-    kfree(ppgtt);
+err_put:
+    i915_vm_put(&ppgtt->base.vm);
     return ERR_PTR(err);
 }
--- a/drivers/gpu/drm/i915/gt/gen6_ppgtt.h
+++ b/drivers/gpu/drm/i915/gt/gen6_ppgtt.h
@@ -19,7 +19,6 @@ struct gen6_ppgtt {
     u32 pp_dir;
 
     atomic_t pin_count;
-    struct mutex pin_mutex;
 
     bool scan_for_unused_pt;
 };
--- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
@@ -196,7 +196,10 @@ static void gen8_ppgtt_cleanup(struct i915_address_space *vm)
     if (intel_vgpu_active(vm->i915))
         gen8_ppgtt_notify_vgt(ppgtt, false);
 
-    __gen8_ppgtt_cleanup(vm, ppgtt->pd, gen8_pd_top_count(vm), vm->top);
+    if (ppgtt->pd)
+        __gen8_ppgtt_cleanup(vm, ppgtt->pd,
+                     gen8_pd_top_count(vm), vm->top);
+
     free_scratch(vm);
 }
 
@@ -656,8 +659,10 @@ static int gen8_init_scratch(struct i915_address_space *vm)
         struct drm_i915_gem_object *obj;
 
         obj = vm->alloc_pt_dma(vm, I915_GTT_PAGE_SIZE_4K);
-        if (IS_ERR(obj))
+        if (IS_ERR(obj)) {
+            ret = PTR_ERR(obj);
             goto free_scratch;
+        }
 
         ret = map_pt_dma(vm, obj);
         if (ret) {
@@ -676,7 +681,8 @@ static int gen8_init_scratch(struct i915_address_space *vm)
 free_scratch:
     while (i--)
         i915_gem_object_put(vm->scratch[i]);
-    return -ENOMEM;
+    vm->scratch[0] = NULL;
+    return ret;
 }
 
 static int gen8_preallocate_top_level_pdp(struct i915_ppgtt *ppgtt)
@@ -753,6 +759,7 @@ err_pd:
  */
 struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt)
 {
+    struct i915_page_directory *pd;
     struct i915_ppgtt *ppgtt;
     int err;
 
@@ -779,21 +786,7 @@ struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt)
     else
         ppgtt->vm.alloc_pt_dma = alloc_pt_dma;
 
-    err = gen8_init_scratch(&ppgtt->vm);
-    if (err)
-        goto err_free;
-
-    ppgtt->pd = gen8_alloc_top_pd(&ppgtt->vm);
-    if (IS_ERR(ppgtt->pd)) {
-        err = PTR_ERR(ppgtt->pd);
-        goto err_free_scratch;
-    }
-
-    if (!i915_vm_is_4lvl(&ppgtt->vm)) {
-        err = gen8_preallocate_top_level_pdp(ppgtt);
-        if (err)
-            goto err_free_pd;
-    }
+    ppgtt->vm.pte_encode = gen8_pte_encode;
 
     ppgtt->vm.bind_async_flags = I915_VMA_LOCAL_BIND;
     ppgtt->vm.insert_entries = gen8_ppgtt_insert;
@@ -801,22 +794,31 @@ struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt)
     ppgtt->vm.allocate_va_range = gen8_ppgtt_alloc;
     ppgtt->vm.clear_range = gen8_ppgtt_clear;
     ppgtt->vm.foreach = gen8_ppgtt_foreach;
+    ppgtt->vm.cleanup = gen8_ppgtt_cleanup;
 
-    ppgtt->vm.pte_encode = gen8_pte_encode;
+    err = gen8_init_scratch(&ppgtt->vm);
+    if (err)
+        goto err_put;
+
+    pd = gen8_alloc_top_pd(&ppgtt->vm);
+    if (IS_ERR(pd)) {
+        err = PTR_ERR(pd);
+        goto err_put;
+    }
+    ppgtt->pd = pd;
+
+    if (!i915_vm_is_4lvl(&ppgtt->vm)) {
+        err = gen8_preallocate_top_level_pdp(ppgtt);
+        if (err)
+            goto err_put;
+    }
 
     if (intel_vgpu_active(gt->i915))
         gen8_ppgtt_notify_vgt(ppgtt, true);
 
-    ppgtt->vm.cleanup = gen8_ppgtt_cleanup;
-
     return ppgtt;
 
-err_free_pd:
-    __gen8_ppgtt_cleanup(&ppgtt->vm, ppgtt->pd,
-                 gen8_pd_top_count(&ppgtt->vm), ppgtt->vm.top);
-err_free_scratch:
-    free_scratch(&ppgtt->vm);
-err_free:
-    kfree(ppgtt);
+err_put:
+    i915_vm_put(&ppgtt->vm);
     return ERR_PTR(err);
 }
--- a/drivers/gpu/drm/i915/gt/intel_gt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt.c
@@ -650,8 +650,13 @@ int intel_gt_wait_for_idle(struct intel_gt *gt, long timeout)
             return -EINTR;
     }
 
-    return timeout ? timeout : intel_uc_wait_for_idle(&gt->uc,
-                              remaining_timeout);
+    if (timeout)
+        return timeout;
+
+    if (remaining_timeout < 0)
+        remaining_timeout = 0;
+
+    return intel_uc_wait_for_idle(&gt->uc, remaining_timeout);
 }
 
 int intel_gt_init(struct intel_gt *gt)
--- a/drivers/gpu/drm/i915/gt/intel_gt_requests.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
@@ -199,7 +199,7 @@ out_active: spin_lock(&timelines->lock);
     if (remaining_timeout)
         *remaining_timeout = timeout;
 
-    return active_count ? timeout : 0;
+    return active_count ? timeout ?: -ETIME : 0;
 }
 
 static void retire_work_handler(struct work_struct *work)
--- a/drivers/gpu/drm/i915/gt/intel_gtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.c
@@ -341,6 +341,9 @@ void free_scratch(struct i915_address_space *vm)
 {
     int i;
 
+    if (!vm->scratch[0])
+        return;
+
     for (i = 0; i <= vm->top; i++)
         i915_gem_object_put(vm->scratch[i]);
 }
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1905,6 +1905,10 @@ int i915_gem_evict_vm(struct i915_address_space *vm);
 struct drm_i915_gem_object *
 i915_gem_object_create_internal(struct drm_i915_private *dev_priv,
                 phys_addr_t size);
+struct drm_i915_gem_object *
+__i915_gem_object_create_internal(struct drm_i915_private *dev_priv,
+                  const struct drm_i915_gem_object_ops *ops,
+                  phys_addr_t size);
 
 /* i915_gem_tiling.c */
 static inline bool i915_gem_object_needs_bit17_swizzle(struct drm_i915_gem_object *obj)
--- a/drivers/hwmon/coretemp.c
+++ b/drivers/hwmon/coretemp.c
@@ -242,10 +242,13 @@ static int adjust_tjmax(struct cpuinfo_x86 *c, u32 id, struct device *dev)
      */
     if (host_bridge && host_bridge->vendor == PCI_VENDOR_ID_INTEL) {
         for (i = 0; i < ARRAY_SIZE(tjmax_pci_table); i++) {
-            if (host_bridge->device == tjmax_pci_table[i].device)
+            if (host_bridge->device == tjmax_pci_table[i].device) {
+                pci_dev_put(host_bridge);
                 return tjmax_pci_table[i].tjmax;
+            }
         }
     }
+    pci_dev_put(host_bridge);
 
     for (i = 0; i < ARRAY_SIZE(tjmax_table); i++) {
         if (strstr(c->x86_model_id, tjmax_table[i].id))
@@ -533,6 +536,10 @@ static void coretemp_remove_core(struct platform_data *pdata, int indx)
 {
     struct temp_data *tdata = pdata->core_data[indx];
 
+    /* if we errored on add then this is already gone */
+    if (!tdata)
+        return;
+
     /* Remove the sysfs attributes */
     sysfs_remove_group(&pdata->hwmon_dev->kobj, &tdata->attr_group);
--- a/drivers/hwmon/i5500_temp.c
+++ b/drivers/hwmon/i5500_temp.c
@@ -108,7 +108,7 @@ static int i5500_temp_probe(struct pci_dev *pdev,
     u32 tstimer;
     s8 tsfsc;
 
-    err = pci_enable_device(pdev);
+    err = pcim_enable_device(pdev);
     if (err) {
         dev_err(&pdev->dev, "Failed to enable device\n");
         return err;
--- a/drivers/hwmon/ibmpex.c
+++ b/drivers/hwmon/ibmpex.c
@@ -502,6 +502,7 @@ static void ibmpex_register_bmc(int iface, struct device *dev)
     return;
 
 out_register:
+    list_del(&data->list);
     hwmon_device_unregister(data->hwmon_dev);
 out_user:
     ipmi_destroy_user(data->user);
--- a/drivers/hwmon/ina3221.c
+++ b/drivers/hwmon/ina3221.c
@@ -228,7 +228,7 @@ static int ina3221_read_value(struct ina3221_data *ina, unsigned int reg,
      * Shunt Voltage Sum register has 14-bit value with 1-bit shift
      * Other Shunt Voltage registers have 12 bits with 3-bit shift
      */
-    if (reg == INA3221_SHUNT_SUM)
+    if (reg == INA3221_SHUNT_SUM || reg == INA3221_CRIT_SUM)
         *val = sign_extend32(regval >> 1, 14);
     else
         *val = sign_extend32(regval >> 3, 12);
@@ -465,7 +465,7 @@ static int ina3221_write_curr(struct device *dev, u32 attr,
      *     SHUNT_SUM: (1 / 40uV) << 1 = 1 / 20uV
      *     SHUNT[1-3]: (1 / 40uV) << 3 = 1 / 5uV
      */
-    if (reg == INA3221_SHUNT_SUM)
+    if (reg == INA3221_SHUNT_SUM || reg == INA3221_CRIT_SUM)
         regval = DIV_ROUND_CLOSEST(voltage_uv, 20) & 0xfffe;
     else
         regval = DIV_ROUND_CLOSEST(voltage_uv, 5) & 0xfff8;
--- a/drivers/hwmon/ltc2947-core.c
+++ b/drivers/hwmon/ltc2947-core.c
@@ -396,7 +396,7 @@ static int ltc2947_read_temp(struct device *dev, const u32 attr, long *val,
         return ret;
 
     /* in milidegrees celcius, temp is given by: */
-    *val = (__val * 204) + 550;
+    *val = (__val * 204) + 5500;
 
     return 0;
 }
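On the ltc2947 change above: the driver reports millidegrees, and the device scales at 0.204 °C per LSB with a 5.5 °C offset (per the fix), so the constant term must be 5500, not 550. A trivial stand-alone check of the corrected arithmetic:

    #include <stdio.h>

    /* temp[m°C] = raw * 204 + 5500  (0.204 °C/LSB plus a 5.5 °C offset) */
    static long ltc2947_temp_mdegc(long raw)
    {
        return raw * 204 + 5500;
    }

    int main(void)
    {
        printf("%ld\n", ltc2947_temp_mdegc(0));    /* 5500 m°C = 5.5 °C   */
        printf("%ld\n", ltc2947_temp_mdegc(100));  /* 25900 m°C = 25.9 °C */
        return 0;
    }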
--- a/drivers/i2c/busses/i2c-imx.c
+++ b/drivers/i2c/busses/i2c-imx.c
@@ -1051,7 +1051,8 @@ static int i2c_imx_read(struct imx_i2c_struct *i2c_imx, struct i2c_msg *msgs,
     int i, result;
     unsigned int temp;
     int block_data = msgs->flags & I2C_M_RECV_LEN;
-    int use_dma = i2c_imx->dma && msgs->len >= DMA_THRESHOLD && !block_data;
+    int use_dma = i2c_imx->dma && msgs->flags & I2C_M_DMA_SAFE &&
+        msgs->len >= DMA_THRESHOLD && !block_data;
 
     dev_dbg(&i2c_imx->adapter.dev,
         "<%s> write slave address: addr=0x%x\n",
@@ -1217,7 +1218,8 @@ static int i2c_imx_xfer_common(struct i2c_adapter *adapter,
             result = i2c_imx_read(i2c_imx, &msgs[i], is_lastmsg, atomic);
         } else {
             if (!atomic &&
-                i2c_imx->dma && msgs[i].len >= DMA_THRESHOLD)
+                i2c_imx->dma && msgs[i].len >= DMA_THRESHOLD &&
+                msgs[i].flags & I2C_M_DMA_SAFE)
                 result = i2c_imx_dma_write(i2c_imx, &msgs[i]);
             else
                 result = i2c_imx_write(i2c_imx, &msgs[i], atomic);
--- a/drivers/i2c/busses/i2c-npcm7xx.c
+++ b/drivers/i2c/busses/i2c-npcm7xx.c
@@ -2362,8 +2362,17 @@ static struct platform_driver npcm_i2c_bus_driver = {
 
 static int __init npcm_i2c_init(void)
 {
+    int ret;
+
     npcm_i2c_debugfs_dir = debugfs_create_dir("npcm_i2c", NULL);
-    return platform_driver_register(&npcm_i2c_bus_driver);
+
+    ret = platform_driver_register(&npcm_i2c_bus_driver);
+    if (ret) {
+        debugfs_remove_recursive(npcm_i2c_debugfs_dir);
+        return ret;
+    }
+
+    return 0;
 }
 module_init(npcm_i2c_init);
--- a/drivers/iio/health/afe4403.c
+++ b/drivers/iio/health/afe4403.c
@@ -245,14 +245,14 @@ static int afe4403_read_raw(struct iio_dev *indio_dev,
                 int *val, int *val2, long mask)
 {
     struct afe4403_data *afe = iio_priv(indio_dev);
-    unsigned int reg = afe4403_channel_values[chan->address];
-    unsigned int field = afe4403_channel_leds[chan->address];
+    unsigned int reg, field;
     int ret;
 
     switch (chan->type) {
     case IIO_INTENSITY:
         switch (mask) {
         case IIO_CHAN_INFO_RAW:
+            reg = afe4403_channel_values[chan->address];
             ret = afe4403_read(afe, reg, val);
             if (ret)
                 return ret;
@@ -262,6 +262,7 @@ static int afe4403_read_raw(struct iio_dev *indio_dev,
     case IIO_CURRENT:
         switch (mask) {
         case IIO_CHAN_INFO_RAW:
+            field = afe4403_channel_leds[chan->address];
             ret = regmap_field_read(afe->fields[field], val);
             if (ret)
                 return ret;
--- a/drivers/iio/health/afe4404.c
+++ b/drivers/iio/health/afe4404.c
@@ -250,20 +250,20 @@ static int afe4404_read_raw(struct iio_dev *indio_dev,
                 int *val, int *val2, long mask)
 {
     struct afe4404_data *afe = iio_priv(indio_dev);
-    unsigned int value_reg = afe4404_channel_values[chan->address];
-    unsigned int led_field = afe4404_channel_leds[chan->address];
-    unsigned int offdac_field = afe4404_channel_offdacs[chan->address];
+    unsigned int value_reg, led_field, offdac_field;
     int ret;
 
     switch (chan->type) {
     case IIO_INTENSITY:
         switch (mask) {
         case IIO_CHAN_INFO_RAW:
+            value_reg = afe4404_channel_values[chan->address];
             ret = regmap_read(afe->regmap, value_reg, val);
             if (ret)
                 return ret;
             return IIO_VAL_INT;
         case IIO_CHAN_INFO_OFFSET:
+            offdac_field = afe4404_channel_offdacs[chan->address];
             ret = regmap_field_read(afe->fields[offdac_field], val);
             if (ret)
                 return ret;
@@ -273,6 +273,7 @@ static int afe4404_read_raw(struct iio_dev *indio_dev,
     case IIO_CURRENT:
         switch (mask) {
         case IIO_CHAN_INFO_RAW:
+            led_field = afe4404_channel_leds[chan->address];
             ret = regmap_field_read(afe->fields[led_field], val);
             if (ret)
                 return ret;
@@ -295,19 +296,20 @@ static int afe4404_write_raw(struct iio_dev *indio_dev,
                  int val, int val2, long mask)
 {
     struct afe4404_data *afe = iio_priv(indio_dev);
-    unsigned int led_field = afe4404_channel_leds[chan->address];
-    unsigned int offdac_field = afe4404_channel_offdacs[chan->address];
+    unsigned int led_field, offdac_field;
 
     switch (chan->type) {
     case IIO_INTENSITY:
         switch (mask) {
         case IIO_CHAN_INFO_OFFSET:
+            offdac_field = afe4404_channel_offdacs[chan->address];
             return regmap_field_write(afe->fields[offdac_field], val);
         }
         break;
     case IIO_CURRENT:
         switch (mask) {
         case IIO_CHAN_INFO_RAW:
+            led_field = afe4404_channel_leds[chan->address];
             return regmap_field_write(afe->fields[led_field], val);
         }
         break;
--- a/drivers/iio/light/Kconfig
+++ b/drivers/iio/light/Kconfig
@@ -294,6 +294,8 @@ config RPR0521
     tristate "ROHM RPR0521 ALS and proximity sensor driver"
     depends on I2C
     select REGMAP_I2C
+    select IIO_BUFFER
+    select IIO_TRIGGERED_BUFFER
     help
       Say Y here if you want to build support for ROHM's RPR0521
       ambient light and proximity sensor device.
--- a/drivers/input/touchscreen/raydium_i2c_ts.c
+++ b/drivers/input/touchscreen/raydium_i2c_ts.c
@@ -210,12 +210,14 @@ static int raydium_i2c_send(struct i2c_client *client,
 
         error = raydium_i2c_xfer(client, addr, xfer, ARRAY_SIZE(xfer));
         if (likely(!error))
-            return 0;
+            goto out;
 
         msleep(RM_RETRY_DELAY_MS);
     } while (++tries < RM_MAX_RETRIES);
 
     dev_err(&client->dev, "%s failed: %d\n", __func__, error);
+out:
     kfree(tx_buf);
     return error;
 }
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -822,6 +822,7 @@ int __init dmar_dev_scope_init(void)
             info = dmar_alloc_pci_notify_info(dev,
                     BUS_NOTIFY_ADD_DEVICE);
             if (!info) {
+                pci_dev_put(dev);
                 return dmar_dev_scope_status;
             } else {
                 dmar_pci_bus_add_dev(info);
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -4241,8 +4241,10 @@ static inline bool has_external_pci(void)
     struct pci_dev *pdev = NULL;
 
     for_each_pci_dev(pdev)
-        if (pdev->external_facing)
+        if (pdev->external_facing) {
+            pci_dev_put(pdev);
             return true;
+        }
 
     return false;
 }
--- a/drivers/media/common/videobuf2/frame_vector.c
+++ b/drivers/media/common/videobuf2/frame_vector.c
@@ -35,10 +35,7 @@
 int get_vaddr_frames(unsigned long start, unsigned int nr_frames,
              struct frame_vector *vec)
 {
-    struct mm_struct *mm = current->mm;
-    struct vm_area_struct *vma;
-    int ret = 0;
-    int err;
+    int ret;
 
     if (nr_frames == 0)
         return 0;
@@ -51,45 +48,17 @@ int get_vaddr_frames(unsigned long start, unsigned int nr_frames,
     ret = pin_user_pages_fast(start, nr_frames,
                   FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM,
                   (struct page **)(vec->ptrs));
-    if (ret > 0) {
-        vec->got_ref = true;
-        vec->is_pfns = false;
-        goto out_unlocked;
-    }
+    vec->got_ref = true;
+    vec->is_pfns = false;
+    vec->nr_frames = ret;
 
-    mmap_read_lock(mm);
-    vec->got_ref = false;
-    vec->is_pfns = true;
-    ret = 0;
-    do {
-        unsigned long *nums = frame_vector_pfns(vec);
-
-        vma = vma_lookup(mm, start);
-        if (!vma)
-            break;
-
-        while (ret < nr_frames && start + PAGE_SIZE <= vma->vm_end) {
-            err = follow_pfn(vma, start, &nums[ret]);
-            if (err) {
-                if (ret == 0)
-                    ret = err;
-                goto out;
-            }
-            start += PAGE_SIZE;
-            ret++;
-        }
-        /* Bail out if VMA doesn't completely cover the tail page. */
-        if (start < vma->vm_end)
-            break;
-    } while (ret < nr_frames);
-out:
-    mmap_read_unlock(mm);
-out_unlocked:
-    if (!ret)
-        ret = -EFAULT;
-    if (ret > 0)
-        vec->nr_frames = ret;
-    return ret;
+    if (likely(ret > 0))
+        return ret;
+
+    /* This used to (racily) return non-refcounted pfns. Let people know */
+    WARN_ONCE(1, "get_vaddr_frames() cannot follow VM_IO mapping");
+    vec->nr_frames = 0;
+    return ret ? ret : -EFAULT;
 }
 EXPORT_SYMBOL(get_vaddr_frames);
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -1490,6 +1490,11 @@ void mmc_init_erase(struct mmc_card *card)
         card->pref_erase = 0;
 }
 
+static bool is_trim_arg(unsigned int arg)
+{
+    return (arg & MMC_TRIM_OR_DISCARD_ARGS) && arg != MMC_DISCARD_ARG;
+}
+
 static unsigned int mmc_mmc_erase_timeout(struct mmc_card *card,
                       unsigned int arg, unsigned int qty)
 {
@@ -1772,7 +1777,7 @@ int mmc_erase(struct mmc_card *card, unsigned int from, unsigned int nr,
         !(card->ext_csd.sec_feature_support & EXT_CSD_SEC_ER_EN))
         return -EOPNOTSUPP;
 
-    if (mmc_card_mmc(card) && (arg & MMC_TRIM_ARGS) &&
+    if (mmc_card_mmc(card) && is_trim_arg(arg) &&
         !(card->ext_csd.sec_feature_support & EXT_CSD_SEC_GB_CL_EN))
         return -EOPNOTSUPP;
 
@@ -1802,7 +1807,7 @@ int mmc_erase(struct mmc_card *card, unsigned int from, unsigned int nr,
      * identified by the card->eg_boundary flag.
      */
     rem = card->erase_size - (from % card->erase_size);
-    if ((arg & MMC_TRIM_ARGS) && (card->eg_boundary) && (nr > rem)) {
+    if ((arg & MMC_TRIM_OR_DISCARD_ARGS) && card->eg_boundary && nr > rem) {
         err = mmc_do_erase(card, from, from + rem - 1, arg);
         from += rem;
         if ((err) || (to <= from))
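Why is_trim_arg() above is needed: MMC_DISCARD_ARG shares its low bit with MMC_TRIM_ARG, so the old test "arg & MMC_TRIM_ARGS" also matched DISCARD requests and wrongly rejected them on cards without the TRIM feature bit. A self-contained sketch of the disambiguation (argument encodings as in include/linux/mmc/mmc.h, quoted from memory — treat them as assumptions):

    #define MMC_TRIM_ARG             0x00000001u
    #define MMC_DISCARD_ARG          0x00000003u
    #define MMC_TRIM_ARGS            0x00008001u   /* TRIM + secure-TRIM bits */
    #define MMC_TRIM_OR_DISCARD_ARGS (MMC_TRIM_ARGS | MMC_DISCARD_ARG)

    /* Old, ambiguous test: DISCARD (0x3) also matches, since 0x3 & 0x8001 != 0. */
    static int old_is_trim(unsigned int arg)
    {
        return (arg & MMC_TRIM_ARGS) != 0;
    }

    /* New test from the patch: DISCARD is explicitly carved out. */
    static int is_trim_arg(unsigned int arg)
    {
        return (arg & MMC_TRIM_OR_DISCARD_ARGS) && arg != MMC_DISCARD_ARG;
    }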
--- a/drivers/mmc/core/mmc_test.c
+++ b/drivers/mmc/core/mmc_test.c
@@ -3181,7 +3181,8 @@ static int __mmc_test_register_dbgfs_file(struct mmc_card *card,
     struct mmc_test_dbgfs_file *df;
 
     if (card->debugfs_root)
-        debugfs_create_file(name, mode, card->debugfs_root, card, fops);
+        file = debugfs_create_file(name, mode, card->debugfs_root,
+                       card, fops);
 
     df = kmalloc(sizeof(*df), GFP_KERNEL);
     if (!df) {
--- a/drivers/mmc/host/sdhci-esdhc-imx.c
+++ b/drivers/mmc/host/sdhci-esdhc-imx.c
@@ -1495,7 +1495,7 @@ static void esdhc_cqe_enable(struct mmc_host *mmc)
      * system resume back.
      */
     cqhci_writel(cq_host, 0, CQHCI_CTL);
-    if (cqhci_readl(cq_host, CQHCI_CTL) && CQHCI_HALT)
+    if (cqhci_readl(cq_host, CQHCI_CTL) & CQHCI_HALT)
         dev_err(mmc_dev(host->mmc),
             "failed to exit halt state when enable CQE\n");
--- a/drivers/mmc/host/sdhci-sprd.c
+++ b/drivers/mmc/host/sdhci-sprd.c
@@ -457,7 +457,7 @@ static int sdhci_sprd_voltage_switch(struct mmc_host *mmc, struct mmc_ios *ios)
     }
 
     if (IS_ERR(sprd_host->pinctrl))
-        return 0;
+        goto reset;
 
     switch (ios->signal_voltage) {
     case MMC_SIGNAL_VOLTAGE_180:
@@ -485,6 +485,8 @@ static int sdhci_sprd_voltage_switch(struct mmc_host *mmc, struct mmc_ios *ios)
 
     /* Wait for 300 ~ 500 us for pin state stable */
     usleep_range(300, 500);
+
+reset:
     sdhci_reset(host, SDHCI_RESET_CMD | SDHCI_RESET_DATA);
 
     return 0;
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -341,6 +341,7 @@ static void sdhci_init(struct sdhci_host *host, int soft)
     if (soft) {
         /* force clock reconfiguration */
         host->clock = 0;
+        host->reinit_uhs = true;
         mmc->ops->set_ios(mmc, &mmc->ios);
     }
 }
@@ -2263,11 +2264,46 @@ void sdhci_set_uhs_signaling(struct sdhci_host *host, unsigned timing)
 }
 EXPORT_SYMBOL_GPL(sdhci_set_uhs_signaling);
 
+static bool sdhci_timing_has_preset(unsigned char timing)
+{
+    switch (timing) {
+    case MMC_TIMING_UHS_SDR12:
+    case MMC_TIMING_UHS_SDR25:
+    case MMC_TIMING_UHS_SDR50:
+    case MMC_TIMING_UHS_SDR104:
+    case MMC_TIMING_UHS_DDR50:
+    case MMC_TIMING_MMC_DDR52:
+        return true;
+    };
+    return false;
+}
+
+static bool sdhci_preset_needed(struct sdhci_host *host, unsigned char timing)
+{
+    return !(host->quirks2 & SDHCI_QUIRK2_PRESET_VALUE_BROKEN) &&
+           sdhci_timing_has_preset(timing);
+}
+
+static bool sdhci_presetable_values_change(struct sdhci_host *host, struct mmc_ios *ios)
+{
+    /*
+     * Preset Values are: Driver Strength, Clock Generator and SDCLK/RCLK
+     * Frequency. Check if preset values need to be enabled, or the Driver
+     * Strength needs updating. Note, clock changes are handled separately.
+     */
+    return !host->preset_enabled &&
+           (sdhci_preset_needed(host, ios->timing) || host->drv_type != ios->drv_type);
+}
+
 void sdhci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
 {
     struct sdhci_host *host = mmc_priv(mmc);
+    bool reinit_uhs = host->reinit_uhs;
+    bool turning_on_clk = false;
     u8 ctrl;
 
+    host->reinit_uhs = false;
+
     if (ios->power_mode == MMC_POWER_UNDEFINED)
         return;
@@ -2293,6 +2329,8 @@ void sdhci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
         sdhci_enable_preset_value(host, false);
 
     if (!ios->clock || ios->clock != host->clock) {
+        turning_on_clk = ios->clock && !host->clock;
+
         host->ops->set_clock(host, ios->clock);
         host->clock = ios->clock;
 
@@ -2319,6 +2357,17 @@ void sdhci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
 
     host->ops->set_bus_width(host, ios->bus_width);
 
+    /*
+     * Special case to avoid multiple clock changes during voltage
+     * switching.
+     */
+    if (!reinit_uhs &&
+        turning_on_clk &&
+        host->timing == ios->timing &&
+        host->version >= SDHCI_SPEC_300 &&
+        !sdhci_presetable_values_change(host, ios))
+        return;
+
     ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);
 
     if (!(host->quirks & SDHCI_QUIRK_NO_HISPD_BIT)) {
@@ -2362,6 +2411,7 @@ void sdhci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
         }
 
         sdhci_writew(host, ctrl_2, SDHCI_HOST_CONTROL2);
+        host->drv_type = ios->drv_type;
     } else {
         /*
          * According to SDHC Spec v3.00, if the Preset Value
@@ -2389,19 +2439,14 @@ void sdhci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
         host->ops->set_uhs_signaling(host, ios->timing);
         host->timing = ios->timing;
 
-        if (!(host->quirks2 & SDHCI_QUIRK2_PRESET_VALUE_BROKEN) &&
-                ((ios->timing == MMC_TIMING_UHS_SDR12) ||
-                 (ios->timing == MMC_TIMING_UHS_SDR25) ||
-                 (ios->timing == MMC_TIMING_UHS_SDR50) ||
-                 (ios->timing == MMC_TIMING_UHS_SDR104) ||
-                 (ios->timing == MMC_TIMING_UHS_DDR50) ||
-                 (ios->timing == MMC_TIMING_MMC_DDR52))) {
+        if (sdhci_preset_needed(host, ios->timing)) {
             u16 preset;
 
             sdhci_enable_preset_value(host, true);
             preset = sdhci_get_preset_value(host);
             ios->drv_type = FIELD_GET(SDHCI_PRESET_DRV_MASK,
                           preset);
+            host->drv_type = ios->drv_type;
         }
 
         /* Re-enable SD Clock */
@@ -3739,6 +3784,7 @@ int sdhci_resume_host(struct sdhci_host *host)
         sdhci_init(host, 0);
         host->pwr = 0;
         host->clock = 0;
+        host->reinit_uhs = true;
         mmc->ops->set_ios(mmc, &mmc->ios);
     } else {
         sdhci_init(host, (mmc->pm_flags & MMC_PM_KEEP_POWER));
@@ -3801,6 +3847,7 @@ int sdhci_runtime_resume_host(struct sdhci_host *host, int soft_reset)
     /* Force clock and power re-program */
     host->pwr = 0;
     host->clock = 0;
+    host->reinit_uhs = true;
     mmc->ops->start_signal_voltage_switch(mmc, &mmc->ios);
     mmc->ops->set_ios(mmc, &mmc->ios);
 
@@ -527,6 +527,8 @@ struct sdhci_host {
|
||||
|
||||
unsigned int clock; /* Current clock (MHz) */
|
||||
u8 pwr; /* Current voltage */
|
||||
u8 drv_type; /* Current UHS-I driver type */
|
||||
bool reinit_uhs; /* Force UHS-related re-initialization */
|
||||
|
||||
bool runtime_suspended; /* Host is runtime suspended */
|
||||
bool bus_on; /* Bus power prevents runtime suspend */
|
||||
|
||||
@@ -264,13 +264,15 @@ static int cc770_isa_probe(struct platform_device *pdev)
 	if (err) {
		dev_err(&pdev->dev,
			"couldn't register device (err=%d)\n", err);
-		goto exit_unmap;
+		goto exit_free;
 	}
 
 	dev_info(&pdev->dev, "device registered (reg_base=0x%p, irq=%d)\n",
		 priv->reg_base, dev->irq);
 	return 0;
 
+ exit_free:
+	free_cc770dev(dev);
  exit_unmap:
 	if (mem[idx])
		iounmap(base);
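This fix and the sja1000_isa fix further down follow the same unwind rule: release resources in reverse order of acquisition, so the new exit_free: label frees the device object first and then falls through to the older exit_unmap: label. A hedged userspace sketch of the pattern (the allocation calls are stand-ins, not the driver's API):

    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch: unwind in reverse order of acquisition via fall-through labels. */
    static int probe(void)
    {
        int err;
        void *iomem = malloc(16);   /* stands in for ioremap() */
        if (!iomem)
            return -1;

        void *dev = malloc(32);     /* stands in for alloc_cc770dev() */
        if (!dev) {
            err = -1;
            goto exit_unmap;
        }

        err = -1;                   /* pretend register_cc770dev() failed */
        if (err)
            goto exit_free;         /* free the device first... */

        return 0;

    exit_free:
        free(dev);                  /* ...then fall through to the unmap */
    exit_unmap:
        free(iomem);
        return err;
    }

    int main(void)
    {
        printf("probe() = %d\n", probe());
        return 0;
    }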
@@ -1931,7 +1931,7 @@ int m_can_class_get_clocks(struct m_can_classdev *cdev)
 	cdev->hclk = devm_clk_get(cdev->dev, "hclk");
 	cdev->cclk = devm_clk_get(cdev->dev, "cclk");
 
-	if (IS_ERR(cdev->cclk)) {
+	if (IS_ERR(cdev->hclk) || IS_ERR(cdev->cclk)) {
		dev_err(cdev->dev, "no clock found\n");
		ret = -ENODEV;
 	}
@@ -120,7 +120,7 @@ static int m_can_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
 
 	ret = pci_alloc_irq_vectors(pci, 1, 1, PCI_IRQ_ALL_TYPES);
 	if (ret < 0)
-		return ret;
+		goto err_free_dev;
 
 	mcan_class->dev = &pci->dev;
 	mcan_class->net->irq = pci_irq_vector(pci, 0);

@@ -132,7 +132,7 @@ static int m_can_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
 
 	ret = m_can_class_register(mcan_class);
 	if (ret)
-		goto err;
+		goto err_free_irq;
 
 	/* Enable interrupt control at CAN wrapper IP */
 	writel(0x1, base + CTL_CSR_INT_CTL_OFFSET);

@@ -144,8 +144,10 @@ static int m_can_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
 
 	return 0;
 
-err:
+err_free_irq:
 	pci_free_irq_vectors(pci);
+err_free_dev:
+	m_can_class_free_dev(mcan_class->net);
 	return ret;
 }

@@ -161,6 +163,7 @@ static void m_can_pci_remove(struct pci_dev *pci)
 	writel(0x0, priv->base + CTL_CSR_INT_CTL_OFFSET);
 
 	m_can_class_unregister(mcan_class);
+	m_can_class_free_dev(mcan_class->net);
 	pci_free_irq_vectors(pci);
 }
@@ -202,13 +202,15 @@ static int sja1000_isa_probe(struct platform_device *pdev)
 	if (err) {
		dev_err(&pdev->dev, "registering %s failed (err=%d)\n",
			DRV_NAME, err);
-		goto exit_unmap;
+		goto exit_free;
 	}
 
 	dev_info(&pdev->dev, "%s device registered (reg_base=0x%p, irq=%d)\n",
		 DRV_NAME, priv->reg_base, dev->irq);
 	return 0;
 
+ exit_free:
+	free_sja1000dev(dev);
  exit_unmap:
 	if (mem[idx])
		iounmap(base);
@@ -2098,8 +2098,11 @@ static int es58x_init_netdev(struct es58x_device *es58x_dev, int channel_idx)
 	netdev->flags |= IFF_ECHO; /* We support local echo */
 
 	ret = register_candev(netdev);
-	if (ret)
+	if (ret) {
+		es58x_dev->netdev[channel_idx] = NULL;
+		free_candev(netdev);
		return ret;
+	}
 
 	netdev_queue_set_dql_min_limit(netdev_get_tx_queue(netdev, 0),
				       es58x_dev->param->dql_min_limit);
@@ -959,7 +959,7 @@ static const struct lan9303_mib_desc lan9303_mib[] = {
 	{ .offset = LAN9303_MAC_TX_BRDCST_CNT_0, .name = "TxBroad", },
 	{ .offset = LAN9303_MAC_TX_PAUSE_CNT_0, .name = "TxPause", },
 	{ .offset = LAN9303_MAC_TX_MULCST_CNT_0, .name = "TxMulti", },
-	{ .offset = LAN9303_MAC_RX_UNDSZE_CNT_0, .name = "TxUnderRun", },
+	{ .offset = LAN9303_MAC_RX_UNDSZE_CNT_0, .name = "RxShort", },
 	{ .offset = LAN9303_MAC_TX_64_CNT_0, .name = "Tx64Byte", },
 	{ .offset = LAN9303_MAC_TX_127_CNT_0, .name = "Tx128Byte", },
 	{ .offset = LAN9303_MAC_TX_255_CNT_0, .name = "Tx256Byte", },
@@ -13,6 +13,7 @@
 #include "aq_ptp.h"
 #include "aq_filters.h"
 #include "aq_macsec.h"
+#include "aq_main.h"
 
 #include <linux/ptp_clock_kernel.h>

@@ -845,7 +846,7 @@ static int aq_set_ringparam(struct net_device *ndev,
 
 	if (netif_running(ndev)) {
		ndev_running = true;
-		dev_close(ndev);
+		aq_ndev_close(ndev);
 	}
 
 	cfg->rxds = max(ring->rx_pending, hw_caps->rxds_min);

@@ -861,7 +862,7 @@ static int aq_set_ringparam(struct net_device *ndev,
		goto err_exit;
 
 	if (ndev_running)
-		err = dev_open(ndev, NULL);
+		err = aq_ndev_open(ndev);
 
 err_exit:
 	return err;

@@ -53,7 +53,7 @@ struct net_device *aq_ndev_alloc(void)
 	return ndev;
 }
 
-static int aq_ndev_open(struct net_device *ndev)
+int aq_ndev_open(struct net_device *ndev)
 {
 	struct aq_nic_s *aq_nic = netdev_priv(ndev);
 	int err = 0;

@@ -83,7 +83,7 @@ err_exit:
 	return err;
 }
 
-static int aq_ndev_close(struct net_device *ndev)
+int aq_ndev_close(struct net_device *ndev)
 {
 	struct aq_nic_s *aq_nic = netdev_priv(ndev);
 	int err = 0;

@@ -14,5 +14,7 @@
 
 void aq_ndev_schedule_work(struct work_struct *work);
 struct net_device *aq_ndev_alloc(void);
+int aq_ndev_open(struct net_device *ndev);
+int aq_ndev_close(struct net_device *ndev);
 
 #endif /* AQ_MAIN_H */
@@ -1742,11 +1742,8 @@ static int e100_xmit_prepare(struct nic *nic, struct cb *cb,
 	dma_addr = dma_map_single(&nic->pdev->dev, skb->data, skb->len,
				  DMA_TO_DEVICE);
 	/* If we can't map the skb, have the upper layer try later */
-	if (dma_mapping_error(&nic->pdev->dev, dma_addr)) {
-		dev_kfree_skb_any(skb);
-		skb = NULL;
+	if (dma_mapping_error(&nic->pdev->dev, dma_addr))
		return -ENOMEM;
-	}
 
 	/*
	 * Use the last 4 bytes of the SKB payload packet as the CRC, used for
@@ -32,6 +32,8 @@ struct workqueue_struct *fm10k_workqueue;
  **/
 static int __init fm10k_init_module(void)
 {
+	int ret;
+
 	pr_info("%s\n", fm10k_driver_string);
 	pr_info("%s\n", fm10k_copyright);

@@ -43,7 +45,13 @@ static int __init fm10k_init_module(void)
 
 	fm10k_dbg_init();
 
-	return fm10k_register_pci_driver();
+	ret = fm10k_register_pci_driver();
+	if (ret) {
+		fm10k_dbg_exit();
+		destroy_workqueue(fm10k_workqueue);
+	}
+
+	return ret;
 }
 module_init(fm10k_init_module);

@@ -16494,6 +16494,8 @@ static struct pci_driver i40e_driver = {
  **/
 static int __init i40e_init_module(void)
 {
+	int err;
+
 	pr_info("%s: %s\n", i40e_driver_name, i40e_driver_string);
 	pr_info("%s: %s\n", i40e_driver_name, i40e_copyright);

@@ -16511,7 +16513,14 @@ static int __init i40e_init_module(void)
 	}
 
 	i40e_dbg_init();
-	return pci_register_driver(&i40e_driver);
+	err = pci_register_driver(&i40e_driver);
+	if (err) {
+		destroy_workqueue(i40e_wq);
+		i40e_dbg_exit();
+		return err;
+	}
+
+	return 0;
 }
 module_init(i40e_init_module);

@@ -1448,7 +1448,6 @@ static void iavf_fill_rss_lut(struct iavf_adapter *adapter)
 static int iavf_init_rss(struct iavf_adapter *adapter)
 {
 	struct iavf_hw *hw = &adapter->hw;
-	int ret;
 
 	if (!RSS_PF(adapter)) {
		/* Enable PCTYPES for RSS, TCP/UDP with IPv4/IPv6 */

@@ -1464,9 +1463,8 @@ static int iavf_init_rss(struct iavf_adapter *adapter)
 
 	iavf_fill_rss_lut(adapter);
 	netdev_rss_key_fill((void *)adapter->rss_key, adapter->rss_key_size);
-	ret = iavf_config_rss(adapter);
 
-	return ret;
+	return iavf_config_rss(adapter);
 }
 
 /**

@@ -4355,7 +4353,11 @@ static int __init iavf_init_module(void)
		pr_err("%s: Failed to create workqueue\n", iavf_driver_name);
		return -ENOMEM;
 	}
+
 	ret = pci_register_driver(&iavf_driver);
+	if (ret)
+		destroy_workqueue(iavf_wq);
+
 	return ret;
 }

@@ -4859,6 +4859,8 @@ static struct pci_driver ixgbevf_driver = {
  **/
 static int __init ixgbevf_init_module(void)
 {
+	int err;
+
 	pr_info("%s\n", ixgbevf_driver_string);
 	pr_info("%s\n", ixgbevf_copyright);
 	ixgbevf_wq = create_singlethread_workqueue(ixgbevf_driver_name);

@@ -4867,7 +4869,13 @@ static int __init ixgbevf_init_module(void)
		return -ENOMEM;
 	}
 
-	return pci_register_driver(&ixgbevf_driver);
+	err = pci_register_driver(&ixgbevf_driver);
+	if (err) {
+		destroy_workqueue(ixgbevf_wq);
+		return err;
+	}
+
+	return 0;
 }
 
 module_init(ixgbevf_init_module);
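The fm10k, i40e, iavf and ixgbevf hunks above all cure the same leak: module init created a workqueue (and, for some drivers, debugfs state), then returned pci_register_driver()'s status directly, so a registration failure left those resources behind. A rough standalone model of the corrected shape (the names and the failing stub are placeholders):

    #include <stdio.h>
    #include <stdlib.h>

    static void *wq;

    static int pci_register_driver_stub(void) { return -1; } /* simulate failure */

    static int init_module_fixed(void)
    {
        int err;

        wq = malloc(64);   /* stands in for create_singlethread_workqueue() */
        if (!wq)
            return -1;

        err = pci_register_driver_stub();
        if (err) {
            free(wq);      /* undo everything set up above */
            wq = NULL;
            return err;
        }

        return 0;
    }

    int main(void)
    {
        printf("init = %d, wq leaked? %s\n", init_module_fixed(), wq ? "yes" : "no");
        return 0;
    }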
@@ -1434,8 +1434,8 @@ static ssize_t outlen_write(struct file *filp, const char __user *buf,
		return -EFAULT;
 
 	err = sscanf(outlen_str, "%d", &outlen);
-	if (err < 0)
-		return err;
+	if (err != 1)
+		return -EINVAL;
 
 	ptr = kzalloc(outlen, GFP_KERNEL);
 	if (!ptr)
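The outlen_write() fix leans on sscanf()'s contract: the return value is the number of successful conversions (possibly 0), not a negative errno, so `err < 0` never catches a non-numeric string and the uninitialized outlen would have been used. A short demonstration:

    #include <stdio.h>

    int main(void)
    {
        int outlen;
        /* No digits: sscanf performs 0 conversions and returns 0, not <0. */
        int err = sscanf("not-a-number", "%d", &outlen);

        if (err != 1)
            printf("rejected, err=%d (outlen left uninitialized)\n", err);
        return 0;
    }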
@@ -309,6 +309,8 @@ revert_changes:
 	for (curr_dest = 0; curr_dest < num_vport_dests; curr_dest++) {
		struct mlx5_termtbl_handle *tt = attr->dests[curr_dest].termtbl;
 
+		attr->dests[curr_dest].termtbl = NULL;
+
		/* search for the destination associated with the
		 * current term table
		 */

@@ -709,7 +709,7 @@ static int dr_matcher_add_to_tbl(struct mlx5dr_matcher *matcher)
 	int ret;
 
 	next_matcher = NULL;
-	list_for_each_entry(tmp_matcher, &tbl->matcher_list, matcher_list) {
+	list_for_each_entry(tmp_matcher, &tbl->matcher_list, list_node) {
		if (tmp_matcher->prio >= matcher->prio) {
			next_matcher = tmp_matcher;
			break;

@@ -719,11 +719,11 @@ static int dr_matcher_add_to_tbl(struct mlx5dr_matcher *matcher)
 
 	prev_matcher = NULL;
 	if (next_matcher && !first)
-		prev_matcher = list_prev_entry(next_matcher, matcher_list);
+		prev_matcher = list_prev_entry(next_matcher, list_node);
 	else if (!first)
		prev_matcher = list_last_entry(&tbl->matcher_list,
					       struct mlx5dr_matcher,
-					       matcher_list);
+					       list_node);
 
 	if (dmn->type == MLX5DR_DOMAIN_TYPE_FDB ||
	    dmn->type == MLX5DR_DOMAIN_TYPE_NIC_RX) {

@@ -744,12 +744,12 @@ static int dr_matcher_add_to_tbl(struct mlx5dr_matcher *matcher)
 	}
 
 	if (prev_matcher)
-		list_add(&matcher->matcher_list, &prev_matcher->matcher_list);
+		list_add(&matcher->list_node, &prev_matcher->list_node);
 	else if (next_matcher)
-		list_add_tail(&matcher->matcher_list,
-			      &next_matcher->matcher_list);
+		list_add_tail(&matcher->list_node,
+			      &next_matcher->list_node);
 	else
-		list_add(&matcher->matcher_list, &tbl->matcher_list);
+		list_add(&matcher->list_node, &tbl->matcher_list);
 
 	return 0;
 }

@@ -922,7 +922,7 @@ mlx5dr_matcher_create(struct mlx5dr_table *tbl,
 	matcher->prio = priority;
 	matcher->match_criteria = match_criteria_enable;
 	refcount_set(&matcher->refcount, 1);
-	INIT_LIST_HEAD(&matcher->matcher_list);
+	INIT_LIST_HEAD(&matcher->list_node);
 
 	mlx5dr_domain_lock(tbl->dmn);

@@ -985,15 +985,15 @@ static int dr_matcher_remove_from_tbl(struct mlx5dr_matcher *matcher)
 	struct mlx5dr_domain *dmn = tbl->dmn;
 	int ret = 0;
 
-	if (list_is_last(&matcher->matcher_list, &tbl->matcher_list))
+	if (list_is_last(&matcher->list_node, &tbl->matcher_list))
		next_matcher = NULL;
 	else
-		next_matcher = list_next_entry(matcher, matcher_list);
+		next_matcher = list_next_entry(matcher, list_node);
 
-	if (matcher->matcher_list.prev == &tbl->matcher_list)
+	if (matcher->list_node.prev == &tbl->matcher_list)
		prev_matcher = NULL;
 	else
-		prev_matcher = list_prev_entry(matcher, matcher_list);
+		prev_matcher = list_prev_entry(matcher, list_node);
 
 	if (dmn->type == MLX5DR_DOMAIN_TYPE_FDB ||
	    dmn->type == MLX5DR_DOMAIN_TYPE_NIC_RX) {

@@ -1013,7 +1013,7 @@ static int dr_matcher_remove_from_tbl(struct mlx5dr_matcher *matcher)
		return ret;
 	}
 
-	list_del(&matcher->matcher_list);
+	list_del(&matcher->list_node);
 
 	return 0;
 }

@@ -9,7 +9,7 @@ int mlx5dr_table_set_miss_action(struct mlx5dr_table *tbl,
 	struct mlx5dr_matcher *last_matcher = NULL;
 	struct mlx5dr_htbl_connect_info info;
 	struct mlx5dr_ste_htbl *last_htbl;
-	int ret;
+	int ret = -EOPNOTSUPP;
 
 	if (action && action->action_type != DR_ACTION_TYP_FT)
		return -EOPNOTSUPP;

@@ -19,7 +19,7 @@ int mlx5dr_table_set_miss_action(struct mlx5dr_table *tbl,
 	if (!list_empty(&tbl->matcher_list))
		last_matcher = list_last_entry(&tbl->matcher_list,
					       struct mlx5dr_matcher,
-					       matcher_list);
+					       list_node);
 
 	if (tbl->dmn->type == MLX5DR_DOMAIN_TYPE_NIC_RX ||
	    tbl->dmn->type == MLX5DR_DOMAIN_TYPE_FDB) {

@@ -68,6 +68,9 @@ int mlx5dr_table_set_miss_action(struct mlx5dr_table *tbl,
		}
 	}
 
+	if (ret)
+		goto out;
+
 	/* Release old action */
 	if (tbl->miss_action)
		refcount_dec(&tbl->miss_action->refcount);

@@ -891,7 +891,7 @@ struct mlx5dr_matcher {
 	struct mlx5dr_table *tbl;
 	struct mlx5dr_matcher_rx_tx rx;
 	struct mlx5dr_matcher_rx_tx tx;
-	struct list_head matcher_list;
+	struct list_head list_node;
 	u32 prio;
 	struct mlx5dr_match_param mask;
 	u8 match_criteria;
@@ -249,6 +249,7 @@ static void nixge_hw_dma_bd_release(struct net_device *ndev)
 	struct sk_buff *skb;
 	int i;
 
+	if (priv->rx_bd_v) {
 	for (i = 0; i < RX_BD_NUM; i++) {
		phys_addr = nixge_hw_dma_bd_get_addr(&priv->rx_bd_v[i],
						     phys);

@@ -263,11 +264,11 @@ static void nixge_hw_dma_bd_release(struct net_device *ndev)
		dev_kfree_skb(skb);
 	}
 
-	if (priv->rx_bd_v)
		dma_free_coherent(ndev->dev.parent,
				  sizeof(*priv->rx_bd_v) * RX_BD_NUM,
				  priv->rx_bd_v,
				  priv->rx_bd_p);
+	}
 
 	if (priv->tx_skb)
		devm_kfree(ndev->dev.parent, priv->tx_skb);
@@ -2991,7 +2991,7 @@ static void qlcnic_83xx_recover_driver_lock(struct qlcnic_adapter *adapter)
		QLCWRX(adapter->ahw, QLC_83XX_RECOVER_DRV_LOCK, val);
		dev_info(&adapter->pdev->dev,
			 "%s: lock recovery initiated\n", __func__);
-		msleep(QLC_83XX_DRV_LOCK_RECOVERY_DELAY);
+		mdelay(QLC_83XX_DRV_LOCK_RECOVERY_DELAY);
		val = QLCRDX(adapter->ahw, QLC_83XX_RECOVER_DRV_LOCK);
		id = ((val >> 2) & 0xF);
		if (id == adapter->portnum) {

@@ -3027,7 +3027,7 @@ int qlcnic_83xx_lock_driver(struct qlcnic_adapter *adapter)
		if (status)
			break;
 
-		msleep(QLC_83XX_DRV_LOCK_WAIT_DELAY);
+		mdelay(QLC_83XX_DRV_LOCK_WAIT_DELAY);
		i++;
 
		if (i == 1)
@@ -2491,6 +2491,7 @@ static int __maybe_unused ravb_resume(struct device *dev)
		ret = ravb_open(ndev);
		if (ret < 0)
			return ret;
+		ravb_set_rx_mode(ndev);
		netif_device_attach(ndev);
 	}
@@ -745,6 +745,8 @@ static void dwmac4_flow_ctrl(struct mac_device_info *hw, unsigned int duplex,
 	if (fc & FLOW_RX) {
		pr_debug("\tReceive Flow-Control ON\n");
		flow |= GMAC_RX_FLOW_CTRL_RFE;
+	} else {
+		pr_debug("\tReceive Flow-Control OFF\n");
 	}
 	writel(flow, ioaddr + GMAC_RX_FLOW_CTRL);
@@ -1158,7 +1158,15 @@ static void stmmac_mac_link_up(struct phylink_config *config,
		ctrl |= priv->hw->link.duplex;
 
 	/* Flow Control operation */
-	if (tx_pause && rx_pause)
+	if (rx_pause && tx_pause)
+		priv->flow_ctrl = FLOW_AUTO;
+	else if (rx_pause && !tx_pause)
+		priv->flow_ctrl = FLOW_RX;
+	else if (!rx_pause && tx_pause)
+		priv->flow_ctrl = FLOW_TX;
+	else
+		priv->flow_ctrl = FLOW_OFF;
+
	stmmac_mac_flow_ctrl(priv, duplex);
 
 	if (ctrl != old_ctrl)
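The stmmac hunk derives the flow-control mode from both pause booleans instead of programming the MAC only when the two are enabled together. The mapping is a plain four-entry truth table; a self-contained sketch (the enum values are illustrative):

    #include <stdbool.h>
    #include <stdio.h>

    enum flow { FLOW_OFF, FLOW_RX, FLOW_TX, FLOW_AUTO };

    static enum flow flow_ctrl(bool rx_pause, bool tx_pause)
    {
        if (rx_pause && tx_pause)
            return FLOW_AUTO;   /* pause frames honoured and generated */
        if (rx_pause)
            return FLOW_RX;     /* only honour received pause frames */
        if (tx_pause)
            return FLOW_TX;     /* only generate pause frames */
        return FLOW_OFF;
    }

    int main(void)
    {
        for (int rx = 0; rx < 2; rx++)
            for (int tx = 0; tx < 2; tx++)
                printf("rx=%d tx=%d -> %d\n", rx, tx, flow_ctrl(rx, tx));
        return 0;
    }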
@@ -2054,7 +2054,7 @@ static void am65_cpsw_nuss_cleanup_ndev(struct am65_cpsw_common *common)
 
 	for (i = 0; i < common->port_num; i++) {
		port = &common->ports[i];
-		if (port->ndev)
+		if (port->ndev && port->ndev->reg_state == NETREG_REGISTERED)
			unregister_netdev(port->ndev);
 	}
 }
@@ -120,7 +120,7 @@ int fwnode_mdiobus_register_phy(struct mii_bus *bus,
 	/* Associate the fwnode with the device structure so it
	 * can be looked up later.
	 */
-	phy->mdio.dev.fwnode = child;
+	phy->mdio.dev.fwnode = fwnode_handle_get(child);
 
 	/* All data is now stored in the phy struct, so register it */
 	rc = phy_device_register(phy);
@@ -484,7 +484,14 @@ static int __init ntb_netdev_init_module(void)
|
||||
rc = ntb_transport_register_client_dev(KBUILD_MODNAME);
|
||||
if (rc)
|
||||
return rc;
|
||||
return ntb_transport_register_client(&ntb_netdev_client);
|
||||
|
||||
rc = ntb_transport_register_client(&ntb_netdev_client);
|
||||
if (rc) {
|
||||
ntb_transport_unregister_client_dev(KBUILD_MODNAME);
|
||||
return rc;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
module_init(ntb_netdev_init_module);
|
||||
|
||||
|
||||
@@ -215,6 +215,7 @@ static void phy_mdio_device_free(struct mdio_device *mdiodev)
 
 static void phy_device_release(struct device *dev)
 {
+	fwnode_handle_put(dev->fwnode);
 	kfree(to_phy_device(dev));
 }

@@ -1518,6 +1519,7 @@ error:
 
 error_module_put:
 	module_put(d->driver->owner);
+	d->driver = NULL;
 error_put_device:
 	put_device(d);
 	if (ndev_owner != bus->owner)
@@ -687,7 +687,6 @@ static void __tun_detach(struct tun_file *tfile, bool clean)
		if (tun)
			xdp_rxq_info_unreg(&tfile->xdp_rxq);
		ptr_ring_cleanup(&tfile->tx_ring, tun_ptr_free);
-		sock_put(&tfile->sk);
 	}
 }

@@ -703,6 +702,9 @@ static void tun_detach(struct tun_file *tfile, bool clean)
 	if (dev)
		netdev_state_change(dev);
 	rtnl_unlock();
+
+	if (clean)
+		sock_put(&tfile->sk);
 }
 
 static void tun_detach_all(struct net_device *dev)
@@ -830,8 +830,7 @@ void ipc_mux_ul_encoded_process(struct iosm_mux *ipc_mux, struct sk_buff *skb)
			ipc_mux->ul_data_pend_bytes);
 
 	/* Reset the skb settings. */
-	skb->tail = 0;
-	skb->len = 0;
+	skb_trim(skb, 0);
 
 	/* Add the consumed ADB to the free list. */
 	skb_queue_tail((&ipc_mux->ul_adb.free_list), skb);
@@ -122,7 +122,7 @@ struct iosm_protocol {
 	struct iosm_imem *imem;
 	struct ipc_rsp *rsp_ring[IPC_MEM_MSG_ENTRIES];
 	struct device *dev;
-	phys_addr_t phy_ap_shm;
+	dma_addr_t phy_ap_shm;
 	u32 old_msg_tail;
 };
@@ -3920,7 +3920,7 @@ static void nvme_ns_remove(struct nvme_ns *ns)
 	mutex_unlock(&ns->ctrl->subsys->lock);
 
 	/* guarantee not available in head->list */
-	synchronize_rcu();
+	synchronize_srcu(&ns->head->srcu);
 
 	/* wait for concurrent submissions */
 	if (nvme_mpath_clear_current_path(ns))
@@ -151,11 +151,14 @@ void nvme_mpath_revalidate_paths(struct nvme_ns *ns)
 	struct nvme_ns_head *head = ns->head;
 	sector_t capacity = get_capacity(head->disk);
 	int node;
+	int srcu_idx;
 
+	srcu_idx = srcu_read_lock(&head->srcu);
 	list_for_each_entry_rcu(ns, &head->list, siblings) {
		if (capacity != get_capacity(ns->disk))
			clear_bit(NVME_NS_READY, &ns->flags);
 	}
+	srcu_read_unlock(&head->srcu, srcu_idx);
 
 	for_each_node(node)
		rcu_assign_pointer(head->current_path[node], NULL);
@@ -37,9 +37,9 @@ static int rmem_read(void *context, unsigned int offset,
	 * but as of Dec 2020 this isn't possible on arm64.
	 */
 	addr = memremap(priv->mem->base, available, MEMREMAP_WB);
-	if (IS_ERR(addr)) {
+	if (!addr) {
		dev_err(priv->dev, "Failed to remap memory region\n");
-		return PTR_ERR(addr);
+		return -ENOMEM;
 	}
 
 	count = memory_read_from_buffer(val, bytes, &off, addr, available);
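The rmem fix corrects a mismatched error convention: memremap() signals failure with NULL rather than an ERR_PTR-encoded errno, so IS_ERR() could never trigger and PTR_ERR() would have returned garbage as a "success". A sketch of the two conventions side by side, using simplified local copies of the kernel macros for illustration:

    #include <stdio.h>

    /* Simplified userspace copies of the kernel helpers, for illustration. */
    #define MAX_ERRNO   4095
    #define IS_ERR(p)   ((unsigned long)(p) >= (unsigned long)-MAX_ERRNO)
    #define PTR_ERR(p)  ((long)(p))

    static void *returns_null_on_error(void)   { return NULL; }         /* like memremap() */
    static void *returns_errptr_on_error(void) { return (void *)-12L; } /* like ERR_PTR(-ENOMEM) APIs */

    int main(void)
    {
        void *a = returns_null_on_error();
        void *b = returns_errptr_on_error();

        printf("NULL-style:   IS_ERR=%d, !addr=%d\n", (int)IS_ERR(a), a == NULL);
        printf("ERRPTR-style: IS_ERR=%d, err=%ld\n", (int)IS_ERR(b), PTR_ERR(b));
        return 0;
    }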
@@ -975,8 +975,10 @@ of_fwnode_get_reference_args(const struct fwnode_handle *fwnode,
						       nargs, index, &of_args);
 	if (ret < 0)
		return ret;
-	if (!args)
+	if (!args) {
+		of_node_put(of_args.np);
		return 0;
+	}
 
 	args->nargs = of_args.args_count;
 	args->fwnode = of_fwnode_handle(of_args.np);
@@ -436,9 +436,14 @@ static void __intel_gpio_set_direction(void __iomem *padcfg0, bool input)
 	writel(value, padcfg0);
 }
 
+static int __intel_gpio_get_gpio_mode(u32 value)
+{
+	return (value & PADCFG0_PMODE_MASK) >> PADCFG0_PMODE_SHIFT;
+}
+
 static int intel_gpio_get_gpio_mode(void __iomem *padcfg0)
 {
-	return (readl(padcfg0) & PADCFG0_PMODE_MASK) >> PADCFG0_PMODE_SHIFT;
+	return __intel_gpio_get_gpio_mode(readl(padcfg0));
 }
 
 static void intel_gpio_set_gpio_mode(void __iomem *padcfg0)

@@ -1659,6 +1664,7 @@ EXPORT_SYMBOL_GPL(intel_pinctrl_get_soc_data);
 static bool intel_pinctrl_should_save(struct intel_pinctrl *pctrl, unsigned int pin)
 {
 	const struct pin_desc *pd = pin_desc_get(pctrl->pctldev, pin);
+	u32 value;
 
 	if (!pd || !intel_pad_usable(pctrl, pin))
		return false;

@@ -1673,6 +1679,25 @@ static bool intel_pinctrl_should_save(struct intel_pinctrl *pctrl, unsigned int
	    gpiochip_line_is_irq(&pctrl->chip, intel_pin_to_gpio(pctrl, pin)))
		return true;
 
+	/*
+	 * The firmware on some systems may configure GPIO pins to be
+	 * an interrupt source in so called "direct IRQ" mode. In such
+	 * cases the GPIO controller driver has no idea if those pins
+	 * are being used or not. At the same time, there is a known bug
+	 * in the firmwares that don't restore the pin settings correctly
+	 * after suspend, i.e. by an unknown reason the Rx value becomes
+	 * inverted.
+	 *
+	 * Hence, let's save and restore the pins that are configured
+	 * as GPIOs in the input mode with GPIROUTIOXAPIC bit set.
+	 *
+	 * See https://bugzilla.kernel.org/show_bug.cgi?id=214749.
+	 */
+	value = readl(intel_get_padcfg(pctrl, pin, PADCFG0));
+	if ((value & PADCFG0_GPIROUTIOXAPIC) && (value & PADCFG0_GPIOTXDIS) &&
+	    (__intel_gpio_get_gpio_mode(value) == PADCFG0_PMODE_GPIO))
+		return true;
+
 	return false;
 }
@@ -727,7 +727,7 @@ static int pcs_allocate_pin_table(struct pcs_device *pcs)
 
 	mux_bytes = pcs->width / BITS_PER_BYTE;
 
-	if (pcs->bits_per_mux) {
+	if (pcs->bits_per_mux && pcs->fmask) {
		pcs->bits_per_pin = fls(pcs->fmask);
		nr_pins = (pcs->size * BITS_PER_BYTE) / pcs->bits_per_pin;
 	} else {
@@ -439,8 +439,7 @@ static unsigned int mx51_ecspi_clkdiv(struct spi_imx_data *spi_imx,
 	unsigned int pre, post;
 	unsigned int fin = spi_imx->spi_clk;
 
-	if (unlikely(fspi > fin))
-		return 0;
+	fspi = min(fspi, fin);
 
 	post = fls(fin) - fls(fspi);
 	if (fin > fspi << post)
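The spi-imx change replaces an early `return 0` -- which produced a nonsense divisor whenever the requested rate exceeded the input clock -- with a clamp to the input clock, letting the normal pre/post divider search proceed. A standalone model of that search using fls-style math (the register packing at the end is purely illustrative):

    #include <stdio.h>

    static int fls32(unsigned int x) { return x ? 32 - __builtin_clz(x) : 0; }

    /* Sketch of the eCSPI divider computation after the fix. */
    static unsigned int clkdiv(unsigned int fin, unsigned int fspi)
    {
        unsigned int post, pre;

        fspi = fspi < fin ? fspi : fin;   /* the clamp added by the patch */

        post = fls32(fin) - fls32(fspi);
        if (fin > (fspi << post))
            post++;
        /* now fin <= fspi << post, with post minimal */
        post = post > 4 ? post - 4 : 0;   /* 4 bits of pre-divider available */
        pre = (fin + (fspi << post) - 1) / (fspi << post) - 1;

        printf("fin=%u fspi=%u -> pre=%u post=%u\n", fin, fspi, pre, post);
        return (pre << 4) | post;         /* illustrative register packing */
    }

    int main(void)
    {
        clkdiv(66000000, 20000000);
        clkdiv(66000000, 80000000);       /* above fin: clamped, no longer 0 */
        return 0;
    }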
@@ -61,6 +61,53 @@ static void stm32_usart_clr_bits(struct uart_port *port, u32 reg, u32 bits)
 	writel_relaxed(val, port->membase + reg);
 }
 
+static unsigned int stm32_usart_tx_empty(struct uart_port *port)
+{
+	struct stm32_port *stm32_port = to_stm32_port(port);
+	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+
+	if (readl_relaxed(port->membase + ofs->isr) & USART_SR_TC)
+		return TIOCSER_TEMT;
+
+	return 0;
+}
+
+static void stm32_usart_rs485_rts_enable(struct uart_port *port)
+{
+	struct stm32_port *stm32_port = to_stm32_port(port);
+	struct serial_rs485 *rs485conf = &port->rs485;
+
+	if (stm32_port->hw_flow_control ||
+	    !(rs485conf->flags & SER_RS485_ENABLED))
+		return;
+
+	if (rs485conf->flags & SER_RS485_RTS_ON_SEND) {
+		mctrl_gpio_set(stm32_port->gpios,
+			       stm32_port->port.mctrl | TIOCM_RTS);
+	} else {
+		mctrl_gpio_set(stm32_port->gpios,
+			       stm32_port->port.mctrl & ~TIOCM_RTS);
+	}
+}
+
+static void stm32_usart_rs485_rts_disable(struct uart_port *port)
+{
+	struct stm32_port *stm32_port = to_stm32_port(port);
+	struct serial_rs485 *rs485conf = &port->rs485;
+
+	if (stm32_port->hw_flow_control ||
+	    !(rs485conf->flags & SER_RS485_ENABLED))
+		return;
+
+	if (rs485conf->flags & SER_RS485_RTS_ON_SEND) {
+		mctrl_gpio_set(stm32_port->gpios,
+			       stm32_port->port.mctrl & ~TIOCM_RTS);
+	} else {
+		mctrl_gpio_set(stm32_port->gpios,
+			       stm32_port->port.mctrl | TIOCM_RTS);
+	}
+}
+
 static void stm32_usart_config_reg_rs485(u32 *cr1, u32 *cr3, u32 delay_ADE,
					 u32 delay_DDE, u32 baud)
 {

@@ -149,6 +196,12 @@ static int stm32_usart_config_rs485(struct uart_port *port,
 
 	stm32_usart_set_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
 
+	/* Adjust RTS polarity in case it's driven in software */
+	if (stm32_usart_tx_empty(port))
+		stm32_usart_rs485_rts_disable(port);
+	else
+		stm32_usart_rs485_rts_enable(port);
+
 	return 0;
 }

@@ -314,6 +367,14 @@ static void stm32_usart_tx_interrupt_enable(struct uart_port *port)
 	stm32_usart_set_bits(port, ofs->cr1, USART_CR1_TXEIE);
 }
 
+static void stm32_usart_tc_interrupt_enable(struct uart_port *port)
+{
+	struct stm32_port *stm32_port = to_stm32_port(port);
+	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+
+	stm32_usart_set_bits(port, ofs->cr1, USART_CR1_TCIE);
+}
+
 static void stm32_usart_tx_interrupt_disable(struct uart_port *port)
 {
 	struct stm32_port *stm32_port = to_stm32_port(port);

@@ -325,6 +386,14 @@ static void stm32_usart_tx_interrupt_disable(struct uart_port *port)
 	stm32_usart_clr_bits(port, ofs->cr1, USART_CR1_TXEIE);
 }
 
+static void stm32_usart_tc_interrupt_disable(struct uart_port *port)
+{
+	struct stm32_port *stm32_port = to_stm32_port(port);
+	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+
+	stm32_usart_clr_bits(port, ofs->cr1, USART_CR1_TCIE);
+}
+
 static void stm32_usart_transmit_chars_pio(struct uart_port *port)
 {
 	struct stm32_port *stm32_port = to_stm32_port(port);

@@ -426,6 +495,13 @@ static void stm32_usart_transmit_chars(struct uart_port *port)
 	u32 isr;
 	int ret;
 
+	if (!stm32_port->hw_flow_control &&
+	    port->rs485.flags & SER_RS485_ENABLED) {
+		stm32_port->txdone = false;
+		stm32_usart_tc_interrupt_disable(port);
+		stm32_usart_rs485_rts_enable(port);
+	}
+
 	if (port->x_char) {
		if (stm32_port->tx_dma_busy)
			stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_DMAT);

@@ -465,8 +541,14 @@ static void stm32_usart_transmit_chars(struct uart_port *port)
 	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
		uart_write_wakeup(port);
 
-	if (uart_circ_empty(xmit))
+	if (uart_circ_empty(xmit)) {
		stm32_usart_tx_interrupt_disable(port);
+		if (!stm32_port->hw_flow_control &&
+		    port->rs485.flags & SER_RS485_ENABLED) {
+			stm32_port->txdone = true;
+			stm32_usart_tc_interrupt_enable(port);
+		}
+	}
 }
 
 static irqreturn_t stm32_usart_interrupt(int irq, void *ptr)

@@ -479,6 +561,13 @@ static irqreturn_t stm32_usart_interrupt(int irq, void *ptr)
 
 	sr = readl_relaxed(port->membase + ofs->isr);
 
+	if (!stm32_port->hw_flow_control &&
+	    port->rs485.flags & SER_RS485_ENABLED &&
+	    (sr & USART_SR_TC)) {
+		stm32_usart_tc_interrupt_disable(port);
+		stm32_usart_rs485_rts_disable(port);
+	}
+
 	if ((sr & USART_SR_RTOF) && ofs->icr != UNDEF_REG)
		writel_relaxed(USART_ICR_RTOCF,
			       port->membase + ofs->icr);

@@ -518,17 +607,6 @@ static irqreturn_t stm32_usart_threaded_interrupt(int irq, void *ptr)
 	return IRQ_HANDLED;
 }
 
-static unsigned int stm32_usart_tx_empty(struct uart_port *port)
-{
-	struct stm32_port *stm32_port = to_stm32_port(port);
-	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
-
-	if (readl_relaxed(port->membase + ofs->isr) & USART_SR_TC)
-		return TIOCSER_TEMT;
-
-	return 0;
-}
-
 static void stm32_usart_set_mctrl(struct uart_port *port, unsigned int mctrl)
 {
 	struct stm32_port *stm32_port = to_stm32_port(port);

@@ -566,41 +644,22 @@ static void stm32_usart_disable_ms(struct uart_port *port)
 /* Transmit stop */
 static void stm32_usart_stop_tx(struct uart_port *port)
 {
-	struct stm32_port *stm32_port = to_stm32_port(port);
-	struct serial_rs485 *rs485conf = &port->rs485;
-
 	stm32_usart_tx_interrupt_disable(port);
 
-	if (rs485conf->flags & SER_RS485_ENABLED) {
-		if (rs485conf->flags & SER_RS485_RTS_ON_SEND) {
-			mctrl_gpio_set(stm32_port->gpios,
-				       stm32_port->port.mctrl & ~TIOCM_RTS);
-		} else {
-			mctrl_gpio_set(stm32_port->gpios,
-				       stm32_port->port.mctrl | TIOCM_RTS);
-		}
-	}
+	stm32_usart_rs485_rts_disable(port);
 }
 
 /* There are probably characters waiting to be transmitted. */
 static void stm32_usart_start_tx(struct uart_port *port)
 {
-	struct stm32_port *stm32_port = to_stm32_port(port);
-	struct serial_rs485 *rs485conf = &port->rs485;
 	struct circ_buf *xmit = &port->state->xmit;
 
-	if (uart_circ_empty(xmit) && !port->x_char)
+	if (uart_circ_empty(xmit) && !port->x_char) {
+		stm32_usart_rs485_rts_disable(port);
		return;
+	}
 
-	if (rs485conf->flags & SER_RS485_ENABLED) {
-		if (rs485conf->flags & SER_RS485_RTS_ON_SEND) {
-			mctrl_gpio_set(stm32_port->gpios,
-				       stm32_port->port.mctrl | TIOCM_RTS);
-		} else {
-			mctrl_gpio_set(stm32_port->gpios,
-				       stm32_port->port.mctrl & ~TIOCM_RTS);
-		}
-	}
+	stm32_usart_rs485_rts_enable(port);
 
 	stm32_usart_transmit_chars(port);
 }

@@ -267,6 +267,7 @@ struct stm32_port {
 	bool hw_flow_control;
 	bool swap;		/* swap RX & TX pins */
 	bool fifoen;
+	bool txdone;
 	int rxftcfg;		/* RX FIFO threshold CFG */
 	int txftcfg;		/* TX FIFO threshold CFG */
 	bool wakeup_src;
@@ -167,8 +167,8 @@ responded:
			clear_bit(AFS_SERVER_FL_HAS_FS64, &server->flags);
 	}
 
-	if (rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us) &&
-	    rtt_us < server->probe.rtt) {
+	rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us);
+	if (rtt_us < server->probe.rtt) {
		server->probe.rtt = rtt_us;
		server->rtt = rtt_us;
		alist->preferred = index;
@@ -2060,10 +2060,29 @@ out:
 	return ret;
 }
 
+static int build_ino_list(u64 inum, u64 offset, u64 root, void *ctx)
+{
+	struct btrfs_data_container *inodes = ctx;
+	const size_t c = 3 * sizeof(u64);
+
+	if (inodes->bytes_left >= c) {
+		inodes->bytes_left -= c;
+		inodes->val[inodes->elem_cnt] = inum;
+		inodes->val[inodes->elem_cnt + 1] = offset;
+		inodes->val[inodes->elem_cnt + 2] = root;
+		inodes->elem_cnt += 3;
+	} else {
+		inodes->bytes_missing += c - inodes->bytes_left;
+		inodes->bytes_left = 0;
+		inodes->elem_missed += 3;
+	}
+
+	return 0;
+}
+
 int iterate_inodes_from_logical(u64 logical, struct btrfs_fs_info *fs_info,
				struct btrfs_path *path,
-				iterate_extent_inodes_t *iterate, void *ctx,
-				bool ignore_offset)
+				void *ctx, bool ignore_offset)
 {
 	int ret;
 	u64 extent_item_pos;

@@ -2081,7 +2100,7 @@ int iterate_inodes_from_logical(u64 logical, struct btrfs_fs_info *fs_info,
 	extent_item_pos = logical - found_key.objectid;
 	ret = iterate_extent_inodes(fs_info, found_key.objectid,
					extent_item_pos, search_commit_root,
-					iterate, ctx, ignore_offset);
+					build_ino_list, ctx, ignore_offset);
 
 	return ret;
 }

@@ -35,8 +35,7 @@ int iterate_extent_inodes(struct btrfs_fs_info *fs_info,
			  bool ignore_offset);
 
 int iterate_inodes_from_logical(u64 logical, struct btrfs_fs_info *fs_info,
-				struct btrfs_path *path,
-				iterate_extent_inodes_t *iterate, void *ctx,
+				struct btrfs_path *path, void *ctx,
				bool ignore_offset);
 
 int paths_from_inode(u64 inum, struct inode_fs_paths *ipath);
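build_ino_list(), now private to backref.c, appends fixed-size (inum, offset, root) records to a caller-sized buffer, and once the buffer is full it switches to counting what was missed so userspace can retry with a larger one. The bookkeeping works the same outside the kernel; a simplified model:

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified model of btrfs_data_container-style accounting. */
    struct container {
        uint64_t val[6];       /* room for two (inum, offset, root) triples */
        size_t bytes_left;
        size_t bytes_missing;
        unsigned elem_cnt;
        unsigned elem_missed;
    };

    static void add_triple(struct container *c, uint64_t inum, uint64_t off, uint64_t root)
    {
        const size_t sz = 3 * sizeof(uint64_t);

        if (c->bytes_left >= sz) {
            c->bytes_left -= sz;
            c->val[c->elem_cnt]     = inum;
            c->val[c->elem_cnt + 1] = off;
            c->val[c->elem_cnt + 2] = root;
            c->elem_cnt += 3;
        } else {
            /* Buffer full: record how much space a retry would need. */
            c->bytes_missing += sz - c->bytes_left;
            c->bytes_left = 0;
            c->elem_missed += 3;
        }
    }

    int main(void)
    {
        struct container c = { .bytes_left = sizeof(c.val) };

        for (uint64_t i = 0; i < 3; i++)   /* the third triple does not fit */
            add_triple(&c, 256 + i, i * 4096, 5);

        printf("stored=%u missed=%u bytes_missing=%zu\n",
               c.elem_cnt / 3, c.elem_missed / 3, c.bytes_missing);
        return 0;
    }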
@@ -3911,26 +3911,6 @@ out:
 	return ret;
 }
 
-static int build_ino_list(u64 inum, u64 offset, u64 root, void *ctx)
-{
-	struct btrfs_data_container *inodes = ctx;
-	const size_t c = 3 * sizeof(u64);
-
-	if (inodes->bytes_left >= c) {
-		inodes->bytes_left -= c;
-		inodes->val[inodes->elem_cnt] = inum;
-		inodes->val[inodes->elem_cnt + 1] = offset;
-		inodes->val[inodes->elem_cnt + 2] = root;
-		inodes->elem_cnt += 3;
-	} else {
-		inodes->bytes_missing += c - inodes->bytes_left;
-		inodes->bytes_left = 0;
-		inodes->elem_missed += 3;
-	}
-
-	return 0;
-}
-
 static long btrfs_ioctl_logical_to_ino(struct btrfs_fs_info *fs_info,
					void __user *arg, int version)
 {

@@ -3966,21 +3946,20 @@ static long btrfs_ioctl_logical_to_ino(struct btrfs_fs_info *fs_info,
		size = min_t(u32, loi->size, SZ_16M);
 	}
 
-	inodes = init_data_container(size);
-	if (IS_ERR(inodes)) {
-		ret = PTR_ERR(inodes);
-		goto out_loi;
-	}
-
 	path = btrfs_alloc_path();
 	if (!path) {
		ret = -ENOMEM;
		goto out;
 	}
 
+	inodes = init_data_container(size);
+	if (IS_ERR(inodes)) {
+		ret = PTR_ERR(inodes);
+		inodes = NULL;
+		goto out;
+	}
+
 	ret = iterate_inodes_from_logical(loi->logical, fs_info, path,
-					build_ino_list, inodes, ignore_offset);
+					  inodes, ignore_offset);
+	btrfs_free_path(path);
 	if (ret == -EINVAL)
		ret = -ENOENT;
 	if (ret < 0)

@@ -3992,7 +3971,6 @@ static long btrfs_ioctl_logical_to_ino(struct btrfs_fs_info *fs_info,
		ret = -EFAULT;
 
 out:
-	btrfs_free_path(path);
 	kvfree(inodes);
 out_loi:
 	kfree(loi);
@@ -2903,14 +2903,7 @@ int btrfs_qgroup_inherit(struct btrfs_trans_handle *trans, u64 srcid,
		dstgroup->rsv_rfer = inherit->lim.rsv_rfer;
		dstgroup->rsv_excl = inherit->lim.rsv_excl;
 
-		ret = update_qgroup_limit_item(trans, dstgroup);
-		if (ret) {
-			fs_info->qgroup_flags |= BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT;
-			btrfs_info(fs_info,
-				   "unable to update quota limit for %llu",
-				   dstgroup->qgroupid);
-			goto unlock;
-		}
+		qgroup_dirty(fs_info, dstgroup);
 	}
 
 	if (srcid) {

@@ -3280,7 +3273,8 @@ out:
 static bool rescan_should_stop(struct btrfs_fs_info *fs_info)
 {
 	return btrfs_fs_closing(fs_info) ||
-		test_bit(BTRFS_FS_STATE_REMOUNTING, &fs_info->fs_state);
+		test_bit(BTRFS_FS_STATE_REMOUNTING, &fs_info->fs_state) ||
+		!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
 }
 
 static void btrfs_qgroup_rescan_worker(struct btrfs_work *work)

@@ -3310,11 +3304,9 @@ static void btrfs_qgroup_rescan_worker(struct btrfs_work *work)
			err = PTR_ERR(trans);
			break;
		}
-		if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags)) {
-			err = -EINTR;
-		} else {
+
		err = qgroup_rescan_leaf(trans, path);
-		}
+
		if (err > 0)
			btrfs_commit_transaction(trans);
		else

@@ -3328,7 +3320,7 @@ out:
 	if (err > 0 &&
	    fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT) {
		fs_info->qgroup_flags &= ~BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT;
-	} else if (err < 0) {
+	} else if (err < 0 || stopped) {
		fs_info->qgroup_flags |= BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT;
 	}
 	mutex_unlock(&fs_info->qgroup_rescan_lock);
@@ -222,7 +222,7 @@ static int erofs_fill_symlink(struct inode *inode, void *data,
 
 	/* if it cannot be handled with fast symlink scheme */
 	if (vi->datalayout != EROFS_INODE_FLAT_INLINE ||
-	    inode->i_size >= PAGE_SIZE) {
+	    inode->i_size >= PAGE_SIZE || inode->i_size < 0) {
		inode->i_op = &erofs_symlink_iops;
		return 0;
 	}
@@ -5322,7 +5322,29 @@ struct io_poll_table {
 };
 
 #define IO_POLL_CANCEL_FLAG	BIT(31)
-#define IO_POLL_REF_MASK	GENMASK(30, 0)
+#define IO_POLL_RETRY_FLAG	BIT(30)
+#define IO_POLL_REF_MASK	GENMASK(29, 0)
+
+/*
+ * We usually have 1-2 refs taken, 128 is more than enough and we want to
+ * maximise the margin between this amount and the moment when it overflows.
+ */
+#define IO_POLL_REF_BIAS	128
+
+static bool io_poll_get_ownership_slowpath(struct io_kiocb *req)
+{
+	int v;
+
+	/*
+	 * poll_refs are already elevated and we don't have much hope for
+	 * grabbing the ownership. Instead of incrementing set a retry flag
+	 * to notify the loop that there might have been some change.
+	 */
+	v = atomic_fetch_or(IO_POLL_RETRY_FLAG, &req->poll_refs);
+	if (v & IO_POLL_REF_MASK)
+		return false;
+	return !(atomic_fetch_inc(&req->poll_refs) & IO_POLL_REF_MASK);
+}
 
 /*
  * If refs part of ->poll_refs (see IO_POLL_REF_MASK) is 0, it's free. We can

@@ -5332,6 +5354,8 @@ struct io_poll_table {
  */
 static inline bool io_poll_get_ownership(struct io_kiocb *req)
 {
+	if (unlikely(atomic_read(&req->poll_refs) >= IO_POLL_REF_BIAS))
+		return io_poll_get_ownership_slowpath(req);
 	return !(atomic_fetch_inc(&req->poll_refs) & IO_POLL_REF_MASK);
 }

@@ -5440,6 +5464,23 @@ static int io_poll_check_events(struct io_kiocb *req)
			return 0;
		if (v & IO_POLL_CANCEL_FLAG)
			return -ECANCELED;
+		/*
+		 * cqe.res contains only events of the first wake up
+		 * and all others are be lost. Redo vfs_poll() to get
+		 * up to date state.
+		 */
+		if ((v & IO_POLL_REF_MASK) != 1)
+			req->result = 0;
+		if (v & IO_POLL_RETRY_FLAG) {
+			req->result = 0;
+			/*
+			 * We won't find new events that came in between
+			 * vfs_poll and the ref put unless we clear the
+			 * flag in advance.
+			 */
+			atomic_andnot(IO_POLL_RETRY_FLAG, &req->poll_refs);
+			v &= ~IO_POLL_RETRY_FLAG;
+		}
 
		if (!req->result) {
			struct poll_table_struct pt = { ._key = poll->events };

@@ -5464,11 +5505,15 @@ static int io_poll_check_events(struct io_kiocb *req)
			return 0;
		}
 
+		/* force the next iteration to vfs_poll() */
+		req->result = 0;
+
		/*
		 * Release all references, retry if someone tried to restart
		 * task_work while we were executing it.
		 */
-	} while (atomic_sub_return(v & IO_POLL_REF_MASK, &req->poll_refs));
+	} while (atomic_sub_return(v & IO_POLL_REF_MASK, &req->poll_refs) &
+		 IO_POLL_REF_MASK);
 
 	return 1;
 }
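The io_uring changes pack two control flags into the top bits of a single atomic counter: BIT(31) for cancellation and now BIT(30) for retry, leaving GENMASK(29, 0) as the actual reference count, so releases must mask before subtracting. A compact C11 model of that bit discipline (this shows the layout idea only, not io_uring's code):

    #include <stdatomic.h>
    #include <stdio.h>

    #define CANCEL_FLAG (1u << 31)
    #define RETRY_FLAG  (1u << 30)
    #define REF_MASK    ((1u << 30) - 1)    /* GENMASK(29, 0) */

    static atomic_uint poll_refs;

    /* First incrementer owns the request; flag bits don't count as ownership. */
    static int get_ownership(void)
    {
        return !(atomic_fetch_add(&poll_refs, 1) & REF_MASK);
    }

    int main(void)
    {
        atomic_fetch_or(&poll_refs, RETRY_FLAG);    /* flag set, refs still 0 */

        printf("first claim owns:  %d\n", get_ownership());    /* 1 */
        printf("second claim owns: %d\n", get_ownership());    /* 0 */

        /* Drop our refs; the flag survives because only REF_MASK bits are subtracted. */
        unsigned v = atomic_load(&poll_refs);
        atomic_fetch_sub(&poll_refs, v & REF_MASK);
        printf("retry flag still set: %d\n", !!(atomic_load(&poll_refs) & RETRY_FLAG));
        return 0;
    }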
@@ -5640,7 +5685,6 @@ static int __io_arm_poll_handler(struct io_kiocb *req,
				 struct io_poll_table *ipt, __poll_t mask)
 {
 	struct io_ring_ctx *ctx = req->ctx;
-	int v;
 
 	INIT_HLIST_NODE(&req->hash_node);
 	io_init_poll_iocb(poll, mask, io_poll_wake);

@@ -5686,11 +5730,10 @@ static int __io_arm_poll_handler(struct io_kiocb *req,
 	}
 
 	/*
-	 * Release ownership. If someone tried to queue a tw while it was
-	 * locked, kick it off for them.
+	 * Try to release ownership. If we see a change of state, e.g.
+	 * poll was waken up, queue up a tw, it'll deal with it.
	 */
-	v = atomic_dec_return(&req->poll_refs);
-	if (unlikely(v & IO_POLL_REF_MASK))
+	if (atomic_cmpxchg(&req->poll_refs, 1, 0) != 1)
		__io_poll_execute(req, 0);
 	return 0;
 }