Merge 5.15.83 into android14-5.15
Changes in 5.15.83
clk: generalize devm_clk_get() a bit
clk: Provide new devm_clk helpers for prepared and enabled clocks
mmc: mtk-sd: Fix missing clk_disable_unprepare in msdc_of_clock_parse()
arm64: dts: rockchip: keep I2S1 disabled for GPIO function on ROCK Pi 4 series
arm: dts: rockchip: fix node name for hym8563 rtc
arm: dts: rockchip: remove clock-frequency from rtc
ARM: dts: rockchip: fix ir-receiver node names
arm64: dts: rockchip: fix ir-receiver node names
ARM: dts: rockchip: rk3188: fix lcdc1-rgb24 node name
fs: use acquire ordering in __fget_light()
ARM: 9251/1: perf: Fix stacktraces for tracepoint events in THUMB2 kernels
ARM: 9266/1: mm: fix no-MMU ZERO_PAGE() implementation
ASoC: wm8962: Wait for updated value of WM8962_CLOCKING1 register
spi: mediatek: Fix DEVAPC Violation at KO Remove
ARM: dts: rockchip: disable arm_global_timer on rk3066 and rk3188
ASoC: rt711-sdca: fix the latency time of clock stop prepare state machine transitions
9p/fd: Use P9_HDRSZ for header size
regulator: slg51000: Wait after asserting CS pin
ALSA: seq: Fix function prototype mismatch in snd_seq_expand_var_event
selftests/net: Find nettest in current directory
btrfs: send: avoid unaligned encoded writes when attempting to clone range
ASoC: soc-pcm: Add NULL check in BE reparenting
regulator: twl6030: fix get status of twl6032 regulators
fbcon: Use kzalloc() in fbcon_prepare_logo()
usb: dwc3: gadget: Disable GUSB2PHYCFG.SUSPHY for End Transfer
9p/xen: check logical size for buffer size
net: usb: qmi_wwan: add u-blox 0x1342 composition
mm/khugepaged: take the right locks for page table retraction
mm/khugepaged: fix GUP-fast interaction by sending IPI
mm/khugepaged: invoke MMU notifiers in shmem/file collapse paths
rtc: mc146818-lib: extract mc146818_avoid_UIP
rtc: cmos: avoid UIP when writing alarm time
rtc: cmos: avoid UIP when reading alarm time
cifs: fix use-after-free caused by invalid pointer `hostname`
drm/bridge: anx7625: Fix edid_read break case in sp_tx_edid_read()
xen/netback: Ensure protocol headers don't fall in the non-linear area
xen/netback: do some code cleanup
xen/netback: don't call kfree_skb() with interrupts disabled
media: videobuf2-core: take mmap_lock in vb2_get_unmapped_area()
soundwire: intel: Initialize clock stop timeout
Revert "ARM: dts: imx7: Fix NAND controller size-cells"
media: v4l2-dv-timings.c: fix too strict blanking sanity checks
memcg: fix possible use-after-free in memcg_write_event_control()
mm/gup: fix gup_pud_range() for dax
Bluetooth: btusb: Add debug message for CSR controllers
Bluetooth: Fix crash when replugging CSR fake controllers
net: mana: Fix race on per-CQ variable napi work_done
KVM: s390: vsie: Fix the initialization of the epoch extension (epdx) field
drm/vmwgfx: Don't use screen objects when SEV is active
drm/amdgpu/sdma_v4_0: turn off SDMA ring buffer in the s2idle suspend
drm/shmem-helper: Remove errant put in error path
drm/shmem-helper: Avoid vm_open error paths
net: dsa: sja1105: avoid out of bounds access in sja1105_init_l2_policing()
HID: usbhid: Add ALWAYS_POLL quirk for some mice
HID: hid-lg4ff: Add check for empty lbuf
HID: core: fix shift-out-of-bounds in hid_report_raw_event
HID: ite: Enable QUIRK_TOUCHPAD_ON_OFF_REPORT on Acer Aspire Switch V 10
can: af_can: fix NULL pointer dereference in can_rcv_filter
clk: Fix pointer casting to prevent oops in devm_clk_release()
gpiolib: improve coding style for local variables
gpiolib: check the 'ngpios' property in core gpiolib code
gpiolib: fix memory leak in gpiochip_setup_dev()
netfilter: nft_set_pipapo: Actually validate intervals in fields after the first one
drm/vmwgfx: Fix race issue calling pin_user_pages
ieee802154: cc2520: Fix error return code in cc2520_hw_init()
ca8210: Fix crash by zero initializing data
netfilter: ctnetlink: fix compilation warning after data race fixes in ct mark
drm/bridge: ti-sn65dsi86: Fix output polarity setting bug
gpio: amd8111: Fix PCI device reference count leak
e1000e: Fix TX dispatch condition
igb: Allocate MSI-X vector when testing
net: broadcom: Add PTP_1588_CLOCK_OPTIONAL dependency for BCMGENET under ARCH_BCM2835
drm: bridge: dw_hdmi: fix preference of RGB modes over YUV420
af_unix: Get user_ns from in_skb in unix_diag_get_exact().
vmxnet3: correctly report encapsulated LRO packet
vmxnet3: use correct intrConf reference when using extended queues
Bluetooth: 6LoWPAN: add missing hci_dev_put() in get_l2cap_conn()
Bluetooth: Fix not cleanup led when bt_init fails
net: dsa: ksz: Check return value
net: dsa: hellcreek: Check return value
net: dsa: sja1105: Check return value
selftests: rtnetlink: correct xfrm policy rule in kci_test_ipsec_offload
mac802154: fix missing INIT_LIST_HEAD in ieee802154_if_add()
net: encx24j600: Add parentheses to fix precedence
net: encx24j600: Fix invalid logic in reading of MISTAT register
net: mdiobus: fwnode_mdiobus_register_phy() rework error handling
net: mdiobus: fix double put fwnode in the error path
octeontx2-pf: Fix potential memory leak in otx2_init_tc()
xen-netfront: Fix NULL sring after live migration
net: mvneta: Prevent out of bounds read in mvneta_config_rss()
i40e: Fix not setting default xps_cpus after reset
i40e: Fix for VF MAC address 0
i40e: Disallow ip4 and ip6 l4_4_bytes
NFC: nci: Bounds check struct nfc_target arrays
nvme initialize core quirks before calling nvme_init_subsystem
gpio/rockchip: fix refcount leak in rockchip_gpiolib_register()
net: stmmac: fix "snps,axi-config" node property parsing
ip_gre: do not report erspan version on GRE interface
net: microchip: sparx5: Fix missing destroy_workqueue of mact_queue
net: thunderx: Fix missing destroy_workqueue of nicvf_rx_mode_wq
net: hisilicon: Fix potential use-after-free in hisi_femac_rx()
net: mdio: fix unbalanced fwnode reference count in mdio_device_release()
net: hisilicon: Fix potential use-after-free in hix5hd2_rx()
tipc: Fix potential OOB in tipc_link_proto_rcv()
ipv4: Fix incorrect route flushing when source address is deleted
ipv4: Fix incorrect route flushing when table ID 0 is used
net: dsa: sja1105: fix memory leak in sja1105_setup_devlink_regions()
tipc: call tipc_lxc_xmit without holding node_read_lock
ethernet: aeroflex: fix potential skb leak in greth_init_rings()
dpaa2-switch: Fix memory leak in dpaa2_switch_acl_entry_add() and dpaa2_switch_acl_entry_remove()
xen/netback: fix build warning
net: phy: mxl-gpy: fix version reporting
net: plip: don't call kfree_skb/dev_kfree_skb() under spin_lock_irq()
ipv6: avoid use-after-free in ip6_fragment()
net: thunderbolt: fix memory leak in tbnet_open()
net: mvneta: Fix an out of bounds check
macsec: add missing attribute validation for offload
s390/qeth: fix various format strings
s390/qeth: fix use-after-free in hsci
can: esd_usb: Allow REC and TEC to return to zero
block: move CONFIG_BLOCK guard to top Makefile
io_uring: move to separate directory
io_uring: Fix a null-ptr-deref in io_tctx_exit_cb()
Linux 5.15.83

Change-Id: I08ef74d6ad8786c191050294dcbf1090908e7c4d
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
@@ -7244,9 +7244,6 @@ F: include/linux/fs.h
F: include/linux/fs_types.h
F: include/uapi/linux/fs.h
F: include/uapi/linux/openat2.h
X: fs/io-wq.c
X: fs/io-wq.h
X: fs/io_uring.c

FINTEK F75375S HARDWARE MONITOR AND FAN CONTROLLER DRIVER
M: Riku Voipio <riku.voipio@iki.fi>
@@ -9825,9 +9822,7 @@ L: io-uring@vger.kernel.org
S: Maintained
T: git git://git.kernel.dk/linux-block
T: git git://git.kernel.dk/liburing
F: fs/io-wq.c
F: fs/io-wq.h
F: fs/io_uring.c
F: io_uring/
F: include/linux/io_uring.h
F: include/uapi/linux/io_uring.h
F: tools/io_uring/

Makefile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 15
SUBLEVEL = 82
SUBLEVEL = 83
EXTRAVERSION =
NAME = Trick or Treat

@@ -1212,7 +1212,9 @@ endif
$(Q)$(MAKE) $(hdr-inst)=$(hdr-prefix)arch/$(SRCARCH)/include/uapi

ifeq ($(KBUILD_EXTMOD),)
core-y += kernel/ certs/ mm/ fs/ ipc/ security/ crypto/ block/
core-y += kernel/ certs/ mm/ fs/ ipc/ security/ crypto/
core-$(CONFIG_BLOCK) += block/
core-$(CONFIG_IO_URING) += io_uring/

vmlinux-dirs := $(patsubst %/,%,$(filter %/, \
$(core-y) $(core-m) $(drivers-y) $(drivers-m) \

@@ -1255,7 +1255,7 @@
gpmi: nand-controller@33002000{
compatible = "fsl,imx7d-gpmi-nand";
#address-cells = <1>;
#size-cells = <0>;
#size-cells = <1>;
reg = <0x33002000 0x2000>, <0x33004000 0x4000>;
reg-names = "gpmi-nand", "bch";
interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;

@@ -31,11 +31,10 @@
&i2c1 {
status = "okay";

hym8563: hym8563@51 {
hym8563: rtc@51 {
compatible = "haoyu,hym8563";
reg = <0x51>;
#clock-cells = <0>;
clock-frequency = <32768>;
clock-output-names = "xin32k";
};
};

@@ -71,7 +71,7 @@
#sound-dai-cells = <0>;
};

ir_recv: gpio-ir-receiver {
ir_recv: ir-receiver {
compatible = "gpio-ir-receiver";
gpios = <&gpio0 RK_PB2 GPIO_ACTIVE_LOW>;
pinctrl-names = "default";

@@ -378,7 +378,7 @@
rockchip,pins = <2 RK_PD3 1 &pcfg_pull_none>;
};

lcdc1_rgb24: ldcd1-rgb24 {
lcdc1_rgb24: lcdc1-rgb24 {
rockchip,pins = <2 RK_PA0 1 &pcfg_pull_none>,
<2 RK_PA1 1 &pcfg_pull_none>,
<2 RK_PA2 1 &pcfg_pull_none>,
@@ -606,7 +606,6 @@

&global_timer {
interrupts = <GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_EDGE_RISING)>;
status = "disabled";
};

&local_timer {

@@ -54,7 +54,7 @@
vin-supply = <&vcc_sys>;
};

hym8563@51 {
rtc@51 {
compatible = "haoyu,hym8563";
reg = <0x51>;

@@ -233,11 +233,10 @@
vin-supply = <&vcc_sys>;
};

hym8563: hym8563@51 {
hym8563: rtc@51 {
compatible = "haoyu,hym8563";
reg = <0x51>;
#clock-cells = <0>;
clock-frequency = <32768>;
clock-output-names = "xin32k";
interrupt-parent = <&gpio7>;
interrupts = <RK_PA4 IRQ_TYPE_EDGE_FALLING>;

@@ -162,11 +162,10 @@
vin-supply = <&vcc_sys>;
};

hym8563: hym8563@51 {
hym8563: rtc@51 {
compatible = "haoyu,hym8563";
reg = <0x51>;
#clock-cells = <0>;
clock-frequency = <32768>;
clock-output-names = "xin32k";
};

@@ -165,11 +165,10 @@
};

&i2c0 {
hym8563: hym8563@51 {
hym8563: rtc@51 {
compatible = "haoyu,hym8563";
reg = <0x51>;
#clock-cells = <0>;
clock-frequency = <32768>;
clock-output-names = "xin32k";
interrupt-parent = <&gpio0>;
interrupts = <RK_PA4 IRQ_TYPE_EDGE_FALLING>;

@@ -241,7 +241,6 @@
interrupt-parent = <&gpio5>;
interrupts = <RK_PC3 IRQ_TYPE_LEVEL_LOW>;
#clock-cells = <0>;
clock-frequency = <32768>;
clock-output-names = "hym8563";
pinctrl-names = "default";
pinctrl-0 = <&hym8563_int>;

@@ -76,6 +76,13 @@
reg = <0x1013c200 0x20>;
interrupts = <GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_EDGE_RISING)>;
clocks = <&cru CORE_PERI>;
status = "disabled";
/* The clock source and the sched_clock provided by the arm_global_timer
 * on Rockchip rk3066a/rk3188 are quite unstable because their rates
 * depend on the CPU frequency.
 * Keep the arm_global_timer disabled in order to have the
 * DW_APB_TIMER (rk3066a) or ROCKCHIP_TIMER (rk3188) selected by default.
 */
};

local_timer: local-timer@1013c600 {

@@ -17,7 +17,7 @@ extern unsigned long perf_misc_flags(struct pt_regs *regs);

#define perf_arch_fetch_caller_regs(regs, __ip) { \
(regs)->ARM_pc = (__ip); \
(regs)->ARM_fp = (unsigned long) __builtin_frame_address(0); \
frame_pointer((regs)) = (unsigned long) __builtin_frame_address(0); \
(regs)->ARM_sp = current_stack_pointer; \
(regs)->ARM_cpsr = SVC_MODE; \
}

@@ -44,12 +44,6 @@

typedef pte_t *pte_addr_t;

/*
 * ZERO_PAGE is a global shared page that is always zero: used
 * for zero-mapped memory areas etc..
 */
#define ZERO_PAGE(vaddr) (virt_to_page(0))

/*
 * Mark the prot value as uncacheable and unbufferable.
 */

@@ -10,6 +10,15 @@
#include <linux/const.h>
#include <asm/proc-fns.h>

#ifndef __ASSEMBLY__
/*
 * ZERO_PAGE is a global shared page that is always zero: used
 * for zero-mapped memory areas etc..
 */
extern struct page *empty_zero_page;
#define ZERO_PAGE(vaddr) (empty_zero_page)
#endif

#ifndef CONFIG_MMU

#include <asm-generic/pgtable-nopud.h>
@@ -156,13 +165,6 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
#define __S111 __PAGE_SHARED_EXEC

#ifndef __ASSEMBLY__
/*
 * ZERO_PAGE is a global shared page that is always zero: used
 * for zero-mapped memory areas etc..
 */
extern struct page *empty_zero_page;
#define ZERO_PAGE(vaddr) (empty_zero_page)

extern pgd_t swapper_pg_dir[PTRS_PER_PGD];

@@ -26,6 +26,13 @@

unsigned long vectors_base;

/*
 * empty_zero_page is a special page that is used for
 * zero-initialized data and COW.
 */
struct page *empty_zero_page;
EXPORT_SYMBOL(empty_zero_page);

#ifdef CONFIG_ARM_MPU
struct mpu_rgn_info mpu_rgn_info;
#endif
@@ -148,9 +155,21 @@ void __init adjust_lowmem_bounds(void)
 */
void __init paging_init(const struct machine_desc *mdesc)
{
void *zero_page;

early_trap_init((void *)vectors_base);
mpu_setup();

/* allocate the zero page. */
zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
if (!zero_page)
panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
__func__, PAGE_SIZE, PAGE_SIZE);

bootmem_init();

empty_zero_page = virt_to_page(zero_page);
flush_dcache_page(empty_zero_page);
}

/*

@@ -19,7 +19,7 @@
stdout-path = "serial2:1500000n8";
};

ir_rx {
ir-receiver {
compatible = "gpio-ir-receiver";
gpios = <&gpio0 RK_PC0 GPIO_ACTIVE_HIGH>;
pinctrl-names = "default";

@@ -446,7 +446,6 @@
&i2s1 {
rockchip,playback-channels = <2>;
rockchip,capture-channels = <2>;
status = "okay";
};

&i2s2 {

@@ -538,8 +538,10 @@ static int shadow_scb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
if (test_kvm_cpu_feat(vcpu->kvm, KVM_S390_VM_CPU_FEAT_CEI))
scb_s->eca |= scb_o->eca & ECA_CEI;
/* Epoch Extension */
if (test_kvm_facility(vcpu->kvm, 139))
if (test_kvm_facility(vcpu->kvm, 139)) {
scb_s->ecd |= scb_o->ecd & ECD_MEF;
scb_s->epdx = scb_o->epdx;
}

/* etoken */
if (test_kvm_facility(vcpu->kvm, 156))

@@ -3,7 +3,7 @@
# Makefile for the kernel block layer
#

obj-$(CONFIG_BLOCK) := bdev.o fops.o bio.o elevator.o blk-core.o blk-sysfs.o \
obj-y := bdev.o fops.o bio.o elevator.o blk-core.o blk-sysfs.o \
blk-flush.o blk-settings.o blk-ioc.o blk-map.o \
blk-exec.o blk-merge.o blk-timeout.o \
blk-lib.o blk-mq.o blk-mq-tag.o blk-stat.o \

@@ -1901,6 +1901,11 @@ static int btusb_setup_csr(struct hci_dev *hdev)

rp = (struct hci_rp_read_local_version *)skb->data;

bt_dev_info(hdev, "CSR: Setting up dongle with HCI ver=%u rev=%04x; LMP ver=%u subver=%04x; manufacturer=%u",
le16_to_cpu(rp->hci_ver), le16_to_cpu(rp->hci_rev),
le16_to_cpu(rp->lmp_ver), le16_to_cpu(rp->lmp_subver),
le16_to_cpu(rp->manufacturer));

/* Detect a wide host of Chinese controllers that aren't CSR.
 *
 * Known fake bcdDevices: 0x0100, 0x0134, 0x1915, 0x2520, 0x7558, 0x8891

@@ -4,42 +4,101 @@
#include <linux/export.h>
#include <linux/gfp.h>

struct devm_clk_state {
struct clk *clk;
void (*exit)(struct clk *clk);
};

static void devm_clk_release(struct device *dev, void *res)
{
clk_put(*(struct clk **)res);
struct devm_clk_state *state = res;

if (state->exit)
state->exit(state->clk);

clk_put(state->clk);
}

static struct clk *__devm_clk_get(struct device *dev, const char *id,
struct clk *(*get)(struct device *dev, const char *id),
int (*init)(struct clk *clk),
void (*exit)(struct clk *clk))
{
struct devm_clk_state *state;
struct clk *clk;
int ret;

state = devres_alloc(devm_clk_release, sizeof(*state), GFP_KERNEL);
if (!state)
return ERR_PTR(-ENOMEM);

clk = get(dev, id);
if (IS_ERR(clk)) {
ret = PTR_ERR(clk);
goto err_clk_get;
}

if (init) {
ret = init(clk);
if (ret)
goto err_clk_init;
}

state->clk = clk;
state->exit = exit;

devres_add(dev, state);

return clk;

err_clk_init:

clk_put(clk);
err_clk_get:

devres_free(state);
return ERR_PTR(ret);
}

struct clk *devm_clk_get(struct device *dev, const char *id)
{
struct clk **ptr, *clk;

ptr = devres_alloc(devm_clk_release, sizeof(*ptr), GFP_KERNEL);
if (!ptr)
return ERR_PTR(-ENOMEM);

clk = clk_get(dev, id);
if (!IS_ERR(clk)) {
*ptr = clk;
devres_add(dev, ptr);
} else {
devres_free(ptr);
}

return clk;
return __devm_clk_get(dev, id, clk_get, NULL, NULL);
}
EXPORT_SYMBOL(devm_clk_get);

struct clk *devm_clk_get_prepared(struct device *dev, const char *id)
{
return __devm_clk_get(dev, id, clk_get, clk_prepare, clk_unprepare);
}
EXPORT_SYMBOL_GPL(devm_clk_get_prepared);

struct clk *devm_clk_get_enabled(struct device *dev, const char *id)
{
return __devm_clk_get(dev, id, clk_get,
clk_prepare_enable, clk_disable_unprepare);
}
EXPORT_SYMBOL_GPL(devm_clk_get_enabled);

struct clk *devm_clk_get_optional(struct device *dev, const char *id)
{
struct clk *clk = devm_clk_get(dev, id);

if (clk == ERR_PTR(-ENOENT))
return NULL;

return clk;
return __devm_clk_get(dev, id, clk_get_optional, NULL, NULL);
}
EXPORT_SYMBOL(devm_clk_get_optional);

struct clk *devm_clk_get_optional_prepared(struct device *dev, const char *id)
{
return __devm_clk_get(dev, id, clk_get_optional,
clk_prepare, clk_unprepare);
}
EXPORT_SYMBOL_GPL(devm_clk_get_optional_prepared);

struct clk *devm_clk_get_optional_enabled(struct device *dev, const char *id)
{
return __devm_clk_get(dev, id, clk_get_optional,
clk_prepare_enable, clk_disable_unprepare);
}
EXPORT_SYMBOL_GPL(devm_clk_get_optional_enabled);

struct clk_bulk_devres {
struct clk_bulk_data *clks;
int num_clks;

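For context, a minimal consumer-side sketch of what these helpers buy a driver (the probe function and the "bus" clock name below are illustrative assumptions, not part of this diff): devm_clk_get_enabled() collapses the usual clk_get() + clk_prepare_enable() + error-unwind boilerplate, and the devres release path above (state->exit followed by clk_put()) undoes everything on driver detach.

#include <linux/clk.h>
#include <linux/platform_device.h>

static int foo_probe(struct platform_device *pdev)
{
	struct clk *clk;

	/* Acquire + prepare + enable in one call; the matching
	 * clk_disable_unprepare() and clk_put() run automatically
	 * via devres when the driver unbinds. "bus" is a made-up name. */
	clk = devm_clk_get_enabled(&pdev->dev, "bus");
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	return 0;
}

This is also why the mtk-sd hunk further down can drop its explicit clk_prepare_enable() call by switching to devm_clk_get_optional_enabled().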
@@ -226,7 +226,10 @@ found:
ioport_unmap(gp.pm);
goto out;
}
return 0;

out:
pci_dev_put(pdev);
return err;
}

@@ -234,6 +237,7 @@ static void __exit amd_gpio_exit(void)
{
gpiochip_remove(&gp.chip);
ioport_unmap(gp.pm);
pci_dev_put(gp.pdev);
}

module_init(amd_gpio_init);

@@ -605,6 +605,7 @@ static int rockchip_gpiolib_register(struct rockchip_pin_bank *bank)
return -ENODATA;

pctldev = of_pinctrl_get(pctlnp);
of_node_put(pctlnp);
if (!pctldev)
return -ENODEV;

@@ -525,12 +525,13 @@ static int gpiochip_setup_dev(struct gpio_device *gdev)
if (ret)
return ret;

/* From this point, the .release() function cleans up gpio_device */
gdev->dev.release = gpiodevice_release;

ret = gpiochip_sysfs_register(gdev);
if (ret)
goto err_remove_device;

/* From this point, the .release() function cleans up gpio_device */
gdev->dev.release = gpiodevice_release;
dev_dbg(&gdev->dev, "registered GPIOs %d to %d on %s\n", gdev->base,
gdev->base + gdev->ngpio - 1, gdev->chip->label ? : "generic");

@@ -594,11 +595,12 @@ int gpiochip_add_data_with_key(struct gpio_chip *gc, void *data,
struct lock_class_key *request_key)
{
struct fwnode_handle *fwnode = gc->parent ? dev_fwnode(gc->parent) : NULL;
unsigned long flags;
int ret = 0;
unsigned i;
int base = gc->base;
struct gpio_device *gdev;
unsigned long flags;
unsigned int i;
u32 ngpios = 0;
int base = 0;
int ret = 0;

/*
 * First: allocate and populate the internal stat container, and
@@ -640,22 +642,43 @@ int gpiochip_add_data_with_key(struct gpio_chip *gc, void *data,
else
gdev->owner = THIS_MODULE;

gdev->descs = kcalloc(gc->ngpio, sizeof(gdev->descs[0]), GFP_KERNEL);
if (!gdev->descs) {
ret = -ENOMEM;
/*
 * Try the device properties if the driver didn't supply the number
 * of GPIO lines.
 */
ngpios = gc->ngpio;
if (ngpios == 0) {
ret = device_property_read_u32(&gdev->dev, "ngpios", &ngpios);
if (ret == -ENODATA)
/*
 * -ENODATA means that there is no property found and
 * we want to issue the error message to the user.
 * Besides that, we want to return different error code
 * to state that supplied value is not valid.
 */
ngpios = 0;
else if (ret)
goto err_free_dev_name;

gc->ngpio = ngpios;
}

if (gc->ngpio == 0) {
chip_err(gc, "tried to insert a GPIO chip with zero lines\n");
ret = -EINVAL;
goto err_free_descs;
goto err_free_dev_name;
}

if (gc->ngpio > FASTPATH_NGPIO)
chip_warn(gc, "line cnt %u is greater than fast path cnt %u\n",
gc->ngpio, FASTPATH_NGPIO);

gdev->descs = kcalloc(gc->ngpio, sizeof(*gdev->descs), GFP_KERNEL);
if (!gdev->descs) {
ret = -ENOMEM;
goto err_free_dev_name;
}

gdev->label = kstrdup_const(gc->label ?: "unknown", GFP_KERNEL);
if (!gdev->label) {
ret = -ENOMEM;
@@ -674,11 +697,13 @@ int gpiochip_add_data_with_key(struct gpio_chip *gc, void *data,
 * it may be a pipe dream. It will not happen before we get rid
 * of the sysfs interface anyways.
 */
base = gc->base;
if (base < 0) {
base = gpiochip_find_base(gc->ngpio);
if (base < 0) {
ret = base;
spin_unlock_irqrestore(&gpio_lock, flags);
ret = base;
base = 0;
goto err_free_label;
}
/*
@@ -786,6 +811,11 @@ err_remove_of_chip:
err_free_gpiochip_mask:
gpiochip_remove_pin_ranges(gc);
gpiochip_free_valid_mask(gc);
if (gdev->dev.release) {
/* release() has been registered by gpiochip_setup_dev() */
put_device(&gdev->dev);
goto err_print_message;
}
err_remove_from_list:
spin_lock_irqsave(&gpio_lock, flags);
list_del(&gdev->list);
@@ -799,13 +829,14 @@ err_free_dev_name:
err_free_ida:
ida_free(&gpio_ida, gdev->id);
err_free_gdev:
kfree(gdev);
err_print_message:
/* failures here can mean systems won't boot... */
if (ret != -EPROBE_DEFER) {
pr_err("%s: GPIOs %d..%d (%s) failed to register, %d\n", __func__,
gdev->base, gdev->base + gdev->ngpio - 1,
base, base + (int)ngpios - 1,
gc->label ? : "generic", ret);
}
kfree(gdev);
return ret;
}
EXPORT_SYMBOL_GPL(gpiochip_add_data_with_key);

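For illustration only: with the ngpios fallback above, a provider driver can leave gc->ngpio at zero and let the core read the line count from the "ngpios" firmware property. An intentionally incomplete, hypothetical probe (the driver name and the devm_gpiochip_add_data() registration are assumptions, not taken from this diff; a real chip would also fill in label and line accessors):

static int foo_gpio_probe(struct platform_device *pdev)
{
	struct gpio_chip *gc;

	gc = devm_kzalloc(&pdev->dev, sizeof(*gc), GFP_KERNEL);
	if (!gc)
		return -ENOMEM;

	gc->parent = &pdev->dev;
	gc->base = -1;    /* let gpiochip_find_base() pick a range */
	gc->ngpio = 0;    /* 0: core falls back to the "ngpios" property */

	return devm_gpiochip_add_data(&pdev->dev, gc, NULL);
}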
@@ -978,13 +978,13 @@ static void sdma_v4_0_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 se

/**
 * sdma_v4_0_gfx_stop - stop the gfx async dma engines
 * sdma_v4_0_gfx_enable - enable the gfx async dma engines
 *
 * @adev: amdgpu_device pointer
 *
 * Stop the gfx async dma ring buffers (VEGA10).
 * @enable: enable SDMA RB/IB
 * control the gfx async dma ring buffers (VEGA10).
 */
static void sdma_v4_0_gfx_stop(struct amdgpu_device *adev)
static void sdma_v4_0_gfx_enable(struct amdgpu_device *adev, bool enable)
{
struct amdgpu_ring *sdma[AMDGPU_MAX_SDMA_INSTANCES];
u32 rb_cntl, ib_cntl;
@@ -999,10 +999,10 @@ static void sdma_v4_0_gfx_stop(struct amdgpu_device *adev)
}

rb_cntl = RREG32_SDMA(i, mmSDMA0_GFX_RB_CNTL);
rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, RB_ENABLE, 0);
rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, RB_ENABLE, enable ? 1 : 0);
WREG32_SDMA(i, mmSDMA0_GFX_RB_CNTL, rb_cntl);
ib_cntl = RREG32_SDMA(i, mmSDMA0_GFX_IB_CNTL);
ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_GFX_IB_CNTL, IB_ENABLE, 0);
ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_GFX_IB_CNTL, IB_ENABLE, enable ? 1 : 0);
WREG32_SDMA(i, mmSDMA0_GFX_IB_CNTL, ib_cntl);
}
}
@@ -1129,7 +1129,7 @@ static void sdma_v4_0_enable(struct amdgpu_device *adev, bool enable)
int i;

if (!enable) {
sdma_v4_0_gfx_stop(adev);
sdma_v4_0_gfx_enable(adev, enable);
sdma_v4_0_rlc_stop(adev);
if (adev->sdma.has_page_queue)
sdma_v4_0_page_stop(adev);
@@ -2063,8 +2063,10 @@ static int sdma_v4_0_suspend(void *handle)
struct amdgpu_device *adev = (struct amdgpu_device *)handle;

/* SMU saves SDMA state for us */
if (adev->in_s0ix)
if (adev->in_s0ix) {
sdma_v4_0_gfx_enable(adev, false);
return 0;
}

return sdma_v4_0_hw_fini(adev);
}
@@ -2074,8 +2076,12 @@ static int sdma_v4_0_resume(void *handle)
struct amdgpu_device *adev = (struct amdgpu_device *)handle;

/* SMU restores SDMA state for us */
if (adev->in_s0ix)
if (adev->in_s0ix) {
sdma_v4_0_enable(adev, true);
sdma_v4_0_gfx_enable(adev, true);
amdgpu_ttm_set_buffer_funcs_status(adev, true);
return 0;
}

return sdma_v4_0_hw_init(adev);
}

@@ -796,7 +796,7 @@ static int sp_tx_edid_read(struct anx7625_data *ctx,
int count, blocks_num;
u8 pblock_buf[MAX_DPCD_BUFFER_SIZE];
u8 i, j;
u8 g_edid_break = 0;
int g_edid_break = 0;
int ret;
struct device *dev = &ctx->client->dev;

@@ -827,7 +827,7 @@ static int sp_tx_edid_read(struct anx7625_data *ctx,
g_edid_break = edid_read(ctx, offset,
pblock_buf);

if (g_edid_break)
if (g_edid_break < 0)
break;

memcpy(&pedid_blocks_buf[offset],

@@ -2594,6 +2594,9 @@ static u32 *dw_hdmi_bridge_atomic_get_output_bus_fmts(struct drm_bridge *bridge,
 * if supported. In any case the default RGB888 format is added
 */

/* Default 8bit RGB fallback */
output_fmts[i++] = MEDIA_BUS_FMT_RGB888_1X24;

if (max_bpc >= 16 && info->bpc == 16) {
if (info->color_formats & DRM_COLOR_FORMAT_YCRCB444)
output_fmts[i++] = MEDIA_BUS_FMT_YUV16_1X48;
@@ -2627,9 +2630,6 @@ static u32 *dw_hdmi_bridge_atomic_get_output_bus_fmts(struct drm_bridge *bridge,
if (info->color_formats & DRM_COLOR_FORMAT_YCRCB444)
output_fmts[i++] = MEDIA_BUS_FMT_YUV8_1X24;

/* Default 8bit RGB fallback */
output_fmts[i++] = MEDIA_BUS_FMT_RGB888_1X24;

*num_output_fmts = i;

return output_fmts;

@@ -920,9 +920,9 @@ static void ti_sn_bridge_set_video_timings(struct ti_sn65dsi86 *pdata)
&pdata->bridge.encoder->crtc->state->adjusted_mode;
u8 hsync_polarity = 0, vsync_polarity = 0;

if (mode->flags & DRM_MODE_FLAG_PHSYNC)
if (mode->flags & DRM_MODE_FLAG_NHSYNC)
hsync_polarity = CHA_HSYNC_POLARITY;
if (mode->flags & DRM_MODE_FLAG_PVSYNC)
if (mode->flags & DRM_MODE_FLAG_NVSYNC)
vsync_polarity = CHA_VSYNC_POLARITY;

ti_sn65dsi86_write_u16(pdata, SN_CHA_ACTIVE_LINE_LENGTH_LOW_REG,

@@ -541,12 +541,20 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
{
struct drm_gem_object *obj = vma->vm_private_data;
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
int ret;

WARN_ON(shmem->base.import_attach);

ret = drm_gem_shmem_get_pages(shmem);
WARN_ON_ONCE(ret != 0);
mutex_lock(&shmem->pages_lock);

/*
 * We should have already pinned the pages when the buffer was first
 * mmap'd, vm_open() just grabs an additional reference for the new
 * mm the vma is getting copied into (ie. on fork()).
 */
if (!WARN_ON_ONCE(!shmem->pages_use_count))
shmem->pages_use_count++;

mutex_unlock(&shmem->pages_lock);

drm_gem_vm_open(vma);
}
@@ -591,10 +599,8 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
}

ret = drm_gem_shmem_get_pages(shmem);
if (ret) {
drm_gem_vm_close(vma);
if (ret)
return ret;
}

vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND;
vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);

@@ -1085,21 +1085,21 @@ int vmw_mksstat_add_ioctl(struct drm_device *dev, void *data,
reset_ppn_array(pdesc->strsPPNs, ARRAY_SIZE(pdesc->strsPPNs));

/* Pin mksGuestStat user pages and store those in the instance descriptor */
nr_pinned_stat = pin_user_pages(arg->stat, num_pages_stat, FOLL_LONGTERM, pages_stat, NULL);
nr_pinned_stat = pin_user_pages_fast(arg->stat, num_pages_stat, FOLL_LONGTERM, pages_stat);
if (num_pages_stat != nr_pinned_stat)
goto err_pin_stat;

for (i = 0; i < num_pages_stat; ++i)
pdesc->statPPNs[i] = page_to_pfn(pages_stat[i]);

nr_pinned_info = pin_user_pages(arg->info, num_pages_info, FOLL_LONGTERM, pages_info, NULL);
nr_pinned_info = pin_user_pages_fast(arg->info, num_pages_info, FOLL_LONGTERM, pages_info);
if (num_pages_info != nr_pinned_info)
goto err_pin_info;

for (i = 0; i < num_pages_info; ++i)
pdesc->infoPPNs[i] = page_to_pfn(pages_info[i]);

nr_pinned_strs = pin_user_pages(arg->strs, num_pages_strs, FOLL_LONGTERM, pages_strs, NULL);
nr_pinned_strs = pin_user_pages_fast(arg->strs, num_pages_strs, FOLL_LONGTERM, pages_strs);
if (num_pages_strs != nr_pinned_strs)
goto err_pin_strs;

@@ -953,6 +953,10 @@ int vmw_kms_sou_init_display(struct vmw_private *dev_priv)
struct drm_device *dev = &dev_priv->drm;
int i, ret;

/* Screen objects won't work if GMR's aren't available */
if (!dev_priv->has_gmr)
return -ENOSYS;

if (!(dev_priv->capabilities & SVGA_CAP_SCREEN_OBJECT_2)) {
return -ENOSYS;
}

@@ -1310,6 +1310,9 @@ static s32 snto32(__u32 value, unsigned n)
if (!value || !n)
return 0;

if (n > 32)
n = 32;

switch (n) {
case 8: return ((__s8)value);
case 16: return ((__s16)value);

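The clamp matters because sign-extending an n-bit field with shifts is undefined once the shift count leaves [0, 31]. A generic user-space illustration of the same idea (this helper is hypothetical; the kernel's snto32() uses the switch/cast form shown above):

#include <stdint.h>

/* Sign-extend the low n bits of value. Clamping n keeps the shift
 * count in range and so avoids undefined behaviour; arithmetic right
 * shift of a negative value is assumed (true on Linux targets). */
static int32_t snto32_generic(uint32_t value, unsigned int n)
{
	if (n == 0 || n > 32)
		n = 32;
	return (int32_t)(value << (32 - n)) >> (32 - n);
}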
@@ -261,6 +261,7 @@
#define USB_DEVICE_ID_CH_AXIS_295 0x001c

#define USB_VENDOR_ID_CHERRY 0x046a
#define USB_DEVICE_ID_CHERRY_MOUSE_000C 0x000c
#define USB_DEVICE_ID_CHERRY_CYMOTION 0x0023
#define USB_DEVICE_ID_CHERRY_CYMOTION_SOLAR 0x0027

@@ -892,6 +893,7 @@
#define USB_DEVICE_ID_MS_XBOX_ONE_S_CONTROLLER 0x02fd
#define USB_DEVICE_ID_MS_PIXART_MOUSE 0x00cb
#define USB_DEVICE_ID_8BITDO_SN30_PRO_PLUS 0x02e0
#define USB_DEVICE_ID_MS_MOUSE_0783 0x0783

#define USB_VENDOR_ID_MOJO 0x8282
#define USB_DEVICE_ID_RETRO_ADAPTER 0x3201
@@ -1185,6 +1187,7 @@
#define USB_DEVICE_ID_SYNAPTICS_DELL_K15A 0x6e21
#define USB_DEVICE_ID_SYNAPTICS_ACER_ONE_S1002 0x73f4
#define USB_DEVICE_ID_SYNAPTICS_ACER_ONE_S1003 0x73f5
#define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_017 0x73f6
#define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5 0x81a7

#define USB_VENDOR_ID_TEXAS_INSTRUMENTS 0x2047
@@ -1341,6 +1344,7 @@

#define USB_VENDOR_ID_PRIMAX 0x0461
#define USB_DEVICE_ID_PRIMAX_MOUSE_4D22 0x4d22
#define USB_DEVICE_ID_PRIMAX_MOUSE_4E2A 0x4e2a
#define USB_DEVICE_ID_PRIMAX_KEYBOARD 0x4e05
#define USB_DEVICE_ID_PRIMAX_REZEL 0x4e72
#define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F 0x4d0f

@@ -121,6 +121,11 @@ static const struct hid_device_id ite_devices[] = {
USB_VENDOR_ID_SYNAPTICS,
USB_DEVICE_ID_SYNAPTICS_ACER_ONE_S1003),
.driver_data = QUIRK_TOUCHPAD_ON_OFF_REPORT },
/* ITE8910 USB kbd ctlr, with Synaptics touchpad connected to it. */
{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
USB_VENDOR_ID_SYNAPTICS,
USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_017),
.driver_data = QUIRK_TOUCHPAD_ON_OFF_REPORT },
{ }
};
MODULE_DEVICE_TABLE(hid, ite_devices);

@@ -872,6 +872,12 @@ static ssize_t lg4ff_alternate_modes_store(struct device *dev, struct device_att
return -ENOMEM;

i = strlen(lbuf);

if (i == 0) {
kfree(lbuf);
return -EINVAL;
}

if (lbuf[i-1] == '\n') {
if (i == 1) {
kfree(lbuf);

@@ -54,6 +54,7 @@ static const struct hid_device_id hid_quirks[] = {
{ HID_USB_DEVICE(USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_FLIGHT_SIM_YOKE), HID_QUIRK_NOGET },
{ HID_USB_DEVICE(USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_PRO_PEDALS), HID_QUIRK_NOGET },
{ HID_USB_DEVICE(USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_PRO_THROTTLE), HID_QUIRK_NOGET },
{ HID_USB_DEVICE(USB_VENDOR_ID_CHERRY, USB_DEVICE_ID_CHERRY_MOUSE_000C), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_K65RGB), HID_QUIRK_NO_INIT_REPORTS },
{ HID_USB_DEVICE(USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_K65RGB_RAPIDFIRE), HID_QUIRK_NO_INIT_REPORTS | HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_K70RGB), HID_QUIRK_NO_INIT_REPORTS },
@@ -122,6 +123,7 @@ static const struct hid_device_id hid_quirks[] = {
{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_MOUSE_C05A), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_MOUSE_C06A), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_MCS, USB_DEVICE_ID_MCS_GAMEPADBLOCK), HID_QUIRK_MULTI_INPUT },
{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_MOUSE_0783), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_PIXART_MOUSE), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_POWER_COVER), HID_QUIRK_NO_INIT_REPORTS },
{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_SURFACE3_COVER), HID_QUIRK_NO_INIT_REPORTS },
@@ -146,6 +148,7 @@ static const struct hid_device_id hid_quirks[] = {
{ HID_USB_DEVICE(USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN), HID_QUIRK_NO_INIT_REPORTS },
{ HID_USB_DEVICE(USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_MOUSE_4D22), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_MOUSE_4E2A), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D65), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22), HID_QUIRK_ALWAYS_POLL },

@@ -788,7 +788,13 @@ int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory,
num_buffers = max_t(unsigned int, *count, q->min_buffers_needed);
num_buffers = min_t(unsigned int, num_buffers, VB2_MAX_FRAME);
memset(q->alloc_devs, 0, sizeof(q->alloc_devs));
/*
 * Set this now to ensure that drivers see the correct q->memory value
 * in the queue_setup op.
 */
mutex_lock(&q->mmap_lock);
q->memory = memory;
mutex_unlock(&q->mmap_lock);

/*
 * Ask the driver how many buffers and planes per buffer it requires.
@@ -797,22 +803,27 @@ int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory,
ret = call_qop(q, queue_setup, q, &num_buffers, &num_planes,
plane_sizes, q->alloc_devs);
if (ret)
return ret;
goto error;

/* Check that driver has set sane values */
if (WARN_ON(!num_planes))
return -EINVAL;
if (WARN_ON(!num_planes)) {
ret = -EINVAL;
goto error;
}

for (i = 0; i < num_planes; i++)
if (WARN_ON(!plane_sizes[i]))
return -EINVAL;
if (WARN_ON(!plane_sizes[i])) {
ret = -EINVAL;
goto error;
}

/* Finally, allocate buffers and video memory */
allocated_buffers =
__vb2_queue_alloc(q, memory, num_buffers, num_planes, plane_sizes);
if (allocated_buffers == 0) {
dprintk(q, 1, "memory allocation failed\n");
return -ENOMEM;
ret = -ENOMEM;
goto error;
}

/*
@@ -853,7 +864,8 @@ int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory,
if (ret < 0) {
/*
 * Note: __vb2_queue_free() will subtract 'allocated_buffers'
 * from q->num_buffers.
 * from q->num_buffers and it will reset q->memory to
 * VB2_MEMORY_UNKNOWN.
 */
__vb2_queue_free(q, allocated_buffers);
mutex_unlock(&q->mmap_lock);
@@ -869,6 +881,12 @@ int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory,
q->waiting_for_buffers = !q->is_output;

return 0;

error:
mutex_lock(&q->mmap_lock);
q->memory = VB2_MEMORY_UNKNOWN;
mutex_unlock(&q->mmap_lock);
return ret;
}
EXPORT_SYMBOL_GPL(vb2_core_reqbufs);

@@ -879,6 +897,7 @@ int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
{
unsigned int num_planes = 0, num_buffers, allocated_buffers;
unsigned plane_sizes[VB2_MAX_PLANES] = { };
bool no_previous_buffers = !q->num_buffers;
int ret;

if (q->num_buffers == VB2_MAX_FRAME) {
@@ -886,13 +905,19 @@ int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
return -ENOBUFS;
}

if (!q->num_buffers) {
if (no_previous_buffers) {
if (q->waiting_in_dqbuf && *count) {
dprintk(q, 1, "another dup()ped fd is waiting for a buffer\n");
return -EBUSY;
}
memset(q->alloc_devs, 0, sizeof(q->alloc_devs));
/*
 * Set this now to ensure that drivers see the correct q->memory
 * value in the queue_setup op.
 */
mutex_lock(&q->mmap_lock);
q->memory = memory;
mutex_unlock(&q->mmap_lock);
q->waiting_for_buffers = !q->is_output;
} else {
if (q->memory != memory) {
@@ -915,14 +940,15 @@ int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
ret = call_qop(q, queue_setup, q, &num_buffers,
&num_planes, plane_sizes, q->alloc_devs);
if (ret)
return ret;
goto error;

/* Finally, allocate buffers and video memory */
allocated_buffers = __vb2_queue_alloc(q, memory, num_buffers,
num_planes, plane_sizes);
if (allocated_buffers == 0) {
dprintk(q, 1, "memory allocation failed\n");
return -ENOMEM;
ret = -ENOMEM;
goto error;
}

/*
@@ -953,7 +979,8 @@ int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
if (ret < 0) {
/*
 * Note: __vb2_queue_free() will subtract 'allocated_buffers'
 * from q->num_buffers.
 * from q->num_buffers and it will reset q->memory to
 * VB2_MEMORY_UNKNOWN.
 */
__vb2_queue_free(q, allocated_buffers);
mutex_unlock(&q->mmap_lock);
@@ -968,6 +995,14 @@ int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
*count = allocated_buffers;

return 0;

error:
if (no_previous_buffers) {
mutex_lock(&q->mmap_lock);
q->memory = VB2_MEMORY_UNKNOWN;
mutex_unlock(&q->mmap_lock);
}
return ret;
}
EXPORT_SYMBOL_GPL(vb2_core_create_bufs);

@@ -2124,6 +2159,22 @@ static int __find_plane_by_offset(struct vb2_queue *q, unsigned long off,
struct vb2_buffer *vb;
unsigned int buffer, plane;

/*
 * Sanity checks to ensure the lock is held, MEMORY_MMAP is
 * used and fileio isn't active.
 */
lockdep_assert_held(&q->mmap_lock);

if (q->memory != VB2_MEMORY_MMAP) {
dprintk(q, 1, "queue is not currently set up for mmap\n");
return -EINVAL;
}

if (vb2_fileio_is_active(q)) {
dprintk(q, 1, "file io in progress\n");
return -EBUSY;
}

/*
 * Go over all buffers and their planes, comparing the given offset
 * with an offset assigned to each plane. If a match is found,
@@ -2225,11 +2276,6 @@ int vb2_mmap(struct vb2_queue *q, struct vm_area_struct *vma)
int ret;
unsigned long length;

if (q->memory != VB2_MEMORY_MMAP) {
dprintk(q, 1, "queue is not currently set up for mmap\n");
return -EINVAL;
}

/*
 * Check memory area access mode.
 */
@@ -2251,14 +2297,9 @@ int vb2_mmap(struct vb2_queue *q, struct vm_area_struct *vma)

mutex_lock(&q->mmap_lock);

if (vb2_fileio_is_active(q)) {
dprintk(q, 1, "mmap: file io in progress\n");
ret = -EBUSY;
goto unlock;
}

/*
 * Find the plane corresponding to the offset passed by userspace.
 * Find the plane corresponding to the offset passed by userspace. This
 * will return an error if not MEMORY_MMAP or file I/O is in progress.
 */
ret = __find_plane_by_offset(q, off, &buffer, &plane);
if (ret)
@@ -2311,22 +2352,25 @@ unsigned long vb2_get_unmapped_area(struct vb2_queue *q,
void *vaddr;
int ret;

if (q->memory != VB2_MEMORY_MMAP) {
dprintk(q, 1, "queue is not currently set up for mmap\n");
return -EINVAL;
}
mutex_lock(&q->mmap_lock);

/*
 * Find the plane corresponding to the offset passed by userspace.
 * Find the plane corresponding to the offset passed by userspace. This
 * will return an error if not MEMORY_MMAP or file I/O is in progress.
 */
ret = __find_plane_by_offset(q, off, &buffer, &plane);
if (ret)
return ret;
goto unlock;

vb = q->bufs[buffer];

vaddr = vb2_plane_vaddr(vb, plane);
mutex_unlock(&q->mmap_lock);
return vaddr ? (unsigned long)vaddr : -EINVAL;

unlock:
mutex_unlock(&q->mmap_lock);
return ret;
}
EXPORT_SYMBOL_GPL(vb2_get_unmapped_area);
#endif

@@ -145,6 +145,8 @@ bool v4l2_valid_dv_timings(const struct v4l2_dv_timings *t,
const struct v4l2_bt_timings *bt = &t->bt;
const struct v4l2_bt_timings_cap *cap = &dvcap->bt;
u32 caps = cap->capabilities;
const u32 max_vert = 10240;
u32 max_hor = 3 * bt->width;

if (t->type != V4L2_DV_BT_656_1120)
return false;
@@ -166,14 +168,20 @@ bool v4l2_valid_dv_timings(const struct v4l2_dv_timings *t,
if (!bt->interlaced &&
(bt->il_vbackporch || bt->il_vsync || bt->il_vfrontporch))
return false;
if (bt->hfrontporch > 2 * bt->width ||
bt->hsync > 1024 || bt->hbackporch > 1024)
/*
 * Some video receivers cannot properly separate the frontporch,
 * backporch and sync values, and instead they only have the total
 * blanking. That can be assigned to any of these three fields.
 * So just check that none of these are way out of range.
 */
if (bt->hfrontporch > max_hor ||
bt->hsync > max_hor || bt->hbackporch > max_hor)
return false;
if (bt->vfrontporch > 4096 ||
bt->vsync > 128 || bt->vbackporch > 4096)
if (bt->vfrontporch > max_vert ||
bt->vsync > max_vert || bt->vbackporch > max_vert)
return false;
if (bt->interlaced && (bt->il_vfrontporch > 4096 ||
bt->il_vsync > 128 || bt->il_vbackporch > 4096))
if (bt->interlaced && (bt->il_vfrontporch > max_vert ||
bt->il_vsync > max_vert || bt->il_vbackporch > max_vert))
return false;
return fnc == NULL || fnc(t, fnc_handle);
}

@@ -2530,13 +2530,11 @@ static int msdc_of_clock_parse(struct platform_device *pdev,
if (IS_ERR(host->src_clk_cg))
host->src_clk_cg = NULL;

host->sys_clk_cg = devm_clk_get_optional(&pdev->dev, "sys_cg");
/* If present, always enable for this clock gate */
host->sys_clk_cg = devm_clk_get_optional_enabled(&pdev->dev, "sys_cg");
if (IS_ERR(host->sys_clk_cg))
host->sys_clk_cg = NULL;

/* If present, always enable for this clock gate */
clk_prepare_enable(host->sys_clk_cg);

host->bulk_clks[0].id = "pclk_cg";
host->bulk_clks[1].id = "axi_cg";
host->bulk_clks[2].id = "ahb_cg";

@@ -227,6 +227,10 @@ static void esd_usb2_rx_event(struct esd_usb2_net_priv *priv,
u8 rxerr = msg->msg.rx.data[2];
u8 txerr = msg->msg.rx.data[3];

netdev_dbg(priv->netdev,
"CAN_ERR_EV_EXT: dlc=%#02x state=%02x ecc=%02x rec=%02x tec=%02x\n",
msg->msg.rx.dlc, state, ecc, rxerr, txerr);

skb = alloc_can_err_skb(priv->netdev, &cf);
if (skb == NULL) {
stats->rx_dropped++;
@@ -253,6 +257,8 @@ static void esd_usb2_rx_event(struct esd_usb2_net_priv *priv,
break;
default:
priv->can.state = CAN_STATE_ERROR_ACTIVE;
txerr = 0;
rxerr = 0;
break;
}
} else {

@@ -95,6 +95,8 @@ static int sja1105_setup_devlink_regions(struct dsa_switch *ds)
if (IS_ERR(region)) {
while (--i >= 0)
dsa_devlink_region_destroy(priv->regions[i]);

kfree(priv->regions);
return PTR_ERR(region);
}

@@ -1025,7 +1025,7 @@ static int sja1105_init_l2_policing(struct sja1105_private *priv)

policing[bcast].sharindx = port;
/* Only SJA1110 has multicast policers */
if (mcast <= table->ops->max_entry_count)
if (mcast < table->ops->max_entry_count)
policing[mcast].sharindx = port;
}

@@ -258,6 +258,7 @@ static int greth_init_rings(struct greth_private *greth)
if (dma_mapping_error(greth->dev, dma_addr)) {
if (netif_msg_ifup(greth))
dev_err(greth->dev, "Could not create initial DMA mapping\n");
dev_kfree_skb(skb);
goto cleanup;
}
greth->rx_skbuff[i] = skb;

@@ -71,13 +71,14 @@ config BCM63XX_ENET
config BCMGENET
tristate "Broadcom GENET internal MAC support"
depends on HAS_IOMEM
depends on PTP_1588_CLOCK_OPTIONAL || !ARCH_BCM2835
select MII
select PHYLIB
select FIXED_PHY
select BCM7XXX_PHY
select MDIO_BCM_UNIMAC
select DIMLIB
select BROADCOM_PHY if (ARCH_BCM2835 && PTP_1588_CLOCK_OPTIONAL)
select BROADCOM_PHY if ARCH_BCM2835
help
This driver supports the built-in Ethernet MACs found in the
Broadcom BCM7xxx Set Top Box family chipset.

@@ -2250,7 +2250,7 @@ static int nicvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
err = register_netdev(netdev);
if (err) {
dev_err(dev, "Failed to register netdevice\n");
goto err_unregister_interrupts;
goto err_destroy_workqueue;
}

nic->msg_enable = debug;
@@ -2259,6 +2259,8 @@ static int nicvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)

return 0;

err_destroy_workqueue:
destroy_workqueue(nic->nicvf_rx_mode_wq);
err_unregister_interrupts:
nicvf_unregister_interrupts(nic);
err_free_netdev:

@@ -132,6 +132,7 @@ int dpaa2_switch_acl_entry_add(struct dpaa2_switch_filter_block *filter_block,
DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(dev, acl_entry_cfg->key_iova))) {
dev_err(dev, "DMA mapping failed\n");
kfree(cmd_buff);
return -EFAULT;
}

@@ -142,6 +143,7 @@ int dpaa2_switch_acl_entry_add(struct dpaa2_switch_filter_block *filter_block,
DMA_TO_DEVICE);
if (err) {
dev_err(dev, "dpsw_acl_add_entry() failed %d\n", err);
kfree(cmd_buff);
return err;
}

@@ -172,6 +174,7 @@ dpaa2_switch_acl_entry_remove(struct dpaa2_switch_filter_block *block,
DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(dev, acl_entry_cfg->key_iova))) {
dev_err(dev, "DMA mapping failed\n");
kfree(cmd_buff);
return -EFAULT;
}

@@ -182,6 +185,7 @@ dpaa2_switch_acl_entry_remove(struct dpaa2_switch_filter_block *block,
DMA_TO_DEVICE);
if (err) {
dev_err(dev, "dpsw_acl_remove_entry() failed %d\n", err);
kfree(cmd_buff);
return err;
}

@@ -283,7 +283,7 @@ static int hisi_femac_rx(struct net_device *dev, int limit)
skb->protocol = eth_type_trans(skb, dev);
napi_gro_receive(&priv->napi, skb);
dev->stats.rx_packets++;
dev->stats.rx_bytes += skb->len;
dev->stats.rx_bytes += len;
next:
pos = (pos + 1) % rxq->num;
if (rx_pkts_num >= limit)

@@ -550,7 +550,7 @@ static int hix5hd2_rx(struct net_device *dev, int limit)
skb->protocol = eth_type_trans(skb, dev);
napi_gro_receive(&priv->napi, skb);
dev->stats.rx_packets++;
dev->stats.rx_bytes += skb->len;
dev->stats.rx_bytes += len;
next:
pos = dma_ring_incr(pos, RX_DESC_NUM);
}

@@ -5941,9 +5941,9 @@ static netdev_tx_t e1000_xmit_frame(struct sk_buff *skb,
e1000_tx_queue(tx_ring, tx_flags, count);
/* Make sure there is space in the ring for the next send. */
e1000_maybe_stop_tx(tx_ring,
(MAX_SKB_FRAGS *
((MAX_SKB_FRAGS + 1) *
DIV_ROUND_UP(PAGE_SIZE,
adapter->tx_fifo_limit) + 2));
adapter->tx_fifo_limit) + 4));

if (!netdev_xmit_more() ||
netif_xmit_stopped(netdev_get_tx_queue(netdev, 0))) {

@@ -4364,11 +4364,7 @@ static int i40e_check_fdir_input_set(struct i40e_vsi *vsi,
return -EOPNOTSUPP;

/* First 4 bytes of L4 header */
if (usr_ip4_spec->l4_4_bytes == htonl(0xFFFFFFFF))
new_mask |= I40E_L4_SRC_MASK | I40E_L4_DST_MASK;
else if (!usr_ip4_spec->l4_4_bytes)
new_mask &= ~(I40E_L4_SRC_MASK | I40E_L4_DST_MASK);
else
if (usr_ip4_spec->l4_4_bytes)
return -EOPNOTSUPP;

/* Filtering on Type of Service is not supported. */
@@ -4407,11 +4403,7 @@ static int i40e_check_fdir_input_set(struct i40e_vsi *vsi,
else
return -EOPNOTSUPP;

if (usr_ip6_spec->l4_4_bytes == htonl(0xFFFFFFFF))
new_mask |= I40E_L4_SRC_MASK | I40E_L4_DST_MASK;
else if (!usr_ip6_spec->l4_4_bytes)
new_mask &= ~(I40E_L4_SRC_MASK | I40E_L4_DST_MASK);
else
if (usr_ip6_spec->l4_4_bytes)
return -EOPNOTSUPP;

/* Filtering on Traffic class is not supported. */

@@ -10519,6 +10519,21 @@ static int i40e_rebuild_channels(struct i40e_vsi *vsi)
return 0;
}

/**
 * i40e_clean_xps_state - clean xps state for every tx_ring
 * @vsi: ptr to the VSI
 **/
static void i40e_clean_xps_state(struct i40e_vsi *vsi)
{
int i;

if (vsi->tx_rings)
for (i = 0; i < vsi->num_queue_pairs; i++)
if (vsi->tx_rings[i])
clear_bit(__I40E_TX_XPS_INIT_DONE,
vsi->tx_rings[i]->state);
}

/**
 * i40e_prep_for_reset - prep for the core to reset
 * @pf: board private structure
@@ -10543,9 +10558,11 @@ static void i40e_prep_for_reset(struct i40e_pf *pf)
i40e_pf_quiesce_all_vsi(pf);

for (v = 0; v < pf->num_alloc_vsi; v++) {
if (pf->vsi[v])
if (pf->vsi[v]) {
i40e_clean_xps_state(pf->vsi[v]);
pf->vsi[v]->seid = 0;
}
}

i40e_shutdown_adminq(&pf->hw);

@@ -1578,6 +1578,7 @@ bool i40e_reset_vf(struct i40e_vf *vf, bool flr)
i40e_cleanup_reset_vf(vf);

i40e_flush(hw);
usleep_range(20000, 40000);
clear_bit(I40E_VF_STATE_RESETTING, &vf->vf_states);

return true;
@@ -1701,6 +1702,7 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
}

i40e_flush(hw);
usleep_range(20000, 40000);
clear_bit(__I40E_VF_DISABLE, pf->state);

return true;

@@ -1409,6 +1409,8 @@ static int igb_intr_test(struct igb_adapter *adapter, u64 *data)
*data = 1;
return -1;
}
wr32(E1000_IVAR_MISC, E1000_IVAR_VALID << 8);
wr32(E1000_EIMS, BIT(0));
} else if (adapter->flags & IGB_FLAG_HAS_MSI) {
shared_int = false;
if (request_irq(irq,

@@ -4162,7 +4162,7 @@ static void mvneta_percpu_elect(struct mvneta_port *pp)
/* Use the cpu associated to the rxq when it is online, in all
 * the other cases, use the cpu 0 which can't be offline.
 */
if (cpu_online(pp->rxq_def))
if (pp->rxq_def < nr_cpu_ids && cpu_online(pp->rxq_def))
elected_cpu = pp->rxq_def;

max_cpu = num_present_cpus();

@@ -1090,7 +1090,12 @@ int otx2_init_tc(struct otx2_nic *nic)
return err;

tc->flow_ht_params = tc_flow_ht_params;
return rhashtable_init(&tc->flow_table, &tc->flow_ht_params);
err = rhashtable_init(&tc->flow_table, &tc->flow_ht_params);
if (err) {
kfree(tc->tc_entries_bitmap);
tc->tc_entries_bitmap = NULL;
}
return err;
}

void otx2_shutdown_tc(struct otx2_nic *nic)

@@ -359,7 +359,7 @@ static int regmap_encx24j600_phy_reg_read(void *context, unsigned int reg,
goto err_out;

usleep_range(26, 100);
while ((ret = regmap_read(ctx->regmap, MISTAT, &mistat) != 0) &&
while (((ret = regmap_read(ctx->regmap, MISTAT, &mistat)) == 0) &&
(mistat & BUSY))
cpu_relax();

@@ -397,7 +397,7 @@ static int regmap_encx24j600_phy_reg_write(void *context, unsigned int reg,
goto err_out;

usleep_range(26, 100);
while ((ret = regmap_read(ctx->regmap, MISTAT, &mistat) != 0) &&
while (((ret = regmap_read(ctx->regmap, MISTAT, &mistat)) == 0) &&
(mistat & BUSY))
cpu_relax();

@@ -829,6 +829,8 @@ static int mchp_sparx5_probe(struct platform_device *pdev)
|
||||
|
||||
cleanup_ports:
|
||||
sparx5_cleanup_ports(sparx5);
|
||||
if (sparx5->mact_queue)
|
||||
destroy_workqueue(sparx5->mact_queue);
|
||||
cleanup_config:
|
||||
kfree(configs);
|
||||
cleanup_pnode:
|
||||
@@ -852,6 +854,7 @@ static int mchp_sparx5_remove(struct platform_device *pdev)
|
||||
sparx5_cleanup_ports(sparx5);
|
||||
/* Unregister netdevs */
|
||||
sparx5_unregister_notifier_blocks(sparx5);
|
||||
destroy_workqueue(sparx5->mact_queue);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -488,7 +488,14 @@ enum {
|
||||
|
||||
#define GDMA_DRV_CAP_FLAG_1_EQ_SHARING_MULTI_VPORT BIT(0)
|
||||
|
||||
#define GDMA_DRV_CAP_FLAGS1 GDMA_DRV_CAP_FLAG_1_EQ_SHARING_MULTI_VPORT
|
||||
/* Advertise to the NIC firmware: the NAPI work_done variable race is fixed,
|
||||
* so the driver is able to reliably support features like busy_poll.
|
||||
*/
|
||||
#define GDMA_DRV_CAP_FLAG_1_NAPI_WKDONE_FIX BIT(2)
|
||||
|
||||
#define GDMA_DRV_CAP_FLAGS1 \
|
||||
(GDMA_DRV_CAP_FLAG_1_EQ_SHARING_MULTI_VPORT | \
|
||||
GDMA_DRV_CAP_FLAG_1_NAPI_WKDONE_FIX)
|
||||
|
||||
#define GDMA_DRV_CAP_FLAGS2 0
|
||||
|
||||
|
||||
@@ -1071,10 +1071,11 @@ static void mana_poll_rx_cq(struct mana_cq *cq)
}
}

static void mana_cq_handler(void *context, struct gdma_queue *gdma_queue)
static int mana_cq_handler(void *context, struct gdma_queue *gdma_queue)
{
struct mana_cq *cq = context;
u8 arm_bit;
int w;

WARN_ON_ONCE(cq->gdma_cq != gdma_queue);

@@ -1083,26 +1084,31 @@ static void mana_cq_handler(void *context, struct gdma_queue *gdma_queue)
else
mana_poll_tx_cq(cq);

if (cq->work_done < cq->budget &&
napi_complete_done(&cq->napi, cq->work_done)) {
w = cq->work_done;

if (w < cq->budget &&
napi_complete_done(&cq->napi, w)) {
arm_bit = SET_ARM_BIT;
} else {
arm_bit = 0;
}

mana_gd_ring_cq(gdma_queue, arm_bit);

return w;
}

static int mana_poll(struct napi_struct *napi, int budget)
{
struct mana_cq *cq = container_of(napi, struct mana_cq, napi);
int w;

cq->work_done = 0;
cq->budget = budget;

mana_cq_handler(cq, cq->gdma_cq);
w = mana_cq_handler(cq, cq->gdma_cq);

return min(cq->work_done, budget);
return min(w, budget);
}

static void mana_schedule_napi(void *context, struct gdma_queue *gdma_queue)

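The point of the mana change above is the NAPI contract: once napi_complete_done() succeeds, the interrupt can fire and the handler may run again and overwrite a per-CQ work_done field, so both the re-arm decision and the poll return value must come from a snapshot taken first. A minimal, driver-agnostic sketch of that pattern; my_poll, my_process_cq and my_arm_irq are hypothetical names, not MANA APIs:

static int my_poll(struct napi_struct *napi, int budget)
{
	/* Snapshot into a local before completing NAPI; after
	 * napi_complete_done() the IRQ can fire and a concurrent
	 * handler may update any shared per-queue counter. */
	int work_done = my_process_cq(napi, budget);

	if (work_done < budget && napi_complete_done(napi, work_done))
		my_arm_irq(napi);	/* re-arm only after completion */

	return min(work_done, budget);
}
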
@@ -108,10 +108,10 @@ static struct stmmac_axi *stmmac_axi_setup(struct platform_device *pdev)

axi->axi_lpi_en = of_property_read_bool(np, "snps,lpi_en");
axi->axi_xit_frm = of_property_read_bool(np, "snps,xit_frm");
axi->axi_kbbe = of_property_read_bool(np, "snps,axi_kbbe");
axi->axi_fb = of_property_read_bool(np, "snps,axi_fb");
axi->axi_mb = of_property_read_bool(np, "snps,axi_mb");
axi->axi_rb = of_property_read_bool(np, "snps,axi_rb");
axi->axi_kbbe = of_property_read_bool(np, "snps,kbbe");
axi->axi_fb = of_property_read_bool(np, "snps,fb");
axi->axi_mb = of_property_read_bool(np, "snps,mb");
axi->axi_rb = of_property_read_bool(np, "snps,rb");

if (of_property_read_u32(np, "snps,wr_osr_lmt", &axi->axi_wr_osr_lmt))
axi->axi_wr_osr_lmt = 1;

@@ -927,7 +927,7 @@ static int ca8210_spi_transfer(

dev_dbg(&spi->dev, "%s called\n", __func__);

cas_ctl = kmalloc(sizeof(*cas_ctl), GFP_ATOMIC);
cas_ctl = kzalloc(sizeof(*cas_ctl), GFP_ATOMIC);
if (!cas_ctl)
return -ENOMEM;

@@ -970,7 +970,7 @@ static int cc2520_hw_init(struct cc2520_private *priv)

if (timeout-- <= 0) {
dev_err(&priv->spi->dev, "oscillator start failed!\n");
return ret;
return -ETIMEDOUT;
}
udelay(1);
} while (!(status & CC2520_STATUS_XOSC32M_STABLE));

@@ -3675,6 +3675,7 @@ static const struct nla_policy macsec_rtnl_policy[IFLA_MACSEC_MAX + 1] = {
[IFLA_MACSEC_SCB] = { .type = NLA_U8 },
[IFLA_MACSEC_REPLAY_PROTECT] = { .type = NLA_U8 },
[IFLA_MACSEC_VALIDATION] = { .type = NLA_U8 },
[IFLA_MACSEC_OFFLOAD] = { .type = NLA_U8 },
};

static void macsec_free_netdev(struct net_device *dev)

@@ -77,6 +77,7 @@ int fwnode_mdiobus_phy_device_register(struct mii_bus *mdio,
*/
rc = phy_device_register(phy);
if (rc) {
device_set_node(&phy->mdio.dev, NULL);
fwnode_handle_put(child);
return rc;
}
@@ -110,8 +111,8 @@ int fwnode_mdiobus_register_phy(struct mii_bus *bus,
else
phy = phy_device_create(bus, addr, phy_id, 0, NULL);
if (IS_ERR(phy)) {
unregister_mii_timestamper(mii_ts);
return PTR_ERR(phy);
rc = PTR_ERR(phy);
goto clean_mii_ts;
}

if (is_acpi_node(child)) {
@@ -125,17 +126,14 @@ int fwnode_mdiobus_register_phy(struct mii_bus *bus,
/* All data is now stored in the phy struct, so register it */
rc = phy_device_register(phy);
if (rc) {
phy_device_free(phy);
fwnode_handle_put(phy->mdio.dev.fwnode);
return rc;
phy->mdio.dev.fwnode = NULL;
fwnode_handle_put(child);
goto clean_phy;
}
} else if (is_of_node(child)) {
rc = fwnode_mdiobus_phy_device_register(bus, phy, child, addr);
if (rc) {
unregister_mii_timestamper(mii_ts);
phy_device_free(phy);
return rc;
}
if (rc)
goto clean_phy;
}

/* phy->mii_ts may already be defined by the PHY driver. A
@@ -145,5 +143,12 @@ int fwnode_mdiobus_register_phy(struct mii_bus *bus,
if (mii_ts)
phy->mii_ts = mii_ts;
return 0;

clean_phy:
phy_device_free(phy);
clean_mii_ts:
unregister_mii_timestamper(mii_ts);

return rc;
}
EXPORT_SYMBOL(fwnode_mdiobus_register_phy);

@@ -68,8 +68,9 @@ static int of_mdiobus_register_device(struct mii_bus *mdio,
/* All data is now stored in the mdiodev struct; register it. */
rc = mdio_device_register(mdiodev);
if (rc) {
device_set_node(&mdiodev->dev, NULL);
fwnode_handle_put(fwnode);
mdio_device_free(mdiodev);
of_node_put(child);
return rc;
}

@@ -21,6 +21,7 @@
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/unistd.h>
#include <linux/property.h>

void mdio_device_free(struct mdio_device *mdiodev)
{
@@ -30,6 +31,7 @@ EXPORT_SYMBOL(mdio_device_free);

static void mdio_device_release(struct device *dev)
{
fwnode_handle_put(dev->fwnode);
kfree(to_mdio_device(dev));
}

@@ -96,6 +96,7 @@ static int gpy_config_init(struct phy_device *phydev)

static int gpy_probe(struct phy_device *phydev)
{
int fw_version;
int ret;

if (!phydev->is_c45) {
@@ -105,12 +106,12 @@ static int gpy_probe(struct phy_device *phydev)
}

/* Show GPY PHY FW version in dmesg */
ret = phy_read(phydev, PHY_FWV);
if (ret < 0)
return ret;
fw_version = phy_read(phydev, PHY_FWV);
if (fw_version < 0)
return fw_version;

phydev_info(phydev, "Firmware Version: 0x%04X (%s)\n", ret,
(ret & PHY_FWV_REL_MASK) ? "release" : "test");
phydev_info(phydev, "Firmware Version: 0x%04X (%s)\n", fw_version,
(fw_version & PHY_FWV_REL_MASK) ? "release" : "test");

return 0;
}

@@ -446,12 +446,12 @@ plip_bh_timeout_error(struct net_device *dev, struct net_local *nl,
}
rcv->state = PLIP_PK_DONE;
if (rcv->skb) {
kfree_skb(rcv->skb);
dev_kfree_skb_irq(rcv->skb);
rcv->skb = NULL;
}
snd->state = PLIP_PK_DONE;
if (snd->skb) {
dev_kfree_skb(snd->skb);
dev_consume_skb_irq(snd->skb);
snd->skb = NULL;
}
spin_unlock_irq(&nl->lock);

@@ -902,6 +902,7 @@ static int tbnet_open(struct net_device *dev)
tbnet_start_poll, net);
if (!ring) {
netdev_err(dev, "failed to allocate Rx ring\n");
tb_xdomain_release_out_hopid(xd, hopid);
tb_ring_free(net->tx_ring.ring);
net->tx_ring.ring = NULL;
return -ENOMEM;

@@ -1413,6 +1413,7 @@ static const struct usb_device_id products[] = {
{QMI_FIXED_INTF(0x0489, 0xe0b4, 0)}, /* Foxconn T77W968 LTE */
{QMI_FIXED_INTF(0x0489, 0xe0b5, 0)}, /* Foxconn T77W968 LTE with eSIM support*/
{QMI_FIXED_INTF(0x2692, 0x9025, 4)}, /* Cellient MPL200 (rebranded Qualcomm 05c6:9025) */
{QMI_QUIRK_SET_DTR(0x1546, 0x1342, 4)}, /* u-blox LARA-L6 */

/* 4. Gobi 1000 devices */
{QMI_GOBI1K_DEVICE(0x05c6, 0x9212)}, /* Acer Gobi Modem Device */

@@ -75,8 +75,14 @@ vmxnet3_enable_all_intrs(struct vmxnet3_adapter *adapter)

for (i = 0; i < adapter->intr.num_intrs; i++)
vmxnet3_enable_intr(adapter, i);
if (!VMXNET3_VERSION_GE_6(adapter) ||
!adapter->queuesExtEnabled) {
adapter->shared->devRead.intrConf.intrCtrl &=
cpu_to_le32(~VMXNET3_IC_DISABLE_ALL);
} else {
adapter->shared->devReadExt.intrConfExt.intrCtrl &=
cpu_to_le32(~VMXNET3_IC_DISABLE_ALL);
}
}

@@ -85,8 +91,14 @@ vmxnet3_disable_all_intrs(struct vmxnet3_adapter *adapter)
{
int i;

if (!VMXNET3_VERSION_GE_6(adapter) ||
!adapter->queuesExtEnabled) {
adapter->shared->devRead.intrConf.intrCtrl |=
cpu_to_le32(VMXNET3_IC_DISABLE_ALL);
} else {
adapter->shared->devReadExt.intrConfExt.intrCtrl |=
cpu_to_le32(VMXNET3_IC_DISABLE_ALL);
}
for (i = 0; i < adapter->intr.num_intrs; i++)
vmxnet3_disable_intr(adapter, i);
}
@@ -1350,6 +1362,7 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
};
u32 num_pkts = 0;
bool skip_page_frags = false;
bool encap_lro = false;
struct Vmxnet3_RxCompDesc *rcd;
struct vmxnet3_rx_ctx *ctx = &rq->rx_ctx;
u16 segCnt = 0, mss = 0;
@@ -1508,13 +1521,18 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
if (VMXNET3_VERSION_GE_2(adapter) &&
rcd->type == VMXNET3_CDTYPE_RXCOMP_LRO) {
struct Vmxnet3_RxCompDescExt *rcdlro;
union Vmxnet3_GenericDesc *gdesc;

rcdlro = (struct Vmxnet3_RxCompDescExt *)rcd;
gdesc = (union Vmxnet3_GenericDesc *)rcd;

segCnt = rcdlro->segCnt;
WARN_ON_ONCE(segCnt == 0);
mss = rcdlro->mss;
if (unlikely(segCnt <= 1))
segCnt = 0;
encap_lro = (le32_to_cpu(gdesc->dword[0]) &
(1UL << VMXNET3_RCD_HDR_INNER_SHIFT));
} else {
segCnt = 0;
}
@@ -1582,7 +1600,7 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
vmxnet3_rx_csum(adapter, skb,
(union Vmxnet3_GenericDesc *)rcd);
skb->protocol = eth_type_trans(skb, adapter->netdev);
if (!rcd->tcp ||
if ((!rcd->tcp && !encap_lro) ||
!(adapter->netdev->features & NETIF_F_LRO))
goto not_lro;

@@ -1591,7 +1609,7 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
SKB_GSO_TCPV4 : SKB_GSO_TCPV6;
skb_shinfo(skb)->gso_size = mss;
skb_shinfo(skb)->gso_segs = segCnt;
} else if (segCnt != 0 || skb->len > mtu) {
} else if ((segCnt != 0 || skb->len > mtu) && !encap_lro) {
u32 hlen;

hlen = vmxnet3_get_hdr_len(adapter, skb,
@@ -1620,6 +1638,7 @@ not_lro:
napi_gro_receive(&rq->napi, skb);

ctx->skb = NULL;
encap_lro = false;
num_pkts++;
}

@@ -48,7 +48,6 @@
#include <linux/debugfs.h>

typedef unsigned int pending_ring_idx_t;
#define INVALID_PENDING_RING_IDX (~0U)

struct pending_tx_info {
struct xen_netif_tx_request req; /* tx request */
@@ -82,8 +81,6 @@ struct xenvif_rx_meta {
/* Discriminate from any valid pending_idx value. */
#define INVALID_PENDING_IDX 0xFFFF

#define MAX_BUFFER_OFFSET XEN_PAGE_SIZE

#define MAX_PENDING_REQS XEN_NETIF_TX_RING_SIZE

/* The maximum number of frags is derived from the size of a grant (same
@@ -367,11 +364,6 @@ void xenvif_free(struct xenvif *vif);
int xenvif_xenbus_init(void);
void xenvif_xenbus_fini(void);

int xenvif_schedulable(struct xenvif *vif);

int xenvif_queue_stopped(struct xenvif_queue *queue);
void xenvif_wake_queue(struct xenvif_queue *queue);

/* (Un)Map communication rings. */
void xenvif_unmap_frontend_data_rings(struct xenvif_queue *queue);
int xenvif_map_frontend_data_rings(struct xenvif_queue *queue,
@@ -394,8 +386,7 @@ int xenvif_dealloc_kthread(void *data);
irqreturn_t xenvif_ctrl_irq_fn(int irq, void *data);

bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread);
void xenvif_rx_action(struct xenvif_queue *queue);
void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb);
bool xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb);

void xenvif_carrier_on(struct xenvif *vif);

@@ -403,9 +394,6 @@ void xenvif_carrier_on(struct xenvif *vif);
void xenvif_zerocopy_callback(struct sk_buff *skb, struct ubuf_info *ubuf,
bool zerocopy_success);

/* Unmap a pending page and release it back to the guest */
void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx);

static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
{
return MAX_PENDING_REQS -

@@ -70,7 +70,7 @@ void xenvif_skb_zerocopy_complete(struct xenvif_queue *queue)
wake_up(&queue->dealloc_wq);
}

int xenvif_schedulable(struct xenvif *vif)
static int xenvif_schedulable(struct xenvif *vif)
{
return netif_running(vif->dev) &&
test_bit(VIF_STATUS_CONNECTED, &vif->status) &&
@@ -178,20 +178,6 @@ irqreturn_t xenvif_interrupt(int irq, void *dev_id)
return IRQ_HANDLED;
}

int xenvif_queue_stopped(struct xenvif_queue *queue)
{
struct net_device *dev = queue->vif->dev;
unsigned int id = queue->id;
return netif_tx_queue_stopped(netdev_get_tx_queue(dev, id));
}

void xenvif_wake_queue(struct xenvif_queue *queue)
{
struct net_device *dev = queue->vif->dev;
unsigned int id = queue->id;
netif_tx_wake_queue(netdev_get_tx_queue(dev, id));
}

static u16 xenvif_select_queue(struct net_device *dev, struct sk_buff *skb,
struct net_device *sb_dev)
{
@@ -269,14 +255,16 @@ xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
if (vif->hash.alg == XEN_NETIF_CTRL_HASH_ALGORITHM_NONE)
skb_clear_hash(skb);

xenvif_rx_queue_tail(queue, skb);
if (!xenvif_rx_queue_tail(queue, skb))
goto drop;

xenvif_kick_thread(queue);

return NETDEV_TX_OK;

drop:
vif->dev->stats.tx_dropped++;
dev_kfree_skb(skb);
dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}

@@ -112,6 +112,8 @@ static void make_tx_response(struct xenvif_queue *queue,
s8 st);
static void push_tx_responses(struct xenvif_queue *queue);

static void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx);

static inline int tx_work_todo(struct xenvif_queue *queue);

static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
@@ -330,10 +332,13 @@ static int xenvif_count_requests(struct xenvif_queue *queue,

struct xenvif_tx_cb {
u16 pending_idx;
u16 copy_pending_idx[XEN_NETBK_LEGACY_SLOTS_MAX + 1];
u8 copy_count;
};

#define XENVIF_TX_CB(skb) ((struct xenvif_tx_cb *)(skb)->cb)
#define copy_pending_idx(skb, i) (XENVIF_TX_CB(skb)->copy_pending_idx[i])
#define copy_count(skb) (XENVIF_TX_CB(skb)->copy_count)

static inline void xenvif_tx_create_map_op(struct xenvif_queue *queue,
u16 pending_idx,
@@ -368,31 +373,93 @@ static inline struct sk_buff *xenvif_alloc_skb(unsigned int size)
return skb;
}

static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif_queue *queue,
static void xenvif_get_requests(struct xenvif_queue *queue,
struct sk_buff *skb,
struct xen_netif_tx_request *txp,
struct gnttab_map_grant_ref *gop,
struct xen_netif_tx_request *first,
struct xen_netif_tx_request *txfrags,
unsigned *copy_ops,
unsigned *map_ops,
unsigned int frag_overflow,
struct sk_buff *nskb)
struct sk_buff *nskb,
unsigned int extra_count,
unsigned int data_len)
{
struct skb_shared_info *shinfo = skb_shinfo(skb);
skb_frag_t *frags = shinfo->frags;
u16 pending_idx = XENVIF_TX_CB(skb)->pending_idx;
int start;
u16 pending_idx;
pending_ring_idx_t index;
unsigned int nr_slots;
struct gnttab_copy *cop = queue->tx_copy_ops + *copy_ops;
struct gnttab_map_grant_ref *gop = queue->tx_map_ops + *map_ops;
struct xen_netif_tx_request *txp = first;

nr_slots = shinfo->nr_frags;
nr_slots = shinfo->nr_frags + 1;

/* Skip first skb fragment if it is on same page as header fragment. */
start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
copy_count(skb) = 0;

for (shinfo->nr_frags = start; shinfo->nr_frags < nr_slots;
shinfo->nr_frags++, txp++, gop++) {
/* Create copy ops for exactly data_len bytes into the skb head. */
__skb_put(skb, data_len);
while (data_len > 0) {
int amount = data_len > txp->size ? txp->size : data_len;

cop->source.u.ref = txp->gref;
cop->source.domid = queue->vif->domid;
cop->source.offset = txp->offset;

cop->dest.domid = DOMID_SELF;
cop->dest.offset = (offset_in_page(skb->data +
skb_headlen(skb) -
data_len)) & ~XEN_PAGE_MASK;
cop->dest.u.gmfn = virt_to_gfn(skb->data + skb_headlen(skb)
- data_len);

cop->len = amount;
cop->flags = GNTCOPY_source_gref;

index = pending_index(queue->pending_cons);
pending_idx = queue->pending_ring[index];
callback_param(queue, pending_idx).ctx = NULL;
copy_pending_idx(skb, copy_count(skb)) = pending_idx;
copy_count(skb)++;

cop++;
data_len -= amount;

if (amount == txp->size) {
/* The copy op covered the full tx_request */

memcpy(&queue->pending_tx_info[pending_idx].req,
txp, sizeof(*txp));
queue->pending_tx_info[pending_idx].extra_count =
(txp == first) ? extra_count : 0;

if (txp == first)
txp = txfrags;
else
txp++;
queue->pending_cons++;
nr_slots--;
} else {
/* The copy op partially covered the tx_request.
* The remainder will be mapped.
*/
txp->offset += amount;
txp->size -= amount;
}
}

for (shinfo->nr_frags = 0; shinfo->nr_frags < nr_slots;
shinfo->nr_frags++, gop++) {
index = pending_index(queue->pending_cons++);
pending_idx = queue->pending_ring[index];
xenvif_tx_create_map_op(queue, pending_idx, txp, 0, gop);
xenvif_tx_create_map_op(queue, pending_idx, txp,
txp == first ? extra_count : 0, gop);
frag_set_pending_idx(&frags[shinfo->nr_frags], pending_idx);

if (txp == first)
txp = txfrags;
else
txp++;
}

if (frag_overflow) {
@@ -413,7 +480,8 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif_queue *que
skb_shinfo(skb)->frag_list = nskb;
}

return gop;
(*copy_ops) = cop - queue->tx_copy_ops;
(*map_ops) = gop - queue->tx_map_ops;
}

static inline void xenvif_grant_handle_set(struct xenvif_queue *queue,
@@ -449,7 +517,7 @@ static int xenvif_tx_check_gop(struct xenvif_queue *queue,
struct gnttab_copy **gopp_copy)
{
struct gnttab_map_grant_ref *gop_map = *gopp_map;
u16 pending_idx = XENVIF_TX_CB(skb)->pending_idx;
u16 pending_idx;
/* This always points to the shinfo of the skb being checked, which
* could be either the first or the one on the frag_list
*/
@@ -460,12 +528,24 @@ static int xenvif_tx_check_gop(struct xenvif_queue *queue,
struct skb_shared_info *first_shinfo = NULL;
int nr_frags = shinfo->nr_frags;
const bool sharedslot = nr_frags &&
frag_get_pending_idx(&shinfo->frags[0]) == pending_idx;
int i, err;
frag_get_pending_idx(&shinfo->frags[0]) ==
copy_pending_idx(skb, copy_count(skb) - 1);
int i, err = 0;

for (i = 0; i < copy_count(skb); i++) {
int newerr;

/* Check status of header. */
err = (*gopp_copy)->status;
if (unlikely(err)) {
pending_idx = copy_pending_idx(skb, i);

newerr = (*gopp_copy)->status;
if (likely(!newerr)) {
/* The first frag might still have this slot mapped */
if (i < copy_count(skb) - 1 || !sharedslot)
xenvif_idx_release(queue, pending_idx,
XEN_NETIF_RSP_OKAY);
} else {
err = newerr;
if (net_ratelimit())
netdev_dbg(queue->vif->dev,
"Grant copy of header failed! status: %d pending_idx: %u ref: %u\n",
@@ -473,11 +553,12 @@ static int xenvif_tx_check_gop(struct xenvif_queue *queue,
pending_idx,
(*gopp_copy)->source.u.ref);
/* The first frag might still have this slot mapped */
if (!sharedslot)
if (i < copy_count(skb) - 1 || !sharedslot)
xenvif_idx_release(queue, pending_idx,
XEN_NETIF_RSP_ERROR);
}
(*gopp_copy)++;
}

check_frags:
for (i = 0; i < nr_frags; i++, gop_map++) {
@@ -524,14 +605,6 @@ check_frags:
if (err)
continue;

/* First error: if the header hasn't shared a slot with the
* first frag, release it as well.
*/
if (!sharedslot)
xenvif_idx_release(queue,
XENVIF_TX_CB(skb)->pending_idx,
XEN_NETIF_RSP_OKAY);

/* Invalidate preceding fragments of this skb. */
for (j = 0; j < i; j++) {
pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
@@ -801,7 +874,6 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
unsigned *copy_ops,
unsigned *map_ops)
{
struct gnttab_map_grant_ref *gop = queue->tx_map_ops;
struct sk_buff *skb, *nskb;
int ret;
unsigned int frag_overflow;
@@ -883,8 +955,12 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
continue;
}

data_len = (txreq.size > XEN_NETBACK_TX_COPY_LEN) ?
XEN_NETBACK_TX_COPY_LEN : txreq.size;

ret = xenvif_count_requests(queue, &txreq, extra_count,
txfrags, work_to_do);

if (unlikely(ret < 0))
break;

@@ -910,9 +986,8 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
index = pending_index(queue->pending_cons);
pending_idx = queue->pending_ring[index];

data_len = (txreq.size > XEN_NETBACK_TX_COPY_LEN &&
ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
XEN_NETBACK_TX_COPY_LEN : txreq.size;
if (ret >= XEN_NETBK_LEGACY_SLOTS_MAX - 1 && data_len < txreq.size)
data_len = txreq.size;

skb = xenvif_alloc_skb(data_len);
if (unlikely(skb == NULL)) {
@@ -923,8 +998,6 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
}

skb_shinfo(skb)->nr_frags = ret;
if (data_len < txreq.size)
skb_shinfo(skb)->nr_frags++;
/* At this point shinfo->nr_frags is in fact the number of
* slots, which can be as large as XEN_NETBK_LEGACY_SLOTS_MAX.
*/
@@ -986,54 +1059,19 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
type);
}

XENVIF_TX_CB(skb)->pending_idx = pending_idx;

__skb_put(skb, data_len);
queue->tx_copy_ops[*copy_ops].source.u.ref = txreq.gref;
queue->tx_copy_ops[*copy_ops].source.domid = queue->vif->domid;
queue->tx_copy_ops[*copy_ops].source.offset = txreq.offset;

queue->tx_copy_ops[*copy_ops].dest.u.gmfn =
virt_to_gfn(skb->data);
queue->tx_copy_ops[*copy_ops].dest.domid = DOMID_SELF;
queue->tx_copy_ops[*copy_ops].dest.offset =
offset_in_page(skb->data) & ~XEN_PAGE_MASK;

queue->tx_copy_ops[*copy_ops].len = data_len;
queue->tx_copy_ops[*copy_ops].flags = GNTCOPY_source_gref;

(*copy_ops)++;

if (data_len < txreq.size) {
frag_set_pending_idx(&skb_shinfo(skb)->frags[0],
pending_idx);
xenvif_tx_create_map_op(queue, pending_idx, &txreq,
extra_count, gop);
gop++;
} else {
frag_set_pending_idx(&skb_shinfo(skb)->frags[0],
INVALID_PENDING_IDX);
memcpy(&queue->pending_tx_info[pending_idx].req,
&txreq, sizeof(txreq));
queue->pending_tx_info[pending_idx].extra_count =
extra_count;
}

queue->pending_cons++;

gop = xenvif_get_requests(queue, skb, txfrags, gop,
frag_overflow, nskb);
xenvif_get_requests(queue, skb, &txreq, txfrags, copy_ops,
map_ops, frag_overflow, nskb, extra_count,
data_len);

__skb_queue_tail(&queue->tx_queue, skb);

queue->tx.req_cons = idx;

if (((gop-queue->tx_map_ops) >= ARRAY_SIZE(queue->tx_map_ops)) ||
if ((*map_ops >= ARRAY_SIZE(queue->tx_map_ops)) ||
(*copy_ops >= ARRAY_SIZE(queue->tx_copy_ops)))
break;
}

(*map_ops) = gop - queue->tx_map_ops;
return;
}

@@ -1112,9 +1150,8 @@ static int xenvif_tx_submit(struct xenvif_queue *queue)
while ((skb = __skb_dequeue(&queue->tx_queue)) != NULL) {
struct xen_netif_tx_request *txp;
u16 pending_idx;
unsigned data_len;

pending_idx = XENVIF_TX_CB(skb)->pending_idx;
pending_idx = copy_pending_idx(skb, 0);
txp = &queue->pending_tx_info[pending_idx].req;

/* Check the remap error code. */
@@ -1133,18 +1170,6 @@ static int xenvif_tx_submit(struct xenvif_queue *queue)
continue;
}

data_len = skb->len;
callback_param(queue, pending_idx).ctx = NULL;
if (data_len < txp->size) {
/* Append the packet payload as a fragment. */
txp->offset += data_len;
txp->size -= data_len;
} else {
/* Schedule a response immediately. */
xenvif_idx_release(queue, pending_idx,
XEN_NETIF_RSP_OKAY);
}

if (txp->flags & XEN_NETTXF_csum_blank)
skb->ip_summed = CHECKSUM_PARTIAL;
else if (txp->flags & XEN_NETTXF_data_validated)
@@ -1331,7 +1356,7 @@ static inline void xenvif_tx_dealloc_action(struct xenvif_queue *queue)
/* Called after netfront has transmitted */
int xenvif_tx_action(struct xenvif_queue *queue, int budget)
{
unsigned nr_mops, nr_cops = 0;
unsigned nr_mops = 0, nr_cops = 0;
int work_done, ret;

if (unlikely(!tx_work_todo(queue)))
@@ -1418,7 +1443,7 @@ static void push_tx_responses(struct xenvif_queue *queue)
notify_remote_via_irq(queue->tx_irq);
}

void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx)
static void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx)
{
int ret;
struct gnttab_unmap_grant_ref tx_unmap_op;

@@ -82,9 +82,10 @@ static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
return false;
}

void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb)
bool xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb)
{
unsigned long flags;
bool ret = true;

spin_lock_irqsave(&queue->rx_queue.lock, flags);

@@ -92,8 +93,7 @@ void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb)
struct net_device *dev = queue->vif->dev;

netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
kfree_skb(skb);
queue->vif->dev->stats.rx_dropped++;
ret = false;
} else {
if (skb_queue_empty(&queue->rx_queue))
xenvif_update_needed_slots(queue, skb);
@@ -104,6 +104,8 @@ void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb)
}

spin_unlock_irqrestore(&queue->rx_queue.lock, flags);

return ret;
}

static struct sk_buff *xenvif_rx_dequeue(struct xenvif_queue *queue)
@@ -486,7 +488,7 @@ static void xenvif_rx_skb(struct xenvif_queue *queue)

#define RX_BATCH_SIZE 64

void xenvif_rx_action(struct xenvif_queue *queue)
static void xenvif_rx_action(struct xenvif_queue *queue)
{
struct sk_buff_head completed_skbs;
unsigned int work_done = 0;

@@ -1866,6 +1866,12 @@ static int netfront_resume(struct xenbus_device *dev)
netif_tx_unlock_bh(info->netdev);

xennet_disconnect_backend(info);

rtnl_lock();
if (info->queues)
xennet_destroy_queues(info);
rtnl_unlock();

return 0;
}

@@ -2921,10 +2921,6 @@ static int nvme_init_identify(struct nvme_ctrl *ctrl)
if (!ctrl->identified) {
unsigned int i;

ret = nvme_init_subsystem(ctrl, id);
if (ret)
goto out_free;

/*
* Check for quirks. Quirk can depend on firmware version,
* so, in principle, the set of quirks present can change
@@ -2937,6 +2933,10 @@ static int nvme_init_identify(struct nvme_ctrl *ctrl)
if (quirk_matches(id, &core_quirks[i]))
ctrl->quirks |= core_quirks[i].quirks;
}

ret = nvme_init_subsystem(ctrl, id);
if (ret)
goto out_free;
}
memcpy(ctrl->subsys->firmware_rev, id->fr,
sizeof(ctrl->subsys->firmware_rev));

@@ -457,6 +457,8 @@ static int slg51000_i2c_probe(struct i2c_client *client)
chip->cs_gpiod = cs_gpiod;
}

usleep_range(10000, 11000);

i2c_set_clientdata(client, chip);
chip->chip_irq = client->irq;
chip->dev = dev;

@@ -67,6 +67,7 @@ struct twlreg_info {
#define TWL6030_CFG_STATE_SLEEP 0x03
#define TWL6030_CFG_STATE_GRP_SHIFT 5
#define TWL6030_CFG_STATE_APP_SHIFT 2
#define TWL6030_CFG_STATE_MASK 0x03
#define TWL6030_CFG_STATE_APP_MASK (0x03 << TWL6030_CFG_STATE_APP_SHIFT)
#define TWL6030_CFG_STATE_APP(v) (((v) & TWL6030_CFG_STATE_APP_MASK) >>\
TWL6030_CFG_STATE_APP_SHIFT)
@@ -128,12 +129,13 @@ static int twl6030reg_is_enabled(struct regulator_dev *rdev)
if (grp < 0)
return grp;
grp &= P1_GRP_6030;
} else {
grp = 1;
}

val = twlreg_read(info, TWL_MODULE_PM_RECEIVER, VREG_STATE);
val = TWL6030_CFG_STATE_APP(val);
} else {
val = twlreg_read(info, TWL_MODULE_PM_RECEIVER, VREG_STATE);
val &= TWL6030_CFG_STATE_MASK;
grp = 1;
}

return grp && (val == TWL6030_CFG_STATE_ON);
}
@@ -187,7 +189,12 @@ static int twl6030reg_get_status(struct regulator_dev *rdev)

val = twlreg_read(info, TWL_MODULE_PM_RECEIVER, VREG_STATE);

switch (TWL6030_CFG_STATE_APP(val)) {
if (info->features & TWL6032_SUBCLASS)
val &= TWL6030_CFG_STATE_MASK;
else
val = TWL6030_CFG_STATE_APP(val);

switch (val) {
case TWL6030_CFG_STATE_ON:
return REGULATOR_STATUS_NORMAL;

@@ -249,10 +249,46 @@ static int cmos_set_time(struct device *dev, struct rtc_time *t)
return mc146818_set_time(t);
}

struct cmos_read_alarm_callback_param {
struct cmos_rtc *cmos;
struct rtc_time *time;
unsigned char rtc_control;
};

static void cmos_read_alarm_callback(unsigned char __always_unused seconds,
void *param_in)
{
struct cmos_read_alarm_callback_param *p =
(struct cmos_read_alarm_callback_param *)param_in;
struct rtc_time *time = p->time;

time->tm_sec = CMOS_READ(RTC_SECONDS_ALARM);
time->tm_min = CMOS_READ(RTC_MINUTES_ALARM);
time->tm_hour = CMOS_READ(RTC_HOURS_ALARM);

if (p->cmos->day_alrm) {
/* ignore upper bits on readback per ACPI spec */
time->tm_mday = CMOS_READ(p->cmos->day_alrm) & 0x3f;
if (!time->tm_mday)
time->tm_mday = -1;

if (p->cmos->mon_alrm) {
time->tm_mon = CMOS_READ(p->cmos->mon_alrm);
if (!time->tm_mon)
time->tm_mon = -1;
}
}

p->rtc_control = CMOS_READ(RTC_CONTROL);
}

static int cmos_read_alarm(struct device *dev, struct rtc_wkalrm *t)
{
struct cmos_rtc *cmos = dev_get_drvdata(dev);
unsigned char rtc_control;
struct cmos_read_alarm_callback_param p = {
.cmos = cmos,
.time = &t->time,
};

/* This is not only an rtc_op, but also called directly */
if (!is_valid_irq(cmos->irq))
@@ -263,28 +299,18 @@ static int cmos_read_alarm(struct device *dev, struct rtc_wkalrm *t)
* the future.
*/

spin_lock_irq(&rtc_lock);
t->time.tm_sec = CMOS_READ(RTC_SECONDS_ALARM);
t->time.tm_min = CMOS_READ(RTC_MINUTES_ALARM);
t->time.tm_hour = CMOS_READ(RTC_HOURS_ALARM);
/* Some Intel chipsets disconnect the alarm registers when the clock
* update is in progress - during this time reads return bogus values
* and writes may fail silently. See for example "7th Generation Intel®
* Processor Family I/O for U/Y Platforms [...] Datasheet", section
* 27.7.1
*
* Use the mc146818_avoid_UIP() function to avoid this.
*/
if (!mc146818_avoid_UIP(cmos_read_alarm_callback, &p))
return -EIO;

if (cmos->day_alrm) {
/* ignore upper bits on readback per ACPI spec */
t->time.tm_mday = CMOS_READ(cmos->day_alrm) & 0x3f;
if (!t->time.tm_mday)
t->time.tm_mday = -1;

if (cmos->mon_alrm) {
t->time.tm_mon = CMOS_READ(cmos->mon_alrm);
if (!t->time.tm_mon)
t->time.tm_mon = -1;
}
}

rtc_control = CMOS_READ(RTC_CONTROL);
spin_unlock_irq(&rtc_lock);

if (!(rtc_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
if (!(p.rtc_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
if (((unsigned)t->time.tm_sec) < 0x60)
t->time.tm_sec = bcd2bin(t->time.tm_sec);
else
@@ -313,7 +339,7 @@ static int cmos_read_alarm(struct device *dev, struct rtc_wkalrm *t)
}
}

t->enabled = !!(rtc_control & RTC_AIE);
t->enabled = !!(p.rtc_control & RTC_AIE);
t->pending = 0;

return 0;
@@ -444,10 +470,57 @@ static int cmos_validate_alarm(struct device *dev, struct rtc_wkalrm *t)
return 0;
}

struct cmos_set_alarm_callback_param {
struct cmos_rtc *cmos;
unsigned char mon, mday, hrs, min, sec;
struct rtc_wkalrm *t;
};

/* Note: this function may be executed by mc146818_avoid_UIP() more than
* once
*/
static void cmos_set_alarm_callback(unsigned char __always_unused seconds,
void *param_in)
{
struct cmos_set_alarm_callback_param *p =
(struct cmos_set_alarm_callback_param *)param_in;

/* next rtc irq must not be from previous alarm setting */
cmos_irq_disable(p->cmos, RTC_AIE);

/* update alarm */
CMOS_WRITE(p->hrs, RTC_HOURS_ALARM);
CMOS_WRITE(p->min, RTC_MINUTES_ALARM);
CMOS_WRITE(p->sec, RTC_SECONDS_ALARM);

/* the system may support an "enhanced" alarm */
if (p->cmos->day_alrm) {
CMOS_WRITE(p->mday, p->cmos->day_alrm);
if (p->cmos->mon_alrm)
CMOS_WRITE(p->mon, p->cmos->mon_alrm);
}

if (use_hpet_alarm()) {
/*
* FIXME the HPET alarm glue currently ignores day_alrm
* and mon_alrm ...
*/
hpet_set_alarm_time(p->t->time.tm_hour, p->t->time.tm_min,
p->t->time.tm_sec);
}

if (p->t->enabled)
cmos_irq_enable(p->cmos, RTC_AIE);
}

static int cmos_set_alarm(struct device *dev, struct rtc_wkalrm *t)
{
struct cmos_rtc *cmos = dev_get_drvdata(dev);
unsigned char mon, mday, hrs, min, sec, rtc_control;
struct cmos_set_alarm_callback_param p = {
.cmos = cmos,
.t = t
};
unsigned char rtc_control;
int ret;

/* This is not only an rtc_op, but also called directly */
@@ -458,11 +531,11 @@ static int cmos_set_alarm(struct device *dev, struct rtc_wkalrm *t)
if (ret < 0)
return ret;

mon = t->time.tm_mon + 1;
mday = t->time.tm_mday;
hrs = t->time.tm_hour;
min = t->time.tm_min;
sec = t->time.tm_sec;
p.mon = t->time.tm_mon + 1;
p.mday = t->time.tm_mday;
p.hrs = t->time.tm_hour;
p.min = t->time.tm_min;
p.sec = t->time.tm_sec;

spin_lock_irq(&rtc_lock);
rtc_control = CMOS_READ(RTC_CONTROL);
@@ -470,43 +543,21 @@ static int cmos_set_alarm(struct device *dev, struct rtc_wkalrm *t)

if (!(rtc_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
/* Writing 0xff means "don't care" or "match all". */
mon = (mon <= 12) ? bin2bcd(mon) : 0xff;
mday = (mday >= 1 && mday <= 31) ? bin2bcd(mday) : 0xff;
hrs = (hrs < 24) ? bin2bcd(hrs) : 0xff;
min = (min < 60) ? bin2bcd(min) : 0xff;
sec = (sec < 60) ? bin2bcd(sec) : 0xff;
p.mon = (p.mon <= 12) ? bin2bcd(p.mon) : 0xff;
p.mday = (p.mday >= 1 && p.mday <= 31) ? bin2bcd(p.mday) : 0xff;
p.hrs = (p.hrs < 24) ? bin2bcd(p.hrs) : 0xff;
p.min = (p.min < 60) ? bin2bcd(p.min) : 0xff;
p.sec = (p.sec < 60) ? bin2bcd(p.sec) : 0xff;
}

spin_lock_irq(&rtc_lock);

/* next rtc irq must not be from previous alarm setting */
cmos_irq_disable(cmos, RTC_AIE);

/* update alarm */
CMOS_WRITE(hrs, RTC_HOURS_ALARM);
CMOS_WRITE(min, RTC_MINUTES_ALARM);
CMOS_WRITE(sec, RTC_SECONDS_ALARM);

/* the system may support an "enhanced" alarm */
if (cmos->day_alrm) {
CMOS_WRITE(mday, cmos->day_alrm);
if (cmos->mon_alrm)
CMOS_WRITE(mon, cmos->mon_alrm);
}

if (use_hpet_alarm()) {
/*
* FIXME the HPET alarm glue currently ignores day_alrm
* and mon_alrm ...
* Some Intel chipsets disconnect the alarm registers when the clock
* update is in progress - during this time writes fail silently.
*
* Use mc146818_avoid_UIP() to avoid this.
*/
hpet_set_alarm_time(t->time.tm_hour, t->time.tm_min,
t->time.tm_sec);
}

if (t->enabled)
cmos_irq_enable(cmos, RTC_AIE);

spin_unlock_irq(&rtc_lock);
if (!mc146818_avoid_UIP(cmos_set_alarm_callback, &p))
return -EIO;

cmos->alarm_expires = rtc_tm_to_time64(&t->time);

@@ -8,6 +8,76 @@
#include <linux/acpi.h>
#endif

/*
* Execute a function while the UIP (Update-in-progress) bit of the RTC is
* unset.
*
* Warning: callback may be executed more than once.
*/
bool mc146818_avoid_UIP(void (*callback)(unsigned char seconds, void *param),
void *param)
{
int i;
unsigned long flags;
unsigned char seconds;

for (i = 0; i < 10; i++) {
spin_lock_irqsave(&rtc_lock, flags);

/*
* Check whether there is an update in progress during which the
* readout is unspecified. The maximum update time is ~2ms. Poll
* every msec for completion.
*
* Store the second value before checking UIP so a long lasting
* NMI which happens to hit after the UIP check cannot make
* an update cycle invisible.
*/
seconds = CMOS_READ(RTC_SECONDS);

if (CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP) {
spin_unlock_irqrestore(&rtc_lock, flags);
mdelay(1);
continue;
}

/* Revalidate the above readout */
if (seconds != CMOS_READ(RTC_SECONDS)) {
spin_unlock_irqrestore(&rtc_lock, flags);
continue;
}

if (callback)
callback(seconds, param);

/*
* Check for the UIP bit again. If it is set now then
* the above values may contain garbage.
*/
if (CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP) {
spin_unlock_irqrestore(&rtc_lock, flags);
mdelay(1);
continue;
}

/*
* A NMI might have interrupted the above sequence so check
* whether the seconds value has changed which indicates that
* the NMI took longer than the UIP bit was set. Unlikely, but
* possible and there is also virt...
*/
if (seconds != CMOS_READ(RTC_SECONDS)) {
spin_unlock_irqrestore(&rtc_lock, flags);
continue;
}
spin_unlock_irqrestore(&rtc_lock, flags);

return true;
}
return false;
}
EXPORT_SYMBOL_GPL(mc146818_avoid_UIP);

/*
* If the UIP (Update-in-progress) bit of the RTC is set for more than
* 10ms, the RTC is apparently broken or not present.

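For orientation, here is a sketch of how a caller is expected to drive mc146818_avoid_UIP(): the callback runs under rtc_lock with UIP known to be clear, and because the helper may retry, the callback must be idempotent (register accesses only, nothing that cannot safely run twice). The names struct my_ctx and my_read_cb below are hypothetical, not kernel APIs; the real consumers are the cmos_read_alarm()/cmos_set_alarm() callbacks shown above:

struct my_ctx {
	unsigned char sec;
};

static void my_read_cb(unsigned char seconds, void *param_in)
{
	struct my_ctx *ctx = param_in;

	/* May run more than once; only reads registers and fills ctx. */
	ctx->sec = seconds;
}

/* In the caller: */
struct my_ctx ctx;

if (!mc146818_avoid_UIP(my_read_cb, &ctx))
	return -EIO;	/* UIP never cleared: RTC broken or absent */
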
@@ -661,13 +661,13 @@ static void qeth_l2_dev2br_fdb_notify(struct qeth_card *card, u8 code,
card->dev, &info.info, NULL);
QETH_CARD_TEXT(card, 4, "andelmac");
QETH_CARD_TEXT_(card, 4,
"mc%012lx", ether_addr_to_u64(ntfy_mac));
"mc%012llx", ether_addr_to_u64(ntfy_mac));
} else {
call_switchdev_notifiers(SWITCHDEV_FDB_ADD_TO_BRIDGE,
card->dev, &info.info, NULL);
QETH_CARD_TEXT(card, 4, "anaddmac");
QETH_CARD_TEXT_(card, 4,
"mc%012lx", ether_addr_to_u64(ntfy_mac));
"mc%012llx", ether_addr_to_u64(ntfy_mac));
}
}

@@ -764,9 +764,8 @@ static void qeth_l2_br2dev_worker(struct work_struct *work)
struct list_head *iter;
int err = 0;

kfree(br2dev_event_work);
QETH_CARD_TEXT_(card, 4, "b2dw%04x", event);
QETH_CARD_TEXT_(card, 4, "ma%012lx", ether_addr_to_u64(addr));
QETH_CARD_TEXT_(card, 4, "b2dw%04lx", event);
QETH_CARD_TEXT_(card, 4, "ma%012llx", ether_addr_to_u64(addr));

rcu_read_lock();
/* Verify preconditions are still valid: */
@@ -795,7 +794,7 @@ static void qeth_l2_br2dev_worker(struct work_struct *work)
if (err) {
QETH_CARD_TEXT(card, 2, "b2derris");
QETH_CARD_TEXT_(card, 2,
"err%02x%03d", event,
"err%02lx%03d", event,
lowerdev->ifindex);
}
}
@@ -813,7 +812,7 @@ static void qeth_l2_br2dev_worker(struct work_struct *work)
break;
}
if (err)
QETH_CARD_TEXT_(card, 2, "b2derr%02x", event);
QETH_CARD_TEXT_(card, 2, "b2derr%02lx", event);
}

unlock:
@@ -821,6 +820,7 @@ unlock:
dev_put(brdev);
dev_put(lsyncdev);
dev_put(dstdev);
kfree(br2dev_event_work);
}

static int qeth_l2_br2dev_queue_work(struct net_device *brdev,
@@ -878,7 +878,7 @@ static int qeth_l2_switchdev_event(struct notifier_block *unused,
while (lowerdev) {
if (qeth_l2_must_learn(lowerdev, dstdev)) {
card = lowerdev->ml_priv;
QETH_CARD_TEXT_(card, 4, "b2dqw%03x", event);
QETH_CARD_TEXT_(card, 4, "b2dqw%03lx", event);
rc = qeth_l2_br2dev_queue_work(brdev, lowerdev,
dstdev, event,
fdb_info->addr);

@@ -1285,6 +1285,7 @@ static int intel_link_probe(struct auxiliary_device *auxdev,
cdns->msg_count = 0;

bus->link_id = auxdev->id;
bus->clk_stop_timeout = 1;

sdw_cdns_probe(cdns);

@@ -912,14 +912,20 @@ static int mtk_spi_remove(struct platform_device *pdev)
{
struct spi_master *master = platform_get_drvdata(pdev);
struct mtk_spi *mdata = spi_master_get_devdata(master);
int ret;

pm_runtime_disable(&pdev->dev);
ret = pm_runtime_resume_and_get(&pdev->dev);
if (ret < 0)
return ret;

mtk_spi_reset(mdata);

if (mdata->dev_comp->no_need_unprepare)
clk_unprepare(mdata->spi_clk);

pm_runtime_put_noidle(&pdev->dev);
pm_runtime_disable(&pdev->dev);

return 0;
}

@@ -291,7 +291,8 @@ int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned int cmd,
*
* DWC_usb3 3.30a and DWC_usb31 1.90a programming guide section 3.2.2
*/
if (dwc->gadget->speed <= USB_SPEED_HIGH) {
if (dwc->gadget->speed <= USB_SPEED_HIGH ||
DWC3_DEPCMD_CMD(cmd) == DWC3_DEPCMD_ENDTRANSFER) {
reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
if (unlikely(reg & DWC3_GUSB2PHYCFG_SUSPHY)) {
saved_config |= DWC3_GUSB2PHYCFG_SUSPHY;

@@ -601,7 +601,7 @@ static void fbcon_prepare_logo(struct vc_data *vc, struct fb_info *info,
if (scr_readw(r) != vc->vc_video_erase_char)
break;
if (r != q && new_rows >= rows + logo_lines) {
save = kmalloc(array3_size(logo_lines, new_cols, 2),
save = kzalloc(array3_size(logo_lines, new_cols, 2),
GFP_KERNEL);
if (save) {
int i = cols < new_cols ? cols : new_cols;

@@ -34,8 +34,6 @@ obj-$(CONFIG_TIMERFD) += timerfd.o
obj-$(CONFIG_EVENTFD) += eventfd.o
obj-$(CONFIG_USERFAULTFD) += userfaultfd.o
obj-$(CONFIG_AIO) += aio.o
obj-$(CONFIG_IO_URING) += io_uring.o
obj-$(CONFIG_IO_WQ) += io-wq.o
obj-$(CONFIG_FS_DAX) += dax.o
obj-$(CONFIG_FS_ENCRYPTION) += crypto/
obj-$(CONFIG_FS_VERITY) += verity/

@@ -5398,6 +5398,7 @@ static int clone_range(struct send_ctx *sctx,
u64 ext_len;
u64 clone_len;
u64 clone_data_offset;
bool crossed_src_i_size = false;

if (slot >= btrfs_header_nritems(leaf)) {
ret = btrfs_next_leaf(clone_root->root, path);
@@ -5454,8 +5455,10 @@ static int clone_range(struct send_ctx *sctx,
if (key.offset >= clone_src_i_size)
break;

if (key.offset + ext_len > clone_src_i_size)
if (key.offset + ext_len > clone_src_i_size) {
ext_len = clone_src_i_size - key.offset;
crossed_src_i_size = true;
}

clone_data_offset = btrfs_file_extent_offset(leaf, ei);
if (btrfs_file_extent_disk_bytenr(leaf, ei) == disk_byte) {
@@ -5515,6 +5518,25 @@ static int clone_range(struct send_ctx *sctx,
ret = send_clone(sctx, offset, clone_len,
clone_root);
}
} else if (crossed_src_i_size && clone_len < len) {
/*
* If we are at i_size of the clone source inode and we
* can not clone from it, terminate the loop. This is
* to avoid sending two write operations, one with a
* length matching clone_len and the final one after
* this loop with a length of len - clone_len.
*
* When using encoded writes (BTRFS_SEND_FLAG_COMPRESSED
* was passed to the send ioctl), this helps avoid
* sending an encoded write for an offset that is not
* sector size aligned, in case the i_size of the source
* inode is not sector size aligned. That will make the
* receiver fall back to decompression of the data and
* writing it using regular buffered IO, therefore while
* not incorrect, it's not optimal due to decompression
* and possible re-compression at the receiver.
*/
break;
} else {
ret = send_extent_data(sctx, offset, clone_len);
}

@@ -1392,6 +1392,7 @@ cifs_put_tcp_session(struct TCP_Server_Info *server, int from_reconnect)
server->session_key.response = NULL;
server->session_key.len = 0;
kfree(server->hostname);
server->hostname = NULL;

task = xchg(&server->tsk, NULL);
if (task)

fs/file.c
@@ -1029,7 +1029,16 @@ static unsigned long __fget_light(unsigned int fd, fmode_t mask)
struct files_struct *files = current->files;
struct file *file;

if (atomic_read(&files->count) == 1) {
/*
* If another thread is concurrently calling close_fd() followed
* by put_files_struct(), we must not observe the old table
* entry combined with the new refcount - otherwise we could
* return a file that is concurrently being freed.
*
* atomic_read_acquire() pairs with atomic_dec_and_test() in
* put_files_struct().
*/
if (atomic_read_acquire(&files->count) == 1) {
file = files_lookup_fd_raw(files, fd);
if (!file || unlikely(file->f_mode & mask))
return 0;

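A compact view of the pairing the comment above relies on; close_then_put() is a hypothetical wrapper added here for illustration, not kernel code:

/* Writer side: the fd table entry is cleared before the reference is
 * dropped. atomic_dec_and_test() in put_files_struct() is fully
 * ordered, so a thread that observes count == 1 via an acquire read
 * is guaranteed to also observe the cleared table entry. */
static void close_then_put(struct files_struct *files, unsigned int fd)
{
	close_fd(fd);			/* update the fd table */
	put_files_struct(files);	/* atomic_dec_and_test() inside */
}

/* Reader side (__fget_light above): the acquire read of ->count
 * orders the subsequent table lookup after the writer's update; a
 * plain atomic_read() would allow the lookup to be satisfied by a
 * stale entry for a file already being freed. */
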
@@ -207,12 +207,16 @@ extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
#define tlb_needs_table_invalidate() (true)
#endif

void tlb_remove_table_sync_one(void);

#else

#ifdef tlb_needs_table_invalidate
#error tlb_needs_table_invalidate() requires MMU_GATHER_RCU_TABLE_FREE
#endif

static inline void tlb_remove_table_sync_one(void) { }

#endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */

@@ -71,6 +71,7 @@ struct css_task_iter {
ANDROID_KABI_RESERVE(1);
};

extern struct file_system_type cgroup_fs_type;
extern struct cgroup_root cgrp_dfl_root;
extern struct css_set init_css_set;

@@ -458,6 +458,47 @@ int __must_check devm_clk_bulk_get_all(struct device *dev,
*/
struct clk *devm_clk_get(struct device *dev, const char *id);

/**
* devm_clk_get_prepared - devm_clk_get() + clk_prepare()
* @dev: device for clock "consumer"
* @id: clock consumer ID
*
* Context: May sleep.
*
* Return: a struct clk corresponding to the clock producer, or
* valid IS_ERR() condition containing errno. The implementation
* uses @dev and @id to determine the clock consumer, and thereby
* the clock producer. (IOW, @id may be identical strings, but
* clk_get may return different clock producers depending on @dev.)
*
* The returned clk (if valid) is prepared. Drivers must however assume
* that the clock is not enabled.
*
* The clock will automatically be unprepared and freed when the device
* is unbound from the bus.
*/
struct clk *devm_clk_get_prepared(struct device *dev, const char *id);

/**
* devm_clk_get_enabled - devm_clk_get() + clk_prepare_enable()
* @dev: device for clock "consumer"
* @id: clock consumer ID
*
* Context: May sleep.
*
* Return: a struct clk corresponding to the clock producer, or
* valid IS_ERR() condition containing errno. The implementation
* uses @dev and @id to determine the clock consumer, and thereby
* the clock producer. (IOW, @id may be identical strings, but
* clk_get may return different clock producers depending on @dev.)
*
* The returned clk (if valid) is prepared and enabled.
*
* The clock will automatically be disabled, unprepared and freed
* when the device is unbound from the bus.
*/
struct clk *devm_clk_get_enabled(struct device *dev, const char *id);

/**
* devm_clk_get_optional - lookup and obtain a managed reference to an optional
* clock producer.
@@ -469,6 +510,50 @@ struct clk *devm_clk_get(struct device *dev, const char *id);
*/
struct clk *devm_clk_get_optional(struct device *dev, const char *id);

/**
* devm_clk_get_optional_prepared - devm_clk_get_optional() + clk_prepare()
* @dev: device for clock "consumer"
* @id: clock consumer ID
*
* Context: May sleep.
*
* Return: a struct clk corresponding to the clock producer, or
* valid IS_ERR() condition containing errno. The implementation
* uses @dev and @id to determine the clock consumer, and thereby
* the clock producer. If no such clk is found, it returns NULL
* which serves as a dummy clk. That's the only difference compared
* to devm_clk_get_prepared().
*
* The returned clk (if valid) is prepared. Drivers must however
* assume that the clock is not enabled.
*
* The clock will automatically be unprepared and freed when the
* device is unbound from the bus.
*/
struct clk *devm_clk_get_optional_prepared(struct device *dev, const char *id);

/**
* devm_clk_get_optional_enabled - devm_clk_get_optional() +
* clk_prepare_enable()
* @dev: device for clock "consumer"
* @id: clock consumer ID
*
* Context: May sleep.
*
* Return: a struct clk corresponding to the clock producer, or
* valid IS_ERR() condition containing errno. The implementation
* uses @dev and @id to determine the clock consumer, and thereby
* the clock producer. If no such clk is found, it returns NULL
* which serves as a dummy clk. That's the only difference compared
* to devm_clk_get_enabled().
*
* The returned clk (if valid) is prepared and enabled.
*
* The clock will automatically be disabled, unprepared and freed
* when the device is unbound from the bus.
*/
struct clk *devm_clk_get_optional_enabled(struct device *dev, const char *id);
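
Taken together, these helpers remove the usual clock error paths from a driver. A minimal sketch of a consumer, following the kernel-doc above; foo_probe and the "bus" clock name are hypothetical, for illustration only:

static int foo_probe(struct platform_device *pdev)
{
	struct clk *clk;

	/* Prepared and enabled here; disabled, unprepared and freed
	 * automatically when the device is unbound, so neither an
	 * error path nor a remove() callback needs to touch it. */
	clk = devm_clk_get_enabled(&pdev->dev, "bus");
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	return 0;
}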

/**
* devm_get_clk_from_child - lookup and obtain a managed reference to a
* clock producer from child node.
@@ -813,12 +898,36 @@ static inline struct clk *devm_clk_get(struct device *dev, const char *id)
return NULL;
}

static inline struct clk *devm_clk_get_prepared(struct device *dev,
const char *id)
{
return NULL;
}

static inline struct clk *devm_clk_get_enabled(struct device *dev,
const char *id)
{
return NULL;
}

static inline struct clk *devm_clk_get_optional(struct device *dev,
const char *id)
{
return NULL;
}

static inline struct clk *devm_clk_get_optional_prepared(struct device *dev,
const char *id)
{
return NULL;
}

static inline struct clk *devm_clk_get_optional_enabled(struct device *dev,
const char *id)
{
return NULL;
}

static inline int __must_check devm_clk_bulk_get(struct device *dev, int num_clks,
struct clk_bulk_data *clks)
{

@@ -129,4 +129,7 @@ bool mc146818_does_rtc_work(void);
int mc146818_get_time(struct rtc_time *time);
int mc146818_set_time(struct rtc_time *time);

bool mc146818_avoid_UIP(void (*callback)(unsigned char seconds, void *param),
void *param);

#endif /* _MC146818RTC_H */

io_uring/Makefile (new file)
@@ -0,0 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for io_uring

obj-$(CONFIG_IO_URING) += io_uring.o
obj-$(CONFIG_IO_WQ) += io-wq.o

@@ -85,7 +85,7 @@

#include <uapi/linux/io_uring.h>

#include "internal.h"
#include "../fs/internal.h"
#include "io-wq.h"

#define IORING_MAX_ENTRIES 32768
@@ -9467,8 +9467,10 @@ static void io_tctx_exit_cb(struct callback_head *cb)
/*
* When @in_idle, we're in cancellation and it's racy to remove the
* node. It'll be removed by the end of cancellation, just ignore it.
* tctx can be NULL if the queueing of this task_work raced with
* work cancelation off the exec path.
*/
if (!atomic_read(&tctx->in_idle))
if (tctx && !atomic_read(&tctx->in_idle))
io_uring_del_tctx_node((unsigned long)work->ctx);
complete(&work->completion);
}
@@ -168,7 +168,6 @@ struct cgroup_mgctx {
extern spinlock_t css_set_lock;
extern struct cgroup_subsys *cgroup_subsys[];
extern struct list_head cgroup_roots;
extern struct file_system_type cgroup_fs_type;

/* iterate across the hierarchies */
#define for_each_root(root) \

Some files were not shown because too many files have changed in this diff.