Merge 5.15.11 into android13-5.15
Changes in 5.15.11
	reset: tegra-bpmp: Revert Handle errors in BPMP response
	KVM: VMX: clear vmx_x86_ops.sync_pir_to_irr if APICv is disabled
	KVM: selftests: Make sure kvm_create_max_vcpus test won't hit RLIMIT_NOFILE
	KVM: downgrade two BUG_ONs to WARN_ON_ONCE
	x86/kvm: remove unused ack_notifier callbacks
	KVM: X86: Fix tlb flush for tdp in kvm_invalidate_pcid()
	mac80211: fix rate control for retransmitted frames
	mac80211: fix regression in SSN handling of addba tx
	mac80211: mark TX-during-stop for TX in in_reconfig
	mac80211: send ADDBA requests using the tid/queue of the aggregation session
	mac80211: validate extended element ID is present
	firmware: arm_scpi: Fix string overflow in SCPI genpd driver
	bpf: Fix kernel address leakage in atomic fetch
	bpf, selftests: Add test case for atomic fetch on spilled pointer
	bpf: Fix signed bounds propagation after mov32
	bpf: Make 32->64 bounds propagation slightly more robust
	bpf, selftests: Add test case trying to taint map value pointer
	bpf: Fix kernel address leakage in atomic cmpxchg's r0 aux reg
	bpf, selftests: Update test case for atomic cmpxchg on r0 with pointer
	vduse: fix memory corruption in vduse_dev_ioctl()
	vduse: check that offset is within bounds in get_config()
	virtio_ring: Fix querying of maximum DMA mapping size for virtio device
	vdpa: check that offsets are within bounds
	s390/entry: fix duplicate tracking of irq nesting level
	recordmcount.pl: look for jgnop instruction as well as bcrl on s390
	arm64: dts: ten64: remove redundant interrupt declaration for gpio-keys
	ceph: fix up non-directory creation in SGID directories
	dm btree remove: fix use after free in rebalance_children()
	audit: improve robustness of the audit queue handling
	btrfs: convert latest_bdev type to btrfs_device and rename
	btrfs: use latest_dev in btrfs_show_devname
	btrfs: update latest_dev when we create a sprout device
	btrfs: remove stale comment about the btrfs_show_devname
	scsi: ufs: core: Retry START_STOP on UNIT_ATTENTION
	drm/i915/hdmi: convert intel_hdmi_to_dev to intel_hdmi_to_i915
	drm/i915/hdmi: Turn DP++ TMDS output buffers back on in encoder->shutdown()
	pinctrl: amd: Fix wakeups when IRQ is shared with SCI
	arm64: dts: rockchip: remove mmc-hs400-enhanced-strobe from rk3399-khadas-edge
	arm64: dts: rockchip: fix rk3308-roc-cc vcc-sd supply
	arm64: dts: rockchip: fix rk3399-leez-p710 vcc3v3-lan supply
	arm64: dts: rockchip: fix audio-supply for Rock Pi 4
	arm64: dts: rockchip: fix poweroff on helios64
	dmaengine: idxd: add halt interrupt support
	dmaengine: idxd: fix calling wq quiesce inside spinlock
	mac80211: track only QoS data frames for admission control
	tee: amdtee: fix an IS_ERR() vs NULL bug
	ceph: fix duplicate increment of opened_inodes metric
	ceph: initialize pathlen variable in reconnect_caps_cb
	ARM: socfpga: dts: fix qspi node compatible
	arm64: dts: imx8mq: remove interconnect property from lcdif
	clk: Don't parent clks until the parent is fully registered
	soc: imx: Register SoC device only on i.MX boards
	iwlwifi: mvm: don't crash on invalid rate w/o STA
	virtio: always enter drivers/virtio/
	virtio/vsock: fix the transport to work with VMADDR_CID_ANY
	vdpa: Consider device id larger than 31
	Revert "drm/fb-helper: improve DRM fbdev emulation device names"
	selftests: net: Correct ping6 expected rc from 2 to 1
	s390/kexec_file: fix error handling when applying relocations
	sch_cake: do not call cake_destroy() from cake_init()
	inet_diag: fix kernel-infoleak for UDP sockets
	netdevsim: don't overwrite read only ethtool parms
	selftests: icmp_redirect: pass xfail=0 to log_test()
	net: hns3: fix use-after-free bug in hclgevf_send_mbx_msg
	net: hns3: fix race condition in debugfs
	selftests: Add duplicate config only for MD5 VRF tests
	selftests: Fix raw socket bind tests with VRF
	selftests: Fix IPv6 address bind tests
	dmaengine: idxd: fix missed completion on abort path
	dmaengine: st_fdma: fix MODULE_ALIAS
	drm: simpledrm: fix wrong unit with pixel clock
	net/sched: sch_ets: don't remove idle classes from the round-robin list
	selftests/net: toeplitz: fix udp option
	net: dsa: mv88e6xxx: Unforce speed & duplex in mac_link_down()
	selftest/net/forwarding: declare NETIFS p9 p10
	mptcp: never allow the PM to close a listener subflow
	drm/ast: potential dereference of null pointer
	drm/i915/display: Fix an unsigned subtraction which can never be negative.
	mac80211: agg-tx: don't schedule_and_wake_txq() under sta->lock
	cfg80211: Acquire wiphy mutex on regulatory work
	mac80211: fix lookup when adding AddBA extension element
	net: stmmac: fix tc flower deletion for VLAN priority Rx steering
	flow_offload: return EOPNOTSUPP for the unsupported mpls action type
	rds: memory leak in __rds_conn_create()
	ice: Use div64_u64 instead of div_u64 in adjfine
	ice: Don't put stale timestamps in the skb
	drm/amd/display: Set exit_optimized_pwr_state for DCN31
	drm/amd/pm: fix a potential gpu_metrics_table memory leak
	mptcp: remove tcp ulp setsockopt support
	mptcp: clear 'kern' flag from fallback sockets
	mptcp: fix deadlock in __mptcp_push_pending()
	soc/tegra: fuse: Fix bitwise vs. logical OR warning
	igb: Fix removal of unicast MAC filters of VFs
	igbvf: fix double free in `igbvf_probe`
	igc: Fix typo in i225 LTR functions
	ixgbe: Document how to enable NBASE-T support
	ixgbe: set X550 MDIO speed before talking to PHY
	netdevsim: Zero-initialize memory for new map's value in function nsim_bpf_map_alloc
	net/packet: rx_owner_map depends on pg_vec
	net: stmmac: dwmac-rk: fix oob read in rk_gmac_setup
	sfc_ef100: potential dereference of null pointer
	dsa: mv88e6xxx: fix debug print for SPEED_UNFORCED
	net: Fix double 0x prefix print in SKB dump
	net/smc: Prevent smc_release() from long blocking
	net: systemport: Add global locking for descriptor lifecycle
	sit: do not call ipip6_dev_free() from sit_init_net()
	afs: Fix mmap
	arm64: kexec: Fix missing error code 'ret' warning in load_other_segments()
	bpf: Fix extable fixup offset.
	bpf, selftests: Fix racing issue in btf_skc_cls_ingress test
	powerpc/85xx: Fix oops when CONFIG_FSL_PMC=n
	USB: gadget: bRequestType is a bitfield, not a enum
	Revert "usb: early: convert to readl_poll_timeout_atomic()"
	KVM: x86: Drop guest CPUID check for host initiated writes to MSR_IA32_PERF_CAPABILITIES
	tty: n_hdlc: make n_hdlc_tty_wakeup() asynchronous
	USB: NO_LPM quirk Lenovo USB-C to Ethernet Adapher(RTL8153-04)
	usb: dwc2: fix STM ID/VBUS detection startup delay in dwc2_driver_probe
	PCI/MSI: Clear PCI_MSIX_FLAGS_MASKALL on error
	PCI/MSI: Mask MSI-X vectors only on success
	usb: xhci-mtk: fix list_del warning when enable list debug
	usb: xhci: Extend support for runtime power management for AMD's Yellow carp.
	usb: cdnsp: Fix incorrect status for control request
	usb: cdnsp: Fix incorrect calling of cdnsp_died function
	usb: cdnsp: Fix issue in cdnsp_log_ep trace event
	usb: cdnsp: Fix lack of spin_lock_irqsave/spin_lock_restore
	usb: typec: tcpm: fix tcpm unregister port but leave a pending timer
	usb: gadget: u_ether: fix race in setting MAC address in setup phase
	USB: serial: cp210x: fix CP2105 GPIO registration
	USB: serial: option: add Telit FN990 compositions
	selinux: fix sleeping function called from invalid context
	btrfs: fix memory leak in __add_inode_ref()
	btrfs: fix double free of anon_dev after failure to create subvolume
	btrfs: check WRITE_ERR when trying to read an extent buffer
	btrfs: fix missing blkdev_put() call in btrfs_scan_one_device()
	zonefs: add MODULE_ALIAS_FS
	iocost: Fix divide-by-zero on donation from low hweight cgroup
	serial: 8250_fintek: Fix garbled text for console
	timekeeping: Really make sure wall_to_monotonic isn't positive
	cifs: sanitize multiple delimiters in prepath
	locking/rtmutex: Fix incorrect condition in rtmutex_spin_on_owner()
	riscv: dts: unleashed: Add gpio card detect to mmc-spi-slot
	riscv: dts: unmatched: Add gpio card detect to mmc-spi-slot
	perf inject: Fix segfault due to close without open
	perf inject: Fix segfault due to perf_data__fd() without open
	libata: if T_LENGTH is zero, dma direction should be DMA_NONE
	powerpc/module_64: Fix livepatching for RO modules
	drm/amdgpu: correct register access for RLC_JUMP_TABLE_RESTORE
	drm/amdgpu: don't override default ECO_BITs setting
	drm/amd/pm: fix reading SMU FW version from amdgpu_firmware_info on YC
	Revert "can: m_can: remove support for custom bit timing"
	can: m_can: make custom bittiming fields const
	can: m_can: pci: use custom bit timings for Elkhart Lake
	ARM: dts: imx6ull-pinfunc: Fix CSI_DATA07__ESAI_TX0 pad name
	xsk: Do not sleep in poll() when need_wakeup set
	mptcp: add missing documented NL params
	bpf, x64: Factor out emission of REX byte in more cases
	bpf: Fix extable address check.
	USB: core: Make do_proc_control() and do_proc_bulk() killable
	media: mxl111sf: change mutex_init() location
	fuse: annotate lock in fuse_reverse_inval_entry()
	ovl: fix warning in ovl_create_real()
	scsi: scsi_debug: Don't call kcalloc() if size arg is zero
	scsi: scsi_debug: Fix type in min_t to avoid stack OOB
	scsi: scsi_debug: Sanity check block descriptor length in resp_mode_select()
	io-wq: remove spurious bit clear on task_work addition
	io-wq: check for wq exit after adding new worker task_work
	rcu: Mark accesses to rcu_state.n_force_qs
	io-wq: drop wqe lock before creating new worker
	bus: ti-sysc: Fix variable set but not used warning for reinit_modules
	selftests/damon: test debugfs file reads/writes with huge count
	Revert "xsk: Do not sleep in poll() when need_wakeup set"
	xen/blkfront: harden blkfront against event channel storms
	xen/netfront: harden netfront against event channel storms
	xen/console: harden hvc_xen against event channel storms
	xen/netback: fix rx queue stall detection
	xen/netback: don't queue unlimited number of packages
	Linux 5.15.11

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I20c400f64f45729c6f833c31ee18eb4b92f5ed89
@@ -440,6 +440,22 @@ NOTE: For 82599-based network connections, if you are enabling jumbo frames in
 a virtual function (VF), jumbo frames must first be enabled in the physical
 function (PF). The VF MTU setting cannot be larger than the PF MTU.
 
+NBASE-T Support
+---------------
+The ixgbe driver supports NBASE-T on some devices. However, the advertisement
+of NBASE-T speeds is suppressed by default, to accommodate broken network
+switches which cannot cope with advertised NBASE-T speeds. Use the ethtool
+command to enable advertising NBASE-T speeds on devices which support it::
+
+  ethtool -s eth? advertise 0x1800000001028
+
+On Linux systems with INTERFACES(5), this can be specified as a pre-up command
+in /etc/network/interfaces so that the interface is always brought up with
+NBASE-T support, e.g.::
+
+  iface eth? inet dhcp
+       pre-up ethtool -s eth? advertise 0x1800000001028 || true
+
 Generic Receive Offload, aka GRO
 --------------------------------
 The driver supports the in-kernel software implementation of GRO. GRO has
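As a side note on the documentation hunk above: the `advertise` argument is a bitmask over the kernel's `ETHTOOL_LINK_MODE_*` bit indices (uapi `ethtool.h`). A quick illustrative sketch, not part of the patch, decoding `0x1800000001028` under that standard bit numbering:

```python
# Decode the ethtool "advertise" bitmask used for NBASE-T above.
# Bit numbers follow enum ethtool_link_mode_bit_indices in uapi ethtool.h;
# only the five bits relevant to this mask are listed here.
LINK_MODES = {
    3: "100baseT/Full",
    5: "1000baseT/Full",
    12: "10000baseT/Full",
    47: "2500baseT/Full",
    48: "5000baseT/Full",
}

def decode_advertise(mask: int) -> list[str]:
    """Return the names of the known link modes set in an advertise mask."""
    return [name for bit, name in sorted(LINK_MODES.items()) if mask & (1 << bit)]

print(decode_advertise(0x1800000001028))
```

The mask decodes to 100M/1G/2.5G/5G/10G full duplex, i.e. the full NBASE-T speed set.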
Makefile | 2 +-

@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 15
-SUBLEVEL = 10
+SUBLEVEL = 11
 EXTRAVERSION =
 NAME = Trick or Treat
 
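For context, this one-line SUBLEVEL bump is the whole release change: the top-level Makefile composes KERNELVERSION roughly as `$(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)`. A minimal sketch of that composition (illustrative only; the real logic lives in the Makefile itself):

```python
# Sketch of how the kernel's top-level Makefile composes KERNELVERSION
# from VERSION, PATCHLEVEL, SUBLEVEL and EXTRAVERSION.
def kernel_version(version, patchlevel="", sublevel="", extraversion=""):
    parts = [str(version)]
    if patchlevel != "":            # PATCHLEVEL is appended only when set
        parts.append(str(patchlevel))
        if sublevel != "":          # SUBLEVEL only when PATCHLEVEL is set too
            parts.append(str(sublevel))
    return ".".join(parts) + str(extraversion)

print(kernel_version(5, 15, 11))  # → 5.15.11
```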
@@ -82,6 +82,6 @@
 #define MX6ULL_PAD_CSI_DATA04__ESAI_TX_FS 0x01F4 0x0480 0x0000 0x9 0x0
 #define MX6ULL_PAD_CSI_DATA05__ESAI_TX_CLK 0x01F8 0x0484 0x0000 0x9 0x0
 #define MX6ULL_PAD_CSI_DATA06__ESAI_TX5_RX0 0x01FC 0x0488 0x0000 0x9 0x0
-#define MX6ULL_PAD_CSI_DATA07__ESAI_T0 0x0200 0x048C 0x0000 0x9 0x0
+#define MX6ULL_PAD_CSI_DATA07__ESAI_TX0 0x0200 0x048C 0x0000 0x9 0x0
 
 #endif /* __DTS_IMX6ULL_PINFUNC_H */
@@ -12,7 +12,7 @@
 	flash0: n25q00@0 {
 		#address-cells = <1>;
 		#size-cells = <1>;
-		compatible = "n25q00aa";
+		compatible = "micron,mt25qu02g", "jedec,spi-nor";
 		reg = <0>;
 		spi-max-frequency = <100000000>;
 
@@ -119,7 +119,7 @@
 	flash: flash@0 {
 		#address-cells = <1>;
 		#size-cells = <1>;
-		compatible = "n25q256a";
+		compatible = "micron,n25q256a", "jedec,spi-nor";
 		reg = <0>;
 		spi-max-frequency = <100000000>;
 
@@ -124,7 +124,7 @@
 	flash0: n25q00@0 {
 		#address-cells = <1>;
 		#size-cells = <1>;
-		compatible = "n25q00";
+		compatible = "micron,mt25qu02g", "jedec,spi-nor";
 		reg = <0>; /* chip select */
 		spi-max-frequency = <100000000>;
 
@@ -169,7 +169,7 @@
 	flash: flash@0 {
 		#address-cells = <1>;
 		#size-cells = <1>;
-		compatible = "n25q00";
+		compatible = "micron,mt25qu02g", "jedec,spi-nor";
 		reg = <0>;
 		spi-max-frequency = <100000000>;
 
@@ -80,7 +80,7 @@
 	flash: flash@0 {
 		#address-cells = <1>;
 		#size-cells = <1>;
-		compatible = "n25q256a";
+		compatible = "micron,n25q256a", "jedec,spi-nor";
 		reg = <0>;
 		spi-max-frequency = <100000000>;
 		m25p,fast-read;
@@ -116,7 +116,7 @@
 	flash0: n25q512a@0 {
 		#address-cells = <1>;
 		#size-cells = <1>;
-		compatible = "n25q512a";
+		compatible = "micron,n25q512a", "jedec,spi-nor";
 		reg = <0>;
 		spi-max-frequency = <100000000>;
 
@@ -224,7 +224,7 @@
 	n25q128@0 {
 		#address-cells = <1>;
 		#size-cells = <1>;
-		compatible = "n25q128";
+		compatible = "micron,n25q128", "jedec,spi-nor";
 		reg = <0>; /* chip select */
 		spi-max-frequency = <100000000>;
 		m25p,fast-read;
@@ -241,7 +241,7 @@
 	n25q00@1 {
 		#address-cells = <1>;
 		#size-cells = <1>;
-		compatible = "n25q00";
+		compatible = "micron,mt25qu02g", "jedec,spi-nor";
 		reg = <1>; /* chip select */
 		spi-max-frequency = <100000000>;
 		m25p,fast-read;
@@ -38,7 +38,6 @@
 	powerdn {
 		label = "External Power Down";
 		gpios = <&gpio1 17 GPIO_ACTIVE_LOW>;
-		interrupts = <&gpio1 17 IRQ_TYPE_EDGE_FALLING>;
 		linux,code = <KEY_POWER>;
 	};
 
@@ -46,7 +45,6 @@
 	admin {
 		label = "ADMIN button";
 		gpios = <&gpio3 8 GPIO_ACTIVE_HIGH>;
-		interrupts = <&gpio3 8 IRQ_TYPE_EDGE_RISING>;
 		linux,code = <KEY_WPS_BUTTON>;
 	};
 };
@@ -524,8 +524,6 @@
 				 <&clk IMX8MQ_VIDEO_PLL1>,
 				 <&clk IMX8MQ_VIDEO_PLL1_OUT>;
 	assigned-clock-rates = <0>, <0>, <0>, <594000000>;
-	interconnects = <&noc IMX8MQ_ICM_LCDIF &noc IMX8MQ_ICS_DRAM>;
-	interconnect-names = "dram";
 	status = "disabled";
 
 	port@0 {
@@ -97,7 +97,7 @@
 		regulator-max-microvolt = <3300000>;
 		regulator-always-on;
 		regulator-boot-on;
-		vim-supply = <&vcc_io>;
+		vin-supply = <&vcc_io>;
 	};
 
 	vdd_core: vdd-core {
@@ -705,7 +705,6 @@
 &sdhci {
 	bus-width = <8>;
 	mmc-hs400-1_8v;
-	mmc-hs400-enhanced-strobe;
 	non-removable;
 	status = "okay";
 };
@@ -269,6 +269,7 @@
 		clock-output-names = "xin32k", "rk808-clkout2";
 		pinctrl-names = "default";
 		pinctrl-0 = <&pmic_int_l>;
+		rockchip,system-power-controller;
 		vcc1-supply = <&vcc5v0_sys>;
 		vcc2-supply = <&vcc5v0_sys>;
 		vcc3-supply = <&vcc5v0_sys>;
@@ -55,7 +55,7 @@
 		regulator-boot-on;
 		regulator-min-microvolt = <3300000>;
 		regulator-max-microvolt = <3300000>;
-		vim-supply = <&vcc3v3_sys>;
+		vin-supply = <&vcc3v3_sys>;
 	};
 
 	vcc3v3_sys: vcc3v3-sys {
@@ -457,7 +457,7 @@
 	status = "okay";
 
 	bt656-supply = <&vcc_3v0>;
-	audio-supply = <&vcc_3v0>;
+	audio-supply = <&vcc1v8_codec>;
 	sdmmc-supply = <&vcc_sdio>;
 	gpio1830-supply = <&vcc_3v0>;
 };
@@ -149,6 +149,7 @@ int load_other_segments(struct kimage *image,
 					initrd_len, cmdline, 0);
 	if (!dtb) {
 		pr_err("Preparing for new dtb failed\n");
+		ret = -EINVAL;
 		goto out_err;
 	}
 
@@ -422,11 +422,17 @@ static inline int create_stub(const Elf64_Shdr *sechdrs,
 			      const char *name)
 {
 	long reladdr;
+	func_desc_t desc;
+	int i;
 
 	if (is_mprofile_ftrace_call(name))
 		return create_ftrace_stub(entry, addr, me);
 
-	memcpy(entry->jump, ppc64_stub_insns, sizeof(ppc64_stub_insns));
+	for (i = 0; i < sizeof(ppc64_stub_insns) / sizeof(u32); i++) {
+		if (patch_instruction(&entry->jump[i],
+				      ppc_inst(ppc64_stub_insns[i])))
+			return 0;
+	}
 
 	/* Stub uses address relative to r2. */
 	reladdr = (unsigned long)entry - my_r2(sechdrs, me);
@@ -437,10 +443,24 @@ static inline int create_stub(const Elf64_Shdr *sechdrs,
 	}
 	pr_debug("Stub %p get data from reladdr %li\n", entry, reladdr);
 
-	entry->jump[0] |= PPC_HA(reladdr);
-	entry->jump[1] |= PPC_LO(reladdr);
-	entry->funcdata = func_desc(addr);
-	entry->magic = STUB_MAGIC;
+	if (patch_instruction(&entry->jump[0],
+			      ppc_inst(entry->jump[0] | PPC_HA(reladdr))))
+		return 0;
+
+	if (patch_instruction(&entry->jump[1],
+			      ppc_inst(entry->jump[1] | PPC_LO(reladdr))))
+		return 0;
+
+	// func_desc_t is 8 bytes if ABIv2, else 16 bytes
+	desc = func_desc(addr);
+	for (i = 0; i < sizeof(func_desc_t) / sizeof(u32); i++) {
+		if (patch_instruction(((u32 *)&entry->funcdata) + i,
+				      ppc_inst(((u32 *)(&desc))[i])))
+			return 0;
+	}
+
+	if (patch_instruction(&entry->magic, ppc_inst(STUB_MAGIC)))
+		return 0;
 
 	return 1;
 }
@@ -495,8 +515,11 @@ static int restore_r2(const char *name, u32 *instruction, struct module *me)
 			me->name, *instruction, instruction);
 		return 0;
 	}
 
 	/* ld r2,R2_STACK_OFFSET(r1) */
-	*instruction = PPC_INST_LD_TOC;
+	if (patch_instruction(instruction, ppc_inst(PPC_INST_LD_TOC)))
+		return 0;
+
 	return 1;
 }
 
@@ -636,9 +659,12 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
 			}
 
 			/* Only replace bits 2 through 26 */
-			*(uint32_t *)location
-				= (*(uint32_t *)location & ~0x03fffffc)
+			value = (*(uint32_t *)location & ~0x03fffffc)
 				| (value & 0x03fffffc);
+
+			if (patch_instruction((u32 *)location, ppc_inst(value)))
+				return -EFAULT;
+
 			break;
 
 		case R_PPC64_REL64:
@@ -220,7 +220,7 @@ static int smp_85xx_start_cpu(int cpu)
 	local_irq_save(flags);
 	hard_irq_disable();
 
-	if (qoriq_pm_ops)
+	if (qoriq_pm_ops && qoriq_pm_ops->cpu_up_prepare)
 		qoriq_pm_ops->cpu_up_prepare(cpu);
 
 	/* if cpu is not spinning, reset it */
@@ -292,7 +292,7 @@ static int smp_85xx_kick_cpu(int nr)
 		booting_thread_hwid = cpu_thread_in_core(nr);
 		primary = cpu_first_thread_sibling(nr);
 
-		if (qoriq_pm_ops)
+		if (qoriq_pm_ops && qoriq_pm_ops->cpu_up_prepare)
 			qoriq_pm_ops->cpu_up_prepare(nr);
 
 		/*
@@ -80,6 +80,7 @@
 			spi-max-frequency = <20000000>;
 			voltage-ranges = <3300 3300>;
 			disable-wp;
+			gpios = <&gpio 11 GPIO_ACTIVE_LOW>;
 		};
 	};
 
@@ -2,6 +2,7 @@
 /* Copyright (c) 2020 SiFive, Inc */
 
 #include "fu740-c000.dtsi"
+#include <dt-bindings/gpio/gpio.h>
 #include <dt-bindings/interrupt-controller/irq.h>
 
 /* Clock frequency (in Hz) of the PCB crystal for rtcclk */
@@ -228,6 +229,7 @@
 			spi-max-frequency = <20000000>;
 			voltage-ranges = <3300 3300>;
 			disable-wp;
+			gpios = <&gpio 15 GPIO_ACTIVE_LOW>;
 		};
 	};
 
@@ -138,7 +138,7 @@ void noinstr do_io_irq(struct pt_regs *regs)
 	struct pt_regs *old_regs = set_irq_regs(regs);
 	int from_idle;
 
-	irq_enter();
+	irq_enter_rcu();
 
 	if (user_mode(regs))
 		update_timer_sys();
@@ -155,7 +155,8 @@ void noinstr do_io_irq(struct pt_regs *regs)
 		do_irq_async(regs, IO_INTERRUPT);
 	} while (MACHINE_IS_LPAR && irq_pending(regs));
 
-	irq_exit();
+	irq_exit_rcu();
+
 	set_irq_regs(old_regs);
 	irqentry_exit(regs, state);
 
@@ -169,7 +170,7 @@ void noinstr do_ext_irq(struct pt_regs *regs)
 	struct pt_regs *old_regs = set_irq_regs(regs);
 	int from_idle;
 
-	irq_enter();
+	irq_enter_rcu();
 
 	if (user_mode(regs))
 		update_timer_sys();
@@ -184,7 +185,7 @@ void noinstr do_ext_irq(struct pt_regs *regs)
 
 	do_irq_async(regs, EXT_INTERRUPT);
 
-	irq_exit();
+	irq_exit_rcu();
 	set_irq_regs(old_regs);
 	irqentry_exit(regs, state);
 
@@ -277,6 +277,7 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
 {
 	Elf_Rela *relas;
 	int i, r_type;
+	int ret;
 
 	relas = (void *)pi->ehdr + relsec->sh_offset;
 
@@ -311,7 +312,11 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
 		addr = section->sh_addr + relas[i].r_offset;
 
 		r_type = ELF64_R_TYPE(relas[i].r_info);
-		arch_kexec_do_relocs(r_type, loc, val, addr);
+		ret = arch_kexec_do_relocs(r_type, loc, val, addr);
+		if (ret) {
+			pr_err("Unknown rela relocation: %d\n", r_type);
+			return -ENOEXEC;
+		}
 	}
 	return 0;
 }
@@ -81,7 +81,6 @@ struct kvm_ioapic {
 	unsigned long irq_states[IOAPIC_NUM_PINS];
 	struct kvm_io_device dev;
 	struct kvm *kvm;
-	void (*ack_notifier)(void *opaque, int irq);
 	spinlock_t lock;
 	struct rtc_status rtc_status;
 	struct delayed_work eoi_inject;
@@ -56,7 +56,6 @@ struct kvm_pic {
 	struct kvm_io_device dev_master;
 	struct kvm_io_device dev_slave;
 	struct kvm_io_device dev_elcr;
-	void (*ack_notifier)(void *opaque, int irq);
 	unsigned long irq_states[PIC_NUM_PINS];
 };
 
@@ -7776,10 +7776,10 @@ static __init int hardware_setup(void)
 		ple_window_shrink = 0;
 	}
 
-	if (!cpu_has_vmx_apicv()) {
+	if (!cpu_has_vmx_apicv())
 		enable_apicv = 0;
+	if (!enable_apicv)
 		vmx_x86_ops.sync_pir_to_irr = NULL;
-	}
 
 	if (cpu_has_vmx_tsc_scaling()) {
 		kvm_has_tsc_control = true;
@@ -1091,6 +1091,18 @@ static void kvm_invalidate_pcid(struct kvm_vcpu *vcpu, unsigned long pcid)
 	unsigned long roots_to_free = 0;
 	int i;
 
+	/*
+	 * MOV CR3 and INVPCID are usually not intercepted when using TDP, but
+	 * this is reachable when running EPT=1 and unrestricted_guest=0, and
+	 * also via the emulator. KVM's TDP page tables are not in the scope of
+	 * the invalidation, but the guest's TLB entries need to be flushed as
+	 * the CPU may have cached entries in its TLB for the target PCID.
+	 */
+	if (unlikely(tdp_enabled)) {
+		kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);
+		return;
+	}
+
 	/*
 	 * If neither the current CR3 nor any of the prev_roots use the given
 	 * PCID, then nothing needs to be done here because a resync will
@@ -3347,7 +3359,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 
 		if (!msr_info->host_initiated)
 			return 1;
-		if (guest_cpuid_has(vcpu, X86_FEATURE_PDCM) && kvm_get_msr_feature(&msr_ent))
+		if (kvm_get_msr_feature(&msr_ent))
 			return 1;
 		if (data & ~msr_ent.data)
 			return 1;
|||||||
@@ -721,6 +721,20 @@ static void maybe_emit_mod(u8 **pprog, u32 dst_reg, u32 src_reg, bool is64)
 	*pprog = prog;
 }
 
+/*
+ * Similar version of maybe_emit_mod() for a single register
+ */
+static void maybe_emit_1mod(u8 **pprog, u32 reg, bool is64)
+{
+	u8 *prog = *pprog;
+
+	if (is64)
+		EMIT1(add_1mod(0x48, reg));
+	else if (is_ereg(reg))
+		EMIT1(add_1mod(0x40, reg));
+	*pprog = prog;
+}
+
 /* LDX: dst_reg = *(u8*)(src_reg + off) */
 static void emit_ldx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
 {
@@ -951,10 +965,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			/* neg dst */
 		case BPF_ALU | BPF_NEG:
 		case BPF_ALU64 | BPF_NEG:
-			if (BPF_CLASS(insn->code) == BPF_ALU64)
-				EMIT1(add_1mod(0x48, dst_reg));
-			else if (is_ereg(dst_reg))
-				EMIT1(add_1mod(0x40, dst_reg));
+			maybe_emit_1mod(&prog, dst_reg,
+					BPF_CLASS(insn->code) == BPF_ALU64);
 			EMIT2(0xF7, add_1reg(0xD8, dst_reg));
 			break;
 
@@ -968,10 +980,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 		case BPF_ALU64 | BPF_AND | BPF_K:
 		case BPF_ALU64 | BPF_OR | BPF_K:
 		case BPF_ALU64 | BPF_XOR | BPF_K:
-			if (BPF_CLASS(insn->code) == BPF_ALU64)
-				EMIT1(add_1mod(0x48, dst_reg));
-			else if (is_ereg(dst_reg))
-				EMIT1(add_1mod(0x40, dst_reg));
+			maybe_emit_1mod(&prog, dst_reg,
+					BPF_CLASS(insn->code) == BPF_ALU64);
 
 			/*
 			 * b3 holds 'normal' opcode, b2 short form only valid
@@ -1112,10 +1122,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 		case BPF_ALU64 | BPF_LSH | BPF_K:
 		case BPF_ALU64 | BPF_RSH | BPF_K:
 		case BPF_ALU64 | BPF_ARSH | BPF_K:
-			if (BPF_CLASS(insn->code) == BPF_ALU64)
-				EMIT1(add_1mod(0x48, dst_reg));
-			else if (is_ereg(dst_reg))
-				EMIT1(add_1mod(0x40, dst_reg));
+			maybe_emit_1mod(&prog, dst_reg,
+					BPF_CLASS(insn->code) == BPF_ALU64);
 
 			b3 = simple_alu_opcodes[BPF_OP(insn->code)];
 			if (imm32 == 1)
@@ -1146,10 +1154,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			}
 
 			/* shl %rax, %cl | shr %rax, %cl | sar %rax, %cl */
-			if (BPF_CLASS(insn->code) == BPF_ALU64)
-				EMIT1(add_1mod(0x48, dst_reg));
-			else if (is_ereg(dst_reg))
-				EMIT1(add_1mod(0x40, dst_reg));
+			maybe_emit_1mod(&prog, dst_reg,
+					BPF_CLASS(insn->code) == BPF_ALU64);
 
 			b3 = simple_alu_opcodes[BPF_OP(insn->code)];
 			EMIT2(0xD3, add_1reg(b3, dst_reg));
@@ -1274,19 +1280,54 @@ st:	if (is_imm8(insn->off))
 		case BPF_LDX | BPF_MEM | BPF_DW:
 		case BPF_LDX | BPF_PROBE_MEM | BPF_DW:
 			if (BPF_MODE(insn->code) == BPF_PROBE_MEM) {
-				/* test src_reg, src_reg */
-				maybe_emit_mod(&prog, src_reg, src_reg, true); /* always 1 byte */
-				EMIT2(0x85, add_2reg(0xC0, src_reg, src_reg));
-				/* jne start_of_ldx */
-				EMIT2(X86_JNE, 0);
+				/* Though the verifier prevents negative insn->off in BPF_PROBE_MEM
+				 * add abs(insn->off) to the limit to make sure that negative
+				 * offset won't be an issue.
+				 * insn->off is s16, so it won't affect valid pointers.
+				 */
+				u64 limit = TASK_SIZE_MAX + PAGE_SIZE + abs(insn->off);
+				u8 *end_of_jmp1, *end_of_jmp2;
+
+				/* Conservatively check that src_reg + insn->off is a kernel address:
+				 * 1. src_reg + insn->off >= limit
+				 * 2. src_reg + insn->off doesn't become small positive.
+				 * Cannot do src_reg + insn->off >= limit in one branch,
+				 * since it needs two spare registers, but JIT has only one.
+				 */
+
+				/* movabsq r11, limit */
+				EMIT2(add_1mod(0x48, AUX_REG), add_1reg(0xB8, AUX_REG));
+				EMIT((u32)limit, 4);
+				EMIT(limit >> 32, 4);
+				/* cmp src_reg, r11 */
+				maybe_emit_mod(&prog, src_reg, AUX_REG, true);
+				EMIT2(0x39, add_2reg(0xC0, src_reg, AUX_REG));
+				/* if unsigned '<' goto end_of_jmp2 */
+				EMIT2(X86_JB, 0);
+				end_of_jmp1 = prog;
+
+				/* mov r11, src_reg */
+				emit_mov_reg(&prog, true, AUX_REG, src_reg);
+				/* add r11, insn->off */
+				maybe_emit_1mod(&prog, AUX_REG, true);
+				EMIT2_off32(0x81, add_1reg(0xC0, AUX_REG), insn->off);
+				/* jmp if not carry to start_of_ldx
+				 * Otherwise ERR_PTR(-EINVAL) + 128 will be the user addr
+				 * that has to be rejected.
+				 */
+				EMIT2(0x73 /* JNC */, 0);
+				end_of_jmp2 = prog;
+
 				/* xor dst_reg, dst_reg */
 				emit_mov_imm32(&prog, false, dst_reg, 0);
 				/* jmp byte_after_ldx */
 				EMIT2(0xEB, 0);
 
-				/* populate jmp_offset for JNE above */
-				temp[4] = prog - temp - 5 /* sizeof(test + jne) */;
+				/* populate jmp_offset for JB above to jump to xor dst_reg */
+				end_of_jmp1[-1] = end_of_jmp2 - end_of_jmp1;
+				/* populate jmp_offset for JNC above to jump to start_of_ldx */
 				start_of_ldx = prog;
+				end_of_jmp2[-1] = start_of_ldx - end_of_jmp2;
 			}
 			emit_ldx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn->off);
 			if (BPF_MODE(insn->code) == BPF_PROBE_MEM) {
@@ -1332,7 +1373,7 @@ st:	if (is_imm8(insn->off))
 			 * End result: x86 insn "mov rbx, qword ptr [rax+0x14]"
 			 * of 4 bytes will be ignored and rbx will be zero inited.
 			 */
-			ex->fixup = (prog - temp) | (reg2pt_regs[dst_reg] << 8);
+			ex->fixup = (prog - start_of_ldx) | (reg2pt_regs[dst_reg] << 8);
 			}
 			break;
 
@@ -1459,10 +1500,8 @@ st:	if (is_imm8(insn->off))
 		case BPF_JMP | BPF_JSET | BPF_K:
 		case BPF_JMP32 | BPF_JSET | BPF_K:
 			/* test dst_reg, imm32 */
-			if (BPF_CLASS(insn->code) == BPF_JMP)
-				EMIT1(add_1mod(0x48, dst_reg));
-			else if (is_ereg(dst_reg))
-				EMIT1(add_1mod(0x40, dst_reg));
+			maybe_emit_1mod(&prog, dst_reg,
+					BPF_CLASS(insn->code) == BPF_JMP);
 			EMIT2_off32(0xF7, add_1reg(0xC0, dst_reg), imm32);
 			goto emit_cond_jmp;
 
@@ -1495,10 +1534,8 @@ st:	if (is_imm8(insn->off))
 			}
 
 			/* cmp dst_reg, imm8/32 */
-			if (BPF_CLASS(insn->code) == BPF_JMP)
-				EMIT1(add_1mod(0x48, dst_reg));
-			else if (is_ereg(dst_reg))
-				EMIT1(add_1mod(0x40, dst_reg));
+			maybe_emit_1mod(&prog, dst_reg,
+					BPF_CLASS(insn->code) == BPF_JMP);
 
 			if (is_imm8(imm32))
 				EMIT3(0x83, add_1reg(0xF8, dst_reg), imm32);
@@ -2311,7 +2311,14 @@ static void ioc_timer_fn(struct timer_list *timer)
 			hwm = current_hweight_max(iocg);
 			new_hwi = hweight_after_donation(iocg, old_hwi, hwm,
 							 usage, &now);
-			if (new_hwi < hwm) {
+			/*
+			 * Donation calculation assumes hweight_after_donation
+			 * to be positive, a condition that a donor w/ hwa < 2
+			 * can't meet. Don't bother with donation if hwa is
+			 * below 2. It's not gonna make a meaningful difference
+			 * anyway.
+			 */
+			if (new_hwi < hwm && hwa >= 2) {
 				iocg->hweight_donating = hwa;
 				iocg->hweight_after_donation = new_hwi;
 				list_add(&iocg->surplus_list, &surpluses);
@@ -41,8 +41,7 @@ obj-$(CONFIG_DMADEVICES)	+= dma/
 # SOC specific infrastructure drivers.
 obj-y				+= soc/
 
-obj-$(CONFIG_VIRTIO)		+= virtio/
-obj-$(CONFIG_VIRTIO_PCI_LIB)	+= virtio/
+obj-y				+= virtio/
 obj-$(CONFIG_VDPA)		+= vdpa/
 obj-$(CONFIG_XEN)		+= xen/
 
@@ -2826,8 +2826,19 @@ static unsigned int ata_scsi_pass_thru(struct ata_queued_cmd *qc)
 			goto invalid_fld;
 	}
 
-	if (ata_is_ncq(tf->protocol) && (cdb[2 + cdb_offset] & 0x3) == 0)
-		tf->protocol = ATA_PROT_NCQ_NODATA;
+	if ((cdb[2 + cdb_offset] & 0x3) == 0) {
+		/*
+		 * When T_LENGTH is zero (No data is transferred), dir should
+		 * be DMA_NONE.
+		 */
+		if (scmd->sc_data_direction != DMA_NONE) {
+			fp = 2 + cdb_offset;
+			goto invalid_fld;
+		}
+
+		if (ata_is_ncq(tf->protocol))
+			tf->protocol = ATA_PROT_NCQ_NODATA;
+	}
 
 	/* enable LBA */
 	tf->flags |= ATA_TFLAG_LBA;
@@ -1511,9 +1511,12 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 	unsigned long flags;
 	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
 	struct blkfront_info *info = rinfo->dev_info;
+	unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
 
-	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
+	if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
+		xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);
 		return IRQ_HANDLED;
+	}
 
 	spin_lock_irqsave(&rinfo->ring_lock, flags);
 again:
@@ -1529,6 +1532,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 		unsigned long id;
 		unsigned int op;
 
+		eoiflag = 0;
+
 		RING_COPY_RESPONSE(&rinfo->ring, i, &bret);
 		id = bret.id;
 
@@ -1645,6 +1650,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 
 	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
 
+	xen_irq_lateeoi(irq, eoiflag);
+
 	return IRQ_HANDLED;
 
 err:
@@ -1652,6 +1659,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 
 	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
 
+	/* No EOI in order to avoid further interrupts. */
+
 	pr_alert("%s disabled for further use\n", info->gd->disk_name);
 	return IRQ_HANDLED;
 }
@@ -1691,8 +1700,8 @@ static int setup_blkring(struct xenbus_device *dev,
 	if (err)
 		goto fail;
 
-	err = bind_evtchn_to_irqhandler(rinfo->evtchn, blkif_interrupt, 0,
-					"blkif", rinfo);
+	err = bind_evtchn_to_irqhandler_lateeoi(rinfo->evtchn, blkif_interrupt,
+						0, "blkif", rinfo);
 	if (err <= 0) {
 		xenbus_dev_fatal(dev, err,
 				 "bind_evtchn_to_irqhandler failed");
@@ -2456,12 +2456,11 @@ static void sysc_reinit_modules(struct sysc_soc_info *soc)
 	struct sysc_module *module;
 	struct list_head *pos;
 	struct sysc *ddata;
-	int error = 0;
 
 	list_for_each(pos, &sysc_soc->restored_modules) {
 		module = list_entry(pos, struct sysc_module, node);
 		ddata = module->ddata;
-		error = sysc_reinit_module(ddata, ddata->enabled);
+		sysc_reinit_module(ddata, ddata->enabled);
 	}
 }
 
@@ -63,6 +63,9 @@ static int process_misc_interrupts(struct idxd_device *idxd, u32 cause)
 	int i;
 	bool err = false;
 
+	if (cause & IDXD_INTC_HALT_STATE)
+		goto halt;
+
 	if (cause & IDXD_INTC_ERR) {
 		spin_lock(&idxd->dev_lock);
 		for (i = 0; i < 4; i++)
@@ -121,6 +124,7 @@ static int process_misc_interrupts(struct idxd_device *idxd, u32 cause)
 	if (!err)
 		return 0;
 
+halt:
 	gensts.bits = ioread32(idxd->reg_base + IDXD_GENSTATS_OFFSET);
 	if (gensts.state == IDXD_DEVICE_STATE_HALT) {
 		idxd->state = IDXD_DEV_HALTED;
@@ -133,9 +137,10 @@ static int process_misc_interrupts(struct idxd_device *idxd, u32 cause)
 			INIT_WORK(&idxd->work, idxd_device_reinit);
 			queue_work(idxd->wq, &idxd->work);
 		} else {
-			spin_lock(&idxd->dev_lock);
+			idxd->state = IDXD_DEV_HALTED;
 			idxd_wqs_quiesce(idxd);
 			idxd_wqs_unmap_portal(idxd);
+			spin_lock(&idxd->dev_lock);
 			idxd_device_clear_state(idxd);
 			dev_err(&idxd->pdev->dev,
 				"idxd halted, need %s.\n",
@@ -158,6 +158,7 @@ enum idxd_device_reset_type {
 #define IDXD_INTC_CMD			0x02
 #define IDXD_INTC_OCCUPY		0x04
 #define IDXD_INTC_PERFMON_OVFL		0x08
+#define IDXD_INTC_HALT_STATE		0x10
 
 #define IDXD_CMD_OFFSET			0xa0
 union idxd_command_reg {
@@ -106,6 +106,7 @@ static void llist_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie,
 {
 	struct idxd_desc *d, *t, *found = NULL;
 	struct llist_node *head;
+	LIST_HEAD(flist);
 
 	desc->completion->status = IDXD_COMP_DESC_ABORT;
 	/*
@@ -120,7 +121,11 @@ static void llist_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie,
 			found = desc;
 			continue;
 		}
-		list_add_tail(&desc->list, &ie->work_list);
+
+		if (d->completion->status)
+			list_add_tail(&d->list, &flist);
+		else
+			list_add_tail(&d->list, &ie->work_list);
 	}
 }
 
@@ -130,6 +135,17 @@ static void llist_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie,
 
 	if (found)
 		complete_desc(found, IDXD_COMPLETE_ABORT);
+
+	/*
+	 * complete_desc() will return desc to allocator and the desc can be
+	 * acquired by a different process and the desc->list can be modified.
+	 * Delete desc from list so the list trasversing does not get corrupted
+	 * by the other process.
+	 */
+	list_for_each_entry_safe(d, t, &flist, list) {
+		list_del_init(&d->list);
+		complete_desc(d, IDXD_COMPLETE_NORMAL);
+	}
 }
 
 int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc)
@@ -874,4 +874,4 @@ MODULE_LICENSE("GPL v2");
 MODULE_DESCRIPTION("STMicroelectronics FDMA engine driver");
 MODULE_AUTHOR("Ludovic.barre <Ludovic.barre@st.com>");
 MODULE_AUTHOR("Peter Griffin <peter.griffin@linaro.org>");
-MODULE_ALIAS("platform: " DRIVER_NAME);
+MODULE_ALIAS("platform:" DRIVER_NAME);
@@ -16,7 +16,6 @@ struct scpi_pm_domain {
 	struct generic_pm_domain genpd;
 	struct scpi_ops *ops;
 	u32 domain;
-	char name[30];
 };
 
 /*
@@ -110,8 +109,13 @@ static int scpi_pm_domain_probe(struct platform_device *pdev)
 
 		scpi_pd->domain = i;
 		scpi_pd->ops = scpi_ops;
-		sprintf(scpi_pd->name, "%pOFn.%d", np, i);
-		scpi_pd->genpd.name = scpi_pd->name;
+		scpi_pd->genpd.name = devm_kasprintf(dev, GFP_KERNEL,
+						     "%pOFn.%d", np, i);
+		if (!scpi_pd->genpd.name) {
+			dev_err(dev, "Failed to allocate genpd name:%pOFn.%d\n",
+				np, i);
+			continue;
+		}
 		scpi_pd->genpd.power_off = scpi_pd_power_off;
 		scpi_pd->genpd.power_on = scpi_pd_power_on;
 
@@ -3061,8 +3061,8 @@ static void gfx_v9_0_init_pg(struct amdgpu_device *adev)
 			      AMD_PG_SUPPORT_CP |
 			      AMD_PG_SUPPORT_GDS |
 			      AMD_PG_SUPPORT_RLC_SMU_HS)) {
-		WREG32(mmRLC_JUMP_TABLE_RESTORE,
-		       adev->gfx.rlc.cp_table_gpu_addr >> 8);
+		WREG32_SOC15(GC, 0, mmRLC_JUMP_TABLE_RESTORE,
+			     adev->gfx.rlc.cp_table_gpu_addr >> 8);
 		gfx_v9_0_init_gfx_power_gating(adev);
 	}
 }
@@ -162,7 +162,6 @@ static void gfxhub_v1_0_init_tlb_regs(struct amdgpu_device *adev)
 			    ENABLE_ADVANCED_DRIVER_MODEL, 1);
 	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
 			    SYSTEM_APERTURE_UNMAPPED_ACCESS, 0);
-	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ECO_BITS, 0);
 	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
 			    MTYPE, MTYPE_UC);/* XXX for emulation. */
 	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ATC_EN, 1);
@@ -196,7 +196,6 @@ static void gfxhub_v2_0_init_tlb_regs(struct amdgpu_device *adev)
 			    ENABLE_ADVANCED_DRIVER_MODEL, 1);
 	tmp = REG_SET_FIELD(tmp, GCMC_VM_MX_L1_TLB_CNTL,
 			    SYSTEM_APERTURE_UNMAPPED_ACCESS, 0);
-	tmp = REG_SET_FIELD(tmp, GCMC_VM_MX_L1_TLB_CNTL, ECO_BITS, 0);
 	tmp = REG_SET_FIELD(tmp, GCMC_VM_MX_L1_TLB_CNTL,
 			    MTYPE, MTYPE_UC); /* UC, uncached */
 
@@ -197,7 +197,6 @@ static void gfxhub_v2_1_init_tlb_regs(struct amdgpu_device *adev)
 			    ENABLE_ADVANCED_DRIVER_MODEL, 1);
 	tmp = REG_SET_FIELD(tmp, GCMC_VM_MX_L1_TLB_CNTL,
 			    SYSTEM_APERTURE_UNMAPPED_ACCESS, 0);
-	tmp = REG_SET_FIELD(tmp, GCMC_VM_MX_L1_TLB_CNTL, ECO_BITS, 0);
 	tmp = REG_SET_FIELD(tmp, GCMC_VM_MX_L1_TLB_CNTL,
 			    MTYPE, MTYPE_UC); /* UC, uncached */
 
@@ -145,7 +145,6 @@ static void mmhub_v1_0_init_tlb_regs(struct amdgpu_device *adev)
 			    ENABLE_ADVANCED_DRIVER_MODEL, 1);
 	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
 			    SYSTEM_APERTURE_UNMAPPED_ACCESS, 0);
-	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ECO_BITS, 0);
 	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
 			    MTYPE, MTYPE_UC);/* XXX for emulation. */
 	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ATC_EN, 1);
@@ -165,7 +165,6 @@ static void mmhub_v1_7_init_tlb_regs(struct amdgpu_device *adev)
 			    ENABLE_ADVANCED_DRIVER_MODEL, 1);
 	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
 			    SYSTEM_APERTURE_UNMAPPED_ACCESS, 0);
-	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ECO_BITS, 0);
 	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
 			    MTYPE, MTYPE_UC);/* XXX for emulation. */
 	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ATC_EN, 1);
@@ -269,7 +269,6 @@ static void mmhub_v2_0_init_tlb_regs(struct amdgpu_device *adev)
 			    ENABLE_ADVANCED_DRIVER_MODEL, 1);
 	tmp = REG_SET_FIELD(tmp, MMMC_VM_MX_L1_TLB_CNTL,
 			    SYSTEM_APERTURE_UNMAPPED_ACCESS, 0);
-	tmp = REG_SET_FIELD(tmp, MMMC_VM_MX_L1_TLB_CNTL, ECO_BITS, 0);
 	tmp = REG_SET_FIELD(tmp, MMMC_VM_MX_L1_TLB_CNTL,
 			    MTYPE, MTYPE_UC); /* UC, uncached */
 
@@ -194,7 +194,6 @@ static void mmhub_v2_3_init_tlb_regs(struct amdgpu_device *adev)
 			    ENABLE_ADVANCED_DRIVER_MODEL, 1);
 	tmp = REG_SET_FIELD(tmp, MMMC_VM_MX_L1_TLB_CNTL,
 			    SYSTEM_APERTURE_UNMAPPED_ACCESS, 0);
-	tmp = REG_SET_FIELD(tmp, MMMC_VM_MX_L1_TLB_CNTL, ECO_BITS, 0);
 	tmp = REG_SET_FIELD(tmp, MMMC_VM_MX_L1_TLB_CNTL,
 			    MTYPE, MTYPE_UC); /* UC, uncached */
 
@@ -189,8 +189,6 @@ static void mmhub_v9_4_init_tlb_regs(struct amdgpu_device *adev, int hubid)
 			    ENABLE_ADVANCED_DRIVER_MODEL, 1);
 	tmp = REG_SET_FIELD(tmp, VMSHAREDVC0_MC_VM_MX_L1_TLB_CNTL,
 			    SYSTEM_APERTURE_UNMAPPED_ACCESS, 0);
-	tmp = REG_SET_FIELD(tmp, VMSHAREDVC0_MC_VM_MX_L1_TLB_CNTL,
-			    ECO_BITS, 0);
 	tmp = REG_SET_FIELD(tmp, VMSHAREDVC0_MC_VM_MX_L1_TLB_CNTL,
 			    MTYPE, MTYPE_UC);/* XXX for emulation. */
 	tmp = REG_SET_FIELD(tmp, VMSHAREDVC0_MC_VM_MX_L1_TLB_CNTL,
@@ -100,6 +100,7 @@ static const struct hw_sequencer_funcs dcn31_funcs = {
 	.z10_save_init = dcn31_z10_save_init,
 	.is_abm_supported = dcn31_is_abm_supported,
 	.set_disp_pattern_generator = dcn30_set_disp_pattern_generator,
+	.exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state,
 	.update_visual_confirm_color = dcn20_update_visual_confirm_color,
 };
 
@@ -191,6 +191,9 @@ int smu_v12_0_fini_smc_tables(struct smu_context *smu)
 	kfree(smu_table->watermarks_table);
 	smu_table->watermarks_table = NULL;
 
+	kfree(smu_table->gpu_metrics_table);
+	smu_table->gpu_metrics_table = NULL;
+
 	return 0;
 }
 
@@ -197,6 +197,7 @@ int smu_v13_0_check_fw_status(struct smu_context *smu)
 
 int smu_v13_0_check_fw_version(struct smu_context *smu)
 {
+	struct amdgpu_device *adev = smu->adev;
 	uint32_t if_version = 0xff, smu_version = 0xff;
 	uint16_t smu_major;
 	uint8_t smu_minor, smu_debug;
@@ -209,6 +210,8 @@ int smu_v13_0_check_fw_version(struct smu_context *smu)
 	smu_major = (smu_version >> 16) & 0xffff;
 	smu_minor = (smu_version >> 8) & 0xff;
 	smu_debug = (smu_version >> 0) & 0xff;
+	if (smu->is_apu)
+		adev->pm.fw_version = smu_version;
 
 	switch (smu->adev->asic_type) {
 	case CHIP_ALDEBARAN:
@@ -1121,7 +1121,10 @@ static void ast_crtc_reset(struct drm_crtc *crtc)
 	if (crtc->state)
 		crtc->funcs->atomic_destroy_state(crtc, crtc->state);
 
-	__drm_atomic_helper_crtc_reset(crtc, &ast_state->base);
+	if (ast_state)
+		__drm_atomic_helper_crtc_reset(crtc, &ast_state->base);
+	else
+		__drm_atomic_helper_crtc_reset(crtc, NULL);
 }
 
 static struct drm_crtc_state *
@@ -1743,7 +1743,13 @@ void drm_fb_helper_fill_info(struct fb_info *info,
 			sizes->fb_width, sizes->fb_height);
 
 	info->par = fb_helper;
-	snprintf(info->fix.id, sizeof(info->fix.id), "%s",
+	/*
+	 * The DRM drivers fbdev emulation device name can be confusing if the
+	 * driver name also has a "drm" suffix on it. Leading to names such as
+	 * "simpledrmdrmfb" in /proc/fb. Unfortunately, it's an uAPI and can't
+	 * be changed due user-space tools (e.g: pm-utils) matching against it.
+	 */
+	snprintf(info->fix.id, sizeof(info->fix.id), "%sdrmfb",
 		 fb_helper->dev->driver->name);
 
 }
@@ -584,6 +584,7 @@ void g4x_hdmi_init(struct drm_i915_private *dev_priv,
 	else
 		intel_encoder->enable = g4x_enable_hdmi;
 	}
+	intel_encoder->shutdown = intel_hdmi_encoder_shutdown;
 
 	intel_encoder->type = INTEL_OUTPUT_HDMI;
 	intel_encoder->power_domain = intel_port_to_power_domain(port);
@@ -4432,6 +4432,7 @@ static void intel_ddi_encoder_shutdown(struct intel_encoder *encoder)
 	enum phy phy = intel_port_to_phy(i915, encoder->port);
 
 	intel_dp_encoder_shutdown(encoder);
+	intel_hdmi_encoder_shutdown(encoder);
 
 	if (!intel_phy_is_tc(i915, phy))
 		return;
@@ -606,7 +606,7 @@ static void parse_dmc_fw(struct drm_i915_private *dev_priv,
 			continue;
 
 		offset = readcount + dmc->dmc_info[id].dmc_offset * 4;
-		if (fw->size - offset < 0) {
+		if (offset > fw->size) {
 			drm_err(&dev_priv->drm, "Reading beyond the fw_size\n");
 			continue;
 		}
@@ -53,21 +53,20 @@
|
|||||||
#include "intel_panel.h"
|
#include "intel_panel.h"
|
||||||
#include "intel_snps_phy.h"
|
#include "intel_snps_phy.h"
|
||||||
|
|
||||||
static struct drm_device *intel_hdmi_to_dev(struct intel_hdmi *intel_hdmi)
|
static struct drm_i915_private *intel_hdmi_to_i915(struct intel_hdmi *intel_hdmi)
|
||||||
{
|
{
|
||||||
return hdmi_to_dig_port(intel_hdmi)->base.base.dev;
|
return to_i915(hdmi_to_dig_port(intel_hdmi)->base.base.dev);
|
||||||
}
|
}
|
||||||
|
|
||||||
static void
|
static void
|
||||||
assert_hdmi_port_disabled(struct intel_hdmi *intel_hdmi)
|
assert_hdmi_port_disabled(struct intel_hdmi *intel_hdmi)
|
||||||
{
|
{
|
||||||
struct drm_device *dev = intel_hdmi_to_dev(intel_hdmi);
|
struct drm_i915_private *dev_priv = intel_hdmi_to_i915(intel_hdmi);
|
||||||
struct drm_i915_private *dev_priv = to_i915(dev);
|
|
||||||
u32 enabled_bits;
|
u32 enabled_bits;
|
||||||
|
|
||||||
enabled_bits = HAS_DDI(dev_priv) ? DDI_BUF_CTL_ENABLE : SDVO_ENABLE;
|
enabled_bits = HAS_DDI(dev_priv) ? DDI_BUF_CTL_ENABLE : SDVO_ENABLE;
|
||||||
|
|
||||||
drm_WARN(dev,
|
drm_WARN(&dev_priv->drm,
|
||||||
intel_de_read(dev_priv, intel_hdmi->hdmi_reg) & enabled_bits,
|
intel_de_read(dev_priv, intel_hdmi->hdmi_reg) & enabled_bits,
|
||||||
"HDMI port enabled, expecting disabled\n");
|
"HDMI port enabled, expecting disabled\n");
|
||||||
}
|
}
|
||||||
@@ -1246,13 +1245,14 @@ static void hsw_set_infoframes(struct intel_encoder *encoder,
|
|||||||
|
|
||||||
void intel_dp_dual_mode_set_tmds_output(struct intel_hdmi *hdmi, bool enable)
|
void intel_dp_dual_mode_set_tmds_output(struct intel_hdmi *hdmi, bool enable)
|
||||||
{
|
{
|
||||||
struct drm_i915_private *dev_priv = to_i915(intel_hdmi_to_dev(hdmi));
|
struct drm_i915_private *dev_priv = intel_hdmi_to_i915(hdmi);
|
||||||
struct i2c_adapter *adapter =
|
struct i2c_adapter *adapter;
|
||||||
intel_gmbus_get_adapter(dev_priv, hdmi->ddc_bus);
|
|
||||||
|
|
||||||
if (hdmi->dp_dual_mode.type < DRM_DP_DUAL_MODE_TYPE2_DVI)
|
if (hdmi->dp_dual_mode.type < DRM_DP_DUAL_MODE_TYPE2_DVI)
|
||||||
return;
|
return;
|
||||||
|
|
||||||
|
adapter = intel_gmbus_get_adapter(dev_priv, hdmi->ddc_bus);
|
||||||
|
|
||||||
drm_dbg_kms(&dev_priv->drm, "%s DP dual mode adaptor TMDS output\n",
|
drm_dbg_kms(&dev_priv->drm, "%s DP dual mode adaptor TMDS output\n",
|
||||||
enable ? "Enabling" : "Disabling");
|
enable ? "Enabling" : "Disabling");
|
||||||
|
|
||||||
@@ -1830,7 +1830,7 @@ hdmi_port_clock_valid(struct intel_hdmi *hdmi,
 		      int clock, bool respect_downstream_limits,
 		      bool has_hdmi_sink)
 {
-	struct drm_i915_private *dev_priv = to_i915(intel_hdmi_to_dev(hdmi));
+	struct drm_i915_private *dev_priv = intel_hdmi_to_i915(hdmi);
 
 	if (clock < 25000)
 		return MODE_CLOCK_LOW;
@@ -1946,8 +1946,7 @@ intel_hdmi_mode_valid(struct drm_connector *connector,
 		      struct drm_display_mode *mode)
 {
 	struct intel_hdmi *hdmi = intel_attached_hdmi(to_intel_connector(connector));
-	struct drm_device *dev = intel_hdmi_to_dev(hdmi);
-	struct drm_i915_private *dev_priv = to_i915(dev);
+	struct drm_i915_private *dev_priv = intel_hdmi_to_i915(hdmi);
 	enum drm_mode_status status;
 	int clock = mode->clock;
 	int max_dotclk = to_i915(connector->dev)->max_dotclk_freq;
@@ -2260,6 +2259,17 @@ int intel_hdmi_compute_config(struct intel_encoder *encoder,
 	return 0;
 }
 
+void intel_hdmi_encoder_shutdown(struct intel_encoder *encoder)
+{
+	struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(encoder);
+
+	/*
+	 * Give a hand to buggy BIOSen which forget to turn
+	 * the TMDS output buffers back on after a reboot.
+	 */
+	intel_dp_dual_mode_set_tmds_output(intel_hdmi, true);
+}
+
 static void
 intel_hdmi_unset_edid(struct drm_connector *connector)
 {
--- a/drivers/gpu/drm/i915/display/intel_hdmi.h
+++ b/drivers/gpu/drm/i915/display/intel_hdmi.h
@@ -28,6 +28,7 @@ void intel_hdmi_init_connector(struct intel_digital_port *dig_port,
 int intel_hdmi_compute_config(struct intel_encoder *encoder,
 			      struct intel_crtc_state *pipe_config,
 			      struct drm_connector_state *conn_state);
+void intel_hdmi_encoder_shutdown(struct intel_encoder *encoder);
 bool intel_hdmi_handle_sink_scrambling(struct intel_encoder *encoder,
 				       struct drm_connector *connector,
 				       bool high_tmds_clock_ratio,
--- a/drivers/gpu/drm/tiny/simpledrm.c
+++ b/drivers/gpu/drm/tiny/simpledrm.c
@@ -458,7 +458,7 @@ static struct drm_display_mode simpledrm_mode(unsigned int width,
 {
 	struct drm_display_mode mode = { SIMPLEDRM_MODE(width, height) };
 
-	mode.clock = 60 /* Hz */ * mode.hdisplay * mode.vdisplay;
+	mode.clock = mode.hdisplay * mode.vdisplay * 60 / 1000 /* kHz */;
 	drm_mode_set_name(&mode);
 
 	return mode;
--- a/drivers/md/persistent-data/dm-btree-remove.c
+++ b/drivers/md/persistent-data/dm-btree-remove.c
@@ -423,9 +423,9 @@ static int rebalance_children(struct shadow_spine *s,
 
 		memcpy(n, dm_block_data(child),
 		       dm_bm_block_size(dm_tm_get_bm(info->tm)));
-		dm_tm_unlock(info->tm, child);
 
 		dm_tm_dec(info->tm, dm_block_location(child));
+		dm_tm_unlock(info->tm, child);
 		return 0;
 	}
 
--- a/drivers/media/usb/dvb-usb-v2/mxl111sf.c
+++ b/drivers/media/usb/dvb-usb-v2/mxl111sf.c
@@ -931,8 +931,6 @@ static int mxl111sf_init(struct dvb_usb_device *d)
 			   .len = sizeof(eeprom), .buf = eeprom },
 	};
 
-	mutex_init(&state->msg_lock);
-
 	ret = get_chip_info(state);
 	if (mxl_fail(ret))
 		pr_err("failed to get chip info during probe");
@@ -1074,6 +1072,14 @@ static int mxl111sf_get_stream_config_dvbt(struct dvb_frontend *fe,
 	return 0;
 }
 
+static int mxl111sf_probe(struct dvb_usb_device *dev)
+{
+	struct mxl111sf_state *state = d_to_priv(dev);
+
+	mutex_init(&state->msg_lock);
+	return 0;
+}
+
 static struct dvb_usb_device_properties mxl111sf_props_dvbt = {
 	.driver_name = KBUILD_MODNAME,
 	.owner = THIS_MODULE,
@@ -1083,6 +1089,7 @@ static struct dvb_usb_device_properties mxl111sf_props_dvbt = {
 	.generic_bulk_ctrl_endpoint = 0x02,
 	.generic_bulk_ctrl_endpoint_response = 0x81,
 
+	.probe = mxl111sf_probe,
 	.i2c_algo = &mxl111sf_i2c_algo,
 	.frontend_attach = mxl111sf_frontend_attach_dvbt,
 	.tuner_attach = mxl111sf_attach_tuner,
@@ -1124,6 +1131,7 @@ static struct dvb_usb_device_properties mxl111sf_props_atsc = {
 	.generic_bulk_ctrl_endpoint = 0x02,
 	.generic_bulk_ctrl_endpoint_response = 0x81,
 
+	.probe = mxl111sf_probe,
 	.i2c_algo = &mxl111sf_i2c_algo,
 	.frontend_attach = mxl111sf_frontend_attach_atsc,
 	.tuner_attach = mxl111sf_attach_tuner,
@@ -1165,6 +1173,7 @@ static struct dvb_usb_device_properties mxl111sf_props_mh = {
 	.generic_bulk_ctrl_endpoint = 0x02,
 	.generic_bulk_ctrl_endpoint_response = 0x81,
 
+	.probe = mxl111sf_probe,
 	.i2c_algo = &mxl111sf_i2c_algo,
 	.frontend_attach = mxl111sf_frontend_attach_mh,
 	.tuner_attach = mxl111sf_attach_tuner,
@@ -1233,6 +1242,7 @@ static struct dvb_usb_device_properties mxl111sf_props_atsc_mh = {
 	.generic_bulk_ctrl_endpoint = 0x02,
 	.generic_bulk_ctrl_endpoint_response = 0x81,
 
+	.probe = mxl111sf_probe,
 	.i2c_algo = &mxl111sf_i2c_algo,
 	.frontend_attach = mxl111sf_frontend_attach_atsc_mh,
 	.tuner_attach = mxl111sf_attach_tuner,
@@ -1311,6 +1321,7 @@ static struct dvb_usb_device_properties mxl111sf_props_mercury = {
 	.generic_bulk_ctrl_endpoint = 0x02,
 	.generic_bulk_ctrl_endpoint_response = 0x81,
 
+	.probe = mxl111sf_probe,
 	.i2c_algo = &mxl111sf_i2c_algo,
 	.frontend_attach = mxl111sf_frontend_attach_mercury,
 	.tuner_attach = mxl111sf_attach_tuner,
@@ -1381,6 +1392,7 @@ static struct dvb_usb_device_properties mxl111sf_props_mercury_mh = {
 	.generic_bulk_ctrl_endpoint = 0x02,
 	.generic_bulk_ctrl_endpoint_response = 0x81,
 
+	.probe = mxl111sf_probe,
 	.i2c_algo = &mxl111sf_i2c_algo,
 	.frontend_attach = mxl111sf_frontend_attach_mercury_mh,
 	.tuner_attach = mxl111sf_attach_tuner,
--- a/drivers/net/can/m_can/m_can.c
+++ b/drivers/net/can/m_can/m_can.c
@@ -1494,20 +1494,32 @@ static int m_can_dev_setup(struct m_can_classdev *cdev)
 	case 30:
 		/* CAN_CTRLMODE_FD_NON_ISO is fixed with M_CAN IP v3.0.x */
 		can_set_static_ctrlmode(dev, CAN_CTRLMODE_FD_NON_ISO);
-		cdev->can.bittiming_const = &m_can_bittiming_const_30X;
-		cdev->can.data_bittiming_const = &m_can_data_bittiming_const_30X;
+		cdev->can.bittiming_const = cdev->bit_timing ?
+			cdev->bit_timing : &m_can_bittiming_const_30X;
+
+		cdev->can.data_bittiming_const = cdev->data_timing ?
+			cdev->data_timing :
+			&m_can_data_bittiming_const_30X;
 		break;
 	case 31:
 		/* CAN_CTRLMODE_FD_NON_ISO is fixed with M_CAN IP v3.1.x */
 		can_set_static_ctrlmode(dev, CAN_CTRLMODE_FD_NON_ISO);
-		cdev->can.bittiming_const = &m_can_bittiming_const_31X;
-		cdev->can.data_bittiming_const = &m_can_data_bittiming_const_31X;
+		cdev->can.bittiming_const = cdev->bit_timing ?
+			cdev->bit_timing : &m_can_bittiming_const_31X;
+
+		cdev->can.data_bittiming_const = cdev->data_timing ?
+			cdev->data_timing :
+			&m_can_data_bittiming_const_31X;
 		break;
 	case 32:
 	case 33:
 		/* Support both MCAN version v3.2.x and v3.3.0 */
-		cdev->can.bittiming_const = &m_can_bittiming_const_31X;
-		cdev->can.data_bittiming_const = &m_can_data_bittiming_const_31X;
+		cdev->can.bittiming_const = cdev->bit_timing ?
+			cdev->bit_timing : &m_can_bittiming_const_31X;
+
+		cdev->can.data_bittiming_const = cdev->data_timing ?
+			cdev->data_timing :
+			&m_can_data_bittiming_const_31X;
 
 		cdev->can.ctrlmode_supported |=
 			(m_can_niso_supported(cdev) ?
--- a/drivers/net/can/m_can/m_can.h
+++ b/drivers/net/can/m_can/m_can.h
@@ -85,6 +85,9 @@ struct m_can_classdev {
 	struct sk_buff *tx_skb;
 	struct phy *transceiver;
 
+	const struct can_bittiming_const *bit_timing;
+	const struct can_bittiming_const *data_timing;
+
 	struct m_can_ops *ops;
 
 	int version;
--- a/drivers/net/can/m_can/m_can_pci.c
+++ b/drivers/net/can/m_can/m_can_pci.c
@@ -18,9 +18,14 @@
 
 #define M_CAN_PCI_MMIO_BAR		0
 
-#define M_CAN_CLOCK_FREQ_EHL		200000000
 #define CTL_CSR_INT_CTL_OFFSET		0x508
 
+struct m_can_pci_config {
+	const struct can_bittiming_const *bit_timing;
+	const struct can_bittiming_const *data_timing;
+	unsigned int clock_freq;
+};
+
 struct m_can_pci_priv {
 	struct m_can_classdev cdev;
 
@@ -84,9 +89,40 @@ static struct m_can_ops m_can_pci_ops = {
 	.read_fifo = iomap_read_fifo,
 };
 
+static const struct can_bittiming_const m_can_bittiming_const_ehl = {
+	.name = KBUILD_MODNAME,
+	.tseg1_min = 2,		/* Time segment 1 = prop_seg + phase_seg1 */
+	.tseg1_max = 64,
+	.tseg2_min = 1,		/* Time segment 2 = phase_seg2 */
+	.tseg2_max = 128,
+	.sjw_max = 128,
+	.brp_min = 1,
+	.brp_max = 512,
+	.brp_inc = 1,
+};
+
+static const struct can_bittiming_const m_can_data_bittiming_const_ehl = {
+	.name = KBUILD_MODNAME,
+	.tseg1_min = 2,		/* Time segment 1 = prop_seg + phase_seg1 */
+	.tseg1_max = 16,
+	.tseg2_min = 1,		/* Time segment 2 = phase_seg2 */
+	.tseg2_max = 8,
+	.sjw_max = 4,
+	.brp_min = 1,
+	.brp_max = 32,
+	.brp_inc = 1,
+};
+
+static const struct m_can_pci_config m_can_pci_ehl = {
+	.bit_timing = &m_can_bittiming_const_ehl,
+	.data_timing = &m_can_data_bittiming_const_ehl,
+	.clock_freq = 200000000,
+};
+
 static int m_can_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
 {
 	struct device *dev = &pci->dev;
+	const struct m_can_pci_config *cfg;
 	struct m_can_classdev *mcan_class;
 	struct m_can_pci_priv *priv;
 	void __iomem *base;
@@ -114,6 +150,8 @@ static int m_can_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
 	if (!mcan_class)
 		return -ENOMEM;
 
+	cfg = (const struct m_can_pci_config *)id->driver_data;
+
 	priv = cdev_to_priv(mcan_class);
 
 	priv->base = base;
@@ -125,7 +163,9 @@ static int m_can_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
 	mcan_class->dev = &pci->dev;
 	mcan_class->net->irq = pci_irq_vector(pci, 0);
 	mcan_class->pm_clock_support = 1;
-	mcan_class->can.clock.freq = id->driver_data;
+	mcan_class->bit_timing = cfg->bit_timing;
+	mcan_class->data_timing = cfg->data_timing;
+	mcan_class->can.clock.freq = cfg->clock_freq;
 	mcan_class->ops = &m_can_pci_ops;
 
 	pci_set_drvdata(pci, mcan_class);
@@ -178,8 +218,8 @@ static SIMPLE_DEV_PM_OPS(m_can_pci_pm_ops,
 			 m_can_pci_suspend, m_can_pci_resume);
 
 static const struct pci_device_id m_can_pci_id_table[] = {
-	{ PCI_VDEVICE(INTEL, 0x4bc1), M_CAN_CLOCK_FREQ_EHL, },
-	{ PCI_VDEVICE(INTEL, 0x4bc2), M_CAN_CLOCK_FREQ_EHL, },
+	{ PCI_VDEVICE(INTEL, 0x4bc1), (kernel_ulong_t)&m_can_pci_ehl, },
+	{ PCI_VDEVICE(INTEL, 0x4bc2), (kernel_ulong_t)&m_can_pci_ehl, },
 	{  }	/* Terminating Entry */
 };
 MODULE_DEVICE_TABLE(pci, m_can_pci_id_table);
--- a/drivers/net/dsa/mv88e6xxx/chip.c
+++ b/drivers/net/dsa/mv88e6xxx/chip.c
@@ -769,6 +769,10 @@ static void mv88e6xxx_mac_link_down(struct dsa_switch *ds, int port,
 	if ((!mv88e6xxx_port_ppu_updates(chip, port) ||
 	     mode == MLO_AN_FIXED) && ops->port_sync_link)
 		err = ops->port_sync_link(chip, port, mode, false);
+
+	if (!err && ops->port_set_speed_duplex)
+		err = ops->port_set_speed_duplex(chip, port, SPEED_UNFORCED,
+						 DUPLEX_UNFORCED);
 	mv88e6xxx_reg_unlock(chip);
 
 	if (err)
--- a/drivers/net/dsa/mv88e6xxx/port.c
+++ b/drivers/net/dsa/mv88e6xxx/port.c
@@ -283,7 +283,7 @@ static int mv88e6xxx_port_set_speed_duplex(struct mv88e6xxx_chip *chip,
 	if (err)
 		return err;
 
-	if (speed)
+	if (speed != SPEED_UNFORCED)
 		dev_dbg(chip->dev, "p%d: Speed set to %d Mbps\n", port, speed);
 	else
 		dev_dbg(chip->dev, "p%d: Speed unforced\n", port);
@@ -516,7 +516,7 @@ int mv88e6393x_port_set_speed_duplex(struct mv88e6xxx_chip *chip, int port,
 	if (err)
 		return err;
 
-	if (speed)
+	if (speed != SPEED_UNFORCED)
 		dev_dbg(chip->dev, "p%d: Speed set to %d Mbps\n", port, speed);
 	else
 		dev_dbg(chip->dev, "p%d: Speed unforced\n", port);
--- a/drivers/net/ethernet/broadcom/bcmsysport.c
+++ b/drivers/net/ethernet/broadcom/bcmsysport.c
@@ -1309,11 +1309,11 @@ static netdev_tx_t bcm_sysport_xmit(struct sk_buff *skb,
 	struct bcm_sysport_priv *priv = netdev_priv(dev);
 	struct device *kdev = &priv->pdev->dev;
 	struct bcm_sysport_tx_ring *ring;
+	unsigned long flags, desc_flags;
 	struct bcm_sysport_cb *cb;
 	struct netdev_queue *txq;
 	u32 len_status, addr_lo;
 	unsigned int skb_len;
-	unsigned long flags;
 	dma_addr_t mapping;
 	u16 queue;
 	int ret;
@@ -1373,8 +1373,10 @@ static netdev_tx_t bcm_sysport_xmit(struct sk_buff *skb,
 	ring->desc_count--;
 
 	/* Ports are latched, so write upper address first */
+	spin_lock_irqsave(&priv->desc_lock, desc_flags);
 	tdma_writel(priv, len_status, TDMA_WRITE_PORT_HI(ring->index));
 	tdma_writel(priv, addr_lo, TDMA_WRITE_PORT_LO(ring->index));
+	spin_unlock_irqrestore(&priv->desc_lock, desc_flags);
 
 	/* Check ring space and update SW control flow */
 	if (ring->desc_count == 0)
@@ -2013,6 +2015,7 @@ static int bcm_sysport_open(struct net_device *dev)
 	}
 
 	/* Initialize both hardware and software ring */
+	spin_lock_init(&priv->desc_lock);
 	for (i = 0; i < dev->num_tx_queues; i++) {
 		ret = bcm_sysport_init_tx_ring(priv, i);
 		if (ret) {
--- a/drivers/net/ethernet/broadcom/bcmsysport.h
+++ b/drivers/net/ethernet/broadcom/bcmsysport.h
@@ -711,6 +711,7 @@ struct bcm_sysport_priv {
 	int			wol_irq;
 
 	/* Transmit rings */
+	spinlock_t		desc_lock;
 	struct bcm_sysport_tx_ring *tx_rings;
 
 	/* Receive queue */
--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
@@ -830,6 +830,8 @@ struct hnae3_handle {
 
 	u8 netdev_flags;
 	struct dentry *hnae3_dbgfs;
+	/* protects concurrent contention between debugfs commands */
+	struct mutex dbgfs_lock;
 
 	/* Network interface message level enabled bits */
 	u32 msg_enable;
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
@@ -1021,6 +1021,7 @@ static ssize_t hns3_dbg_read(struct file *filp, char __user *buffer,
 	if (ret)
 		return ret;
 
+	mutex_lock(&handle->dbgfs_lock);
 	save_buf = &hns3_dbg_cmd[index].buf;
 
 	if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) ||
@@ -1033,15 +1034,15 @@ static ssize_t hns3_dbg_read(struct file *filp, char __user *buffer,
 		read_buf = *save_buf;
 	} else {
 		read_buf = kvzalloc(hns3_dbg_cmd[index].buf_len, GFP_KERNEL);
-		if (!read_buf)
-			return -ENOMEM;
+		if (!read_buf) {
+			ret = -ENOMEM;
+			goto out;
+		}
 
 		/* save the buffer addr until the last read operation */
 		*save_buf = read_buf;
-	}
 
-	/* get data ready for the first time to read */
-	if (!*ppos) {
+		/* get data ready for the first time to read */
 		ret = hns3_dbg_read_cmd(dbg_data, hns3_dbg_cmd[index].cmd,
 					read_buf, hns3_dbg_cmd[index].buf_len);
 		if (ret)
@@ -1050,8 +1051,10 @@ static ssize_t hns3_dbg_read(struct file *filp, char __user *buffer,
 
 	size = simple_read_from_buffer(buffer, count, ppos, read_buf,
 				       strlen(read_buf));
-	if (size > 0)
+	if (size > 0) {
+		mutex_unlock(&handle->dbgfs_lock);
 		return size;
+	}
 
 out:
 	/* free the buffer for the last read operation */
@@ -1060,6 +1063,7 @@ out:
 		*save_buf = NULL;
 	}
 
+	mutex_unlock(&handle->dbgfs_lock);
 	return ret;
 }
 
@@ -1132,6 +1136,8 @@ int hns3_dbg_init(struct hnae3_handle *handle)
 			debugfs_create_dir(hns3_dbg_dentry[i].name,
 					   handle->hnae3_dbgfs);
 
+	mutex_init(&handle->dbgfs_lock);
+
 	for (i = 0; i < ARRAY_SIZE(hns3_dbg_cmd); i++) {
 		if ((hns3_dbg_cmd[i].cmd == HNAE3_DBG_CMD_TM_NODES &&
 		     ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2) ||
@@ -1158,6 +1164,7 @@ int hns3_dbg_init(struct hnae3_handle *handle)
 	return 0;
 
 out:
+	mutex_destroy(&handle->dbgfs_lock);
 	debugfs_remove_recursive(handle->hnae3_dbgfs);
 	handle->hnae3_dbgfs = NULL;
 	return ret;
@@ -1173,6 +1180,7 @@ void hns3_dbg_uninit(struct hnae3_handle *handle)
 		hns3_dbg_cmd[i].buf = NULL;
 	}
 
+	mutex_destroy(&handle->dbgfs_lock);
 	debugfs_remove_recursive(handle->hnae3_dbgfs);
 	handle->hnae3_dbgfs = NULL;
 }
--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
@@ -114,7 +114,8 @@ int hclgevf_send_mbx_msg(struct hclgevf_dev *hdev,
 
 	memcpy(&req->msg, send_msg, sizeof(struct hclge_vf_to_pf_msg));
 
-	trace_hclge_vf_mbx_send(hdev, req);
+	if (test_bit(HCLGEVF_STATE_NIC_REGISTERED, &hdev->state))
+		trace_hclge_vf_mbx_send(hdev, req);
 
 	/* synchronous send */
 	if (need_resp) {
--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
+++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
@@ -459,7 +459,7 @@ static int ice_ptp_adjfine(struct ptp_clock_info *info, long scaled_ppm)
 		scaled_ppm = -scaled_ppm;
 	}
 
-	while ((u64)scaled_ppm > div_u64(U64_MAX, incval)) {
+	while ((u64)scaled_ppm > div64_u64(U64_MAX, incval)) {
		/* handle overflow by scaling down the scaled_ppm and
		 * the divisor, losing some precision
		 */
@@ -1182,19 +1182,16 @@ static void ice_ptp_tx_tstamp_work(struct kthread_work *work)
 		if (err)
 			continue;
 
-		/* Check if the timestamp is valid */
-		if (!(raw_tstamp & ICE_PTP_TS_VALID))
+		/* Check if the timestamp is invalid or stale */
+		if (!(raw_tstamp & ICE_PTP_TS_VALID) ||
+		    raw_tstamp == tx->tstamps[idx].cached_tstamp)
 			continue;
 
-		/* clear the timestamp register, so that it won't show valid
-		 * again when re-used.
-		 */
-		ice_clear_phy_tstamp(hw, tx->quad, phy_idx);
-
 		/* The timestamp is valid, so we'll go ahead and clear this
 		 * index and then send the timestamp up to the stack.
 		 */
 		spin_lock(&tx->lock);
+		tx->tstamps[idx].cached_tstamp = raw_tstamp;
 		clear_bit(idx, tx->in_use);
 		skb = tx->tstamps[idx].skb;
 		tx->tstamps[idx].skb = NULL;
--- a/drivers/net/ethernet/intel/ice/ice_ptp.h
+++ b/drivers/net/ethernet/intel/ice/ice_ptp.h
@@ -46,15 +46,21 @@ struct ice_perout_channel {
  * struct ice_tx_tstamp - Tracking for a single Tx timestamp
  * @skb: pointer to the SKB for this timestamp request
  * @start: jiffies when the timestamp was first requested
+ * @cached_tstamp: last read timestamp
  *
  * This structure tracks a single timestamp request. The SKB pointer is
  * provided when initiating a request. The start time is used to ensure that
  * we discard old requests that were not fulfilled within a 2 second time
  * window.
+ * Timestamp values in the PHY are read only and do not get cleared except at
+ * hardware reset or when a new timestamp value is captured. The cached_tstamp
+ * field is used to detect the case where a new timestamp has not yet been
+ * captured, ensuring that we avoid sending stale timestamp data to the stack.
  */
 struct ice_tx_tstamp {
 	struct sk_buff *skb;
 	unsigned long start;
+	u64 cached_tstamp;
 };
 
 /**
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -7641,6 +7641,20 @@ static int igb_set_vf_mac_filter(struct igb_adapter *adapter, const int vf,
 	struct vf_mac_filter *entry = NULL;
 	int ret = 0;
 
+	if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) &&
+	    !vf_data->trusted) {
+		dev_warn(&pdev->dev,
+			 "VF %d requested MAC filter but is administratively denied\n",
+			 vf);
+		return -EINVAL;
+	}
+	if (!is_valid_ether_addr(addr)) {
+		dev_warn(&pdev->dev,
+			 "VF %d attempted to set invalid MAC filter\n",
+			 vf);
+		return -EINVAL;
+	}
+
 	switch (info) {
 	case E1000_VF_MAC_FILTER_CLR:
 		/* remove all unicast MAC filters related to the current VF */
@@ -7654,20 +7668,6 @@ static int igb_set_vf_mac_filter(struct igb_adapter *adapter, const int vf,
 		}
 		break;
 	case E1000_VF_MAC_FILTER_ADD:
-		if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) &&
-		    !vf_data->trusted) {
-			dev_warn(&pdev->dev,
-				 "VF %d requested MAC filter but is administratively denied\n",
-				 vf);
-			return -EINVAL;
-		}
-		if (!is_valid_ether_addr(addr)) {
-			dev_warn(&pdev->dev,
-				 "VF %d attempted to set invalid MAC filter\n",
-				 vf);
-			return -EINVAL;
-		}
-
 		/* try to find empty slot in the list */
 		list_for_each(pos, &adapter->vf_macs.l) {
 			entry = list_entry(pos, struct vf_mac_filter, l);
--- a/drivers/net/ethernet/intel/igbvf/netdev.c
+++ b/drivers/net/ethernet/intel/igbvf/netdev.c
@@ -2861,6 +2861,7 @@ static int igbvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	return 0;
 
 err_hw_init:
+	netif_napi_del(&adapter->rx_ring->napi);
 	kfree(adapter->tx_ring);
 	kfree(adapter->rx_ring);
 err_sw_init:
--- a/drivers/net/ethernet/intel/igc/igc_i225.c
+++ b/drivers/net/ethernet/intel/igc/igc_i225.c
@@ -636,7 +636,7 @@ s32 igc_set_ltr_i225(struct igc_hw *hw, bool link)
 		ltrv = rd32(IGC_LTRMAXV);
 		if (ltr_max != (ltrv & IGC_LTRMAXV_LTRV_MASK)) {
 			ltrv = IGC_LTRMAXV_LSNP_REQ | ltr_max |
-			       (scale_min << IGC_LTRMAXV_SCALE_SHIFT);
+			       (scale_max << IGC_LTRMAXV_SCALE_SHIFT);
 			wr32(IGC_LTRMAXV, ltrv);
 		}
 	}
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -5526,6 +5526,10 @@ static int ixgbe_non_sfp_link_config(struct ixgbe_hw *hw)
 	if (!speed && hw->mac.ops.get_link_capabilities) {
 		ret = hw->mac.ops.get_link_capabilities(hw, &speed,
 							&autoneg);
+		/* remove NBASE-T speeds from default autonegotiation
+		 * to accommodate broken network switches in the field
+		 * which cannot cope with advertised NBASE-T speeds
+		 */
 		speed &= ~(IXGBE_LINK_SPEED_5GB_FULL |
 			   IXGBE_LINK_SPEED_2_5GB_FULL);
 	}
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
@@ -3405,6 +3405,9 @@ static s32 ixgbe_reset_hw_X550em(struct ixgbe_hw *hw)
 	/* flush pending Tx transactions */
 	ixgbe_clear_tx_pending(hw);
 
+	/* set MDIO speed before talking to the PHY in case it's the 1st time */
+	ixgbe_set_mdio_speed(hw);
+
 	/* PHY ops must be identified and initialized prior to reset */
 	status = hw->phy.ops.init(hw);
 	if (status == IXGBE_ERR_SFP_NOT_SUPPORTED ||
@@ -609,6 +609,9 @@ static size_t ef100_update_stats(struct efx_nic *efx,
|
|||||||
ef100_common_stat_mask(mask);
|
ef100_common_stat_mask(mask);
|
||||||
ef100_ethtool_stat_mask(mask);
|
ef100_ethtool_stat_mask(mask);
|
||||||
|
|
||||||
|
if (!mc_stats)
|
||||||
|
return 0;
|
||||||
|
|
||||||
efx_nic_copy_stats(efx, mc_stats);
|
efx_nic_copy_stats(efx, mc_stats);
|
||||||
efx_nic_update_stats(ef100_stat_desc, EF100_STAT_COUNT, mask,
|
efx_nic_update_stats(ef100_stat_desc, EF100_STAT_COUNT, mask,
|
||||||
stats, mc_stats, false);
|
stats, mc_stats, false);
|
||||||
|
|||||||
@@ -33,6 +33,7 @@ struct rk_gmac_ops {
|
|||||||
void (*set_rgmii_speed)(struct rk_priv_data *bsp_priv, int speed);
|
void (*set_rgmii_speed)(struct rk_priv_data *bsp_priv, int speed);
|
||||||
void (*set_rmii_speed)(struct rk_priv_data *bsp_priv, int speed);
|
void (*set_rmii_speed)(struct rk_priv_data *bsp_priv, int speed);
|
||||||
void (*integrated_phy_powerup)(struct rk_priv_data *bsp_priv);
|
void (*integrated_phy_powerup)(struct rk_priv_data *bsp_priv);
|
||||||
|
bool regs_valid;
|
||||||
u32 regs[];
|
u32 regs[];
|
||||||
};
|
};
|
||||||
|
|
||||||
@@ -1092,6 +1093,7 @@ static const struct rk_gmac_ops rk3568_ops = {
|
|||||||
.set_to_rmii = rk3568_set_to_rmii,
|
.set_to_rmii = rk3568_set_to_rmii,
|
||||||
.set_rgmii_speed = rk3568_set_gmac_speed,
|
.set_rgmii_speed = rk3568_set_gmac_speed,
|
||||||
.set_rmii_speed = rk3568_set_gmac_speed,
|
.set_rmii_speed = rk3568_set_gmac_speed,
|
||||||
|
.regs_valid = true,
|
||||||
.regs = {
|
.regs = {
|
||||||
0xfe2a0000, /* gmac0 */
|
0xfe2a0000, /* gmac0 */
|
||||||
0xfe010000, /* gmac1 */
|
0xfe010000, /* gmac1 */
|
||||||
@@ -1383,7 +1385,7 @@ static struct rk_priv_data *rk_gmac_setup(struct platform_device *pdev,
|
|||||||
* to be distinguished.
|
* to be distinguished.
|
||||||
*/
|
*/
|
||||||
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
|
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
|
||||||
if (res) {
|
if (res && ops->regs_valid) {
|
||||||
int i = 0;
|
int i = 0;
|
||||||
|
|
||||||
while (ops->regs[i]) {
|
while (ops->regs[i]) {
|
||||||
|
|||||||
@@ -172,6 +172,19 @@ struct stmmac_flow_entry {
|
|||||||
int is_l4;
|
int is_l4;
|
||||||
};
|
};
|
||||||
|
|
||||||
|
/* Rx Frame Steering */
|
||||||
|
enum stmmac_rfs_type {
|
||||||
|
STMMAC_RFS_T_VLAN,
|
||||||
|
STMMAC_RFS_T_MAX,
|
||||||
|
};
|
||||||
|
|
||||||
|
struct stmmac_rfs_entry {
|
||||||
|
unsigned long cookie;
|
||||||
|
int in_use;
|
||||||
|
int type;
|
||||||
|
int tc;
|
||||||
|
};
|
||||||
|
|
||||||
struct stmmac_priv {
|
struct stmmac_priv {
|
||||||
/* Frequently used values are kept adjacent for cache effect */
|
/* Frequently used values are kept adjacent for cache effect */
|
||||||
u32 tx_coal_frames[MTL_MAX_TX_QUEUES];
|
u32 tx_coal_frames[MTL_MAX_TX_QUEUES];
|
||||||
@@ -289,6 +302,10 @@ struct stmmac_priv {
|
|||||||
struct stmmac_tc_entry *tc_entries;
|
struct stmmac_tc_entry *tc_entries;
|
||||||
unsigned int flow_entries_max;
|
unsigned int flow_entries_max;
|
||||||
struct stmmac_flow_entry *flow_entries;
|
struct stmmac_flow_entry *flow_entries;
|
||||||
|
unsigned int rfs_entries_max[STMMAC_RFS_T_MAX];
|
||||||
|
unsigned int rfs_entries_cnt[STMMAC_RFS_T_MAX];
|
||||||
|
unsigned int rfs_entries_total;
|
||||||
|
struct stmmac_rfs_entry *rfs_entries;
|
||||||
|
|
||||||
/* Pulse Per Second output */
|
/* Pulse Per Second output */
|
||||||
struct stmmac_pps_cfg pps[STMMAC_PPS_MAX];
|
struct stmmac_pps_cfg pps[STMMAC_PPS_MAX];
|
||||||
|
|||||||
@@ -232,11 +232,33 @@ static int tc_setup_cls_u32(struct stmmac_priv *priv,
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static int tc_rfs_init(struct stmmac_priv *priv)
|
||||||
|
{
|
||||||
|
int i;
|
||||||
|
|
||||||
|
priv->rfs_entries_max[STMMAC_RFS_T_VLAN] = 8;
|
||||||
|
|
||||||
|
for (i = 0; i < STMMAC_RFS_T_MAX; i++)
|
||||||
|
priv->rfs_entries_total += priv->rfs_entries_max[i];
|
||||||
|
|
||||||
|
priv->rfs_entries = devm_kcalloc(priv->device,
|
||||||
|
priv->rfs_entries_total,
|
||||||
|
sizeof(*priv->rfs_entries),
|
||||||
|
GFP_KERNEL);
|
||||||
|
if (!priv->rfs_entries)
|
||||||
|
return -ENOMEM;
|
||||||
|
|
||||||
|
dev_info(priv->device, "Enabled RFS Flow TC (entries=%d)\n",
|
||||||
|
priv->rfs_entries_total);
|
||||||
|
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
static int tc_init(struct stmmac_priv *priv)
|
static int tc_init(struct stmmac_priv *priv)
|
||||||
{
|
{
|
||||||
struct dma_features *dma_cap = &priv->dma_cap;
|
struct dma_features *dma_cap = &priv->dma_cap;
|
||||||
unsigned int count;
|
unsigned int count;
|
||||||
int i;
|
int ret, i;
|
||||||
|
|
||||||
if (dma_cap->l3l4fnum) {
|
if (dma_cap->l3l4fnum) {
|
||||||
priv->flow_entries_max = dma_cap->l3l4fnum;
|
priv->flow_entries_max = dma_cap->l3l4fnum;
|
||||||
@@ -250,10 +272,14 @@ static int tc_init(struct stmmac_priv *priv)
|
|||||||
for (i = 0; i < priv->flow_entries_max; i++)
|
for (i = 0; i < priv->flow_entries_max; i++)
|
||||||
priv->flow_entries[i].idx = i;
|
priv->flow_entries[i].idx = i;
|
||||||
|
|
||||||
dev_info(priv->device, "Enabled Flow TC (entries=%d)\n",
|
dev_info(priv->device, "Enabled L3L4 Flow TC (entries=%d)\n",
|
||||||
priv->flow_entries_max);
|
priv->flow_entries_max);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
ret = tc_rfs_init(priv);
|
||||||
|
if (ret)
|
||||||
|
return -ENOMEM;
|
||||||
|
|
||||||
if (!priv->plat->fpe_cfg) {
|
if (!priv->plat->fpe_cfg) {
|
||||||
priv->plat->fpe_cfg = devm_kzalloc(priv->device,
|
priv->plat->fpe_cfg = devm_kzalloc(priv->device,
|
||||||
sizeof(*priv->plat->fpe_cfg),
|
sizeof(*priv->plat->fpe_cfg),
|
||||||
@@ -607,16 +633,45 @@ static int tc_del_flow(struct stmmac_priv *priv,
|
|||||||
return ret;
|
return ret;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static struct stmmac_rfs_entry *tc_find_rfs(struct stmmac_priv *priv,
|
||||||
|
struct flow_cls_offload *cls,
|
||||||
|
bool get_free)
|
||||||
|
{
|
||||||
|
int i;
|
||||||
|
|
||||||
|
for (i = 0; i < priv->rfs_entries_total; i++) {
|
||||||
|
struct stmmac_rfs_entry *entry = &priv->rfs_entries[i];
|
||||||
|
|
||||||
|
if (entry->cookie == cls->cookie)
|
||||||
|
return entry;
|
||||||
|
if (get_free && entry->in_use == false)
|
||||||
|
return entry;
|
||||||
|
}
|
||||||
|
|
||||||
|
return NULL;
|
||||||
|
}
|
||||||
|
|
||||||
#define VLAN_PRIO_FULL_MASK (0x07)
|
#define VLAN_PRIO_FULL_MASK (0x07)
|
||||||
|
|
||||||
static int tc_add_vlan_flow(struct stmmac_priv *priv,
|
static int tc_add_vlan_flow(struct stmmac_priv *priv,
|
||||||
struct flow_cls_offload *cls)
|
struct flow_cls_offload *cls)
|
||||||
{
|
{
|
||||||
|
struct stmmac_rfs_entry *entry = tc_find_rfs(priv, cls, false);
|
||||||
struct flow_rule *rule = flow_cls_offload_flow_rule(cls);
|
struct flow_rule *rule = flow_cls_offload_flow_rule(cls);
|
||||||
struct flow_dissector *dissector = rule->match.dissector;
|
struct flow_dissector *dissector = rule->match.dissector;
|
||||||
int tc = tc_classid_to_hwtc(priv->dev, cls->classid);
|
int tc = tc_classid_to_hwtc(priv->dev, cls->classid);
|
||||||
struct flow_match_vlan match;
|
struct flow_match_vlan match;
|
||||||
|
|
||||||
|
if (!entry) {
|
||||||
|
entry = tc_find_rfs(priv, cls, true);
|
||||||
|
if (!entry)
|
||||||
|
return -ENOENT;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (priv->rfs_entries_cnt[STMMAC_RFS_T_VLAN] >=
|
||||||
|
priv->rfs_entries_max[STMMAC_RFS_T_VLAN])
|
||||||
|
return -ENOENT;
|
||||||
|
|
||||||
/* Nothing to do here */
|
/* Nothing to do here */
|
||||||
if (!dissector_uses_key(dissector, FLOW_DISSECTOR_KEY_VLAN))
|
if (!dissector_uses_key(dissector, FLOW_DISSECTOR_KEY_VLAN))
|
||||||
return -EINVAL;
|
return -EINVAL;
|
||||||
@@ -638,6 +693,12 @@ static int tc_add_vlan_flow(struct stmmac_priv *priv,
|
|||||||
|
|
||||||
prio = BIT(match.key->vlan_priority);
|
prio = BIT(match.key->vlan_priority);
|
||||||
stmmac_rx_queue_prio(priv, priv->hw, prio, tc);
|
stmmac_rx_queue_prio(priv, priv->hw, prio, tc);
|
||||||
|
|
||||||
|
entry->in_use = true;
|
||||||
|
entry->cookie = cls->cookie;
|
||||||
|
entry->tc = tc;
|
||||||
|
entry->type = STMMAC_RFS_T_VLAN;
|
||||||
|
priv->rfs_entries_cnt[STMMAC_RFS_T_VLAN]++;
|
||||||
}
|
}
|
||||||
|
|
||||||
return 0;
|
return 0;
|
||||||
@@ -646,20 +707,19 @@ static int tc_add_vlan_flow(struct stmmac_priv *priv,
|
|||||||
static int tc_del_vlan_flow(struct stmmac_priv *priv,
|
static int tc_del_vlan_flow(struct stmmac_priv *priv,
|
||||||
struct flow_cls_offload *cls)
|
struct flow_cls_offload *cls)
|
||||||
{
|
{
|
||||||
struct flow_rule *rule = flow_cls_offload_flow_rule(cls);
|
struct stmmac_rfs_entry *entry = tc_find_rfs(priv, cls, false);
|
||||||
struct flow_dissector *dissector = rule->match.dissector;
|
|
||||||
int tc = tc_classid_to_hwtc(priv->dev, cls->classid);
|
|
||||||
|
|
||||||
/* Nothing to do here */
|
if (!entry || !entry->in_use || entry->type != STMMAC_RFS_T_VLAN)
|
||||||
if (!dissector_uses_key(dissector, FLOW_DISSECTOR_KEY_VLAN))
|
return -ENOENT;
|
||||||
return -EINVAL;
|
|
||||||
|
|
||||||
if (tc < 0) {
|
stmmac_rx_queue_prio(priv, priv->hw, 0, entry->tc);
|
||||||
netdev_err(priv->dev, "Invalid traffic class\n");
|
|
||||||
return -EINVAL;
|
|
||||||
}
|
|
||||||
|
|
||||||
stmmac_rx_queue_prio(priv, priv->hw, 0, tc);
|
entry->in_use = false;
|
||||||
|
entry->cookie = 0;
|
||||||
|
entry->tc = 0;
|
||||||
|
entry->type = 0;
|
||||||
|
|
||||||
|
priv->rfs_entries_cnt[STMMAC_RFS_T_VLAN]--;
|
||||||
|
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -514,6 +514,7 @@ nsim_bpf_map_alloc(struct netdevsim *ns, struct bpf_offloaded_map *offmap)
|
|||||||
goto err_free;
|
goto err_free;
|
||||||
key = nmap->entry[i].key;
|
key = nmap->entry[i].key;
|
||||||
*key = i;
|
*key = i;
|
||||||
|
memset(nmap->entry[i].value, 0, offmap->map.value_size);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
@@ -77,7 +77,10 @@ static int nsim_set_ringparam(struct net_device *dev,
|
|||||||
{
|
{
|
||||||
struct netdevsim *ns = netdev_priv(dev);
|
struct netdevsim *ns = netdev_priv(dev);
|
||||||
|
|
||||||
memcpy(&ns->ethtool.ring, ring, sizeof(ns->ethtool.ring));
|
ns->ethtool.ring.rx_pending = ring->rx_pending;
|
||||||
|
ns->ethtool.ring.rx_jumbo_pending = ring->rx_jumbo_pending;
|
||||||
|
ns->ethtool.ring.rx_mini_pending = ring->rx_mini_pending;
|
||||||
|
ns->ethtool.ring.tx_pending = ring->tx_pending;
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
@@ -268,17 +268,18 @@ static u32 iwl_mvm_get_tx_rate(struct iwl_mvm *mvm,
|
|||||||
int rate_idx = -1;
|
int rate_idx = -1;
|
||||||
u8 rate_plcp;
|
u8 rate_plcp;
|
||||||
u32 rate_flags = 0;
|
u32 rate_flags = 0;
|
||||||
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
|
|
||||||
|
|
||||||
/* info->control is only relevant for non HW rate control */
|
/* info->control is only relevant for non HW rate control */
|
||||||
if (!ieee80211_hw_check(mvm->hw, HAS_RATE_CONTROL)) {
|
if (!ieee80211_hw_check(mvm->hw, HAS_RATE_CONTROL)) {
|
||||||
|
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
|
||||||
|
|
||||||
/* HT rate doesn't make sense for a non data frame */
|
/* HT rate doesn't make sense for a non data frame */
|
||||||
WARN_ONCE(info->control.rates[0].flags & IEEE80211_TX_RC_MCS &&
|
WARN_ONCE(info->control.rates[0].flags & IEEE80211_TX_RC_MCS &&
|
||||||
!ieee80211_is_data(fc),
|
!ieee80211_is_data(fc),
|
||||||
"Got a HT rate (flags:0x%x/mcs:%d/fc:0x%x/state:%d) for a non data frame\n",
|
"Got a HT rate (flags:0x%x/mcs:%d/fc:0x%x/state:%d) for a non data frame\n",
|
||||||
info->control.rates[0].flags,
|
info->control.rates[0].flags,
|
||||||
info->control.rates[0].idx,
|
info->control.rates[0].idx,
|
||||||
le16_to_cpu(fc), mvmsta->sta_state);
|
le16_to_cpu(fc), sta ? mvmsta->sta_state : -1);
|
||||||
|
|
||||||
rate_idx = info->control.rates[0].idx;
|
rate_idx = info->control.rates[0].idx;
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -203,6 +203,7 @@ struct xenvif_queue { /* Per-queue data for xenvif */
|
|||||||
unsigned int rx_queue_max;
|
unsigned int rx_queue_max;
|
||||||
unsigned int rx_queue_len;
|
unsigned int rx_queue_len;
|
||||||
unsigned long last_rx_time;
|
unsigned long last_rx_time;
|
||||||
|
unsigned int rx_slots_needed;
|
||||||
bool stalled;
|
bool stalled;
|
||||||
|
|
||||||
struct xenvif_copy_state rx_copy;
|
struct xenvif_copy_state rx_copy;
|
||||||
|
|||||||
@@ -33,28 +33,36 @@
|
|||||||
#include <xen/xen.h>
|
#include <xen/xen.h>
|
||||||
#include <xen/events.h>
|
#include <xen/events.h>
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Update the needed ring page slots for the first SKB queued.
|
||||||
|
* Note that any call sequence outside the RX thread calling this function
|
||||||
|
* needs to wake up the RX thread via a call of xenvif_kick_thread()
|
||||||
|
* afterwards in order to avoid a race with putting the thread to sleep.
|
||||||
|
*/
|
||||||
|
static void xenvif_update_needed_slots(struct xenvif_queue *queue,
|
||||||
|
const struct sk_buff *skb)
|
||||||
|
{
|
||||||
|
unsigned int needed = 0;
|
||||||
|
|
||||||
|
if (skb) {
|
||||||
|
needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);
|
||||||
|
if (skb_is_gso(skb))
|
||||||
|
needed++;
|
||||||
|
if (skb->sw_hash)
|
||||||
|
needed++;
|
||||||
|
}
|
||||||
|
|
||||||
|
WRITE_ONCE(queue->rx_slots_needed, needed);
|
||||||
|
}
|
||||||
|
|
||||||
static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
|
static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
|
||||||
{
|
{
|
||||||
RING_IDX prod, cons;
|
RING_IDX prod, cons;
|
||||||
struct sk_buff *skb;
|
unsigned int needed;
|
||||||
int needed;
|
|
||||||
unsigned long flags;
|
|
||||||
|
|
||||||
spin_lock_irqsave(&queue->rx_queue.lock, flags);
|
needed = READ_ONCE(queue->rx_slots_needed);
|
||||||
|
if (!needed)
|
||||||
skb = skb_peek(&queue->rx_queue);
|
|
||||||
if (!skb) {
|
|
||||||
spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
|
|
||||||
return false;
|
return false;
|
||||||
}
|
|
||||||
|
|
||||||
needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);
|
|
||||||
if (skb_is_gso(skb))
|
|
||||||
needed++;
|
|
||||||
if (skb->sw_hash)
|
|
||||||
needed++;
|
|
||||||
|
|
||||||
spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
|
|
||||||
|
|
||||||
do {
|
do {
|
||||||
prod = queue->rx.sring->req_prod;
|
prod = queue->rx.sring->req_prod;
|
||||||
@@ -80,13 +88,19 @@ void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb)
|
|||||||
|
|
||||||
spin_lock_irqsave(&queue->rx_queue.lock, flags);
|
spin_lock_irqsave(&queue->rx_queue.lock, flags);
|
||||||
|
|
||||||
__skb_queue_tail(&queue->rx_queue, skb);
|
if (queue->rx_queue_len >= queue->rx_queue_max) {
|
||||||
|
|
||||||
queue->rx_queue_len += skb->len;
|
|
||||||
if (queue->rx_queue_len > queue->rx_queue_max) {
|
|
||||||
struct net_device *dev = queue->vif->dev;
|
struct net_device *dev = queue->vif->dev;
|
||||||
|
|
||||||
netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
|
netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
|
||||||
|
kfree_skb(skb);
|
||||||
|
queue->vif->dev->stats.rx_dropped++;
|
||||||
|
} else {
|
||||||
|
if (skb_queue_empty(&queue->rx_queue))
|
||||||
|
xenvif_update_needed_slots(queue, skb);
|
||||||
|
|
||||||
|
__skb_queue_tail(&queue->rx_queue, skb);
|
||||||
|
|
||||||
|
queue->rx_queue_len += skb->len;
|
||||||
}
|
}
|
||||||
|
|
||||||
spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
|
spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
|
||||||
@@ -100,6 +114,8 @@ static struct sk_buff *xenvif_rx_dequeue(struct xenvif_queue *queue)
|
|||||||
|
|
||||||
skb = __skb_dequeue(&queue->rx_queue);
|
skb = __skb_dequeue(&queue->rx_queue);
|
||||||
if (skb) {
|
if (skb) {
|
||||||
|
xenvif_update_needed_slots(queue, skb_peek(&queue->rx_queue));
|
||||||
|
|
||||||
queue->rx_queue_len -= skb->len;
|
queue->rx_queue_len -= skb->len;
|
||||||
if (queue->rx_queue_len < queue->rx_queue_max) {
|
if (queue->rx_queue_len < queue->rx_queue_max) {
|
||||||
struct netdev_queue *txq;
|
struct netdev_queue *txq;
|
||||||
@@ -134,6 +150,7 @@ static void xenvif_rx_queue_drop_expired(struct xenvif_queue *queue)
|
|||||||
break;
|
break;
|
||||||
xenvif_rx_dequeue(queue);
|
xenvif_rx_dequeue(queue);
|
||||||
kfree_skb(skb);
|
kfree_skb(skb);
|
||||||
|
queue->vif->dev->stats.rx_dropped++;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -487,27 +504,31 @@ void xenvif_rx_action(struct xenvif_queue *queue)
|
|||||||
xenvif_rx_copy_flush(queue);
|
xenvif_rx_copy_flush(queue);
|
||||||
}
|
}
|
||||||
|
|
||||||
static bool xenvif_rx_queue_stalled(struct xenvif_queue *queue)
|
static RING_IDX xenvif_rx_queue_slots(const struct xenvif_queue *queue)
|
||||||
{
|
{
|
||||||
RING_IDX prod, cons;
|
RING_IDX prod, cons;
|
||||||
|
|
||||||
prod = queue->rx.sring->req_prod;
|
prod = queue->rx.sring->req_prod;
|
||||||
cons = queue->rx.req_cons;
|
cons = queue->rx.req_cons;
|
||||||
|
|
||||||
|
return prod - cons;
|
||||||
|
}
|
||||||
|
|
||||||
|
static bool xenvif_rx_queue_stalled(const struct xenvif_queue *queue)
|
||||||
|
{
|
||||||
|
unsigned int needed = READ_ONCE(queue->rx_slots_needed);
|
||||||
|
|
||||||
return !queue->stalled &&
|
return !queue->stalled &&
|
||||||
prod - cons < 1 &&
|
xenvif_rx_queue_slots(queue) < needed &&
|
||||||
time_after(jiffies,
|
time_after(jiffies,
|
||||||
queue->last_rx_time + queue->vif->stall_timeout);
|
queue->last_rx_time + queue->vif->stall_timeout);
|
||||||
}
|
}
|
||||||
|
|
||||||
static bool xenvif_rx_queue_ready(struct xenvif_queue *queue)
|
static bool xenvif_rx_queue_ready(struct xenvif_queue *queue)
|
||||||
{
|
{
|
||||||
RING_IDX prod, cons;
|
unsigned int needed = READ_ONCE(queue->rx_slots_needed);
|
||||||
|
|
||||||
prod = queue->rx.sring->req_prod;
|
return queue->stalled && xenvif_rx_queue_slots(queue) >= needed;
|
||||||
cons = queue->rx.req_cons;
|
|
||||||
|
|
||||||
return queue->stalled && prod - cons >= 1;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread)
|
bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread)
|
||||||
|
|||||||
@@ -148,6 +148,9 @@ struct netfront_queue {
|
|||||||
grant_ref_t gref_rx_head;
|
grant_ref_t gref_rx_head;
|
||||||
grant_ref_t grant_rx_ref[NET_RX_RING_SIZE];
|
grant_ref_t grant_rx_ref[NET_RX_RING_SIZE];
|
||||||
|
|
||||||
|
unsigned int rx_rsp_unconsumed;
|
||||||
|
spinlock_t rx_cons_lock;
|
||||||
|
|
||||||
struct page_pool *page_pool;
|
struct page_pool *page_pool;
|
||||||
struct xdp_rxq_info xdp_rxq;
|
struct xdp_rxq_info xdp_rxq;
|
||||||
};
|
};
|
||||||
@@ -376,12 +379,13 @@ static int xennet_open(struct net_device *dev)
|
|||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
static void xennet_tx_buf_gc(struct netfront_queue *queue)
|
static bool xennet_tx_buf_gc(struct netfront_queue *queue)
|
||||||
{
|
{
|
||||||
RING_IDX cons, prod;
|
RING_IDX cons, prod;
|
||||||
unsigned short id;
|
unsigned short id;
|
||||||
struct sk_buff *skb;
|
struct sk_buff *skb;
|
||||||
bool more_to_do;
|
bool more_to_do;
|
||||||
|
bool work_done = false;
|
||||||
const struct device *dev = &queue->info->netdev->dev;
|
const struct device *dev = &queue->info->netdev->dev;
|
||||||
|
|
||||||
BUG_ON(!netif_carrier_ok(queue->info->netdev));
|
BUG_ON(!netif_carrier_ok(queue->info->netdev));
|
||||||
@@ -398,6 +402,8 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
|
|||||||
for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
|
for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
|
||||||
struct xen_netif_tx_response txrsp;
|
struct xen_netif_tx_response txrsp;
|
||||||
|
|
||||||
|
work_done = true;
|
||||||
|
|
||||||
RING_COPY_RESPONSE(&queue->tx, cons, &txrsp);
|
RING_COPY_RESPONSE(&queue->tx, cons, &txrsp);
|
||||||
if (txrsp.status == XEN_NETIF_RSP_NULL)
|
if (txrsp.status == XEN_NETIF_RSP_NULL)
|
||||||
continue;
|
continue;
|
||||||
@@ -441,11 +447,13 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
|
|||||||
|
|
||||||
xennet_maybe_wake_tx(queue);
|
xennet_maybe_wake_tx(queue);
|
||||||
|
|
||||||
return;
|
return work_done;
|
||||||
|
|
||||||
err:
|
err:
|
||||||
queue->info->broken = true;
|
queue->info->broken = true;
|
||||||
dev_alert(dev, "Disabled for further use\n");
|
dev_alert(dev, "Disabled for further use\n");
|
||||||
|
|
||||||
|
return work_done;
|
||||||
}
|
}
|
||||||
|
|
||||||
struct xennet_gnttab_make_txreq {
|
struct xennet_gnttab_make_txreq {
|
||||||
@@ -834,6 +842,16 @@ static int xennet_close(struct net_device *dev)
|
|||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static void xennet_set_rx_rsp_cons(struct netfront_queue *queue, RING_IDX val)
|
||||||
|
{
|
||||||
|
unsigned long flags;
|
||||||
|
|
||||||
|
spin_lock_irqsave(&queue->rx_cons_lock, flags);
|
||||||
|
queue->rx.rsp_cons = val;
|
||||||
|
queue->rx_rsp_unconsumed = RING_HAS_UNCONSUMED_RESPONSES(&queue->rx);
|
||||||
|
spin_unlock_irqrestore(&queue->rx_cons_lock, flags);
|
||||||
|
}
|
||||||
|
|
||||||
static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
|
static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
|
||||||
grant_ref_t ref)
|
grant_ref_t ref)
|
||||||
{
|
{
|
||||||
@@ -885,7 +903,7 @@ static int xennet_get_extras(struct netfront_queue *queue,
|
|||||||
xennet_move_rx_slot(queue, skb, ref);
|
xennet_move_rx_slot(queue, skb, ref);
|
||||||
} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
|
} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
|
||||||
|
|
||||||
queue->rx.rsp_cons = cons;
|
xennet_set_rx_rsp_cons(queue, cons);
|
||||||
return err;
|
return err;
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -1039,7 +1057,7 @@ next:
|
|||||||
}
|
}
|
||||||
|
|
||||||
if (unlikely(err))
|
if (unlikely(err))
|
||||||
queue->rx.rsp_cons = cons + slots;
|
xennet_set_rx_rsp_cons(queue, cons + slots);
|
||||||
|
|
||||||
return err;
|
return err;
|
||||||
}
|
}
|
||||||
@@ -1093,7 +1111,8 @@ static int xennet_fill_frags(struct netfront_queue *queue,
|
|||||||
__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
|
__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
|
||||||
}
|
}
|
||||||
if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) {
|
if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) {
|
||||||
queue->rx.rsp_cons = ++cons + skb_queue_len(list);
|
xennet_set_rx_rsp_cons(queue,
|
||||||
|
++cons + skb_queue_len(list));
|
||||||
kfree_skb(nskb);
|
kfree_skb(nskb);
|
||||||
return -ENOENT;
|
return -ENOENT;
|
||||||
}
|
}
|
||||||
@@ -1106,7 +1125,7 @@ static int xennet_fill_frags(struct netfront_queue *queue,
|
|||||||
kfree_skb(nskb);
|
kfree_skb(nskb);
|
||||||
}
|
}
|
||||||
|
|
||||||
queue->rx.rsp_cons = cons;
|
xennet_set_rx_rsp_cons(queue, cons);
|
||||||
|
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
@@ -1229,7 +1248,9 @@ err:
|
|||||||
|
|
||||||
if (unlikely(xennet_set_skb_gso(skb, gso))) {
|
if (unlikely(xennet_set_skb_gso(skb, gso))) {
|
||||||
__skb_queue_head(&tmpq, skb);
|
__skb_queue_head(&tmpq, skb);
|
||||||
queue->rx.rsp_cons += skb_queue_len(&tmpq);
|
xennet_set_rx_rsp_cons(queue,
|
||||||
|
queue->rx.rsp_cons +
|
||||||
|
skb_queue_len(&tmpq));
|
||||||
goto err;
|
goto err;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -1253,7 +1274,8 @@ err:
|
|||||||
|
|
||||||
__skb_queue_tail(&rxq, skb);
|
__skb_queue_tail(&rxq, skb);
|
||||||
|
|
||||||
i = ++queue->rx.rsp_cons;
|
i = queue->rx.rsp_cons + 1;
|
||||||
|
xennet_set_rx_rsp_cons(queue, i);
|
||||||
work_done++;
|
work_done++;
|
||||||
}
|
}
|
||||||
if (need_xdp_flush)
|
if (need_xdp_flush)
|
||||||
@@ -1417,40 +1439,79 @@ static int xennet_set_features(struct net_device *dev,
|
|||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
|
static bool xennet_handle_tx(struct netfront_queue *queue, unsigned int *eoi)
|
||||||
{
|
{
|
||||||
struct netfront_queue *queue = dev_id;
|
|
||||||
unsigned long flags;
|
unsigned long flags;
|
||||||
|
|
||||||
if (queue->info->broken)
|
if (unlikely(queue->info->broken))
|
||||||
return IRQ_HANDLED;
|
return false;
|
||||||
|
|
||||||
spin_lock_irqsave(&queue->tx_lock, flags);
|
spin_lock_irqsave(&queue->tx_lock, flags);
|
||||||
xennet_tx_buf_gc(queue);
|
if (xennet_tx_buf_gc(queue))
|
||||||
|
*eoi = 0;
|
||||||
spin_unlock_irqrestore(&queue->tx_lock, flags);
|
spin_unlock_irqrestore(&queue->tx_lock, flags);
|
||||||
|
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
|
static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
|
||||||
|
{
|
||||||
|
unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
|
||||||
|
|
||||||
|
if (likely(xennet_handle_tx(dev_id, &eoiflag)))
|
||||||
|
xen_irq_lateeoi(irq, eoiflag);
|
||||||
|
|
||||||
return IRQ_HANDLED;
|
return IRQ_HANDLED;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static bool xennet_handle_rx(struct netfront_queue *queue, unsigned int *eoi)
|
||||||
|
{
|
||||||
|
unsigned int work_queued;
|
||||||
|
unsigned long flags;
|
||||||
|
|
||||||
|
if (unlikely(queue->info->broken))
|
||||||
|
return false;
|
||||||
|
|
||||||
|
spin_lock_irqsave(&queue->rx_cons_lock, flags);
|
||||||
|
work_queued = RING_HAS_UNCONSUMED_RESPONSES(&queue->rx);
|
||||||
|
if (work_queued > queue->rx_rsp_unconsumed) {
|
||||||
|
queue->rx_rsp_unconsumed = work_queued;
|
||||||
|
*eoi = 0;
|
||||||
|
} else if (unlikely(work_queued < queue->rx_rsp_unconsumed)) {
|
||||||
|
const struct device *dev = &queue->info->netdev->dev;
|
||||||
|
|
||||||
|
spin_unlock_irqrestore(&queue->rx_cons_lock, flags);
|
||||||
|
dev_alert(dev, "RX producer index going backwards\n");
|
||||||
|
dev_alert(dev, "Disabled for further use\n");
|
||||||
|
queue->info->broken = true;
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
spin_unlock_irqrestore(&queue->rx_cons_lock, flags);
|
||||||
|
|
||||||
|
if (likely(netif_carrier_ok(queue->info->netdev) && work_queued))
|
||||||
|
napi_schedule(&queue->napi);
|
||||||
|
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
|
static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
|
||||||
{
|
{
|
||||||
struct netfront_queue *queue = dev_id;
|
unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
|
||||||
struct net_device *dev = queue->info->netdev;
|
|
||||||
|
|
||||||
if (queue->info->broken)
|
if (likely(xennet_handle_rx(dev_id, &eoiflag)))
|
||||||
return IRQ_HANDLED;
|
xen_irq_lateeoi(irq, eoiflag);
|
||||||
|
|
||||||
if (likely(netif_carrier_ok(dev) &&
|
|
||||||
RING_HAS_UNCONSUMED_RESPONSES(&queue->rx)))
|
|
||||||
napi_schedule(&queue->napi);
|
|
||||||
|
|
||||||
return IRQ_HANDLED;
|
return IRQ_HANDLED;
|
||||||
}
|
}
|
||||||
|
|
||||||
static irqreturn_t xennet_interrupt(int irq, void *dev_id)
|
static irqreturn_t xennet_interrupt(int irq, void *dev_id)
|
||||||
{
|
{
|
||||||
xennet_tx_interrupt(irq, dev_id);
|
unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
|
||||||
xennet_rx_interrupt(irq, dev_id);
|
|
||||||
|
if (xennet_handle_tx(dev_id, &eoiflag) &&
|
||||||
|
xennet_handle_rx(dev_id, &eoiflag))
|
||||||
|
xen_irq_lateeoi(irq, eoiflag);
|
||||||
|
|
||||||
return IRQ_HANDLED;
|
return IRQ_HANDLED;
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -1768,9 +1829,10 @@ static int setup_netfront_single(struct netfront_queue *queue)
|
|||||||
if (err < 0)
|
if (err < 0)
|
||||||
goto fail;
|
goto fail;
|
||||||
|
|
||||||
err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
|
err = bind_evtchn_to_irqhandler_lateeoi(queue->tx_evtchn,
|
||||||
xennet_interrupt,
|
xennet_interrupt, 0,
|
||||||
0, queue->info->netdev->name, queue);
|
queue->info->netdev->name,
|
||||||
|
queue);
|
||||||
if (err < 0)
|
if (err < 0)
|
||||||
goto bind_fail;
|
goto bind_fail;
|
||||||
queue->rx_evtchn = queue->tx_evtchn;
|
queue->rx_evtchn = queue->tx_evtchn;
|
||||||
@@ -1798,18 +1860,18 @@ static int setup_netfront_split(struct netfront_queue *queue)
|
|||||||
|
|
||||||
snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
|
snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
|
||||||
"%s-tx", queue->name);
|
"%s-tx", queue->name);
|
||||||
err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
|
err = bind_evtchn_to_irqhandler_lateeoi(queue->tx_evtchn,
|
||||||
xennet_tx_interrupt,
|
xennet_tx_interrupt, 0,
|
||||||
0, queue->tx_irq_name, queue);
|
queue->tx_irq_name, queue);
|
||||||
if (err < 0)
|
if (err < 0)
|
||||||
goto bind_tx_fail;
|
goto bind_tx_fail;
|
||||||
queue->tx_irq = err;
|
queue->tx_irq = err;
|
||||||
|
|
||||||
snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
|
snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
|
||||||
"%s-rx", queue->name);
|
"%s-rx", queue->name);
|
||||||
err = bind_evtchn_to_irqhandler(queue->rx_evtchn,
|
err = bind_evtchn_to_irqhandler_lateeoi(queue->rx_evtchn,
|
||||||
xennet_rx_interrupt,
|
xennet_rx_interrupt, 0,
|
||||||
0, queue->rx_irq_name, queue);
|
queue->rx_irq_name, queue);
|
||||||
if (err < 0)
|
if (err < 0)
|
||||||
goto bind_rx_fail;
|
goto bind_rx_fail;
|
||||||
queue->rx_irq = err;
|
queue->rx_irq = err;
|
||||||
@@ -1911,6 +1973,7 @@ static int xennet_init_queue(struct netfront_queue *queue)
|
|||||||
|
|
||||||
spin_lock_init(&queue->tx_lock);
|
spin_lock_init(&queue->tx_lock);
|
||||||
spin_lock_init(&queue->rx_lock);
|
spin_lock_init(&queue->rx_lock);
|
||||||
|
spin_lock_init(&queue->rx_cons_lock);
|
||||||
|
|
||||||
timer_setup(&queue->rx_refill_timer, rx_refill_timeout, 0);
|
timer_setup(&queue->rx_refill_timer, rx_refill_timeout, 0);
|
||||||
|
|
||||||
|
|||||||
@@ -721,9 +721,6 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
|
|||||||
goto out_disable;
|
goto out_disable;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Ensure that all table entries are masked. */
|
|
||||||
msix_mask_all(base, tsize);
|
|
||||||
|
|
||||||
ret = msix_setup_entries(dev, base, entries, nvec, affd);
|
ret = msix_setup_entries(dev, base, entries, nvec, affd);
|
||||||
if (ret)
|
if (ret)
|
||||||
goto out_disable;
|
goto out_disable;
|
||||||
@@ -750,6 +747,16 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
|
|||||||
/* Set MSI-X enabled bits and unmask the function */
|
/* Set MSI-X enabled bits and unmask the function */
|
||||||
pci_intx_for_msi(dev, 0);
|
pci_intx_for_msi(dev, 0);
|
||||||
dev->msix_enabled = 1;
|
dev->msix_enabled = 1;
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Ensure that all table entries are masked to prevent
|
||||||
|
* stale entries from firing in a crash kernel.
|
||||||
|
*
|
||||||
|
* Done late to deal with a broken Marvell NVME device
|
||||||
|
* which takes the MSI-X mask bits into account even
|
||||||
|
* when MSI-X is disabled, which prevents MSI delivery.
|
||||||
|
*/
|
||||||
|
msix_mask_all(base, tsize);
|
||||||
pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0);
|
pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0);
|
||||||
|
|
||||||
pcibios_free_irq(dev);
|
pcibios_free_irq(dev);
|
||||||
@@ -776,7 +783,7 @@ out_free:
|
|||||||
free_msi_irqs(dev);
|
free_msi_irqs(dev);
|
||||||
|
|
||||||
out_disable:
|
out_disable:
|
||||||
pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0);
|
pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL | PCI_MSIX_FLAGS_ENABLE, 0);
|
||||||
|
|
||||||
return ret;
|
return ret;
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -598,14 +598,14 @@ static struct irq_chip amd_gpio_irqchip = {
 
 #define PIN_IRQ_PENDING	(BIT(INTERRUPT_STS_OFF) | BIT(WAKE_STS_OFF))
 
-static irqreturn_t amd_gpio_irq_handler(int irq, void *dev_id)
+static bool do_amd_gpio_irq_handler(int irq, void *dev_id)
 {
 	struct amd_gpio *gpio_dev = dev_id;
 	struct gpio_chip *gc = &gpio_dev->gc;
-	irqreturn_t ret = IRQ_NONE;
 	unsigned int i, irqnr;
 	unsigned long flags;
 	u32 __iomem *regs;
+	bool ret = false;
 	u32 regval;
 	u64 status, mask;
 
@@ -627,6 +627,14 @@ static irqreturn_t amd_gpio_irq_handler(int irq, void *dev_id)
 		/* Each status bit covers four pins */
 		for (i = 0; i < 4; i++) {
 			regval = readl(regs + i);
+			/* caused wake on resume context for shared IRQ */
+			if (irq < 0 && (regval & BIT(WAKE_STS_OFF))) {
+				dev_dbg(&gpio_dev->pdev->dev,
+					"Waking due to GPIO %d: 0x%x",
+					irqnr + i, regval);
+				return true;
+			}
+
 			if (!(regval & PIN_IRQ_PENDING) ||
 			    !(regval & BIT(INTERRUPT_MASK_OFF)))
 				continue;
@@ -650,9 +658,12 @@ static irqreturn_t amd_gpio_irq_handler(int irq, void *dev_id)
 			}
 			writel(regval, regs + i);
 			raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
-			ret = IRQ_HANDLED;
+			ret = true;
 		}
 	}
+	/* did not cause wake on resume context for shared IRQ */
+	if (irq < 0)
+		return false;
 
 	/* Signal EOI to the GPIO unit */
 	raw_spin_lock_irqsave(&gpio_dev->lock, flags);
@@ -664,6 +675,16 @@ static irqreturn_t amd_gpio_irq_handler(int irq, void *dev_id)
 	return ret;
 }
 
+static irqreturn_t amd_gpio_irq_handler(int irq, void *dev_id)
+{
+	return IRQ_RETVAL(do_amd_gpio_irq_handler(irq, dev_id));
+}
+
+static bool __maybe_unused amd_gpio_check_wake(void *dev_id)
+{
+	return do_amd_gpio_irq_handler(-1, dev_id);
+}
+
 static int amd_get_groups_count(struct pinctrl_dev *pctldev)
 {
 	struct amd_gpio *gpio_dev = pinctrl_dev_get_drvdata(pctldev);
@@ -1033,6 +1054,7 @@ static int amd_gpio_probe(struct platform_device *pdev)
 		goto out2;
 
 	platform_set_drvdata(pdev, gpio_dev);
+	acpi_register_wakeup_handler(gpio_dev->irq, amd_gpio_check_wake, gpio_dev);
 
 	dev_dbg(&pdev->dev, "amd gpio driver loaded\n");
 	return ret;
@@ -1050,6 +1072,7 @@ static int amd_gpio_remove(struct platform_device *pdev)
 	gpio_dev = platform_get_drvdata(pdev);
 
 	gpiochip_remove(&gpio_dev->gc);
+	acpi_unregister_wakeup_handler(amd_gpio_check_wake, gpio_dev);
 
 	return 0;
 }
@@ -20,7 +20,6 @@ static int tegra_bpmp_reset_common(struct reset_controller_dev *rstc,
 	struct tegra_bpmp *bpmp = to_tegra_bpmp(rstc);
 	struct mrq_reset_request request;
 	struct tegra_bpmp_message msg;
-	int err;
 
 	memset(&request, 0, sizeof(request));
 	request.cmd = command;
@@ -31,13 +30,7 @@ static int tegra_bpmp_reset_common(struct reset_controller_dev *rstc,
 	msg.tx.data = &request;
 	msg.tx.size = sizeof(request);
 
-	err = tegra_bpmp_transfer(bpmp, &msg);
-	if (err)
-		return err;
-	if (msg.rx.ret)
-		return -EINVAL;
-
-	return 0;
+	return tegra_bpmp_transfer(bpmp, &msg);
 }
 
 static int tegra_bpmp_reset_module(struct reset_controller_dev *rstc,
@@ -1189,7 +1189,7 @@ static int p_fill_from_dev_buffer(struct scsi_cmnd *scp, const void *arr,
 			__func__, off_dst, scsi_bufflen(scp), act_len,
 			scsi_get_resid(scp));
 	n = scsi_bufflen(scp) - (off_dst + act_len);
-	scsi_set_resid(scp, min_t(int, scsi_get_resid(scp), n));
+	scsi_set_resid(scp, min_t(u32, scsi_get_resid(scp), n));
 	return 0;
 }
 
@@ -1562,7 +1562,8 @@ static int resp_inquiry(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 	unsigned char pq_pdt;
 	unsigned char *arr;
 	unsigned char *cmd = scp->cmnd;
-	int alloc_len, n, ret;
+	u32 alloc_len, n;
+	int ret;
 	bool have_wlun, is_disk, is_zbc, is_disk_zbc;
 
 	alloc_len = get_unaligned_be16(cmd + 3);
@@ -1585,7 +1586,8 @@ static int resp_inquiry(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		kfree(arr);
 		return check_condition_result;
 	} else if (0x1 & cmd[1]) {  /* EVPD bit set */
-		int lu_id_num, port_group_id, target_dev_id, len;
+		int lu_id_num, port_group_id, target_dev_id;
+		u32 len;
 		char lu_id_str[6];
 		int host_no = devip->sdbg_host->shost->host_no;
 
@@ -1676,9 +1678,9 @@ static int resp_inquiry(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 			kfree(arr);
 			return check_condition_result;
 		}
-		len = min(get_unaligned_be16(arr + 2) + 4, alloc_len);
+		len = min_t(u32, get_unaligned_be16(arr + 2) + 4, alloc_len);
 		ret = fill_from_dev_buffer(scp, arr,
-			    min(len, SDEBUG_MAX_INQ_ARR_SZ));
+			    min_t(u32, len, SDEBUG_MAX_INQ_ARR_SZ));
 		kfree(arr);
 		return ret;
 	}
@@ -1714,7 +1716,7 @@ static int resp_inquiry(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 	}
 	put_unaligned_be16(0x2100, arr + n);	/* SPL-4 no version claimed */
 	ret = fill_from_dev_buffer(scp, arr,
-			    min_t(int, alloc_len, SDEBUG_LONG_INQ_SZ));
+			    min_t(u32, alloc_len, SDEBUG_LONG_INQ_SZ));
 	kfree(arr);
 	return ret;
 }
@@ -1729,8 +1731,8 @@ static int resp_requests(struct scsi_cmnd *scp,
 	unsigned char *cmd = scp->cmnd;
 	unsigned char arr[SCSI_SENSE_BUFFERSIZE];	/* assume >= 18 bytes */
 	bool dsense = !!(cmd[1] & 1);
-	int alloc_len = cmd[4];
-	int len = 18;
+	u32 alloc_len = cmd[4];
+	u32 len = 18;
 	int stopped_state = atomic_read(&devip->stopped);
 
 	memset(arr, 0, sizeof(arr));
@@ -1774,7 +1776,7 @@ static int resp_requests(struct scsi_cmnd *scp,
 			arr[7] = 0xa;
 		}
 	}
-	return fill_from_dev_buffer(scp, arr, min_t(int, len, alloc_len));
+	return fill_from_dev_buffer(scp, arr, min_t(u32, len, alloc_len));
 }
 
 static int resp_start_stop(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
@@ -2312,7 +2314,8 @@ static int resp_mode_sense(struct scsi_cmnd *scp,
 {
 	int pcontrol, pcode, subpcode, bd_len;
 	unsigned char dev_spec;
-	int alloc_len, offset, len, target_dev_id;
+	u32 alloc_len, offset, len;
+	int target_dev_id;
 	int target = scp->device->id;
 	unsigned char *ap;
 	unsigned char arr[SDEBUG_MAX_MSENSE_SZ];
@@ -2468,7 +2471,7 @@ static int resp_mode_sense(struct scsi_cmnd *scp,
 		arr[0] = offset - 1;
 	else
 		put_unaligned_be16((offset - 2), arr + 0);
-	return fill_from_dev_buffer(scp, arr, min_t(int, alloc_len, offset));
+	return fill_from_dev_buffer(scp, arr, min_t(u32, alloc_len, offset));
 }
 
 #define SDEBUG_MAX_MSELECT_SZ 512
@@ -2499,11 +2502,11 @@ static int resp_mode_select(struct scsi_cmnd *scp,
 			    __func__, param_len, res);
 	md_len = mselect6 ? (arr[0] + 1) : (get_unaligned_be16(arr + 0) + 2);
 	bd_len = mselect6 ? arr[3] : get_unaligned_be16(arr + 6);
-	if (md_len > 2) {
+	off = bd_len + (mselect6 ? 4 : 8);
+	if (md_len > 2 || off >= res) {
 		mk_sense_invalid_fld(scp, SDEB_IN_DATA, 0, -1);
 		return check_condition_result;
 	}
-	off = bd_len + (mselect6 ? 4 : 8);
 	mpage = arr[off] & 0x3f;
 	ps = !!(arr[off] & 0x80);
 	if (ps) {
@@ -2583,7 +2586,8 @@ static int resp_ie_l_pg(unsigned char *arr)
 static int resp_log_sense(struct scsi_cmnd *scp,
 			  struct sdebug_dev_info *devip)
 {
-	int ppc, sp, pcode, subpcode, alloc_len, len, n;
+	int ppc, sp, pcode, subpcode;
+	u32 alloc_len, len, n;
 	unsigned char arr[SDEBUG_MAX_LSENSE_SZ];
 	unsigned char *cmd = scp->cmnd;
 
@@ -2653,9 +2657,9 @@ static int resp_log_sense(struct scsi_cmnd *scp,
 		mk_sense_invalid_fld(scp, SDEB_IN_CDB, 3, -1);
 		return check_condition_result;
 	}
-	len = min_t(int, get_unaligned_be16(arr + 2) + 4, alloc_len);
+	len = min_t(u32, get_unaligned_be16(arr + 2) + 4, alloc_len);
 	return fill_from_dev_buffer(scp, arr,
-		    min_t(int, len, SDEBUG_MAX_INQ_ARR_SZ));
+		    min_t(u32, len, SDEBUG_MAX_INQ_ARR_SZ));
 }
 
 static inline bool sdebug_dev_is_zoned(struct sdebug_dev_info *devip)
@@ -4259,6 +4263,8 @@ static int resp_verify(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		mk_sense_invalid_opcode(scp);
 		return check_condition_result;
 	}
+	if (vnum == 0)
+		return 0;	/* not an error */
 	a_num = is_bytchk3 ? 1 : vnum;
 	/* Treat following check like one for read (i.e. no write) access */
 	ret = check_device_access_params(scp, lba, a_num, false);
@@ -4322,6 +4328,8 @@ static int resp_report_zones(struct scsi_cmnd *scp,
 	}
 	zs_lba = get_unaligned_be64(cmd + 2);
 	alloc_len = get_unaligned_be32(cmd + 10);
+	if (alloc_len == 0)
+		return 0;	/* not an error */
 	rep_opts = cmd[14] & 0x3f;
 	partial = cmd[14] & 0x80;
 
@@ -4426,7 +4434,7 @@ static int resp_report_zones(struct scsi_cmnd *scp,
 	put_unaligned_be64(sdebug_capacity - 1, arr + 8);
 
 	rep_len = (unsigned long)desc - (unsigned long)arr;
-	ret = fill_from_dev_buffer(scp, arr, min_t(int, alloc_len, rep_len));
+	ret = fill_from_dev_buffer(scp, arr, min_t(u32, alloc_len, rep_len));
 
 fini:
 	read_unlock(macc_lckp);
@@ -36,6 +36,10 @@ static int __init imx_soc_device_init(void)
 	int ret;
 	int i;
 
+	/* Return early if this is running on devices with different SoCs */
+	if (!__mxc_cpu_type)
+		return 0;
+
 	if (of_machine_is_compatible("fsl,ls1021a"))
 		return 0;
 
@@ -320,7 +320,7 @@ static struct platform_driver tegra_fuse_driver = {
 };
 builtin_platform_driver(tegra_fuse_driver);
 
-bool __init tegra_fuse_read_spare(unsigned int spare)
+u32 __init tegra_fuse_read_spare(unsigned int spare)
 {
 	unsigned int offset = fuse->soc->info->spare + spare * 4;
 
@@ -65,7 +65,7 @@ struct tegra_fuse {
 void tegra_init_revision(void);
 void tegra_init_apbmisc(void);
 
-bool __init tegra_fuse_read_spare(unsigned int spare);
+u32 __init tegra_fuse_read_spare(unsigned int spare);
 u32 __init tegra_fuse_read_early(unsigned int offset);
 
 u8 tegra_get_major_rev(void);
@@ -203,9 +203,8 @@ static int copy_ta_binary(struct tee_context *ctx, void *ptr, void **ta,
 
 	*ta_size = roundup(fw->size, PAGE_SIZE);
 	*ta = (void *)__get_free_pages(GFP_KERNEL, get_order(*ta_size));
-	if (IS_ERR(*ta)) {
-		pr_err("%s: get_free_pages failed 0x%llx\n", __func__,
-		       (u64)*ta);
+	if (!*ta) {
+		pr_err("%s: get_free_pages failed\n", __func__);
 		rc = -ENOMEM;
 		goto rel_fw;
 	}
@@ -37,6 +37,8 @@ struct xencons_info {
 	struct xenbus_device *xbdev;
 	struct xencons_interface *intf;
 	unsigned int evtchn;
+	XENCONS_RING_IDX out_cons;
+	unsigned int out_cons_same;
 	struct hvc_struct *hvc;
 	int irq;
 	int vtermno;
@@ -138,6 +140,8 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
 	XENCONS_RING_IDX cons, prod;
 	int recv = 0;
 	struct xencons_info *xencons = vtermno_to_xencons(vtermno);
+	unsigned int eoiflag = 0;
+
 	if (xencons == NULL)
 		return -EINVAL;
 	intf = xencons->intf;
@@ -157,7 +161,27 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
 	mb();			/* read ring before consuming */
 	intf->in_cons = cons;
 
-	notify_daemon(xencons);
+	/*
+	 * When to mark interrupt having been spurious:
+	 * - there was no new data to be read, and
+	 * - the backend did not consume some output bytes, and
+	 * - the previous round with no read data didn't see consumed bytes
+	 *   (we might have a race with an interrupt being in flight while
+	 *   updating xencons->out_cons, so account for that by allowing one
+	 *   round without any visible reason)
+	 */
+	if (intf->out_cons != xencons->out_cons) {
+		xencons->out_cons = intf->out_cons;
+		xencons->out_cons_same = 0;
+	}
+	if (recv) {
+		notify_daemon(xencons);
+	} else if (xencons->out_cons_same++ > 1) {
+		eoiflag = XEN_EOI_FLAG_SPURIOUS;
+	}
+
+	xen_irq_lateeoi(xencons->irq, eoiflag);
+
 	return recv;
 }
 
@@ -386,7 +410,7 @@ static int xencons_connect_backend(struct xenbus_device *dev,
 	if (ret)
 		return ret;
 	info->evtchn = evtchn;
-	irq = bind_evtchn_to_irq(evtchn);
+	irq = bind_interdomain_evtchn_to_irq_lateeoi(dev, evtchn);
 	if (irq < 0)
 		return irq;
 	info->irq = irq;
@@ -550,7 +574,7 @@ static int __init xen_hvc_init(void)
 			return r;
 
 		info = vtermno_to_xencons(HVC_COOKIE);
-		info->irq = bind_evtchn_to_irq(info->evtchn);
+		info->irq = bind_evtchn_to_irq_lateeoi(info->evtchn);
 	}
 	if (info->irq < 0)
 		info->irq = 0; /* NO_IRQ */
@@ -140,6 +140,8 @@ struct n_hdlc {
 	struct n_hdlc_buf_list rx_buf_list;
 	struct n_hdlc_buf_list tx_free_buf_list;
 	struct n_hdlc_buf_list rx_free_buf_list;
+	struct work_struct write_work;
+	struct tty_struct *tty_for_write_work;
 };
 
 /*
@@ -154,6 +156,7 @@ static struct n_hdlc_buf *n_hdlc_buf_get(struct n_hdlc_buf_list *list);
 /* Local functions */
 
 static struct n_hdlc *n_hdlc_alloc(void);
+static void n_hdlc_tty_write_work(struct work_struct *work);
 
 /* max frame size for memory allocations */
 static int maxframe = 4096;
@@ -210,6 +213,8 @@ static void n_hdlc_tty_close(struct tty_struct *tty)
 	wake_up_interruptible(&tty->read_wait);
 	wake_up_interruptible(&tty->write_wait);
 
+	cancel_work_sync(&n_hdlc->write_work);
+
 	n_hdlc_free_buf_list(&n_hdlc->rx_free_buf_list);
 	n_hdlc_free_buf_list(&n_hdlc->tx_free_buf_list);
 	n_hdlc_free_buf_list(&n_hdlc->rx_buf_list);
@@ -241,6 +246,8 @@ static int n_hdlc_tty_open(struct tty_struct *tty)
 		return -ENFILE;
 	}
 
+	INIT_WORK(&n_hdlc->write_work, n_hdlc_tty_write_work);
+	n_hdlc->tty_for_write_work = tty;
 	tty->disc_data = n_hdlc;
 	tty->receive_room = 65536;
 
@@ -334,6 +341,20 @@ check_again:
 		goto check_again;
 }	/* end of n_hdlc_send_frames() */
 
+/**
+ * n_hdlc_tty_write_work - Asynchronous callback for transmit wakeup
+ * @work: pointer to work_struct
+ *
+ * Called when low level device driver can accept more send data.
+ */
+static void n_hdlc_tty_write_work(struct work_struct *work)
+{
+	struct n_hdlc *n_hdlc = container_of(work, struct n_hdlc, write_work);
+	struct tty_struct *tty = n_hdlc->tty_for_write_work;
+
+	n_hdlc_send_frames(n_hdlc, tty);
+}	/* end of n_hdlc_tty_write_work() */
+
 /**
  * n_hdlc_tty_wakeup - Callback for transmit wakeup
  * @tty: pointer to associated tty instance data
@@ -344,7 +365,7 @@ static void n_hdlc_tty_wakeup(struct tty_struct *tty)
 {
 	struct n_hdlc *n_hdlc = tty->disc_data;
 
-	n_hdlc_send_frames(n_hdlc, tty);
+	schedule_work(&n_hdlc->write_work);
}	/* end of n_hdlc_tty_wakeup() */
 
 /**
@@ -290,25 +290,6 @@ static void fintek_8250_set_max_fifo(struct fintek_8250 *pdata)
 	}
 }
 
-static void fintek_8250_goto_highspeed(struct uart_8250_port *uart,
-			      struct fintek_8250 *pdata)
-{
-	sio_write_reg(pdata, LDN, pdata->index);
-
-	switch (pdata->pid) {
-	case CHIP_ID_F81966:
-	case CHIP_ID_F81866: /* set uart clock for high speed serial mode */
-		sio_write_mask_reg(pdata, F81866_UART_CLK,
-			F81866_UART_CLK_MASK,
-			F81866_UART_CLK_14_769MHZ);
-
-		uart->port.uartclk = 921600 * 16;
-		break;
-	default: /* leave clock speed untouched */
-		break;
-	}
-}
-
 static void fintek_8250_set_termios(struct uart_port *port,
 				    struct ktermios *termios,
 				    struct ktermios *old)
@@ -430,7 +411,6 @@ static int probe_setup_port(struct fintek_8250 *pdata,
 
 	fintek_8250_set_irq_mode(pdata, level_mode);
 	fintek_8250_set_max_fifo(pdata);
-	fintek_8250_goto_highspeed(uart, pdata);
 
 	fintek_8250_exit_key(addr[i]);
 
@@ -1541,15 +1541,27 @@ static int cdnsp_gadget_pullup(struct usb_gadget *gadget, int is_on)
 {
 	struct cdnsp_device *pdev = gadget_to_cdnsp(gadget);
 	struct cdns *cdns = dev_get_drvdata(pdev->dev);
+	unsigned long flags;
 
 	trace_cdnsp_pullup(is_on);
 
+	/*
+	 * Disable events handling while controller is being
+	 * enabled/disabled.
+	 */
+	disable_irq(cdns->dev_irq);
+	spin_lock_irqsave(&pdev->lock, flags);
+
 	if (!is_on) {
 		cdnsp_reset_device(pdev);
 		cdns_clear_vbus(cdns);
 	} else {
 		cdns_set_vbus(cdns);
 	}
+
+	spin_unlock_irqrestore(&pdev->lock, flags);
+	enable_irq(cdns->dev_irq);
+
 	return 0;
 }
 