Merge 5.15.91 into android14-5.15

Changes in 5.15.91
  memory: tegra: Remove clients SID override programming
  memory: atmel-sdramc: Fix missing clk_disable_unprepare in atmel_ramc_probe()
  memory: mvebu-devbus: Fix missing clk_disable_unprepare in mvebu_devbus_probe()
  dmaengine: ti: k3-udma: Do conditional decrement of UDMA_CHAN_RT_PEER_BCNT_REG
  arm64: dts: imx8mp-phycore-som: Remove invalid PMIC property
  ARM: dts: imx6ul-pico-dwarf: Use 'clock-frequency'
  ARM: dts: imx7d-pico: Use 'clock-frequency'
  ARM: dts: imx6qdl-gw560x: Remove incorrect 'uart-has-rtscts'
  arm64: dts: imx8mm-beacon: Fix ecspi2 pinmux
  ARM: imx: add missing of_node_put()
  HID: intel_ish-hid: Add check for ishtp_dma_tx_map
  arm64: dts: imx8mm-venice-gw7901: fix USB2 controller OC polarity
  soc: imx8m: Fix incorrect check for of_clk_get_by_name()
  reset: uniphier-glue: Use reset_control_bulk API
  reset: uniphier-glue: Fix possible null-ptr-deref
  EDAC/highbank: Fix memory leak in highbank_mc_probe()
  firmware: arm_scmi: Harden shared memory access in fetch_response
  firmware: arm_scmi: Harden shared memory access in fetch_notification
  tomoyo: fix broken dependency on *.conf.default
  RDMA/core: Fix ib block iterator counter overflow
  IB/hfi1: Reject a zero-length user expected buffer
  IB/hfi1: Reserve user expected TIDs
  IB/hfi1: Fix expected receive setup error exit issues
  IB/hfi1: Immediately remove invalid memory from hardware
  IB/hfi1: Remove user expected buffer invalidate race
  affs: initialize fsdata in affs_truncate()
  PM: AVS: qcom-cpr: Fix an error handling path in cpr_probe()
  arm64: dts: qcom: msm8992: Don't use sfpb mutex
  arm64: dts: qcom: msm8992-libra: Add CPU regulators
  arm64: dts: qcom: msm8992-libra: Fix the memory map
  phy: ti: fix Kconfig warning and operator precedence
  NFSD: fix use-after-free in nfsd4_ssc_setup_dul()
  ARM: dts: at91: sam9x60: fix the ddr clock for sam9x60
  amd-xgbe: TX Flow Ctrl Registers are h/w ver dependent
  amd-xgbe: Delay AN timeout during KR training
  bpf: Fix pointer-leak due to insufficient speculative store bypass mitigation
  phy: rockchip-inno-usb2: Fix missing clk_disable_unprepare() in rockchip_usb2phy_power_on()
  net: nfc: Fix use-after-free in local_cleanup()
  net: wan: Add checks for NULL for utdm in undo_uhdlc_init and unmap_si_regs
  net: enetc: avoid deadlock in enetc_tx_onestep_tstamp()
  sch_htb: Avoid grafting on htb_destroy_class_offload when destroying htb
  gpio: use raw spinlock for gpio chip shadowed data
  gpio: mxc: Protect GPIO irqchip RMW with bgpio spinlock
  gpio: mxc: Always set GPIOs used as interrupt source to INPUT mode
  wifi: rndis_wlan: Prevent buffer overflow in rndis_query_oid
  pinctrl/rockchip: Use temporary variable for struct device
  pinctrl/rockchip: add error handling for pull/drive register getters
  pinctrl: rockchip: fix reading pull type on rk3568
  net: stmmac: Fix queue statistics reading
  net/sched: sch_taprio: fix possible use-after-free
  l2tp: Serialize access to sk_user_data with sk_callback_lock
  l2tp: Don't sleep and disable BH under writer-side sk_callback_lock
  l2tp: convert l2tp_tunnel_list to idr
  l2tp: close all race conditions in l2tp_tunnel_register()
  octeontx2-pf: Avoid use of GFP_KERNEL in atomic context
  net: usb: sr9700: Handle negative len
  net: mdio: validate parameter addr in mdiobus_get_phy()
  HID: check empty report_list in hid_validate_values()
  HID: check empty report_list in bigben_probe()
  net: stmmac: fix invalid call to mdiobus_get_phy()
  pinctrl: rockchip: fix mux route data for rk3568
  HID: revert CHERRY_MOUSE_000C quirk
  usb: gadget: f_fs: Prevent race during ffs_ep0_queue_wait
  usb: gadget: f_fs: Ensure ep0req is dequeued before free_request
  Bluetooth: Fix possible deadlock in rfcomm_sk_state_change
  net: ipa: disable ipa interrupt during suspend
  net/mlx5: E-switch, Fix setting of reserved fields on MODIFY_SCHEDULING_ELEMENT
  net: mlx5: eliminate anonymous module_init & module_exit
  drm/panfrost: fix GENERIC_ATOMIC64 dependency
  dmaengine: Fix double increment of client_count in dma_chan_get()
  net: macb: fix PTP TX timestamp failure due to packet padding
  virtio-net: correctly enable callback during start_xmit
  l2tp: prevent lockdep issue in l2tp_tunnel_register()
  HID: betop: check shape of output reports
  cifs: fix potential deadlock in cache_refresh_path()
  dmaengine: xilinx_dma: call of_node_put() when breaking out of for_each_child_of_node()
  phy: phy-can-transceiver: Skip warning if no "max-bitrate"
  drm/amd/display: fix issues with driver unload
  nvme-pci: fix timeout request state check
  tcp: avoid the lookup process failing to get sk in ehash table
  octeontx2-pf: Fix the use of GFP_KERNEL in atomic context on rt
  ptdma: pt_core_execute_cmd() should use spinlock
  device property: fix of node refcount leak in fwnode_graph_get_next_endpoint()
  w1: fix deadloop in __w1_remove_master_device()
  w1: fix WARNING after calling w1_process()
  driver core: Fix test_async_probe_init saves device in wrong array
  selftests/net: toeplitz: fix race on tpacket_v3 block close
  net: dsa: microchip: ksz9477: port map correction in ALU table entry register
  thermal/core: Remove duplicate information when an error occurs
  thermal/core: Rename 'trips' to 'num_trips'
  thermal: Validate new state in cur_state_store()
  thermal/core: fix error code in __thermal_cooling_device_register()
  thermal: core: call put_device() only after device_register() fails
  net: stmmac: enable all safety features by default
  tcp: fix rate_app_limited to default to 1
  scsi: iscsi: Fix multiple iSCSI session unbind events sent to userspace
  cpufreq: Add Tegra234 to cpufreq-dt-platdev blocklist
  kcsan: test: don't put the expect array on the stack
  cpufreq: Add SM6375 to cpufreq-dt-platdev blocklist
  ASoC: fsl_micfil: Correct the number of steps on SX controls
  net: usb: cdc_ether: add support for Thales Cinterion PLS62-W modem
  drm: Add orientation quirk for Lenovo ideapad D330-10IGL
  s390/debug: add _ASM_S390_ prefix to header guard
  s390: expicitly align _edata and _end symbols on page boundary
  perf/x86/msr: Add Emerald Rapids
  perf/x86/intel/uncore: Add Emerald Rapids
  cpufreq: armada-37xx: stop using 0 as NULL pointer
  ASoC: fsl_ssi: Rename AC'97 streams to avoid collisions with AC'97 CODEC
  ASoC: fsl-asoc-card: Fix naming of AC'97 CODEC widgets
  spi: spidev: remove debug messages that access spidev->spi without locking
  KVM: s390: interrupt: use READ_ONCE() before cmpxchg()
  scsi: hisi_sas: Set a port invalid only if there are no devices attached when refreshing port id
  r8152: add vendor/device ID pair for Microsoft Devkit
  platform/x86: touchscreen_dmi: Add info for the CSL Panther Tab HD
  platform/x86: asus-nb-wmi: Add alternate mapping for KEY_SCREENLOCK
  lockref: stop doing cpu_relax in the cmpxchg loop
  firmware: coreboot: Check size of table entry and use flex-array
  drm/i915: Allow switching away via vga-switcheroo if uninitialized
  Revert "selftests/bpf: check null propagation only neither reg is PTR_TO_BTF_ID"
  drm/i915: Remove unused variable
  x86: ACPI: cstate: Optimize C3 entry on AMD CPUs
  fs: reiserfs: remove useless new_opts in reiserfs_remount
  sysctl: add a new register_sysctl_init() interface
  kernel/panic: move panic sysctls to its own file
  panic: unset panic_on_warn inside panic()
  ubsan: no need to unset panic_on_warn in ubsan_epilogue()
  kasan: no need to unset panic_on_warn in end_report()
  exit: Add and use make_task_dead.
  objtool: Add a missing comma to avoid string concatenation
  hexagon: Fix function name in die()
  h8300: Fix build errors from do_exit() to make_task_dead() transition
  csky: Fix function name in csky_alignment() and die()
  ia64: make IA64_MCA_RECOVERY bool instead of tristate
  panic: Separate sysctl logic from CONFIG_SMP
  exit: Put an upper limit on how often we can oops
  exit: Expose "oops_count" to sysfs
  exit: Allow oops_limit to be disabled
  panic: Consolidate open-coded panic_on_warn checks
  panic: Introduce warn_limit
  panic: Expose "warn_count" to sysfs
  docs: Fix path paste-o for /sys/kernel/warn_count
  exit: Use READ_ONCE() for all oops/warn limit reads
  Bluetooth: hci_sync: cancel cmd_timer if hci_open failed
  drm/amdgpu: complete gfxoff allow signal during suspend without delay
  scsi: hpsa: Fix allocation size for scsi_host_alloc()
  KVM: SVM: fix tsc scaling cache logic
  module: Don't wait for GOING modules
  tracing: Make sure trace_printk() can output as soon as it can be used
  trace_events_hist: add check for return value of 'create_hist_field'
  ftrace/scripts: Update the instructions for ftrace-bisect.sh
  cifs: Fix oops due to uncleared server->smbd_conn in reconnect
  i2c: mv64xxx: Remove shutdown method from driver
  i2c: mv64xxx: Add atomic_xfer method to driver
  ksmbd: add smbd max io size parameter
  ksmbd: add max connections parameter
  ksmbd: do not sign response to session request for guest login
  ksmbd: downgrade ndr version error message to debug
  ksmbd: limit pdu length size according to connection status
  ovl: fail on invalid uid/gid mapping at copy up
  KVM: x86/vmx: Do not skip segment attributes if unusable bit is set
  KVM: arm64: GICv4.1: Fix race with doorbell on VPE activation/deactivation
  thermal: intel: int340x: Protect trip temperature from concurrent updates
  ipv6: fix reachability confirmation with proxy_ndp
  ARM: 9280/1: mm: fix warning on phys_addr_t to void pointer assignment
  EDAC/device: Respect any driver-supplied workqueue polling value
  EDAC/qcom: Do not pass llcc_driv_data as edac_device_ctl_info's pvt_info
  net: mana: Fix IRQ name - add PCI and queue number
  scsi: ufs: core: Fix devfreq deadlocks
  i2c: designware: use casting of u64 in clock multiplication to avoid overflow
  netlink: prevent potential spectre v1 gadgets
  net: fix UaF in netns ops registration error path
  drm/i915/selftest: fix intel_selftest_modify_policy argument types
  netfilter: nft_set_rbtree: Switch to node list walk for overlap detection
  netfilter: nft_set_rbtree: skip elements in transaction from garbage collection
  netlink: annotate data races around nlk->portid
  netlink: annotate data races around dst_portid and dst_group
  netlink: annotate data races around sk_state
  ipv4: prevent potential spectre v1 gadget in ip_metrics_convert()
  ipv4: prevent potential spectre v1 gadget in fib_metrics_match()
  netfilter: conntrack: fix vtag checks for ABORT/SHUTDOWN_COMPLETE
  netrom: Fix use-after-free of a listening socket.
  net/sched: sch_taprio: do not schedule in taprio_reset()
  sctp: fail if no bound addresses can be used for a given scope
  riscv/kprobe: Fix instruction simulation of JALR
  nvme: fix passthrough csi check
  gpio: mxc: Unlock on error path in mxc_flip_edge()
  ravb: Rename "no_ptp_cfg_active" and "ptp_cfg_active" variables
  net: ravb: Fix lack of register setting after system resumed for Gen3
  net: ravb: Fix possible hang if RIS2_QFF1 happen
  net: mctp: mark socks as dead on unhash, prevent re-add
  thermal: intel: int340x: Add locking to int340x_thermal_get_trip_type()
  net/tg3: resolve deadlock in tg3_reset_task() during EEH
  net: mdio-mux-meson-g12a: force internal PHY off on mux switch
  treewide: fix up files incorrectly marked executable
  tools: gpio: fix -c option of gpio-event-mon
  Revert "Input: synaptics - switch touchpad on HP Laptop 15-da3001TU to RMI mode"
  cpufreq: Move to_gov_attr_set() to cpufreq.h
  cpufreq: governor: Use kobject release() method to free dbs_data
  kbuild: Allow kernel installation packaging to override pkg-config
  block: fix and cleanup bio_check_ro
  x86/i8259: Mark legacy PIC interrupts with IRQ_LEVEL
  netfilter: conntrack: unify established states for SCTP paths
  perf/x86/amd: fix potential integer overflow on shift of a int
  Linux 5.15.91

Change-Id: I3349d802533097ac86e5c680fbd40c00c9719ec7
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Documentation/ABI/testing/sysfs-kernel-oops_count (new file)
@@ -0,0 +1,6 @@
+What: /sys/kernel/oops_count
+Date: November 2022
+KernelVersion: 6.2.0
+Contact: Linux Kernel Hardening List <linux-hardening@vger.kernel.org>
+Description:
+	Shows how many times the system has Oopsed since last boot.
Documentation/ABI/testing/sysfs-kernel-warn_count (new file)
@@ -0,0 +1,6 @@
+What: /sys/kernel/warn_count
+Date: November 2022
+KernelVersion: 6.2.0
+Contact: Linux Kernel Hardening List <linux-hardening@vger.kernel.org>
+Description:
+	Shows how many times the system has Warned since last boot.
@@ -682,6 +682,15 @@ This is the default behavior.
 an oops event is detected.
 
 
+oops_limit
+==========
+
+Number of kernel oopses after which the kernel should panic when
+``panic_on_oops`` is not set. Setting this to 0 disables checking
+the count. Setting this to 1 has the same effect as setting
+``panic_on_oops=1``. The default value is 10000.
+
+
 osrelease, ostype & version
 ===========================
 
@@ -1507,6 +1516,16 @@ entry will default to 2 instead of 0.
 2 Unprivileged calls to ``bpf()`` are disabled
 = =============================================================
 
+
+warn_limit
+==========
+
+Number of kernel warnings after which the kernel should panic when
+``panic_on_warn`` is not set. Setting this to 0 disables checking
+the warning count. Setting this to 1 has the same effect as setting
+``panic_on_warn=1``. The default value is 0.
+
+
 watchdog
 ========
 
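A brief aside (editorial illustration, not part of the patch): the pieces above expose four userspace-visible knobs — the read-only counters /sys/kernel/oops_count and /sys/kernel/warn_count, and the oops_limit and warn_limit sysctls under /proc/sys/kernel/. A minimal C sketch of reading them on a kernel that carries this series:

/* Paths come straight from the ABI and sysctl documentation added above. */
#include <stdio.h>

static void show(const char *path)
{
	char buf[64];
	FILE *f = fopen(path, "r");

	if (!f) {
		printf("%-30s (not available)\n", path);
		return;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("%-30s %s", path, buf);	/* value already ends in '\n' */
	fclose(f);
}

int main(void)
{
	show("/sys/kernel/oops_count");		/* oopses since boot */
	show("/sys/kernel/warn_count");		/* warnings since boot */
	show("/proc/sys/kernel/oops_limit");	/* panic after this many oopses; 0 disables */
	show("/proc/sys/kernel/warn_limit");	/* panic after this many warnings; 0 disables */
	return 0;
}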
Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 15
-SUBLEVEL = 90
+SUBLEVEL = 91
 EXTRAVERSION =
 NAME = Trick or Treat
 
@@ -448,6 +448,7 @@ else
 HOSTCC = gcc
 HOSTCXX = g++
 endif
+HOSTPKG_CONFIG = pkg-config
 
 KBUILD_USERHOSTCFLAGS := -Wall -Wmissing-prototypes -Wstrict-prototypes \
 -O2 -fomit-frame-pointer -std=gnu89
@@ -544,7 +545,7 @@ KBUILD_LDFLAGS_MODULE :=
 KBUILD_LDFLAGS :=
 CLANG_FLAGS :=
 
-export ARCH SRCARCH CONFIG_SHELL BASH HOSTCC KBUILD_HOSTCFLAGS CROSS_COMPILE LD CC
+export ARCH SRCARCH CONFIG_SHELL BASH HOSTCC KBUILD_HOSTCFLAGS CROSS_COMPILE LD CC HOSTPKG_CONFIG
 export CPP AR NM STRIP OBJCOPY OBJDUMP READELF PAHOLE RESOLVE_BTFIDS LEX YACC AWK INSTALLKERNEL
 export PERL PYTHON3 CHECK CHECKFLAGS MAKE UTS_MACHINE HOSTCXX
 export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ ZSTD
@@ -192,7 +192,7 @@ die_if_kernel(char * str, struct pt_regs *regs, long err, unsigned long *r9_15)
 local_irq_enable();
 while (1);
 }
-do_exit(SIGSEGV);
+make_task_dead(SIGSEGV);
 }
 
 #ifndef CONFIG_MATHEMU
@@ -577,7 +577,7 @@ do_entUna(void * va, unsigned long opcode, unsigned long reg,
 
 printk("Bad unaligned kernel access at %016lx: %p %lx %lu\n",
 pc, va, opcode, reg);
-do_exit(SIGSEGV);
+make_task_dead(SIGSEGV);
 
 got_exception:
 /* Ok, we caught the exception, but we don't want it. Is there
@@ -632,7 +632,7 @@ got_exception:
 local_irq_enable();
 while (1);
 }
-do_exit(SIGSEGV);
+make_task_dead(SIGSEGV);
 }
 
 /*
@@ -204,7 +204,7 @@ retry:
 printk(KERN_ALERT "Unable to handle kernel paging request at "
 "virtual address %016lx\n", address);
 die_if_kernel("Oops", regs, cause, (unsigned long*)regs - 16);
-do_exit(SIGKILL);
+make_task_dead(SIGKILL);
 
 /* We ran out of memory, or some other thing happened to us that
 made us unable to handle the page fault gracefully. */
@@ -632,7 +632,6 @@
 &uart1 {
 pinctrl-names = "default";
 pinctrl-0 = <&pinctrl_uart1>;
-uart-has-rtscts;
 rts-gpios = <&gpio7 1 GPIO_ACTIVE_HIGH>;
 status = "okay";
 };
@@ -32,7 +32,7 @@
 };
 
 &i2c2 {
-clock_frequency = <100000>;
+clock-frequency = <100000>;
 pinctrl-names = "default";
 pinctrl-0 = <&pinctrl_i2c2>;
 status = "okay";
@@ -32,7 +32,7 @@
 };
 
 &i2c1 {
-clock_frequency = <100000>;
+clock-frequency = <100000>;
 pinctrl-names = "default";
 pinctrl-0 = <&pinctrl_i2c1>;
 status = "okay";
@@ -52,7 +52,7 @@
 };
 
 &i2c4 {
-clock_frequency = <100000>;
+clock-frequency = <100000>;
 pinctrl-names = "default";
 pinctrl-0 = <&pinctrl_i2c1>;
 status = "okay";
@@ -43,7 +43,7 @@
 };
 
 &i2c1 {
-clock_frequency = <100000>;
+clock-frequency = <100000>;
 pinctrl-names = "default";
 pinctrl-0 = <&pinctrl_i2c1>;
 status = "okay";
@@ -64,7 +64,7 @@
 };
 
 &i2c2 {
-clock_frequency = <100000>;
+clock-frequency = <100000>;
 pinctrl-names = "default";
 pinctrl-0 = <&pinctrl_i2c2>;
 status = "okay";
@@ -567,7 +567,7 @@
 mpddrc: mpddrc@ffffe800 {
 compatible = "microchip,sam9x60-ddramc", "atmel,sama5d3-ddramc";
 reg = <0xffffe800 0x200>;
-clocks = <&pmc PMC_TYPE_SYSTEM 2>, <&pmc PMC_TYPE_CORE PMC_MCK>;
+clocks = <&pmc PMC_TYPE_SYSTEM 2>, <&pmc PMC_TYPE_PERIPHERAL 49>;
 clock-names = "ddrck", "mpddr";
 };
 
@@ -334,7 +334,7 @@ static void oops_end(unsigned long flags, struct pt_regs *regs, int signr)
 if (panic_on_oops)
 panic("Fatal exception");
 if (signr)
-do_exit(signr);
+make_task_dead(signr);
 }
 
 /*
@@ -23,6 +23,7 @@ static int mx25_read_cpu_rev(void)
 
 np = of_find_compatible_node(NULL, NULL, "fsl,imx25-iim");
 iim_base = of_iomap(np, 0);
+of_node_put(np);
 BUG_ON(!iim_base);
 rev = readl(iim_base + MXC_IIMSREV);
 iounmap(iim_base);
@@ -28,6 +28,7 @@ static int mx27_read_cpu_rev(void)
 
 np = of_find_compatible_node(NULL, NULL, "fsl,imx27-ccm");
 ccm_base = of_iomap(np, 0);
+of_node_put(np);
 BUG_ON(!ccm_base);
 /*
 * now we have access to the IO registers. As we need
@@ -39,6 +39,7 @@ static int mx31_read_cpu_rev(void)
 
 np = of_find_compatible_node(NULL, NULL, "fsl,imx31-iim");
 iim_base = of_iomap(np, 0);
+of_node_put(np);
 BUG_ON(!iim_base);
 
 /* read SREV register from IIM module */
@@ -21,6 +21,7 @@ static int mx35_read_cpu_rev(void)
 
 np = of_find_compatible_node(NULL, NULL, "fsl,imx35-iim");
 iim_base = of_iomap(np, 0);
+of_node_put(np);
 BUG_ON(!iim_base);
 
 rev = imx_readl(iim_base + MXC_IIMSREV);
@@ -28,6 +28,7 @@ static u32 imx5_read_srev_reg(const char *compat)
 
 np = of_find_compatible_node(NULL, NULL, compat);
 iim_base = of_iomap(np, 0);
+of_node_put(np);
 WARN_ON(!iim_base);
 
 srev = readl(iim_base + IIM_SREV) & 0xff;
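For context on the five hunks above (an editorial note, not part of the patch): of_find_compatible_node() hands back a device_node with its reference count raised, and nothing at these call sites needs the node once the register window is mapped, so each gains an of_node_put(). A minimal sketch of the resulting pattern; "vendor,example-iim" is a made-up compatible string:

#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_address.h>

static u32 example_read_srev(void)
{
	struct device_node *np;
	void __iomem *base;
	u32 rev;

	np = of_find_compatible_node(NULL, NULL, "vendor,example-iim");
	base = of_iomap(np, 0);
	of_node_put(np);	/* drop the reference taken by of_find_compatible_node() */
	if (WARN_ON(!base))
		return 0;

	rev = readl(base);	/* silicon revision register, as in the hunks above */
	iounmap(base);
	return rev;
}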
@@ -125,7 +125,7 @@ __do_kernel_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
 show_pte(KERN_ALERT, mm, addr);
 die("Oops", regs, fsr);
 bust_spinlocks(0);
-do_exit(SIGKILL);
+make_task_dead(SIGKILL);
 }
 
 /*
@@ -161,7 +161,7 @@ void __init paging_init(const struct machine_desc *mdesc)
 mpu_setup();
 
 /* allocate the zero page. */
-zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+zero_page = (void *)memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 if (!zero_page)
 panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
 __func__, PAGE_SIZE, PAGE_SIZE);
@@ -70,7 +70,7 @@
 &ecspi2 {
 pinctrl-names = "default";
 pinctrl-0 = <&pinctrl_espi2>;
-cs-gpios = <&gpio5 9 GPIO_ACTIVE_LOW>;
+cs-gpios = <&gpio5 13 GPIO_ACTIVE_LOW>;
 status = "okay";
 
 eeprom@0 {
@@ -186,7 +186,7 @@
 MX8MM_IOMUXC_ECSPI2_SCLK_ECSPI2_SCLK 0x82
 MX8MM_IOMUXC_ECSPI2_MOSI_ECSPI2_MOSI 0x82
 MX8MM_IOMUXC_ECSPI2_MISO_ECSPI2_MISO 0x82
-MX8MM_IOMUXC_ECSPI1_SS0_GPIO5_IO9 0x41
+MX8MM_IOMUXC_ECSPI2_SS0_GPIO5_IO13 0x41
 >;
 };
 
@@ -675,6 +675,7 @@
 &usbotg2 {
 dr_mode = "host";
 vbus-supply = <&reg_usb2_vbus>;
+over-current-active-low;
 status = "okay";
 };
 
|
|||||||
@@ -98,7 +98,6 @@
|
|||||||
|
|
||||||
regulators {
|
regulators {
|
||||||
buck1: BUCK1 {
|
buck1: BUCK1 {
|
||||||
regulator-compatible = "BUCK1";
|
|
||||||
regulator-min-microvolt = <600000>;
|
regulator-min-microvolt = <600000>;
|
||||||
regulator-max-microvolt = <2187500>;
|
regulator-max-microvolt = <2187500>;
|
||||||
regulator-boot-on;
|
regulator-boot-on;
|
||||||
@@ -107,7 +106,6 @@
|
|||||||
};
|
};
|
||||||
|
|
||||||
buck2: BUCK2 {
|
buck2: BUCK2 {
|
||||||
regulator-compatible = "BUCK2";
|
|
||||||
regulator-min-microvolt = <600000>;
|
regulator-min-microvolt = <600000>;
|
||||||
regulator-max-microvolt = <2187500>;
|
regulator-max-microvolt = <2187500>;
|
||||||
regulator-boot-on;
|
regulator-boot-on;
|
||||||
@@ -116,7 +114,6 @@
|
|||||||
};
|
};
|
||||||
|
|
||||||
buck4: BUCK4 {
|
buck4: BUCK4 {
|
||||||
regulator-compatible = "BUCK4";
|
|
||||||
regulator-min-microvolt = <600000>;
|
regulator-min-microvolt = <600000>;
|
||||||
regulator-max-microvolt = <3400000>;
|
regulator-max-microvolt = <3400000>;
|
||||||
regulator-boot-on;
|
regulator-boot-on;
|
||||||
@@ -124,7 +121,6 @@
|
|||||||
};
|
};
|
||||||
|
|
||||||
buck5: BUCK5 {
|
buck5: BUCK5 {
|
||||||
regulator-compatible = "BUCK5";
|
|
||||||
regulator-min-microvolt = <600000>;
|
regulator-min-microvolt = <600000>;
|
||||||
regulator-max-microvolt = <3400000>;
|
regulator-max-microvolt = <3400000>;
|
||||||
regulator-boot-on;
|
regulator-boot-on;
|
||||||
@@ -132,7 +128,6 @@
|
|||||||
};
|
};
|
||||||
|
|
||||||
buck6: BUCK6 {
|
buck6: BUCK6 {
|
||||||
regulator-compatible = "BUCK6";
|
|
||||||
regulator-min-microvolt = <600000>;
|
regulator-min-microvolt = <600000>;
|
||||||
regulator-max-microvolt = <3400000>;
|
regulator-max-microvolt = <3400000>;
|
||||||
regulator-boot-on;
|
regulator-boot-on;
|
||||||
@@ -140,7 +135,6 @@
|
|||||||
};
|
};
|
||||||
|
|
||||||
ldo1: LDO1 {
|
ldo1: LDO1 {
|
||||||
regulator-compatible = "LDO1";
|
|
||||||
regulator-min-microvolt = <1600000>;
|
regulator-min-microvolt = <1600000>;
|
||||||
regulator-max-microvolt = <3300000>;
|
regulator-max-microvolt = <3300000>;
|
||||||
regulator-boot-on;
|
regulator-boot-on;
|
||||||
@@ -148,7 +142,6 @@
|
|||||||
};
|
};
|
||||||
|
|
||||||
ldo2: LDO2 {
|
ldo2: LDO2 {
|
||||||
regulator-compatible = "LDO2";
|
|
||||||
regulator-min-microvolt = <800000>;
|
regulator-min-microvolt = <800000>;
|
||||||
regulator-max-microvolt = <1150000>;
|
regulator-max-microvolt = <1150000>;
|
||||||
regulator-boot-on;
|
regulator-boot-on;
|
||||||
@@ -156,7 +149,6 @@
|
|||||||
};
|
};
|
||||||
|
|
||||||
ldo3: LDO3 {
|
ldo3: LDO3 {
|
||||||
regulator-compatible = "LDO3";
|
|
||||||
regulator-min-microvolt = <800000>;
|
regulator-min-microvolt = <800000>;
|
||||||
regulator-max-microvolt = <3300000>;
|
regulator-max-microvolt = <3300000>;
|
||||||
regulator-boot-on;
|
regulator-boot-on;
|
||||||
@@ -164,7 +156,6 @@
|
|||||||
};
|
};
|
||||||
|
|
||||||
ldo4: LDO4 {
|
ldo4: LDO4 {
|
||||||
regulator-compatible = "LDO4";
|
|
||||||
regulator-min-microvolt = <800000>;
|
regulator-min-microvolt = <800000>;
|
||||||
regulator-max-microvolt = <3300000>;
|
regulator-max-microvolt = <3300000>;
|
||||||
regulator-boot-on;
|
regulator-boot-on;
|
||||||
@@ -172,7 +163,6 @@
|
|||||||
};
|
};
|
||||||
|
|
||||||
ldo5: LDO5 {
|
ldo5: LDO5 {
|
||||||
regulator-compatible = "LDO5";
|
|
||||||
regulator-min-microvolt = <1800000>;
|
regulator-min-microvolt = <1800000>;
|
||||||
regulator-max-microvolt = <3300000>;
|
regulator-max-microvolt = <3300000>;
|
||||||
};
|
};
|
||||||
|
|||||||
@@ -11,6 +11,12 @@
|
|||||||
#include <dt-bindings/gpio/gpio.h>
|
#include <dt-bindings/gpio/gpio.h>
|
||||||
#include <dt-bindings/input/gpio-keys.h>
|
#include <dt-bindings/input/gpio-keys.h>
|
||||||
|
|
||||||
|
/delete-node/ &adsp_mem;
|
||||||
|
/delete-node/ &audio_mem;
|
||||||
|
/delete-node/ &mpss_mem;
|
||||||
|
/delete-node/ &peripheral_region;
|
||||||
|
/delete-node/ &rmtfs_mem;
|
||||||
|
|
||||||
/ {
|
/ {
|
||||||
model = "Xiaomi Mi 4C";
|
model = "Xiaomi Mi 4C";
|
||||||
compatible = "xiaomi,libra", "qcom,msm8992";
|
compatible = "xiaomi,libra", "qcom,msm8992";
|
||||||
@@ -60,25 +66,67 @@
|
|||||||
#size-cells = <2>;
|
#size-cells = <2>;
|
||||||
ranges;
|
ranges;
|
||||||
|
|
||||||
/* This is for getting crash logs using Android downstream kernels */
|
memory_hole: hole@6400000 {
|
||||||
|
reg = <0 0x06400000 0 0x600000>;
|
||||||
|
no-map;
|
||||||
|
};
|
||||||
|
|
||||||
|
memory_hole2: hole2@6c00000 {
|
||||||
|
reg = <0 0x06c00000 0 0x2400000>;
|
||||||
|
no-map;
|
||||||
|
};
|
||||||
|
|
||||||
|
mpss_mem: mpss@9000000 {
|
||||||
|
reg = <0 0x09000000 0 0x5a00000>;
|
||||||
|
no-map;
|
||||||
|
};
|
||||||
|
|
||||||
|
tzapp: tzapp@ea00000 {
|
||||||
|
reg = <0 0x0ea00000 0 0x1900000>;
|
||||||
|
no-map;
|
||||||
|
};
|
||||||
|
|
||||||
|
mdm_rfsa_mem: mdm-rfsa@ca0b0000 {
|
||||||
|
reg = <0 0xca0b0000 0 0x10000>;
|
||||||
|
no-map;
|
||||||
|
};
|
||||||
|
|
||||||
|
rmtfs_mem: rmtfs@ca100000 {
|
||||||
|
compatible = "qcom,rmtfs-mem";
|
||||||
|
reg = <0 0xca100000 0 0x180000>;
|
||||||
|
no-map;
|
||||||
|
|
||||||
|
qcom,client-id = <1>;
|
||||||
|
};
|
||||||
|
|
||||||
|
audio_mem: audio@cb400000 {
|
||||||
|
reg = <0 0xcb000000 0 0x400000>;
|
||||||
|
no-mem;
|
||||||
|
};
|
||||||
|
|
||||||
|
qseecom_mem: qseecom@cb400000 {
|
||||||
|
reg = <0 0xcb400000 0 0x1c00000>;
|
||||||
|
no-mem;
|
||||||
|
};
|
||||||
|
|
||||||
|
adsp_rfsa_mem: adsp-rfsa@cd000000 {
|
||||||
|
reg = <0 0xcd000000 0 0x10000>;
|
||||||
|
no-map;
|
||||||
|
};
|
||||||
|
|
||||||
|
sensor_rfsa_mem: sensor-rfsa@cd010000 {
|
||||||
|
reg = <0 0xcd010000 0 0x10000>;
|
||||||
|
no-map;
|
||||||
|
};
|
||||||
|
|
||||||
ramoops@dfc00000 {
|
ramoops@dfc00000 {
|
||||||
compatible = "ramoops";
|
compatible = "ramoops";
|
||||||
reg = <0x0 0xdfc00000 0x0 0x40000>;
|
reg = <0 0xdfc00000 0 0x40000>;
|
||||||
console-size = <0x10000>;
|
console-size = <0x10000>;
|
||||||
record-size = <0x10000>;
|
record-size = <0x10000>;
|
||||||
ftrace-size = <0x10000>;
|
ftrace-size = <0x10000>;
|
||||||
pmsg-size = <0x20000>;
|
pmsg-size = <0x20000>;
|
||||||
};
|
};
|
||||||
|
|
||||||
modem_region: modem_region@9000000 {
|
|
||||||
reg = <0x0 0x9000000 0x0 0x5a00000>;
|
|
||||||
no-map;
|
|
||||||
};
|
|
||||||
|
|
||||||
tzapp: modem_region@ea00000 {
|
|
||||||
reg = <0x0 0xea00000 0x0 0x1900000>;
|
|
||||||
no-map;
|
|
||||||
};
|
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
|
|
||||||
@@ -120,9 +168,21 @@
|
|||||||
status = "okay";
|
status = "okay";
|
||||||
};
|
};
|
||||||
|
|
||||||
&peripheral_region {
|
&pm8994_spmi_regulators {
|
||||||
reg = <0x0 0x7400000 0x0 0x1c00000>;
|
VDD_APC0: s8 {
|
||||||
no-map;
|
regulator-min-microvolt = <680000>;
|
||||||
|
regulator-max-microvolt = <1180000>;
|
||||||
|
regulator-always-on;
|
||||||
|
regulator-boot-on;
|
||||||
|
};
|
||||||
|
|
||||||
|
/* APC1 is 3-phase, but quoting downstream, s11 is "the gang leader" */
|
||||||
|
VDD_APC1: s11 {
|
||||||
|
regulator-min-microvolt = <700000>;
|
||||||
|
regulator-max-microvolt = <1225000>;
|
||||||
|
regulator-always-on;
|
||||||
|
regulator-boot-on;
|
||||||
|
};
|
||||||
};
|
};
|
||||||
|
|
||||||
&rpm_requests {
|
&rpm_requests {
|
||||||
|
|||||||
@@ -14,10 +14,6 @@
|
|||||||
compatible = "qcom,rpmcc-msm8992";
|
compatible = "qcom,rpmcc-msm8992";
|
||||||
};
|
};
|
||||||
|
|
||||||
&tcsr_mutex {
|
|
||||||
compatible = "qcom,sfpb-mutex";
|
|
||||||
};
|
|
||||||
|
|
||||||
&timer {
|
&timer {
|
||||||
interrupts = <GIC_PPI 2 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
|
interrupts = <GIC_PPI 2 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
|
||||||
<GIC_PPI 3 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
|
<GIC_PPI 3 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
|
||||||
|
|||||||
@@ -237,7 +237,7 @@ void die(const char *str, struct pt_regs *regs, int err)
|
|||||||
raw_spin_unlock_irqrestore(&die_lock, flags);
|
raw_spin_unlock_irqrestore(&die_lock, flags);
|
||||||
|
|
||||||
if (ret != NOTIFY_STOP)
|
if (ret != NOTIFY_STOP)
|
||||||
do_exit(SIGSEGV);
|
make_task_dead(SIGSEGV);
|
||||||
}
|
}
|
||||||
|
|
||||||
static void arm64_show_signal(int signo, const char *str)
|
static void arm64_show_signal(int signo, const char *str)
|
||||||
|
|||||||
@@ -350,26 +350,23 @@ retry:
|
|||||||
* The deactivation of the doorbell interrupt will trigger the
|
* The deactivation of the doorbell interrupt will trigger the
|
||||||
* unmapping of the associated vPE.
|
* unmapping of the associated vPE.
|
||||||
*/
|
*/
|
||||||
static void unmap_all_vpes(struct vgic_dist *dist)
|
static void unmap_all_vpes(struct kvm *kvm)
|
||||||
{
|
{
|
||||||
struct irq_desc *desc;
|
struct vgic_dist *dist = &kvm->arch.vgic;
|
||||||
int i;
|
int i;
|
||||||
|
|
||||||
for (i = 0; i < dist->its_vm.nr_vpes; i++) {
|
for (i = 0; i < dist->its_vm.nr_vpes; i++)
|
||||||
desc = irq_to_desc(dist->its_vm.vpes[i]->irq);
|
free_irq(dist->its_vm.vpes[i]->irq, kvm_get_vcpu(kvm, i));
|
||||||
irq_domain_deactivate_irq(irq_desc_get_irq_data(desc));
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
static void map_all_vpes(struct vgic_dist *dist)
|
static void map_all_vpes(struct kvm *kvm)
|
||||||
{
|
{
|
||||||
struct irq_desc *desc;
|
struct vgic_dist *dist = &kvm->arch.vgic;
|
||||||
int i;
|
int i;
|
||||||
|
|
||||||
for (i = 0; i < dist->its_vm.nr_vpes; i++) {
|
for (i = 0; i < dist->its_vm.nr_vpes; i++)
|
||||||
desc = irq_to_desc(dist->its_vm.vpes[i]->irq);
|
WARN_ON(vgic_v4_request_vpe_irq(kvm_get_vcpu(kvm, i),
|
||||||
irq_domain_activate_irq(irq_desc_get_irq_data(desc), false);
|
dist->its_vm.vpes[i]->irq));
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
@@ -394,7 +391,7 @@ int vgic_v3_save_pending_tables(struct kvm *kvm)
|
|||||||
* and enabling of the doorbells have already been done.
|
* and enabling of the doorbells have already been done.
|
||||||
*/
|
*/
|
||||||
if (kvm_vgic_global_state.has_gicv4_1) {
|
if (kvm_vgic_global_state.has_gicv4_1) {
|
||||||
unmap_all_vpes(dist);
|
unmap_all_vpes(kvm);
|
||||||
vlpi_avail = true;
|
vlpi_avail = true;
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -444,7 +441,7 @@ int vgic_v3_save_pending_tables(struct kvm *kvm)
|
|||||||
|
|
||||||
out:
|
out:
|
||||||
if (vlpi_avail)
|
if (vlpi_avail)
|
||||||
map_all_vpes(dist);
|
map_all_vpes(kvm);
|
||||||
|
|
||||||
return ret;
|
return ret;
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -222,6 +222,11 @@ void vgic_v4_get_vlpi_state(struct vgic_irq *irq, bool *val)
|
|||||||
*val = !!(*ptr & mask);
|
*val = !!(*ptr & mask);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
int vgic_v4_request_vpe_irq(struct kvm_vcpu *vcpu, int irq)
|
||||||
|
{
|
||||||
|
return request_irq(irq, vgic_v4_doorbell_handler, 0, "vcpu", vcpu);
|
||||||
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* vgic_v4_init - Initialize the GICv4 data structures
|
* vgic_v4_init - Initialize the GICv4 data structures
|
||||||
* @kvm: Pointer to the VM being initialized
|
* @kvm: Pointer to the VM being initialized
|
||||||
@@ -283,8 +288,7 @@ int vgic_v4_init(struct kvm *kvm)
|
|||||||
irq_flags &= ~IRQ_NOAUTOEN;
|
irq_flags &= ~IRQ_NOAUTOEN;
|
||||||
irq_set_status_flags(irq, irq_flags);
|
irq_set_status_flags(irq, irq_flags);
|
||||||
|
|
||||||
ret = request_irq(irq, vgic_v4_doorbell_handler,
|
ret = vgic_v4_request_vpe_irq(vcpu, irq);
|
||||||
0, "vcpu", vcpu);
|
|
||||||
if (ret) {
|
if (ret) {
|
||||||
kvm_err("failed to allocate vcpu IRQ%d\n", irq);
|
kvm_err("failed to allocate vcpu IRQ%d\n", irq);
|
||||||
/*
|
/*
|
||||||
|
|||||||
@@ -329,5 +329,6 @@ int vgic_v4_init(struct kvm *kvm);
|
|||||||
void vgic_v4_teardown(struct kvm *kvm);
|
void vgic_v4_teardown(struct kvm *kvm);
|
||||||
void vgic_v4_configure_vsgis(struct kvm *kvm);
|
void vgic_v4_configure_vsgis(struct kvm *kvm);
|
||||||
void vgic_v4_get_vlpi_state(struct vgic_irq *irq, bool *val);
|
void vgic_v4_get_vlpi_state(struct vgic_irq *irq, bool *val);
|
||||||
|
int vgic_v4_request_vpe_irq(struct kvm_vcpu *vcpu, int irq);
|
||||||
|
|
||||||
#endif
|
#endif
|
||||||
|
|||||||
@@ -319,7 +319,7 @@ static void die_kernel_fault(const char *msg, unsigned long addr,
|
|||||||
show_pte(addr);
|
show_pte(addr);
|
||||||
die("Oops", regs, esr);
|
die("Oops", regs, esr);
|
||||||
bust_spinlocks(0);
|
bust_spinlocks(0);
|
||||||
do_exit(SIGKILL);
|
make_task_dead(SIGKILL);
|
||||||
}
|
}
|
||||||
|
|
||||||
#ifdef CONFIG_KASAN_HW_TAGS
|
#ifdef CONFIG_KASAN_HW_TAGS
|
||||||
|
|||||||
@@ -294,7 +294,7 @@ bad_area:
|
|||||||
__func__, opcode, rz, rx, imm, addr);
|
__func__, opcode, rz, rx, imm, addr);
|
||||||
show_regs(regs);
|
show_regs(regs);
|
||||||
bust_spinlocks(0);
|
bust_spinlocks(0);
|
||||||
do_exit(SIGKILL);
|
make_task_dead(SIGKILL);
|
||||||
}
|
}
|
||||||
|
|
||||||
force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)addr);
|
force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)addr);
|
||||||
|
|||||||
@@ -109,7 +109,7 @@ void die(struct pt_regs *regs, const char *str)
|
|||||||
if (panic_on_oops)
|
if (panic_on_oops)
|
||||||
panic("Fatal exception");
|
panic("Fatal exception");
|
||||||
if (ret != NOTIFY_STOP)
|
if (ret != NOTIFY_STOP)
|
||||||
do_exit(SIGSEGV);
|
make_task_dead(SIGSEGV);
|
||||||
}
|
}
|
||||||
|
|
||||||
void do_trap(struct pt_regs *regs, int signo, int code, unsigned long addr)
|
void do_trap(struct pt_regs *regs, int signo, int code, unsigned long addr)
|
||||||
|
|||||||
@@ -67,7 +67,7 @@ static inline void no_context(struct pt_regs *regs, unsigned long addr)
|
|||||||
pr_alert("Unable to handle kernel paging request at virtual "
|
pr_alert("Unable to handle kernel paging request at virtual "
|
||||||
"addr 0x%08lx, pc: 0x%08lx\n", addr, regs->pc);
|
"addr 0x%08lx, pc: 0x%08lx\n", addr, regs->pc);
|
||||||
die(regs, "Oops");
|
die(regs, "Oops");
|
||||||
do_exit(SIGKILL);
|
make_task_dead(SIGKILL);
|
||||||
}
|
}
|
||||||
|
|
||||||
static inline void mm_fault_error(struct pt_regs *regs, unsigned long addr, vm_fault_t fault)
|
static inline void mm_fault_error(struct pt_regs *regs, unsigned long addr, vm_fault_t fault)
|
||||||
|
|||||||
@@ -17,6 +17,7 @@
|
|||||||
#include <linux/types.h>
|
#include <linux/types.h>
|
||||||
#include <linux/sched.h>
|
#include <linux/sched.h>
|
||||||
#include <linux/sched/debug.h>
|
#include <linux/sched/debug.h>
|
||||||
|
#include <linux/sched/task.h>
|
||||||
#include <linux/mm_types.h>
|
#include <linux/mm_types.h>
|
||||||
#include <linux/kernel.h>
|
#include <linux/kernel.h>
|
||||||
#include <linux/errno.h>
|
#include <linux/errno.h>
|
||||||
@@ -106,7 +107,7 @@ void die(const char *str, struct pt_regs *fp, unsigned long err)
|
|||||||
dump(fp);
|
dump(fp);
|
||||||
|
|
||||||
spin_unlock_irq(&die_lock);
|
spin_unlock_irq(&die_lock);
|
||||||
do_exit(SIGSEGV);
|
make_task_dead(SIGSEGV);
|
||||||
}
|
}
|
||||||
|
|
||||||
static int kstack_depth_to_print = 24;
|
static int kstack_depth_to_print = 24;
|
||||||
|
|||||||
@@ -51,7 +51,7 @@ asmlinkage int do_page_fault(struct pt_regs *regs, unsigned long address,
|
|||||||
printk(" at virtual address %08lx\n", address);
|
printk(" at virtual address %08lx\n", address);
|
||||||
if (!user_mode(regs))
|
if (!user_mode(regs))
|
||||||
die("Oops", regs, error_code);
|
die("Oops", regs, error_code);
|
||||||
do_exit(SIGKILL);
|
make_task_dead(SIGKILL);
|
||||||
|
|
||||||
return 1;
|
return 1;
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -214,7 +214,7 @@ int die(const char *str, struct pt_regs *regs, long err)
|
|||||||
panic("Fatal exception");
|
panic("Fatal exception");
|
||||||
|
|
||||||
oops_exit();
|
oops_exit();
|
||||||
do_exit(err);
|
make_task_dead(err);
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
@@ -323,7 +323,7 @@ config ARCH_PROC_KCORE_TEXT
|
|||||||
depends on PROC_KCORE
|
depends on PROC_KCORE
|
||||||
|
|
||||||
config IA64_MCA_RECOVERY
|
config IA64_MCA_RECOVERY
|
||||||
tristate "MCA recovery from errors other than TLB."
|
bool "MCA recovery from errors other than TLB."
|
||||||
|
|
||||||
config IA64_PALINFO
|
config IA64_PALINFO
|
||||||
tristate "/proc/pal support"
|
tristate "/proc/pal support"
|
||||||
|
|||||||
@@ -176,7 +176,7 @@ mca_handler_bh(unsigned long paddr, void *iip, unsigned long ipsr)
|
|||||||
spin_unlock(&mca_bh_lock);
|
spin_unlock(&mca_bh_lock);
|
||||||
|
|
||||||
/* This process is about to be killed itself */
|
/* This process is about to be killed itself */
|
||||||
do_exit(SIGKILL);
|
make_task_dead(SIGKILL);
|
||||||
}
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
|
|||||||
@@ -85,7 +85,7 @@ die (const char *str, struct pt_regs *regs, long err)
|
|||||||
if (panic_on_oops)
|
if (panic_on_oops)
|
||||||
panic("Fatal exception");
|
panic("Fatal exception");
|
||||||
|
|
||||||
do_exit(SIGSEGV);
|
make_task_dead(SIGSEGV);
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
@@ -259,7 +259,7 @@ retry:
|
|||||||
regs = NULL;
|
regs = NULL;
|
||||||
bust_spinlocks(0);
|
bust_spinlocks(0);
|
||||||
if (regs)
|
if (regs)
|
||||||
do_exit(SIGKILL);
|
make_task_dead(SIGKILL);
|
||||||
return;
|
return;
|
||||||
|
|
||||||
out_of_memory:
|
out_of_memory:
|
||||||
|
|||||||
@@ -1131,7 +1131,7 @@ void die_if_kernel (char *str, struct pt_regs *fp, int nr)
|
|||||||
pr_crit("%s: %08x\n", str, nr);
|
pr_crit("%s: %08x\n", str, nr);
|
||||||
show_registers(fp);
|
show_registers(fp);
|
||||||
add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
|
add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
|
||||||
do_exit(SIGSEGV);
|
make_task_dead(SIGSEGV);
|
||||||
}
|
}
|
||||||
|
|
||||||
asmlinkage void set_esp0(unsigned long ssp)
|
asmlinkage void set_esp0(unsigned long ssp)
|
||||||
|
|||||||
@@ -48,7 +48,7 @@ int send_fault_sig(struct pt_regs *regs)
|
|||||||
pr_alert("Unable to handle kernel access");
|
pr_alert("Unable to handle kernel access");
|
||||||
pr_cont(" at virtual address %p\n", addr);
|
pr_cont(" at virtual address %p\n", addr);
|
||||||
die_if_kernel("Oops", regs, 0 /*error_code*/);
|
die_if_kernel("Oops", regs, 0 /*error_code*/);
|
||||||
do_exit(SIGKILL);
|
make_task_dead(SIGKILL);
|
||||||
}
|
}
|
||||||
|
|
||||||
return 1;
|
return 1;
|
||||||
|
|||||||
@@ -44,10 +44,10 @@ void die(const char *str, struct pt_regs *fp, long err)
|
|||||||
pr_warn("Oops: %s, sig: %ld\n", str, err);
|
pr_warn("Oops: %s, sig: %ld\n", str, err);
|
||||||
show_regs(fp);
|
show_regs(fp);
|
||||||
spin_unlock_irq(&die_lock);
|
spin_unlock_irq(&die_lock);
|
||||||
/* do_exit() should take care of panic'ing from an interrupt
|
/* make_task_dead() should take care of panic'ing from an interrupt
|
||||||
* context so we don't handle it here
|
* context so we don't handle it here
|
||||||
*/
|
*/
|
||||||
do_exit(err);
|
make_task_dead(err);
|
||||||
}
|
}
|
||||||
|
|
||||||
/* for user application debugging */
|
/* for user application debugging */
|
||||||
|
|||||||
@@ -416,7 +416,7 @@ void __noreturn die(const char *str, struct pt_regs *regs)
|
|||||||
if (regs && kexec_should_crash(current))
|
if (regs && kexec_should_crash(current))
|
||||||
crash_kexec(regs);
|
crash_kexec(regs);
|
||||||
|
|
||||||
do_exit(sig);
|
make_task_dead(sig);
|
||||||
}
|
}
|
||||||
|
|
||||||
extern struct exception_table_entry __start___dbe_table[];
|
extern struct exception_table_entry __start___dbe_table[];
|
||||||
|
|||||||
@@ -223,7 +223,7 @@ inline void handle_fpu_exception(struct pt_regs *regs)
|
|||||||
}
|
}
|
||||||
} else if (fpcsr & FPCSR_mskRIT) {
|
} else if (fpcsr & FPCSR_mskRIT) {
|
||||||
if (!user_mode(regs))
|
if (!user_mode(regs))
|
||||||
do_exit(SIGILL);
|
make_task_dead(SIGILL);
|
||||||
si_signo = SIGILL;
|
si_signo = SIGILL;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
@@ -141,7 +141,7 @@ void die(const char *str, struct pt_regs *regs, int err)
|
|||||||
|
|
||||||
bust_spinlocks(0);
|
bust_spinlocks(0);
|
||||||
spin_unlock_irq(&die_lock);
|
spin_unlock_irq(&die_lock);
|
||||||
do_exit(SIGSEGV);
|
make_task_dead(SIGSEGV);
|
||||||
}
|
}
|
||||||
|
|
||||||
EXPORT_SYMBOL(die);
|
EXPORT_SYMBOL(die);
|
||||||
@@ -240,7 +240,7 @@ void unhandled_interruption(struct pt_regs *regs)
|
|||||||
pr_emerg("unhandled_interruption\n");
|
pr_emerg("unhandled_interruption\n");
|
||||||
show_regs(regs);
|
show_regs(regs);
|
||||||
if (!user_mode(regs))
|
if (!user_mode(regs))
|
||||||
do_exit(SIGKILL);
|
make_task_dead(SIGKILL);
|
||||||
force_sig(SIGKILL);
|
force_sig(SIGKILL);
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -251,7 +251,7 @@ void unhandled_exceptions(unsigned long entry, unsigned long addr,
|
|||||||
addr, type);
|
addr, type);
|
||||||
show_regs(regs);
|
show_regs(regs);
|
||||||
if (!user_mode(regs))
|
if (!user_mode(regs))
|
||||||
do_exit(SIGKILL);
|
make_task_dead(SIGKILL);
|
||||||
force_sig(SIGKILL);
|
force_sig(SIGKILL);
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -278,7 +278,7 @@ void do_revinsn(struct pt_regs *regs)
|
|||||||
pr_emerg("Reserved Instruction\n");
|
pr_emerg("Reserved Instruction\n");
|
||||||
show_regs(regs);
|
show_regs(regs);
|
||||||
if (!user_mode(regs))
|
if (!user_mode(regs))
|
||||||
do_exit(SIGILL);
|
make_task_dead(SIGILL);
|
||||||
force_sig(SIGILL);
|
force_sig(SIGILL);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
@@ -37,10 +37,10 @@ void die(const char *str, struct pt_regs *regs, long err)
|
|||||||
show_regs(regs);
|
show_regs(regs);
|
||||||
spin_unlock_irq(&die_lock);
|
spin_unlock_irq(&die_lock);
|
||||||
/*
|
/*
|
||||||
* do_exit() should take care of panic'ing from an interrupt
|
* make_task_dead() should take care of panic'ing from an interrupt
|
||||||
* context so we don't handle it here
|
* context so we don't handle it here
|
||||||
*/
|
*/
|
||||||
do_exit(err);
|
make_task_dead(err);
|
||||||
}
|
}
|
||||||
|
|
||||||
void _exception(int signo, struct pt_regs *regs, int code, unsigned long addr)
|
void _exception(int signo, struct pt_regs *regs, int code, unsigned long addr)
|
||||||
|
|||||||
@@ -212,7 +212,7 @@ void die(const char *str, struct pt_regs *regs, long err)
|
|||||||
__asm__ __volatile__("l.nop 1");
|
__asm__ __volatile__("l.nop 1");
|
||||||
do {} while (1);
|
do {} while (1);
|
||||||
#endif
|
#endif
|
||||||
do_exit(SIGSEGV);
|
make_task_dead(SIGSEGV);
|
||||||
}
|
}
|
||||||
|
|
||||||
/* This is normally the 'Oops' routine */
|
/* This is normally the 'Oops' routine */
|
||||||
|
|||||||
@@ -268,7 +268,7 @@ void die_if_kernel(char *str, struct pt_regs *regs, long err)
|
|||||||
panic("Fatal exception");
|
panic("Fatal exception");
|
||||||
|
|
||||||
oops_exit();
|
oops_exit();
|
||||||
do_exit(SIGSEGV);
|
make_task_dead(SIGSEGV);
|
||||||
}
|
}
|
||||||
|
|
||||||
/* gdb uses break 4,8 */
|
/* gdb uses break 4,8 */
|
||||||
|
|||||||
@@ -245,7 +245,7 @@ static void oops_end(unsigned long flags, struct pt_regs *regs,
|
|||||||
|
|
||||||
if (panic_on_oops)
|
if (panic_on_oops)
|
||||||
panic("Fatal exception");
|
panic("Fatal exception");
|
||||||
do_exit(signr);
|
make_task_dead(signr);
|
||||||
}
|
}
|
||||||
NOKPROBE_SYMBOL(oops_end);
|
NOKPROBE_SYMBOL(oops_end);
|
||||||
|
|
||||||
@@ -792,9 +792,9 @@ int machine_check_generic(struct pt_regs *regs)
|
|||||||
void die_mce(const char *str, struct pt_regs *regs, long err)
|
void die_mce(const char *str, struct pt_regs *regs, long err)
|
||||||
{
|
{
|
||||||
/*
|
/*
|
||||||
* The machine check wants to kill the interrupted context, but
|
* The machine check wants to kill the interrupted context,
|
||||||
* do_exit() checks for in_interrupt() and panics in that case, so
|
* but make_task_dead() checks for in_interrupt() and panics
|
||||||
* exit the irq/nmi before calling die.
|
* in that case, so exit the irq/nmi before calling die.
|
||||||
*/
|
*/
|
||||||
if (in_nmi())
|
if (in_nmi())
|
||||||
nmi_exit();
|
nmi_exit();
|
||||||
|
|||||||
@@ -71,11 +71,11 @@ bool __kprobes simulate_jalr(u32 opcode, unsigned long addr, struct pt_regs *reg
|
|||||||
u32 rd_index = (opcode >> 7) & 0x1f;
|
u32 rd_index = (opcode >> 7) & 0x1f;
|
||||||
u32 rs1_index = (opcode >> 15) & 0x1f;
|
u32 rs1_index = (opcode >> 15) & 0x1f;
|
||||||
|
|
||||||
ret = rv_insn_reg_set_val(regs, rd_index, addr + 4);
|
ret = rv_insn_reg_get_val(regs, rs1_index, &base_addr);
|
||||||
if (!ret)
|
if (!ret)
|
||||||
return ret;
|
return ret;
|
||||||
|
|
||||||
ret = rv_insn_reg_get_val(regs, rs1_index, &base_addr);
|
ret = rv_insn_reg_set_val(regs, rd_index, addr + 4);
|
||||||
if (!ret)
|
if (!ret)
|
||||||
return ret;
|
return ret;
|
||||||
|
|
||||||
|
|||||||
@@ -59,7 +59,7 @@ void die(struct pt_regs *regs, const char *str)
|
|||||||
if (panic_on_oops)
|
if (panic_on_oops)
|
||||||
panic("Fatal exception");
|
panic("Fatal exception");
|
||||||
if (ret != NOTIFY_STOP)
|
if (ret != NOTIFY_STOP)
|
||||||
do_exit(SIGSEGV);
|
make_task_dead(SIGSEGV);
|
||||||
}
|
}
|
||||||
|
|
||||||
void do_trap(struct pt_regs *regs, int signo, int code, unsigned long addr)
|
void do_trap(struct pt_regs *regs, int signo, int code, unsigned long addr)
|
||||||
|
|||||||
@@ -31,7 +31,7 @@ static void die_kernel_fault(const char *msg, unsigned long addr,
|
|||||||
|
|
||||||
bust_spinlocks(0);
|
bust_spinlocks(0);
|
||||||
die(regs, "Oops");
|
die(regs, "Oops");
|
||||||
do_exit(SIGKILL);
|
make_task_dead(SIGKILL);
|
||||||
}
|
}
|
||||||
|
|
||||||
static inline void no_context(struct pt_regs *regs, unsigned long addr)
|
static inline void no_context(struct pt_regs *regs, unsigned long addr)
|
||||||
|
|||||||
@@ -4,8 +4,8 @@
|
|||||||
*
|
*
|
||||||
* Copyright IBM Corp. 1999, 2020
|
* Copyright IBM Corp. 1999, 2020
|
||||||
*/
|
*/
|
||||||
#ifndef DEBUG_H
|
#ifndef _ASM_S390_DEBUG_H
|
||||||
#define DEBUG_H
|
#define _ASM_S390_DEBUG_H
|
||||||
|
|
||||||
#include <linux/string.h>
|
#include <linux/string.h>
|
||||||
#include <linux/spinlock.h>
|
#include <linux/spinlock.h>
|
||||||
@@ -487,4 +487,4 @@ void debug_register_static(debug_info_t *id, int pages_per_area, int nr_areas);
|
|||||||
|
|
||||||
#endif /* MODULE */
|
#endif /* MODULE */
|
||||||
|
|
||||||
#endif /* DEBUG_H */
|
#endif /* _ASM_S390_DEBUG_H */
|
||||||
|
|||||||
@@ -224,5 +224,5 @@ void die(struct pt_regs *regs, const char *str)
|
|||||||
if (panic_on_oops)
|
if (panic_on_oops)
|
||||||
panic("Fatal exception: panic_on_oops");
|
panic("Fatal exception: panic_on_oops");
|
||||||
oops_exit();
|
oops_exit();
|
||||||
do_exit(SIGSEGV);
|
make_task_dead(SIGSEGV);
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -175,7 +175,7 @@ void __s390_handle_mcck(void)
|
|||||||
"malfunction (code 0x%016lx).\n", mcck.mcck_code);
|
"malfunction (code 0x%016lx).\n", mcck.mcck_code);
|
||||||
printk(KERN_EMERG "mcck: task: %s, pid: %d.\n",
|
printk(KERN_EMERG "mcck: task: %s, pid: %d.\n",
|
||||||
current->comm, current->pid);
|
current->comm, current->pid);
|
||||||
do_exit(SIGSEGV);
|
make_task_dead(SIGSEGV);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
@@ -80,6 +80,7 @@ SECTIONS
|
|||||||
_end_amode31_refs = .;
|
_end_amode31_refs = .;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
. = ALIGN(PAGE_SIZE);
|
||||||
_edata = .; /* End of data section */
|
_edata = .; /* End of data section */
|
||||||
|
|
||||||
/* will be freed after init */
|
/* will be freed after init */
|
||||||
@@ -194,6 +195,7 @@ SECTIONS
|
|||||||
|
|
||||||
BSS_SECTION(PAGE_SIZE, 4 * PAGE_SIZE, PAGE_SIZE)
|
BSS_SECTION(PAGE_SIZE, 4 * PAGE_SIZE, PAGE_SIZE)
|
||||||
|
|
||||||
|
. = ALIGN(PAGE_SIZE);
|
||||||
_end = . ;
|
_end = . ;
|
||||||
|
|
||||||
/*
|
/*
|
||||||
|
|||||||
@@ -81,8 +81,9 @@ static int sca_inject_ext_call(struct kvm_vcpu *vcpu, int src_id)
|
|||||||
struct esca_block *sca = vcpu->kvm->arch.sca;
|
struct esca_block *sca = vcpu->kvm->arch.sca;
|
||||||
union esca_sigp_ctrl *sigp_ctrl =
|
union esca_sigp_ctrl *sigp_ctrl =
|
||||||
&(sca->cpu[vcpu->vcpu_id].sigp_ctrl);
|
&(sca->cpu[vcpu->vcpu_id].sigp_ctrl);
|
||||||
union esca_sigp_ctrl new_val = {0}, old_val = *sigp_ctrl;
|
union esca_sigp_ctrl new_val = {0}, old_val;
|
||||||
|
|
||||||
|
old_val = READ_ONCE(*sigp_ctrl);
|
||||||
new_val.scn = src_id;
|
new_val.scn = src_id;
|
||||||
new_val.c = 1;
|
new_val.c = 1;
|
||||||
old_val.c = 0;
|
old_val.c = 0;
|
||||||
@@ -93,8 +94,9 @@ static int sca_inject_ext_call(struct kvm_vcpu *vcpu, int src_id)
|
|||||||
struct bsca_block *sca = vcpu->kvm->arch.sca;
|
struct bsca_block *sca = vcpu->kvm->arch.sca;
|
||||||
union bsca_sigp_ctrl *sigp_ctrl =
|
union bsca_sigp_ctrl *sigp_ctrl =
|
||||||
&(sca->cpu[vcpu->vcpu_id].sigp_ctrl);
|
&(sca->cpu[vcpu->vcpu_id].sigp_ctrl);
|
||||||
union bsca_sigp_ctrl new_val = {0}, old_val = *sigp_ctrl;
|
union bsca_sigp_ctrl new_val = {0}, old_val;
|
||||||
|
|
||||||
|
old_val = READ_ONCE(*sigp_ctrl);
|
||||||
new_val.scn = src_id;
|
new_val.scn = src_id;
|
||||||
new_val.c = 1;
|
new_val.c = 1;
|
||||||
old_val.c = 0;
|
old_val.c = 0;
|
||||||
@@ -124,16 +126,18 @@ static void sca_clear_ext_call(struct kvm_vcpu *vcpu)
|
|||||||
struct esca_block *sca = vcpu->kvm->arch.sca;
|
struct esca_block *sca = vcpu->kvm->arch.sca;
|
||||||
union esca_sigp_ctrl *sigp_ctrl =
|
union esca_sigp_ctrl *sigp_ctrl =
|
||||||
&(sca->cpu[vcpu->vcpu_id].sigp_ctrl);
|
&(sca->cpu[vcpu->vcpu_id].sigp_ctrl);
|
||||||
union esca_sigp_ctrl old = *sigp_ctrl;
|
union esca_sigp_ctrl old;
|
||||||
|
|
||||||
|
old = READ_ONCE(*sigp_ctrl);
|
||||||
expect = old.value;
|
expect = old.value;
|
||||||
rc = cmpxchg(&sigp_ctrl->value, old.value, 0);
|
rc = cmpxchg(&sigp_ctrl->value, old.value, 0);
|
||||||
} else {
|
} else {
|
||||||
struct bsca_block *sca = vcpu->kvm->arch.sca;
|
struct bsca_block *sca = vcpu->kvm->arch.sca;
|
||||||
union bsca_sigp_ctrl *sigp_ctrl =
|
union bsca_sigp_ctrl *sigp_ctrl =
|
||||||
&(sca->cpu[vcpu->vcpu_id].sigp_ctrl);
|
&(sca->cpu[vcpu->vcpu_id].sigp_ctrl);
|
||||||
union bsca_sigp_ctrl old = *sigp_ctrl;
|
union bsca_sigp_ctrl old;
|
||||||
|
|
||||||
|
old = READ_ONCE(*sigp_ctrl);
|
||||||
expect = old.value;
|
expect = old.value;
|
||||||
rc = cmpxchg(&sigp_ctrl->value, old.value, 0);
|
rc = cmpxchg(&sigp_ctrl->value, old.value, 0);
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -57,7 +57,7 @@ void die(const char *str, struct pt_regs *regs, long err)
|
|||||||
if (panic_on_oops)
|
if (panic_on_oops)
|
||||||
panic("Fatal exception");
|
panic("Fatal exception");
|
||||||
|
|
||||||
do_exit(SIGSEGV);
|
make_task_dead(SIGSEGV);
|
||||||
}
|
}
|
||||||
|
|
||||||
void die_if_kernel(const char *str, struct pt_regs *regs, long err)
|
void die_if_kernel(const char *str, struct pt_regs *regs, long err)
|
||||||
|
|||||||
@@ -86,9 +86,7 @@ void __noreturn die_if_kernel(char *str, struct pt_regs *regs)
|
|||||||
}
|
}
|
||||||
printk("Instruction DUMP:");
|
printk("Instruction DUMP:");
|
||||||
instruction_dump ((unsigned long *) regs->pc);
|
instruction_dump ((unsigned long *) regs->pc);
|
||||||
if(regs->psr & PSR_PS)
|
make_task_dead((regs->psr & PSR_PS) ? SIGKILL : SIGSEGV);
|
||||||
do_exit(SIGKILL);
|
|
||||||
do_exit(SIGSEGV);
|
|
||||||
}
|
}
|
||||||
|
|
||||||
void do_hw_interrupt(struct pt_regs *regs, unsigned long type)
|
void do_hw_interrupt(struct pt_regs *regs, unsigned long type)
|
||||||
|
|||||||
@@ -2559,9 +2559,7 @@ void __noreturn die_if_kernel(char *str, struct pt_regs *regs)
|
|||||||
}
|
}
|
||||||
if (panic_on_oops)
|
if (panic_on_oops)
|
||||||
panic("Fatal exception");
|
panic("Fatal exception");
|
||||||
if (regs->tstate & TSTATE_PRIV)
|
make_task_dead((regs->tstate & TSTATE_PRIV)? SIGKILL : SIGSEGV);
|
||||||
do_exit(SIGKILL);
|
|
||||||
do_exit(SIGSEGV);
|
|
||||||
}
|
}
|
||||||
EXPORT_SYMBOL(die_if_kernel);
|
EXPORT_SYMBOL(die_if_kernel);
|
||||||
|
|
||||||
|
|||||||
@@ -1239,14 +1239,14 @@ SYM_CODE_START(asm_exc_nmi)
 SYM_CODE_END(asm_exc_nmi)

 .pushsection .text, "ax"
-SYM_CODE_START(rewind_stack_do_exit)
+SYM_CODE_START(rewind_stack_and_make_dead)
 /* Prevent any naive code from trying to unwind to our caller. */
 xorl %ebp, %ebp

 movl PER_CPU_VAR(cpu_current_top_of_stack), %esi
 leal -TOP_OF_KERNEL_STACK_PADDING-PTREGS_SIZE(%esi), %esp

-call do_exit
+call make_task_dead
 1: jmp 1b
-SYM_CODE_END(rewind_stack_do_exit)
+SYM_CODE_END(rewind_stack_and_make_dead)
 .popsection
@@ -1487,7 +1487,7 @@ SYM_CODE_END(ignore_sysret)
 #endif

 .pushsection .text, "ax"
-SYM_CODE_START(rewind_stack_do_exit)
+SYM_CODE_START(rewind_stack_and_make_dead)
 UNWIND_HINT_FUNC
 /* Prevent any naive code from trying to unwind to our caller. */
 xorl %ebp, %ebp
@@ -1496,6 +1496,6 @@ SYM_CODE_START(rewind_stack_do_exit)
 leaq -PTREGS_SIZE(%rax), %rsp
 UNWIND_HINT_REGS

-call do_exit
-SYM_CODE_END(rewind_stack_do_exit)
+call make_task_dead
+SYM_CODE_END(rewind_stack_and_make_dead)
 .popsection
@@ -976,7 +976,7 @@ static int __init amd_core_pmu_init(void)
 * numbered counter following it.
 */
 for (i = 0; i < x86_pmu.num_counters - 1; i += 2)
-even_ctr_mask |= 1 << i;
+even_ctr_mask |= BIT_ULL(i);

 pair_constraint = (struct event_constraint)
 __EVENT_CONSTRAINT(0, even_ctr_mask, 0,
@@ -1829,6 +1829,7 @@ static const struct x86_cpu_id intel_uncore_match[] __initconst = {
 X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, &adl_uncore_init),
 X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, &adl_uncore_init),
 X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, &spr_uncore_init),
+X86_MATCH_INTEL_FAM6_MODEL(EMERALDRAPIDS_X, &spr_uncore_init),
 X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_D, &snr_uncore_init),
 {},
 };
@@ -69,6 +69,7 @@ static bool test_intel(int idx, void *data)
 case INTEL_FAM6_BROADWELL_G:
 case INTEL_FAM6_BROADWELL_X:
 case INTEL_FAM6_SAPPHIRERAPIDS_X:
+case INTEL_FAM6_EMERALDRAPIDS_X:

 case INTEL_FAM6_ATOM_SILVERMONT:
 case INTEL_FAM6_ATOM_SILVERMONT_D:
@@ -79,6 +79,21 @@ void acpi_processor_power_init_bm_check(struct acpi_processor_flags *flags,
 */
 flags->bm_control = 0;
 }
+if (c->x86_vendor == X86_VENDOR_AMD && c->x86 >= 0x17) {
+/*
+* For all AMD Zen or newer CPUs that support C3, caches
+* should not be flushed by software while entering C3
+* type state. Set bm->check to 1 so that kernel doesn't
+* need to execute cache flush operation.
+*/
+flags->bm_check = 1;
+/*
+* In current AMD C state implementation ARB_DIS is no longer
+* used. So set bm_control to zero to indicate ARB_DIS is not
+* required while entering C3 type state.
+*/
+flags->bm_control = 0;
+}
 }
 EXPORT_SYMBOL(acpi_processor_power_init_bm_check);

@@ -351,7 +351,7 @@ unsigned long oops_begin(void)
 }
 NOKPROBE_SYMBOL(oops_begin);

-void __noreturn rewind_stack_do_exit(int signr);
+void __noreturn rewind_stack_and_make_dead(int signr);

 void oops_end(unsigned long flags, struct pt_regs *regs, int signr)
 {
@@ -386,7 +386,7 @@ void oops_end(unsigned long flags, struct pt_regs *regs, int signr)
 * reuse the task stack and that existing poisons are invalid.
 */
 kasan_unpoison_task_stack(current);
-rewind_stack_do_exit(signr);
+rewind_stack_and_make_dead(signr);
 }
 NOKPROBE_SYMBOL(oops_end);

@@ -114,6 +114,7 @@ static void make_8259A_irq(unsigned int irq)
 disable_irq_nosync(irq);
 io_apic_irqs &= ~(1<<irq);
 irq_set_chip_and_handler(irq, &i8259A_chip, handle_level_irq);
+irq_set_status_flags(irq, IRQ_LEVEL);
 enable_irq(irq);
 lapic_assign_legacy_vector(irq, true);
 }
@@ -65,8 +65,10 @@ void __init init_ISA_irqs(void)

 legacy_pic->init(0);

-for (i = 0; i < nr_legacy_irqs(); i++)
+for (i = 0; i < nr_legacy_irqs(); i++) {
 irq_set_chip_and_handler(i, chip, handle_level_irq);
+irq_set_status_flags(i, IRQ_LEVEL);
+}
 }

 void __init init_IRQ(void)
@@ -465,11 +465,24 @@ static int has_svm(void)
 return 1;
 }

+void __svm_write_tsc_multiplier(u64 multiplier)
+{
+preempt_disable();
+
+if (multiplier == __this_cpu_read(current_tsc_ratio))
+goto out;
+
+wrmsrl(MSR_AMD64_TSC_RATIO, multiplier);
+__this_cpu_write(current_tsc_ratio, multiplier);
+out:
+preempt_enable();
+}
+
 static void svm_hardware_disable(void)
 {
 /* Make sure we clean up behind us */
 if (static_cpu_has(X86_FEATURE_TSCRATEMSR))
-wrmsrl(MSR_AMD64_TSC_RATIO, TSC_RATIO_DEFAULT);
+__svm_write_tsc_multiplier(TSC_RATIO_DEFAULT);

 cpu_svm_disable();

@@ -511,8 +524,11 @@ static int svm_hardware_enable(void)
 wrmsrl(MSR_VM_HSAVE_PA, __sme_page_pa(sd->save_area));

 if (static_cpu_has(X86_FEATURE_TSCRATEMSR)) {
-wrmsrl(MSR_AMD64_TSC_RATIO, TSC_RATIO_DEFAULT);
-__this_cpu_write(current_tsc_ratio, TSC_RATIO_DEFAULT);
+/*
+* Set the default value, even if we don't use TSC scaling
+* to avoid having stale value in the msr
+*/
+__svm_write_tsc_multiplier(TSC_RATIO_DEFAULT);
 }


@@ -1125,9 +1141,10 @@ static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)

 static void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier)
 {
-wrmsrl(MSR_AMD64_TSC_RATIO, multiplier);
+__svm_write_tsc_multiplier(multiplier);
 }

+
 /* Evaluate instruction intercepts that depend on guest CPUID features. */
 static void svm_recalc_instruction_intercepts(struct kvm_vcpu *vcpu,
 struct vcpu_svm *svm)
@@ -1451,13 +1468,8 @@ static void svm_prepare_guest_switch(struct kvm_vcpu *vcpu)
 vmsave(__sme_page_pa(sd->save_area));
 }

-if (static_cpu_has(X86_FEATURE_TSCRATEMSR)) {
-u64 tsc_ratio = vcpu->arch.tsc_scaling_ratio;
-if (tsc_ratio != __this_cpu_read(current_tsc_ratio)) {
-__this_cpu_write(current_tsc_ratio, tsc_ratio);
-wrmsrl(MSR_AMD64_TSC_RATIO, tsc_ratio);
-}
-}
+if (static_cpu_has(X86_FEATURE_TSCRATEMSR))
+__svm_write_tsc_multiplier(vcpu->arch.tsc_scaling_ratio);

 if (likely(tsc_aux_uret_slot >= 0))
 kvm_set_user_return_msr(tsc_aux_uret_slot, svm->tsc_aux, -1ull);
@@ -487,6 +487,7 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
 int nested_svm_exit_special(struct vcpu_svm *svm);
 void nested_load_control_from_vmcb12(struct vcpu_svm *svm,
 struct vmcb_control_area *control);
+void __svm_write_tsc_multiplier(u64 multiplier);
 void nested_sync_control_from_vmcb02(struct vcpu_svm *svm);
 void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm);
 void svm_switch_vmcb(struct vcpu_svm *svm, struct kvm_vmcb_info *target_vmcb);
@@ -3361,9 +3361,6 @@ static u32 vmx_segment_access_rights(struct kvm_segment *var)
 {
 u32 ar;

-if (var->unusable || !var->present)
-ar = 1 << 16;
-else {
 ar = var->type & 15;
 ar |= (var->s & 1) << 4;
 ar |= (var->dpl & 3) << 5;
@@ -3372,7 +3369,7 @@ static u32 vmx_segment_access_rights(struct kvm_segment *var)
 ar |= (var->l & 1) << 13;
 ar |= (var->db & 1) << 14;
 ar |= (var->g & 1) << 15;
-}
+ar |= (var->unusable || !var->present) << 16;

 return ar;
 }
@@ -552,5 +552,5 @@ void die(const char * str, struct pt_regs * regs, long err)
 if (panic_on_oops)
 panic("Fatal exception");

-do_exit(err);
+make_task_dead(err);
 }
@@ -700,14 +700,10 @@ static inline bool should_fail_request(struct block_device *part,
 static inline bool bio_check_ro(struct bio *bio)
 {
 if (op_is_write(bio_op(bio)) && bdev_read_only(bio->bi_bdev)) {
-char b[BDEVNAME_SIZE];
-
 if (op_is_flush(bio->bi_opf) && !bio_sectors(bio))
 return false;
-WARN_ONCE(1,
-"Trying to write to read-only block-device %s (partno %d)\n",
-bio_devname(bio, b), bio->bi_bdev->bd_partno);
+pr_warn("Trying to write to read-only block-device %pg\n",
+bio->bi_bdev);
 /* Older lvm-tools actually trigger this */
 return false;
 }
@@ -1055,26 +1055,32 @@ struct fwnode_handle *
 fwnode_graph_get_next_endpoint(const struct fwnode_handle *fwnode,
 struct fwnode_handle *prev)
 {
+struct fwnode_handle *ep, *port_parent = NULL;
 const struct fwnode_handle *parent;
-struct fwnode_handle *ep;

 /*
 * If this function is in a loop and the previous iteration returned
 * an endpoint from fwnode->secondary, then we need to use the secondary
 * as parent rather than @fwnode.
 */
-if (prev)
-parent = fwnode_graph_get_port_parent(prev);
-else
+if (prev) {
+port_parent = fwnode_graph_get_port_parent(prev);
+parent = port_parent;
+} else {
 parent = fwnode;
+}
 if (IS_ERR_OR_NULL(parent))
 return NULL;

 ep = fwnode_call_ptr_op(parent, graph_get_next_endpoint, prev);
 if (ep)
-return ep;
+goto out_put_port_parent;

-return fwnode_graph_get_next_endpoint(parent->secondary, NULL);
+ep = fwnode_graph_get_next_endpoint(parent->secondary, NULL);
+
+out_put_port_parent:
+fwnode_handle_put(port_parent);
+return ep;
 }
 EXPORT_SYMBOL_GPL(fwnode_graph_get_next_endpoint);

@@ -146,7 +146,7 @@ static int __init test_async_probe_init(void)
 calltime = ktime_get();
 for_each_online_cpu(cpu) {
 nid = cpu_to_node(cpu);
-pdev = &sync_dev[sync_id];
+pdev = &async_dev[async_id];

 *pdev = test_platform_device_register_node("test_async_driver",
 async_id,
@@ -445,7 +445,7 @@ static int __init armada37xx_cpufreq_driver_init(void)
 return -ENODEV;
 }

-clk = clk_get(cpu_dev, 0);
+clk = clk_get(cpu_dev, NULL);
 if (IS_ERR(clk)) {
 dev_err(cpu_dev, "Cannot get clock for CPU0\n");
 return PTR_ERR(clk);
@@ -133,6 +133,7 @@ static const struct of_device_id blocklist[] __initconst = {
 { .compatible = "nvidia,tegra30", },
 { .compatible = "nvidia,tegra124", },
 { .compatible = "nvidia,tegra210", },
+{ .compatible = "nvidia,tegra234", },

 { .compatible = "qcom,apq8096", },
 { .compatible = "qcom,msm8996", },
@@ -143,6 +144,7 @@ static const struct of_device_id blocklist[] __initconst = {
 { .compatible = "qcom,sc8180x", },
 { .compatible = "qcom,sdm845", },
 { .compatible = "qcom,sm6350", },
+{ .compatible = "qcom,sm6375", },
 { .compatible = "qcom,sm8150", },
 { .compatible = "qcom,sm8250", },
 { .compatible = "qcom,sm8350", },
@@ -388,6 +388,15 @@ static void free_policy_dbs_info(struct policy_dbs_info *policy_dbs,
 gov->free(policy_dbs);
 }

+static void cpufreq_dbs_data_release(struct kobject *kobj)
+{
+struct dbs_data *dbs_data = to_dbs_data(to_gov_attr_set(kobj));
+struct dbs_governor *gov = dbs_data->gov;
+
+gov->exit(dbs_data);
+kfree(dbs_data);
+}
+
 int cpufreq_dbs_governor_init(struct cpufreq_policy *policy)
 {
 struct dbs_governor *gov = dbs_governor_of(policy);
@@ -425,6 +434,7 @@ int cpufreq_dbs_governor_init(struct cpufreq_policy *policy)
 goto free_policy_dbs_info;
 }

+dbs_data->gov = gov;
 gov_attr_set_init(&dbs_data->attr_set, &policy_dbs->list);

 ret = gov->init(dbs_data);
@@ -447,6 +457,7 @@ int cpufreq_dbs_governor_init(struct cpufreq_policy *policy)
 policy->governor_data = policy_dbs;

 gov->kobj_type.sysfs_ops = &governor_sysfs_ops;
+gov->kobj_type.release = cpufreq_dbs_data_release;
 ret = kobject_init_and_add(&dbs_data->attr_set.kobj, &gov->kobj_type,
 get_governor_parent_kobj(policy),
 "%s", gov->gov.name);
@@ -488,14 +499,9 @@ void cpufreq_dbs_governor_exit(struct cpufreq_policy *policy)

 policy->governor_data = NULL;

-if (!count) {
-if (!have_governor_per_policy())
+if (!count && !have_governor_per_policy())
 gov->gdbs_data = NULL;
-
-gov->exit(dbs_data);
-kfree(dbs_data);
-}

 free_policy_dbs_info(policy_dbs, gov);

 mutex_unlock(&gov_dbs_data_mutex);
@@ -37,6 +37,7 @@ enum {OD_NORMAL_SAMPLE, OD_SUB_SAMPLE};
 /* Governor demand based switching data (per-policy or global). */
 struct dbs_data {
 struct gov_attr_set attr_set;
+struct dbs_governor *gov;
 void *tuners;
 unsigned int ignore_nice_load;
 unsigned int sampling_rate;
@@ -8,11 +8,6 @@

 #include "cpufreq_governor.h"

-static inline struct gov_attr_set *to_gov_attr_set(struct kobject *kobj)
-{
-return container_of(kobj, struct gov_attr_set, kobj);
-}
-
 static inline struct governor_attr *to_gov_attr(struct attribute *attr)
 {
 return container_of(attr, struct governor_attr, attr);
@@ -451,7 +451,8 @@ static int dma_chan_get(struct dma_chan *chan)
 /* The channel is already in use, update client count */
 if (chan->client_count) {
 __module_get(owner);
-goto out;
+chan->client_count++;
+return 0;
 }

 if (!try_module_get(owner))
@@ -470,11 +471,11 @@ static int dma_chan_get(struct dma_chan *chan)
 goto err_out;
 }

+chan->client_count++;
+
 if (!dma_has_cap(DMA_PRIVATE, chan->device->cap_mask))
 balance_ref_count(chan);

-out:
-chan->client_count++;
 return 0;

 err_out:
@@ -71,12 +71,13 @@ static int pt_core_execute_cmd(struct ptdma_desc *desc, struct pt_cmd_queue *cmd
 bool soc = FIELD_GET(DWORD0_SOC, desc->dw0);
 u8 *q_desc = (u8 *)&cmd_q->qbase[cmd_q->qidx];
 u32 tail;
+unsigned long flags;

 if (soc) {
 desc->dw0 |= FIELD_PREP(DWORD0_IOC, desc->dw0);
 desc->dw0 &= ~DWORD0_SOC;
 }
-mutex_lock(&cmd_q->q_mutex);
+spin_lock_irqsave(&cmd_q->q_lock, flags);

 /* Copy 32-byte command descriptor to hw queue. */
 memcpy(q_desc, desc, 32);
@@ -91,7 +92,7 @@ static int pt_core_execute_cmd(struct ptdma_desc *desc, struct pt_cmd_queue *cmd

 /* Turn the queue back on using our cached control register */
 pt_start_queue(cmd_q);
-mutex_unlock(&cmd_q->q_mutex);
+spin_unlock_irqrestore(&cmd_q->q_lock, flags);

 return 0;
 }
@@ -197,7 +198,7 @@ int pt_core_init(struct pt_device *pt)

 cmd_q->pt = pt;
 cmd_q->dma_pool = dma_pool;
-mutex_init(&cmd_q->q_mutex);
+spin_lock_init(&cmd_q->q_lock);

 /* Page alignment satisfies our needs for N <= 128 */
 cmd_q->qsize = Q_SIZE(Q_DESC_SIZE);
@@ -196,7 +196,7 @@ struct pt_cmd_queue {
 struct ptdma_desc *qbase;

 /* Aligned queue start address (per requirement) */
-struct mutex q_mutex ____cacheline_aligned;
+spinlock_t q_lock ____cacheline_aligned;
 unsigned int qidx;

 unsigned int qsize;
@@ -760,11 +760,12 @@ static void udma_decrement_byte_counters(struct udma_chan *uc, u32 val)
 if (uc->desc->dir == DMA_DEV_TO_MEM) {
 udma_rchanrt_write(uc, UDMA_CHAN_RT_BCNT_REG, val);
 udma_rchanrt_write(uc, UDMA_CHAN_RT_SBCNT_REG, val);
+if (uc->config.ep_type != PSIL_EP_NATIVE)
 udma_rchanrt_write(uc, UDMA_CHAN_RT_PEER_BCNT_REG, val);
 } else {
 udma_tchanrt_write(uc, UDMA_CHAN_RT_BCNT_REG, val);
 udma_tchanrt_write(uc, UDMA_CHAN_RT_SBCNT_REG, val);
-if (!uc->bchan)
+if (!uc->bchan && uc->config.ep_type != PSIL_EP_NATIVE)
 udma_tchanrt_write(uc, UDMA_CHAN_RT_PEER_BCNT_REG, val);
 }
 }
@@ -3138,9 +3138,11 @@ static int xilinx_dma_probe(struct platform_device *pdev)
 /* Initialize the channels */
 for_each_child_of_node(node, child) {
 err = xilinx_dma_child_probe(xdev, child);
-if (err < 0)
+if (err < 0) {
+of_node_put(child);
 goto error;
 }
+}

 if (xdev->dma_config->dmatype == XDMA_TYPE_VDMA) {
 for (i = 0; i < xdev->dma_config->max_channels; i++)
@@ -34,6 +34,9 @@
 static DEFINE_MUTEX(device_ctls_mutex);
 static LIST_HEAD(edac_device_list);

+/* Default workqueue processing interval on this instance, in msecs */
+#define DEFAULT_POLL_INTERVAL 1000
+
 #ifdef CONFIG_EDAC_DEBUG
 static void edac_device_dump_device(struct edac_device_ctl_info *edac_dev)
 {
@@ -366,7 +369,7 @@ static void edac_device_workq_function(struct work_struct *work_req)
 * whole one second to save timers firing all over the period
 * between integral seconds
 */
-if (edac_dev->poll_msec == 1000)
+if (edac_dev->poll_msec == DEFAULT_POLL_INTERVAL)
 edac_queue_work(&edac_dev->work, round_jiffies_relative(edac_dev->delay));
 else
 edac_queue_work(&edac_dev->work, edac_dev->delay);
@@ -396,7 +399,7 @@ static void edac_device_workq_setup(struct edac_device_ctl_info *edac_dev,
 * timers firing on sub-second basis, while they are happy
 * to fire together on the 1 second exactly
 */
-if (edac_dev->poll_msec == 1000)
+if (edac_dev->poll_msec == DEFAULT_POLL_INTERVAL)
 edac_queue_work(&edac_dev->work, round_jiffies_relative(edac_dev->delay));
 else
 edac_queue_work(&edac_dev->work, edac_dev->delay);
@@ -430,7 +433,7 @@ void edac_device_reset_delay_period(struct edac_device_ctl_info *edac_dev,
 edac_dev->delay = msecs_to_jiffies(msec);

 /* See comment in edac_device_workq_setup() above */
-if (edac_dev->poll_msec == 1000)
+if (edac_dev->poll_msec == DEFAULT_POLL_INTERVAL)
 edac_mod_work(&edac_dev->work, round_jiffies_relative(edac_dev->delay));
 else
 edac_mod_work(&edac_dev->work, edac_dev->delay);
@@ -472,11 +475,7 @@ int edac_device_add_device(struct edac_device_ctl_info *edac_dev)
 /* This instance is NOW RUNNING */
 edac_dev->op_state = OP_RUNNING_POLL;

-/*
-* enable workq processing on this instance,
-* default = 1000 msec
-*/
-edac_device_workq_setup(edac_dev, 1000);
+edac_device_workq_setup(edac_dev, edac_dev->poll_msec ?: DEFAULT_POLL_INTERVAL);
 } else {
 edac_dev->op_state = OP_RUNNING_INTERRUPT;
 }
@@ -174,8 +174,10 @@ static int highbank_mc_probe(struct platform_device *pdev)
 drvdata = mci->pvt_info;
 platform_set_drvdata(pdev, mci);

-if (!devres_open_group(&pdev->dev, NULL, GFP_KERNEL))
-return -ENOMEM;
+if (!devres_open_group(&pdev->dev, NULL, GFP_KERNEL)) {
+res = -ENOMEM;
+goto free;
+}

 r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 if (!r) {
@@ -243,6 +245,7 @@ err2:
 edac_mc_del_mc(&pdev->dev);
 err:
 devres_release_group(&pdev->dev, NULL);
+free:
 edac_mc_free(mci);
 return res;
 }
@@ -252,7 +252,7 @@ clear:
 static int
 dump_syn_reg(struct edac_device_ctl_info *edev_ctl, int err_type, u32 bank)
 {
-struct llcc_drv_data *drv = edev_ctl->pvt_info;
+struct llcc_drv_data *drv = edev_ctl->dev->platform_data;
 int ret;

 ret = dump_syn_reg_values(drv, bank, err_type);
@@ -289,7 +289,7 @@ static irqreturn_t
 llcc_ecc_irq_handler(int irq, void *edev_ctl)
 {
 struct edac_device_ctl_info *edac_dev_ctl = edev_ctl;
-struct llcc_drv_data *drv = edac_dev_ctl->pvt_info;
+struct llcc_drv_data *drv = edac_dev_ctl->dev->platform_data;
 irqreturn_t irq_rc = IRQ_NONE;
 u32 drp_error, trp_error, i;
 int ret;
@@ -358,7 +358,6 @@ static int qcom_llcc_edac_probe(struct platform_device *pdev)
 edev_ctl->dev_name = dev_name(dev);
 edev_ctl->ctl_name = "llcc";
 edev_ctl->panic_on_ue = LLCC_ERP_PANIC_ON_UE;
-edev_ctl->pvt_info = llcc_driv_data;

 rc = edac_device_add_device(edev_ctl);
 if (rc)
@@ -58,10 +58,11 @@ u32 shmem_read_header(struct scmi_shared_mem __iomem *shmem)
 void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem,
 struct scmi_xfer *xfer)
 {
+size_t len = ioread32(&shmem->length);
+
 xfer->hdr.status = ioread32(shmem->msg_payload);
 /* Skip the length of header and status in shmem area i.e 8 bytes */
-xfer->rx.len = min_t(size_t, xfer->rx.len,
-ioread32(&shmem->length) - 8);
+xfer->rx.len = min_t(size_t, xfer->rx.len, len > 8 ? len - 8 : 0);

 /* Take a copy to the rx buffer.. */
 memcpy_fromio(xfer->rx.buf, shmem->msg_payload + 4, xfer->rx.len);
@@ -70,8 +71,10 @@ void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem,
 void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
 size_t max_len, struct scmi_xfer *xfer)
 {
+size_t len = ioread32(&shmem->length);
+
 /* Skip only the length of header in shmem area i.e 4 bytes */
-xfer->rx.len = min_t(size_t, max_len, ioread32(&shmem->length) - 4);
+xfer->rx.len = min_t(size_t, max_len, len > 4 ? len - 4 : 0);

 /* Take a copy to the rx buffer.. */
 memcpy_fromio(xfer->rx.buf, shmem->msg_payload, xfer->rx.len);
@@ -93,7 +93,12 @@ static int coreboot_table_populate(struct device *dev, void *ptr)
 for (i = 0; i < header->table_entries; i++) {
 entry = ptr_entry;

-device = kzalloc(sizeof(struct device) + entry->size, GFP_KERNEL);
+if (entry->size < sizeof(*entry)) {
+dev_warn(dev, "coreboot table entry too small!\n");
+return -EINVAL;
+}
+
+device = kzalloc(sizeof(device->dev) + entry->size, GFP_KERNEL);
 if (!device)
 return -ENOMEM;

@@ -101,7 +106,7 @@ static int coreboot_table_populate(struct device *dev, void *ptr)
 device->dev.parent = dev;
 device->dev.bus = &coreboot_bus_type;
 device->dev.release = coreboot_device_release;
-memcpy(&device->entry, ptr_entry, entry->size);
+memcpy(device->raw, ptr_entry, entry->size);

 ret = device_register(&device->dev);
 if (ret) {
@@ -66,6 +66,7 @@ struct coreboot_device {
 struct coreboot_table_entry entry;
 struct lb_cbmem_ref cbmem_ref;
 struct lb_framebuffer framebuffer;
+DECLARE_FLEX_ARRAY(u8, raw);
 };
 };

@@ -35,19 +35,19 @@ static int pt_gpio_request(struct gpio_chip *gc, unsigned offset)

 dev_dbg(gc->parent, "pt_gpio_request offset=%x\n", offset);

-spin_lock_irqsave(&gc->bgpio_lock, flags);
+raw_spin_lock_irqsave(&gc->bgpio_lock, flags);

 using_pins = readl(pt_gpio->reg_base + PT_SYNC_REG);
 if (using_pins & BIT(offset)) {
 dev_warn(gc->parent, "PT GPIO pin %x reconfigured\n",
 offset);
-spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
 return -EINVAL;
 }

 writel(using_pins | BIT(offset), pt_gpio->reg_base + PT_SYNC_REG);

-spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);

 return 0;
 }
@@ -58,13 +58,13 @@ static void pt_gpio_free(struct gpio_chip *gc, unsigned offset)
 unsigned long flags;
 u32 using_pins;

-spin_lock_irqsave(&gc->bgpio_lock, flags);
+raw_spin_lock_irqsave(&gc->bgpio_lock, flags);

 using_pins = readl(pt_gpio->reg_base + PT_SYNC_REG);
 using_pins &= ~BIT(offset);
 writel(using_pins, pt_gpio->reg_base + PT_SYNC_REG);

-spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);

 dev_dbg(gc->parent, "pt_gpio_free offset=%x\n", offset);
 }
@@ -92,9 +92,9 @@ brcmstb_gpio_get_active_irqs(struct brcmstb_gpio_bank *bank)
 unsigned long status;
 unsigned long flags;

-spin_lock_irqsave(&bank->gc.bgpio_lock, flags);
+raw_spin_lock_irqsave(&bank->gc.bgpio_lock, flags);
 status = __brcmstb_gpio_get_active_irqs(bank);
-spin_unlock_irqrestore(&bank->gc.bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&bank->gc.bgpio_lock, flags);

 return status;
 }
@@ -114,14 +114,14 @@ static void brcmstb_gpio_set_imask(struct brcmstb_gpio_bank *bank,
 u32 imask;
 unsigned long flags;

-spin_lock_irqsave(&gc->bgpio_lock, flags);
+raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
 imask = gc->read_reg(priv->reg_base + GIO_MASK(bank->id));
 if (enable)
 imask |= mask;
 else
 imask &= ~mask;
 gc->write_reg(priv->reg_base + GIO_MASK(bank->id), imask);
-spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
 }

 static int brcmstb_gpio_to_irq(struct gpio_chip *gc, unsigned offset)
@@ -204,7 +204,7 @@ static int brcmstb_gpio_irq_set_type(struct irq_data *d, unsigned int type)
 return -EINVAL;
 }

-spin_lock_irqsave(&bank->gc.bgpio_lock, flags);
+raw_spin_lock_irqsave(&bank->gc.bgpio_lock, flags);

 iedge_config = bank->gc.read_reg(priv->reg_base +
 GIO_EC(bank->id)) & ~mask;
@@ -220,7 +220,7 @@ static int brcmstb_gpio_irq_set_type(struct irq_data *d, unsigned int type)
 bank->gc.write_reg(priv->reg_base + GIO_LEVEL(bank->id),
 ilevel | level);

-spin_unlock_irqrestore(&bank->gc.bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&bank->gc.bgpio_lock, flags);
 return 0;
 }

@@ -41,12 +41,12 @@ static int cdns_gpio_request(struct gpio_chip *chip, unsigned int offset)
 struct cdns_gpio_chip *cgpio = gpiochip_get_data(chip);
 unsigned long flags;

-spin_lock_irqsave(&chip->bgpio_lock, flags);
+raw_spin_lock_irqsave(&chip->bgpio_lock, flags);

 iowrite32(ioread32(cgpio->regs + CDNS_GPIO_BYPASS_MODE) & ~BIT(offset),
 cgpio->regs + CDNS_GPIO_BYPASS_MODE);

-spin_unlock_irqrestore(&chip->bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&chip->bgpio_lock, flags);
 return 0;
 }

@@ -55,13 +55,13 @@ static void cdns_gpio_free(struct gpio_chip *chip, unsigned int offset)
 struct cdns_gpio_chip *cgpio = gpiochip_get_data(chip);
 unsigned long flags;

-spin_lock_irqsave(&chip->bgpio_lock, flags);
+raw_spin_lock_irqsave(&chip->bgpio_lock, flags);

 iowrite32(ioread32(cgpio->regs + CDNS_GPIO_BYPASS_MODE) |
 (BIT(offset) & cgpio->bypass_orig),
 cgpio->regs + CDNS_GPIO_BYPASS_MODE);

-spin_unlock_irqrestore(&chip->bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&chip->bgpio_lock, flags);
 }

 static void cdns_gpio_irq_mask(struct irq_data *d)
@@ -90,7 +90,7 @@ static int cdns_gpio_irq_set_type(struct irq_data *d, unsigned int type)
 u32 mask = BIT(d->hwirq);
 int ret = 0;

-spin_lock_irqsave(&chip->bgpio_lock, flags);
+raw_spin_lock_irqsave(&chip->bgpio_lock, flags);

 int_value = ioread32(cgpio->regs + CDNS_GPIO_IRQ_VALUE) & ~mask;
 int_type = ioread32(cgpio->regs + CDNS_GPIO_IRQ_TYPE) & ~mask;
@@ -115,7 +115,7 @@ static int cdns_gpio_irq_set_type(struct irq_data *d, unsigned int type)
 iowrite32(int_type, cgpio->regs + CDNS_GPIO_IRQ_TYPE);

 err_irq_type:
-spin_unlock_irqrestore(&chip->bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&chip->bgpio_lock, flags);
 return ret;
 }

@@ -242,9 +242,9 @@ static void dwapb_irq_ack(struct irq_data *d)
 u32 val = BIT(irqd_to_hwirq(d));
 unsigned long flags;

-spin_lock_irqsave(&gc->bgpio_lock, flags);
+raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
 dwapb_write(gpio, GPIO_PORTA_EOI, val);
-spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
 }

 static void dwapb_irq_mask(struct irq_data *d)
@@ -254,10 +254,10 @@ static void dwapb_irq_mask(struct irq_data *d)
 unsigned long flags;
 u32 val;

-spin_lock_irqsave(&gc->bgpio_lock, flags);
+raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
 val = dwapb_read(gpio, GPIO_INTMASK) | BIT(irqd_to_hwirq(d));
 dwapb_write(gpio, GPIO_INTMASK, val);
-spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
 }

 static void dwapb_irq_unmask(struct irq_data *d)
@@ -267,10 +267,10 @@ static void dwapb_irq_unmask(struct irq_data *d)
 unsigned long flags;
 u32 val;

-spin_lock_irqsave(&gc->bgpio_lock, flags);
+raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
 val = dwapb_read(gpio, GPIO_INTMASK) & ~BIT(irqd_to_hwirq(d));
 dwapb_write(gpio, GPIO_INTMASK, val);
-spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
 }

 static void dwapb_irq_enable(struct irq_data *d)
@@ -280,11 +280,11 @@ static void dwapb_irq_enable(struct irq_data *d)
 unsigned long flags;
 u32 val;

-spin_lock_irqsave(&gc->bgpio_lock, flags);
+raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
 val = dwapb_read(gpio, GPIO_INTEN);
 val |= BIT(irqd_to_hwirq(d));
 dwapb_write(gpio, GPIO_INTEN, val);
-spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
 }

 static void dwapb_irq_disable(struct irq_data *d)
@@ -294,11 +294,11 @@ static void dwapb_irq_disable(struct irq_data *d)
 unsigned long flags;
 u32 val;

-spin_lock_irqsave(&gc->bgpio_lock, flags);
+raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
 val = dwapb_read(gpio, GPIO_INTEN);
 val &= ~BIT(irqd_to_hwirq(d));
 dwapb_write(gpio, GPIO_INTEN, val);
-spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
 }

 static int dwapb_irq_set_type(struct irq_data *d, u32 type)
@@ -308,7 +308,7 @@ static int dwapb_irq_set_type(struct irq_data *d, u32 type)
 irq_hw_number_t bit = irqd_to_hwirq(d);
 unsigned long level, polarity, flags;

-spin_lock_irqsave(&gc->bgpio_lock, flags);
+raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
 level = dwapb_read(gpio, GPIO_INTTYPE_LEVEL);
 polarity = dwapb_read(gpio, GPIO_INT_POLARITY);

@@ -343,7 +343,7 @@ static int dwapb_irq_set_type(struct irq_data *d, u32 type)
 dwapb_write(gpio, GPIO_INTTYPE_LEVEL, level);
 if (type != IRQ_TYPE_EDGE_BOTH)
 dwapb_write(gpio, GPIO_INT_POLARITY, polarity);
-spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);

 return 0;
 }
@@ -373,7 +373,7 @@ static int dwapb_gpio_set_debounce(struct gpio_chip *gc,
 unsigned long flags, val_deb;
 unsigned long mask = BIT(offset);

-spin_lock_irqsave(&gc->bgpio_lock, flags);
+raw_spin_lock_irqsave(&gc->bgpio_lock, flags);

 val_deb = dwapb_read(gpio, GPIO_PORTA_DEBOUNCE);
 if (debounce)
@@ -382,7 +382,7 @@ static int dwapb_gpio_set_debounce(struct gpio_chip *gc,
 val_deb &= ~mask;
 dwapb_write(gpio, GPIO_PORTA_DEBOUNCE, val_deb);

-spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);

 return 0;
 }
@@ -738,7 +738,7 @@ static int dwapb_gpio_suspend(struct device *dev)
 unsigned long flags;
 int i;

-spin_lock_irqsave(&gc->bgpio_lock, flags);
+raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
 for (i = 0; i < gpio->nr_ports; i++) {
 unsigned int offset;
 unsigned int idx = gpio->ports[i].idx;
@@ -765,7 +765,7 @@ static int dwapb_gpio_suspend(struct device *dev)
 dwapb_write(gpio, GPIO_INTMASK, ~ctx->wake_en);
 }
 }
-spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);

 clk_bulk_disable_unprepare(DWAPB_NR_CLOCKS, gpio->clks);

@@ -785,7 +785,7 @@ static int dwapb_gpio_resume(struct device *dev)
 return err;
 }

-spin_lock_irqsave(&gc->bgpio_lock, flags);
+raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
 for (i = 0; i < gpio->nr_ports; i++) {
 unsigned int offset;
 unsigned int idx = gpio->ports[i].idx;
@@ -812,7 +812,7 @@ static int dwapb_gpio_resume(struct device *dev)
 dwapb_write(gpio, GPIO_PORTA_EOI, 0xffffffff);
 }
 }
-spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);

 return 0;
 }
@@ -145,7 +145,7 @@ static int grgpio_irq_set_type(struct irq_data *d, unsigned int type)
 return -EINVAL;
 }

-spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
+raw_spin_lock_irqsave(&priv->gc.bgpio_lock, flags);

 ipol = priv->gc.read_reg(priv->regs + GRGPIO_IPOL) & ~mask;
 iedge = priv->gc.read_reg(priv->regs + GRGPIO_IEDGE) & ~mask;
@@ -153,7 +153,7 @@ static int grgpio_irq_set_type(struct irq_data *d, unsigned int type)
 priv->gc.write_reg(priv->regs + GRGPIO_IPOL, ipol | pol);
 priv->gc.write_reg(priv->regs + GRGPIO_IEDGE, iedge | edge);

-spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);

 return 0;
 }
@@ -164,11 +164,11 @@ static void grgpio_irq_mask(struct irq_data *d)
 int offset = d->hwirq;
 unsigned long flags;

-spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
+raw_spin_lock_irqsave(&priv->gc.bgpio_lock, flags);

 grgpio_set_imask(priv, offset, 0);

-spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
 }

 static void grgpio_irq_unmask(struct irq_data *d)
@@ -177,11 +177,11 @@ static void grgpio_irq_unmask(struct irq_data *d)
 int offset = d->hwirq;
 unsigned long flags;

-spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
+raw_spin_lock_irqsave(&priv->gc.bgpio_lock, flags);

 grgpio_set_imask(priv, offset, 1);

-spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
 }

 static struct irq_chip grgpio_irq_chip = {
@@ -199,7 +199,7 @@ static irqreturn_t grgpio_irq_handler(int irq, void *dev)
 int i;
 int match = 0;

-spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
+raw_spin_lock_irqsave(&priv->gc.bgpio_lock, flags);

 /*
 * For each gpio line, call its interrupt handler if it its underlying
@@ -215,7 +215,7 @@ static irqreturn_t grgpio_irq_handler(int irq, void *dev)
 }
 }

-spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);

 if (!match)
 dev_warn(priv->dev, "No gpio line matched irq %d\n", irq);
@@ -247,13 +247,13 @@ static int grgpio_irq_map(struct irq_domain *d, unsigned int irq,
 dev_dbg(priv->dev, "Mapping irq %d for gpio line %d\n",
 irq, offset);

-spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
+raw_spin_lock_irqsave(&priv->gc.bgpio_lock, flags);

 /* Request underlying irq if not already requested */
 lirq->irq = irq;
 uirq = &priv->uirqs[lirq->index];
 if (uirq->refcnt == 0) {
-spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
 ret = request_irq(uirq->uirq, grgpio_irq_handler, 0,
 dev_name(priv->dev), priv);
 if (ret) {
@@ -262,11 +262,11 @@ static int grgpio_irq_map(struct irq_domain *d, unsigned int irq,
 uirq->uirq);
 return ret;
 }
-spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
+raw_spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
 }
 uirq->refcnt++;

-spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);

 /* Setup irq */
 irq_set_chip_data(irq, priv);
@@ -290,7 +290,7 @@ static void grgpio_irq_unmap(struct irq_domain *d, unsigned int irq)
 irq_set_chip_and_handler(irq, NULL, NULL);
 irq_set_chip_data(irq, NULL);

-spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
+raw_spin_lock_irqsave(&priv->gc.bgpio_lock, flags);

 /* Free underlying irq if last user unmapped */
 index = -1;
@@ -309,13 +309,13 @@ static void grgpio_irq_unmap(struct irq_domain *d, unsigned int irq)
 uirq = &priv->uirqs[lirq->index];
 uirq->refcnt--;
 if (uirq->refcnt == 0) {
-spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
 free_irq(uirq->uirq, priv);
 return;
 }
 }

-spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
 }

 static const struct irq_domain_ops grgpio_irq_domain_ops = {
@@ -65,7 +65,7 @@ static void hlwd_gpio_irqhandler(struct irq_desc *desc)
 int hwirq;
 u32 emulated_pending;

-spin_lock_irqsave(&hlwd->gpioc.bgpio_lock, flags);
+raw_spin_lock_irqsave(&hlwd->gpioc.bgpio_lock, flags);
 pending = ioread32be(hlwd->regs + HW_GPIOB_INTFLAG);
 pending &= ioread32be(hlwd->regs + HW_GPIOB_INTMASK);

@@ -93,7 +93,7 @@ static void hlwd_gpio_irqhandler(struct irq_desc *desc)
 /* Mark emulated interrupts as pending */
 pending |= rising | falling;
 }
-spin_unlock_irqrestore(&hlwd->gpioc.bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&hlwd->gpioc.bgpio_lock, flags);

 chained_irq_enter(chip, desc);

@@ -118,11 +118,11 @@ static void hlwd_gpio_irq_mask(struct irq_data *data)
 unsigned long flags;
 u32 mask;

-spin_lock_irqsave(&hlwd->gpioc.bgpio_lock, flags);
+raw_spin_lock_irqsave(&hlwd->gpioc.bgpio_lock, flags);
 mask = ioread32be(hlwd->regs + HW_GPIOB_INTMASK);
 mask &= ~BIT(data->hwirq);
 iowrite32be(mask, hlwd->regs + HW_GPIOB_INTMASK);
-spin_unlock_irqrestore(&hlwd->gpioc.bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&hlwd->gpioc.bgpio_lock, flags);
 }

 static void hlwd_gpio_irq_unmask(struct irq_data *data)
@@ -132,11 +132,11 @@ static void hlwd_gpio_irq_unmask(struct irq_data *data)
 unsigned long flags;
 u32 mask;

-spin_lock_irqsave(&hlwd->gpioc.bgpio_lock, flags);
+raw_spin_lock_irqsave(&hlwd->gpioc.bgpio_lock, flags);
 mask = ioread32be(hlwd->regs + HW_GPIOB_INTMASK);
 mask |= BIT(data->hwirq);
 iowrite32be(mask, hlwd->regs + HW_GPIOB_INTMASK);
-spin_unlock_irqrestore(&hlwd->gpioc.bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&hlwd->gpioc.bgpio_lock, flags);
 }

 static void hlwd_gpio_irq_enable(struct irq_data *data)
@@ -173,7 +173,7 @@ static int hlwd_gpio_irq_set_type(struct irq_data *data, unsigned int flow_type)
 unsigned long flags;
 u32 level;

-spin_lock_irqsave(&hlwd->gpioc.bgpio_lock, flags);
+raw_spin_lock_irqsave(&hlwd->gpioc.bgpio_lock, flags);

 hlwd->edge_emulation &= ~BIT(data->hwirq);

@@ -194,11 +194,11 @@ static int hlwd_gpio_irq_set_type(struct irq_data *data, unsigned int flow_type)
 hlwd_gpio_irq_setup_emulation(hlwd, data->hwirq, flow_type);
 break;
 default:
-spin_unlock_irqrestore(&hlwd->gpioc.bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&hlwd->gpioc.bgpio_lock, flags);
 return -EINVAL;
 }

-spin_unlock_irqrestore(&hlwd->gpioc.bgpio_lock, flags);
+raw_spin_unlock_irqrestore(&hlwd->gpioc.bgpio_lock, flags);
 return 0;
 }

--- a/drivers/gpio/gpio-idt3243x.c
+++ b/drivers/gpio/gpio-idt3243x.c
@@ -57,7 +57,7 @@ static int idt_gpio_irq_set_type(struct irq_data *d, unsigned int flow_type)
         if (sense == IRQ_TYPE_NONE || (sense & IRQ_TYPE_EDGE_BOTH))
                 return -EINVAL;
 
-        spin_lock_irqsave(&gc->bgpio_lock, flags);
+        raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
 
         ilevel = readl(ctrl->gpio + IDT_GPIO_ILEVEL);
         if (sense & IRQ_TYPE_LEVEL_HIGH)
@@ -68,7 +68,7 @@ static int idt_gpio_irq_set_type(struct irq_data *d, unsigned int flow_type)
         writel(ilevel, ctrl->gpio + IDT_GPIO_ILEVEL);
         irq_set_handler_locked(d, handle_level_irq);
 
-        spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+        raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
         return 0;
 }
 
@@ -86,12 +86,12 @@ static void idt_gpio_mask(struct irq_data *d)
         struct idt_gpio_ctrl *ctrl = gpiochip_get_data(gc);
         unsigned long flags;
 
-        spin_lock_irqsave(&gc->bgpio_lock, flags);
+        raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
 
         ctrl->mask_cache |= BIT(d->hwirq);
         writel(ctrl->mask_cache, ctrl->pic + IDT_PIC_IRQ_MASK);
 
-        spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+        raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
 }
 
 static void idt_gpio_unmask(struct irq_data *d)
@@ -100,12 +100,12 @@ static void idt_gpio_unmask(struct irq_data *d)
         struct idt_gpio_ctrl *ctrl = gpiochip_get_data(gc);
         unsigned long flags;
 
-        spin_lock_irqsave(&gc->bgpio_lock, flags);
+        raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
 
         ctrl->mask_cache &= ~BIT(d->hwirq);
         writel(ctrl->mask_cache, ctrl->pic + IDT_PIC_IRQ_MASK);
 
-        spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+        raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
 }
 
 static int idt_gpio_irq_init_hw(struct gpio_chip *gc)
--- a/drivers/gpio/gpio-ixp4xx.c
+++ b/drivers/gpio/gpio-ixp4xx.c
@@ -128,7 +128,7 @@ static int ixp4xx_gpio_irq_set_type(struct irq_data *d, unsigned int type)
                 int_reg = IXP4XX_REG_GPIT1;
         }
 
-        spin_lock_irqsave(&g->gc.bgpio_lock, flags);
+        raw_spin_lock_irqsave(&g->gc.bgpio_lock, flags);
 
         /* Clear the style for the appropriate pin */
         val = __raw_readl(g->base + int_reg);
@@ -147,7 +147,7 @@ static int ixp4xx_gpio_irq_set_type(struct irq_data *d, unsigned int type)
         val |= BIT(d->hwirq);
         __raw_writel(val, g->base + IXP4XX_REG_GPOE);
 
-        spin_unlock_irqrestore(&g->gc.bgpio_lock, flags);
+        raw_spin_unlock_irqrestore(&g->gc.bgpio_lock, flags);
 
         /* This parent only accept level high (asserted) */
         return irq_chip_set_type_parent(d, IRQ_TYPE_LEVEL_HIGH);
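Every hunk above makes the same substitution: accesses to the shared bgpio_lock in struct gpio_chip move from the spin_lock_irqsave()/spin_unlock_irqrestore() helpers to raw_spin_lock_irqsave()/raw_spin_unlock_irqrestore(), matching the lock's type change to raw_spinlock_t so it stays non-sleeping under PREEMPT_RT. Below is a minimal sketch of the resulting pattern; the function name, register offset, and iomem base are illustrative only and not taken from the patch.

/*
 * Illustrative sketch only: EXAMPLE_REG_INTMASK, the iomem base and the
 * function name are hypothetical; bgpio_lock and the raw_spin_* helpers
 * are the real interfaces the hunks above switch to.
 */
#include <linux/bits.h>
#include <linux/gpio/driver.h>
#include <linux/io.h>
#include <linux/spinlock.h>
#include <linux/types.h>

#define EXAMPLE_REG_INTMASK     0x04    /* hypothetical interrupt-mask register */

static void example_gpio_mask_hwirq(struct gpio_chip *gc, void __iomem *base,
                                    unsigned int hwirq)
{
        unsigned long flags;
        u32 mask;

        /* bgpio_lock is a raw_spinlock_t, so only the raw_spin_* API applies. */
        raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
        mask = readl(base + EXAMPLE_REG_INTMASK);
        mask &= ~BIT(hwirq);            /* clear the bit to mask the interrupt */
        writel(mask, base + EXAMPLE_REG_INTMASK);
        raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
}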
Some files were not shown because too many files have changed in this diff.