Merge 5.15.54 into android14-5.15
Changes in 5.15.54
mm/slub: add missing TID updates on slab deactivation
mm/filemap: fix UAF in find_lock_entries
Revert "selftests/bpf: Add test for bpf_timer overwriting crash"
ALSA: usb-audio: Workarounds for Behringer UMC 204/404 HD
ALSA: hda/realtek: Add quirk for Clevo L140PU
ALSA: cs46xx: Fix missing snd_card_free() call at probe error
can: bcm: use call_rcu() instead of costly synchronize_rcu()
can: grcan: grcan_probe(): remove extra of_node_get()
can: gs_usb: gs_usb_open/close(): fix memory leak
can: m_can: m_can_chip_config(): actually enable internal timestamping
can: m_can: m_can_{read_fifo,echo_tx_event}(): shift timestamp to full 32 bits
can: mcp251xfd: mcp251xfd_regmap_crc_read(): improve workaround handling for mcp2517fd
can: mcp251xfd: mcp251xfd_regmap_crc_read(): update workaround broken CRC on TBC register
bpf: Fix incorrect verifier simulation around jmp32's jeq/jne
bpf: Fix insufficient bounds propagation from adjust_scalar_min_max_vals
usbnet: fix memory leak in error case
net: rose: fix UAF bug caused by rose_t0timer_expiry
netfilter: nft_set_pipapo: release elements in clone from abort path
netfilter: nf_tables: stricter validation of element data
btrfs: rename btrfs_alloc_chunk to btrfs_create_chunk
btrfs: add additional parameters to btrfs_init_tree_ref/btrfs_init_data_ref
btrfs: fix invalid delayed ref after subvolume creation failure
btrfs: fix warning when freeing leaf after subvolume creation failure
Input: cpcap-pwrbutton - handle errors from platform_get_irq()
Input: goodix - change goodix_i2c_write() len parameter type to int
Input: goodix - add a goodix.h header file
Input: goodix - refactor reset handling
Input: goodix - try not to touch the reset-pin on x86/ACPI devices
dma-buf/poll: Get a file reference for outstanding fence callbacks
btrfs: fix deadlock between chunk allocation and chunk btree modifications
drm/i915: Disable bonding on gen12+ platforms
drm/i915/gt: Register the migrate contexts with their engines
drm/i915: Replace the unconditional clflush with drm_clflush_virt_range()
PCI/portdrv: Rename pm_iter() to pcie_port_device_iter()
PCI: pciehp: Ignore Link Down/Up caused by error-induced Hot Reset
media: ir_toy: prevent device from hanging during transmit
memory: renesas-rpc-if: Avoid unaligned bus access for HyperFlash
ath11k: add hw_param for wakeup_mhi
qed: Improve the stack space of filter_config()
platform/x86: wmi: introduce helper to convert driver to WMI driver
platform/x86: wmi: Replace read_takes_no_args with a flags field
platform/x86: wmi: Fix driver->notify() vs ->probe() race
mt76: mt7921: get rid of mt7921_mac_set_beacon_filter
mt76: mt7921: introduce mt7921_mcu_set_beacon_filter utility routine
mt76: mt7921: fix a possible race enabling/disabling runtime-pm
bpf: Stop caching subprog index in the bpf_pseudo_func insn
bpf, arm64: Use emit_addr_mov_i64() for BPF_PSEUDO_FUNC
riscv: defconfig: enable DRM_NOUVEAU
RISC-V: defconfigs: Set CONFIG_FB=y, for FB console
net/mlx5e: Check action fwd/drop flag exists also for nic flows
net/mlx5e: Split actions_match_supported() into a sub function
net/mlx5e: TC, Reject rules with drop and modify hdr action
net/mlx5e: TC, Reject rules with forward and drop actions
ASoC: rt5682: Avoid the unexpected IRQ event during going to suspend
ASoC: rt5682: Re-detect the combo jack after resuming
ASoC: rt5682: Fix deadlock on resume
netfilter: nf_tables: convert pktinfo->tprot_set to flags field
netfilter: nft_payload: support for inner header matching / mangling
netfilter: nft_payload: don't allow th access for fragments
s390/boot: allocate amode31 section in decompressor
s390/setup: use physical pointers for memblock_reserve()
s390/setup: preserve memory at OLDMEM_BASE and OLDMEM_SIZE
ibmvnic: init init_done_rc earlier
ibmvnic: clear fop when retrying probe
ibmvnic: Allow queueing resets during probe
virtio-blk: avoid preallocating big SGL for data
io_uring: ensure that fsnotify is always called
block: use bdev_get_queue() in bio.c
block: only mark bio as tracked if it really is tracked
block: fix rq-qos breakage from skipping rq_qos_done_bio()
stddef: Introduce struct_group() helper macro
media: omap3isp: Use struct_group() for memcpy() region
media: davinci: vpif: fix use-after-free on driver unbind
mt76: mt76_connac: fix MCU_CE_CMD_SET_ROC definition error
mt76: mt7921: do not always disable fw runtime-pm
cxl/port: Hold port reference until decoder release
clk: renesas: r9a07g044: Update multiplier and divider values for PLL2/3
KVM: x86/mmu: Use yield-safe TDP MMU root iter in MMU notifier unmapping
KVM: x86/mmu: Use common TDP MMU zap helper for MMU notifier unmap hook
scsi: qla2xxx: Move heartbeat handling from DPC thread to workqueue
scsi: qla2xxx: Fix laggy FC remote port session recovery
scsi: qla2xxx: edif: Replace list_for_each_safe with list_for_each_entry_safe
scsi: qla2xxx: Fix crash during module load unload test
gfs2: Fix gfs2_file_buffered_write endless loop workaround
vdpa/mlx5: Avoid processing works if workqueue was destroyed
btrfs: handle device lookup with btrfs_dev_lookup_args
btrfs: add a btrfs_get_dev_args_from_path helper
btrfs: use btrfs_get_dev_args_from_path in dev removal ioctls
btrfs: remove device item and update super block in the same transaction
drbd: add error handling support for add_disk()
drbd: Fix double free problem in drbd_create_device
drbd: fix an invalid memory access caused by incorrect use of list iterator
drm/amd/display: Set min dcfclk if pipe count is 0
drm/amd/display: Fix by adding FPU protection for dcn30_internal_validate_bw
NFSD: De-duplicate net_generic(nf->nf_net, nfsd_net_id)
NFSD: COMMIT operations must not return NFS?ERR_INVAL
riscv/mm: Add XIP_FIXUP for riscv_pfn_base
iio: accel: mma8452: use the correct logic to get mma8452_data
batman-adv: Use netif_rx().
mtd: spi-nor: Skip erase logic when SPI_NOR_NO_ERASE is set
Compiler Attributes: add __alloc_size() for better bounds checking
mm: vmalloc: introduce array allocation functions
KVM: use __vcalloc for very large allocations
btrfs: don't access possibly stale fs_info data in device_list_add
KVM: s390x: fix SCK locking
scsi: qla2xxx: Fix loss of NVMe namespaces after driver reload test
powerpc/32: Don't use lmw/stmw for saving/restoring non volatile regs
powerpc: flexible GPR range save/restore macros
powerpc/tm: Fix more userspace r13 corruption
serial: sc16is7xx: Clear RS485 bits in the shutdown
bus: mhi: core: Use correctly sized arguments for bit field
bus: mhi: Fix pm_state conversion to string
stddef: Introduce DECLARE_FLEX_ARRAY() helper
uapi/linux/stddef.h: Add include guards
ASoC: rt5682: move clk related code to rt5682_i2c_probe
ASoC: rt5682: fix an incorrect NULL check on list iterator
drm/amd/vcn: fix an error msg on vcn 3.0
KVM: Don't create VM debugfs files outside of the VM directory
tty: n_gsm: Modify CR,PF bit when config requester
tty: n_gsm: Save dlci address open status when config requester
tty: n_gsm: fix frame reception handling
ALSA: usb-audio: add mapping for MSI MPG X570S Carbon Max Wifi.
ALSA: usb-audio: add mapping for MSI MAG X570S Torpedo MAX.
tty: n_gsm: fix missing update of modem controls after DLCI open
btrfs: zoned: encapsulate inode locking for zoned relocation
btrfs: zoned: use dedicated lock for data relocation
KVM: Initialize debugfs_dentry when a VM is created to avoid NULL deref
mm/hwpoison: mf_mutex for soft offline and unpoison
mm/hwpoison: avoid the impact of hwpoison_filter() return value on mce handler
mm/memory-failure.c: fix race with changing page compound again
mm/hwpoison: fix race between hugetlb free/demotion and memory_failure_hugetlb()
tty: n_gsm: fix invalid use of MSC in advanced option
tty: n_gsm: fix sometimes uninitialized warning in gsm_dlci_modem_output()
serial: 8250_mtk: Make sure to select the right FEATURE_SEL
tty: n_gsm: fix invalid gsmtty_write_room() result
drm/amd: Refactor `amdgpu_aspm` to be evaluated per device
drm/amdgpu: vi: disable ASPM on Intel Alder Lake based systems
drm/i915: Fix a race between vma / object destruction and unbinding
drm/mediatek: Use mailbox rx_callback instead of cmdq_task_cb
drm/mediatek: Remove the pointer of struct cmdq_client
drm/mediatek: Detect CMDQ execution timeout
drm/mediatek: Add cmdq_handle in mtk_crtc
drm/mediatek: Add vblank register/unregister callback functions
Bluetooth: protect le accept and resolv lists with hdev->lock
Bluetooth: btmtksdio: fix use-after-free at btmtksdio_recv_event
io_uring: avoid io-wq -EAGAIN looping for !IOPOLL
irqchip/gic-v3: Ensure pseudo-NMIs have an ISB between ack and handling
irqchip/gic-v3: Refactor ISB + EOIR at ack time
rxrpc: Fix locking issue
dt-bindings: soc: qcom: smd-rpm: Add compatible for MSM8953 SoC
dt-bindings: soc: qcom: smd-rpm: Fix missing MSM8936 compatible
module: change to print useful messages from elf_validity_check()
module: fix [e_shstrndx].sh_size=0 OOB access
iommu/vt-d: Fix PCI bus rescan device hot add
fbdev: fbmem: Fix logo center image dx issue
fbmem: Check virtual screen sizes in fb_set_var()
fbcon: Disallow setting font bigger than screen size
fbcon: Prevent that screen size is smaller than font size
PM: runtime: Redefine pm_runtime_release_supplier()
memregion: Fix memregion_free() fallback definition
video: of_display_timing.h: include errno.h
powerpc/powernv: delay rng platform device creation until later in boot
net: dsa: qca8k: reset cpu port on MTU change
can: kvaser_usb: replace run-time checks with struct kvaser_usb_driver_info
can: kvaser_usb: kvaser_usb_leaf: fix CAN clock frequency regression
can: kvaser_usb: kvaser_usb_leaf: fix bittiming limits
xfs: remove incorrect ASSERT in xfs_rename
Revert "serial: sc16is7xx: Clear RS485 bits in the shutdown"
btrfs: fix error pointer dereference in btrfs_ioctl_rm_dev_v2()
virtio-blk: modify the value type of num in virtio_queue_rq()
btrfs: fix use of uninitialized variable at rm device ioctl
tty: n_gsm: fix encoding of command/response bit
ARM: meson: Fix refcount leak in meson_smp_prepare_cpus
pinctrl: sunxi: a83t: Fix NAND function name for some pins
ASoC: rt711: Add endianness flag in snd_soc_component_driver
ASoC: rt711-sdca: Add endianness flag in snd_soc_component_driver
ASoC: codecs: rt700/rt711/rt711-sdca: resume bus/codec in .set_jack_detect
arm64: dts: qcom: msm8994: Fix CPU6/7 reg values
arm64: dts: qcom: sdm845: use dispcc AHB clock for mdss node
ARM: mxs_defconfig: Enable the framebuffer
arm64: dts: imx8mp-evk: correct mmc pad settings
arm64: dts: imx8mp-evk: correct the uart2 pinctl value
arm64: dts: imx8mp-evk: correct gpio-led pad settings
arm64: dts: imx8mp-evk: correct vbus pad settings
arm64: dts: imx8mp-evk: correct eqos pad settings
arm64: dts: imx8mp-evk: correct I2C1 pad settings
arm64: dts: imx8mp-evk: correct I2C3 pad settings
arm64: dts: imx8mp-phyboard-pollux-rdk: correct uart pad settings
arm64: dts: imx8mp-phyboard-pollux-rdk: correct eqos pad settings
arm64: dts: imx8mp-phyboard-pollux-rdk: correct i2c2 & mmc settings
pinctrl: sunxi: sunxi_pconf_set: use correct offset
arm64: dts: qcom: msm8992-*: Fix vdd_lvs1_2-supply typo
ARM: at91: pm: use proper compatible for sama5d2's rtc
ARM: at91: pm: use proper compatibles for sam9x60's rtc and rtt
ARM: at91: pm: use proper compatibles for sama7g5's rtc and rtt
ARM: dts: at91: sam9x60ek: fix eeprom compatible and size
ARM: dts: at91: sama5d2_icp: fix eeprom compatibles
ARM: at91: fix soc detection for SAM9X60 SiPs
xsk: Clear page contiguity bit when unmapping pool
i2c: piix4: Fix a memory leak in the EFCH MMIO support
i40e: Fix dropped jumbo frames statistics
i40e: Fix VF's MAC Address change on VM
ARM: dts: stm32: use usbphyc ck_usbo_48m as USBH OHCI clock on stm32mp151
ARM: dts: stm32: add missing usbh clock and fix clk order on stm32mp15
ibmvnic: Properly dispose of all skbs during a failover.
selftests: forwarding: fix flood_unicast_test when h2 supports IFF_UNICAST_FLT
selftests: forwarding: fix learning_test when h1 supports IFF_UNICAST_FLT
selftests: forwarding: fix error message in learning_test
r8169: fix accessing unset transport header
i2c: cadence: Unregister the clk notifier in error path
dmaengine: imx-sdma: Allow imx8m for imx7 FW revs
misc: rtsx_usb: fix use of dma mapped buffer for usb bulk transfer
misc: rtsx_usb: use separate command and response buffers
misc: rtsx_usb: set return value in rsp_buf alloc err path
Revert "mm/memory-failure.c: fix race with changing page compound again"
Revert "serial: 8250_mtk: Make sure to select the right FEATURE_SEL"
dt-bindings: dma: allwinner,sun50i-a64-dma: Fix min/max typo
ida: don't use BUG_ON() for debugging
dmaengine: pl330: Fix lockdep warning about non-static key
dmaengine: lgm: Fix an error handling path in intel_ldma_probe()
dmaengine: at_xdma: handle errors of at_xdmac_alloc_desc() correctly
dmaengine: ti: Fix refcount leak in ti_dra7_xbar_route_allocate
dmaengine: qcom: bam_dma: fix runtime PM underflow
dmaengine: ti: Add missing put_device in ti_dra7_xbar_route_allocate
dmaengine: idxd: force wq context cleanup on device disable path
selftests/net: fix section name when using xdp_dummy.o
Linux 5.15.54
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I3ca4c0aa09a3bea6969c7a127d833034a123f437
--- a/Documentation/devicetree/bindings/dma/allwinner,sun50i-a64-dma.yaml
+++ b/Documentation/devicetree/bindings/dma/allwinner,sun50i-a64-dma.yaml
@@ -64,7 +64,7 @@ if:
 then:
   properties:
     clocks:
-      maxItems: 2
+      minItems: 2
 
   required:
     - clock-names
--- a/Documentation/devicetree/bindings/soc/qcom/qcom,smd-rpm.yaml
+++ b/Documentation/devicetree/bindings/soc/qcom/qcom,smd-rpm.yaml
@@ -34,6 +34,8 @@ properties:
       - qcom,rpm-ipq6018
       - qcom,rpm-msm8226
       - qcom,rpm-msm8916
+      - qcom,rpm-msm8936
+      - qcom,rpm-msm8953
       - qcom,rpm-msm8974
       - qcom,rpm-msm8976
      - qcom,rpm-msm8996
@@ -57,6 +59,7 @@ if:
           - qcom,rpm-apq8084
           - qcom,rpm-msm8916
           - qcom,rpm-msm8974
+          - qcom,rpm-msm8953
 then:
   required:
     - qcom,smd-channels
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -7952,9 +7952,10 @@ F:	drivers/media/usb/go7007/
 
 GOODIX TOUCHSCREEN
 M:	Bastien Nocera <hadess@hadess.net>
+M:	Hans de Goede <hdegoede@redhat.com>
 L:	linux-input@vger.kernel.org
 S:	Maintained
-F:	drivers/input/touchscreen/goodix.c
+F:	drivers/input/touchscreen/goodix*
 
 GOOGLE ETHERNET DRIVERS
 M:	Jeroen de Borst <jeroendb@google.com>
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 15
-SUBLEVEL = 53
+SUBLEVEL = 54
 EXTRAVERSION =
 NAME = Trick or Treat
 
@@ -1039,6 +1039,21 @@ ifdef CONFIG_CC_IS_GCC
 KBUILD_CFLAGS += -Wno-maybe-uninitialized
 endif
 
+ifdef CONFIG_CC_IS_GCC
+# The allocators already balk at large sizes, so silence the compiler
+# warnings for bounds checks involving those possible values. While
+# -Wno-alloc-size-larger-than would normally be used here, earlier versions
+# of gcc (<9.1) weirdly don't handle the option correctly when _other_
+# warnings are produced (?!). Using -Walloc-size-larger-than=SIZE_MAX
+# doesn't work (as it is documented to), silently resolving to "0" prior to
+# version 9.1 (and producing an error more recently). Numeric values larger
+# than PTRDIFF_MAX also don't work prior to version 9.1, which are silently
+# ignored, continuing to default to PTRDIFF_MAX. So, left with no other
+# choice, we must perform a versioned check to disable this warning.
+# https://lore.kernel.org/lkml/20210824115859.187f272f@canb.auug.org.au
+KBUILD_CFLAGS += $(call cc-ifversion, -ge, 0901, -Wno-alloc-size-larger-than)
+endif
+
 # disable invalid "can't wrap" optimizations for signed / pointers
 KBUILD_CFLAGS += -fno-strict-overflow
 
--- a/arch/arm/boot/dts/at91-sam9x60ek.dts
+++ b/arch/arm/boot/dts/at91-sam9x60ek.dts
@@ -233,10 +233,9 @@
 	status = "okay";
 
 	eeprom@53 {
-		compatible = "atmel,24c32";
+		compatible = "atmel,24c02";
 		reg = <0x53>;
 		pagesize = <16>;
-		size = <128>;
 		status = "okay";
 	};
 };
--- a/arch/arm/boot/dts/at91-sama5d2_icp.dts
+++ b/arch/arm/boot/dts/at91-sama5d2_icp.dts
@@ -323,21 +323,21 @@
 	status = "okay";
 
 	eeprom@50 {
-		compatible = "atmel,24c32";
+		compatible = "atmel,24c02";
 		reg = <0x50>;
 		pagesize = <16>;
 		status = "okay";
 	};
 
 	eeprom@52 {
-		compatible = "atmel,24c32";
+		compatible = "atmel,24c02";
 		reg = <0x52>;
 		pagesize = <16>;
 		status = "disabled";
 	};
 
 	eeprom@53 {
-		compatible = "atmel,24c32";
+		compatible = "atmel,24c02";
 		reg = <0x53>;
 		pagesize = <16>;
 		status = "disabled";
--- a/arch/arm/boot/dts/stm32mp151.dtsi
+++ b/arch/arm/boot/dts/stm32mp151.dtsi
@@ -1452,7 +1452,7 @@
 		usbh_ohci: usb@5800c000 {
 			compatible = "generic-ohci";
 			reg = <0x5800c000 0x1000>;
-			clocks = <&rcc USBH>;
+			clocks = <&usbphyc>, <&rcc USBH>;
 			resets = <&rcc USBH_R>;
 			interrupts = <GIC_SPI 74 IRQ_TYPE_LEVEL_HIGH>;
 			status = "disabled";
@@ -1461,7 +1461,7 @@
 		usbh_ehci: usb@5800d000 {
 			compatible = "generic-ehci";
 			reg = <0x5800d000 0x1000>;
-			clocks = <&rcc USBH>;
+			clocks = <&usbphyc>, <&rcc USBH>;
 			resets = <&rcc USBH_R>;
 			interrupts = <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>;
 			companion = <&usbh_ohci>;
--- a/arch/arm/configs/mxs_defconfig
+++ b/arch/arm/configs/mxs_defconfig
@@ -93,6 +93,7 @@ CONFIG_REGULATOR_FIXED_VOLTAGE=y
 CONFIG_DRM=y
 CONFIG_DRM_PANEL_SEIKO_43WVF1G=y
 CONFIG_DRM_MXSFB=y
+CONFIG_FB=y
 CONFIG_FB_MODE_HELPERS=y
 CONFIG_LCD_CLASS_DEVICE=y
 CONFIG_BACKLIGHT_CLASS_DEVICE=y
--- a/arch/arm/include/asm/arch_gicv3.h
+++ b/arch/arm/include/asm/arch_gicv3.h
@@ -48,6 +48,7 @@ static inline u32 read_ ## a64(void)	\
 	return read_sysreg(a32);	\
 }	\
 
+CPUIF_MAP(ICC_EOIR1, ICC_EOIR1_EL1)
 CPUIF_MAP(ICC_PMR, ICC_PMR_EL1)
 CPUIF_MAP(ICC_AP0R0, ICC_AP0R0_EL1)
 CPUIF_MAP(ICC_AP0R1, ICC_AP0R1_EL1)
@@ -63,12 +64,6 @@ CPUIF_MAP(ICC_AP1R3, ICC_AP1R3_EL1)
 
 /* Low-level accessors */
 
-static inline void gic_write_eoir(u32 irq)
-{
-	write_sysreg(irq, ICC_EOIR1);
-	isb();
-}
-
 static inline void gic_write_dir(u32 val)
 {
 	write_sysreg(val, ICC_DIR);
--- a/arch/arm/mach-at91/pm.c
+++ b/arch/arm/mach-at91/pm.c
@@ -146,7 +146,7 @@ static const struct wakeup_source_info ws_info[] = {
 
 static const struct of_device_id sama5d2_ws_ids[] = {
 	{ .compatible = "atmel,sama5d2-gem",        .data = &ws_info[0] },
-	{ .compatible = "atmel,at91rm9200-rtc",     .data = &ws_info[1] },
+	{ .compatible = "atmel,sama5d2-rtc",        .data = &ws_info[1] },
 	{ .compatible = "atmel,sama5d3-udc",        .data = &ws_info[2] },
 	{ .compatible = "atmel,at91rm9200-ohci",    .data = &ws_info[2] },
 	{ .compatible = "usb-ohci",                 .data = &ws_info[2] },
@@ -157,24 +157,24 @@ static const struct of_device_id sama5d2_ws_ids[] = {
 };
 
 static const struct of_device_id sam9x60_ws_ids[] = {
-	{ .compatible = "atmel,at91sam9x5-rtc",     .data = &ws_info[1] },
+	{ .compatible = "microchip,sam9x60-rtc",    .data = &ws_info[1] },
 	{ .compatible = "atmel,at91rm9200-ohci",    .data = &ws_info[2] },
 	{ .compatible = "usb-ohci",                 .data = &ws_info[2] },
 	{ .compatible = "atmel,at91sam9g45-ehci",   .data = &ws_info[2] },
 	{ .compatible = "usb-ehci",                 .data = &ws_info[2] },
-	{ .compatible = "atmel,at91sam9260-rtt",    .data = &ws_info[4] },
+	{ .compatible = "microchip,sam9x60-rtt",    .data = &ws_info[4] },
 	{ .compatible = "cdns,sam9x60-macb",        .data = &ws_info[5] },
 	{ /* sentinel */ }
 };
 
 static const struct of_device_id sama7g5_ws_ids[] = {
-	{ .compatible = "atmel,at91sam9x5-rtc",     .data = &ws_info[1] },
+	{ .compatible = "microchip,sama7g5-rtc",    .data = &ws_info[1] },
 	{ .compatible = "microchip,sama7g5-ohci",   .data = &ws_info[2] },
 	{ .compatible = "usb-ohci",                 .data = &ws_info[2] },
 	{ .compatible = "atmel,at91sam9g45-ehci",   .data = &ws_info[2] },
 	{ .compatible = "usb-ehci",                 .data = &ws_info[2] },
 	{ .compatible = "microchip,sama7g5-sdhci",  .data = &ws_info[3] },
-	{ .compatible = "atmel,at91sam9260-rtt",    .data = &ws_info[4] },
+	{ .compatible = "microchip,sama7g5-rtt",    .data = &ws_info[4] },
 	{ /* sentinel */ }
 };
 
--- a/arch/arm/mach-meson/platsmp.c
+++ b/arch/arm/mach-meson/platsmp.c
@@ -71,6 +71,7 @@ static void __init meson_smp_prepare_cpus(const char *scu_compatible,
 	}
 
 	sram_base = of_iomap(node, 0);
+	of_node_put(node);
 	if (!sram_base) {
 		pr_err("Couldn't map SRAM registers\n");
 		return;
@@ -91,6 +92,7 @@ static void __init meson_smp_prepare_cpus(const char *scu_compatible,
 	}
 
 	scu_base = of_iomap(node, 0);
+	of_node_put(node);
 	if (!scu_base) {
 		pr_err("Couldn't map SCU registers\n");
 		return;
--- a/arch/arm64/boot/dts/freescale/imx8mp-evk.dts
+++ b/arch/arm64/boot/dts/freescale/imx8mp-evk.dts
@@ -285,21 +285,21 @@
 &iomuxc {
 	pinctrl_eqos: eqosgrp {
 		fsl,pins = <
-			MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC				0x3
-			MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO				0x3
-			MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0			0x91
-			MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1			0x91
-			MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2			0x91
-			MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3			0x91
-			MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK	0x91
-			MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL			0x91
-			MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0			0x1f
-			MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1			0x1f
-			MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2			0x1f
-			MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3			0x1f
-			MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL			0x1f
-			MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK	0x1f
-			MX8MP_IOMUXC_SAI2_RXC__GPIO4_IO22				0x19
+			MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC				0x2
+			MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO				0x2
+			MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0			0x90
+			MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1			0x90
+			MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2			0x90
+			MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3			0x90
+			MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK	0x90
+			MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL			0x90
+			MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0			0x16
+			MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1			0x16
+			MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2			0x16
+			MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3			0x16
+			MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL			0x16
+			MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK	0x16
+			MX8MP_IOMUXC_SAI2_RXC__GPIO4_IO22				0x10
 		>;
 	};
 
@@ -351,21 +351,21 @@
 
 	pinctrl_gpio_led: gpioledgrp {
 		fsl,pins = <
-			MX8MP_IOMUXC_NAND_READY_B__GPIO3_IO16	0x19
+			MX8MP_IOMUXC_NAND_READY_B__GPIO3_IO16	0x140
 		>;
 	};
 
 	pinctrl_i2c1: i2c1grp {
 		fsl,pins = <
-			MX8MP_IOMUXC_I2C1_SCL__I2C1_SCL		0x400001c3
-			MX8MP_IOMUXC_I2C1_SDA__I2C1_SDA		0x400001c3
+			MX8MP_IOMUXC_I2C1_SCL__I2C1_SCL		0x400001c2
+			MX8MP_IOMUXC_I2C1_SDA__I2C1_SDA		0x400001c2
 		>;
 	};
 
 	pinctrl_i2c3: i2c3grp {
 		fsl,pins = <
-			MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL		0x400001c3
-			MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA		0x400001c3
+			MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL		0x400001c2
+			MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA		0x400001c2
 		>;
 	};
 
@@ -377,20 +377,20 @@
 
 	pinctrl_reg_usdhc2_vmmc: regusdhc2vmmcgrp {
 		fsl,pins = <
-			MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19	0x41
+			MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19	0x40
 		>;
 	};
 
 	pinctrl_uart2: uart2grp {
 		fsl,pins = <
-			MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX	0x49
-			MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX	0x49
+			MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX	0x140
+			MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX	0x140
 		>;
 	};
 
 	pinctrl_usb1_vbus: usb1grp {
 		fsl,pins = <
-			MX8MP_IOMUXC_GPIO1_IO14__USB2_OTG_PWR	0x19
+			MX8MP_IOMUXC_GPIO1_IO14__USB2_OTG_PWR	0x10
 		>;
 	};
 
@@ -402,7 +402,7 @@
 			MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1	0x1d0
 			MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2	0x1d0
 			MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d0
-			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc1
+			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc0
 		>;
 	};
 
@@ -414,7 +414,7 @@
 			MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1	0x1d4
 			MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2	0x1d4
 			MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d4
-			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc1
+			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc0
 		>;
 	};
 
@@ -426,7 +426,7 @@
 			MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1	0x1d6
 			MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2	0x1d6
 			MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d6
-			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc1
+			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc0
 		>;
 	};
 
--- a/arch/arm64/boot/dts/freescale/imx8mp-phyboard-pollux-rdk.dts
+++ b/arch/arm64/boot/dts/freescale/imx8mp-phyboard-pollux-rdk.dts
@@ -116,48 +116,48 @@
 &iomuxc {
 	pinctrl_eqos: eqosgrp {
 		fsl,pins = <
-			MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC				0x3
-			MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO				0x3
-			MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0			0x91
-			MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1			0x91
-			MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2			0x91
-			MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3			0x91
-			MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK	0x91
-			MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL			0x91
-			MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0			0x1f
-			MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1			0x1f
-			MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2			0x1f
-			MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3			0x1f
-			MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL			0x1f
-			MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK	0x1f
+			MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC				0x2
+			MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO				0x2
+			MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0			0x90
+			MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1			0x90
+			MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2			0x90
+			MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3			0x90
+			MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK	0x90
+			MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL			0x90
+			MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0			0x16
+			MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1			0x16
+			MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2			0x16
+			MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3			0x16
+			MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL			0x16
+			MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK	0x16
 			MX8MP_IOMUXC_SAI1_MCLK__GPIO4_IO20				0x10
 		>;
 	};
 
 	pinctrl_i2c2: i2c2grp {
 		fsl,pins = <
-			MX8MP_IOMUXC_I2C2_SCL__I2C2_SCL		0x400001c3
-			MX8MP_IOMUXC_I2C2_SDA__I2C2_SDA		0x400001c3
+			MX8MP_IOMUXC_I2C2_SCL__I2C2_SCL		0x400001c2
+			MX8MP_IOMUXC_I2C2_SDA__I2C2_SDA		0x400001c2
 		>;
 	};
 
 	pinctrl_i2c2_gpio: i2c2gpiogrp {
 		fsl,pins = <
-			MX8MP_IOMUXC_I2C2_SCL__GPIO5_IO16	0x1e3
-			MX8MP_IOMUXC_I2C2_SDA__GPIO5_IO17	0x1e3
+			MX8MP_IOMUXC_I2C2_SCL__GPIO5_IO16	0x1e2
+			MX8MP_IOMUXC_I2C2_SDA__GPIO5_IO17	0x1e2
 		>;
 	};
 
 	pinctrl_reg_usdhc2_vmmc: regusdhc2vmmcgrp {
 		fsl,pins = <
-			MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19	0x41
+			MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19	0x40
 		>;
 	};
 
 	pinctrl_uart1: uart1grp {
 		fsl,pins = <
-			MX8MP_IOMUXC_UART1_RXD__UART1_DCE_RX	0x49
-			MX8MP_IOMUXC_UART1_TXD__UART1_DCE_TX	0x49
+			MX8MP_IOMUXC_UART1_RXD__UART1_DCE_RX	0x40
+			MX8MP_IOMUXC_UART1_TXD__UART1_DCE_TX	0x40
 		>;
 	};
 
@@ -175,7 +175,7 @@
 			MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1	0x1d0
 			MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2	0x1d0
 			MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d0
-			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc1
+			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc0
 		>;
 	};
 
@@ -187,7 +187,7 @@
 			MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1	0x1d4
 			MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2	0x1d4
 			MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d4
-			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc1
+			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc0
 		>;
 	};
 
@@ -199,7 +199,7 @@
 			MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1	0x1d6
 			MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2	0x1d6
 			MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d6
-			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc1
+			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc0
|
||||||
>;
|
>;
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
|
|||||||
@@ -74,7 +74,7 @@
|
|||||||
vdd_l17_29-supply = <&vph_pwr>;
|
vdd_l17_29-supply = <&vph_pwr>;
|
||||||
vdd_l20_21-supply = <&vph_pwr>;
|
vdd_l20_21-supply = <&vph_pwr>;
|
||||||
vdd_l25-supply = <&pm8994_s5>;
|
vdd_l25-supply = <&pm8994_s5>;
|
||||||
vdd_lvs1_2 = <&pm8994_s4>;
|
vdd_lvs1_2-supply = <&pm8994_s4>;
|
||||||
|
|
||||||
/* S1, S2, S6 and S12 are managed by RPMPD */
|
/* S1, S2, S6 and S12 are managed by RPMPD */
|
||||||
|
|
||||||
|
|||||||
@@ -142,7 +142,7 @@
|
|||||||
vdd_l17_29-supply = <&vph_pwr>;
|
vdd_l17_29-supply = <&vph_pwr>;
|
||||||
vdd_l20_21-supply = <&vph_pwr>;
|
vdd_l20_21-supply = <&vph_pwr>;
|
||||||
vdd_l25-supply = <&pm8994_s5>;
|
vdd_l25-supply = <&pm8994_s5>;
|
||||||
vdd_lvs1_2 = <&pm8994_s4>;
|
vdd_lvs1_2-supply = <&pm8994_s4>;
|
||||||
|
|
||||||
/* S1, S2, S6 and S12 are managed by RPMPD */
|
/* S1, S2, S6 and S12 are managed by RPMPD */
|
||||||
|
|
||||||
|
|||||||
@@ -93,7 +93,7 @@
|
|||||||
CPU6: cpu@102 {
|
CPU6: cpu@102 {
|
||||||
device_type = "cpu";
|
device_type = "cpu";
|
||||||
compatible = "arm,cortex-a57";
|
compatible = "arm,cortex-a57";
|
||||||
reg = <0x0 0x101>;
|
reg = <0x0 0x102>;
|
||||||
enable-method = "psci";
|
enable-method = "psci";
|
||||||
next-level-cache = <&L2_1>;
|
next-level-cache = <&L2_1>;
|
||||||
};
|
};
|
||||||
@@ -101,7 +101,7 @@
|
|||||||
CPU7: cpu@103 {
|
CPU7: cpu@103 {
|
||||||
device_type = "cpu";
|
device_type = "cpu";
|
||||||
compatible = "arm,cortex-a57";
|
compatible = "arm,cortex-a57";
|
||||||
reg = <0x0 0x101>;
|
reg = <0x0 0x103>;
|
||||||
enable-method = "psci";
|
enable-method = "psci";
|
||||||
next-level-cache = <&L2_1>;
|
next-level-cache = <&L2_1>;
|
||||||
};
|
};
|
||||||
|
|||||||
@@ -4147,7 +4147,7 @@
|
|||||||
|
|
||||||
power-domains = <&dispcc MDSS_GDSC>;
|
power-domains = <&dispcc MDSS_GDSC>;
|
||||||
|
|
||||||
clocks = <&gcc GCC_DISP_AHB_CLK>,
|
clocks = <&dispcc DISP_CC_MDSS_AHB_CLK>,
|
||||||
<&dispcc DISP_CC_MDSS_MDP_CLK>;
|
<&dispcc DISP_CC_MDSS_MDP_CLK>;
|
||||||
clock-names = "iface", "core";
|
clock-names = "iface", "core";
|
||||||
|
|
||||||
|
|||||||
@@ -26,12 +26,6 @@
|
|||||||
* sets the GP register's most significant bits to 0 with an explicit cast.
|
* sets the GP register's most significant bits to 0 with an explicit cast.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
static inline void gic_write_eoir(u32 irq)
|
|
||||||
{
|
|
||||||
write_sysreg_s(irq, SYS_ICC_EOIR1_EL1);
|
|
||||||
isb();
|
|
||||||
}
|
|
||||||
|
|
||||||
static __always_inline void gic_write_dir(u32 irq)
|
static __always_inline void gic_write_dir(u32 irq)
|
||||||
{
|
{
|
||||||
write_sysreg_s(irq, SYS_ICC_DIR_EL1);
|
write_sysreg_s(irq, SYS_ICC_DIR_EL1);
|
||||||
|
|||||||
@@ -788,6 +788,9 @@ emit_cond_jmp:
|
|||||||
u64 imm64;
|
u64 imm64;
|
||||||
|
|
||||||
imm64 = (u64)insn1.imm << 32 | (u32)imm;
|
imm64 = (u64)insn1.imm << 32 | (u32)imm;
|
||||||
|
if (bpf_pseudo_func(insn))
|
||||||
|
emit_addr_mov_i64(dst, imm64, ctx);
|
||||||
|
else
|
||||||
emit_a64_mov_i64(dst, imm64, ctx);
|
emit_a64_mov_i64(dst, imm64, ctx);
|
||||||
|
|
||||||
return 1;
|
return 1;
|
||||||
|
|||||||
@@ -226,16 +226,19 @@ p_base: mflr r10 /* r10 now points to runtime addr of p_base */
|
|||||||
#ifdef __powerpc64__
|
#ifdef __powerpc64__
|
||||||
|
|
||||||
#define PROM_FRAME_SIZE 512
|
#define PROM_FRAME_SIZE 512
|
||||||
#define SAVE_GPR(n, base) std n,8*(n)(base)
|
|
||||||
#define REST_GPR(n, base) ld n,8*(n)(base)
|
.macro OP_REGS op, width, start, end, base, offset
|
||||||
#define SAVE_2GPRS(n, base) SAVE_GPR(n, base); SAVE_GPR(n+1, base)
|
.Lreg=\start
|
||||||
#define SAVE_4GPRS(n, base) SAVE_2GPRS(n, base); SAVE_2GPRS(n+2, base)
|
.rept (\end - \start + 1)
|
||||||
#define SAVE_8GPRS(n, base) SAVE_4GPRS(n, base); SAVE_4GPRS(n+4, base)
|
\op .Lreg,\offset+\width*.Lreg(\base)
|
||||||
#define SAVE_10GPRS(n, base) SAVE_8GPRS(n, base); SAVE_2GPRS(n+8, base)
|
.Lreg=.Lreg+1
|
||||||
#define REST_2GPRS(n, base) REST_GPR(n, base); REST_GPR(n+1, base)
|
.endr
|
||||||
#define REST_4GPRS(n, base) REST_2GPRS(n, base); REST_2GPRS(n+2, base)
|
.endm
|
||||||
#define REST_8GPRS(n, base) REST_4GPRS(n, base); REST_4GPRS(n+4, base)
|
|
||||||
#define REST_10GPRS(n, base) REST_8GPRS(n, base); REST_2GPRS(n+8, base)
|
#define SAVE_GPRS(start, end, base) OP_REGS std, 8, start, end, base, 0
|
||||||
|
#define REST_GPRS(start, end, base) OP_REGS ld, 8, start, end, base, 0
|
||||||
|
#define SAVE_GPR(n, base) SAVE_GPRS(n, n, base)
|
||||||
|
#define REST_GPR(n, base) REST_GPRS(n, n, base)
|
||||||
|
|
||||||
/* prom handles the jump into and return from firmware. The prom args pointer
|
/* prom handles the jump into and return from firmware. The prom args pointer
|
||||||
is loaded in r3. */
|
is loaded in r3. */
|
||||||
@@ -246,9 +249,7 @@ prom:
|
|||||||
stdu r1,-PROM_FRAME_SIZE(r1) /* Save SP and create stack space */
|
stdu r1,-PROM_FRAME_SIZE(r1) /* Save SP and create stack space */
|
||||||
|
|
||||||
SAVE_GPR(2, r1)
|
SAVE_GPR(2, r1)
|
||||||
SAVE_GPR(13, r1)
|
SAVE_GPRS(13, 31, r1)
|
||||||
SAVE_8GPRS(14, r1)
|
|
||||||
SAVE_10GPRS(22, r1)
|
|
||||||
mfcr r10
|
mfcr r10
|
||||||
std r10,8*32(r1)
|
std r10,8*32(r1)
|
||||||
mfmsr r10
|
mfmsr r10
|
||||||
@@ -283,9 +284,7 @@ prom:
|
|||||||
|
|
||||||
/* Restore other registers */
|
/* Restore other registers */
|
||||||
REST_GPR(2, r1)
|
REST_GPR(2, r1)
|
||||||
REST_GPR(13, r1)
|
REST_GPRS(13, 31, r1)
|
||||||
REST_8GPRS(14, r1)
|
|
||||||
REST_10GPRS(22, r1)
|
|
||||||
ld r10,8*32(r1)
|
ld r10,8*32(r1)
|
||||||
mtcr r10
|
mtcr r10
|
||||||
|
|
||||||
|
|||||||
@@ -38,15 +38,11 @@
|
|||||||
|
|
||||||
#define INITIALIZE \
|
#define INITIALIZE \
|
||||||
PPC_STLU r1,-INT_FRAME_SIZE(r1); \
|
PPC_STLU r1,-INT_FRAME_SIZE(r1); \
|
||||||
SAVE_8GPRS(14, r1); /* push registers onto stack */ \
|
SAVE_GPRS(14, 26, r1) /* push registers onto stack */
|
||||||
SAVE_4GPRS(22, r1); \
|
|
||||||
SAVE_GPR(26, r1)
|
|
||||||
|
|
||||||
#define FINALIZE \
|
#define FINALIZE \
|
||||||
REST_8GPRS(14, r1); /* pop registers from stack */ \
|
REST_GPRS(14, 26, r1); /* pop registers from stack */ \
|
||||||
REST_4GPRS(22, r1); \
|
addi r1,r1,INT_FRAME_SIZE
|
||||||
REST_GPR(26, r1); \
|
|
||||||
addi r1,r1,INT_FRAME_SIZE;
|
|
||||||
|
|
||||||
#ifdef __BIG_ENDIAN__
|
#ifdef __BIG_ENDIAN__
|
||||||
#define LOAD_DATA(reg, off) \
|
#define LOAD_DATA(reg, off) \
|
||||||
|
|||||||
@@ -125,8 +125,7 @@
|
|||||||
|
|
||||||
_GLOBAL(powerpc_sha_transform)
|
_GLOBAL(powerpc_sha_transform)
|
||||||
PPC_STLU r1,-INT_FRAME_SIZE(r1)
|
PPC_STLU r1,-INT_FRAME_SIZE(r1)
|
||||||
SAVE_8GPRS(14, r1)
|
SAVE_GPRS(14, 31, r1)
|
||||||
SAVE_10GPRS(22, r1)
|
|
||||||
|
|
||||||
/* Load up A - E */
|
/* Load up A - E */
|
||||||
lwz RA(0),0(r3) /* A */
|
lwz RA(0),0(r3) /* A */
|
||||||
@@ -184,7 +183,6 @@ _GLOBAL(powerpc_sha_transform)
|
|||||||
stw RD(0),12(r3)
|
stw RD(0),12(r3)
|
||||||
stw RE(0),16(r3)
|
stw RE(0),16(r3)
|
||||||
|
|
||||||
REST_8GPRS(14, r1)
|
REST_GPRS(14, 31, r1)
|
||||||
REST_10GPRS(22, r1)
|
|
||||||
addi r1,r1,INT_FRAME_SIZE
|
addi r1,r1,INT_FRAME_SIZE
|
||||||
blr
|
blr
|
||||||
|
|||||||
@@ -16,30 +16,41 @@
|
|||||||
|
|
||||||
#define SZL (BITS_PER_LONG/8)
|
#define SZL (BITS_PER_LONG/8)
|
||||||
|
|
||||||
|
/*
|
||||||
|
* This expands to a sequence of operations with reg incrementing from
|
||||||
|
* start to end inclusive, of this form:
|
||||||
|
*
|
||||||
|
* op reg, (offset + (width * reg))(base)
|
||||||
|
*
|
||||||
|
* Note that offset is not the offset of the first operation unless start
|
||||||
|
* is zero (or width is zero).
|
||||||
|
*/
|
||||||
|
.macro OP_REGS op, width, start, end, base, offset
|
||||||
|
.Lreg=\start
|
||||||
|
.rept (\end - \start + 1)
|
||||||
|
\op .Lreg, \offset + \width * .Lreg(\base)
|
||||||
|
.Lreg=.Lreg+1
|
||||||
|
.endr
|
||||||
|
.endm
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Macros for storing registers into and loading registers from
|
* Macros for storing registers into and loading registers from
|
||||||
* exception frames.
|
* exception frames.
|
||||||
*/
|
*/
|
||||||
#ifdef __powerpc64__
|
#ifdef __powerpc64__
|
||||||
#define SAVE_GPR(n, base) std n,GPR0+8*(n)(base)
|
#define SAVE_GPRS(start, end, base) OP_REGS std, 8, start, end, base, GPR0
|
||||||
#define REST_GPR(n, base) ld n,GPR0+8*(n)(base)
|
#define REST_GPRS(start, end, base) OP_REGS ld, 8, start, end, base, GPR0
|
||||||
#define SAVE_NVGPRS(base) SAVE_8GPRS(14, base); SAVE_10GPRS(22, base)
|
#define SAVE_NVGPRS(base) SAVE_GPRS(14, 31, base)
|
||||||
#define REST_NVGPRS(base) REST_8GPRS(14, base); REST_10GPRS(22, base)
|
#define REST_NVGPRS(base) REST_GPRS(14, 31, base)
|
||||||
#else
|
#else
|
||||||
#define SAVE_GPR(n, base) stw n,GPR0+4*(n)(base)
|
#define SAVE_GPRS(start, end, base) OP_REGS stw, 4, start, end, base, GPR0
|
||||||
#define REST_GPR(n, base) lwz n,GPR0+4*(n)(base)
|
#define REST_GPRS(start, end, base) OP_REGS lwz, 4, start, end, base, GPR0
|
||||||
#define SAVE_NVGPRS(base) stmw 13, GPR0+4*13(base)
|
#define SAVE_NVGPRS(base) SAVE_GPRS(13, 31, base)
|
||||||
#define REST_NVGPRS(base) lmw 13, GPR0+4*13(base)
|
#define REST_NVGPRS(base) REST_GPRS(13, 31, base)
|
||||||
#endif
|
#endif
|
||||||
|
|
||||||
#define SAVE_2GPRS(n, base) SAVE_GPR(n, base); SAVE_GPR(n+1, base)
|
#define SAVE_GPR(n, base) SAVE_GPRS(n, n, base)
|
||||||
#define SAVE_4GPRS(n, base) SAVE_2GPRS(n, base); SAVE_2GPRS(n+2, base)
|
#define REST_GPR(n, base) REST_GPRS(n, n, base)
|
||||||
#define SAVE_8GPRS(n, base) SAVE_4GPRS(n, base); SAVE_4GPRS(n+4, base)
|
|
||||||
#define SAVE_10GPRS(n, base) SAVE_8GPRS(n, base); SAVE_2GPRS(n+8, base)
|
|
||||||
#define REST_2GPRS(n, base) REST_GPR(n, base); REST_GPR(n+1, base)
|
|
||||||
#define REST_4GPRS(n, base) REST_2GPRS(n, base); REST_2GPRS(n+2, base)
|
|
||||||
#define REST_8GPRS(n, base) REST_4GPRS(n, base); REST_4GPRS(n+4, base)
|
|
||||||
#define REST_10GPRS(n, base) REST_8GPRS(n, base); REST_2GPRS(n+8, base)
|
|
||||||
|
|
||||||
#define SAVE_FPR(n, base) stfd n,8*TS_FPRWIDTH*(n)(base)
|
#define SAVE_FPR(n, base) stfd n,8*TS_FPRWIDTH*(n)(base)
|
||||||
#define SAVE_2FPRS(n, base) SAVE_FPR(n, base); SAVE_FPR(n+1, base)
|
#define SAVE_2FPRS(n, base) SAVE_FPR(n, base); SAVE_FPR(n+1, base)
|
||||||
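The OP_REGS assembler macro introduced above replaces the old SAVE_nGPRS/REST_nGPRS ladder with a single `.rept` loop. As an illustrative sketch (not part of the patch; the `op_regs` helper name is invented here), the expansion it performs can be modeled in Python to check which store instructions `SAVE_GPRS(14, 31, r1)` would emit on 64-bit:

```python
def op_regs(op, width, start, end, base, offset=0):
    # Mirror of the GAS .rept loop: one instruction per register,
    # each saved at offset + width * reg within the frame.
    return ["%s r%d,%d(%s)" % (op, reg, offset + width * reg, base)
            for reg in range(start, end + 1)]

# 64-bit SAVE_GPRS(14, 31, r1) -> 18 'std' instructions
insns = op_regs("std", 8, 14, 31, "r1")
print(len(insns))   # 18
print(insns[0])     # std r14,112(r1)
print(insns[-1])    # std r31,248(r1)
```

This makes visible why the note in the macro's comment matters: the byte offset of the first store is `width * start`, not `offset`, unless `start` is zero.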
|
@@ -90,8 +90,7 @@ transfer_to_syscall:
 stw r12,8(r1)
 stw r2,_TRAP(r1)
 SAVE_GPR(0, r1)
-SAVE_4GPRS(3, r1)
-SAVE_2GPRS(7, r1)
+SAVE_GPRS(3, 8, r1)
 addi r2,r10,-THREAD
 SAVE_NVGPRS(r1)

@@ -139,7 +138,7 @@ syscall_exit_finish:
 mtxer r5
 lwz r0,GPR0(r1)
 lwz r3,GPR3(r1)
-REST_8GPRS(4,r1)
+REST_GPRS(4, 11, r1)
 lwz r12,GPR12(r1)
 b 1b

@@ -232,9 +231,9 @@ fast_exception_return:
 beq 3f /* if not, we've got problems */
 #endif

-2: REST_4GPRS(3, r11)
+2: REST_GPRS(3, 6, r11)
 lwz r10,_CCR(r11)
-REST_2GPRS(1, r11)
+REST_GPRS(1, 2, r11)
 mtcr r10
 lwz r10,_LINK(r11)
 mtlr r10
@@ -298,16 +297,14 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)
 * the reliable stack unwinder later on. Clear it.
 */
 stw r0,8(r1)
-REST_4GPRS(7, r1)
-REST_2GPRS(11, r1)
+REST_GPRS(7, 12, r1)

 mtcr r3
 mtlr r4
 mtctr r5
 mtspr SPRN_XER,r6

-REST_4GPRS(2, r1)
-REST_GPR(6, r1)
+REST_GPRS(2, 6, r1)
 REST_GPR(0, r1)
 REST_GPR(1, r1)
 rfi
@@ -341,8 +338,7 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)
 lwz r6,_CCR(r1)
 li r0,0

-REST_4GPRS(7, r1)
-REST_2GPRS(11, r1)
+REST_GPRS(7, 12, r1)

 mtlr r3
 mtctr r4
@@ -354,7 +350,7 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)
 */
 stw r0,8(r1)

-REST_4GPRS(2, r1)
+REST_GPRS(2, 5, r1)

 bne- cr1,1f /* emulate stack store */
 mtcr r6

@@ -198,8 +198,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)

 stdcx. r0,0,r1 /* to clear the reservation */

-REST_4GPRS(2, r1)
-REST_4GPRS(6, r1)
+REST_GPRS(2, 9, r1)

 ld r10,_CTR(r1)
 ld r11,_XER(r1)
@@ -375,9 +374,7 @@ ret_from_mc_except:
 exc_##n##_common: \
 std r0,GPR0(r1); /* save r0 in stackframe */ \
 std r2,GPR2(r1); /* save r2 in stackframe */ \
-SAVE_4GPRS(3, r1); /* save r3 - r6 in stackframe */ \
-SAVE_2GPRS(7, r1); /* save r7, r8 in stackframe */ \
-std r9,GPR9(r1); /* save r9 in stackframe */ \
+SAVE_GPRS(3, 9, r1); /* save r3 - r9 in stackframe */ \
 std r10,_NIP(r1); /* save SRR0 to stackframe */ \
 std r11,_MSR(r1); /* save SRR1 to stackframe */ \
 beq 2f; /* if from kernel mode */ \
@@ -1061,9 +1058,7 @@ bad_stack_book3e:
 std r11,_ESR(r1)
 std r0,GPR0(r1); /* save r0 in stackframe */ \
 std r2,GPR2(r1); /* save r2 in stackframe */ \
-SAVE_4GPRS(3, r1); /* save r3 - r6 in stackframe */ \
-SAVE_2GPRS(7, r1); /* save r7, r8 in stackframe */ \
-std r9,GPR9(r1); /* save r9 in stackframe */ \
+SAVE_GPRS(3, 9, r1); /* save r3 - r9 in stackframe */ \
 ld r3,PACA_EXGEN+EX_R10(r13);/* get back r10 */ \
 ld r4,PACA_EXGEN+EX_R11(r13);/* get back r11 */ \
 mfspr r5,SPRN_SPRG_GEN_SCRATCH;/* get back r13 XXX can be wrong */ \
@@ -1077,8 +1072,7 @@ bad_stack_book3e:
 std r10,_LINK(r1)
 std r11,_CTR(r1)
 std r12,_XER(r1)
-SAVE_10GPRS(14,r1)
-SAVE_8GPRS(24,r1)
+SAVE_GPRS(14, 31, r1)
 lhz r12,PACA_TRAP_SAVE(r13)
 std r12,_TRAP(r1)
 addi r11,r1,INT_FRAME_SIZE

@@ -574,8 +574,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 ld r10,IAREA+EX_CTR(r13)
 std r10,_CTR(r1)
 std r2,GPR2(r1) /* save r2 in stackframe */
-SAVE_4GPRS(3, r1) /* save r3 - r6 in stackframe */
-SAVE_2GPRS(7, r1) /* save r7, r8 in stackframe */
+SAVE_GPRS(3, 8, r1) /* save r3 - r8 in stackframe */
 mflr r9 /* Get LR, later save to stack */
 ld r2,PACATOC(r13) /* get kernel TOC into r2 */
 std r9,_LINK(r1)
@@ -693,8 +692,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 mtlr r9
 ld r9,_CCR(r1)
 mtcr r9
-REST_8GPRS(2, r1)
-REST_4GPRS(10, r1)
+REST_GPRS(2, 13, r1)
 REST_GPR(0, r1)
 /* restore original r1. */
 ld r1,GPR1(r1)

@@ -115,8 +115,7 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
 stw r10,8(r1)
 li r10, \trapno
 stw r10,_TRAP(r1)
-SAVE_4GPRS(3, r1)
-SAVE_2GPRS(7, r1)
+SAVE_GPRS(3, 8, r1)
 SAVE_NVGPRS(r1)
 stw r2,GPR2(r1)
 stw r12,_NIP(r1)

@@ -87,8 +87,7 @@ END_BTB_FLUSH_SECTION
 stw r10, 8(r1)
 li r10, \trapno
 stw r10,_TRAP(r1)
-SAVE_4GPRS(3, r1)
-SAVE_2GPRS(7, r1)
+SAVE_GPRS(3, 8, r1)
 SAVE_NVGPRS(r1)
 stw r2,GPR2(r1)
 stw r12,_NIP(r1)

@@ -166,10 +166,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 * The value of AMR only matters while we're in the kernel.
 */
 mtcr r2
-ld r2,GPR2(r1)
-ld r3,GPR3(r1)
-ld r13,GPR13(r1)
-ld r1,GPR1(r1)
+REST_GPRS(2, 3, r1)
+REST_GPR(13, r1)
+REST_GPR(1, r1)
 RFSCV_TO_USER
 b . /* prevent speculative execution */

@@ -187,9 +186,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 mtctr r3
 mtlr r4
 mtspr SPRN_XER,r5
-REST_10GPRS(2, r1)
-REST_2GPRS(12, r1)
-ld r1,GPR1(r1)
+REST_GPRS(2, 13, r1)
+REST_GPR(1, r1)
 RFI_TO_USER
 .Lsyscall_vectored_\name\()_rst_end:

@@ -378,10 +376,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 * The value of AMR only matters while we're in the kernel.
 */
 mtcr r2
-ld r2,GPR2(r1)
-ld r3,GPR3(r1)
-ld r13,GPR13(r1)
-ld r1,GPR1(r1)
+REST_GPRS(2, 3, r1)
+REST_GPR(13, r1)
+REST_GPR(1, r1)
 RFI_TO_USER
 b . /* prevent speculative execution */

@@ -392,8 +389,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 mtctr r3
 mtspr SPRN_XER,r4
 ld r0,GPR0(r1)
-REST_8GPRS(4, r1)
-ld r12,GPR12(r1)
+REST_GPRS(4, 12, r1)
 b .Lsyscall_restore_regs_cont
 .Lsyscall_rst_end:

@@ -522,17 +518,14 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)
 ld r6,_XER(r1)
 li r0,0

-REST_4GPRS(7, r1)
-REST_2GPRS(11, r1)
-REST_GPR(13, r1)
+REST_GPRS(7, 13, r1)

 mtcr r3
 mtlr r4
 mtctr r5
 mtspr SPRN_XER,r6

-REST_4GPRS(2, r1)
-REST_GPR(6, r1)
+REST_GPRS(2, 6, r1)
 REST_GPR(0, r1)
 REST_GPR(1, r1)
 .ifc \srr,srr
@@ -629,8 +622,7 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)
 ld r6,_CCR(r1)
 li r0,0

-REST_4GPRS(7, r1)
-REST_2GPRS(11, r1)
+REST_GPRS(7, 12, r1)

 mtlr r3
 mtctr r4
@@ -642,7 +634,7 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)
 */
 std r0,STACK_FRAME_OVERHEAD-16(r1)

-REST_4GPRS(2, r1)
+REST_GPRS(2, 5, r1)

 bne- cr1,1f /* emulate stack store */
 mtcr r6

@@ -10,8 +10,8 @@
 #include <asm/asm-offsets.h>

 #ifdef CONFIG_PPC64
-#define SAVE_30GPRS(base) SAVE_10GPRS(2,base); SAVE_10GPRS(12,base); SAVE_10GPRS(22,base)
-#define REST_30GPRS(base) REST_10GPRS(2,base); REST_10GPRS(12,base); REST_10GPRS(22,base)
+#define SAVE_30GPRS(base) SAVE_GPRS(2, 31, base)
+#define REST_30GPRS(base) REST_GPRS(2, 31, base)
 #define TEMPLATE_FOR_IMM_LOAD_INSNS nop; nop; nop; nop; nop
 #else
 #define SAVE_30GPRS(base) stmw r2, GPR2(base)

@@ -226,11 +226,8 @@ _GLOBAL(tm_reclaim)

 /* Sync the userland GPRs 2-12, 14-31 to thread->regs: */
 SAVE_GPR(0, r7) /* user r0 */
-SAVE_GPR(2, r7) /* user r2 */
-SAVE_4GPRS(3, r7) /* user r3-r6 */
-SAVE_GPR(8, r7) /* user r8 */
-SAVE_GPR(9, r7) /* user r9 */
-SAVE_GPR(10, r7) /* user r10 */
+SAVE_GPRS(2, 6, r7) /* user r2-r6 */
+SAVE_GPRS(8, 10, r7) /* user r8-r10 */
 ld r3, GPR1(r1) /* user r1 */
 ld r4, GPR7(r1) /* user r7 */
 ld r5, GPR11(r1) /* user r11 */
@@ -445,12 +442,9 @@ restore_gprs:
 ld r6, THREAD_TM_PPR(r3)

 REST_GPR(0, r7) /* GPR0 */
-REST_2GPRS(2, r7) /* GPR2-3 */
-REST_GPR(4, r7) /* GPR4 */
-REST_4GPRS(8, r7) /* GPR8-11 */
-REST_2GPRS(12, r7) /* GPR12-13 */
-
-REST_NVGPRS(r7) /* GPR14-31 */
+REST_GPRS(2, 4, r7) /* GPR2-4 */
+REST_GPRS(8, 12, r7) /* GPR8-12 */
+REST_GPRS(14, 31, r7) /* GPR14-31 */

 /* Load up PPR and DSCR here so we don't run with user values for long */
 mtspr SPRN_DSCR, r5
@@ -486,18 +480,24 @@ restore_gprs:
 REST_GPR(6, r7)

 /*
-* Store r1 and r5 on the stack so that we can access them after we
-* clear MSR RI.
+* Store user r1 and r5 and r13 on the stack (in the unused save
+* areas / compiler reserved areas), so that we can access them after
+* we clear MSR RI.
 */

 REST_GPR(5, r7)
 std r5, -8(r1)
-ld r5, GPR1(r7)
+ld r5, GPR13(r7)
 std r5, -16(r1)
+ld r5, GPR1(r7)
+std r5, -24(r1)

 REST_GPR(7, r7)

-/* Clear MSR RI since we are about to use SCRATCH0. EE is already off */
+/* Stash the stack pointer away for use after recheckpoint */
+std r1, PACAR1(r13)
+
+/* Clear MSR RI since we are about to clobber r13. EE is already off */
 li r5, 0
 mtmsrd r5, 1

@@ -508,9 +508,9 @@ restore_gprs:
 * until we turn MSR RI back on.
 */

-SET_SCRATCH0(r1)
 ld r5, -8(r1)
-ld r1, -16(r1)
+ld r13, -16(r1)
+ld r1, -24(r1)

 /* Commit register state as checkpointed state: */
 TRECHKPT
@@ -526,9 +526,9 @@ restore_gprs:
 */

 GET_PACA(r13)
-GET_SCRATCH0(r1)
+ld r1, PACAR1(r13)

-/* R1 is restored, so we are recoverable again. EE is still off */
+/* R13, R1 is restored, so we are recoverable again. EE is still off */
 li r4, MSR_RI
 mtmsrd r4, 1

@@ -41,15 +41,14 @@ _GLOBAL(ftrace_regs_caller)

 /* Save all gprs to pt_regs */
 SAVE_GPR(0, r1)
-SAVE_10GPRS(2, r1)
+SAVE_GPRS(2, 11, r1)

 /* Ok to continue? */
 lbz r3, PACA_FTRACE_ENABLED(r13)
 cmpdi r3, 0
 beq ftrace_no_trace

-SAVE_10GPRS(12, r1)
-SAVE_10GPRS(22, r1)
+SAVE_GPRS(12, 31, r1)

 /* Save previous stack pointer (r1) */
 addi r8, r1, SWITCH_FRAME_SIZE
@@ -109,9 +108,7 @@ ftrace_regs_call:

 /* Restore gprs */
 REST_GPR(0, r1)
-REST_10GPRS(2,r1)
-REST_10GPRS(12,r1)
-REST_10GPRS(22,r1)
+REST_GPRS(2, 31, r1)

 /* Restore possibly modified LR */
 ld r0, _LINK(r1)
@@ -157,7 +154,7 @@ _GLOBAL(ftrace_caller)
 stdu r1, -SWITCH_FRAME_SIZE(r1)

 /* Save all gprs to pt_regs */
-SAVE_8GPRS(3, r1)
+SAVE_GPRS(3, 10, r1)

 lbz r3, PACA_FTRACE_ENABLED(r13)
 cmpdi r3, 0
@@ -194,7 +191,7 @@ ftrace_call:
 mtctr r3

 /* Restore gprs */
-REST_8GPRS(3,r1)
+REST_GPRS(3, 10, r1)

 /* Restore callee's TOC */
 ld r2, 24(r1)

@@ -2711,8 +2711,7 @@ kvmppc_bad_host_intr:
 std r0, GPR0(r1)
 std r9, GPR1(r1)
 std r2, GPR2(r1)
-SAVE_4GPRS(3, r1)
-SAVE_2GPRS(7, r1)
+SAVE_GPRS(3, 8, r1)
 srdi r0, r12, 32
 clrldi r12, r12, 32
 std r0, _CCR(r1)
@@ -2735,7 +2734,7 @@ kvmppc_bad_host_intr:
 ld r9, HSTATE_SCRATCH2(r13)
 ld r12, HSTATE_SCRATCH0(r13)
 GET_SCRATCH0(r0)
-SAVE_4GPRS(9, r1)
+SAVE_GPRS(9, 12, r1)
 std r0, GPR13(r1)
 SAVE_NVGPRS(r1)
 ld r5, HSTATE_CFAR(r13)

@@ -251,7 +251,7 @@ int kvmppc_uvmem_slot_init(struct kvm *kvm, const struct kvm_memory_slot *slot)
 p = kzalloc(sizeof(*p), GFP_KERNEL);
 if (!p)
 return -ENOMEM;
-p->pfns = vzalloc(array_size(slot->npages, sizeof(*p->pfns)));
+p->pfns = vcalloc(slot->npages, sizeof(*p->pfns));
 if (!p->pfns) {
 kfree(p);
 return -ENOMEM;
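The hunk above replaces `vzalloc(array_size(...))` with `vcalloc()`, which performs the same overflow-checked multiplication internally before allocating. A sketch of the saturation behaviour that `array_size()` provides (modeled in Python for illustration; `SIZE_MAX` here assumes a 64-bit `size_t`, and the helper name mirrors, not reimplements, the kernel API):

```python
SIZE_MAX = 2**64 - 1  # assumed 64-bit size_t

def array_size(n, size):
    # Like the kernel's array_size(): saturate to SIZE_MAX on overflow,
    # so the subsequent allocation fails cleanly instead of undersizing.
    total = n * size
    return SIZE_MAX if total > SIZE_MAX else total

print(array_size(1024, 8))               # 8192
print(array_size(2**62, 8) == SIZE_MAX)  # True (overflow saturates)
```

Using `vcalloc(n, size)` folds this check into the allocator call itself, so a multiplication can never be written without it.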
|
|||||||
@@ -37,7 +37,7 @@ _GLOBAL(exec_instr)
|
|||||||
* The stack pointer (GPR1) and the thread pointer (GPR13) are not
|
* The stack pointer (GPR1) and the thread pointer (GPR13) are not
|
||||||
* saved as these should not be modified anyway.
|
* saved as these should not be modified anyway.
|
||||||
*/
|
*/
|
||||||
SAVE_2GPRS(2, r1)
|
SAVE_GPRS(2, 3, r1)
|
||||||
SAVE_NVGPRS(r1)
|
SAVE_NVGPRS(r1)
|
||||||
|
|
||||||
/*
|
/*
|
||||||
@@ -75,8 +75,7 @@ _GLOBAL(exec_instr)
|
|||||||
|
|
||||||
/* Load GPRs from pt_regs */
|
/* Load GPRs from pt_regs */
|
||||||
REST_GPR(0, r31)
|
REST_GPR(0, r31)
|
||||||
REST_10GPRS(2, r31)
|
REST_GPRS(2, 12, r31)
|
||||||
REST_GPR(12, r31)
|
|
||||||
REST_NVGPRS(r31)
|
REST_NVGPRS(r31)
|
||||||
|
|
||||||
/* Placeholder for the test instruction */
|
/* Placeholder for the test instruction */
|
||||||
@@ -99,8 +98,7 @@ _GLOBAL(exec_instr)
|
|||||||
subi r3, r3, GPR0
|
subi r3, r3, GPR0
|
||||||
SAVE_GPR(0, r3)
|
SAVE_GPR(0, r3)
|
||||||
SAVE_GPR(2, r3)
|
SAVE_GPR(2, r3)
|
||||||
SAVE_8GPRS(4, r3)
|
SAVE_GPRS(4, 12, r3)
|
||||||
SAVE_GPR(12, r3)
|
|
||||||
SAVE_NVGPRS(r3)
|
SAVE_NVGPRS(r3)
|
||||||
|
|
||||||
/* Save resulting LR to pt_regs */
|
/* Save resulting LR to pt_regs */
|
||||||
|
|||||||
@@ -176,12 +176,8 @@ static int __init pnv_get_random_long_early(unsigned long *v)
 		    NULL) != pnv_get_random_long_early)
 		return 0;

-	for_each_compatible_node(dn, NULL, "ibm,power-rng") {
-		if (rng_create(dn))
-			continue;
-		/* Create devices for hwrng driver */
-		of_platform_device_create(dn, NULL, NULL);
-	}
+	for_each_compatible_node(dn, NULL, "ibm,power-rng")
+		rng_create(dn);

 	if (!ppc_md.get_random_seed)
 		return 0;
@@ -205,10 +201,18 @@ void __init pnv_rng_init(void)

 static int __init pnv_rng_late_init(void)
 {
+	struct device_node *dn;
 	unsigned long v;

 	/* In case it wasn't called during init for some other reason. */
 	if (ppc_md.get_random_seed == pnv_get_random_long_early)
 		pnv_get_random_long_early(&v);

+	if (ppc_md.get_random_seed == powernv_get_random_long) {
+		for_each_compatible_node(dn, NULL, "ibm,power-rng")
+			of_platform_device_create(dn, NULL, NULL);
+	}
+
 	return 0;
 }
 machine_subsys_initcall(powernv, pnv_rng_late_init);
@@ -72,9 +72,11 @@ CONFIG_GPIOLIB=y
 CONFIG_GPIO_SIFIVE=y
 # CONFIG_PTP_1588_CLOCK is not set
 CONFIG_POWER_RESET=y
-CONFIG_DRM=y
-CONFIG_DRM_RADEON=y
-CONFIG_DRM_VIRTIO_GPU=y
+CONFIG_DRM=m
+CONFIG_DRM_RADEON=m
+CONFIG_DRM_NOUVEAU=m
+CONFIG_DRM_VIRTIO_GPU=m
+CONFIG_FB=y
 CONFIG_FRAMEBUFFER_CONSOLE=y
 CONFIG_USB=y
 CONFIG_USB_XHCI_HCD=y
@@ -71,6 +71,7 @@ CONFIG_POWER_RESET=y
 CONFIG_DRM=y
 CONFIG_DRM_RADEON=y
 CONFIG_DRM_VIRTIO_GPU=y
+CONFIG_FB=y
 CONFIG_FRAMEBUFFER_CONSOLE=y
 CONFIG_USB=y
 CONFIG_USB_XHCI_HCD=y
@@ -265,6 +265,7 @@ pgd_t early_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
 static pmd_t __maybe_unused early_dtb_pmd[PTRS_PER_PMD] __initdata __aligned(PAGE_SIZE);

 #ifdef CONFIG_XIP_KERNEL
+#define riscv_pfn_base		(*(unsigned long *)XIP_FIXUP(&riscv_pfn_base))
 #define trampoline_pg_dir	((pgd_t *)XIP_FIXUP(trampoline_pg_dir))
 #define fixmap_pte		((pte_t *)XIP_FIXUP(fixmap_pte))
 #define early_pg_dir		((pgd_t *)XIP_FIXUP(early_pg_dir))
@@ -24,6 +24,7 @@ struct vmlinux_info {
 	unsigned long dynsym_start;
 	unsigned long rela_dyn_start;
 	unsigned long rela_dyn_end;
+	unsigned long amode31_size;
 };

 /* Symbols defined by linker scripts */
@@ -15,6 +15,7 @@
 #include "uv.h"

 unsigned long __bootdata_preserved(__kaslr_offset);
+unsigned long __bootdata(__amode31_base);
 unsigned long __bootdata_preserved(VMALLOC_START);
 unsigned long __bootdata_preserved(VMALLOC_END);
 struct page *__bootdata_preserved(vmemmap);
@@ -233,6 +234,12 @@ static void offset_vmlinux_info(unsigned long offset)
 	vmlinux.dynsym_start += offset;
 }

+static unsigned long reserve_amode31(unsigned long safe_addr)
+{
+	__amode31_base = PAGE_ALIGN(safe_addr);
+	return safe_addr + vmlinux.amode31_size;
+}
+
 void startup_kernel(void)
 {
 	unsigned long random_lma;
@@ -247,6 +254,7 @@ void startup_kernel(void)
 	setup_lpp();
 	store_ipl_parmblock();
 	safe_addr = mem_safe_offset();
+	safe_addr = reserve_amode31(safe_addr);
 	safe_addr = read_ipl_report(safe_addr);
 	uv_query_info();
 	rescue_initrd(safe_addr);
@@ -70,5 +70,6 @@ extern struct exception_table_entry _stop_amode31_ex_table[];
 #define __amode31_data __section(".amode31.data")
 #define __amode31_ref __section(".amode31.refs")
 extern long _start_amode31_refs[], _end_amode31_refs[];
+extern unsigned long __amode31_base;

 #endif /* _ENTRY_H */
@@ -95,10 +95,10 @@ EXPORT_SYMBOL(console_irq);
  * relocated above 2 GB, because it has to use 31 bit addresses.
  * Such code and data is part of the .amode31 section.
  */
-unsigned long __amode31_ref __samode31 = __pa(&_samode31);
-unsigned long __amode31_ref __eamode31 = __pa(&_eamode31);
-unsigned long __amode31_ref __stext_amode31 = __pa(&_stext_amode31);
-unsigned long __amode31_ref __etext_amode31 = __pa(&_etext_amode31);
+unsigned long __amode31_ref __samode31 = (unsigned long)&_samode31;
+unsigned long __amode31_ref __eamode31 = (unsigned long)&_eamode31;
+unsigned long __amode31_ref __stext_amode31 = (unsigned long)&_stext_amode31;
+unsigned long __amode31_ref __etext_amode31 = (unsigned long)&_etext_amode31;
 struct exception_table_entry __amode31_ref *__start_amode31_ex_table = _start_amode31_ex_table;
 struct exception_table_entry __amode31_ref *__stop_amode31_ex_table = _stop_amode31_ex_table;

@@ -149,6 +149,7 @@ struct mem_detect_info __bootdata(mem_detect);
 struct initrd_data __bootdata(initrd_data);

 unsigned long __bootdata_preserved(__kaslr_offset);
+unsigned long __bootdata(__amode31_base);
 unsigned int __bootdata_preserved(zlib_dfltcc_support);
 EXPORT_SYMBOL(zlib_dfltcc_support);
 u64 __bootdata_preserved(stfle_fac_list[16]);
@@ -796,12 +797,12 @@ static void __init check_initrd(void)
  */
 static void __init reserve_kernel(void)
 {
-	unsigned long start_pfn = PFN_UP(__pa(_end));
-
 	memblock_reserve(0, STARTUP_NORMAL_OFFSET);
-	memblock_reserve((unsigned long)sclp_early_sccb, EXT_SCCB_READ_SCP);
-	memblock_reserve((unsigned long)_stext, PFN_PHYS(start_pfn)
-			 - (unsigned long)_stext);
+	memblock_reserve(OLDMEM_BASE, sizeof(unsigned long));
+	memblock_reserve(OLDMEM_SIZE, sizeof(unsigned long));
+	memblock_reserve(__amode31_base, __eamode31 - __samode31);
+	memblock_reserve(__pa(sclp_early_sccb), EXT_SCCB_READ_SCP);
+	memblock_reserve(__pa(_stext), _end - _stext);
 }

 static void __init setup_memory(void)
@@ -820,20 +821,14 @@ static void __init setup_memory(void)

 static void __init relocate_amode31_section(void)
 {
-	unsigned long amode31_addr, amode31_size;
-	long amode31_offset;
+	unsigned long amode31_size = __eamode31 - __samode31;
+	long amode31_offset = __amode31_base - __samode31;
 	long *ptr;

-	/* Allocate a new AMODE31 capable memory region */
-	amode31_size = __eamode31 - __samode31;
 	pr_info("Relocating AMODE31 section of size 0x%08lx\n", amode31_size);
-	amode31_addr = (unsigned long)memblock_alloc_low(amode31_size, PAGE_SIZE);
-	if (!amode31_addr)
-		panic("Failed to allocate memory for AMODE31 section\n");
-	amode31_offset = amode31_addr - __samode31;

 	/* Move original AMODE31 section to the new one */
-	memmove((void *)amode31_addr, (void *)__samode31, amode31_size);
+	memmove((void *)__amode31_base, (void *)__samode31, amode31_size);
 	/* Zero out the old AMODE31 section to catch invalid accesses within it */
 	memset((void *)__samode31, 0, amode31_size);
@@ -212,6 +212,7 @@ SECTIONS
 		QUAD(__dynsym_start)		/* dynsym_start */
 		QUAD(__rela_dyn_start)		/* rela_dyn_start */
 		QUAD(__rela_dyn_end)		/* rela_dyn_end */
+		QUAD(_eamode31 - _samode31)	/* amode31_size */
 	} :NONE

 	/* Debugging sections. */
@@ -3913,14 +3913,12 @@ retry:
 	return 0;
 }

-void kvm_s390_set_tod_clock(struct kvm *kvm,
-			    const struct kvm_s390_vm_tod_clock *gtod)
+static void __kvm_s390_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod)
 {
 	struct kvm_vcpu *vcpu;
 	union tod_clock clk;
 	int i;

-	mutex_lock(&kvm->lock);
 	preempt_disable();

 	store_tod_clock_ext(&clk);
@@ -3941,9 +3939,24 @@ void kvm_s390_set_tod_clock(struct kvm *kvm,

 	kvm_s390_vcpu_unblock_all(kvm);
 	preempt_enable();
+}
+
+void kvm_s390_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod)
+{
+	mutex_lock(&kvm->lock);
+	__kvm_s390_set_tod_clock(kvm, gtod);
 	mutex_unlock(&kvm->lock);
 }

+int kvm_s390_try_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod)
+{
+	if (!mutex_trylock(&kvm->lock))
+		return 0;
+	__kvm_s390_set_tod_clock(kvm, gtod);
+	mutex_unlock(&kvm->lock);
+	return 1;
+}
+
 /**
  * kvm_arch_fault_in_page - fault-in guest page if necessary
  * @vcpu: The corresponding virtual cpu
@@ -326,8 +326,8 @@ int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu);
 int kvm_s390_handle_sigp_pei(struct kvm_vcpu *vcpu);

 /* implemented in kvm-s390.c */
-void kvm_s390_set_tod_clock(struct kvm *kvm,
-			    const struct kvm_s390_vm_tod_clock *gtod);
+void kvm_s390_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod);
+int kvm_s390_try_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod);
 long kvm_arch_fault_in_page(struct kvm_vcpu *vcpu, gpa_t gpa, int writable);
 int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, unsigned long addr);
 int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr);
@@ -102,7 +102,20 @@ static int handle_set_clock(struct kvm_vcpu *vcpu)
 		return kvm_s390_inject_prog_cond(vcpu, rc);

 	VCPU_EVENT(vcpu, 3, "SCK: setting guest TOD to 0x%llx", gtod.tod);
-	kvm_s390_set_tod_clock(vcpu->kvm, &gtod);
+	/*
+	 * To set the TOD clock the kvm lock must be taken, but the vcpu lock
+	 * is already held in handle_set_clock. The usual lock order is the
+	 * opposite. As SCK is deprecated and should not be used in several
+	 * cases, for example when the multiple epoch facility or TOD clock
+	 * steering facility is installed (see Principles of Operation), a
+	 * slow path can be used. If the lock can not be taken via try_lock,
+	 * the instruction will be retried via -EAGAIN at a later point in
+	 * time.
+	 */
+	if (!kvm_s390_try_set_tod_clock(vcpu->kvm, &gtod)) {
+		kvm_s390_retry_instr(vcpu);
+		return -EAGAIN;
+	}
+
 	kvm_s390_set_psw_cc(vcpu, 0);
 	return 0;
@@ -1297,10 +1297,12 @@ static void kill_me_maybe(struct callback_head *cb)

 	/*
 	 * -EHWPOISON from memory_failure() means that it already sent SIGBUS
-	 * to the current process with the proper error info, so no need to
-	 * send SIGBUS here again.
+	 * to the current process with the proper error info,
+	 * -EOPNOTSUPP means hwpoison_filter() filtered the error event,
+	 *
+	 * In both cases, no further processing is required.
 	 */
-	if (ret == -EHWPOISON)
+	if (ret == -EHWPOISON || ret == -EOPNOTSUPP)
 		return;

 	if (p->mce_vaddr != (void __user *)-1l) {
@@ -36,7 +36,7 @@ int kvm_page_track_create_memslot(struct kvm_memory_slot *slot,

 	for (i = 0; i < KVM_PAGE_TRACK_MAX; i++) {
 		slot->arch.gfn_track[i] =
-			kvcalloc(npages, sizeof(*slot->arch.gfn_track[i]),
+			__vcalloc(npages, sizeof(*slot->arch.gfn_track[i]),
 				  GFP_KERNEL_ACCOUNT);
 		if (!slot->arch.gfn_track[i])
 			goto track_free;
@@ -1098,13 +1098,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
 				 bool flush)
 {
-	struct kvm_mmu_page *root;
-
-	for_each_tdp_mmu_root(kvm, root, range->slot->as_id)
-		flush = zap_gfn_range(kvm, root, range->start, range->end,
-				      range->may_block, flush, false);
-
-	return flush;
+	return __kvm_tdp_mmu_zap_gfn_range(kvm, range->slot->as_id, range->start,
+					   range->end, range->may_block, flush);
 }

 typedef bool (*tdp_handler_t)(struct kvm *kvm, struct tdp_iter *iter,
@@ -11552,7 +11552,7 @@ static int memslot_rmap_alloc(struct kvm_memory_slot *slot,
 		if (slot->arch.rmap[i])
 			continue;

-		slot->arch.rmap[i] = kvcalloc(lpages, sz, GFP_KERNEL_ACCOUNT);
+		slot->arch.rmap[i] = __vcalloc(lpages, sz, GFP_KERNEL_ACCOUNT);
 		if (!slot->arch.rmap[i]) {
 			memslot_rmap_free(slot);
 			return -ENOMEM;
@@ -11633,7 +11633,7 @@ static int kvm_alloc_memslot_metadata(struct kvm *kvm,

 		lpages = __kvm_mmu_slot_lpages(slot, npages, level);

-		linfo = kvcalloc(lpages, sizeof(*linfo), GFP_KERNEL_ACCOUNT);
+		linfo = __vcalloc(lpages, sizeof(*linfo), GFP_KERNEL_ACCOUNT);
 		if (!linfo)
 			goto out_free;
block/bio.c
@@ -913,7 +913,7 @@ EXPORT_SYMBOL(bio_add_pc_page);
 int bio_add_zone_append_page(struct bio *bio, struct page *page,
			     unsigned int len, unsigned int offset)
 {
-	struct request_queue *q = bio->bi_bdev->bd_disk->queue;
+	struct request_queue *q = bdev_get_queue(bio->bi_bdev);
 	bool same_page = false;

 	if (WARN_ON_ONCE(bio_op(bio) != REQ_OP_ZONE_APPEND))
@@ -1057,7 +1057,7 @@ static int bio_iov_bvec_set(struct bio *bio, struct iov_iter *iter)

 static int bio_iov_bvec_set_append(struct bio *bio, struct iov_iter *iter)
 {
-	struct request_queue *q = bio->bi_bdev->bd_disk->queue;
+	struct request_queue *q = bdev_get_queue(bio->bi_bdev);
 	struct iov_iter i = *iter;

 	iov_iter_truncate(&i, queue_max_zone_append_sectors(q) << 9);
@@ -1135,7 +1135,7 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
 {
 	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
 	unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
-	struct request_queue *q = bio->bi_bdev->bd_disk->queue;
+	struct request_queue *q = bdev_get_queue(bio->bi_bdev);
 	unsigned int max_append_sectors = queue_max_zone_append_sectors(q);
 	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
 	struct page **pages = (struct page **)bv;
@@ -1473,11 +1473,10 @@ again:
 	if (!bio_integrity_endio(bio))
 		return;

-	if (bio->bi_bdev && bio_flagged(bio, BIO_TRACKED))
-		rq_qos_done_bio(bio->bi_bdev->bd_disk->queue, bio);
+	rq_qos_done_bio(bio);

 	if (bio->bi_bdev && bio_flagged(bio, BIO_TRACE_COMPLETION)) {
-		trace_block_bio_complete(bio->bi_bdev->bd_disk->queue, bio);
+		trace_block_bio_complete(bdev_get_queue(bio->bi_bdev), bio);
 		bio_clear_flag(bio, BIO_TRACE_COMPLETION);
 	}

@@ -601,7 +601,7 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio)
 	int inflight = 0;

 	blkg = bio->bi_blkg;
-	if (!blkg || !bio_flagged(bio, BIO_TRACKED))
+	if (!blkg || !bio_flagged(bio, BIO_QOS_THROTTLED))
 		return;

 	iolat = blkg_to_lat(bio->bi_blkg);
@@ -177,22 +177,23 @@ static inline void rq_qos_requeue(struct request_queue *q, struct request *rq)
 		__rq_qos_requeue(q->rq_qos, rq);
 }

-static inline void rq_qos_done_bio(struct request_queue *q, struct bio *bio)
+static inline void rq_qos_done_bio(struct bio *bio)
 {
+	if (bio->bi_bdev && (bio_flagged(bio, BIO_QOS_THROTTLED) ||
+			     bio_flagged(bio, BIO_QOS_MERGED))) {
+		struct request_queue *q = bdev_get_queue(bio->bi_bdev);
 		if (q->rq_qos)
 			__rq_qos_done_bio(q->rq_qos, bio);
+	}
 }

 static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio)
 {
-	/*
-	 * BIO_TRACKED lets controllers know that a bio went through the
-	 * normal rq_qos path.
-	 */
-	bio_set_flag(bio, BIO_TRACKED);
-	if (q->rq_qos)
+	if (q->rq_qos) {
+		bio_set_flag(bio, BIO_QOS_THROTTLED);
 		__rq_qos_throttle(q->rq_qos, bio);
+	}
 }

 static inline void rq_qos_track(struct request_queue *q, struct request *rq,
				 struct bio *bio)
@@ -204,9 +205,11 @@ static inline void rq_qos_track(struct request_queue *q, struct request *rq,
 static inline void rq_qos_merge(struct request_queue *q, struct request *rq,
				 struct bio *bio)
 {
-	if (q->rq_qos)
+	if (q->rq_qos) {
+		bio_set_flag(bio, BIO_QOS_MERGED);
 		__rq_qos_merge(q->rq_qos, rq, bio);
+	}
 }

 static inline void rq_qos_queue_depth_changed(struct request_queue *q)
 {
@@ -485,7 +485,8 @@ static void device_link_release_fn(struct work_struct *work)
 	/* Ensure that all references to the link object have been dropped. */
 	device_link_synchronize_removal();

-	pm_runtime_release_supplier(link, true);
+	pm_runtime_release_supplier(link);
+	pm_request_idle(link->supplier);

 	put_device(link->consumer);
 	put_device(link->supplier);
@@ -555,6 +555,8 @@ static ssize_t hard_offline_page_store(struct device *dev,
 		return -EINVAL;
 	pfn >>= PAGE_SHIFT;
 	ret = memory_failure(pfn, 0);
+	if (ret == -EOPNOTSUPP)
+		ret = 0;
 	return ret ? ret : count;
 }

@@ -308,13 +308,10 @@ static int rpm_get_suppliers(struct device *dev)
 /**
  * pm_runtime_release_supplier - Drop references to device link's supplier.
  * @link: Target device link.
- * @check_idle: Whether or not to check if the supplier device is idle.
  *
- * Drop all runtime PM references associated with @link to its supplier device
- * and if @check_idle is set, check if that device is idle (and so it can be
- * suspended).
+ * Drop all runtime PM references associated with @link to its supplier device.
  */
-void pm_runtime_release_supplier(struct device_link *link, bool check_idle)
+void pm_runtime_release_supplier(struct device_link *link)
 {
 	struct device *supplier = link->supplier;

@@ -327,9 +324,6 @@ void pm_runtime_release_supplier(struct device_link *link, bool check_idle)
 	while (refcount_dec_not_one(&link->rpm_active) &&
	       atomic_read(&supplier->power.usage_count) > 0)
		pm_runtime_put_noidle(supplier);
-
-	if (check_idle)
-		pm_request_idle(supplier);
 }

 static void __rpm_put_suppliers(struct device *dev, bool try_to_suspend)
@@ -337,8 +331,11 @@ static void __rpm_put_suppliers(struct device *dev, bool try_to_suspend)
 	struct device_link *link;

 	list_for_each_entry_rcu(link, &dev->links.suppliers, c_node,
-				device_links_read_lock_held())
-		pm_runtime_release_supplier(link, try_to_suspend);
+				device_links_read_lock_held()) {
+		pm_runtime_release_supplier(link);
+		if (try_to_suspend)
+			pm_request_idle(link->supplier);
+	}
 }

 static void rpm_put_suppliers(struct device *dev)
@@ -1791,7 +1788,8 @@ void pm_runtime_drop_link(struct device_link *link)
 		return;

 	pm_runtime_drop_link_count(link->consumer);
-	pm_runtime_release_supplier(link, true);
+	pm_runtime_release_supplier(link);
+	pm_request_idle(link->supplier);
 }

 static bool pm_runtime_need_not_resume(struct device *dev)
@@ -410,6 +410,7 @@ config XEN_BLKDEV_BACKEND
 config VIRTIO_BLK
	tristate "Virtio block driver"
	depends on VIRTIO
+	select SG_POOL
	help
	  This is the virtual block driver for virtio. It can be used with
	  QEMU based VMMs (like KVM or Xen). Say Y or M.
@@ -2795,10 +2795,12 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig

 	if (init_submitter(device)) {
 		err = ERR_NOMEM;
-		goto out_idr_remove_vol;
+		goto out_idr_remove_from_resource;
 	}

-	add_disk(disk);
+	err = add_disk(disk);
+	if (err)
+		goto out_idr_remove_from_resource;

 	/* inherit the connection state */
 	device->state.conn = first_connection(resource)->cstate;
@@ -2812,8 +2814,6 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
 	drbd_debugfs_device_add(device);
 	return NO_ERROR;

-out_idr_remove_vol:
-	idr_remove(&connection->peer_devices, vnr);
 out_idr_remove_from_resource:
 	for_each_connection(connection, resource) {
 		peer_device = idr_remove(&connection->peer_devices, vnr);
@@ -24,6 +24,12 @@
 /* The maximum number of sg elements that fit into a virtqueue */
 #define VIRTIO_BLK_MAX_SG_ELEMS 32768

+#ifdef CONFIG_ARCH_NO_SG_CHAIN
+#define VIRTIO_BLK_INLINE_SG_CNT	0
+#else
+#define VIRTIO_BLK_INLINE_SG_CNT	2
+#endif
+
 static int major;
 static DEFINE_IDA(vd_index_ida);

@@ -77,6 +83,7 @@ struct virtio_blk {
 struct virtblk_req {
	struct virtio_blk_outhdr out_hdr;
	u8 status;
+	struct sg_table sg_table;
	struct scatterlist sg[];
 };

@@ -162,12 +169,92 @@ static int virtblk_setup_discard_write_zeroes(struct request *req, bool unmap)
|
|||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
+static void virtblk_unmap_data(struct request *req, struct virtblk_req *vbr)
+{
+	if (blk_rq_nr_phys_segments(req))
+		sg_free_table_chained(&vbr->sg_table,
+				      VIRTIO_BLK_INLINE_SG_CNT);
+}
+
+static int virtblk_map_data(struct blk_mq_hw_ctx *hctx, struct request *req,
+		struct virtblk_req *vbr)
+{
+	int err;
+
+	if (!blk_rq_nr_phys_segments(req))
+		return 0;
+
+	vbr->sg_table.sgl = vbr->sg;
+	err = sg_alloc_table_chained(&vbr->sg_table,
+				     blk_rq_nr_phys_segments(req),
+				     vbr->sg_table.sgl,
+				     VIRTIO_BLK_INLINE_SG_CNT);
+	if (unlikely(err))
+		return -ENOMEM;
+
+	return blk_rq_map_sg(hctx->queue, req, vbr->sg_table.sgl);
+}
+
+static void virtblk_cleanup_cmd(struct request *req)
+{
+	if (req->rq_flags & RQF_SPECIAL_PAYLOAD)
+		kfree(bvec_virt(&req->special_vec));
+}
+
+static int virtblk_setup_cmd(struct virtio_device *vdev, struct request *req,
+		struct virtblk_req *vbr)
+{
+	bool unmap = false;
+	u32 type;
+
+	vbr->out_hdr.sector = 0;
+
+	switch (req_op(req)) {
+	case REQ_OP_READ:
+		type = VIRTIO_BLK_T_IN;
+		vbr->out_hdr.sector = cpu_to_virtio64(vdev,
+						      blk_rq_pos(req));
+		break;
+	case REQ_OP_WRITE:
+		type = VIRTIO_BLK_T_OUT;
+		vbr->out_hdr.sector = cpu_to_virtio64(vdev,
+						      blk_rq_pos(req));
+		break;
+	case REQ_OP_FLUSH:
+		type = VIRTIO_BLK_T_FLUSH;
+		break;
+	case REQ_OP_DISCARD:
+		type = VIRTIO_BLK_T_DISCARD;
+		break;
+	case REQ_OP_WRITE_ZEROES:
+		type = VIRTIO_BLK_T_WRITE_ZEROES;
+		unmap = !(req->cmd_flags & REQ_NOUNMAP);
+		break;
+	case REQ_OP_DRV_IN:
+		type = VIRTIO_BLK_T_GET_ID;
+		break;
+	default:
+		WARN_ON_ONCE(1);
+		return BLK_STS_IOERR;
+	}
+
+	vbr->out_hdr.type = cpu_to_virtio32(vdev, type);
+	vbr->out_hdr.ioprio = cpu_to_virtio32(vdev, req_get_ioprio(req));
+
+	if (type == VIRTIO_BLK_T_DISCARD || type == VIRTIO_BLK_T_WRITE_ZEROES) {
+		if (virtblk_setup_discard_write_zeroes(req, unmap))
+			return BLK_STS_RESOURCE;
+	}
+
+	return 0;
+}
+
 static inline void virtblk_request_done(struct request *req)
 {
 	struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
 
-	if (req->rq_flags & RQF_SPECIAL_PAYLOAD)
-		kfree(bvec_virt(&req->special_vec));
+	virtblk_unmap_data(req, vbr);
+	virtblk_cleanup_cmd(req);
 	blk_mq_end_request(req, virtblk_result(vbr));
 }
 
@@ -221,63 +308,25 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
 	struct request *req = bd->rq;
 	struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
 	unsigned long flags;
-	unsigned int num;
+	int num;
 	int qid = hctx->queue_num;
 	int err;
 	bool notify = false;
-	bool unmap = false;
-	u32 type;
-
-	switch (req_op(req)) {
-	case REQ_OP_READ:
-	case REQ_OP_WRITE:
-		type = 0;
-		break;
-	case REQ_OP_FLUSH:
-		type = VIRTIO_BLK_T_FLUSH;
-		break;
-	case REQ_OP_DISCARD:
-		type = VIRTIO_BLK_T_DISCARD;
-		break;
-	case REQ_OP_WRITE_ZEROES:
-		type = VIRTIO_BLK_T_WRITE_ZEROES;
-		unmap = !(req->cmd_flags & REQ_NOUNMAP);
-		break;
-	case REQ_OP_DRV_IN:
-		type = VIRTIO_BLK_T_GET_ID;
-		break;
-	default:
-		WARN_ON_ONCE(1);
-		return BLK_STS_IOERR;
-	}
-
-	BUG_ON(type != VIRTIO_BLK_T_DISCARD &&
-	       type != VIRTIO_BLK_T_WRITE_ZEROES &&
-	       (req->nr_phys_segments + 2 > vblk->sg_elems));
-
-	vbr->out_hdr.type = cpu_to_virtio32(vblk->vdev, type);
-	vbr->out_hdr.sector = type ?
-		0 : cpu_to_virtio64(vblk->vdev, blk_rq_pos(req));
-	vbr->out_hdr.ioprio = cpu_to_virtio32(vblk->vdev, req_get_ioprio(req));
+	err = virtblk_setup_cmd(vblk->vdev, req, vbr);
+	if (unlikely(err))
+		return err;
 
 	blk_mq_start_request(req);
 
-	if (type == VIRTIO_BLK_T_DISCARD || type == VIRTIO_BLK_T_WRITE_ZEROES) {
-		err = virtblk_setup_discard_write_zeroes(req, unmap);
-		if (err)
-			return BLK_STS_RESOURCE;
-	}
-
-	num = blk_rq_map_sg(hctx->queue, req, vbr->sg);
-	if (num) {
-		if (rq_data_dir(req) == WRITE)
-			vbr->out_hdr.type |= cpu_to_virtio32(vblk->vdev, VIRTIO_BLK_T_OUT);
-		else
-			vbr->out_hdr.type |= cpu_to_virtio32(vblk->vdev, VIRTIO_BLK_T_IN);
-	}
+	num = virtblk_map_data(hctx, req, vbr);
+	if (unlikely(num < 0)) {
+		virtblk_cleanup_cmd(req);
+		return BLK_STS_RESOURCE;
+	}
 
 	spin_lock_irqsave(&vblk->vqs[qid].lock, flags);
-	err = virtblk_add_req(vblk->vqs[qid].vq, vbr, vbr->sg, num);
+	err = virtblk_add_req(vblk->vqs[qid].vq, vbr, vbr->sg_table.sgl, num);
 	if (err) {
 		virtqueue_kick(vblk->vqs[qid].vq);
 		/* Don't stop the queue if -ENOMEM: we may have failed to
@@ -286,6 +335,8 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
 		if (err == -ENOSPC)
 			blk_mq_stop_hw_queue(hctx);
 		spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags);
+		virtblk_unmap_data(req, vbr);
+		virtblk_cleanup_cmd(req);
 		switch (err) {
 		case -ENOSPC:
 			return BLK_STS_DEV_RESOURCE;
@@ -666,16 +717,6 @@ static const struct attribute_group *virtblk_attr_groups[] = {
 	NULL,
 };
 
-static int virtblk_init_request(struct blk_mq_tag_set *set, struct request *rq,
-		unsigned int hctx_idx, unsigned int numa_node)
-{
-	struct virtio_blk *vblk = set->driver_data;
-	struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
-
-	sg_init_table(vbr->sg, vblk->sg_elems);
-	return 0;
-}
-
 static int virtblk_map_queues(struct blk_mq_tag_set *set)
 {
 	struct virtio_blk *vblk = set->driver_data;
@@ -688,7 +729,6 @@ static const struct blk_mq_ops virtio_mq_ops = {
 	.queue_rq	= virtio_queue_rq,
 	.commit_rqs	= virtio_commit_rqs,
 	.complete	= virtblk_request_done,
-	.init_request	= virtblk_init_request,
 	.map_queues	= virtblk_map_queues,
 };
 
@@ -768,7 +808,7 @@ static int virtblk_probe(struct virtio_device *vdev)
 	vblk->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
 	vblk->tag_set.cmd_size =
 		sizeof(struct virtblk_req) +
-		sizeof(struct scatterlist) * sg_elems;
+		sizeof(struct scatterlist) * VIRTIO_BLK_INLINE_SG_CNT;
 	vblk->tag_set.driver_data = vblk;
 	vblk->tag_set.nr_hw_queues = vblk->num_vqs;
 
@@ -331,6 +331,7 @@ static int btmtksdio_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
 {
 	struct btmtksdio_dev *bdev = hci_get_drvdata(hdev);
 	struct hci_event_hdr *hdr = (void *)skb->data;
+	u8 evt = hdr->evt;
 	int err;
 
 	/* Fix up the vendor event id with 0xff for vendor specific instead
@@ -355,7 +356,7 @@ static int btmtksdio_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
 	if (err < 0)
 		goto err_free_skb;
 
-	if (hdr->evt == HCI_EV_VENDOR) {
+	if (evt == HCI_EV_VENDOR) {
 		if (test_and_clear_bit(BTMTKSDIO_TX_WAIT_VND_EVT,
 				       &bdev->tx_state)) {
 			/* Barrier to sync with other CPUs */
@@ -77,11 +77,14 @@ static const char * const mhi_pm_state_str[] = {
 	[MHI_PM_STATE_LD_ERR_FATAL_DETECT] = "Linkdown or Error Fatal Detect",
 };
 
-const char *to_mhi_pm_state_str(enum mhi_pm_state state)
+const char *to_mhi_pm_state_str(u32 state)
 {
-	int index = find_last_bit((unsigned long *)&state, 32);
+	int index;
 
-	if (index >= ARRAY_SIZE(mhi_pm_state_str))
+	if (state)
+		index = __fls(state);
+
+	if (!state || index >= ARRAY_SIZE(mhi_pm_state_str))
 		return "Invalid State";
 
 	return mhi_pm_state_str[index];
@@ -622,7 +622,7 @@ void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
 enum mhi_pm_state __must_check mhi_tryset_pm_state(
 					struct mhi_controller *mhi_cntrl,
 					enum mhi_pm_state state);
-const char *to_mhi_pm_state_str(enum mhi_pm_state state);
+const char *to_mhi_pm_state_str(u32 state);
 int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
 			       enum dev_st_transition state);
 void mhi_pm_st_worker(struct work_struct *work);
@@ -61,8 +61,8 @@ static const struct cpg_core_clk r9a07g044_core_clks[] __initconst = {
 	DEF_FIXED(".osc", R9A07G044_OSCCLK, CLK_EXTAL, 1, 1),
 	DEF_FIXED(".osc_div1000", CLK_OSC_DIV1000, CLK_EXTAL, 1, 1000),
 	DEF_SAMPLL(".pll1", CLK_PLL1, CLK_EXTAL, PLL146_CONF(0)),
-	DEF_FIXED(".pll2", CLK_PLL2, CLK_EXTAL, 133, 2),
-	DEF_FIXED(".pll3", CLK_PLL3, CLK_EXTAL, 133, 2),
+	DEF_FIXED(".pll2", CLK_PLL2, CLK_EXTAL, 200, 3),
+	DEF_FIXED(".pll3", CLK_PLL3, CLK_EXTAL, 200, 3),
 
 	DEF_FIXED(".pll2_div2", CLK_PLL2_DIV2, CLK_PLL2, 1, 2),
 	DEF_FIXED(".pll2_div16", CLK_PLL2_DIV16, CLK_PLL2, 1, 16),
@@ -182,6 +182,7 @@ static void cxl_decoder_release(struct device *dev)
 
 	ida_free(&port->decoder_ida, cxld->id);
 	kfree(cxld);
+	put_device(&port->dev);
 }
 
 static const struct device_type cxl_decoder_switch_type = {
@@ -481,6 +482,9 @@ cxl_decoder_alloc(struct cxl_port *port, int nr_targets, resource_size_t base,
 	if (rc < 0)
 		goto err;
 
+	/* need parent to stick around to release the id */
+	get_device(&port->dev);
+
 	*cxld = (struct cxl_decoder) {
 		.id = rc,
 		.range = {
@@ -91,12 +91,9 @@ static void dma_buf_release(struct dentry *dentry)
 	BUG_ON(dmabuf->vmapping_counter);
 
 	/*
-	 * Any fences that a dma-buf poll can wait on should be signaled
-	 * before releasing dma-buf. This is the responsibility of each
-	 * driver that uses the reservation objects.
-	 *
-	 * If you hit this BUG() it means someone dropped their ref to the
-	 * dma-buf while still having pending operation to the buffer.
+	 * If you hit this BUG() it could mean:
+	 * * There's a file reference imbalance in dma_buf_poll / dma_buf_poll_cb or somewhere else
+	 * * dmabuf->cb_in/out.active are non-0 despite no pending fence callback
 	 */
 	BUG_ON(dmabuf->cb_in.active || dmabuf->cb_out.active);
 
@@ -225,6 +222,7 @@ static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
 static void dma_buf_poll_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
 {
 	struct dma_buf_poll_cb_t *dcb = (struct dma_buf_poll_cb_t *)cb;
+	struct dma_buf *dmabuf = container_of(dcb->poll, struct dma_buf, poll);
 	unsigned long flags;
 
 	spin_lock_irqsave(&dcb->poll->lock, flags);
@@ -232,6 +230,8 @@ static void dma_buf_poll_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
 	dcb->active = 0;
 	spin_unlock_irqrestore(&dcb->poll->lock, flags);
 	dma_fence_put(fence);
+	/* Paired with get_file in dma_buf_poll */
+	fput(dmabuf->file);
 }
 
 static bool dma_buf_poll_shared(struct dma_resv *resv,
@@ -307,8 +307,12 @@ static __poll_t dma_buf_poll(struct file *file, poll_table *poll)
 		spin_unlock_irq(&dmabuf->poll.lock);
 
 		if (events & EPOLLOUT) {
+			/* Paired with fput in dma_buf_poll_cb */
+			get_file(dmabuf->file);
+
 			if (!dma_buf_poll_shared(resv, dcb) &&
 			    !dma_buf_poll_excl(resv, dcb))
 				/* No callback queued, wake up any other waiters */
 				dma_buf_poll_cb(NULL, &dcb->cb);
 			else
@@ -328,6 +332,9 @@ static __poll_t dma_buf_poll(struct file *file, poll_table *poll)
 		spin_unlock_irq(&dmabuf->poll.lock);
 
 		if (events & EPOLLIN) {
+			/* Paired with fput in dma_buf_poll_cb */
+			get_file(dmabuf->file);
+
 			if (!dma_buf_poll_excl(resv, dcb))
 				/* No callback queued, wake up any other waiters */
 				dma_buf_poll_cb(NULL, &dcb->cb);
@@ -1898,6 +1898,11 @@ static int at_xdmac_alloc_chan_resources(struct dma_chan *chan)
 	for (i = 0; i < init_nr_desc_per_channel; i++) {
 		desc = at_xdmac_alloc_desc(chan, GFP_KERNEL);
 		if (!desc) {
+			if (i == 0) {
+				dev_warn(chan2dev(chan),
+					 "can't allocate any descriptors\n");
+				return -EIO;
+			}
 			dev_warn(chan2dev(chan),
 				 "only %d descriptors have been allocated\n", i);
 			break;
@@ -720,10 +720,7 @@ static void idxd_device_wqs_clear_state(struct idxd_device *idxd)
 	for (i = 0; i < idxd->max_wqs; i++) {
 		struct idxd_wq *wq = idxd->wqs[i];
 
-		if (wq->state == IDXD_WQ_ENABLED) {
-			idxd_wq_disable_cleanup(wq);
-			wq->state = IDXD_WQ_DISABLED;
-		}
+		idxd_wq_disable_cleanup(wq);
 		idxd_wq_device_reset_cleanup(wq);
 	}
 }
@@ -2264,7 +2264,7 @@ MODULE_DESCRIPTION("i.MX SDMA driver");
 #if IS_ENABLED(CONFIG_SOC_IMX6Q)
 MODULE_FIRMWARE("imx/sdma/sdma-imx6q.bin");
 #endif
-#if IS_ENABLED(CONFIG_SOC_IMX7D)
+#if IS_ENABLED(CONFIG_SOC_IMX7D) || IS_ENABLED(CONFIG_SOC_IMX8M)
 MODULE_FIRMWARE("imx/sdma/sdma-imx7d.bin");
 #endif
 MODULE_LICENSE("GPL");
@@ -1593,11 +1593,12 @@ static int intel_ldma_probe(struct platform_device *pdev)
 	d->core_clk = devm_clk_get_optional(dev, NULL);
 	if (IS_ERR(d->core_clk))
 		return PTR_ERR(d->core_clk);
-	clk_prepare_enable(d->core_clk);
 
 	d->rst = devm_reset_control_get_optional(dev, NULL);
 	if (IS_ERR(d->rst))
 		return PTR_ERR(d->rst);
 
+	clk_prepare_enable(d->core_clk);
 	reset_control_deassert(d->rst);
 
 	ret = devm_add_action_or_reset(dev, ldma_clk_disable, d);
@@ -2589,7 +2589,7 @@ static struct dma_pl330_desc *pl330_get_desc(struct dma_pl330_chan *pch)
 
 	/* If the DMAC pool is empty, alloc new */
 	if (!desc) {
-		DEFINE_SPINLOCK(lock);
+		static DEFINE_SPINLOCK(lock);
 		LIST_HEAD(pool);
 
 		if (!add_desc(&pool, &lock, GFP_ATOMIC, 1))
@@ -515,14 +515,6 @@ static int bam_alloc_chan(struct dma_chan *chan)
 	return 0;
 }
 
-static int bam_pm_runtime_get_sync(struct device *dev)
-{
-	if (pm_runtime_enabled(dev))
-		return pm_runtime_get_sync(dev);
-
-	return 0;
-}
-
 /**
  * bam_free_chan - Frees dma resources associated with specific channel
  * @chan: specified channel
@@ -538,7 +530,7 @@ static void bam_free_chan(struct dma_chan *chan)
 	unsigned long flags;
 	int ret;
 
-	ret = bam_pm_runtime_get_sync(bdev->dev);
+	ret = pm_runtime_get_sync(bdev->dev);
 	if (ret < 0)
 		return;
 
@@ -734,7 +726,7 @@ static int bam_pause(struct dma_chan *chan)
 	unsigned long flag;
 	int ret;
 
-	ret = bam_pm_runtime_get_sync(bdev->dev);
+	ret = pm_runtime_get_sync(bdev->dev);
 	if (ret < 0)
 		return ret;
 
@@ -760,7 +752,7 @@ static int bam_resume(struct dma_chan *chan)
 	unsigned long flag;
 	int ret;
 
-	ret = bam_pm_runtime_get_sync(bdev->dev);
+	ret = pm_runtime_get_sync(bdev->dev);
 	if (ret < 0)
 		return ret;
 
@@ -869,7 +861,7 @@ static irqreturn_t bam_dma_irq(int irq, void *data)
 	if (srcs & P_IRQ)
 		tasklet_schedule(&bdev->task);
 
-	ret = bam_pm_runtime_get_sync(bdev->dev);
+	ret = pm_runtime_get_sync(bdev->dev);
 	if (ret < 0)
 		return IRQ_NONE;
 
@@ -987,7 +979,7 @@ static void bam_start_dma(struct bam_chan *bchan)
 	if (!vd)
 		return;
 
-	ret = bam_pm_runtime_get_sync(bdev->dev);
+	ret = pm_runtime_get_sync(bdev->dev);
 	if (ret < 0)
 		return;
 
@@ -1350,11 +1342,6 @@ static int bam_dma_probe(struct platform_device *pdev)
 	if (ret)
 		goto err_unregister_dma;
 
-	if (!bdev->bamclk) {
-		pm_runtime_disable(&pdev->dev);
-		return 0;
-	}
-
 	pm_runtime_irq_safe(&pdev->dev);
 	pm_runtime_set_autosuspend_delay(&pdev->dev, BAM_DMA_AUTOSUSPEND_DELAY);
 	pm_runtime_use_autosuspend(&pdev->dev);
@@ -1438,10 +1425,8 @@ static int __maybe_unused bam_dma_suspend(struct device *dev)
 {
 	struct bam_device *bdev = dev_get_drvdata(dev);
 
-	if (bdev->bamclk) {
-		pm_runtime_force_suspend(dev);
-		clk_unprepare(bdev->bamclk);
-	}
+	pm_runtime_force_suspend(dev);
+	clk_unprepare(bdev->bamclk);
 
 	return 0;
 }
@@ -1451,13 +1436,11 @@ static int __maybe_unused bam_dma_resume(struct device *dev)
 	struct bam_device *bdev = dev_get_drvdata(dev);
 	int ret;
 
-	if (bdev->bamclk) {
-		ret = clk_prepare(bdev->bamclk);
-		if (ret)
-			return ret;
+	ret = clk_prepare(bdev->bamclk);
+	if (ret)
+		return ret;
 
-		pm_runtime_force_resume(dev);
-	}
+	pm_runtime_force_resume(dev);
 
 	return 0;
 }
@@ -245,6 +245,7 @@ static void *ti_dra7_xbar_route_allocate(struct of_phandle_args *dma_spec,
 	if (dma_spec->args[0] >= xbar->xbar_requests) {
 		dev_err(&pdev->dev, "Invalid XBAR request number: %d\n",
 			dma_spec->args[0]);
+		put_device(&pdev->dev);
 		return ERR_PTR(-EINVAL);
 	}
 
@@ -252,12 +253,14 @@ static void *ti_dra7_xbar_route_allocate(struct of_phandle_args *dma_spec,
 	dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", 0);
 	if (!dma_spec->np) {
 		dev_err(&pdev->dev, "Can't get DMA master\n");
+		put_device(&pdev->dev);
 		return ERR_PTR(-EINVAL);
 	}
 
 	map = kzalloc(sizeof(*map), GFP_KERNEL);
 	if (!map) {
 		of_node_put(dma_spec->np);
+		put_device(&pdev->dev);
 		return ERR_PTR(-ENOMEM);
 	}
 
@@ -268,6 +271,8 @@ static void *ti_dra7_xbar_route_allocate(struct of_phandle_args *dma_spec,
 		mutex_unlock(&xbar->mutex);
 		dev_err(&pdev->dev, "Run out of free DMA requests\n");
 		kfree(map);
+		of_node_put(dma_spec->np);
+		put_device(&pdev->dev);
 		return ERR_PTR(-ENOMEM);
 	}
 	set_bit(map->xbar_out, xbar->dma_inuse);
@@ -1285,6 +1285,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
 void amdgpu_device_pci_config_reset(struct amdgpu_device *adev);
 int amdgpu_device_pci_reset(struct amdgpu_device *adev);
 bool amdgpu_device_need_post(struct amdgpu_device *adev);
+bool amdgpu_device_should_use_aspm(struct amdgpu_device *adev);
 
 void amdgpu_cs_report_moved_bytes(struct amdgpu_device *adev, u64 num_bytes,
 				  u64 num_vis_bytes);
@@ -1309,6 +1309,31 @@ bool amdgpu_device_need_post(struct amdgpu_device *adev)
 	return true;
 }
 
+/**
+ * amdgpu_device_should_use_aspm - check if the device should program ASPM
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Confirm whether the module parameter and pcie bridge agree that ASPM should
+ * be set for this device.
+ *
+ * Returns true if it should be used or false if not.
+ */
+bool amdgpu_device_should_use_aspm(struct amdgpu_device *adev)
+{
+	switch (amdgpu_aspm) {
+	case -1:
+		break;
+	case 0:
+		return false;
+	case 1:
+		return true;
+	default:
+		return false;
+	}
+	return pcie_aspm_enabled(adev->pdev);
+}
+
 /* if we get transitioned to only one device, take VGA back */
 /**
  * amdgpu_device_vga_set_decode - enable/disable vga decode
@@ -1719,7 +1719,7 @@ static void cik_program_aspm(struct amdgpu_device *adev)
 	bool disable_l0s = false, disable_l1 = false, disable_plloff_in_l1 = false;
 	bool disable_clkreq = false;
 
-	if (amdgpu_aspm == 0)
+	if (!amdgpu_device_should_use_aspm(adev))
 		return;
 
 	if (pci_is_root_bus(adev->pdev->bus))
@@ -584,7 +584,7 @@ static void nv_pcie_gen3_enable(struct amdgpu_device *adev)
 
 static void nv_program_aspm(struct amdgpu_device *adev)
 {
-	if (!amdgpu_aspm)
+	if (!amdgpu_device_should_use_aspm(adev))
 		return;
 
 	if (!(adev->flags & AMD_IS_APU) &&
@@ -2453,7 +2453,7 @@ static void si_program_aspm(struct amdgpu_device *adev)
 	bool disable_l0s = false, disable_l1 = false, disable_plloff_in_l1 = false;
 	bool disable_clkreq = false;
 
-	if (amdgpu_aspm == 0)
+	if (!amdgpu_device_should_use_aspm(adev))
 		return;
 
 	if (adev->flags & AMD_IS_APU)
@@ -689,7 +689,7 @@ static void soc15_pcie_gen3_enable(struct amdgpu_device *adev)
 
 static void soc15_program_aspm(struct amdgpu_device *adev)
 {
-	if (!amdgpu_aspm)
+	if (!amdgpu_device_should_use_aspm(adev))
 		return;
 
 	if (!(adev->flags & AMD_IS_APU) &&
@@ -1511,7 +1511,7 @@ static int vcn_v3_0_stop_dpg_mode(struct amdgpu_device *adev, int inst_idx)
 	struct dpg_pause_state state = {.fw_based = VCN_DPG_STATE__UNPAUSE};
 	uint32_t tmp;
 
-	vcn_v3_0_pause_dpg_mode(adev, 0, &state);
+	vcn_v3_0_pause_dpg_mode(adev, inst_idx, &state);
 
 	/* Wait for power status to be 1 */
 	SOC15_WAIT_ON_RREG(VCN, inst_idx, mmUVD_POWER_STATUS, 1,
@@ -81,6 +81,10 @@
 #include "mxgpu_vi.h"
 #include "amdgpu_dm.h"
 
+#if IS_ENABLED(CONFIG_X86)
+#include <asm/intel-family.h>
+#endif
+
 #define ixPCIE_LC_L1_PM_SUBSTATE	0x100100C6
 #define PCIE_LC_L1_PM_SUBSTATE__LC_L1_SUBSTATES_OVERRIDE_EN_MASK	0x00000001L
 #define PCIE_LC_L1_PM_SUBSTATE__LC_PCI_PM_L1_2_OVERRIDE_MASK	0x00000002L
@@ -1134,13 +1138,24 @@ static void vi_enable_aspm(struct amdgpu_device *adev)
 	WREG32_PCIE(ixPCIE_LC_CNTL, data);
 }
 
+static bool aspm_support_quirk_check(void)
+{
+#if IS_ENABLED(CONFIG_X86)
+	struct cpuinfo_x86 *c = &cpu_data(0);
+
+	return !(c->x86 == 6 && c->x86_model == INTEL_FAM6_ALDERLAKE);
+#else
+	return true;
+#endif
+}
+
 static void vi_program_aspm(struct amdgpu_device *adev)
 {
 	u32 data, data1, orig;
 	bool bL1SS = false;
 	bool bClkReqSupport = true;
 
-	if (!amdgpu_aspm)
+	if (!amdgpu_device_should_use_aspm(adev) || !aspm_support_quirk_check())
 		return;
 
 	if (adev->flags & AMD_IS_APU ||
@@ -1856,7 +1856,7 @@ static struct pipe_ctx *dcn30_find_split_pipe(
 	return pipe;
 }
 
-static noinline bool dcn30_internal_validate_bw(
+noinline bool dcn30_internal_validate_bw(
 	struct dc *dc,
 	struct dc_state *context,
 	display_e2e_pipe_params_st *pipes,
@@ -55,6 +55,13 @@ unsigned int dcn30_calc_max_scaled_time(
 
 bool dcn30_validate_bandwidth(struct dc *dc, struct dc_state *context,
 		bool fast_validate);
+bool dcn30_internal_validate_bw(
+		struct dc *dc,
+		struct dc_state *context,
+		display_e2e_pipe_params_st *pipes,
+		int *pipe_cnt_out,
+		int *vlevel_out,
+		bool fast_validate);
 void dcn30_calculate_wm_and_dlg(
 		struct dc *dc, struct dc_state *context,
 		display_e2e_pipe_params_st *pipes,
@@ -1664,6 +1664,15 @@ static void dcn31_calculate_wm_and_dlg_fp(
 	if (context->bw_ctx.dml.soc.min_dcfclk > dcfclk)
 		dcfclk = context->bw_ctx.dml.soc.min_dcfclk;
 
+	/* We don't recalculate clocks for 0 pipe configs, which can block
+	 * S0i3 as high clocks will block low power states
+	 * Override any clocks that can block S0i3 to min here
+	 */
+	if (pipe_cnt == 0) {
+		context->bw_ctx.bw.dcn.clk.dcfclk_khz = dcfclk; // always should be vlevel 0
+		return;
+	}
+
 	pipes[0].clks_cfg.voltage = vlevel;
 	pipes[0].clks_cfg.dcfclk_mhz = dcfclk;
 	pipes[0].clks_cfg.socclk_mhz = context->bw_ctx.dml.soc.clock_limits[vlevel].socclk_mhz;
@@ -1789,6 +1798,60 @@ static void dcn31_calculate_wm_and_dlg(
 	DC_FP_END();
 }
 
+bool dcn31_validate_bandwidth(struct dc *dc,
+		struct dc_state *context,
+		bool fast_validate)
+{
+	bool out = false;
+
+	BW_VAL_TRACE_SETUP();
+
+	int vlevel = 0;
+	int pipe_cnt = 0;
+	display_e2e_pipe_params_st *pipes = kzalloc(dc->res_pool->pipe_count * sizeof(display_e2e_pipe_params_st), GFP_KERNEL);
+	DC_LOGGER_INIT(dc->ctx->logger);
+
+	BW_VAL_TRACE_COUNT();
+
+	DC_FP_START();
+	out = dcn30_internal_validate_bw(dc, context, pipes, &pipe_cnt, &vlevel, fast_validate);
+	DC_FP_END();
+
+	// Disable fast_validate to set min dcfclk in calculate_wm_and_dlg
+	if (pipe_cnt == 0)
+		fast_validate = false;
+
+	if (!out)
+		goto validate_fail;
+
+	BW_VAL_TRACE_END_VOLTAGE_LEVEL();
+
+	if (fast_validate) {
+		BW_VAL_TRACE_SKIP(fast);
+		goto validate_out;
+	}
+
+	dc->res_pool->funcs->calculate_wm_and_dlg(dc, context, pipes, pipe_cnt, vlevel);
+
+	BW_VAL_TRACE_END_WATERMARKS();
+
+	goto validate_out;
+
+validate_fail:
+	DC_LOG_WARNING("Mode Validation Warning: %s failed validation.\n",
+		dml_get_status_message(context->bw_ctx.dml.vba.ValidationStatus[context->bw_ctx.dml.vba.soc.num_states]));
+
+	BW_VAL_TRACE_SKIP(fail);
+	out = false;
+
+validate_out:
||||||
|
kfree(pipes);
|
||||||
|
|
||||||
|
BW_VAL_TRACE_FINISH();
|
||||||
|
|
||||||
|
return out;
|
||||||
|
}
|
||||||
|
|
||||||
static struct dc_cap_funcs cap_funcs = {
|
static struct dc_cap_funcs cap_funcs = {
|
||||||
.get_dcc_compression_cap = dcn20_get_dcc_compression_cap
|
.get_dcc_compression_cap = dcn20_get_dcc_compression_cap
|
||||||
};
|
};
|
||||||
@@ -1871,7 +1934,7 @@ static struct resource_funcs dcn31_res_pool_funcs = {
|
|||||||
.link_encs_assign = link_enc_cfg_link_encs_assign,
|
.link_encs_assign = link_enc_cfg_link_encs_assign,
|
||||||
.link_enc_unassign = link_enc_cfg_link_enc_unassign,
|
.link_enc_unassign = link_enc_cfg_link_enc_unassign,
|
||||||
.panel_cntl_create = dcn31_panel_cntl_create,
|
.panel_cntl_create = dcn31_panel_cntl_create,
|
||||||
.validate_bandwidth = dcn30_validate_bandwidth,
|
.validate_bandwidth = dcn31_validate_bandwidth,
|
||||||
.calculate_wm_and_dlg = dcn31_calculate_wm_and_dlg,
|
.calculate_wm_and_dlg = dcn31_calculate_wm_and_dlg,
|
||||||
.update_soc_for_wm_a = dcn31_update_soc_for_wm_a,
|
.update_soc_for_wm_a = dcn31_update_soc_for_wm_a,
|
||||||
.populate_dml_pipes = dcn31_populate_dml_pipes_from_context,
|
.populate_dml_pipes = dcn31_populate_dml_pipes_from_context,
|
||||||
@@ -338,7 +338,7 @@ sienna_cichlid_get_allowed_feature_mask(struct smu_context *smu,
 	if (smu->dc_controlled_by_gpio)
 		*(uint64_t *)feature_mask |= FEATURE_MASK(FEATURE_ACDC_BIT);
 
-	if (amdgpu_aspm)
+	if (amdgpu_device_should_use_aspm(adev))
 		*(uint64_t *)feature_mask |= FEATURE_MASK(FEATURE_DS_LCLK_BIT);
 
 	return 0;
@@ -442,6 +442,13 @@ set_proto_ctx_engines_bond(struct i915_user_extension __user *base, void *data)
 	u16 idx, num_bonds;
 	int err, n;
 
+	if (GRAPHICS_VER(i915) >= 12 && !IS_TIGERLAKE(i915) &&
+	    !IS_ROCKETLAKE(i915) && !IS_ALDERLAKE_S(i915)) {
+		drm_dbg(&i915->drm,
+			"Bonding on gen12+ aside from TGL, RKL, and ADL_S not supported\n");
+		return -ENODEV;
+	}
+
 	if (get_user(idx, &ext->virtual_index))
 		return -EFAULT;
 
@@ -224,6 +224,12 @@ void __i915_gem_free_object(struct drm_i915_gem_object *obj)
 		GEM_BUG_ON(vma->obj != obj);
 		spin_unlock(&obj->vma.lock);
 
+		/* Verify that the vma is unbound under the vm mutex. */
+		mutex_lock(&vma->vm->mutex);
+		atomic_and(~I915_VMA_PIN_MASK, &vma->flags);
+		__i915_vma_unbind(vma);
+		mutex_unlock(&vma->vm->mutex);
+
 		__i915_vma_put(vma);
 
 		spin_lock(&obj->vma.lock);
@@ -152,6 +152,14 @@ struct intel_context {
 	/** sseu: Control eu/slice partitioning */
 	struct intel_sseu sseu;
 
+	/**
+	 * pinned_contexts_link: List link for the engine's pinned contexts.
+	 * This is only used if this is a perma-pinned kernel context and
+	 * the list is assumed to only be manipulated during driver load
+	 * or unload time so no mutex protection currently.
+	 */
+	struct list_head pinned_contexts_link;
+
 	u8 wa_bb_page; /* if set, page num reserved for context workarounds */
 
 	struct {

@@ -320,6 +320,7 @@ static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id)
 
 	BUILD_BUG_ON(BITS_PER_TYPE(engine->mask) < I915_NUM_ENGINES);
 
+	INIT_LIST_HEAD(&engine->pinned_contexts_list);
 	engine->id = id;
 	engine->legacy_idx = INVALID_ENGINE;
 	engine->mask = BIT(id);

@@ -875,6 +876,8 @@ intel_engine_create_pinned_context(struct intel_engine_cs *engine,
 		return ERR_PTR(err);
 	}
 
+	list_add_tail(&ce->pinned_contexts_link, &engine->pinned_contexts_list);
+
 	/*
 	 * Give our perma-pinned kernel timelines a separate lockdep class,
 	 * so that we can use them from within the normal user timelines

@@ -897,6 +900,7 @@ void intel_engine_destroy_pinned_context(struct intel_context *ce)
 	list_del(&ce->timeline->engine_link);
 	mutex_unlock(&hwsp->vm->mutex);
 
+	list_del(&ce->pinned_contexts_link);
 	intel_context_unpin(ce);
 	intel_context_put(ce);
 }

@@ -298,6 +298,29 @@ void intel_engine_init__pm(struct intel_engine_cs *engine)
 	intel_engine_init_heartbeat(engine);
 }
 
+/**
+ * intel_engine_reset_pinned_contexts - Reset the pinned contexts of
+ * an engine.
+ * @engine: The engine whose pinned contexts we want to reset.
+ *
+ * Typically the pinned context LMEM images lose or get their content
+ * corrupted on suspend. This function resets their images.
+ */
+void intel_engine_reset_pinned_contexts(struct intel_engine_cs *engine)
+{
+	struct intel_context *ce;
+
+	list_for_each_entry(ce, &engine->pinned_contexts_list,
+			    pinned_contexts_link) {
+		/* kernel context gets reset at __engine_unpark() */
+		if (ce == engine->kernel_context)
+			continue;
+
+		dbg_poison_ce(ce);
+		ce->ops->reset(ce);
+	}
+}
+
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftest_engine_pm.c"
 #endif

@@ -69,4 +69,6 @@ intel_engine_create_kernel_request(struct intel_engine_cs *engine)
 
 void intel_engine_init__pm(struct intel_engine_cs *engine);
 
+void intel_engine_reset_pinned_contexts(struct intel_engine_cs *engine);
+
 #endif /* INTEL_ENGINE_PM_H */

@@ -304,6 +304,13 @@ struct intel_engine_cs {
 
 	struct intel_context *kernel_context; /* pinned */
 
+	/**
+	 * pinned_contexts_list: List of pinned contexts. This list is only
+	 * assumed to be manipulated during driver load- or unload time and
+	 * does therefore not have any additional protection.
+	 */
+	struct list_head pinned_contexts_list;
+
 	intel_engine_mask_t saturated; /* submitting semaphores too late? */
 
 	struct {

@@ -2787,6 +2787,8 @@ static void execlists_sanitize(struct intel_engine_cs *engine)
 
 	/* And scrub the dirty cachelines for the HWSP */
 	clflush_cache_range(engine->status_page.addr, PAGE_SIZE);
+
+	intel_engine_reset_pinned_contexts(engine);
 }
 
 static void enable_error_interrupt(struct intel_engine_cs *engine)

@@ -17,6 +17,7 @@
 #include "intel_ring.h"
 #include "shmem_utils.h"
 #include "intel_engine_heartbeat.h"
+#include "intel_engine_pm.h"
 
 /* Rough estimate of the typical request size, performing a flush,
  * set-context and then emitting the batch.

@@ -291,7 +292,9 @@ static void xcs_sanitize(struct intel_engine_cs *engine)
 	sanitize_hwsp(engine);
 
 	/* And scrub the dirty cachelines for the HWSP */
-	clflush_cache_range(engine->status_page.addr, PAGE_SIZE);
+	drm_clflush_virt_range(engine->status_page.addr, PAGE_SIZE);
+
+	intel_engine_reset_pinned_contexts(engine);
 }
 
 static void reset_prepare(struct intel_engine_cs *engine)

@@ -376,6 +376,8 @@ int mock_engine_init(struct intel_engine_cs *engine)
 {
 	struct intel_context *ce;
 
+	INIT_LIST_HEAD(&engine->pinned_contexts_list);
+
 	engine->sched_engine = i915_sched_engine_create(ENGINE_MOCK);
 	if (!engine->sched_engine)
 		return -ENOMEM;

@@ -2347,6 +2347,8 @@ static void guc_sanitize(struct intel_engine_cs *engine)
 
 	/* And scrub the dirty cachelines for the HWSP */
 	clflush_cache_range(engine->status_page.addr, PAGE_SIZE);
+
+	intel_engine_reset_pinned_contexts(engine);
 }
 
 static void setup_hwsp(struct intel_engine_cs *engine)

@@ -2422,9 +2424,13 @@ static inline void guc_init_lrc_mapping(struct intel_guc *guc)
 	 * and even it did this code would be run again.
 	 */
 
-	for_each_engine(engine, gt, id)
-		if (engine->kernel_context)
-			guc_kernel_context_pin(guc, engine->kernel_context);
+	for_each_engine(engine, gt, id) {
+		struct intel_context *ce;
+
+		list_for_each_entry(ce, &engine->pinned_contexts_list,
+				    pinned_contexts_link)
+			guc_kernel_context_pin(guc, ce);
+	}
 }
 
 static void guc_release(struct intel_engine_cs *engine)
@@ -76,9 +76,11 @@ void mtk_ovl_layer_off(struct device *dev, unsigned int idx,
 void mtk_ovl_start(struct device *dev);
 void mtk_ovl_stop(struct device *dev);
 unsigned int mtk_ovl_supported_rotations(struct device *dev);
-void mtk_ovl_enable_vblank(struct device *dev,
+void mtk_ovl_register_vblank_cb(struct device *dev,
 			   void (*vblank_cb)(void *),
 			   void *vblank_cb_data);
+void mtk_ovl_unregister_vblank_cb(struct device *dev);
+void mtk_ovl_enable_vblank(struct device *dev);
 void mtk_ovl_disable_vblank(struct device *dev);
 
 void mtk_rdma_bypass_shadow(struct device *dev);

@@ -93,9 +95,11 @@ void mtk_rdma_layer_config(struct device *dev, unsigned int idx,
 			   struct cmdq_pkt *cmdq_pkt);
 void mtk_rdma_start(struct device *dev);
 void mtk_rdma_stop(struct device *dev);
-void mtk_rdma_enable_vblank(struct device *dev,
+void mtk_rdma_register_vblank_cb(struct device *dev,
 			    void (*vblank_cb)(void *),
 			    void *vblank_cb_data);
+void mtk_rdma_unregister_vblank_cb(struct device *dev);
+void mtk_rdma_enable_vblank(struct device *dev);
 void mtk_rdma_disable_vblank(struct device *dev);
 
 #endif

@@ -96,7 +96,7 @@ static irqreturn_t mtk_disp_ovl_irq_handler(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
-void mtk_ovl_enable_vblank(struct device *dev,
+void mtk_ovl_register_vblank_cb(struct device *dev,
 			   void (*vblank_cb)(void *),
 			   void *vblank_cb_data)
 {

@@ -104,6 +104,20 @@ void mtk_ovl_enable_vblank(struct device *dev,
 
 	ovl->vblank_cb = vblank_cb;
 	ovl->vblank_cb_data = vblank_cb_data;
+}
+
+void mtk_ovl_unregister_vblank_cb(struct device *dev)
+{
+	struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);
+
+	ovl->vblank_cb = NULL;
+	ovl->vblank_cb_data = NULL;
+}
+
+void mtk_ovl_enable_vblank(struct device *dev)
+{
+	struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);
+
 	writel(0x0, ovl->regs + DISP_REG_OVL_INTSTA);
 	writel_relaxed(OVL_FME_CPL_INT, ovl->regs + DISP_REG_OVL_INTEN);
 }

@@ -112,8 +126,6 @@ void mtk_ovl_disable_vblank(struct device *dev)
 {
 	struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);
 
-	ovl->vblank_cb = NULL;
-	ovl->vblank_cb_data = NULL;
 	writel_relaxed(0x0, ovl->regs + DISP_REG_OVL_INTEN);
 }
 

@@ -94,7 +94,7 @@ static void rdma_update_bits(struct device *dev, unsigned int reg,
 	writel(tmp, rdma->regs + reg);
 }
 
-void mtk_rdma_enable_vblank(struct device *dev,
+void mtk_rdma_register_vblank_cb(struct device *dev,
 			    void (*vblank_cb)(void *),
 			    void *vblank_cb_data)
 {

@@ -102,16 +102,24 @@ void mtk_rdma_enable_vblank(struct device *dev,
 
 	rdma->vblank_cb = vblank_cb;
 	rdma->vblank_cb_data = vblank_cb_data;
+}
+
+void mtk_rdma_unregister_vblank_cb(struct device *dev)
+{
+	struct mtk_disp_rdma *rdma = dev_get_drvdata(dev);
+
+	rdma->vblank_cb = NULL;
+	rdma->vblank_cb_data = NULL;
+}
+
+void mtk_rdma_enable_vblank(struct device *dev)
+{
 	rdma_update_bits(dev, DISP_REG_RDMA_INT_ENABLE, RDMA_FRAME_END_INT,
 			 RDMA_FRAME_END_INT);
 }
 
 void mtk_rdma_disable_vblank(struct device *dev)
 {
-	struct mtk_disp_rdma *rdma = dev_get_drvdata(dev);
-
-	rdma->vblank_cb = NULL;
-	rdma->vblank_cb_data = NULL;
 	rdma_update_bits(dev, DISP_REG_RDMA_INT_ENABLE, RDMA_FRAME_END_INT, 0);
 }
 
@@ -4,6 +4,8 @@
  */
 
 #include <linux/clk.h>
+#include <linux/dma-mapping.h>
+#include <linux/mailbox_controller.h>
 #include <linux/pm_runtime.h>
 #include <linux/soc/mediatek/mtk-cmdq.h>
 #include <linux/soc/mediatek/mtk-mmsys.h>

@@ -50,8 +52,10 @@ struct mtk_drm_crtc {
 	bool				pending_async_planes;
 
 #if IS_REACHABLE(CONFIG_MTK_CMDQ)
-	struct cmdq_client		*cmdq_client;
+	struct cmdq_client		cmdq_client;
+	struct cmdq_pkt			cmdq_handle;
 	u32				cmdq_event;
+	u32				cmdq_vblank_cnt;
 #endif
 
 	struct device			*mmsys_dev;

@@ -104,11 +108,63 @@ static void mtk_drm_finish_page_flip(struct mtk_drm_crtc *mtk_crtc)
 	}
 }
 
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+static int mtk_drm_cmdq_pkt_create(struct cmdq_client *client, struct cmdq_pkt *pkt,
+				   size_t size)
+{
+	struct device *dev;
+	dma_addr_t dma_addr;
+
+	pkt->va_base = kzalloc(size, GFP_KERNEL);
+	if (!pkt->va_base) {
+		kfree(pkt);
+		return -ENOMEM;
+	}
+	pkt->buf_size = size;
+	pkt->cl = (void *)client;
+
+	dev = client->chan->mbox->dev;
+	dma_addr = dma_map_single(dev, pkt->va_base, pkt->buf_size,
+				  DMA_TO_DEVICE);
+	if (dma_mapping_error(dev, dma_addr)) {
+		dev_err(dev, "dma map failed, size=%u\n", (u32)(u64)size);
+		kfree(pkt->va_base);
+		kfree(pkt);
+		return -ENOMEM;
+	}
+
+	pkt->pa_base = dma_addr;
+
+	return 0;
+}
+
+static void mtk_drm_cmdq_pkt_destroy(struct cmdq_pkt *pkt)
+{
+	struct cmdq_client *client = (struct cmdq_client *)pkt->cl;
+
+	dma_unmap_single(client->chan->mbox->dev, pkt->pa_base, pkt->buf_size,
+			 DMA_TO_DEVICE);
+	kfree(pkt->va_base);
+	kfree(pkt);
+}
+#endif
+
 static void mtk_drm_crtc_destroy(struct drm_crtc *crtc)
 {
 	struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
+	int i;
 
 	mtk_mutex_put(mtk_crtc->mutex);
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+	mtk_drm_cmdq_pkt_destroy(&mtk_crtc->cmdq_handle);
+#endif
+
+	for (i = 0; i < mtk_crtc->ddp_comp_nr; i++) {
+		struct mtk_ddp_comp *comp;
+
+		comp = mtk_crtc->ddp_comp[i];
+		mtk_ddp_comp_unregister_vblank_cb(comp);
+	}
+
 	drm_crtc_cleanup(crtc);
 }

@@ -222,9 +278,12 @@ struct mtk_ddp_comp *mtk_drm_ddp_comp_for_plane(struct drm_crtc *crtc,
 }
 
 #if IS_REACHABLE(CONFIG_MTK_CMDQ)
-static void ddp_cmdq_cb(struct cmdq_cb_data data)
+static void ddp_cmdq_cb(struct mbox_client *cl, void *mssg)
 {
-	cmdq_pkt_destroy(data.data);
+	struct cmdq_client *cmdq_cl = container_of(cl, struct cmdq_client, client);
+	struct mtk_drm_crtc *mtk_crtc = container_of(cmdq_cl, struct mtk_drm_crtc, cmdq_client);
+
+	mtk_crtc->cmdq_vblank_cnt = 0;
 }
 #endif
 

@@ -430,7 +489,7 @@ static void mtk_drm_crtc_update_config(struct mtk_drm_crtc *mtk_crtc,
 				       bool needs_vblank)
 {
 #if IS_REACHABLE(CONFIG_MTK_CMDQ)
-	struct cmdq_pkt *cmdq_handle;
+	struct cmdq_pkt *cmdq_handle = &mtk_crtc->cmdq_handle;
 #endif
 	struct drm_crtc *crtc = &mtk_crtc->base;
 	struct mtk_drm_private *priv = crtc->dev->dev_private;

@@ -468,14 +527,28 @@ static void mtk_drm_crtc_update_config(struct mtk_drm_crtc *mtk_crtc,
 		mtk_mutex_release(mtk_crtc->mutex);
 	}
 #if IS_REACHABLE(CONFIG_MTK_CMDQ)
-	if (mtk_crtc->cmdq_client) {
-		mbox_flush(mtk_crtc->cmdq_client->chan, 2000);
-		cmdq_handle = cmdq_pkt_create(mtk_crtc->cmdq_client, PAGE_SIZE);
+	if (mtk_crtc->cmdq_client.chan) {
+		mbox_flush(mtk_crtc->cmdq_client.chan, 2000);
+		cmdq_handle->cmd_buf_size = 0;
 		cmdq_pkt_clear_event(cmdq_handle, mtk_crtc->cmdq_event);
 		cmdq_pkt_wfe(cmdq_handle, mtk_crtc->cmdq_event, false);
 		mtk_crtc_ddp_config(crtc, cmdq_handle);
 		cmdq_pkt_finalize(cmdq_handle);
-		cmdq_pkt_flush_async(cmdq_handle, ddp_cmdq_cb, cmdq_handle);
+		dma_sync_single_for_device(mtk_crtc->cmdq_client.chan->mbox->dev,
+					   cmdq_handle->pa_base,
+					   cmdq_handle->cmd_buf_size,
+					   DMA_TO_DEVICE);
+		/*
+		 * CMDQ command should execute in next 3 vblank.
+		 * One vblank interrupt before send message (occasionally)
+		 * and one vblank interrupt after cmdq done,
+		 * so it's timeout after 3 vblank interrupt.
+		 * If it fail to execute in next 3 vblank, timeout happen.
+		 */
+		mtk_crtc->cmdq_vblank_cnt = 3;
+
+		mbox_send_message(mtk_crtc->cmdq_client.chan, cmdq_handle);
+		mbox_client_txdone(mtk_crtc->cmdq_client.chan, 0);
 	}
 #endif
 	mtk_crtc->config_updating = false;

@@ -489,12 +562,15 @@ static void mtk_crtc_ddp_irq(void *data)
 	struct mtk_drm_private *priv = crtc->dev->dev_private;
 
 #if IS_REACHABLE(CONFIG_MTK_CMDQ)
-	if (!priv->data->shadow_register && !mtk_crtc->cmdq_client)
+	if (!priv->data->shadow_register && !mtk_crtc->cmdq_client.chan)
+		mtk_crtc_ddp_config(crtc, NULL);
+	else if (mtk_crtc->cmdq_vblank_cnt > 0 && --mtk_crtc->cmdq_vblank_cnt == 0)
+		DRM_ERROR("mtk_crtc %d CMDQ execute command timeout!\n",
+			  drm_crtc_index(&mtk_crtc->base));
 #else
 	if (!priv->data->shadow_register)
-#endif
 		mtk_crtc_ddp_config(crtc, NULL);
+#endif
 	mtk_drm_finish_page_flip(mtk_crtc);
 }
 

@@ -503,7 +579,7 @@ static int mtk_drm_crtc_enable_vblank(struct drm_crtc *crtc)
 	struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
 	struct mtk_ddp_comp *comp = mtk_crtc->ddp_comp[0];
 
-	mtk_ddp_comp_enable_vblank(comp, mtk_crtc_ddp_irq, &mtk_crtc->base);
+	mtk_ddp_comp_enable_vblank(comp);
 
 	return 0;
 }

@@ -803,6 +879,9 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev,
 			if (comp->funcs->ctm_set)
 				has_ctm = true;
 		}
+
+		mtk_ddp_comp_register_vblank_cb(comp, mtk_crtc_ddp_irq,
+						&mtk_crtc->base);
 	}
 
 	for (i = 0; i < mtk_crtc->ddp_comp_nr; i++)

@@ -829,16 +908,20 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev,
 	mutex_init(&mtk_crtc->hw_lock);
 
 #if IS_REACHABLE(CONFIG_MTK_CMDQ)
-	mtk_crtc->cmdq_client =
-			cmdq_mbox_create(mtk_crtc->mmsys_dev,
-					 drm_crtc_index(&mtk_crtc->base));
-	if (IS_ERR(mtk_crtc->cmdq_client)) {
+	mtk_crtc->cmdq_client.client.dev = mtk_crtc->mmsys_dev;
+	mtk_crtc->cmdq_client.client.tx_block = false;
+	mtk_crtc->cmdq_client.client.knows_txdone = true;
+	mtk_crtc->cmdq_client.client.rx_callback = ddp_cmdq_cb;
+	mtk_crtc->cmdq_client.chan =
+			mbox_request_channel(&mtk_crtc->cmdq_client.client,
+					     drm_crtc_index(&mtk_crtc->base));
+	if (IS_ERR(mtk_crtc->cmdq_client.chan)) {
 		dev_dbg(dev, "mtk_crtc %d failed to create mailbox client, writing register by CPU now\n",
 			drm_crtc_index(&mtk_crtc->base));
-		mtk_crtc->cmdq_client = NULL;
+		mtk_crtc->cmdq_client.chan = NULL;
 	}
 
-	if (mtk_crtc->cmdq_client) {
+	if (mtk_crtc->cmdq_client.chan) {
 		ret = of_property_read_u32_index(priv->mutex_node,
 						 "mediatek,gce-events",
 						 drm_crtc_index(&mtk_crtc->base),

@@ -846,8 +929,18 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev,
 		if (ret) {
 			dev_dbg(dev, "mtk_crtc %d failed to get mediatek,gce-events property\n",
 				drm_crtc_index(&mtk_crtc->base));
-			cmdq_mbox_destroy(mtk_crtc->cmdq_client);
-			mtk_crtc->cmdq_client = NULL;
+			mbox_free_channel(mtk_crtc->cmdq_client.chan);
+			mtk_crtc->cmdq_client.chan = NULL;
+		} else {
+			ret = mtk_drm_cmdq_pkt_create(&mtk_crtc->cmdq_client,
+						      &mtk_crtc->cmdq_handle,
+						      PAGE_SIZE);
+			if (ret) {
+				dev_dbg(dev, "mtk_crtc %d failed to create cmdq packet\n",
+					drm_crtc_index(&mtk_crtc->base));
+				mbox_free_channel(mtk_crtc->cmdq_client.chan);
+				mtk_crtc->cmdq_client.chan = NULL;
+			}
 		}
 	}
 #endif